Why Hashtag Position Matters More Than Hashtag Usage
Two people both use hashtags. One puts three at the end as a block. The other weaves six inline through the post. An AI that only tracks 'uses hashtags: yes' gets both wrong. Here's how Bloomberry measures what actually matters.
By Sadok Hasan
Hashtags are a small part of how a LinkedIn post looks and feels. But they are a revealing test case for how carefully an AI voice tool is actually measuring your writing, because the failure mode on hashtags is precise and diagnosable in a way that more abstract voice failures are not.
Here is the test: does the tool know not just whether you use hashtags, but how many you use, where you put them, and how consistent your pattern is across posts? If it cannot answer all three, it is applying its own default for at least one of them. And if it is applying its own default for hashtags, it is almost certainly doing the same for things that are harder to see. This is part of the How Bloomberry Voice Works series.
Why the Binary Fails
The simplest way to track hashtag behavior is a boolean: does this user use hashtags? If yes, add hashtags. If no, do not.
The problem is that "uses hashtags" describes two completely different behaviors:
Pattern A: Three hashtags at the end of every post, separated as a block from the body. #Leadership #Strategy #AI
Pattern B: Six hashtags woven inline throughout the post, each attached to a relevant noun or phrase rather than appended to the end.
Both users get a "true" for hashtag usage. But Pattern A and Pattern B produce visually and tonally distinct posts. A post written in Pattern A's style with Pattern B's hashtag structure looks wrong immediately; it reads like someone else wrote it. The same is true in reverse.
A binary flag cannot distinguish them. Neither can a "hashtag count" that does not also track position. You need three attributes: count, position, and consistency.
The Three Attributes That Actually Matter
Count is the most obvious attribute but still underutilized. There is a large experiential difference between a post that ends with two hashtags and one that ends with twelve. The right count for your voice is whatever you actually use: not the two or three that a general-purpose AI might apply as a safe default, and not the twelve that some engagement-optimization tools recommend.
Position is where most tools fall short. There are three distinct positions for hashtags in a LinkedIn post:
- End block: grouped after the body, either on the same line as the last sentence or separated by a line break
- End inline: appended to the final sentence itself, attached to words within it
- Inline throughout: hashtags appear mid-post, attached to terms as they arise in the body
Each pattern creates a different reading experience and signals different things about the writer. An analytics-focused operator who uses inline hashtags throughout the body writes posts that look substantively different from a founder who appends a block at the end. Both are valid patterns. Neither is generic. A voice tool that does not detect position applies its own default, and that default will be wrong for one of them, probably both.
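The three positions above can be sketched as a simple classifier. This is an illustrative heuristic, not Bloomberry's actual implementation; the function name and regex are assumptions for demonstration only.

```python
import re

HASHTAG = re.compile(r"#\w+")

def classify_position(post: str) -> str:
    """Classify where a post's hashtags sit: 'none', 'end_block',
    'end_inline', or 'inline'. A rough heuristic sketch."""
    tags = HASHTAG.findall(post)
    if not tags:
        return "none"
    lines = [ln for ln in post.strip().splitlines() if ln.strip()]
    last = lines[-1]
    # End block: the final non-empty line is nothing but hashtags.
    if all(tok.startswith("#") for tok in last.split()):
        return "end_block"
    # End inline: every hashtag lives in the final line, mixed with prose.
    if len(HASHTAG.findall(last)) == len(tags):
        return "end_inline"
    # Otherwise at least one hashtag appears mid-post, attached to body terms.
    return "inline"
```

A post ending with a bare line of hashtags classifies as an end block; the same hashtags attached to the final sentence classify as end inline; hashtags scattered through earlier sentences classify as inline throughout.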
Consistency is what validates the other two. Detecting that someone sometimes uses end-block hashtags and sometimes uses inline does not tell you their pattern; it tells you they are inconsistent, or that their pattern changes depending on post type. A tool measuring consistency can distinguish between writers who have a stable pattern and those whose hashtag usage varies by context, and can generate accordingly.
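Consistency can then be measured over the per-post patterns. A minimal sketch, assuming each post has already been reduced to a (count, position) pair; the function name is illustrative, not a Bloomberry API.

```python
from collections import Counter

def pattern_consistency(patterns: list[tuple[int, str]]) -> tuple[tuple[int, str], float]:
    """Return the writer's modal (count, position) pattern and the
    fraction of posts matching it. 1.0 means a perfectly stable pattern."""
    modal, hits = Counter(patterns).most_common(1)[0]
    return modal, hits / len(patterns)
```

A writer whose samples reduce to [(3, "end_block"), (3, "end_block"), (6, "inline")] has a modal pattern of three end-block hashtags with a consistency of about 0.67: a signal to generate end-block posts most of the time while occasionally mirroring the inline variant.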
What This Reveals About Voice Modeling Generally
Hashtag position is a useful test because the failure is observable. When an AI gets your hashtag pattern wrong, you can see it immediately. The hashtags are in the wrong place or there are too many or too few. It is not subtle.
The subtler failures (hook length, sentence rhythm, whether you tend to end posts on a question or a statement) follow the same pattern. They fail for the same reason: the tool is tracking a binary or an average rather than your specific, measurable pattern. But because those failures are harder to isolate than a misplaced hashtag, they get attributed to the AI "not quite sounding like me" rather than to a specific measurement gap.
The hashtag test is a proxy for how carefully the system is actually measuring your writing. If it gets hashtag position right, that is evidence it is measuring fine-grained patterns rather than just averages. If it gets hashtag position wrong, that tells you something about the precision of the overall voice model.
How Bloomberry Measures Hashtag Behavior
When Bloomberry calibrates your voice profile from writing samples, it extracts hashtag count, position pattern, and consistency as separate attributes for LinkedIn specifically. This data is stored in the voice profile and applied at generation time.
If your LinkedIn samples show a consistent pattern of two to three end-block hashtags, generated posts will follow that pattern. If your samples show no hashtags, generated posts will not add them. If your pattern varies (sometimes inline, sometimes end-block), the system detects that variation and applies it at the appropriate frequency.
The goal is replication of your specific pattern, not application of a generic LinkedIn hashtag convention. The AI LinkedIn post generator applies this to every post it produces: count, position, and consistency, not just the binary.
Frequently Asked Questions
Do hashtags matter for LinkedIn reach?
The evidence on LinkedIn hashtag reach is mixed and has shifted over time as LinkedIn's algorithm has evolved. What is consistent is that hashtag presence affects the perceived tone and register of a post: inline hashtags read differently from end-block hashtags, and using six versus two changes the visual footprint significantly. For voice AI purposes, the question is not whether hashtags help reach, but whether the AI reproduces your specific hashtag pattern, not its own default.
How should I use hashtags on LinkedIn?
The most important thing is consistency: not following a best practice, but following your own pattern. If you put three hashtags at the end of every post, any AI generating content in your voice should do the same. If you use hashtags inline to add context mid-sentence, the AI should mirror that. The specific convention you use matters less than applying it consistently, because inconsistency is a reliable signal of AI-generated content.
Does Bloomberry automatically add hashtags to LinkedIn posts?
Bloomberry detects your hashtag pattern from your writing samples and applies it when generating posts. If your samples consistently end with a three-hashtag block, generated posts will follow the same pattern. If you do not use hashtags in your samples, generated posts will not add them. The goal is replication of your pattern, not application of a general best practice.
Why do AI tools get hashtags wrong?
Because most tools treat hashtag behavior as a single binary attribute: uses hashtags or does not. Two writers who both get a "true" for hashtag usage can have completely different patterns: one uses two at the end, one uses eight inline. The binary cannot distinguish them, so the AI applies its own default. Accurately replicating hashtag behavior requires measuring count, position, and consistency as three separate attributes.
What is the right number of hashtags for a LinkedIn post?
There is no universal right number, and that is precisely the point. The right number for your LinkedIn posts is whatever number appears consistently in your actual writing. Some high-performing LinkedIn writers use none. Others use eight. The question for a voice AI is: what is your number, and can the tool reproduce it? Bloomberry's answer is to measure your actual pattern and apply it, rather than defaulting to a convention that may not reflect how you write.
Related reading: How Bloomberry voice works (the full series) | Why your LinkedIn voice and X voice are different | How AI uses more samples to write like you
Hashtag position is not a styling preference. It is a precision test. If the AI gets the small details right, there is a reasonable chance it is getting the bigger ones right too.
Ready to write sharper?
Bloomberry turns your ideas into publish-ready thought leadership.
Try Bloomberry free