How to Tell If Something Was Written by AI (The Patterns to Look For)
AI detection tools are unreliable and easy to fool. But human readers don't need a tool: they can feel it. Here are the structural patterns that give AI writing away, and why they exist in the first place.
By Sadok Hasan
AI detection tools are a mess. GPTZero flags human-written academic papers. Originality.ai produces false positives on dry, precise writing. Turnitin's AI detector is wrong often enough that students have successfully appealed grades based on its errors.
But here's the thing: most readers don't need a detector tool. They can feel it.
There's a quality to AI writing that registers before conscious analysis kicks in: a smoothness that doesn't feel quite right, a structure that's technically correct but somehow expected, an absence of the specific, idiosyncratic details that characterize human experience.
Learning to see these patterns explicitly is useful for two reasons. First, if you're consuming content, it helps you calibrate how much weight to give it. Second, if you're creating content with AI tools, it tells you exactly what to edit out.
Why AI Writing Has Fingerprints
Before getting into the specific patterns, it helps to understand why they exist.
Large language models generate text by predicting the most probable next token given everything that came before. They're trained on massive datasets of human writing, and they learn what tends to follow what. The result is writing that is statistically likely, meaning it gravitates toward the patterns that appear most frequently in training data.
This is why AI writing tends toward clichés and familiar structures. These patterns appear frequently in training data precisely because they work: they've been used and reused by human writers. The model has learned that these are good patterns, so it reproduces them at rates that exceed what any individual human writer would produce.
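The gravitation toward frequent patterns can be sketched with a toy bigram model. This is a drastic simplification of a real LLM (the corpus and phrases here are invented for illustration), but it shows the core mechanic: greedy decoding always emits the most common continuation, so output collapses onto the corpus's most frequent phrasing.

```python
from collections import Counter, defaultdict

# Toy corpus: "let's dive in" appears twice, "let's begin" once.
corpus = (
    "let's dive in . it's worth noting that results vary . "
    "let's dive in . let's begin . it's worth noting that context matters ."
).split()

# Count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, steps):
    """Greedily emit the most frequent next token at each step."""
    out = [start]
    for _ in range(steps):
        candidates = follows[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("let's", 3))  # prints: let's dive in .
```

Because "dive" follows "let's" more often than "begin" does, the greedy generator never produces the rarer phrasing. Real models sample from a probability distribution rather than always taking the top token, but the pull toward high-frequency constructions is the same.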
Bloomberry's AI Dialects research documented this at scale across three major models. We found consistent structural signatures (what we call AI Sentence DNA) that appear across model outputs regardless of topic, format, or prompt style. Here's what those signatures look like.
The Rhetorical Question Hook
If a post starts with a question like "Have you ever wondered why the most successful people seem to have figured something out that everyone else missed?", there's a high probability it was written by an AI, or by a human imitating AI patterns they've absorbed from their feed.
Rhetorical question hooks became common because they perform well on social media. AI models learned this pattern from training data and now use it far more consistently than any human writer would. Individual human writers vary their openings; an AI defaults to this construction at a frequency that's statistically distinguishable.
The giveaway isn't the rhetorical question itself. Human writers use them too. It's the combination of this hook with the patterns that follow.
The List-and-Takeaway Structure
AI writing gravitates toward:
- Three things I learned from X
- Five reasons why Y
- Here's what most people miss about Z
These structures work for readability and social media engagement. AI models learned them from high-performing content and reproduce them consistently. The problem is that complex ideas often don't fit cleanly into numbered lists, and AI will force them there anyway. The structure becomes a template rather than a genuine fit for the content.
Human writers use lists too. But they're more likely to depart from list structure when the content doesn't naturally fit it. AI stays in list structure even when it doesn't serve the material.
Characteristic Transition Phrases
Certain transitional phrases appear at statistically elevated rates in AI writing. Some of the most common:
- "It's worth noting that": A hedge that distances the writer from the content. It appears in AI writing because models are trained to qualify uncertain information, but they use this phrase far more broadly.
- "Let's dive in": A transition to the main content that became a content marketing cliché and got baked into model training.
- "At the end of the day": A closing-thought phrase that AI uses as a default transition to summaries and conclusions.
- "Navigate the complexities of": A vague framing construction that sounds substantive but says almost nothing specific.
- "In today's fast-paced world": The temporal framing cliché that AI uses to signal contemporary relevance without actually providing it.
- "Delve into": This one is nearly diagnostic. The word "delve" appears at extremely high rates in AI writing and at very low rates in contemporary human writing outside academic contexts.
- "Game changer": Overused in training data, now overused in AI outputs.
No single phrase is definitive. It's the cluster that matters. One "it's worth noting" in a post doesn't mean AI wrote it. Three characteristic phrases in one paragraph is a much stronger signal.
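The cluster heuristic can be sketched in a few lines. This is an illustrative sketch, not a calibrated detector: the phrase list is drawn from the examples above, and the threshold of three hits per paragraph is an arbitrary choice.

```python
import re

# Characteristic phrases from the list above (lowercase for matching).
TELL_PHRASES = [
    "it's worth noting",
    "let's dive in",
    "at the end of the day",
    "navigate the complexities",
    "in today's fast-paced world",
    "delve into",
    "game changer",
]

def phrase_clusters(text, threshold=3):
    """Return (paragraph_index, hit_count) for paragraphs where the
    number of characteristic phrases meets the threshold."""
    flagged = []
    for i, para in enumerate(text.split("\n\n")):
        lowered = para.lower()
        hits = sum(len(re.findall(re.escape(p), lowered)) for p in TELL_PHRASES)
        if hits >= threshold:
            flagged.append((i, hits))
    return flagged
```

Counting per paragraph rather than per document is the point: one "it's worth noting" across two thousand words means nothing, while three tell phrases packed into a single paragraph is the cluster signal described above.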
The Smoothness Problem
This is the hardest pattern to articulate but the easiest to feel.
Human writing has texture. It has awkward transitions that the author chose to leave because they felt true. It has specific details that are almost too specific: the name of a restaurant, the exact dollar amount, the time of year when something happened. It has opinions that feel personal and slightly controversial. It has asides that break the structure.
AI writing is smooth. It fits together too neatly. Every sentence connects to the next in an expected way. The structure is clean. The examples are general. The opinions are calibrated to be broadly agreeable.
This smoothness is a direct result of how language models work. They're predicting the most probable next token, which means they're always heading toward the most expected version of whatever they're writing. Human writing departs from expectation constantly, because humans have specific experiences, specific opinions, and specific things they want to say that don't always fit the most probable path.
Bloomberry's research on Claude's emotional architecture connects this to something deeper: AI models have functional internal states, but those states are the model's, not the writer's. The writing reflects the model's architecture, not a human's lived experience. That gap is what smoothness signals.
The Abstraction Layer
Human writers who are trying to make a specific point usually use specific examples. AI writing tends to use abstract examples β the kind that illustrate a point in principle without being grounded in actual experience.
Compare:
Human-written: "I spent three years running customer service at a SaaS startup, and the single most common complaint (by a significant margin) wasn't about features or pricing. It was about the sense that nobody was reading the ticket."
AI-written: "Research suggests that customer satisfaction is significantly influenced by the quality of human interaction. When customers feel heard and valued, retention rates tend to improve."
Both make a related point. The human version is specific, credible, and interesting. The AI version is smooth, accurate, and forgettable.
The AI version isn't wrong. It's just not grounded. It's the median-plausible version of a point about customer service, not a particular human's observation from a particular job.
The Opinion That Isn't Really an Opinion
AI models are trained to be agreeable and to avoid controversy. This produces a specific writing artifact: posts that appear to take a stance but actually don't.
"Leadership is about more than just giving directions: it requires genuine empathy, clear communication, and the willingness to make hard decisions." This sentence sounds like it's saying something. It isn't. Every part of it is broadly acceptable to every reader. No one disagrees with the importance of empathy and clear communication in leadership.
Human writers with genuine opinions say things that some readers disagree with. "Most leadership advice is generic because the people writing it have never had to do a layoff." That's a specific, arguable claim. It will alienate some readers and resonate strongly with others.
AI writing tends toward the first type. The absence of genuine provocation is a tell.
What to Do If You're Creating with AI
If you're using AI writing tools and you don't want your content to read as AI-generated, the edit targets are clear:
- Remove the characteristic phrases. Search your output for "worth noting," "delve," "navigate the complexities," "game changer," "fast-paced world." Delete all of them.
- Add specific detail. Replace abstract examples with concrete ones from your actual experience. The specific makes writing feel human and credible simultaneously.
- Take a real position. Find the most arguable claim in your content and make it more explicit. Say the thing that some readers will push back on.
- Break the smooth structure. Add an aside. Start a paragraph with "Actually," or "Here's the counterintuitive part." Interrupt the flow the way human thought actually interrupts itself.
- Apply a voice layer before publishing. The most reliable solution is to have a persistent voice profile that shapes AI outputs toward your actual writing patterns before they reach you as a draft. This is the difference between editing AI outputs and generating from your voice.
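The first checklist item (find the characteristic phrases before rewriting by hand) can be automated as a simple line flagger. A minimal sketch, assuming the search terms listed above; a real editing pass would still rewrite each flagged sentence rather than just deleting the phrase:

```python
# Search terms from the first checklist item above.
SEARCH_TERMS = [
    "worth noting",
    "delve",
    "navigate the complexities",
    "game changer",
    "fast-paced world",
]

def flag_lines(draft):
    """Yield (line_number, term) for every line of the draft that
    contains one of the search terms, so each can be rewritten by hand."""
    for n, line in enumerate(draft.splitlines(), start=1):
        lowered = line.lower()
        for term in SEARCH_TERMS:
            if term in lowered:
                yield (n, term)
```

Flagging rather than auto-deleting matters: stripping "game changer" from a sentence leaves a hole, and the fix is to restate the claim in your own words.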
The goal isn't to pass detection tools β those are unreliable anyway. The goal is to write content that resonates with the humans reading it, and humans can feel the difference between writing that came from a specific person with specific experiences and writing that came from a statistical prediction engine.
The structural patterns above are what to edit. Understanding why they exist is what helps you avoid them in the first place.
Bloomberry's AI Dialects research maps the full taxonomy of AI writing fingerprints across ChatGPT, Claude, and Gemini. Read the Vol. 2 research on Claude's emotional architecture for the deeper structural explanation of why these patterns exist.
Ready to write sharper?
Bloomberry turns your ideas into publish-ready thought leadership.
Try Bloomberry free