How to make AI content pass the "did a human write this" test

The draft came back from Claude looking clean. Good structure, solid SEO, hit the brief. Then someone read it out loud and stopped after two paragraphs. "This doesn't sound like us. This sounds like... everyone."

That's the test most AI content fails — not detection software, but the ear of someone who knows the brand. Making AI content sound human isn't about fooling algorithms. It's about writing that sounds like it came from someone who actually works at the company, knows the product names, and has opinions about how things should be explained.

Why AI Content Sounds the Same

AI models learned to write by reading everything. That's the problem. They absorbed millions of articles about enterprise software, skincare routines, financial planning — and learned the average way each topic gets discussed.

Ask an AI to write about your custom cabinetry system and it'll produce competent content about custom cabinetry systems in general. The terminology will be industry-standard. The examples will be plausible but fictional. The voice will be helpful and warm in exactly the way every other helpful, warm article sounds.

Detection tools like GPTZero and Originality.ai look for patterns: uniform sentence length, predictable transitions, low perplexity scores. But those metrics miss the more obvious tell. Generic AI content fails because it references no actual products, uses no company-specific language, and sounds like it was written by someone who spent fifteen minutes on Wikipedia before starting.

The Two Tests Your Content Needs to Pass

First test: would someone at your company read this and think a colleague wrote it? Not because of quality — because of specificity. Does it mention the actual product names? The real terminology customers use? The way you'd explain something to someone in your office?

Second test: does it add something to the conversation or just summarise what's already out there? AI is excellent at synthesis. It'll give you a perfectly competent overview of any topic with proper headings and bullet points. What it won't give you — without significant input — is a perspective, a position, or a detail that makes the reader think "they actually know this."

Pass both tests and detection software becomes irrelevant. Fail either and you've published content that technically exists but doesn't represent your business.

How to Humanise AI Writing Without Starting Over

The instinct is to rewrite everything. Expensive, slow, defeats the purpose. Better approach: edit strategically instead of comprehensively.

Replace generic examples with real ones. If the AI wrote "for example, a software company might use this feature to..." — delete it. Insert an actual scenario from your business or your clients' businesses. Real examples do more work than three paragraphs of general explanation.

Add the brand-specific language. Every business has terminology that sounds slightly different from the industry standard. Maybe you call it a "style guide" where competitors call it a "brand book." Maybe your product has a feature with a specific name that the AI doesn't know about. Find-and-replace the generic terms with your actual ones.
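That find-and-replace step can be done mechanically. The sketch below is a minimal illustration, not a real tool: the term mapping is hypothetical, and a naive whole-word, case-insensitive swap like this won't catch every phrasing variant, but it shows the idea.

```python
import re

# Hypothetical mapping: generic industry terms -> this brand's actual
# terminology. These pairs are illustrative, not from any real style guide.
BRAND_TERMS = {
    "brand book": "style guide",
    "client portal": "workspace",
}

def apply_brand_terms(text: str, terms: dict) -> str:
    """Replace generic terms with brand-specific ones, matching whole
    phrases case-insensitively and leaving the rest of the text alone."""
    for generic, branded in terms.items():
        pattern = re.compile(r"\b" + re.escape(generic) + r"\b", re.IGNORECASE)
        text = pattern.sub(branded, text)
    return text

draft = "Update the brand book before sharing the client portal link."
print(apply_brand_terms(draft, BRAND_TERMS))
# Prints: Update the style guide before sharing the workspace link.
```

A human pass is still needed afterwards: mechanical substitution can't tell when the generic term was actually the right word in context.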

Vary the sentence structure. AI tends toward uniformity — medium-length sentences with similar rhythm. Break that pattern. Add a short fragment. Let one sentence run a bit longer because the idea needs room. The variation is what creates the sense of a thinking person.

Insert one opinion. Somewhere in the piece, take a position that not everyone would agree with. "We think X matters more than Y." "Most advice about Z is overcomplicated." AI defaults to balanced, both-sides treatment of everything. Humans have views.

What the Detection Tools Actually Measure

Perplexity measures how predictable each word choice is. AI picks the most statistically likely next word — that's what "low perplexity" means. Burstiness measures variation in sentence length and complexity. AI tends toward consistent, medium-complexity sentences throughout.
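Burstiness in particular is simple enough to approximate yourself. This is a rough sketch, not how any detection vendor actually computes it: it splits sentences naively on punctuation and uses the standard deviation of sentence length in words as a stand-in for variation.

```python
import re
import statistics

def sentence_lengths(text: str) -> list:
    # Naive sentence split on ., !, ? -- fine for a rough illustration.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    """Standard deviation of sentence length in words. Higher values mean
    more variation, which reads as more human to these heuristics."""
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The tool works well. The setup takes minutes. The results look good."
varied = "It works. Setup took me about ten minutes on a Tuesday afternoon. Good results."

print(burstiness(uniform))  # 0.0 -- every sentence is four words
print(burstiness(varied))   # noticeably higher
```

Three four-word sentences score zero variation; the mixed-rhythm version scores well above it. That gap is all burstiness measures.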

Both metrics are proxies for the same thing: writing that sounds processed rather than thought through. You could game the metrics by randomly varying sentence length and swapping synonyms. But that's solving the wrong problem.

The real fix is feeding the AI better input so it produces less generic output in the first place. Specific brand details make AI content harder to detect because they break the statistical patterns — the AI isn't predicting generic industry language anymore, it's working with your actual terminology.

Where the Input Matters More Than the Editing

Most AI content sounds generic because the prompt was generic. "Write a blog post about project management for SaaS companies" will produce exactly the article you'd expect — competent, interchangeable with a hundred others on the same topic.

The leverage point is earlier in the process. Before the AI writes anything, it needs to know what your business actually sounds like. Product names, service descriptions, how you explain things to customers. That context changes what comes out.

This is what BrandDraft AI was built for — it reads your website URL before generating anything, so the output references your actual products and uses your terminology instead of defaulting to industry-standard language. The difference shows up in the first draft, not the fifth edit.

The Editing Pass That Actually Matters

Once you have a draft with real brand context, the editing becomes manageable. You're not rescuing generic content — you're refining something that already sounds like your business.

Read it out loud. Where do you stumble? Those sentences need work. Where does it sound like a press release instead of a person? Loosen it up. Where does it repeat what it just said in slightly different words? Delete the repetition.

Add one thing the AI couldn't have known. A recent client conversation. A detail from your last product update. Something that proves a human with inside knowledge touched this piece.

The goal isn't perfection. Content that's too polished has its own uncanny-valley problem — it reads as corporate rather than human. Leave a little roughness. Let a thought stay slightly incomplete. That's what real writing looks like.

Ready to see what AI content looks like when it starts with your actual brand context? Generate a brand-specific article with BrandDraft AI and see the difference in the first draft.