
Turning an AI-written case study into something a prospect will believe

There’s a kind of case study that describes a company’s problem in the vaguest possible terms, explains that the solution “streamlined their workflow” or “unlocked new efficiencies,” and closes with a quote like “We couldn’t be happier with the results.”

You’ve seen it. You probably skimmed it and moved on.

The AI-generated version takes this even further. It has the structure of a case study (problem, solution, results) but none of the content that makes a prospect think: that’s exactly what we’re dealing with. The problem is generic. The results are expressed as percentages with no baseline. The client sounds like a satisfied customer from a stock photo.

Live demo

Try fixing this case study

A real-looking case study that doesn’t actually say anything.

Launch demo →

The problem with AI case studies isn’t that they’re dishonest. It’s that they’re accurate in the way a map is accurate when it only shows country borders. Technically correct. Not useful for navigation.

A prospect reading your case study isn’t looking for confirmation that software can help companies. They’re looking for a reason to believe it can help their company, specifically. That requires the kind of detail that only comes from knowing the actual story.

What “40% faster” doesn’t tell anyone

Results with no baseline mean nothing. “Reduced onboarding time by 40%” sounds significant, but without the starting point, it could mean down from 5 days to 3, or down from 2 hours to 72 minutes. The prospect has no way to calibrate it against their own situation.

The AI doesn’t know the baseline because you didn’t give it one. Or you gave it one and it smoothed it out in favor of a cleaner, more impressive-sounding number.

“The onboarding reduction is listed as 40%, change it to the actual numbers: it was 11 days down to 6.5, and the biggest drop was in the tool provisioning step that used to take 3 days”

Now a reader who has a similar provisioning bottleneck knows exactly what they’re reading about.

The problem section that reads like a press release

Most AI-generated case studies describe the client’s problem in abstract terms: “fragmented tooling,” “manual workflows,” “difficulty scaling.” These are real problems, but expressed this way they’re invisible. Every company has them, and no one recognizes themselves in that description.

The actual problem is usually smaller and stranger than that. It’s the spreadsheet that three people update independently and then someone reconciles every Monday. It’s the fact that when a new hire joins, the only way to get them Slack access is to message one specific person who is usually traveling. It’s the Notion page that hasn’t been updated since someone left a year ago and new hires still find it and read it.

That’s the sentence that makes a prospect stop and say we have exactly that.

“The problem description is vague, replace it with the actual issue: three people maintained separate spreadsheets and reconciled them every Monday, and that meeting took two hours”

The quote that could have been written by the AI

When the AI writes a case study, the client quote is often the most obviously generated thing in it. “We couldn’t be happier with the results.” “The team has fully adopted it.” “It’s exceeded our expectations.” These quotes could belong to any client, about any product, in any industry.

A real quote from a client sounds different. It’s specific to one moment or one observation. It has a qualifier, an unexpected word, a detail that wasn’t in the brief. “I didn’t expect the provisioning change to matter this much. Turns out that was the thing making everyone dread hiring season.”

The AI can’t generate that quote because it wasn’t in the room. You were. Or someone who works with you was.

“The quote is generic, replace it with what Marcus actually said on the call: that he didn’t realize how much the Monday reconciliation meeting was demoralizing his team until it disappeared”

The edit isn’t about polish

If you’re reading back a case study draft and thinking it needs to be more compelling, the instinct is usually to rework the structure, punch up the language, or make the headline result bigger.

That’s almost never the issue.

The issue is that the specific information never made it into the draft: the real number, the actual quote, the name of the process that was the real problem. It was in your notes, or in a conversation, or in your head. The AI built a case study around the outline. It needs you to put the substance back in.

Add those details one sentence at a time, spoken out loud if that’s faster, and the case study turns into something a prospect will actually read to the end.

Editing requires precision.
Redraft keeps the tools where the writing already is.

Open editor →