According to Bloomberg Business, a 25-year-old marketer named Kiara Stent reviewed about 200 posts across 50 LinkedIn profiles over the summer and concluded that roughly 75% seemed AI-generated. Her August post detailing the signs, like dramatic metaphors and repetitive syntax, went viral, drawing tens of thousands of impressions and a sharply divided reaction. Other users, like 29-year-old technical marketer Brielle Reisman and 32-year-old research director Thomas Manandhar-Richardson, have joined the callout culture for different reasons, from coaching colleagues to direct confrontation. LinkedIn VP of product Gyanda Sachdeva said the platform has “built strong defenses” against low-quality content, even as it offers paid subscribers AI tools to help draft posts. And a recent preprint study, not yet peer-reviewed, added a twist: graduate writing students actually preferred AI-generated passages to those by award-winning novelists nearly two-thirds of the time.
The hunt for robotic prose
Here’s the thing: this isn’t just about grammar policing. It’s a symptom of a deeper anxiety. When your professional network, a place for genuine insight and career advancement, starts to feel like a bot farm, it’s destabilizing. People are seizing on supposed “tells”—like the em dash or the Oxford comma—as a way to reclaim some sense of control. A Washington Post analysis last fall found 70% of ChatGPT messages had at least one emoji and over half contained em dashes, so the suspicion isn’t totally unfounded. Even Sam Altman acknowledged the em-dash problem. But let’s be real. Real humans use em dashes all the time. I do. You probably do. Now we’re all suspect.
The accidental AI mimics
And that’s where it gets messy. This hyper-vigilance is creating collateral damage. Take Bryan M. Vance, the journalist accused on Reddit of using AI because his newsletter featured emoji bullet points and em dashes. He’d spent eight hours on the piece. His crime? Having a writing style that, purely by coincidence, now matches common AI tics. He’s not alone. As The New York Times explored, people are now unknowingly mimicking ChatGPT’s style simply because they’re exposed to so much of it. So the callouts might not be catching lazy AI users at all. They might just be catching writers with particular stylistic quirks. How’s that for irony?
A losing battle for authenticity
Manandhar-Richardson is probably right about the “cat and mouse” dynamic. As people try to sound less like AI, the models will learn to mimic the new, “authentic” human style, and the detectives will go hunting for fresh patterns. It’s a moving target. And honestly, if that preprint about readers preferring AI writing holds up, what are we even doing this for? What if readers simply like it more? That’s the uncomfortable question nobody on LinkedIn wants to ask. The platform is built on personal brand and hustle. If a well-polished AI draft gets more engagement than your authentic, rambling thoughts, what’s the professional incentive to be “real”?
Policing in a time of paranoia
Basically, we’re in a messy transition phase. The technology is outpacing our social norms for using it. LinkedIn can build defenses and offer verification tools, but the core issue is human: people are scared that their hard-won skills are being devalued, and they’re confused about what’s real. So they police grammar. It’s a comfort, a simple rule to hold onto amid a complicated change. But as the Vance case shows, the callout culture itself can be low-quality and unoriginal. It might make us feel like savvy detectives, but it doesn’t consistently work. And it might just make everyone more paranoid, including the people still writing everything themselves, one painfully human em dash at a time.
