Digital McCarthyism Is Having Its Moment.
Why everyone’s so keen to catch each other using AI, and why the loudest accusers are the last people we should be listening to.
We don’t know what counts as cheating yet.
The tools are eighteen months past mainstream at best, the norms haven’t been drawn, and nobody — not the people who built the models, not the people actually using them every day, not the journalism schools, not the editors — can give you a clean account of where a person’s work ends and a model’s begins. It’s not a clean line, and I’m not sure it ever will be.
But in the meantime, a particularly dull kind of online sport has emerged.
Someone gets caught.
The em dash is the first giveaway. Or the suspicious tidiness of a third paragraph. Or just the vibe… It sounds too AI-ish, the prose is too clean, too wordy, not wordy enough, too symmetrical, too something. A sleuth has done the work, dragged the offender into the public square, and is now accepting applause for their service to authentic writing.
The accused gets a defamation tag, a screenshot of their best paragraph, and a chorus of replies congratulating the detective.
Then everyone moves on to the next one.
This is nothing more than a low-stakes ‘Digital McCarthyism’. The accusation is the punishment. The defence doesn’t really matter. The thrill is in the chase.
But how quickly we forget: every workplace-changing technology has had a stretch like this. Email had it. Spreadsheets had it. The smartphone had it.
Each of these had a long period where none of us knew “the rules”.
Could you put your CV in Word and send it to a recruiter? Was it gauche to take a meeting on speakerphone in a café? Was replying to work email at 9pm a sign of commitment or a sign you’d lost the plot?
It took us nearly three full decades before we all finally agreed that ringtones are utterly moronic and phones should be permanently set to silent mode.
But we worked it out, slowly, messily, through a lot of social negotiation that no one bothered to write down.
We are only just starting that bit with AI, eighteen months in (maybe two years if you’re being very generous).
The work of figuring out what disclosure looks like, what acknowledgement looks like, what attribution looks like in a world where every keystroke is potentially assisted… that work has barely begun. It will take editors, students, writers, publishers, educators, regulators, and audiences all having uncomfortable conversations for a decade. Probably longer. But it will certainly not be settled on LinkedIn or X.
People use the tools. Lots of people, in lots of ways. The architect using it to write the boring middle of a planning brief. The teacher using it to draft a parent email at 10pm because the kid finally went to sleep. The junior lawyer using it to summarise a discovery dump that would otherwise eat their weekend.
Not one of them can give you a tidy answer about where their input ended and the model’s began.
Because that answer actually does not exist yet.
And when it does, it’ll likely be philosophical and fuzzy rather than surgical and clean.
For the avoidance of any doubt, I am a huge proponent of disclosure, and I think disclosure matters.
There’s a meaningful difference between “I used a tool to fix my grammar” and “I generated this from a single lazy prompt and put my name on it.”
Different professions are going to need different lines drawn in different places. Academia is one thing, journalism another, marketing copy a third, your group chat a fourth. The work of negotiating those lines is genuinely important, and the people already doing it well (editors writing methodology notes, journals updating their standards, schools rewriting their conduct codes) deserve a lot more credit than they’re getting.
What they don’t deserve is to be confused with the people running em dash forensics on a stranger’s LinkedIn post.
The people doing the most enthusiastic accusing are often avoiding disclosure of their own practice. You won’t find a methodology note pinned to their profile. You won’t see them list the tools they used to brainstorm, to outline, to research, to copy-edit. The accusation isn’t downstream of a disclosure standard they themselves are observing. The accusation is the thing.
Which tells you what the theatre of that accusation is really for. It is designed to say: I am the authentic one. I have not been replaced. The line is bright and clear, and I am on the human side of it, and that person over there is not.
That’s an anxiety wearing the costume of an ethic. I get the anxiety.
Anyone who has spent any time with these tools and isn’t at least a little bit anxious about what comes next is either not paying attention or is selling something. But the anxious are historically a terrible source of rules. We’ve done this before. The first McCarthy era had no shortage of people performing certainty about who could be trusted and who couldn’t, and the wreckage took thirty years to clean up.
There is something that could help the conversation in a meaningful way.
Ask better questions.
“Did you use AI?” is boring and tells you nothing.
“How did you use it, and what’s your line on disclosing it?” is at the very least a conversation.
Second, if you’re going to publish a view about where the disclosure line should sit, publish your own practice alongside it. Tell us what you used the tool for in the piece itself. Tell us where you drew the line. Tell us what you’d consider on the wrong side of it. The people I trust on this question are the people who’ve actually had to draw the line in their own work. Everyone else is just enjoying the chase.
Norms will arrive. They always do. There will be a moment, sooner than people think, when an unflagged AI assist will feel as off as showing up to a wedding in trainers. That’s coming. It’s coming through the people who actually use the tools trying and testing and sitting down with the people who consume the output and negotiating something we can all live with.
We won’t actually know when it happened. There won’t be a press release. The line will just gradually be somewhere most of us agree (or settle on) it ought to be, and we’ll have gotten there by working with the tools, screwing up, refining, watching what the better editors do, copying it badly, getting better. The same way every other norm in the history of writing has arrived.
Not one of those steps involves a public accusation about an em dash.
The piece you’ve just finished reading was written together with an agentic artificial intelligence named Klaus. Not all of it. Not none of it. Somewhere in between. He refined some of my shitty writing. I corrected most of his.


