The Platform Nobody Needs to Build
AI detection is a dead end. Verification infrastructure is wide open.
There’s a question I keep coming back to, and I’m not sure it has a good answer: could you build a content platform where you could guarantee - or get as close as possible to guaranteeing - that everything on it was written by a human?
On the surface, it sounds like a straightforward proposition. Substack but verified. Twitter but authentic. A place where every post, every essay, every half-formed thought in the replies actually came from a person sitting at a keyboard (or mumbling into a voice note on a walk, which is more my style). No ghostwriting AI. No content engines. No synthesised rehypothecation of someone else’s ideas dressed up as original thinking.
I’ve been turning this over for a few days now, and the more I think about it, the more I think the interesting part isn’t the platform. It’s the verification problem underneath it.
The Arms Race You Can’t Win
The first and most obvious problem is that you’d be stepping into an arms race from day one. AI gets better at replicating human writing every few months. Whatever detection mechanism you build today, it’s outdated by the time it ships. You’d be playing whack-a-mole with models that are specifically trained to sound natural, to vary sentence length, to insert the kind of awkward phrasing and half-finished thoughts that we associate with “real” writing.
And this isn’t a theoretical concern. Plagiarism detection already struggles with content that’s been constructed from a variety of different sources rather than lifted directly from one. If someone feeds three books, a podcast transcript, and a handful of blog posts into a content library, then uses an AI to synthesise all of that into something new - what exactly are you detecting? It’s not plagiarism in the traditional sense. The sentences are novel. The structure is original. The ideas aren’t. But then again, how many ideas are truly original in the first place?
So a platform built on “we detect AI content” is making a promise it can’t keep. The detection gets slightly better, the generation gets significantly better, and within a year you’ve got the same problem as every other platform - except you told your users you’d solved it, which makes the betrayal worse.
Credentials and Credibility Are the Same Word
The more I sat with this, the more I realised the real question isn’t “was this written by an AI?” The real question is “who actually stands behind this, and can I trust them?”
Those are different questions. And they point toward a different kind of solution.
Y Combinator and Andreessen Horowitz both flagged cryptographically verified credentials for AI agents as a top startup opportunity. On the surface, that sounds like a narrow technical problem - how do you verify that an AI agent is what it claims to be? But pull on the thread and it gets much broader.
Credentials just means credibility. Both trace back to the same Latin root - credere, to believe, to trust. When we talk about someone’s credentials, we’re asking: can I trust this person? Is there a verifiable track record behind what I’m reading? Does this person have skin in the game?
That reframing changes the problem entirely. The question stops being “did AI write this?” and becomes “is a real, accountable person willing to put their name and reputation behind it?” Those are completely different engineering challenges, and one of them is actually solvable.
The Verification Layer, Not the Platform
This is where the idea shifts from “build a new platform” to something potentially more interesting: a verification layer that could sit on top of existing platforms.
Think about what that would look like. Not a new Substack competitor (the world doesn’t need another one), but a protocol or a standard that lets any platform signal provenance. This post was written by a verified human. This person has a cryptographic identity tied to a track record of content. The ideas in this piece can be traced back to cited sources.
The value isn’t in building a walled garden. The value is in providing a trust signal that works anywhere. Something that says: regardless of whether AI assisted in the editing, the structuring, the grammar - a specific person conceived these ideas, stands behind them, and is accountable for them.
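To make that a little more concrete, here’s a minimal sketch of what a portable provenance signal could look like - my own illustration, not an existing standard. It assumes the author holds an ed25519 keypair and publishes the public half alongside their identity; the field names and the did:example: identifier are placeholders.

```python
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def attest(author_key: Ed25519PrivateKey, author_id: str, text: str) -> dict:
    """The author's claim: this identity stands behind this exact text."""
    claim = {
        "author": author_id,  # stable identifier tied to a public track record
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "signed_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(claim, sort_keys=True).encode("utf-8")
    claim["signature"] = author_key.sign(payload).hex()
    return claim


def verify(author_pub: Ed25519PublicKey, claim: dict, text: str) -> bool:
    """Any platform can check the claim without trusting the one hosting the post."""
    unsigned = {k: v for k, v in claim.items() if k != "signature"}
    if hashlib.sha256(text.encode("utf-8")).hexdigest() != unsigned["content_sha256"]:
        return False  # the text was altered after it was signed
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    try:
        author_pub.verify(bytes.fromhex(claim["signature"]), payload)
        return True
    except InvalidSignature:
        return False


# The author keeps the private key; platforms only ever see the public half.
key = Ed25519PrivateKey.generate()
claim = attest(key, "did:example:writer", "The essay text goes here.")
assert verify(key.public_key(), claim, "The essay text goes here.")
```

None of that is exotic cryptography. The interesting design choice isn’t the signature scheme; it’s what the author identifier points to - ideally a track record, not just a key.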
Because here’s what I think the real anxiety is about, underneath all the hand-wringing about AI content. It’s not that people are worried about the prose quality. AI prose is fine. Sometimes it’s better than fine. The anxiety is about accountability. When you read something and it shapes your thinking, you want to know there’s a real person at the other end of it. Someone who has reasons for believing what they wrote. Someone who’ll be wrong in public and deal with the consequences.
That’s what’s being eroded. Not quality. Accountability.
The Synthesis Problem
There’s a related problem that doesn’t get enough attention. A lot of content right now - human-written or not - is essentially synthesis without attribution. Someone reads four books, listens to a dozen podcasts, absorbs a few hundred tweets, and then writes something that presents all of those ideas as their own thinking.
This isn’t new. It’s what essayists have always done. But the scale and speed of it have changed. When you can feed an entire content library into an AI and have it produce a “fresh” article constructed from a variety of different sources, the line between synthesis and appropriation gets blurry.
And I’m not even sure it’s wrong, exactly. There’s genuine value in synthesis. Taking complex ideas from one domain and making them accessible to a different audience is useful work. Chris Williamson does this, and millions of people benefit from it. He’s not claiming to have invented the ideas. He’s distributing them.
But there’s a difference between good-faith synthesis - where the synthesiser adds context, makes connections, and credits the originating thinkers - and the increasingly common pattern of laundering other people’s ideas through an AI content engine and presenting them as personal insight. The first is journalism. The second is... well, it’s fast becoming the default mode of content creation, and that should concern anyone who cares about where ideas actually come from.
A citation layer - something that could trace the provenance of ideas back to their origins, even across multiple layers of synthesis - would be more valuable than any detection tool. Not “was this AI-generated?” but “where did this thinking actually originate?”
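For what it’s worth, the data structure underneath a citation layer doesn’t have to be exotic either. Here’s a toy sketch - names and structure entirely my own, assuming each piece of content simply records the works it drew on - showing how provenance could be walked back through multiple layers of synthesis.

```python
from dataclasses import dataclass, field


@dataclass
class Work:
    """A piece of content plus the works it synthesised from."""
    title: str
    author: str
    sources: list["Work"] = field(default_factory=list)


def provenance(work: Work, depth: int = 1) -> list[tuple[int, Work]]:
    """Walk the citation graph: every upstream work, with its distance from this one."""
    trail: list[tuple[int, Work]] = []
    for src in work.sources:
        trail.append((depth, src))
        trail.extend(provenance(src, depth + 1))  # a real graph would also need cycle checks
    return trail


# Two layers of synthesis still resolve back to the thinking they actually draw on.
paper = Work("Original study", "Researcher A")
book = Work("Popular book", "Author B", sources=[paper])
post = Work("Newsletter issue", "Writer C", sources=[book])

for distance, src in provenance(post):
    print(f"{'  ' * distance}{src.title} ({src.author})")
```

Nothing in that says “AI” or “human” - it just makes the lineage of the ideas inspectable, which is the part that’s actually missing.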
What I Think This Actually Becomes
I don’t think the answer is a new platform. The last thing anyone needs is another feed to scroll through, another set of notifications to ignore, another growth game to play. The attention Ponzi is already exhausting enough without adding another venue for it.
What I think is more likely to matter is infrastructure. Boring, unglamorous, protocol-level infrastructure for trust and provenance. Cryptographic identity. Citation tracking. Accountability signals. Things that don’t make for exciting launch posts but that quietly change the incentive structures underneath content creation.
The interesting question isn’t “can we keep AI out?” It’s “can we make it matter that a human is in?”
Those are different projects. The first one is futile. The second one might actually be worth building.
The platform nobody needs to build is the one with walls. The infrastructure everyone might eventually need is the one that makes trust portable, verifiable, and worth something again. Whether that emerges as a startup, a protocol, or just a cultural norm that enough people start demanding - I genuinely don’t know. But the pressure for it is building, and the people who figure out the trust layer before everyone else will have solved something that matters a lot more than content detection.
For now, I keep coming back to that etymological coincidence. Credentials and credibility: the same word, the same root, pointing at the same fundamental problem. We’ve spent decades building systems that distribute content at scale. We haven’t built the corresponding systems for distributing trust. That gap is where the interesting work is.


