The Content Singularity
What happens when every AI reads the same books.
There’s a thought experiment I keep coming back to. It’s simple, and frankly it’s a bit depressing.
Say you’re building or using an AI content engine (as many are starting to). You feed it your transcripts, your notes, your half-formed ideas. It synthesises all of that into posts, articles, threads. Fine. That’s basically what anyone using AI for content is doing now, to varying degrees of sophistication.
But then you think: why stop at my own material? Why not slide Peter Thiel’s “Zero to One” into the content library? Or a few Naval Ravikant essays? Maybe some Rob Henderson, some Richard Branson, a bit of Nassim Taleb for texture. Before long, your content engine is drawing on the same canon of ideas that every other ambitious person on the internet has also fed into their content engine.
And then what? Everyone’s output starts converging. Not word for word, obviously. But the ideas, the frameworks, the references, the worldview. It all starts to sound like it was written by the same well-read ghost. That is what’s going to happen.
‘Show me the incentive and I’ll show you the outcome’ - Charlie Munger.
If the thought has popped into my head, it’ll pop into everyone else’s over time too.
Synthesis is not the problem
I want to be precise about what I’m describing, because there’s a version of this that’s completely fine, even admirable.
Chris Williamson is a good example. He reads widely, interviews deeply, and synthesises ideas from dozens of thinkers into something accessible and well-presented. He’s not claiming the thoughts are original. He’s a conduit, a discovery layer. You watch a Williamson clip and you encounter an idea from, say, evolutionary psychology or behavioural economics that you might not have found on your own. That’s valuable. That’s how knowledge moves through culture: it gets recycled, reapplied, repackaged for new audiences, and sparks new thinking in the process.
He doesn’t tend to produce deep original insights. He synthesises and re-hypothecates, and there’s absolutely nothing wrong with that. If I could do it with the fidelity, consumability and presentation quality that he does, I’d be pretty happy.
The key is that Williamson does this openly. The synthesis is transparent.
What I’m worried about is the non-transparent version. The version where AI handles the synthesis invisibly, and the person presenting the ideas is under nothing more than a moral obligation to tell you where any of it came from.
The intellectual laundering problem
Here’s the distinction that matters: there’s synthesis, and then there’s laundering.
Synthesis is taking ideas from multiple sources, combining them in ways that produce new understanding, and being upfront about the ingredients. Every good essayist, every good podcaster, every good teacher does this. You stand on the shoulders of others and you say so.
Laundering is feeding other people’s thinking into a system that outputs it in your voice (AI content systems will get better and better at this over time), under your name, with no trail back to the source. The ideas pass through the AI like money through a shell company. They come out clean, they sound like yours, and nobody can easily prove otherwise.
The uncomfortable part: these two things look identical from the outside. A synthesised post drawing on Thiel and Henderson and Taleb reads exactly like a thoughtful person who’s read Thiel and Henderson and Taleb. The output is the same. The process is completely different.
And here’s the deeper issue: as the tools get better, even the process distinction starts to blur. Is it laundering if you genuinely understood the source material and just used AI to articulate it more clearly? Is it synthesis if you never actually read the books but your AI did? Where’s the line?
I don’t have a clean answer to this yet, but intuitively, we all know where the line is.
The convergence endpoint
The practical consequence is easier to see. When everyone’s content engine is drawing on the same pool of high-status thinking, the internet’s intellectual diversity collapses.
Right now we’re in an awkward transition period. Some people are still writing from genuine first-person experience and hard-won insight. Others are running the laundering playbook, consciously or not. The two coexist because you can still, if you’re paying attention, feel the difference. There’s a texture to writing that comes from actual experience, actual uncertainty, actual thinking-in-progress. It doesn’t always sound polished. It contradicts itself sometimes. It has rough edges.
AI-synthesised content, by contrast, tends toward a certain effortless competence. Every point is well-made. Every transition is smooth. Every conclusion is tidy. It’s the uncanny valley of thought leadership: technically impressive, emotionally hollow.
But the tools are getting better fast. The rough edges will be engineered in. The “human feel” will become a style parameter, not a signal of authenticity.
The endgame, as a colleague put it to me recently, may just be agents talking to agents. Your content engine publishes something. My content engine reads it, incorporates anything useful, and publishes its own response. Your engine reads that, updates its model of the discourse, and produces the next iteration. The humans involved are, at best, approving outputs. At worst, they’re not even in the loop.
At that point, the entire content ecosystem is just a closed system of AI-generated ideas circulating between AI consumers, with humans occasionally dipping in to skim the output.
Doesn’t that sound healthy and fulfilling? Not really.
And this already seems to be playing out. Human ‘agents’ all over LinkedIn, Substack and, I assume, elsewhere are simply prompting LLMs to create content, while other human agents prompt LLMs to generate a response. And so it goes, back and forth, in a depressingly transparent game of AI slop tennis for which I am assumed to be a naive and impressed spectator.
The citation gap
One thing that keeps nagging at me: we have citation infrastructure for academic work. If you write a paper and use someone else’s findings, you cite them. There are norms, there are systems, there are consequences for not doing it.
Content has nothing like this.
As a bit of a thought experiment, imagine a tool, something like a browser extension or an overlay, that could trace a claim or idea back to its likely origin. You’re watching a Williamson video, he surfaces some concept about status signalling, and the tool flags: “The original articulation of this idea was Rob Henderson, ‘Luxury Beliefs’, 2019. Here are the primary sources.” Not to discredit Williamson. Just to give you the lineage.
This would be genuinely useful. Not as a gotcha mechanism, but as a discovery tool. The value of synthesis is that it surfaces ideas you wouldn’t have found on your own. The limitation of synthesis is that it often strips away the context, the nuance, the original argument. A citation layer reconnects you to the source.
And if you built that kind of verification into a content pipeline from the start, if your own system tracked where every idea came from and included that provenance in the output, you’d have something that most content creators don’t: transparent intellectual supply chains. You could scale that to read anyone else’s content and do the same analysis.
That might actually be a meaningful differentiator in a world where everyone’s content engine is regurgitating the same inputs. Not “my ideas are more original than yours.” Just: “I can show you where everything came from, and I do.”
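To make the thought experiment slightly more concrete, here’s a minimal sketch of what a provenance-tracked content item could look like. Everything in it is hypothetical: the `Source` and `ContentItem` structures and the `cite` and `render` methods are illustrative stand-ins, not a real tool or library.

```python
# A hypothetical sketch of a provenance-tracked content item.
# Names (Source, ContentItem, cite, render) are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Source:
    author: str     # who originated the idea
    work: str       # where it was articulated
    year: int
    note: str = ""  # what, specifically, was borrowed


@dataclass
class ContentItem:
    body: str
    sources: list[Source] = field(default_factory=list)

    def cite(self, source: Source) -> None:
        """Record a source at the moment an idea enters the draft."""
        self.sources.append(source)

    def render(self) -> str:
        """Emit the content with its provenance trail attached."""
        lines = [self.body, "", "Sources:"]
        for s in self.sources:
            lines.append(f'- {s.author}, "{s.work}" ({s.year}): {s.note}')
        return "\n".join(lines)


# Usage: the earlier example, traced back to its origin.
post = ContentItem(body="Status games are increasingly played with beliefs, not goods.")
post.cite(Source("Rob Henderson", "Luxury Beliefs", 2019,
                 note="original articulation of the luxury-beliefs framing"))
print(post.render())
```

The point isn’t the code. It’s that provenance gets captured at write time, as the idea enters the draft, rather than reconstructed after the fact.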
I’m planning to expand on this thought experiment in a forthcoming article.
What’s actually scarce
If synthesis is abundant and getting more abundant by the day, the scarce thing isn’t well-articulated thinking. It’s the raw material: genuine experience, genuine uncertainty, genuine first-person encounters with reality that haven’t been pre-digested by anyone else’s framework.
The person who actually built the thing, who actually failed at the thing, who actually sat with the confusion long enough to develop a non-obvious take. That’s the input that no content engine can fabricate, at least not yet. Everything downstream of that, the synthesis, the articulation, the distribution, AI handles increasingly well.
So the question isn’t really “how do we stop the homogeneity crisis.” It’s probably unstoppable. The question is: what are you feeding into the machine that nobody else has?
If the answer is “the same books and podcasts everyone else consumes,” then yeah, your output is going to converge with everyone else’s. That’s just the maths.
If the answer is “things I’ve actually lived through, actually struggled with, actually thought about at 3am when I couldn’t sleep,” then maybe you’ve got something the content singularity can’t easily replicate.
Maybe. Hopefully. We’ll see.