AI-Powered Classrooms: Education or Extraction?
AI-powered education sounds almost utopian. Every student gets a curriculum that adapts in real time to their curiosity, their pace, their emotional state. Engagement is monitored not to punish but to recalibrate. Cross-disciplinary connections are surfaced automatically, so the kid who’s fascinated by why a tulip draws water upward gets shown the neuroscience of curiosity itself: what’s happening in their brain right then that makes them lean forward for some questions and zone out for others. The rigid, industrial-era classroom dissolves into something fluid and individualised.
This is not science fiction. The underlying capabilities exist today. Cameras that read facial micro-expressions, language models that can teach at any level on any subject, recommendation engines that already know how to hold attention for hours. The infrastructure is here. The question that matters, and the one almost nobody is asking carefully enough, is: who writes the objective function?
Because the same system that could give every child the equivalent of a world-class private tutor could also become the most sophisticated attention-extraction machine ever built.
The Education I Didn’t Get
I remember sitting in biology learning about how a tulip draws water from the soil. The mechanism was interesting enough: capillary action, cohesion-tension theory, the physics of water moving against gravity. But what I actually wanted to know was why I found it interesting. What was happening in my brain that made me lean forward for that and zone out for the next topic? The connection between the biology I was learning and the psychology of my own attention was right there. Nobody drew the line between them.
That’s not a failure of any individual teacher. It’s a structural limitation. One teacher, thirty students, a fixed curriculum, forty-five-minute blocks. There is no room in that system for the kind of responsive, curiosity-following education that actually produces deep understanding. The system was designed for throughput, not for learning.
AI could change that. Not in the vague, hand-wavy sense that people usually mean when they say “AI will transform education.” In a specific, mechanical sense: a system that monitors what a student responds to, identifies the connective threads between their interests, and restructures the learning path in real time. If a student lights up during a discussion of fluid dynamics but checks out during taxonomy, the system notices and finds a way to teach taxonomy through the lens of fluid dynamics. Or it identifies that the student’s real interest is in systems and reorients the entire approach.
Cameras in classrooms could track engagement, not as surveillance in the punitive sense, but as a feedback mechanism. The same object detection technology that identifies defects on a production line could identify the moment a student disengages. Not to report them, but to adapt. Change the angle. Try a different modality. Surface a connection that re-sparks attention.
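To make the loop concrete, here is a deliberately tiny sketch of what such a system might do. Everything in it is invented for illustration: the `observe`/`choose_framing` names, the 0-to-1 engagement signal, and the idea of tracking per-tag interest with an exponential moving average are assumptions, not a description of any real product.

```python
from collections import defaultdict

ALPHA = 0.3  # how quickly interest estimates update after new evidence

# Prior: neutral interest (0.5) for every topic tag we haven't seen yet.
interest = defaultdict(lambda: 0.5)

def observe(tags, engagement):
    """Update per-tag interest after a lesson segment.
    `engagement` is a hypothetical 0..1 signal (e.g. from attention tracking)."""
    for tag in tags:
        interest[tag] = (1 - ALPHA) * interest[tag] + ALPHA * engagement

def choose_framing(required_topic, framings):
    """Pick the lens for a required topic that best matches current interests.
    `framings` maps a framing name to the topic tags it draws on."""
    # Only consider framings that actually cover the required topic.
    candidates = {f: tags for f, tags in framings.items() if required_topic in tags}
    def score(tags):
        return sum(interest[t] for t in tags) / len(tags)
    return max(candidates, key=lambda f: score(candidates[f]))

# The essay's example: fluid dynamics lights the student up, taxonomy doesn't.
observe(["fluid_dynamics", "physics"], engagement=0.9)
observe(["taxonomy", "memorisation"], engagement=0.2)

framings = {
    "taxonomy_as_lists": ["taxonomy", "memorisation"],
    "taxonomy_as_systems": ["taxonomy", "systems", "fluid_dynamics"],
}
print(choose_framing("taxonomy", framings))  # → taxonomy_as_systems
```

The point of the sketch is how little machinery the core loop needs: a running estimate of what holds this student's attention, and a re-ranking of how to present the material that is still mandatory. Everything hard lives in the engagement signal and, as the rest of the essay argues, in what the loop is ultimately optimising for.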
The potential here is enormous. The education that the wealthiest families buy, the kind with one-on-one tutors who know exactly how a child thinks and what ignites their curiosity, could theoretically be made available to every student through AI systems that replicate that attentiveness at scale.
The Dystopian Mirror
Now flip the lens.
A classroom with a humanoid robot at the front, plugged into a model trained on every pedagogical dataset in existence. It reads every student’s face, adjusts in real time, personalises perfectly. Sounds like the utopian version, right? Except now ask: who trained the model? What was it optimised for? And who is paying for the infrastructure?
This is where the thought experiment turns uncomfortable. Because the technology that can measure engagement can also measure susceptibility. A system that knows when a student is most receptive to new information also knows when they’re most receptive to influence. And the organisations most experienced at building attention-optimising systems are not education nonprofits. They’re advertising platforms.
Social media already does a version of this. The algorithms are sophisticated enough to modulate emotional states: show harrowing content, then show something warm and positive, then show an advertisement at the peak of the emotional rebound. That is not a conspiracy theory. It is the documented, optimised behaviour of recommendation engines trained to maximise time-on-platform and ad engagement. The system learned, without anyone explicitly programming it, that emotional volatility increases engagement.
Apply that same optimisation logic to a classroom. A system that can hold a student’s attention perfectly could be holding their attention for the purpose of teaching them calculus, or it could be holding their attention for the purpose of shaping brand preferences, political attitudes, or consumption patterns. The technology is identical. The objective function is what differs.
Goodhart’s Law in the Classroom
There’s a useful framing for this: Goodhart’s Law applied in a psychosocial context. Goodhart’s Law says that when a measure becomes a target, it ceases to be a good measure. In the standard economic reading, this means things like: when you reward hospitals for reducing wait times, they find ways to game the metric without actually improving care.
In education, the equivalent would be: when you optimise for measurable engagement, you get systems that are very good at holding attention but not necessarily good at producing understanding. Engagement becomes the proxy for learning, and the proxy gets optimised at the expense of the thing it was supposed to represent.
This is not hypothetical. It is the exact dynamic that already plays out on every social media platform. The metric (time spent, clicks, shares) was supposed to be a proxy for “the user is getting value.” Instead, the metric itself became the target, and the result is platforms optimised for compulsive use rather than genuine benefit. Advertisers’ bottom lines get optimised at the expense of human wellbeing.
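The proxy-divergence dynamic is easy to demonstrate with a toy model. The numbers below are made up purely for illustration: each piece of content has a true “learning” value and a separately gameable “hook” term, and observed engagement is a blend of the two. An optimiser that ranks by the engagement proxy reliably delivers less learning than one that could rank by learning directly.

```python
import random

random.seed(0)  # fixed seed so the toy example is reproducible

# Each content item: a true learning value and a gameable "hook"
# (emotional volatility, novelty, outrage) that boosts engagement
# without teaching anything.
content = [
    {"learning": random.random(), "hook": random.random()}
    for _ in range(1000)
]
for c in content:
    # Observed engagement = part real signal, part gameable hook.
    c["engagement"] = 0.5 * c["learning"] + 0.5 * c["hook"]

def top_k(items, key, k=50):
    """Greedy selection: the k items that maximise the given score."""
    return sorted(items, key=key, reverse=True)[:k]

def avg_learning(items):
    return sum(c["learning"] for c in items) / len(items)

# Optimise the proxy vs optimise the real objective.
by_engagement = top_k(content, key=lambda c: c["engagement"])
by_learning = top_k(content, key=lambda c: c["learning"])

print(avg_learning(by_engagement))  # proxy-optimised selection
print(avg_learning(by_learning))    # learning-optimised selection
```

In this setup the proxy-optimised selection always underperforms, and the gap widens the more “hook” contributes to measured engagement. That is Goodhart’s Law in miniature: the selector is excellent at its stated metric and systematically worse at the thing the metric was meant to stand in for.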
Now imagine that dynamic inside a system with direct access to children’s attention for six hours a day, five days a week, for twelve years. The stakes are not comparable to someone doomscrolling for an extra twenty minutes. The stakes are the formation of how an entire generation thinks.
The Human Element as Luxury
One counterargument I find compelling is that the right model isn’t replacing teachers with AI, but enhancing human teachers with AI tools. A human who handles the social, emotional, and relational dimensions of education, augmented by AI that handles personalisation, assessment, and content delivery. The human provides the thing AI cannot: genuine relational presence, the ability to model social behaviour, the judgment calls that come from actually caring about a specific child’s wellbeing.
But I think this model, as appealing as it is, underestimates where the economics are heading. Any service created or delivered by a human is trending toward becoming a luxury. We can already see this in other domains: handmade furniture is a luxury, hand-cooked food is increasingly positioned as premium, human financial advice costs more than robo-advisory. The pattern is consistent. Automation makes the baseline free or cheap. The human version becomes the premium tier.
Applied to education: AI-taught classrooms become the default, available to everyone. Human teachers become the equivalent of private tutors for families who can afford them. This is not necessarily dystopian. The AI-taught baseline might be genuinely better than the current system for most students. But the question of who controls the objective function becomes even more urgent when you realise that the majority of students would have no human intermediary to notice if the optimisation drifts from education toward extraction.
The Only Question That Matters
The technology to build deeply personalised, curiosity-driven education exists or will exist shortly. That is not the interesting question. The interesting question is whether the institutions and incentive structures that deploy this technology will optimise for learning or for something else.
Can human psychology be computed? Probably, to a significant degree. Behaviour can be predicted, emotional states can be inferred and influenced, attention can be directed. The social media industry has proven this comprehensively. The question is not whether AI can understand and influence how students think. It can, or it will be able to. The question is whether we build systems where the objective function is controlled by educators and researchers, or whether we let it default to whoever is willing to pay for the infrastructure.
Because the infrastructure is expensive. And the organisations with the deepest pockets and the most experience optimising attention are not the ones whose primary concern is whether a fourteen-year-old understands why a tulip draws water.


