Essay for the Weekend: The Limits of Knowledge
AI and the very human capacity for understanding
AI isn’t making knowledge redundant. It’s making the public performance of knowledge cheaper, and that’s exposing a difference modern culture has spent a long time blurring: the difference between knowing more and understanding better. For years, we rewarded people for storing, retrieving, summarising, and displaying information. Now that machines can do much of that on demand, a harder question returns. What does understanding consist in, and why does it matter more when answers are easy to produce?
This isn’t a small shift. It changes how we think about education, work, intelligence, and even what sort of person we’re trying to become. For a long time, modern societies treated knowledge as both power and status. That wasn’t irrational. Information was unevenly distributed, difficult to find, and often slow to verify. If you knew more than other people, or if you could organise what you knew better than they could, you gained a real advantage. Schools rewarded recall. Professions rewarded expertise. Public life rewarded the appearance of being informed.
AI doesn’t remove the need for truth, and it doesn’t make genuine expertise unnecessary. But it does weaken the social value of certain visible performances of knowledge. It can produce a plausible summary, a tidy explanation, a competent comparison, or a polished draft in very little time. That changes the meaning of activities that once looked like strong evidence of intelligence. We may not be entering a post-knowledge society, but we may be entering an early stage of a society in which knowledge, by itself, no longer carries the status it once did.
That shift matters because knowledge and understanding aren’t the same thing. Knowledge can often be stated. It can be listed, tested, quoted, and transferred. Understanding is less portable. It involves seeing how something fits together, where an explanation holds, what it leaves out, and why it matters. A person can memorise the rules of a game and still not understand how the game is actually played. They can learn the language of economics and still not understand how fear moves through a market. They can read about grief and still not understand what grief does to the shape of a life.
This is why Hermann Hesse’s Siddhartha still feels relevant. The novel’s often remembered as a spiritual book, but one of its deepest concerns is the gap between borrowed wisdom and realised understanding. Siddhartha passes through teachers, disciplines, pleasures, disappointments, ambition, and loss. Again and again he encounters systems that promise meaning. Again and again he discovers that no teaching, however refined, can do the seeing for him. That isn’t because teaching is worthless. It’s because there are some truths that can’t simply be handed over in words and possessed like objects. They have to become real within the person who hears them.
At first glance, this may seem like a question in epistemology, the branch of philosophy concerned with knowledge: what it is, how we justify it, and where its limits are. But it quickly becomes an ontological question as well, because the way we know the world shapes the kind of people we become within it. A culture that trains people to collect facts, perform fluency, and move quickly from question to answer will produce one kind of self. A culture that trains people to stay with uncertainty, compare explanations, and return repeatedly to lived reality will produce another.
Hilary Lawson’s idea of closure helps explain why this matters. The term can sound abstract, but the core insight is simple. Reality is always richer, messier, and more open than the descriptions we use to manage it. In order to think and act at all, we draw boundaries around that openness. We simplify. We name. We classify. We frame. Lawson calls these acts of simplification closures.
A tree is a good place to start. To a builder, a tree may be timber. To a walker, it may be shade. To a bird, it’s a habitat. To a child, it might be a place to climb or hide. To a climate scientist, it may be part of a carbon cycle. None of these descriptions is false. Each captures something real. But none of them is the whole tree. Each is a way of closing a larger reality into a form that serves a purpose.
The same thing happens with people. Think about how often we describe one another by professional titles. Someone’s a doctor, a teacher, a founder, a manager, a solicitor. These labels are useful. They tell us something real. But they also reduce a person to a narrow function. The title doesn’t tell you about their private fears, their loyalties, their grief, their humour, their moral blind spots, or the contradictions that shape their life. It gives you a workable frame, not the whole person. That, too, is a closure.
Lawson’s wider philosophy is sometimes described as post-reality, which can sound more dramatic than the point really is. He isn’t saying that reality is unreal, or that truth doesn’t matter. He’s saying that we never encounter reality in a pure and final way, untouched by interpretation. We meet the world through closures. We use maps, categories, models, and names because we must. They’re how we cope with the openness of things. The danger begins when we forget that these closures are partial and start treating them as the whole.
Once that idea is in view, AI looks slightly different. It isn’t just a machine for retrieving facts. It’s a machine for producing closures quickly and persuasively. You bring it an open question and it gives you a usable frame. It can turn confusion into structure, abundance into summary, and uncertainty into a list of options. That’s one reason it feels so helpful. But a useful closure isn’t the same thing as understanding. Understanding includes some awareness of what the closure leaves out, where it stops working, and how it changes what becomes visible to us.
This is also why the growing presence of AI can feel unsettling in a way that goes beyond economics. The anxiety isn’t only about jobs or competition. It’s also about identity. Many people have built their sense of value around being knowledgeable, articulate, or mentally quick. But when a machine can imitate those outward signs of intelligence, the older question returns with more force. What, exactly, was valuable in the first place? Was it the possession of information, or was it something deeper that information only sometimes serves?
At this point, consciousness becomes hard to avoid. Part of what we ordinarily mean by understanding seems to involve more than correct output. It involves having a point of view within experience. Things matter to us. They wound us, attract us, burden us, embarrass us, and change us. We don’t merely process grief, beauty, shame, love, loyalty, responsibility, or mortality as topics. We encounter them as realities within a life. A system may be able to describe grief with great fluency. That doesn’t settle whether it understands grief in the way a bereaved person does. A system may generate language about love or fear, but those aren’t only concepts. They’re lived states of being.
This doesn’t prove that machines can never understand. That claim is much harder to establish than many people assume. But it does point to something important. Our ordinary sense of understanding includes an experiential dimension. It includes salience, consequence, and inwardness. To understand something, in the richest human sense, isn’t just to be able to describe it. It’s to have been changed by contact with it.
Gödel enters the discussion at precisely this pressure point, though he’s often used too carelessly. His incompleteness theorems don’t prove that human beings are magical, nor do they decisively prove that machines can never think. What they do show is that any consistent formal system rich enough to express arithmetic contains true statements that can’t be proved from within the system itself. Gödel himself took this to raise a serious question about whether human mathematical understanding could be fully captured by any finite procedure. One doesn’t have to accept the strongest anti-machine reading of Gödel to see what makes it important. It reminds us that any picture of intelligence as nothing more than rule-following inside a closed system is likely to be incomplete.
That matters beyond mathematics. Understanding often involves more than operating competently within a framework. It also involves recognising the limits of the framework, seeing what it excludes, and sometimes stepping outside it altogether. In practice, this is something human beings do all the time. We notice that a model is too simple, that a category no longer fits, that a theory explains one level of reality while obscuring another. We revise our frames, sometimes painfully. We don’t just calculate within them.
This is part of what makes the pursuit of knowledge so double-edged. Knowledge can steady us, guide us, and in many fields save lives. But when it’s pursued mainly as possession, performance, or defence against uncertainty, it can become a source of mental unrest rather than peace. The person who treats knowledge as status or control never quite has enough of it. There’s always another book to read, another framework to master, another opinion to track, another update to absorb. The mind becomes crowded, but not necessarily settled.
This isn’t only a modern problem, but modern culture intensifies it. We live amid endless information, endless commentary, and endless incentives to stay current. The result is a peculiar form of strain. We’re invited to know more and more while being given fewer conditions under which understanding can deepen. Speed is rewarded. Reaction is rewarded. Accumulation is rewarded. But understanding often requires the opposite conditions. It needs time, repetition, perspective, and contact with something more stubborn than discourse.
That’s why the distinction between knowledge and understanding matters so much in the age of AI. The technology didn’t invent the confusion, but it’s made it harder to ignore. When answers become cheap, the value of understanding becomes easier to see. Not because understanding is mystical or anti-intellectual, but because it includes the things that cheap answers don’t automatically give us: judgement, context, proportion, first-hand contact with reality, and some capacity to recognise when a neat frame is too neat.
If that’s right, then the task ahead isn’t simply to defend knowledge from machines, as if the important thing were to preserve our monopoly over information. The deeper task is to recover forms of learning and culture that value understanding more highly than performance. In education, that would mean rewarding explanation, application, and comparison more than recall alone. In work, it would mean valuing the person who can ask the better question, spot what a model has missed, or recognise which human reality is being flattened by an efficient abstraction. In ordinary life, it would mean reading less frantically, returning more often to direct experience, and using AI as a tool for first drafts rather than final judgements.
It would also mean becoming more alert to closures in our own thinking. When we reduce a person to a title, a problem to a slogan, or a life to a metric, we gain speed and lose depth. Sometimes that trade is worth making. Often it isn’t. Understanding grows in the ability to notice the trade rather than live inside it unconsciously.
The point, then, isn’t to set knowledge and understanding against each other as though one were scientific and the other spiritual. We need knowledge. We depend on it. But knowledge is at its best when it serves understanding rather than pretending to replace it. It’s a tool for orientation, not a substitute for it. AI is making that harder to miss. The cheaper answers become, the more clearly we can see that a human life can’t be guided by answers alone. It also needs the slower, less transferable, and more demanding work of learning what those answers mean, where they fail, and how to live with what they reveal.
I publish these essays without a paywall so they remain open to anyone who finds them valuable. If this piece added something to your week, you are welcome to support the work by buying me some research fuel!