The Human Donkey Problem
Where productivity tools silently destroy agency (I'm building something to fix this)
Most productivity software gets marketed on a single promise: more output, less effort. More posts. Faster drafts. Content in half the time. This framing treats output as the goal and the human as the chokepoint. The better the tool, the less the human has to do.
To an extent, this makes sense. Removing genuinely unnecessary friction is good design. But there’s another trajectory, and it’s where most productivity tooling risks ending up: optimising output velocity by progressively removing the human from the cognitive process. Not just the administrative parts. The thinking parts.
The result is what we might call a human donkey. The user fills in fields, selects from menus, drags blocks into templates. The system handles the rest. Output is consistent, reliable, and increasingly divorced from anything the user actually thought.
The human is no longer creating. They are executing. Efficiently, reliably, and with diminishing understanding of what they are producing or why.
The drag-and-drop trap
Over-structured content tools are the most visible version of this. The logic is intuitive: give users a clear structure, reduce the open-ended ambiguity of a blank page, and they will produce more. Which is true. They will produce more.
What tends not to survive this process is creative ownership. When the structure is provided rather than built, when the user’s job is to fill rather than to form, the cognitive load shifts from the person to the system. That sounds like a good trade. But what gets relocated is not just the complexity of deciding how to structure a piece. It’s the process of working out what you actually think.
Anyone who has written something properly knows that the thinking and the writing are not separate activities. You do not think first and then write. You think by writing. Remove that process, and you do not have a faster version of the same outcome. You have a different outcome entirely, one where the author has produced content without developing understanding.
I’ve found it useful to distinguish between two things a tool can do: it can increase what you produce, or it can increase what you can think. Over-structured, drag-and-drop systems optimise aggressively for the former. The latter barely registers as a design goal.
Why this is hard to see
The disquieting part of this dynamic is that it tends to feel productive. The human donkey is not idle. Posts go out. Metrics accumulate. The dashboard looks healthy.
What is harder to measure is whether the person operating the system is developing anything. Whether they have a clearer sense, six months in, of what they think and why it is worth reading. Whether they could hold a conversation on their stated area of expertise that goes deeper than the surface their content has skimmed. Whether they are building fluency or merely generating volume.
These questions do not surface naturally when the output numbers are good. They require a deliberate decision to ask them.
And this is exactly the problem. The default success metric for a productivity tool is user-reported satisfaction, almost always correlated with output volume. “I doubled my content in two weeks” is a testimonial that moves product. “I have a clearer sense of what I think about my domain” is almost impossible to attribute to any specific tool and does not feature prominently in any growth playbook.
The incentives, in other words, are structurally aligned against the right outcome.
The commercial pressure
This is where the design ethics become genuinely complicated. A tool that converts users into efficient, low-agency executors is, in many cases, commercially successful. Retention is strong. The value proposition is legible. The comparison to a world without the tool is flattering.
The pressure to build toward this outcome is real, and it is not cynical. It is structural. Optimise for user-reported satisfaction, optimise for output metrics, and you eventually arrive at exactly the kind of tool that severs the user from the creative process. Not through bad intentions, but through the accumulated weight of incentive.
The guiding principle I keep returning to is this: the right test for any tool is not whether it increases what a person produces, but whether it increases what a person can think. These are not always the same thing. They can, in fact, run directly against each other.
A tool that holds your hand through every step of the content process may genuinely produce more output. But it is also training you, session by session, to rely on the scaffold rather than develop the capability. The productivity gain is real. The dependency that accumulates alongside it is also real.
What extending thinking actually looks like
There’s a different category of tool that is harder to describe precisely because it does not optimise for one clean metric.
A tool that genuinely extends thinking creates friction in the right places. It prompts before it generates. It asks what you mean rather than assuming. It holds enough structure to be navigable, but enough open space to allow genuine discovery. When you are done with it, you have produced something, but you have also clarified something. The output and the thinking moved forward together.
Two pieces of content can look identical on the surface: one produced by someone who was thinking throughout the process and one produced by someone who was filling templates. What differs is what the author knows afterward, and what they are capable of the next time they sit down.
That second kind of difference does not feature in any dashboard. But it is the only one that compounds.
The line is fine, and it moves
I do not think this is a problem that can be solved once at the design stage and then set aside. The commercial pressure to drift toward high-efficiency output tooling is continuous. It does not announce itself.
It arrives as small decisions. Adding one more template option. Making one more step automatic. Removing one more moment of friction that users reported as a pain point. Each of these, individually, looks like an improvement. Cumulatively, they can shift a tool from one that extends human thinking to one that replaces it, without anyone having made that decision explicitly.
The only reliable response is to treat this as a guiding principle that needs to be surfaced actively and often. The purpose of the tool is to help people think and express themselves better. Not just more. Better. And each design decision should be tested against that honestly, with awareness that the commercial pressure will consistently pull in the opposite direction.
The line between a tool that extends human capability and one that silently replaces it may be fine. That is exactly why it requires sustained attention rather than a one-time commitment.
Where this leaves us
The human donkey problem is not a critique of any particular tool or platform. It is a description of a failure mode that most productivity software is structurally incentivised to drift toward over time.
Understanding that drift is the first condition for designing against it. And recognising it in tools you already use is the first condition for making deliberate choices about which kind of productivity you are actually after.
The goal worth pursuing is not a tool that makes humans more efficient. It is a tool that makes humans more capable. These are different ambitions. They lead to different products. And the distance between them is not always as obvious as it should be.
I’ve been working on just such a tool, and I’m looking for early testers. If you’re interested, click below. I’d love to hear from you!