The Discipline Everybody Should Build: Killing Your Own Ideas
Most strategic failure isn't choosing the wrong direction - it's the inability to systematically eliminate directions before they consume you (plus a free tool to help).
There’s a mistake I’ve seen founders, builders, and leaders make more than any other. More than hiring the wrong person, more than misjudging a market, more than running out of cash. It’s more fundamental than all of those, and it underpins most of them.
It’s the failure to understand how many assumptions are baked into an idea - and then having no system to kill those assumptions one by one.
I’ve seen this everywhere I’ve worked. Across automotive, cleantech, FinTech, VC, PE, Web3, and now in the AI space. The pattern is always the same. A team generates a promising thesis. The thesis gains internal momentum. People start building toward it. And buried inside that thesis are four or five assumptions that nobody has explicitly named, let alone stress-tested. When one of those assumptions turns out to be wrong - and it always does - the entire edifice wobbles, and the team is left wondering what happened.
But there’s something worse than backing the wrong thesis. It’s what happens when an organisation accumulates multiple theses simultaneously and lacks the discipline to kill any of them.
The Accumulation Problem
Most organisations I’ve encountered don’t suffer from a shortage of ideas. They suffer from an inability to eliminate them. The ideas pile up. Each one has merit. Each one has a champion. Each one has a plausible case for why it could work. And because no single idea is obviously wrong, the organisation tries to keep all of them alive.
This is especially acute in crypto/Web3, where the strategic landscape can completely shift quarterly and the absence of proven revenue models means almost everything is speculative. You’re not choosing between a known good option and a known bad option. You’re choosing between five uncertain options, each with a different risk profile, each requiring different resources, and each defended by someone who genuinely believes in it.
The result is predictable. The organisation enters what I’d describe as an endless validation loop. Teams attempt to validate each thesis. Every thesis surfaces problems. Because every thesis has problems - that’s the nature of operating in uncertainty - the validation process never produces a clean winner. And so the loop continues. More conversations. More analysis. More “what about this angle?” meetings.
Eventually, people get exhausted. The energy drains out of the process. And the attitude becomes: “We need to do something, so let’s just do something.”
That’s not strategy. That’s surrender dressed up as decisiveness.
Why This Happens
The root cause, in my experience, is that most teams confuse exploring an idea with validating an idea. Exploration asks: “What could this become?” Validation asks: “What would have to be true for this to work, and can we prove or disprove those things quickly?”
Exploration is energising. It’s creative. It generates excitement. People enjoy it. Validation is tedious, difficult and uncomfortable. It requires you to take your favourite idea and try to destroy it. It requires you to name your assumptions explicitly - not the comfortable ones, but the uncomfortable ones. The ones that sound like: “This only works if institutions actually care about X” or “This only works if we can convince custodians to do something they’ve never done before.”
Most teams skip the explicit assumption-naming step entirely. They move straight from “this is a promising direction” to “let’s start building toward it.” And when you haven’t named your assumptions, you can’t kill them. They just sit there, invisible, load-bearing, and untested, until the structure collapses.
The Morale Tax
There’s a second-order consequence that rarely gets discussed: the impact on the broader team.
When senior leadership cycles through strategic directions without resolution or well-founded conviction - or worse, when they shield the rest of the organisation from the fact that it’s happening - the effect on morale is corrosive. People aren’t stupid. They sense the drift even when they can’t name it. They notice that priorities shift without explanation. They notice that last quarter’s urgent initiative has quietly been deprioritised. They notice the gap between the stated values (transparency, alignment, focus) and the lived reality (ambiguity, shifting goalposts, hedged bets).
Shielding teams from strategic uncertainty might feel protective, but it often produces something worse than anxiety: it produces cynicism. People stop investing emotionally in any given direction because they’ve learned that it will probably change. The organisation retains bodies but loses belief.
The alternative isn’t radical transparency about every internal debate. It’s having a process rigorous enough that, by the time a direction reaches the broader team, it has survived genuine scrutiny. The team doesn’t need to see the sausage being made. They need to trust that the sausage was actually made well, not just assembled from whatever happened to be lying around.
What Assumption-Killing Actually Looks Like
It’s less glamorous than it sounds. The discipline is straightforward, even if the execution requires real honesty.
For any thesis under consideration, you explicitly list the assumptions that must be true for it to work. Not the obvious ones. The ones buried two or three layers deep. The ones that start with “we’re assuming that...” and end with something that makes the room go quiet.
Then you rank those assumptions by two criteria: how critical they are (if this assumption is wrong, does the whole thesis collapse?) and how testable they are (can we get signal on this within weeks rather than months?).
Then - and this is the part most teams resist - you design cheap, fast tests for the most critical and most testable assumptions. Not pilots. Not MVPs. Just: what’s the fastest way to get evidence that this assumption holds or doesn’t?
When an assumption fails its test, you kill the thesis. Not reluctantly. Not “let’s revisit it next quarter.” You kill it, free up the resources, and move to the next one. The emotional difficulty of this is exactly why most teams don’t do it. Killing an idea feels like failure. In reality, it’s the opposite. It’s the only way to concentrate resources on the ideas that have survived genuine scrutiny.
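To make the ranking step concrete, here’s a minimal sketch in Python. The assumption statements and scores are hypothetical, purely for illustration: each assumption is scored on the two criteria above, and the product surfaces which ones to test first.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    criticality: int  # 1-5: if this is wrong, does the whole thesis collapse?
    testability: int  # 1-5: can we get signal in weeks rather than months?

def prioritise(assumptions: list[Assumption]) -> list[Assumption]:
    """Rank so the most critical AND most testable assumptions are tested first."""
    return sorted(assumptions,
                  key=lambda a: a.criticality * a.testability,
                  reverse=True)

# Hypothetical backlog for a single thesis
backlog = [
    Assumption("Institutions actually care about X", criticality=5, testability=4),
    Assumption("Custodians will do something they've never done", criticality=5, testability=2),
    Assumption("Our team can ship a first version in a quarter", criticality=3, testability=3),
]

for a in prioritise(backlog):
    print(f"{a.criticality * a.testability:>2}  {a.statement}")
```

The point isn’t the arithmetic - any scoring scheme will do. The point is that writing the assumptions down as a ranked list forces the uncomfortable ones into the open, and gives you an explicit order in which to try to kill them.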
The Crypto-Specific Version of This Problem
Crypto amplifies the accumulation problem for a specific structural reason: the absence of revenue clarity. In industries with established business models, strategy debates are bounded by economics. You can model the revenue case. You can look at comparable companies. You can run the numbers.
In much of crypto, there is no revenue, or the revenue model is speculative. Blockchains are commoditised. Token prices are decoupled from fundamentals. The industry has spent years in a mode where developers build for other developers and nobody asks where the money comes from, because token appreciation masked the absence of a business model.
When that mask slips - as it does in every downturn - organisations suddenly have to confront questions they’ve been deferring. And they discover that they have five or six strategic directions, each speculative, each requiring significant resourcing, and no framework for choosing between them. The result is exactly the paralysis described above, except with the added pressure of needing to survive a market that’s no longer forgiving.
The teams that navigate this well are the ones that accept a hard truth early: in the absence of proven revenue models, your strategic process matters more, not less. You can’t rely on market signals to tell you which direction is right. You have to do the unglamorous work of naming assumptions, testing them, and killing the ones that don’t hold up. The process is the competitive advantage.
Building the Muscle
This isn’t a framework you implement once. It’s a muscle you build over time. It requires a culture where trying to kill an idea - and succeeding - is celebrated rather than mourned. Where the person who identifies the fatal assumption is thanked, not resented. Where the question “what would have to be true for this to work?” is asked reflexively, not reluctantly.
I’ve been guilty of the opposite more times than I’d like to admit. Falling in love with a thesis, ignoring the wobbly assumptions underneath it, pushing forward on momentum rather than evidence. Every significant mistake in my career traces back to that pattern in some form.
The discipline isn’t complicated. Name your assumptions. Test the critical ones fast. Kill what doesn’t survive. The hard part is doing it honestly, repeatedly, when the ideas you’re killing are the ones you most want to be true.
To help with this challenge, F3, my venture studio, created a free tool designed to pressure-test whether a business idea addresses a real problem and whether the proposed solution is credible. We call it ‘Gary,’ a name that’s a confusing nod both to Y Combinator CEO Garry Tan and to a young stowaway in the classic sitcom Only Fools and Horses, who is found in Denzil’s lorry by Del and Rodney. They hide him in their flat and nickname him “Gary,” as it’s the only word he says.
Gary operates as a structured validator and thinking partner (ours, not the Only Fools and Horses one), not a brainstorming assistant - he won’t come up with ideas for you, and he won’t research or validate them on your behalf either. His sole objective is to challenge assumptions and design tests that extract evidence for any given idea. While Gary shouldn’t be considered the final word on assumption testing and idea validation, over 40 users have found him useful for establishing problem legitimacy, customer-segment precision, solution credibility and evidence of demand - and, moreover, for gaining a practical understanding of what it really means to try to kill an idea in order to make it stronger.
Gary is free to use, and you can find him here. Give him a try and let me know what you think in the comments!


