Two Cognitive Biases That Make Smart People Foolish

Narrative fallacy and hindsight bias feed each other, creating false confidence that compounds. Here's how to break the cycle.


December 2024. Sam Altman tells the world AGI will hit sooner than most think. Maybe during Trump's term. Maybe this year. OpenAI announces they're "confident we know how to build AGI."

November 2025. Those timelines look stretched. The AI agents that were supposed to "join the workforce" haven't shown up at scale. Altman's already walking things back, saying AGI "will matter much less" than people think.

We're watching two cognitive biases operate together in real time. The same ones that drove a $47 billion coworking company into bankruptcy and convinced investors to fund blood tests that didn't work.

They affect all of us.

Why Smart People Keep Making Predictable Mistakes

Narrative fallacy is what Nassim Taleb called "our limited ability to look at sequences of facts without weaving an explanation into them." When things happen, our brains compress them into stories with clear cause and effect. These stories feel like understanding. Often they're just compression artifacts.

Hindsight bias works differently. The "I knew it all along" effect. After an outcome becomes known, we convince ourselves it was more predictable than it actually was. A 2025 survey found that 68% of retail investors believed their past decisions had been more predictable than they actually were. Nearly half admitted taking bigger risks because of this false confidence.

These biases don't just coexist. They feed each other.

The Double Whammy

Hindsight bias tells you the outcome was obvious. Narrative fallacy hands you a story explaining why it had to happen that way. Together, they create dangerous false confidence that compounds over time.

Think of it this way. Hindsight bias is the feeling. Narrative fallacy is the explanation your brain invents to justify that feeling. One says "I knew it." The other fills in the blanks with a tidy story about why you knew it.

Neither is true. But together they're incredibly convincing.

Why It Gets Worse Over Time

Something happens. You convince yourself you saw it coming. You build a story about why it was inevitable. You internalize this model as truth. You make predictions based on it. Those predictions fail. You're surprised.

The cycle repeats.

Each iteration makes you more confident. You've "learned" from experience. You've got pattern recognition now. Except the patterns are made up. We're building decision frameworks on foundations that don't exist.

This is compounding error in action. Each bad mental model feeds the next one. And because the stories feel so coherent, we rarely question them.

Meta's $68.8 Billion Bet on a Story

October 2021. Zuckerberg rebrands Facebook as Meta. By 2024, Reality Labs had burned through a cumulative $68.8 billion. Revenue from 2019 through September 2022? Just $5.3 billion.

The narrative kept shifting to fit the facts. First: "Get in early or get left behind." When losses mounted: "This is a long-term bet, these losses were expected." When Meta's stock dropped 60%: "We're entering the year of efficiency."

But the underlying reality stayed constant. Consumer VR adoption remained niche. The business model couldn't generate returns. The technology wasn't ready.

What's really happening here is the sunk cost fallacy and narrative fallacy reinforcing each other. When you've committed billions to a vision, constructing explanations for why it's still valid becomes more appealing than admitting it was wrong. This continues until external pressure forces a reckoning.

WeWork's $37 Billion Narrative Collapse

July 2019. Adam Neumann had already liquidated $700 million of his WeWork stock at a $47 billion valuation.

August 2019. WeWork files its S-1. Investors read the numbers: a company hemorrhaging cash, with long-term lease obligations and short-term revenue. The CEO was paid $5.9 million for the trademark "We" that he personally owned.

September 2019. The IPO gets postponed. Valuation drops below $10 billion. Neumann steps down.

A decade of growth. Gone in weeks.

The hindsight narrative now paints Neumann's behavior as obviously problematic. But SoftBank had invested billions. Top-tier VCs had poured money in. The business model was always flawed. Long-term leases without long-term revenue. These facts were available the entire time. The narrative just kept them invisible until the S-1 forced everyone to look.

Theranos and the Decade-Long Story

For over a decade, Elizabeth Holmes convinced investors that Theranos had revolutionized blood testing. The Edison machine could supposedly perform 200+ tests using just drops of blood.

October 2015. John Carreyrou's Wall Street Journal investigation exposes that Theranos wasn't using its own machines for most tests. The technology didn't work.

January 2016. CMS reports "immediate jeopardy to patient health and safety." Patients had been misdiagnosed with conditions from diabetes to cancer.

Holmes had crafted a personal story. Stanford dropout. Young female founder challenging a stagnant industry. This story was more powerful than technical verification. Investors who couldn't examine how the technology actually worked should have walked away. They didn't.

The question isn't why people didn't see through it. The question is what allowed narrative to override verification for a decade.

Why We're Especially Bad at This

Product leaders face particular pressure to construct tidy narratives. Teams expect clear explanations. Stakeholders want confident direction. Nobody gets promoted for saying "I'm not sure why that worked."

A B2B SaaS company launches a feature on Tuesday. By Friday, daily active users jump 23%. The product team builds a narrative: the feature is driving engagement, this validates our roadmap. They get budget to double down.

Six weeks later, engagement drops below baseline.

What actually happened: their biggest competitor had a three-day outage that same Tuesday. Fiscal year-end drove a seasonal spike. An industry conference had just ended. None of these had anything to do with the feature.

The reverse happens constantly too. Engagement drops, teams blame internal failures and implement sweeping changes, when the real cause was a competitor's aggressive discounting or a regulatory shift.

Here's what nobody wants to say: most of the time, we don't actually know what caused what. Systems are too complex. Multiple factors interact in ways we can't untangle. But we have to provide explanations anyway.

So we construct narratives that feel true. We connect data points into causal chains. We make strategic bets based on made-up models.

Messy reality doesn't get funding. Clear narratives do.

Criticizing Your Predecessor (And Why It's Usually Unfair)

A nasty form of this plays out during leadership transitions. A new executive joins, assesses the situation, and delivers their verdict: the previous leader made obvious mistakes.

The new CTO calls the technical debt "inexcusable." The incoming CPO wonders how anyone prioritized those features. They're evaluating past decisions with present information, stripped of original context.

That monolithic architecture was built when the company had 12 engineers and needed to ship in 6 months to survive. The "wrong" priorities were responses to a competitor threat that no longer exists because those priorities worked.

New leaders build credibility by criticizing predecessors. Constructing narratives where problems are obvious and solutions simple. The previous leader becomes a convenient explanation for everything wrong.

The truth: most decisions that look wrong in hindsight were reasonable given the information available at the time. The constraints that shaped them (budget, team capability, market conditions) are invisible to someone reading outcomes backward.

The best leaders assume predecessors were competent people making reasonable decisions under different constraints. They ask "what did they know that I don't?" before asking "what did they get wrong?" They recognize that in three years, someone will judge their decisions with the same unfair hindsight.

What You Can Actually Do About This

We can't eliminate these biases. They're hardwired. The goal is building systems that make them visible.

  1. Document predictions before they happen. Write specific predictions with timestamps before launching anything significant. "We expect 23% take-rate in this segment for these three reasons." This creates a baseline you can't rewrite later.

  2. Run pre-mortems, not just post-mortems. Assume your initiative failed spectacularly. Write down why before launch. Post-mortems are especially vulnerable to both biases. Pre-mortems force you to articulate risks you'll later claim were "obvious."

  3. Separate skill from luck explicitly. Your product gets viral traction? Don't conclude you've found a formula. Run it 20 more times. Most "successful patterns" are luck that narrative fallacy turned into strategy. By some estimates, 70% of features fail regardless of execution quality.

  4. Challenge causal stories systematically. Every time someone proposes a causal explanation, ask: What else could explain this? How would we test it? What evidence would prove it wrong? Most causal stories fall apart under basic scrutiny.

  5. Make uncertainty acceptable. Quantify uncertainty explicitly: "We're 40% confident this will work." Make "I don't know" and "that was luck" acceptable phrases. Teams that punish uncertainty get confident narratives built on nothing real.

  6. Track prediction accuracy. Keep a running log of predictions versus outcomes (a minimal sketch follows this list). If you predicted ten things and got three right, you're not a good predictor. You got lucky three times.
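
To make items 1, 5, and 6 concrete, here's one minimal sketch in Python. Nothing in it comes from a real tool: the `PredictionLog` class, the field names, and the choice of a Brier score are all illustrative assumptions; a shared spreadsheet with the same columns works just as well.

```python
# prediction_log.py -- illustrative sketch of practices 1, 5, and 6.
# All names here (Prediction, PredictionLog, brier_score) are made up
# for this example, not from any library mentioned in the article.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Prediction:
    claim: str            # a specific, falsifiable statement
    confidence: float     # stated up front, e.g. 0.40 for "40% confident"
    rationale: str        # the reasons you believe it *now*, not later
    made_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )                     # timestamp: the baseline you can't rewrite
    outcome: bool | None = None  # filled in later; None = unresolved


class PredictionLog:
    def __init__(self) -> None:
        self.entries: list[Prediction] = []

    def predict(self, claim: str, confidence: float, rationale: str) -> None:
        self.entries.append(Prediction(claim, confidence, rationale))

    def resolve(self, claim: str, outcome: bool) -> None:
        for p in self.entries:
            if p.claim == claim:
                p.outcome = outcome

    def hit_rate(self) -> float | None:
        resolved = [p for p in self.entries if p.outcome is not None]
        return sum(p.outcome for p in resolved) / len(resolved) if resolved else None

    def brier_score(self) -> float | None:
        # Mean squared gap between stated confidence and what happened.
        # 0.0 is perfect calibration; 0.25 is what coin-flip guessing scores.
        resolved = [p for p in self.entries if p.outcome is not None]
        if not resolved:
            return None
        return sum((p.confidence - p.outcome) ** 2 for p in resolved) / len(resolved)


log = PredictionLog()
log.predict(
    "Feature X lifts DAU 23% within 30 days",
    0.40,
    "onboarding friction drops; competitor gap; beta feedback",
)
log.resolve("Feature X lifts DAU 23% within 30 days", False)
print(log.hit_rate(), log.brier_score())  # 0.0 0.16
```

The Brier score is a standard calibration measure: 0.0 is perfect, and 0.25 is what you'd score by calling everything a coin flip. A log that drifts above 0.25 is telling you your stated confidence is worse than admitting you don't know.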

The Difference Between Good and Lucky

The point isn't that all narratives are wrong. It's distinguishing between what happened and the story we tell ourselves about what happened.

When we fail to make that distinction, we confuse skill with luck, build strategies on false causal models, and make riskier decisions based on fake learnings.

The teams that succeed aren't the ones with the best narratives. They're the ones who build systems that account for their own cognitive biases. They document reasoning, test assumptions, and admit when they got lucky.

The alternative is what we're watching with AGI predictions, metaverse investments, and countless other strategic bets: bold timelines, massive capital, walking back, new narratives explaining why original predictions were "misunderstood."

Nobody's immune to this.

The difference is whether we build systems that account for these biases or pretend we're above them.
