
AI: Industrializing our Failure of Imagination

The automated tests software engineers write to catch bugs before release are plagued by a counterintuitive failure mode, rooted not in carelessness or incompetence but in a fundamental inability to imagine improbable scenarios.

Consider the Titanic. The ship featured genuinely advanced safety engineering: watertight compartments, remotely activated doors, a hull certified to stay afloat with up to four compartments flooded. By any reasonable checklist, it was prepared. The failure came from a scenario nobody had bothered to imagine: a long, glancing blow along the hull that compromised compartment after compartment in a single event. In hindsight, it is almost embarrassingly obvious—a ship sliding lengthwise against an iceberg is geometrically ideal for that exact damage pattern. But nobody imagined it, so nobody modeled it, so the safety system had a clean blind spot exactly where the ship was most vulnerable.

When reality arrived in that specific shape, the design wasn’t “partially correct.” It was structurally defeated.

Software engineers encounter this pattern daily. A system can pass every automated check and still fail catastrophically when reality arrives as a combination nobody conceived. The bug that takes down production is rarely a known risk the team decided to accept—it’s an interaction nobody modeled in the first place.
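
To make the pattern concrete, here is a minimal, hypothetical sketch in Python; the discount function and its business rules are invented for illustration. Every assertion passes and the suite is green, yet the combination nobody imagined ships anyway.

```python
def discount(price: float, is_member: bool, coupon: float = 0.0) -> float:
    """Apply a 10% membership discount, then a flat coupon, floored at zero."""
    if is_member:
        price *= 0.9
    return max(price - coupon, 0.0)

# The team tested each feature it imagined, in isolation:
assert discount(100.0, is_member=False) == 100.0               # no discount
assert discount(100.0, is_member=True) == 90.0                 # membership
assert discount(100.0, is_member=False, coupon=20.0) == 80.0   # coupon

# Nobody imagined a member stacking a coupon larger than the discounted
# price. The function silently returns 0.0, an item given away for free,
# and no test exists to say whether that was ever the intended behavior.
print(discount(10.0, is_member=True, coupon=15.0))  # 0.0, unchallenged
```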

In practice, the entire trillion-dollar software industry operates on an unspoken admission of this limitation. We trust “mature” software not because someone proved it correct, but because a large user base over many years is assumed to have accidentally exercised most possible scenarios, ironing out bugs through sheer combinatorial exposure. The real process is not “design it perfectly, then deploy.” It is “build it, try your best, then fix it as reality reveals what you missed.”

This iterative cycle—build, monitor, discover, repair—is not a quirk of software. It is how everything created by humans works: cars and trains, legal systems and democracies, nation-states and supranational entities. We are constantly watching for the scenarios we failed to anticipate, because we know in advance that we will have failed to anticipate some.

That humble, adaptive posture is now under threat. Not from AI itself, but from the way AI ecosystems are being designed, deployed, and culturally absorbed. We are building civilization-scale infrastructure that systematically removes the friction, the doubt, and the human oversight that make iterative correction possible—and we are doing it at a speed that leaves no room for reality to reveal the blind spots before the system becomes load-bearing.

This is not just an engineering problem. It is quickly becoming a social operating model, stoked by AI hype.

The Epistemological Fog of War

To understand why this matters at civilizational scale, we need a precise vocabulary for how knowledge fails, because not all failures of knowledge are the same. There are at least four distinct levels, and conflating them is itself a source of danger.

Inaccuracy is the simplest: we have the right question but the wrong answer. A financial model miscalculates interest rates. A weather forecast predicts sun and it rains. These errors are recoverable because the framework is intact—we know what we were trying to measure, and we can correct it.

Uncertainty is harder: it’s about things we know we don’t know. A general knows enemy forces are in the field, but not their strength and position. A doctor knows a disease has multiple possible causes, but hasn’t yet identified which one. The right response is to hedge, gather information, and preserve optionality.

Unimaginability is deeper: we don’t know what we don’t know. The variables that will defeat us are not merely hidden; they are outside the conceptual space we are currently capable of searching. The Titanic’s designers weren’t uncertain about glancing blows. They had never conceived of them. The scenario existed in a region of possibility their mental models could not render. But those engineers at least knew they were building against unknowns—that is why they designed safety margins in the first place.

Imperviousness is the most dangerous level: we don’t even know that we don’t know. The fog itself is invisible. The model feels complete, the confidence total, and the very question “what might I be missing?” never arises—not because the answer is hard, but because the question is inconceivable. This is not humble ignorance groping in acknowledged darkness. It is the Dunning-Kruger intersection, where lack of expertise meets certainty, and the resulting conviction is structurally sealed against correction.

The central argument of this essay is that modern AI ecosystems are institutionalizing a closed-system mindset that operates at the first two levels, remains blind to the third—and, worse, actively manufactures the fourth. Organizations deploy AI as if uncertain, open-ended reality were fully modelable, optimizable, and controllable. The outputs look authoritative. The processes feel efficient. But the capacity to ask “What are we missing?”—the very capacity that distinguishes the third level from the fourth—is being systematically dismantled. And the dismantling is self-reinforcing: the less of that capacity remains, the less visible its absence becomes—which makes the case for further dismantling all the more compelling.

The name for the resulting condition is epistemic debt: decisions scaled without preserving enough human, institutional, and methodological friction to detect crucial considerations before failure cascades. Like financial debt, it is invisible in good times and catastrophic under stress.

Why the Map Is Always Incomplete

The philosophical vocabulary for these problems is older and sharper than most technologists realize.

Traditional epistemology asks how we acquire knowledge. By contrast, its modern frontier—agnotology, the study of culturally produced ignorance—asks a more unsettling question: how do we manufacture ignorance? Agnotology treats the unknown not as a passive void but as an active product. Institutions do not merely fail to notice what they are missing; they construct blind spots to protect paradigms, preserve comfort, and maximize efficiency. When a complex system encounters messy, unquantifiable reality, it rarely updates its worldview. Instead, it forces reality to fit a simplified model. The unknowns are kept unknown by design, because acknowledging them would introduce friction the system is optimized to eliminate.

In the philosophy of science, Kyle Stanford formalized this dynamic as the Problem of Unconceived Alternatives. Stanford observed a humbling historical pattern: in every era, the most capable minds consistently failed to conceive of the very theories that would eventually replace their own. A dominant paradigm is not merely the best available explanation; it is the only explanation the current cognitive architecture can render. Our models of reality compete against a vast, invisible space of alternatives we cannot yet imagine. We are not just missing puzzle pieces; we are unaware of the shape of the board.

This matters because modern decision-making routinely conflates two fundamentally different things: risk and Knightian uncertainty. Risk means the outcomes are unknown but the probabilities are known—like rolling a six-sided die. Knightian uncertainty, formalized by economist Frank Knight in 1921, means both the outcomes and the underlying mechanisms are unknown. You cannot assign a probability to a variable you cannot imagine. The greatest epistemic trap of the current era is treating Knightian uncertainty as if it were calculable risk—assuming all chaotic systems can eventually be parameterized with enough data.
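
The asymmetry is easy to state in code. The following is a sketch of the distinction, not a formalism, using the die example from above:

```python
import random

# Risk: the full outcome space and its probabilities are known in advance.
die = [1, 2, 3, 4, 5, 6]
expected = sum(face * (1 / 6) for face in die)  # 3.5, computable a priori

# More data genuinely helps here: simulation converges on the true value.
rolls = [random.choice(die) for _ in range(100_000)]
print(expected, sum(rolls) / len(rolls))  # ~3.5 and ~3.5

# Knightian uncertainty: there is no `die` list to iterate over. The
# defeating outcome is not a low-probability entry in the known space;
# it is an outcome that was never in the list at all. No volume of
# sampling from the known space can surface it.
```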

When societies push forward without acknowledging this incompleteness, they expose themselves to what Nick Bostrom calls crucial considerations: hidden variables that, once discovered, completely reverse the value of actions already taken. The bridge you are building to salvation may be paving the road to ruin, but you cannot know this if your system is designed to suppress the question.

The philosophical consensus is clear: the epistemological fog of war cannot be defeated by gathering more data or building faster engines. It demands a shift from predictive epistemology—the ambition to perfectly forecast the future—to adaptive epistemology: systems rooted in humility, robustness, and the preserved capacity to pivot when the unimaginable arrives.

If the map is always incomplete, why do we so reliably believe it isn’t?

Why People Feel Certain Anyway

The answer is built into our cognitive machinery. The same mechanisms that make us functional in everyday life make us catastrophically overconfident at the edge of our competence.

The Dunning-Kruger effect, documented by David Dunning and Justin Kruger in 1999, describes a brutal paradox: the skills required to evaluate your own competence are exactly the same skills you lack when you are incompetent. When a person first encounters a complex field, they quickly acquire a thin veneer of jargon and a few basic concepts—and when they go from knowing nothing to knowing something, the subjective rate of learning feels exponential. This propels them to the peak of what is colloquially called “Mount Stupid”—the point where confidence is at maximum while actual ability remains near the floor. From that summit, the discipline looks simple. The novice believes this because they literally lack the vocabulary to perceive the complexity.

True expertise is messy—riddled with edge cases, competing theories, and deep uncertainty. The novice’s mental model is a clean, deterministic cartoon, and they mistake it for the territory itself.

This overconfidence is amplified by the illusion of explanatory depth. Psychological experiments have consistently shown that when people are given easy access to answers—via the internet, a reference text, or a tool—their self-assessed competence skyrockets. They do not merely acknowledge they can find the answer; they report feeling as though their own internal understanding has increased. The brain conflates the fluency of reading a solution with the mastery required to generate it. Because the answer was obtained without struggle, the mind absorbs it into identity: “I read it easily, therefore I understand it deeply.”

The mechanism that locks this delusion in place is a failure of imagination at the individual level. To doubt yourself, you must be able to envision a scenario where you are wrong. You must be able to conceptualize variables you haven’t accounted for. But if your mental model is oversimplified and your confidence is inflated by fluency effects, your imagination flatlines. The novice doesn’t see a puzzle with missing pieces; they see a finished puzzle that happens to be very small.

These cognitive mechanics are not new. They have powered catastrophe before, at civilizational scale.

The Empires of Certainty

The history of catastrophic human folly is rarely written by those who know they are doing evil; it is almost exclusively authored by those who are absolutely certain they are doing right.

When we place the Victorian imperialist and the modern Silicon Valley evangelist side by side, we are not looking at a historical coincidence; we are looking at a continuum. Both represent a group sealed within a worldview monoculture—a self-reinforcing ecosystem of shared assumptions and shared blind spots. Each stumbled upon a massive technological advantage—the industrial revolution for one, the digital revolution for the other—and mistook that advantage for omniscience. The resulting catastrophes stem from the same epistemological failure: the inability to conceive that their models of reality are incomplete, and the hubris to impose those models on the rest of the world.

The mechanism is identical. Victorian cartographers looked at millennia-old ecosystems of deep Knightian uncertainty—shifting alliances, religious nuances, complex resource-sharing arrangements—and reduced them to a two-dimensional ledger of straight lines. They believed that if they built the bureaucracy, reality would conform. Today, technology companies look at the deeply uncertain, friction-heavy processes of law, medicine, engineering, and art, and insist these can be fully captured by algorithms on a startup timeline—millennia of institutional depth treated as a scaling problem solvable between funding rounds. They draw straight computational lines across the complex map of human expertise, assuming that if the output looks right, the underlying reality has been mastered.

The moral posture is identical too. The British justified their cartographic violence through paternalism: they alone possessed the steam engine and the telegraph, therefore they alone were equipped to dictate the future. They could not conceive that other cultures possessed intrinsic value or alternative ways of knowing. Today, a handful of executives and engineers—insulated within the same kind of epistemic monoculture—have appointed themselves architects of humanity’s future, convinced that with enough compute power, they can “solve” civilization. It is The White Man’s Burden repackaged: the belief that the rest of the world is a passive substrate waiting to be optimized by their superior logic.

The outcome pattern is identical. Simplification scales first; consequences arrive later. The legacy of Victorian hubris is written in the geopolitical instability, trans-generational trauma, and endless conflicts born of artificial borders. When simplistic models collided with complex reality, they broke the world.

The digital revolution has not created a new failure mode. It has industrialized an ancient one.

From Local Bias to Systemic Fragility

Here is where the three lenses converge into a single mechanism. Philosophy tells us the map is always incomplete, and that systems actively produce the ignorance that keeps it that way. Psychology tells us individuals will reliably mistake fluent access for deep understanding, especially when friction is removed. History tells us this combination, when backed by technological advantage and a sense of civilizational mission, produces catastrophe at scale.

Today, generative AI has created the conditions for all three dynamics to operate simultaneously, at unprecedented speed and global reach.

The critical failure mechanism I want to focus on is the eradication of evaluative friction. Historically, expertise has been forged in friction: years of failure, debugging, edge cases, and peer review. This friction is not a byproduct of learning; it is the learning. It is how minds build internal models capable of navigating uncertainty—of sensing when something is missing, even before they can name what it is.

A century ago, G.K. Chesterton articulated the principle that should govern every decision to remove such friction-generating structures: do not take down a fence until you understand why it was built. Peer review, apprenticeship, regulatory oversight, due diligence—these are fences. They were erected by generations who discovered, often through catastrophe, that these specific barriers were load-bearing. The culture of AI-driven disruption treats them as inefficiencies to be swept aside—but the inability to see why a fence exists is not evidence that it serves no purpose; it’s just evidence that you don’t understand the system it protects.

Generative AI eliminates these fences by design. When a user inputs a prompt, the system does not push back; it does not demand understanding of the underlying logic. It complies, instantly generating polished output. The result is a dangerous substitution: access to a tool is confused with competence in a domain. Because the code, the contract, or the strategy is generated without struggle, the user is deprived of the contextual map needed to understand how and why it might fail. The apprenticeship model—where experienced practitioners catch failures of imagination before they reach production—is replaced by a solitary user talking to an infinitely compliant black box.

This would be dangerous enough as an individual failure mode. But the substitution is not happening organically—it is being institutionally driven. Technology companies are building trillion-dollar valuations on the premise of deep organizational adoption: AI not as a supplement to existing departments, but woven into and in place of them. Corporate executives, in turn, are laying off domain experts and restructuring entire divisions around AI-generated workflows, because the economics are compelling and the narrative rewards it. Both sides of this transaction have powerful incentives to dismantle the fences Chesterton warned about—and neither has an incentive to ask what those fences were protecting.

The iterative cycle we started from—build, monitor, discover, repair—depends on builders who understand what they built well enough to recognize when reality reveals a flaw. When the builder never understood the system, the cycle breaks: there is no mental model to update, no intuition to trigger the alarm.

This turns the Dunning-Kruger effect from a passive cognitive bias into an actively powered engine. The peak of Mount Stupid is no longer an annoying waypoint on the learning curve; it is a heavily armed staging ground. The user’s confidence is inflated by the fluency of the output, their critical doubt suppressed by its authoritative formatting, their imagination for failure flatlined because the tool hides the void behind polished text.

This is the “God-Prompt” fallacy—though the fallacy is subtler than it first appears. The biggest danger is not that the machine will fail to execute your intent; it may execute it with flawless precision. The danger is that your intent is incompetent. A prompt is only as good as the expertise behind it: a financial algorithm specified by someone who does not understand tail risk will faithfully encode that ignorance; a legal contract drafted without courtroom experience will be a pristine expression of the drafter’s blind spots. The machine does not add competence. It amplifies whatever competence—or incompetence—the user brings, and delivers both with the same polished, authoritative confidence.
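
Here is a hypothetical sketch of the fallacy in code. Suppose the prompt was “compute my portfolio’s 99% one-day value-at-risk”. The function below is exactly what was asked for, flawlessly executed, and the normal-distribution assumption it rests on (the classic blind spot of someone who has never met tail risk) is encoded without a trace:

```python
import statistics

def var_99_normal(daily_returns: list[float]) -> float:
    """99% one-day value-at-risk, assuming normally distributed returns."""
    mu = statistics.mean(daily_returns)
    sigma = statistics.stdev(daily_returns)
    z_99 = 2.326  # 99th-percentile z-score of the standard normal
    # Loss threshold we expect to exceed on ~1 trading day in 100...
    return z_99 * sigma - mu

# ...if and only if returns are normal. Real market returns have fat tails,
# so the "1-in-100-day" loss arrives far more often than the model promises.
# The machine executed the intent perfectly; the intent encoded the
# specifier's ignorance, and the polished output gives no hint that the
# distributional assumption was ever a choice.
```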

In the taxonomy introduced earlier, this is the fourth level—imperviousness—now manufactured at industrial scale. The user is not uncertain about edge cases they haven’t tested; they are unaware such edge cases could even exist. The polished output seals the loop: confidence in, confidence out, with no friction anywhere in the circuit to trigger the question the third level still knows how to ask: what might I be missing?

The true danger arrives when this localized hubris scales globally. We are moving toward a world where millions of individuals and organizations deploy AI-generated artifacts—code, legal briefs, policies, medical protocols—into production with minimal human oversight. Each artifact might function well under optimal conditions—but reality is rarely optimal.

Every unreviewed piece of AI-generated code, every legal brief filed without a lawyer verifying its precedents, every corporate policy drafted by an algorithm and approved by a non-expert adds a micro-fracture to societal infrastructure. This is the epistemic debt: structural brittleness invisible in normal operation, catastrophic under stress.

This is how the individual and the institutional failure modes fuse into a single, self-reinforcing mechanism. The accumulating debt makes expertise look unnecessary: the AI-generated outputs appear to work, and the failures they contain are invisible to anyone without the domain depth to detect them. So organizations cut deeper—lay off the reviewers, flatten the apprenticeships, automate the oversight. But each cut removes exactly the people who could have detected the next layer of debt. The debt therefore grows less visible, which makes further cuts more rational, which makes the debt grow faster still. It is Dunning-Kruger at institutional scale: an organization systematically shedding the competence required to perceive its own incompetence.

The crisis will arrive when these unverified systems begin to interact with each other under pressure. When AI-generated financial logic collides with an AI-generated (de)regulatory framework during a market panic, the resulting cascade will be un-debuggable—because no human mind was ever deeply involved in constructing the logic. When the epistemological fog finally descends, there will not be enough experts with the contextual intuition to navigate it, because we decided expertise was an unnecessary bottleneck.

The epistemological fog of war does not clear because you have a faster machine; the faster the machine, the faster you drive into a wall you never even thought of.

“But AI Is Just a Tool”

At this point, three objections deserve a serious answer.

“AI is just a tool. Hammers don’t cause epistemic debt.”

Half right. AI is indeed a tool, and responsibility for its outputs rests with humans. But a hammer does not generate outputs that non-experts cannot distinguish from expert work. A hammer does not suppress the user’s sense that they might be wrong. The unique danger of generative AI is not that it automates tasks, but that it automates the aesthetic of competence while stripping away the feedback loops that produce actual competence. A tool that systematically deceives its user about their own capability is not a neutral instrument.

“Human experts also fail. They also have blind spots.”

Absolutely true. Every catastrophe in the historical record was produced by human experts, not by machines. But this strengthens the argument. If humans, even after years of training and institutional oversight, still fail to imagine crucial scenarios, then the correct response is not to remove the training and oversight, but to augment them. The current trajectory does the opposite: it replaces expert review with algorithmic generation, on the premise that speed and scale compensate for lost depth. They do not. Speed and scale compensate for known inefficiencies. They amplify unknown blind spots.

“Scale requires automation. You can’t have human review of everything.”

This is the strongest objection, and it contains a genuine constraint. Not every AI-generated output can or should receive deep human review. The question is not whether to automate but where to preserve friction. The argument here is not anti-automation; it is against the indiscriminate removal of evaluative friction in domains where undetected failure is catastrophic. We do not demand that every email be peer-reviewed. We do demand that every bridge blueprint be structurally certified. The principle is not “slow everything down” but “know where your load-bearing structures are, and do not build them on unaudited foundations.”

Building Epistemic Shock Absorbers

If the diagnosis holds, what can we actually do? The answer is not to stop using AI. It is to design systems that preserve the capacity to detect and recover from the failures of imagination that no tool—however powerful—can prevent.

Institutional shock absorbers. High-stakes domains—medicine, law, infrastructure, finance—need mandatory human-in-the-loop review that is structurally enforced, not merely recommended. This means adversarial audits and red-teaming: dedicated teams whose job is to ask “what are we missing?” before deployment, not after failure. It means incident reporting and near-miss learning loops modeled on aviation safety, where every close call is catalogued, analyzed, and fed back into design—a feedback loop which could certainly be AI-augmented.

Methodological shock absorbers. Organizations deploying AI at scale need formal uncertainty registers: maintained documents that catalog what the system does not know, what assumptions it rests on, and what scenarios have not been tested. Every deployment decision should pass through a “what are we missing?” checkpoint—not as ritual, but as a structured process with documented outputs. Stress-testing should cover not just known risks but deliberately constructed edge cases designed to probe the boundaries of what the system has never encountered.
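
As a sketch of how such a register could be made machine-checkable rather than ritual (the field names and the blocking rule are assumptions for illustration, not a standard):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class UncertaintyEntry:
    description: str                 # what the system does not know
    assumption: str                  # what we are assuming in its place
    untested: list[str] = field(default_factory=list)  # scenarios never probed
    owner: str = ""                  # who must revisit this entry
    review_by: date | None = None    # when it must be revisited

register = [
    UncertaintyEntry(
        description="Model has never seen traffic from region X",
        assumption="Region X behaves like region Y",
        untested=["holiday load spikes", "non-Latin-script inputs"],
        owner="ml-platform",
        review_by=date(2026, 1, 15),
    ),
]

# The "what are we missing?" checkpoint, enforced mechanically: a release
# is blocked while any unknown is unowned or has no scheduled review.
stale = [e for e in register if not e.owner or e.review_by is None]
assert not stale, "Unowned or unscheduled unknowns: deployment halted"
```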

Cultural shock absorbers. The deepest intervention is cultural. Organizations and societies need to reward calibrated confidence—the honest admission of uncertainty—rather than punishing it as weakness or indecision. Apprenticeship and domain depth must be preserved as institutional values, not deprecated as inefficiencies. The grueling, friction-heavy process of acquiring expertise is not a cost to be optimized away. It is the mechanism by which human minds build the internal maps that let them sense when something is missing.

The Real Choice

The choice before us is not “to AI or not to AI.” That framing is a distraction, and usually a deliberate one. The real choice is between two epistemological postures.

One posture treats reality as a closed system—a problem with a known boundary to be solved, optimized for, and scaled. In this frame, friction is waste, expertise is a bottleneck, and the right prompt can substitute for understanding. This is the posture of the Victorian mapmaker drawing straight lines across a continent he has never walked. It produces fluency theater and the appearance of efficiency: systems built quickly that look masterful and feel efficient while accumulating invisible structural debt.

The other posture treats reality as an open system—permanently incomplete, permanently capable of producing the unimaginable. In this frame, friction is a feature, expertise is insurance, and the most important question any system can ask is “What have we failed to conceive?” This is adaptive epistemology: not the fantasy of perfect prediction, but the disciplined practice of preserving the capacity to pivot when reality arrives in a shape nobody expected.

Civilizations do not fail only by being wrong. They fail by losing the capacity to notice what they have not yet imagined.

The Titanic did not sink because its engineers were careless. It sank because the scenario that defeated them existed in a region of possibility their models could not render—and nothing in their system was designed to ask whether such a region existed.

We are now building systems of immeasurably greater consequence, at immeasurably greater speed, with immeasurably less friction. The question is not whether we will encounter the unimaginable. The question is whether, when it arrives, we will have preserved enough capacity to recognize it, respond to it, and learn from it—or whether we will have spent our time accumulating debt we cannot see and dismantling the expertise we will need when it comes due.