
Disinformation: Why Facts Don’t Land

Eight internal blockers behind stubborn beliefs.

We all have a story of “I showed them the facts… nothing changed.” This essay makes a narrow claim about internal blockers—the things happening inside a person’s head that make good evidence bounce. We focus specifically on the internal psychology of disinformation’s “stickiness,” not the algorithms, laws, or botnets that enable or amplify it. If you remove whichever of these blockers are active in a given exchange, resistance tends to drop and acceptance becomes psychologically easier.

1) Cognitive Bandwidth Availability

What it is. Cognitive bandwidth is the limited pool of attention and working memory we have for processing information. When load is high (too much, too fast, too complex), people default to shortcuts and avoid costly updates. Decades of work on working memory and cognitive load show that holding and manipulating several novel elements at once is hard; structure and timing matter. (PMC)

Bandwidth availability doesn’t depend only on a person’s nominal cognitive capacity (which itself varies with factors intrinsic to the individual), but also on their current cognitive load. Someone living under heavy stress will have little available bandwidth for processing new information, even if their nominal capacity is relatively high.

Why it blocks facts. If your correction demands several mental jumps (new terms, unfamiliar sources, nested logical steps), many readers will be unable to engage. Dual-process research (System 1 and System 2) frames this as low-elaboration processing when ability or motivation is low. (ScienceDirect)

What helps. Aim for one-jump paths: headline → short TL;DR → verifiable artifact. Reduce ambiguity first; then add depth for those who want it. (Cognitive load theory consistently finds that simplifying the presentation improves comprehension.) (SpringerLink)

2) Belonging (social alignment)

What it is. Social identity theory shows we derive part of “who I am” as an individual from “which group I belong to.” Messages from in-group voices feel safer than the same message from out-group voices; in many settings, in-group messengers persuade more. (Christos A. Ioannou)

Why it blocks facts. If accepting your evidence looks like stepping away from “my people,” the status cost can feel too high—even in private. (MSU Digital Commons)

What helps. Route corrections via in-group validators and use value-aligned framing (“because we care about X…”). This isn’t spin; it’s removing a social penalty unrelated to truth. (PubMed)

3) Reputation Preservation (audience cost)

What it is. People manage “face” (public self-worth) and often silence dissent when they feel outnumbered—the spiral of silence. They may privately agree and publicly resist if backing down feels humiliating. Classic sociology (Goffman), political economy (preference falsification), and media effects converge here. (Taylor & Francis Online)

Why it blocks facts. The same person who concedes in private messages can double down on the timeline. Publicness—not identity—drives the resistance. Also, unanimity effects matter: one visible dissenter sharply reduces conformity relative to a unanimous majority. (Vnecas)

What helps. Lower visibility (small rooms, private messages), provide face-saving scripts (“Happens to all of us; here’s the short version”), and break false unanimity early with one civil, credible dissent. (Wiley Online Library)

4) Identity Preservation (identity-protective cognition)

What it is. When information threatens who we are or what our group stands for, people unconsciously credit confirming evidence and discount disconfirming evidence well beyond ordinary bias. This is motivated reasoning and, in public-policy domains, identity-protective cognition. (Frank Baumgartner)

Why it blocks facts. Even in private, with time and clarity, identity-threatening corrections get resisted. In some studies, the stronger the identity cue, the stronger the “motivated skepticism.” (Wiley Online Library)

What helps. Identity-safe frames (explaining why the very act of updating one’s beliefs is aligned with the group’s values), incremental pivots, and—when possible—trusted in-group communicators. (SSRN)

5) Authority Cues (obedience triggers)

What it is. Titles, badges, “official” aesthetics, or moderator/admin power activate authority heuristics that short-circuit scrutiny. Milgram’s lab work shows how far obedience can go; persuasion models (ELM/HSM) formalize source credibility as a powerful non-content cue, especially under low elaboration. (Columbia University)

Why it blocks facts. If the falsehood carries strong authority signals, many will accept it by default—or treat counter-evidence as lesser. (richardepetty.com)

What helps. Counter-authority in-thread (named expertise, time-stamped artifacts), and teach readers to recognize authority theater as a cue—not a conclusion. (richardepetty.com)

6) Moralized Contempt (hate/disdain)

What it is. When attitudes become moral convictions or sacred values, out-groups get dehumanized and compromise feels like betrayal. This supercharges resistance and increases willingness to reject “procedural” solutions. Coupled with affective polarization, contempt hardens identity boundaries. (compass.onlinelibrary.wiley.com)

Why it blocks facts. If hating the out-group feels morally righteous, the very act of rejecting their evidence becomes a virtue signal. (Sarah Dimakis)

What helps. De-moralize the exchange (tone and language), re-humanize with concrete exemplars, and use value-aligned reasons for updating. (ResearchGate)

7) Fluency/Familiarity Bias (repetition)

What it is. Repetition increases processing fluency; fluent statements feel truer (the illusory truth effect). Strikingly, knowledge doesn’t fully protect; even people who objectively should “know better” are nudged by repetition. (ScienceDirect)

Why it blocks facts. If the falsehood has been seen everywhere, it enjoys a fluency head-start over any single correction. (SAGE Journals)

What helps. Use truth sandwiches (lead with the fact, briefly flag the myth, close with the fact again) and stable phrasing for corrections across contexts, and prebunk where possible (warn + alternative causal story) to blunt first-exposure advantages. (Center for Climate Change Communication)

8) Alternate Reality (epistemic closure)

What it is. In an echo chamber, outside sources are not merely absent—they’re actively discredited. New evidence from “their” ecosystem is rejected on arrival. (This differs from an epistemic bubble, where other voices are just missing.) Empirical and theoretical work parses the difference and its consequences. (PhilArchive)

Why it blocks facts. Even perfect syllogisms fail if premises and trusted sources don’t overlap. Cross-ecosystem links bounce. Large-scale media-network analyses show how asymmetries in media ecosystems can stabilize these closures. (OAPEN Library)

What helps. Deliver the same content inside the trusted ecosystem (in-group voices, familiar venues), and agree upfront on what evidence would count (shared priors, time-stamped timelines). (PhilArchive)

Quick Field Tests (A/B testing to diagnose the active blocker)

What about “ego”?

Stubbornness from “I can’t be wrong” is real—but in practice it decomposes into identity defense (being right is who I am), face costs (I said it publicly), self-as-authority (I trust me over you), metacognitive miscalibration (overconfidence), moralized contempt (updating feels like betrayal), epistemic closure (your sources “don’t count”), and fluency (I’ve repeated it so often it now feels true).

Why not add confirmation bias, reactance, or overconfidence as separate mechanisms?

Classic findings on overconfidence and Dunning–Kruger, plus work on reactance (resisting controlling language), sit neatly inside that composite. If someone can show a residual “ego effect” that survives isolating the eight, we’ll update. (PubMed)

The practical promise (and a gentle falsifiability note)

The working claim is straightforward: in real conversations, resistance to sound evidence can be explained by one or more of these eight internal blockers. If you neutralize the active ones (the diagnosis matters), most of the resistance falls away and acceptance becomes the least costly path. If you can demonstrate a ninth internal blocker with independent causal bite—or show that one of these eight has none—we’ll revise the map.

Minimal playbook to try today:

  1. Start private if you can (reputation). (Wiley Online Library)
  2. Borrow an insider if you can (belonging/identity). (PubMed)
  3. Lead with a one-jump TL;DR (bandwidth). (SpringerLink)
  4. Keep tone de-moralized (contempt). (compass.onlinelibrary.wiley.com)
  5. Use stable wording for corrections (fluency). (Center for Climate Change Communication)
  6. If they source-gate, deliver inside their ecosystem (closure). (PhilArchive)
  7. If it still stalls, you’re in identity—pivot to “updating = loyalty to our standards.” (SSRN)
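The playbook above is essentially a diagnose-then-remedy loop: match observed signals to a blocker, apply its remedy, and fall back to identity framing if nothing else matches. A toy sketch in Python, with entirely hypothetical signal names (these are illustrations, not validated diagnostic instruments):

```python
# Toy sketch of the playbook as a decision checklist.
# Signal names and remedies are hypothetical labels for illustration only.

PLAYBOOK = [
    ("public_thread",      "reputation", "move the exchange somewhere private"),
    ("outgroup_messenger", "belonging",  "route the correction via an in-group validator"),
    ("complex_correction", "bandwidth",  "lead with a one-jump TL;DR"),
    ("moralized_tone",     "contempt",   "de-moralize tone and language"),
    ("repeated_falsehood", "fluency",    "use stable wording / a truth sandwich"),
    ("source_gating",      "closure",    "deliver inside the trusted ecosystem"),
]

def diagnose(signals):
    """Return (blocker, remedy) pairs for each observed signal.

    If no signal matches, fall back to identity-protective cognition,
    mirroring step 7 of the playbook.
    """
    hits = [(blocker, remedy) for signal, blocker, remedy in PLAYBOOK
            if signal in signals]
    return hits or [("identity", "frame updating as loyalty to shared standards")]
```

The ordering of `PLAYBOOK` mirrors the numbered steps above: cheaper, lower-risk moves (privacy, messenger choice) are checked before the harder identity pivot.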

Sources (selected, non-exhaustive)