Gullible by Design: The Psychology Behind Humanity’s Latest Stupidity Crisis

    Abstract Red Portrait of a woman with Glitch Effect
    Photo by Lisa from Pexels

    The rise of weapons-grade ignorance in the deepfake digital age

    We’re living in the dumbest timeline. Not because people don’t know things, but because they’ll believe absolutely anything. All across the globe, people are falling for AI slop and conspiracy leftovers the way toddlers fall for peekaboo. It’s not just the fakery itself – it’s the speed at which people want to believe.

    The instinct isn’t to pause, to verify, to, at the very least, squint a bit at the pixels. No. The instinct is to hit share, to nod along, to add a pithy, “Yup, sounds about right.” As long as it validates their previous conviction, they’ll run it up the flagpole and salute it all day long.

    There was a meme that went around a couple of years ago that went something like, “Remember before the internet, we used to think lack of access to knowledge was the root cause of human stupidity. Well, now we know it wasn’t that.” The web gave us instant fact-checking power. It also gave us millions of people retweeting a blurry AI-generated image of the Pope in a Balenciaga puffer coat captioned, “This one is sooo powerful. It really moves me.”

    In an era where artificial intelligence can fabricate reality with unprecedented sophistication, humanity appears to be losing its most useful defense mechanism: critical thinking. The ability to question, analyze, and verify information before accepting it as truth and disseminating it as fact seems to be eroding at precisely the moment we need it most. From deepfake videos that dupe millions to conspiracy theories that gain traction faster than a pontiff genuflects with a collection plate, we’re witnessing what may be the most dangerous intellectual regression of the digital age.

    AI-generated image of Pope Francis in a puffer coat.
    🎭 The first viral AI Pope in a Balenciaga coat fooled millions in 2023.

    A New Age of Online Gullibility: Believe First, Justify Later

    The symptoms of this intellectual decline are everywhere. And once entrenched, it’s extremely difficult to shift. Consider a recent viral incident in which an obvious deepfake video showed Minnesota Governor, and former US vice-presidential candidate, Tim Walz dancing down an escalator in a t-shirt with “Fuck Trump” on it. Famous bro Joe Rogan almost wet himself with glee, reporting it as genuine to the world’s largest podcast audience. Instead of saying, “That looks phony,” like everyone with half a brain did, he said, “Wow, can you believe this guy?”

    His producer tried to fact-check him, but Rogan dug in, even going so far as to say that maybe it was made up, but that this is just the kind of thing Walz would do. Implying it’s fake, but it’s true anyway. So there.

    The numbers paint a troubling picture. Deepfake fraud incidents rose tenfold from 2022 to 2023, with 88% of cases in the crypto sector alone, while AI-powered videos posing as genuine footage have been going viral online ever since.

    Yet perhaps more concerning than the technology itself is how readily people accept what they see without a second thought. Here’s the root of the problem: we’ve built a culture that leaps to conclusions first and scrambles for evidence afterward. And it’s not hard to see why. For centuries, social hierarchies, cultural norms, and religions have trained populations to take all sorts of dubious claims on faith alone. Talking snakes and eternal souls, trickle-down economics and billionaire philanthropists. You didn’t need proof that these things were real; you just needed to believe the story the system spat out.

    Man Holding Silver Bitcoin Coin
    📈 88% of deepfake fraud in 2023 targeted the crypto sector.

    Photo by Crypto Crow

    Childhood Conformity Bred Grown-Up Suckers

    That mindset hasn’t disappeared. It’s migrated. Instead of scripture, we’ve got Facebook memes and TikTok edits. Grimly fiendish newsfeeds. Becky from high school landed a rich husband, so she must know what she’s on about, right? People are still wired to believe first – except now they’re believing that COVID vaccines contain microchips or that The Donald was an FBI informant and not, as the evidence certainly suggests, Epstein’s vice-head ball-coddler.

    When Trump says the bad press about him is a deep-state hoax, his supporters don’t say, “That doesn’t make sense.” Instead, they tap the sides of their noses and wink, adding, “He’s playing 5D chess again, that cheeky little genius.” Anything but the obvious explanation or the most reasonable conclusion. Occam’s razor now comes in colours with a lockable titanium blade-guard.

    AI has supercharged this gullibility. In the old days, a fake photo looked like your mom did it in MS Paint. Now you can churn out thousands of plausible proofs of whatever you want. A Pope in a rave outfit. Putin riding a dolphin. A South African pastor raising a “dead” man at a funeral – he sat up in the coffin, blinking myopically, while churchgoers screamed hallelujah – it would have been hilarious if it wasn’t so very sad to see.

    The real tragedy is that people want these things to be true. A sham video confirming your bias feels better than a true report that challenges it and forces you out of your comfort zone. If the deepfake aligns with your worldview, it’s not just believable, it’s almost irresistible.

    Born Yesterday: The Global Village Idiot

    This susceptibility to misinformation can be seen all over the world. During India’s 2024 elections – the world’s largest democratic exercise so far – researchers documented thousands of instances where voters shared obviously fabricated content about political candidates. Videos showing politicians making statements they never made circulated widely on WhatsApp, often accompanied by bold “This is AI” disclaimers that were ignored or even removed as the content spread.

    In Brazil, manipulated audio clips of Supreme Court justices allegedly discussing election interference gained millions of views before being identified as AI-generated. The sophistication of the recordings was less concerning than the speed with which they were accepted and amplified by users who were far more interested in confirmation than in verification.

    Indonesia provides perhaps the most striking example. During the 2024 presidential election, a deepfake video showing candidate Prabowo Subianto allegedly promising to ban certain religious practices exploded in popularity, sparking protests in several provinces. The video was crude – facial mapping was blatantly imperfect, and audio synchronization was way off – yet it influenced public opinion for days before being debunked.

    This pattern is consistent across cultures: people are increasingly willing to believe content that aligns with their preexisting beliefs, despite obvious signs of tampering.

    A Hand Holding a Cellphone with a WhatsApp Logo on the Screen
    🌍 WhatsApp is the world’s most influential misinformation platform, particularly in developing democracies.

    Photo by Anton

    South Africa: A Case Study in Digital Vulnerability

    South Africa offers a particularly illuminating example of how misinformation tends to exploit existing social divisions. In the context of SA elections, disinformation takes myriad forms, from misleading narratives about candidates and their policies to false claims about the voting process itself. The country’s complex social dynamics and historical tensions are fertile grounds for volatility and manufactured outrage.

    During the 2024 South African elections, concocted stories about electoral fraud spread rapidly through WhatsApp groups, particularly in townships where access to fact-checking resources is limited. One particularly egregious example involved a viral video purporting to show ballot stuffing in KwaZulu-Natal. The video, which was actually footage from a 2019 election in another country, was shared by thousands of users who failed to notice the foreign language on the ballot papers or question why the voting procedures differed from South African standards.

    Social media platforms, whether open or closed, were used by individuals and specific groups to organise and aid in the spread of unrest. During the July 2021 unrest, these forums served to advance various agendas, including establishing collaborative networks, strategically spreading misinformation and disinformation, and mobilising and celebrating looting and violence, as documented by South Africa’s Human Rights Commission.

    The speed with which false narratives spread through South African social networks demonstrates how confirmation bias overrides critical analysis. Users shared content not because they had verified its authenticity, but because it reinforced their existing political or social grievances.

    The Psychology of Wilful Credulity

    What makes this phenomenon particularly egregious is that the technology required to verify information has never been more accessible. Reverse image searches, video analysis tools, and fact-checking websites are available to anyone with an internet connection. Yet people consistently choose not to use these resources. This suggests that the problem isn’t technological but psychological. We appear to be witnessing a mass abandonment of intellectual rigour in favour of emotional satisfaction. The dopamine hit of having one’s worldview validated far outweighs the effort needed to verify information.
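    To see how low the technical bar for verification actually is, here is a toy version of the “difference hash” fingerprinting idea that reverse image search engines build on. This is a deliberately simplified, pure-Python sketch, not any real service’s code; a real pipeline would use a library such as Pillow to load and shrink actual image files, whereas here images are assumed to be small grids of grayscale values.

```python
def dhash(pixels):
    """pixels: a 2D list of grayscale rows. Returns a bit string:
    1 if a pixel is brighter than its right-hand neighbour, else 0.
    Brightness *gradients* survive re-encoding and re-brightening,
    which is what makes the fingerprint useful for near-duplicates."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append("1" if left > right else "0")
    return "".join(bits)

def hamming(a, b):
    """Number of differing bits: a small distance means the two
    images are almost certainly the same underlying picture."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 20, 30], [30, 20, 10]]
rebrightened = [[12, 22, 32], [32, 22, 12]]  # same image, brighter
unrelated = [[30, 10, 20], [10, 30, 20]]

h0, h1, h2 = dhash(original), dhash(rebrightened), dhash(unrelated)
print(hamming(h0, h1))  # 0 -> recognised as the same picture
print(hamming(h0, h2))  # 2 -> genuinely different content
```

    The point is not that readers should write this themselves; it’s that the machinery for “has this picture been seen before, in another context?” is simple, automated, and already wired into free tools.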

    Research in cognitive psychology offers some explanation for this behaviour. Confirmation bias – our tendency to seek information that confirms our existing beliefs while ignoring contradictory evidence – has been amplified by algorithms that feed users content similar to what they’ve previously engaged with. Which is why refreshing your feed only results in more of the same. Social media platforms have essentially weaponized our cognitive weaknesses for profit.
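    The feedback loop described above can be sketched in a few lines. This is a toy model, not any platform’s actual ranking code: it simply scores candidate posts by how much they overlap with topics the user has already engaged with, which is enough to show why a feed converges on more of the same.

```python
from collections import Counter

def rank_feed(candidates, engagement_history, k=2):
    """candidates: list of (post_id, set_of_topics).
    engagement_history: list of topic sets the user previously liked.
    Returns the k post ids with the most overlap with past likes."""
    liked = Counter(t for topics in engagement_history for t in topics)

    def score(post):
        _, topics = post
        return sum(liked[t] for t in topics)  # overlap with past likes

    ranked = sorted(candidates, key=score, reverse=True)
    return [post_id for post_id, _ in ranked[:k]]

history = [{"election", "fraud"}, {"election", "crime"}]
posts = [
    ("a", {"election", "fraud"}),  # strongly matches past engagement
    ("b", {"gardening"}),          # never engaged with -> buried
    ("c", {"election"}),
]
print(rank_feed(posts, history))  # ['a', 'c'] -> more of the same
```

    Even this crude scoring rule never surfaces the post the user hasn’t engaged with before, which is the algorithmic half of the confirmation-bias loop.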

    But there may be a deeper cultural component at play. The rise of conspiracy thinking and the rejection of expertise coincides with declining trust in traditional institutions, such as government, media, science, and academia. When people lose faith in these established authoritative sources, they become more susceptible to alternative narratives, regardless of their credibility.

    Carl Sagan standing in a model of the solar system.
    📚 Carl Sagan’s rule: “Extraordinary claims require extraordinary evidence.”

    Photo by Cornell University

    The Religious Connection: Faith-Based Reasoning in Secular Contexts

    There appears to be a correlation between religious upbringing and susceptibility to conspiracy theories and misinformation. This is not to suggest that religious people are less intelligent, but rather that certain patterns of thinking cultivated by faith-based belief systems may predispose individuals to accept extraordinary claims without extraordinary evidence, to paraphrase Carl Sagan.

    Religious training often emphasizes faith over empirical fact, the importance of revealed truth over discovered truth, and the notion that some knowledge is too sacred or complex for ordinary understanding. These cognitive habits, when applied to secular contexts, can make individuals more susceptible to conspiracy theories and less likely to demand rigorous proof for fanciful claims.

    Consider how conspiracy theories mirror religious narratives: they often feature hidden knowledge revealed only to the enlightened, powerful forces working in secret, and a cosmic battle between good and evil. For someone trained to accept religious mysteries on faith, accepting secular mysteries may feel intellectually familiar and comforting.

    This pattern is visible across cultures. In the United States, communities with high levels of religious adherence show higher rates of conspiracy theory belief. In parts of rural Africa and Asia, where traditional beliefs emphasize supernatural explanations for natural phenomena, populations show increased susceptibility to medical misinformation and election conspiracy theories.

    The Institutional Failure

    Educational systems worldwide have failed to adapt to the digital age. While schools teach students to question sources in traditional media, they have not adequately prepared them for an environment where anyone can create professional-looking content and distribute it globally within minutes.

    The conventional model of education, which emphasized memorizing information, has become obsolete in an age where information is infinite and verification skills are paramount. We need educational systems that prioritize critical thinking, source verification, and, crucially, healthy skepticism over rote learning.

    Media literacy programs exist in many countries, but they are often underfunded and treated as secondary to core subjects. This represents a fundamental misunderstanding of the modern information environment. In an age of deepfakes and algorithmic influence, the ability to distinguish between reliable and unreliable information is no longer a luxury – it’s an essential survival skill.

    Joe Rogan smoking weed during a podcast.
    🎧 Joe Rogan’s podcast reaches over 11 million listeners per episode, amplifying misinformation at scale.

    International Examples of Successful Digital Deception

    The global scope of this problem becomes clear when examining specific international cases where the trickery has triumphed. In the Philippines, fabricated videos showing opposition candidates making inflammatory statements influenced voting patterns in the 2022 elections. The technology was sophisticated enough to fool casual viewers but crude enough that careful analysis would have revealed the deception. Yet such analysis rarely occurred.

    In Nigeria, fake news about electoral violence spread faster than actual news about peaceful voting, creating a climate of fear that suppressed turnout in certain regions. Citizens shared unverified reports of attacks that never happened, demonstrating how quickly false narratives can create real-world consequences.

    Bangladesh provides another troubling example. During civil unrest in 2024, fabricated videos showing police brutality were shared millions of times before being identified as footage from other countries or entirely computer-generated. The emotional impact of these videos mobilized protesters, but the fraudulent nature of the evidence undermined the legitimacy of their concerns.

    The Technology of Deception Evolves

    The most visible use of AI in many countries was to create memes and content whose contrived origins weren’t disguised. They were often openly shared by politicians and their supporters during 2024’s global elections. This represents a rather disturbing shift from attempting to deceive about the artificial nature of content to normalizing its use as a political tool.

    The sophistication of current AI technology means that distinguishing between real and fake content requires increasingly specialized knowledge. While less than 1% of all fact-checked misinformation during the ’24 election cycles was AI content, according to Meta, the impact of that small percentage was disproportionately large due to its viral nature and emotional impact.

    More concerning is the democratization of these tools. Software that once required expert knowledge and expensive equipment is now available on your smartphone as an app. Anyone can create convincing fake audio, manipulate video, or generate realistic images with minimal or no technical skill. This has lowered the barriers to creating misinformation, while the skills needed to identify it remain scarcer and take far more conscious effort to apply.

    Woman Holding Burning Newspaper
    ⚖️ Meta says less than 1% of fact-checked misinformation in 2024 elections was AI-generated.

    Photo by Jhefferson Santos

    The Economics of Misinformation

    The financial incentives driving misinformation creation reveal another dimension of the problem. False stories often generate more engagement than true ones because they provoke stronger emotional responses. Social media algorithms, optimized for engagement rather than accuracy, amplify misleading content precisely because it keeps users scrolling, reacting, and sharing.

    This creates a pretty perverse economic incentive where truth becomes less profitable than fiction. Content creators have learned that outrageous claims drive more clicks than measured analysis, leading to an information ecosystem that rewards sensationalism over accuracy. Hence, you’re shown something that makes you feel, rather than something that makes you think.

    The rise of subscription-based platforms has also created echo chambers, since users naturally pay only for content that confirms their biases. This model makes challenging or correcting false beliefs economically counterproductive, and thus undesirable, for content creators who depend on subscriber loyalty.

    The Path Forward

    Addressing this crisis requires a multi-pronged approach. Educational systems must prioritize critical thinking and source verification as core competencies. Technology platforms must be held accountable for the content they amplify, even if they don’t create it. And individuals must take personal responsibility for verifying information before sharing it.

    Perhaps most importantly, we must acknowledge that this is fundamentally a cultural problem requiring cultural solutions. We need to cultivate a social norm where sharing unverified information is seen as socially irresponsible, similar to how attitudes toward drunk driving or smoking in public spaces have changed through sustained social pressure.

    The tools to combat misinformation exist; what we lack is the collective will to use them. Until verification becomes as automatic as breathing, and until intellectual humility becomes as valued as emotional certainty, we’ll remain vulnerable to those who profit from our gullibility.

    Conclusion

    The erosion of critical thinking in the digital age represents one of the most pressing challenges facing modern democracy. When citizens can’t distinguish between reliable and unreliable information, they can’t make informed decisions about their governance, their health, or their future.

    The examples from around the world – from deepfake political videos in the United States to manufactured election fraud claims in South Africa – demonstrate that this is not a problem confined to any particular political system or level of economic development. It’s a human problem, rooted in our cognitive biases and amplified by technologies designed to exploit them.

    The solution requires more than better technology or stricter regulations. We need a fundamental shift in how we approach information consumption – from passive reception to active verification, from emotional reaction to analytical assessment, and from tribal acceptance to universal truth-seeking.

    The cost of inaction is not only an ill-informed populace, but the collapse of the shared reality upon which all democratic discourse depends. In an age where anyone can fabricate evidence and distribute it globally, the ability to think critically isn’t merely an intellectual virtue; it could be the foundation of civilization itself.

    The question isn’t whether we can solve this problem, but whether we’ll choose to do so before the damage becomes irreversible. The answer may well determine the future of human knowledge and social cohesion.

    If you enjoyed this article, please consider a small donation to support my work (Buy Me A Coffee), or becoming a patron (Patreon). Thank you

    Game meme with the words: If the news is fake, imagine how bad the history is!