June 10, 2025
Written by Dr. Patrick Adriel H. Aure
The promise was seductive: generative AI would democratize creativity, accelerate innovation, liberate human potential. The reality? We've built the perfect bullshit machine. Not lies—lies at least respect truth enough to oppose it. This is something worse: infinite nonsense generated at zero cost, each piece precisely calibrated to exploit our individual emotional weaknesses.
A troll farm once needed hundreds of employees to flood social media with disinformation. Today, one person with ChatGPT can generate more manipulative content in an afternoon than those farms produced in a month. Each message can be personalized, localized, and optimized for maximum emotional impact. The asymmetry is staggering: one prompt spawns a thousand deceptions, each finding its mark through algorithmic precision.
Harry Frankfurt warned us about bullshit's unique danger—it's not false but indifferent to truth. The bullshitter doesn't care whether their words correspond to reality. They speak only to manipulate, impress, obscure. In Frankfurt's analysis, this indifference corrodes something deeper than facts: it erodes the very idea that truth should matter at all.
The Futility of Fact-Checking
Here's where our usual defenses fail us. Fact-checkers assume people care about accuracy, that exposing falsehoods will somehow restore order. But Yuval Noah Harari reminds us of an uncomfortable reality: human societies have never run primarily on facts. Money is fiction—paper and pixels we collectively pretend have value. Nations are stories we tell ourselves about imaginary lines on maps. Corporations are legal fantasies that somehow own property and sign contracts.
Information has always been about emotional resonance and network building first, facts second (if at all). We believe what strengthens our tribal bonds, what validates our identity, what makes us feel seen and understood. When AI-generated content arrives perfectly pitched to our psychological frequency, speaking our fears and hopes back to us in our own voice, fact-checking becomes as relevant as correcting grammar in a love song.
The Parasitic Machine
This mechanized bullshit operates as pure extraction. Each fabricated narrative, each manufactured controversy, each artificial outrage depletes our collective capacity for trust and cooperation. Unlike productive fictions that build social reality—the stories that create institutions, enable collaboration, inspire achievement—algorithmic BS only consumes. It feeds on attention, erodes common ground, fragments shared understanding.
The hyper-personalization makes it worse. We no longer inhabit even loosely shared information environments. Each person receives a bespoke reality, carefully crafted to confirm biases, inflame anxieties, validate resentments. The question isn't whether this tears societies apart—we can see that happening. The question is what we do about it.
Beyond Defense: The Proleptic Alternative
If Harari is right that we're condemned to live by fictions, and if Frankfurt is right that bullshit threatens our capacity for collective action, then perhaps we need a different approach. Not fact-checking but future-making. Not debunking but constructing. Not defense but creation.
Prolepsis—the rhetorical act of representing a future as if already present—offers a path. When Kennedy declared "we choose to go to the moon," he created what we might call a generative fiction. Not a lie, not bullshit, but an invitation to make something true through collective effort. The statement created accountability, marshaled resources, aligned efforts. Fiction became fact through shared commitment.
The difference between bullshit and proleptic narratives isn't about truth versus falsehood. Both create effects in the world. But bullshit extracts value while proleptic statements generate it. One depletes trust; the other builds it. One fragments; the other connects. One discards responsibility; the other embraces it.
This isn't about abandoning facts or embracing relativism. It's about recognizing that in an age of infinite narrative generation, fact-checking alone is like bringing a fire extinguisher to a flood. We need stories that can compete with BS on its own terms—emotional resonance, network effects, viral transmission—while building rather than destroying.
What would it look like to use AI's power for proleptic creation? To generate visions that expand rather than contract possibility? To craft narratives that invite participation rather than manipulation? These aren't idle questions. With every person now wielding tools of mass narrative production, they're survival skills.
The machines will keep generating bullshit—that's what they're optimized for in an attention economy. But we still choose which stories to amplify, which futures to speak into being, which fictions to inhabit. In that choice lies whatever hope we have.
Patrick Adriel H. Aure, PhD (Patch) is the founding director of the PHINMA-DLSU Center for Business and Society and assistant dean for quality assurance of the DLSU Ramon V. del Rosario College of Business. patrick.aure@dlsu.edu.ph