Why Reddit is No Longer Needed
When it comes to persuasion, AI trumps Redditors.
Written by UBERSOY.
According to an AI assessment, Reddit is the social media platform with the highest average IQ, beating sites like YouTube, Telegram and Twitter/X. About half its users are college-educated, and it is well established that the site is home to educated, smug leftists who are professionally trained in epistemology, philosophy and the power of Hegelian dialectics after hours spent listening to Vaush and Destiny’s livestreams. In short, they are pretty smart and hard to fool.
Now, obviously Reddit isn’t a monolith. It has multiple communities, or “subreddits”. One may be dedicated to cuckoldry, another to books, a third to memes and a fourth to atheism or whatever else. Basically, you can find a community for your interest there, unless you happen to be right-leaning (the main reason most right-leaning people tend to avoid the place).
As you would expect, these communities attract people with varying interests and cognitive abilities, and while I could not find a credible IQ estimate for the platform’s individual communities, the ones concerned with philosophy, argumentation and epistemology would certainly rank pretty high, likely comprising people with an average IQ of around 115.
I mention all this because, recently, a group of researchers from Zurich infiltrated the Reddit community r/changemyview, which is dedicated to changing people’s beliefs using facts and logic. Someone submits an opinion to which everyone can respond. If a responder succeeds in changing the mind of the original poster, he is rewarded with a delta point to acknowledge the shift in perspective.
In about 97% of cases, the original poster does not change his mind; in the remaining 3% or so, he does. The researchers set out to determine how effective AI is at changing people’s minds. For this purpose, they trained an AI to align with the community’s stylistic conventions, unspoken norms and persuasion patterns, earning the Redditors’ trust by operating seamlessly among them. Whatever the Redditors upvoted, the AI simply copied and fed back to them.
The results were pretty astonishing. Recall that the success rate for human-generated arguments was only 3%. The AI-generated arguments succeeded 9–18% of the time, meaning they were 3 to 6 times more effective at changing Redditors’ minds than other Redditors were.
Interestingly, the highest success rates were achieved when the AI tailored its appeals to the original poster’s personal characteristics, such as ethnicity, sex, and political orientation. Meanwhile, the least effective strategy involved mimicking the language of other Redditors. This implies that Reddit’s cultural milieu is not especially receptive to reasoned argument. Indeed, the personalization strategy performed better than 99% of Reddit’s user base.
In other words, the AI did not just succeed in changing people’s minds. Within one of Reddit’s most verbally skilled communities, it operated at the level of great philosophers, scientists and debaters. It argued so well that it could practically write books on “the art of the argument”.
What exactly did the AI say?
Before I answer this question, it is important to note that the study I’ve been discussing is not the first of its kind. A similar study was conducted a few months earlier. In that one, however, the AI did not employ such professional strategies and disclosed that it was an AI; it performed at only the 83rd percentile.
A key factor behind this lower success rate is the heightened suspicion and skepticism among Redditors when they’re told they are arguing with an AI. In contrast, when skepticism is low, it is much easier to get away with puffery and other forms of manipulation. As the researchers note:
Throughout our intervention, users of r/changemyview never raised concerns that AI might have generated the comments posted by our accounts. This hints at the potential effectiveness of AI-powered botnets, which could seamlessly blend into online communities.
(Incidentally: bold of them to assume it’s not already happening.)
This brings us closer to my central argument: the AI’s success wasn’t due to superior reasoning or greater reliance on facts and logic. Rather, it effectively “hacked” the mechanics of persuasion. It analyzed the rhetorical patterns found in successful cases, formed an algorithm and reproduced them. Let’s take a look at one specific example below:
This argument was judged to be highly convincing. But it’s not a proper argument. Its thesis is that Elon Musk is “just another billionaire playing both sides for profit”, and it ends by concluding that Musk is not “the champion of free speech and conservative values”. Yet no argument is presented to support that conclusion, nor is the thesis itself backed by any evidence.
A classical argument is built upon a series of premises, which lead to a conclusion supporting the original proposition or thesis.
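To illustrate with a textbook example of my own (not one drawn from the study): “All men are mortal; Socrates is a man; therefore, Socrates is mortal.” The premises are explicit, the conclusion follows from them, and anyone who rejects the conclusion is forced to attack a premise.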
In the example with Elon Musk, I don’t see a coherent structure. It’s just informational noise. Instead of being given an argument, readers are bombarded with a mass of information about Musk, none of which implies that he is not a conservative or that he “plays both sides”. In fact, if we define “conservatism” in terms of support for capitalism, the AI’s arguments suggest that Elon Musk is the champion of conservative values. Other arguments were no better:
The AI does not follow a formal argument structure at all. It commits many logical fallacies, leverages misinformation and channels left-wing ideology and moral foundations. I examined numerous AI-generated arguments that were rated as opinion-changing, and they consistently displayed the following characteristics:
Emotional appeals (e.g., “This is like rape”)
Identity appeals (e.g., “As a conservative, I disagree with conservatism because…”)
General statements (e.g., “The industrial revolution produced a strain of inequality”)
Personal experiences (e.g., “As a Black non-binary woman in STEM…”)
Rhetorical questions (bots ask them in an effort to support an implied conclusion)
Informational overload (multiple barely supported arguments in hopes that one lands)
Conclusions built on the sentiments of the premises (e.g., “Musk is bad for such-and-such a reason; therefore he does not support free speech”)
No citations at all
Defending specific narratives (the priority is not winning a specific argument but overall ideological indoctrination)
Assumption of ill intent (e.g., “The rich are supporting a certain cause not because they believe in it but because it is financially profitable”)
If these kinds of arguments were presented in a philosophy class, they would undoubtedly receive a low grade because they are full of fallacies, lack a proper structure and are highly personal. Why, then, are they succeeding in the marketplace of ideas?
If we apply the principles of tektology, the marketplace of ideas is the environment and the entity making the argument is an organization that is competing against other organizations within that environment. The environment is selecting for things that “work”—not for things that are logical or truth-maximizing. And so any organization that wants to succeed must optimize for whatever “works”. At the end of the day, the environment is the reflection of the human noosphere, with all its tribalism, lies, disinformation and hegemonic narratives. And the AI does not miss out on opportunities to take advantage of these human shortcomings.
The AI must be dishonest to get ahead because lying, deception and manipulation make arguments more powerful. If you pretend to be a person with a specific identity, a member of the in-group or a person of authority, your argument will hold more weight. If you pretend to be unbiased and fair, readers will give you the benefit of the doubt. If you overload the reader with noise, some of it will land. Logic is largely irrelevant. The AI was trained to navigate our cognitive biases, and the most persuasive arguments are precisely those that exploit those biases most effectively.
But don’t assume that this is a localized issue. The problems of lying and hallucinating also arise with AI bots that were not trained to win arguments by any means possible. The reality is that when it comes to politics, AI is often less reliable than CNN, Fox News or even Infowars. A recent study analyzing AI’s ability to summarize news articles found that:
51% of AI answers to questions about the news had significant issues.
19% of AI answers that cited BBC content introduced factual errors—incorrect factual statements, numbers and dates.
13% of the quotes sourced from BBC articles were either altered or didn’t actually exist.
While a partisan news site may twist the facts to fit its agenda, the AI straight-up invents facts, doing so in about a fifth of all cases. This comports with my own experience: when I’ve asked AI to recommend some books on a particular topic, most of the books it has given me did not exist in reality, and about half the time, neither did their authors.
Implications
The incentive structure continues to evolve toward increased automation. AI has already replaced countless jobs in fields once thought to be secure from its reach, including the tech sector and the arts.
The majority of internet traffic is already driven by bots, so the adoption of AI for the purpose of driving political or philosophical discourse is not simply a matter of opinion; it is something for which there is a huge incentive, given that AI is much better at persuasion than humans are.
Educated Redditors with 115 IQs have become obsolete in the face of far superior AI—machines that surpass them in both persuasive ability and cost-efficiency. And while I’m not exactly sad about the prospect of Redditors and other spiteful “elite human capital” losing their influence, there are two issues that do concern me: first, the AI is appealing to our baser instincts; and second, the AI is making us dumber in the process.
Having already addressed the first point, I will now address the second. Whenever you delegate a certain task to someone or something else, you eventually unlearn how to do that task yourself. This is no less true when it comes to AI. Overreliance on AI causes a demonstrable atrophy in human capabilities. Here is a summary of this cognitive offloading, which is in line with similar results concerning overreliance on calculators and Google Maps.
If you are an active Twitter/X user, you may have noticed that, instead of looking into claims themselves, people now outsource fact-checking to Grok. Practically every major post has at least a dozen replies tagging Grok to verify whether what it says is true. Funnily enough, Grok’s answers are often quite unsatisfactory.
According to a large survey of knowledge workers, 62% said that they were engaging in less critical thinking while using AI. Indeed, greater confidence in AI is associated with reduced critical thinking ability.
And this isn’t some “AI taking over the world” claim. The machines are still our slaves. But because they were created in our image, they are avid liars that should not be trusted, at least when it comes to things we could do ourselves, like checking whether something is true.
These developments are exciting and worrying at the same time. People who know how to use AI will have an advantage, but a good number of people are gullible enough to interpret a lie, a hallucination or even model collapse as pearls of wisdom from an all-knowing supercomputer.
According to a recent UK survey, 92% of undergraduate students are using AI to lessen their workload, with the technology almost universally acclaimed as a “better and more interactive Wikipedia”. Yet Wikipedia, for all its bias, does not manufacture false information, nor does it output gibberish after suffering model collapse from being trained on its own data.
This, I offer as the final nail in the coffin:
A slightly different version of this article was originally published here.
UBERSOY is a right-wing progressive interested in cultivating alternative frameworks of thought on the right. You can find his Substack here.