Aporia Magazine

Yes, you're going to be replaced

So much cope about AI

Jan 26, 2025
[Header image: a massive mainframe computer looming over a neon-lit futuristic city beneath dark, heavy clouds.]

Written by Noah Carl.

When the word “cope” started to catch on a few years ago, I initially opposed its use. As far as I could see, it had come to mean nothing more than “to try and justify a position that someone else disagrees with”. Person A would say something like, “Here’s why I believe such-and-such.” And person B would chime in with, “Oh yeah, that’s why you believe it? That’s cope.” However, I have changed my mind. “Cope” is a useful word, even if it does embody the kind of irreverence that is only too common in internet discourse. It basically means, “to try and justify a position that you really should have abandoned”.

Which brings me to my point. There is an immense amount of cope about AI, especially from conservatives. This cope comes in two forms. First, there is the claim that AI isn’t really very impressive and can’t really do very much. Second, there is the claim that while AI is quite impressive and can do quite a lot, its effects on society will be largely or wholly positive.

The first form of cope is easy to expose, as a brief trawl of the academic literature and a few germane examples will show.

Qiaozhu Mei and colleagues prompted AI to play economic games such as the Dictator Game, the Ultimatum Game and the Prisoner’s Dilemma, and then compared its behaviour to that of humans from a large international sample. They found that the best-performing AI behaved in a way that was indistinguishable from the average human.

Cameron Jones and Benjamin Bergen invited human participants to have a five-minute conversation with either a human or an AI, and then asked them whether they thought their interlocutor was human. The best-performing AI was judged to be human 54% of the time, whereas humans were judged to be human only 67% of the time. (The worst-performing AI, an obsolete system, was judged to be human 22% of the time.)

Peter Scarfe and colleagues submitted AI-written answers to an online exam for a psychology course at a major British university. They found that 94% of the AI-written answers went undetected, and that the AI-written answers were awarded grades half a grade-boundary higher than those written by human students.

John Ayers and colleagues identified 195 exchanges where a verified physician responded to a public question online. They then posed the same questions to AI. Responses were evaluated by a team of physicians who were blind to their source (human versus AI). Evaluators preferred the AI responses in 79% of cases, rating them as more informative and more empathetic.

Erik Guzik and colleagues administered the Torrance Tests of Creative Thinking to AI and compared its performance with that of humans from several US samples. They found that AI scored within the top 7% for flexibility and the top 1% for both originality and fluency, as judged by humans who were not aware of the study’s purpose.

Karan Girotra and colleagues asked Wharton MBA students to “generate an idea for a new product or service appealing to college students that could be made available for $50 or less”, and randomly selected two hundred ideas. They then compared these with two hundred ideas generated by AI. Of the 40 best ideas, as judged by humans who were not aware of the study’s purpose, 35 were generated by AI and only 5 were generated by humans.

Brian Porter and Edouard Machery asked humans who were not experts in poetry to judge poems written by AI or well-known human poets. The humans were unable to reliably distinguish AI-written from human poetry, and rated the AI-written poems as more rhythmic and more beautiful.

Lauren Martin and colleagues had AI review legal contracts and compared its performance to that of human lawyers from the US and New Zealand. They found that the best-performing AI matched human performance in terms of accuracy, and far exceeded human performance in terms of both speed and cost efficiency.

An AI was recently able to solve 25% of frontier math problems, which typically take human specialists hours or even days to solve. Only a month earlier, legendary mathematician Terence Tao had opined that the problems would “resist AIs for several years at least” because relevant training data is “almost non-existent”. The same AI achieved a competitive coding score at the 99th percentile of human coders.
