Aporia Magazine

AI's Original Sin

The future enabled by AI may be dazzling, but will it leave room for artists, novelists and philosophers?

Jun 12, 2025

Written by Nicholas Agar.

In 2015, Sam Altman and Elon Musk birthed OpenAI with the ambition of producing Artificial General Intelligence (AGI), an artificial mind as capable as any human one. The rapid improvement of digital technologies suggests that the very artificiality of that mind will bring future riches and joys, and possibly terrors as well.

As we contemplate fever dreams about AI, anyone who makes a living out of thinking should consider what OpenAI’s creations might be inviting us to relinquish. We idolize tech-types who look beyond the horizon and imagine a world of universal abundance. Artists, philosophers and other creative workers have an obligation to imagine, with equal vividness, that their share of the abundance might turn out to be nil.

While contemplating these imaginary futures, we should also examine the stories we tell about AI’s origins, and the myths and assumptions they carry.

One origin story points to a 1950 paper by the computing genius Alan Turing, which envisaged computers with linguistic capacities indistinguishable from a human’s. Turing imagined a year 2000 in which language-using machines would expand what we mean by “mind” to encompass them. In a world where Turing had never proposed his eponymous Test, we might have focused on other aspects of human minds. Perhaps we wouldn’t be quite so ready to believe that Large Language Models make AGI imminent. Would OpenAI still command such astronomical valuations if we didn’t equate verbal fluency with intelligence?

Here's a more recent origin story specific to OpenAI. It suggests a flaw as momentous as the sin of pride that prompted God to cast out his brightest angel, Lucifer. Greed is embedded in the very definition of ‘AGI’ found in OpenAI’s charter: “highly autonomous systems that outperform humans at most economically valuable work.”

To see the corrupting influence of greed, compare OpenAI’s definition with one from pre-ChatGPT times, offered by Stuart Russell and Peter Norvig in 1995. They characterised ‘AGI’ as “an AI that performs any intellectual task a human can”. This definition embraces many intellectual tasks with no obvious economic value, such as writing a letter to your beloved or composing a haiku for yourself.

What is the significance of shifting from performing any intellectual task that a human can to outperforming humans at most economically valuable work?

Incidentally, outperforming humans at most economically valuable work does not require outperforming us at all economically valuable work. OpenAI’s definition therefore invites a question: which economically valuable tasks can be excluded? The definition suggests that a candidate AGI is more likely to qualify if the tasks it omits are the less economically valuable ones.
