Written by Noah Carl.
Dwarkesh Patel is a podcaster who’s interviewed a lot of people in the world of AI. He put the following question to Anthropic CEO Dario Amodei:
What do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven't been able to make a single new connection that has led to a discovery?
Amodei’s answer was fairly simple: “I'm not sure, but this is going to change with scaling”. For those not well-versed in the jargon, “scaling” refers to the process of boosting a model’s performance by jacking up the number of parameters, the amount of training data or the amount of compute—the idea being that sufficiently “scaled” models will start making new connections that lead to discoveries.
Patel subsequently posted his question on Twitter and various users proffered answers of their own. The most popular—as judged by the number of likes—was from Eric Michaud, an MIT graduate student. He speculated that learning “algorithms associated with higher cognition” may simply not be optimal for LLMs, which are trained to do nothing more than “minimize next-token prediction loss across human text”. Michaud also agreed that it’s a “super interesting question”.
What’s noteworthy is that most of the commenters accepted the premise of the question—though some followed Amodei in insisting that “better and more creative LLM reasoning will solve it”. The whole discussion seemed to vindicate sceptics who claim that AI isn’t close to making any real contributions to knowledge.
When Patel re-upped his question in February of this year, the tweet caught my eye and I posted an answer in the replies. Specifically, I cited the study in which both Wharton MBA students and an AI were asked to generate ideas for new products or services: of the 40 best ideas, 35 came from the AI and only 5 from the humans. My point was that AI can already make new connections.
However, I don’t think my answer was particularly good. In fact, it was kind of smart-ass. After all, what Patel clearly had in mind was a new connection that has led to a scientific discovery—not a new connection that has led to a business idea that may or may not pan out (and in all likelihood would not pan out). Fast forward a few weeks, which is a long time in AI development, and we now have a much more compelling answer to Patel’s query.