This is an excellent piece. The jobs in the firing line belong to much of the upper middle class and lower upper class. Contra liberals, these people have more power than the rich. Contra conservatives, they have far more power than the poor. That makes the era of AI displacement politically tenuous.
Assuming imminent AI displacement is correct, there are three possible distributional outcomes:
1. AI released without regs, plus way more redistribution. Those displaced at the end of their careers will get early retirement. Those in middle age will get a lot of subsidies to encourage them to stay in the workforce. Long-term, a lot more public ownership of AI-related products to ensure broadly shared gains.
2. AI prohibited/limited. The upper middle class uses its power to strangle where AI can be used, and AI adoption is slowed considerably. As AI is adopted anyway, the class able to block it shrinks and the blockade eventually fades out. Basically the process in #1 happens, but at a slower pace, with AI owners and the small remaining class of workers made considerably poorer.
Long-term: the transition to a post-AI economy is lengthened, with less disruption and less wealth in the short term.
3. Same as #1 but with no redistribution. The capital class succeeds in blocking redistribution. Lots of upper-middle-class jobs are eliminated. Those pushed into the lower classes begin organizing them. As you said in this piece, there are going to be a ton of well-educated, ambitious professionals who face a sudden loss of income and status. These folks have the human capital to fight for their interests.
We get a massive class conflict. Eventually the rich lose, but probably with some deaths and a ton of democratic decay before we end up in the same spot as #1.
As a liberal, you can make an argument for 1 or 2, but 3 is markedly worse. Honestly, though, I think the economic challenge of AI is far smaller than the social challenge.
This is the primary challenge: “The gist of our argument is that humans don’t just value the products of our intellect; we also value the process of applying our intellect. So far from enhancing our well-being, a world in which future civilizational advancements are largely automated could give rise to profound ennui.”
I’m not a conservative, but I’ll try to answer why conservatives who think AI is meaningful don’t care about the upcoming transition.
As an outsider looking in, compared to liberals, conservatives are:
1. More personally optimistic;
2. More self-interested;
3. More Darwinian;
4. More risk-tolerant;
5. Marginally more likely to work non-computer jobs.
That combination explains a lot of the difference. Because they are more optimistic and risk-tolerant, they think they’re either in jobs protected from AI or part of the elite class that can survive the transition. From there, their more Darwinian, self-interested worldview takes over: this is going to make us rich; others may lose their jobs, but those who can’t compete should be punished.
I’m a liberal, so that sounds harsh, but I sincerely believe it explains most of the discrepancy. I’m not trying to insult conservatives.
One other component is at play which explains why conservatives are sincerely less likely to believe that AI is important and that is the difference between the Republican and Democratic Party. Conservatives and liberals get a lot of their beliefs from their respective parties. The Democratic Party is considerably more honest about economic challenges facing the country. So the risks AI poses for workers are openly discussed.
By contrast the Republican Party is still captured by rich people who just want lower taxes. The economic discourse in the party is considerably more dishonest so the potential challenges of AI haven’t reached as many conservatives.
The potential credentialist and elite reaction to this hypothetical AI future might be that while it can draft better legal contracts than lawyers and develop better product ideas than MBAs, these professions will not go quietly and will use regulatory law to maintain their status. Just as many professions excessively prioritize occupational licensing to limit competition. So, while it becomes much easier for anyone to draft a legal contract, it wouldn't be legally binding unless "written" by a credentialed lawyer. In that sense, AI could become a tool that makes these jobs much easier without threatening their existence. As you mentioned with the industrial revolution, there will be a transitionary period where people will not trust an AI to do high-consequence activities like legal work without being reviewed by a human, even if that review is redundant.
>>why, then, are conservatives coping so hard? Can they not face facts?
Most people who do use AI only use it as a search engine.
Many people I know don’t know how it works or why it works. Conservatives I know are still “coping”, repeating anecdotal stories they’ve heard about how AI answers questions incorrectly. Or real life stories about how “AI was used by X company and they had to go back to the way they were doing it and rehire a bunch of people.”
The future will involve some form of UBI. There will be no other practical option with massive unemployment and no new jobs or occupations for more than a small percentage of those who lose jobs to automation. Being wealthy and successful is not a virtue in and of itself. Many at the top of our financial plutocracy, whatever their industry, aren’t our friends and don’t care about our interests.
This is an interesting article and its message deserves to be taken seriously. But in making its wake-up call about myopic conservative AI 'cope' it seems paradoxically to manifest a strange myopia of its own. How?
The 'conservative' world of 2025 is not some huge Substack. Yes, AI will, I daresay, be transformative in just the way the article predicts but the vast majority of people (conservative or otherwise) will witness this transformation without having any opinion (or cope) about it one way or the other. It will just happen to them with little or no intellectual engagement on their part with the process happening to them. That's just the way most people are.
Yes, but I care about *me* and mine. And I *do* know about AI, so it's best to think of individual "survival" strategies as best I can. Be proactive, not reactive like the "most people" you mention.
As someone working on AI implementation for a Fortune 500 company, I can appreciate the general fears and “coping” about AI’s potential, but this article overlooks some critical realities. While AI has made huge strides in the past 5 years, it’s not limitless. Much of its progress depends on massive datasets, and we’re already hitting limits in terms of quality and availability. Without new data, growth will plateau. As AI generated data contaminates its own dataset, we see a degenerative process called “model collapse”.
AI also struggles with real-world applications because it can’t make accurate observations about the physical world. This isn’t just a small flaw—it’s a major roadblock for industries that require context, nuance, or real-time interaction with the environment.
Hallucinations remain a serious issue as well. AI will still often produce false or nonsensical results, and when tasked with complex, multi-step processes, these errors tend to compound. That’s a big problem when reliability is a priority. And while benchmarks and studies are great for PR, they rarely reflect the messy, ambiguous problems we deal with in real-world use cases.
Then there’s the economic angle. If AI were truly as revolutionary as claimed, it would already be profitable. Yet, the major players in the space are burning through cash without demonstrating financial sustainability.
AI is a powerful tool, but it’s far from ready to replace humans in most scenarios. Its best use is as a complement to human intelligence—not a substitute. The article seems to miss that balance entirely.
Thanks for sharing your perspective. I didn't mean to suggest that AI is "ready" to replace humans in most scenarios, but that it probably will be within the next 5–10 years.
—NC
Thanks for the reply. I appreciate your perspective. However, a 5–10 year timeframe would likely require moving away from LLMs, which have inherent limits. Linear growth demands exponentially more data, and we’re already hitting the limits of what’s available. Diminishing returns are inevitable unless there’s a shift toward something like symbolic AI or another revolutionary approach. Without that, the progress you’re envisioning in such a short time seems unlikely.
The chain-of-reasoning approach adopted by O1 and other recent models seems like a step back toward classical/symbolic AI, albeit one grafted on top of the LLM neural-net system. My inclination would have been to start with symbolic reasoning and graft perception/motor functions on top, but I'm not an expert.
Thoughtful comment and a solid point ;)
Yes, concerns around the data wall have dropped off a lot in the last 3-4 months, since O1 demonstrated a new way to scale and especially since O3 demonstrated rapid progress on that new scaling parameter.
O3 didn’t really address this. It took a 17,100% increase in compute to score 12% higher on the ARC tests, and it still failed at pretty basic stuff.
Also, the bullets in the benchmark read “ARC-AGI-TUNED”, which makes me think they did an undisclosed amount of fine-tuning behind the scenes.
I also work in AI. I disagree with the way you're applying your point. I 100% agree that we won't get to AGI, much less ASI in every respect (as everyone knows, the frontier models already know more facts than any human, including ultra-specialized ones), without further algorithmic developments. RL on top of giant LLMs as the core intelligence won't 100% replace humans.
But even if we never got anything better than the current Google, Anthropic, and OpenAI models, a ten-year infrastructure build-out and human/process adaptation will almost certainly replace 80-99% of knowledge workers, depending on the industry. That's an absolutely INSANE disruption at unprecedented speed. We have no idea what the consequences will be, but they will almost surely be between bad and catastrophic.
Why? If a job can be done by a machine instead of a human, and the machine can do it faster and more reliably, then the machine doing it instead simply ADDS both time and money to humanity. The time alone that the machine saves us is worth more than any amount of money from any stupid, obsolete job.
I do not think it will. I think you greatly underestimate the complexity and difficulty. The people I find least worried about mass job replacement are typically technical people on the implementation end, especially those who have looked into practical AI implementation in actual companies.
Gary Marcus has a terrific Substack, Marcus on AI, that you might find useful in understanding the issues.
'As AI generated data contaminates its own dataset, we see a degenerative process called “model collapse” '
This is evident any place where AI has been implemented really quickly. Case in point: most mainstream internet news pieces and the narration for internet videos. Tedious doesn't even begin to describe it.
True! Another example of model collapse can be seen in retail. If an AI system is used to predict customer demand and is then retrained only on data generated by its own recommendations (e.g., stocking certain products based on past predictions), it can start reinforcing its own biases. Over time, this narrows the range of products being stocked, ignoring actual customer preferences and ultimately harming sales.
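To make that feedback loop concrete, here is a rough toy sketch (my own invented numbers and a deliberately naive retraining rule, not anything from the original comment): a retailer stocks only what its current demand model rates highly, then "retrains" on the resulting sales logs, in which unstocked items look like zero demand.

```python
# Toy illustration of a model-collapse-style feedback loop in demand forecasting.
# All data is synthetic; the retraining rule is deliberately naive.
import numpy as np

rng = np.random.default_rng(0)
n_products = 20
true_demand = rng.uniform(5, 50, n_products)               # what customers actually want
est_demand = true_demand + rng.normal(0, 8, n_products)    # initial, noisy model estimates
THRESHOLD = 20.0                                           # stock an item only if the model rates it highly

for step in range(8):
    stocked = est_demand > THRESHOLD
    # Observed sales: stocked items sell roughly their true demand, unstocked items sell nothing.
    sales = np.where(stocked, true_demand + rng.normal(0, 2, n_products), 0.0)
    # Retrain only on the self-generated data: estimates drift toward observed sales.
    est_demand = 0.5 * est_demand + 0.5 * sales
    served = sales.sum() / true_demand.sum()
    print(f"step {step}: items stocked = {int(stocked.sum())}, share of real demand served = {served:.2f}")
```

Run it and the stocked count only ever shrinks: once an item's estimate falls below the threshold, its "observed" demand is zero forever, which is exactly the bias reinforcement described above.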
Orthogonal to this, can you envision a competitive strategy, at corporate and/or national scale, to contaminate a competitor's or opponent's AI datasets?
That’s an interesting question! An example could be in social media advertising. Let’s say a smaller ad-tech company is using AI to optimize ad placements and relies on public engagement data from platforms like Facebook or Instagram to train their models. Now imagine one of the big players—like Meta—flooding their platform with synthetic engagement signals, like fake likes, clicks, or comments. These fake signals could throw off the smaller company’s AI, leading it to make bad decisions.
I don’t know of any examples where this has been done.
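For what it's worth, the mechanism is easy to sketch in toy form. The following is purely hypothetical (no claim that Meta or anyone else has actually done this); the placements, CTRs, and volumes are invented. The point is just that a naive optimizer trained on logged engagement can be steered by injected fake clicks.

```python
# Hypothetical sketch of poisoning a competitor's engagement data.
# A small ad optimizer estimates click-through rate (CTR) per placement from public logs;
# an adversary appends synthetic clicks for a placement that actually performs poorly.
import random

random.seed(1)
TRUE_CTR = {"feed_top": 0.08, "sidebar": 0.02, "story_overlay": 0.01}  # invented numbers

def simulate_logs(n_per_placement, fake_clicks_for=None, n_fake=0):
    logs = []
    for placement, ctr in TRUE_CTR.items():
        for _ in range(n_per_placement):
            logs.append((placement, 1 if random.random() < ctr else 0))
    if fake_clicks_for:                      # adversary's synthetic "engagement"
        logs += [(fake_clicks_for, 1)] * n_fake
    return logs

def estimate_ctr(logs):
    shows, clicks = {}, {}
    for placement, clicked in logs:
        shows[placement] = shows.get(placement, 0) + 1
        clicks[placement] = clicks.get(placement, 0) + clicked
    return {p: clicks[p] / shows[p] for p in shows}

clean = estimate_ctr(simulate_logs(5000))
poisoned = estimate_ctr(simulate_logs(5000, fake_clicks_for="story_overlay", n_fake=2000))
print("clean estimates:   ", {p: round(v, 3) for p, v in clean.items()})
print("poisoned estimates:", {p: round(v, 3) for p, v in poisoned.items()})
# With the fake clicks, the worst placement now looks like the best one, so a bid
# optimizer trained on these logs would shift the victim's spend toward it.
```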
Very interesting -- a species of "mistaking the facsimile for the thing itself," but on a much larger scale and at a faster pace than ever before. Come to think of it, a more familiar example might even be found in the music playlists on streaming services. If you developed a sense of taste prior to the internet, they're always dissatisfying. They're workable only to the extent that you keep going back and "re-seeding" them with new data yourself.
"While AI has made huge strides in the past 5 years, it’s not limitless. Much of its progress depends on massive datasets, and we’re already hitting limits in terms of quality and availability. Without new data, growth will plateau. "
I keep hearing this, but the current "plateau" is more like an exponential wall.
"AI will still often produce false or nonsensical results"
So do humans; the real question is which one will do it more.
Okay but just like, shove an AI into a robot and have it beam in photo data from the real world for its data set.
Like an infant does.
I know you’re being funny, but this scenario is exactly where the current paradigm in AI starts failing IRL.
Constant visual input doesn’t translate well into tokenized strings of language, which is what current AI is built to handle.
What about the Tesla training model? Gather loads of real-world data from devices (cars) and then feed that into a simulated training environment where interactions can be run (presumably with some controlled variations) and the model then "learns" in there. Does that work?
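Roughly, yes, that's the shape of a sim-augmented training loop, though I don't know the details of Tesla's actual pipeline, so treat the sketch below as a generic "log real scenarios, replay them with variations in simulation, learn from the outcomes" loop. The 1-D car-following controller, the cost function, and all numbers are invented for illustration.

```python
# Generic "fleet logs -> simulator with variations -> learn" loop (illustrative only).
import random

random.seed(0)

# 1) "Fleet logs": recorded scenarios, here just (initial gap in metres, lead-car speed in m/s).
fleet_logs = [(random.uniform(10, 40), random.uniform(15, 30)) for _ in range(50)]

def simulate(gain, gap0, lead_speed, target_gap=20.0, steps=200, dt=0.1):
    """Replay one scenario: the ego car adjusts its speed in proportion to the gap error."""
    gap, ego_speed, cost = gap0, lead_speed, 0.0
    for _ in range(steps):
        accel = max(-3.0, min(3.0, gain * (gap - target_gap)))  # clamped proportional control
        ego_speed = max(0.0, ego_speed + accel * dt)
        gap += (lead_speed - ego_speed) * dt
        if gap <= 0:
            return 1e6                                          # collision: huge penalty
        cost += (gap - target_gap) ** 2 * dt                    # penalize drifting off the target gap
    return cost

def evaluate(gain):
    # 2) Replay every logged scenario with controlled variations (perturbed gap and speed).
    total = 0.0
    for gap0, lead_speed in fleet_logs:
        total += simulate(gain, gap0 * random.uniform(0.8, 1.2), lead_speed * random.uniform(0.9, 1.1))
    return total / len(fleet_logs)

# 3) "Learning": here just a coarse search over the single controller parameter.
best_gain = min((g / 10 for g in range(1, 31)), key=evaluate)
print("best gain found:", best_gain)
```

Real systems obviously learn far richer policies than a single gain, but the loop structure (real data seeds the scenarios, the simulator supplies cheap variation, the learner is scored on simulated outcomes) is the part the comment describes.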
I hope you're right, but AI evolves thousands if not millions of times faster than human intelligence did, so the AGI scenario seems inevitable to me within a few decades at most if research isn't severely curtailed.
It doesn’t, really. Vinge is not the real world.
Thanks for the reply. Please see my comment above!
Another thought I had, that I think warrants a separate thread / comment: I can’t help but feel like a lot of the fear around AI—and even the hype about benchmarks—is being pushed by the big players in the industry who have a lot to gain from it. These companies know that regulations framed as “safety measures” will mostly hurt smaller competitors who don’t have the resources to comply. Meanwhile, the big players are more than equipped to handle expensive compliance processes, which just helps them lock down their position at the top.
The obsession with flashy benchmarks also feels like PR more than substance. Sure, it’s cool to see AI passing creative tests or mimicking humans in specific scenarios, but how much of that really matters in the real world? A lot of it doesn’t translate into practical, scalable solutions. It’s starting to feel like the big companies are controlling the narrative to make themselves look untouchable while quietly making it harder for anyone else to compete.
The regulatory capture hypothesis needs to explain (1) why big tech and the main players (except sorta Anthropic) have consistently opposed basically all regulation, and (2) why, if there is such an incumbent advantage, the main results are coming from new companies like OpenAI, Anthropic, and DeepSeek. Even Google DeepMind was an acquisition.
When we sink to conspiracies without tangible evidence, we're on weak footing.
This is not a conspiracy. It is a description of an existing, real-world incentive. Whether or not the big AI players are taking advantage of this incentive is a separate question.
Really looking forward to the second and third order effects. It’s one thing to realise that AI can write a good essay, but which qualifications/exams will survive everyone knowing you don’t have to study that subject to get access to its knowledge base and answers? It’s not that teachers will be replaced, but that schools will be pointless.
Scary and exciting in equal measures.
Agreed
—NC
Boy, I don't know.
I have to really *work* to see any positive outcomes for individual humans, and really, that's a form of desperate coping...
Well, there are potential upsides. The equivalent of top-level doctors and engineers available for free to everyone in the global south, for example.
Eventually they'll have mass unemployment as well, though.
If you have to work to come up with an upside, and that upside is itself likely under threat from *the same* mechanism that has provided the initial upside, I'd call that coping, yep.
Well, the medical services and UBI and so on won't necessarily go away; it's just that the drawbacks of mass unemployment might be worse, at least beyond a certain point.
This is unlikely. Michael Polanyi (The Tacit Dimension, Personal Knowledge) pointed out in the 1940s that nearly all occupations have a large hidden base of knowledge that is unwritten and can only be gained by observation and practice.
Schools will not be pointless, but the assessment of students will have to return to being a techless, chalk and talk, *viva voce* exam.
Indeed. The problem is, tacit knowledge is precisely not what is acquired in school; it’s what you develop after you leave school. There will still be apprenticeships, but schools as we’ve known them since the Victorian era of mass education are toast.
Serious education (except for *reserves* confined to areas where big economic players have an interest) is already gone — without the need of help from AI.
I've been learning this first hand in my recent first aid training. Taking a pulse, a BP, putting in an IV etc all seem pretty straightforward when you read it in the books or watch a video. In the real world, it's way more complicated when you might be dealing with a muscular 20yo man one day, and a frail 90yo with paper thin skin the next. Repetition and mentor support / feedback is crucial.
I think we've had a good 150-200 year run for mass human literacy. It was an interesting interlude, but in many ways we will be returning to default.
Yes.
It seems to me that AI is the optimization of (human potential)^(computer potential)
Human potential being “what has been created (even if we don’t recognize it immediately)”
Computer potential being “scalable processing speed and memory.”
I haven’t seen AI create anything beyond what humans have created yet, but I have seen AI recreate maximum human potential, and do it very quickly.
Right. As I argued, AI could replace a lot of jobs even if it "never actually surpasses the smartest and most creative humans in their respective domains of expertise".
—NC
As a “conservative,” I completely agree. Excellent article.
I do think Kaczynski’s “conservatives” reference was aimed more at neocons, but I could be wrong. Based on my small, anecdotal sample of “conservatives” in my circle: AI feels like a slow-motion ‘world killer’ asteroid that some can see while most can’t, and of those who can see it, many think it’s just an opportunity to mine for minerals rather than an existential risk (yet inevitable either way).
Hah!
Did you ever see the von Trier film, Melancholia?
I haven’t, but heard it was appropriately named. I’ll check it out.
What do you see as the analogy?
Slow motion killer asteroid, and how a variety of people react to impending and inescapable doom.
I thought "Don't Look Up" was the more direct parallel, although Melancholia is probably the better movie.
It can’t currently. It has no common sense, no context, and doesn’t have real reasoning ability. Plus it hallucinates. Those are not minor limitations.
"It can’t currently. It has no common sense, no context, and doesn’t have real reasoning ability."
And it probably never will. Too many get wrapped up in the hype and forget it is a computer program that does what it is told to do by humans.
It is as if all these years of improving the human-to-computer interface have reached something similar to critical mass in a nuclear reaction.
And there is, as of this time, no concept of control rods.
I was skeptical of AI (I don't like the noun usage of "cope") until I saw it being embedded in my coding tools at work, popping up suggestions when I made an error. It will be interesting to see how the next generation works when it has more AI-generated content to ingest. There are already serious issues in law, where it has invented plausible-sounding cases (that is probably easily fixed).

In conjunction with AI for knowledge work, we see automation in low-skill manual work (the McDonald's app, as an example, has put 1-2 workers, minimum, on the street at each location worldwide). In a perfect world the savings would be passed on to consumers and all products would get cheaper; in reality we will see more of the savings going into corporate profits. Self-driving vehicles, especially trucks, will put another chunk of the population out of work.

We can't stop progress and shouldn't want to, but we do need to think clearly through what life looks like (and costs) in ten years. I laughed at the passage where CDs replaced most live music; consider why Coursera/Khan Academy-style recorded teaching has not replaced the average instructor. How do teacher unions and universities protect their status?
Online learning is increasingly replacing the university pipeline; it's just a slow process because it takes time for the prestige of the college system to degrade. Women in particular seek refuge in the corporate hierarchy and use credentialism to establish loyalty.
This is why we are more likely to see a repeat of the Long Depression of the 1870s-1890s than overnight mass unemployment. Unions--excuse me: professional associations--of all kinds will fight tooth and nail against the process, using credentialism to draw it out for decades.
Well, I certainly hope so. I *was* hoping that the hardware costs of research would enable some kind of non-proliferation agreement to be enforced, but if DeepSeek was built on a shoestring budget with second-hand GPUs then it may be too late for that.
I thought the Lotus Eaters' take on the topic was rather blithe and careless, to be honest- saying 'fuck you' to Sam Altman isn't really a sufficient reason to render all humans obsolete. "Hooray, we all get robot waifu assistants! This won't destroy intimate relationships at all!"
https://www.youtube.com/watch?v=eEiouT3we9Y
Unions launching a Butlerian Jihad might be the only stopgap measure at this point. This *kinda* happened in the music industry and the same *might* apply to graphic design, but at the moment it's not clear that the Trump admin/Techbro alliance is ideally suited to shoring up professional associations, and obviously leaning on institutions like the RIAA comes with its own downsides.
>We can't stop progress and shouldn't want to
I see your point with your overall post, but it raises the question "progress towards what?" And what we're progressing toward may not even be good. Is it progress to careen off the cliff, or to reverse or change course? Are we even heading off a cliff? We don't know; we're driving blind, and not knowing whether we're headed off a cliff is arguably as bad as seeing the cliff and doing nothing to avoid it.
"Progressing towards what" is exactly the question. The best-case scenario I can see for ubiquitous AI is that we all become boutique luxury craftsmen of some description, but the Wall-E/Idiocracy scenario seems equally plausible, and that's not even the worst-case scenario here.
I guess we won't know where we're going till we're already there. And that probably goes for everyone. A friend of mine gifted me a book last week, the last book Kissinger had any involvement with, and it talks about AI. He, Eric Schmidt (a former Google CEO), and another guy co-wrote it. I'm almost through with it, but it's leaving me with the impression that the Top Men™️ know as much about where we'll end up as we do.
As vast swathes of humanity become unemployable, society historically seems to devolve into some combination of bread and circuses and profound civil unrest.
I work in banking and have been using AI on an ad hoc basis for around 18 months. If I get writer’s block while preparing a credit paper I just chuck it into ChatGPT and it churns out something I can edit in seconds. The main limitation at present is that I’m breaking privacy law if I feed it any customer data, so I’m waiting for an internally hosted system with the required safeguards.
I have no doubt that AI can already do most of the tasks I perform better than I can. My cope, for now at least, is that the tasks it’s best at are those I enjoy doing the least. Cross-checking the paper submission against our risk grading system (at least an hour of non-productive work)? Yes, please take that out of my hands. Then I can spend more time visiting clients and sites, which (perhaps naively) feels like the part less likely to be replaced, at least in the short to medium term.
I gotta confess, I'm amazed at people who work white-collar desk jobs with AI exposure and aren't impressed with what AI can do.
My assessment is similar to yours. AI can do my white-collar desk job better than I can. And even taking the worst possible assessment of AI, you should still pick ChatGPT due to the cost savings.
I would tell any young person beginning their career to enter a career path that requires being in the room to be effective. The author mentioned lawyers: think prosecutor/public defender rather than contracts attorney.
This is one of the most profoundly and ruthlessly honest essays that I can recall reading.
As individuals who want to survive and prosper, we've got to think like we've never thought before, *simply to make it to the end of the decade*.
So...
Prostitution?
Direct criminal activities?
???
...
The article made it quite clear what pitfalls the author believes await us. What I didn't see was a solution.
I did call for "lamenting the inevitable". In all seriousness, it's unclear what a "solution" would look like; recognising that there's a problem would be a good start.
—NC
I think the potential for an international non-proliferation treaty is much more tenable than people think. There are only a handful of facilities on the planet capable of manufacturing the hardware needed for LLM training and other high-end AI research, so in principle it wouldn't be that hard to talk to the management.
The argument that China or other countries will beat the US to the punch is overstated if you consider, e.g., Peter Zeihan's arguments that China is utterly dependent on the US guaranteeing global maritime trade in order to remain economically functional, and is probably doomed to a demographic implosion within the next decade or two.
My guess is that the real motivation for AI investment is that our society has been built on the assumption of continuous economic growth, without which institutions like retirement funds will go insolvent. On a planet with collapsing TFR and diminishing returns on education, AI is the only remaining avenue for juicing GDP, so the powers that be are betting the farm on that one remaining option. Quite a risky strategy, IMO.
I think that what you propose is the best outcome I can think of so far.
Only the technologically advanced nations can implement AI for now, and an international agreement greatly limiting its deployment/use, similar to nuclear weapons, might be possible.
There would maybe need to be a parallel to "mutually assured destruction" to enforce the limitation, and I can't yet see what that might look like, but...
Also, ironically, the entire AI thingie could entirely screw up daily life in the 1st world, but in the 3rd world, not that much would change, I think.
Hah. Maybe *that's* the individual solution! Go live in Uganda.
" In all seriousness, it's unclear what a "solution" would look like; recognising that there's a problem would be a good start."
I think the 'problem' is overstated. I said, "the author believes await us", not that I hold that belief.
Absolutely.
Right now is not the time for denial. That time will come when, after vast and protracted consideration, no positive--or even neutral--solutions can be found.
Ahahahah...
@Noah : Do you think there’s any value in learning programming anymore? My New Year’s resolution was to change my career and get into programming. So you’re saying I shouldn’t and I have no hope?
I don't like to give career advice but it may be worth a rethink. Do seek out alternative perspectives!
—NC
I worked with SW developers starting in the early 80s until I retired about 10 years ago. In that period I saw the field as "democratizing"...becoming more and more broadly accessible by means of IDEs, interpreted rather than compiled languages, a myriad of specialized libraries, etc.
AI is simply the end game.
What do you mean AI is the end game? That AI will cause mass technological disemployment especially in programming jobs?
Yes.
Formerly there was demand from what amounted to computer-illiterate consumers (financial analysts, gamers, HW manufacturers) for a way to translate the "problems" they wanted solved, or whatever results can be delivered by computational means. SW engineers were those "translators". They had to make a major paradigm shift to get the consumer requirement (what they wanted) encoded in such a way that a computer could chew on it and give a desired (hopefully) result.
It appears to me that pretty much any fairly intelligent person can now frame a request for a result in such a way that no additional translation is needed, and more...
The level of intelligence the human user of AI needs in order to make an effective request is being reduced as the AI model becomes more and more tolerant of individual peculiarities in expressing what it is they want. The models are getting very close to supplying what you actually wanted but were too inarticulate to ask for in any logical fashion.
That is not consistent with what I am currently seeing. Useful tool, yes. Likely to increase productivity greatly, absolutely yes. Complete substitute for a human, or likely to be so in the near future, absolutely not.
The difference is that between being able to do tasks and being able to do all the parts of a job.
I do some programming (50/50) at the moment. I hate to give unsolicited advice, but I’d reconsider, or try to focus on LLMs themselves. Each of us at work is concerned.
Example — older ChatGPT models were mediocre, but would handle some “money work” to write the bones of a script. However, OpenAI’s o1 is VERY good, provided you give it the necessary details in the prompt (sort of like — “be careful what you wish for.”).
6 months ago, ChatGPT was like an intern we needed to guide, but picked up some of the drudge work.
NOW it’s a senior engineer that I refer to.
In 3 months (or this week with r1), holy hell.
A year ago I said “this is going to change things exponentially.” We’re there. I have no idea what the next year will entail, except that the only thing preventing the full use of AI from wiping out my job is ITAR/EAR concerns (which will eventually be figured out — probably also more quickly than normal thanks to an AI boost in the review process).
The time horizon for making predictions about the future is shrinking day by day, just as Kurzweil said would happen approaching the “Singularity.”
AI cannot currently replace developers, except for very simple projects, and does not currently appear to be on a trajectory to do so. It does well on making developers more productive and on creating code, but it is a productivity boost, potentially a big one, not a substitute.
The issue is more the effect on productivity and thus the rate of increase of jobs, not directly replacing workers.
10% of our team has already been axed due to the productivity increase from AI. It is a force multiplier, but if the business prioritizes reducing costs over increasing production, it goes from 10 people doing the job to 9 people. As AI advances, 9 will become 8, then 7, etc. If the business goal switches to “do more” instead of “do the same with less,” then it will stay at 7 for a while instead of hiring an 8th person (again, the AI will advance and make up the difference over time).
It won’t happen overnight, but it is the trend as far as I’ve seen on my team, and others’ experience (Amazon, Google, various local shops, etc.). Perhaps we just suck in comparison to others, but even that would support the observed projection of AI impacts (whittling teams down to a handful of senior engineers approving AI outputs).
I really do wonder how my blue-collar career choice as a machinist will be affected by this. Because in stark contrast to most of the work that is mentioned in this article, the knowledge required to do a job like this, at least in most situations, just isn't readily available on the internet.
Atlas is coming for the blue collar jobs as well, it'll probably just take a few years longer to get there.
How specifically? Knowledge is important but practice and physical skill are even more so.
I think it will take a bit longer for AI research to handle fine-grained hand-eye coordination in real-world applications, but I just don't think there's strong evidence that meat brains are doing something that can't be replicated in silicon.
There's nothing to stop a physical robot from practicing physical skills, and once you have *one* top-grade robot plumber that never needs to sleep or get paid, there's nothing to stop you printing a million more. Their entire neural net can be copied, remember?
I mean, that’s all theoretically possible, but unlike in the field of language models or self-driving cars I’m seeing the same slow pace of progress in the physical realm as there has been my whole life.
You don't think Atlas doing backflips or drone warfare in Ukraine represent advances in the realm of physical robotics?
An excellent summary of the state of things as at 26 January 2025. By the end of next week, who knows what will happen?
For example, just this week I replaced ChatGPT with DeepSeek. Next week, what might appear to replace that?
This is an excellent piece. The jobs in the firing line belong mostly to the upper middle class and lower upper class. Contra liberals, these people have more power than the rich. Contra conservatives, these people have way more power than the poor. That makes the era of AI displacement politically tenuous.
Assuming imminent AI displacement is correct, there are 3 possible distribution outcomes:
1. AI released without regs and way more redistribution. Those displaced at the end of their career will get early retirement. Those in middle age will get a lot of subsidies to encourage them to stay in the workforce. Long-term a lot more public ownership over AI related products to ensure broadly shared gains.
2. AI prohibited/limited. The upper middle class uses its power to strangle where AI can be used. AI adoption is slowed considerably. As AI is adopted, the class able to block it shrinks and the blocking eventually fades out. Basically the process in # 1 happens, but at a slower pace. AI owners and the small class of workers left are made considerably poorer.
Long-term: the transition to a post-AI economy is lengthened, with less disruption and less wealth in the short term.
3. Same as # 1 but no redistribution. The capital class succeeds in blocking redistribution. Lots of upper middle class jobs are eliminated. Those pushed into the lower classes begin organizing them. As you said in this piece, there are going to be a ton of well-educated, ambitious professionals who face a sudden loss of income and status. These folks have the human capital to fight for their interests.
We get a massive class conflict. Eventually the rich lose, but probably with some deaths and a ton of democratic decay before we end up in the same spot as # 1.
As a liberal, you can make an argument for 1 or 2, but 3 is markedly worse. Honestly though, I think the economic challenge of AI is way smaller than the social challenge.
This is the primary challenge: “The gist of our argument is that humans don’t just value the products of our intellect; we also value the process of applying our intellect. So far from enhancing our well-being, a world in which future civilizational advancements are largely automated could give rise to profound ennui.”
I’m not a conservative, but I’ll try to answer why conservatives who think AI is meaningful don’t care about the upcoming transition.
As an outsider looking-in, compared to liberals conservatives are:
1. More personally optimistic;
2. More self-interested;
3. More Darwinian;
4. More risk-tolerant;
5. Marginally more likely to work non-computer jobs.
That combination explains a lot of the difference. Because they are more optimistic and risk-tolerant, they think they’re either in jobs protected from AI or part of the elite class that can survive the transition. From there, their more Darwinian, self-interested worldview takes over: this is going to make us rich; others may lose their jobs, but those who can’t compete should be punished.
I’m a liberal so that sounds harsh but I sincerely believe that explains most of the discrepancy. I’m not trying to insult conservatives.
One other component is at play that explains why conservatives are sincerely less likely to believe AI is important: the difference between the Republican and Democratic parties. Conservatives and liberals get a lot of their beliefs from their respective parties. The Democratic Party is considerably more honest about the economic challenges facing the country, so the risks AI poses for workers are openly discussed.
By contrast the Republican Party is still captured by rich people who just want lower taxes. The economic discourse in the party is considerably more dishonest so the potential challenges of AI haven’t reached as many conservatives.
The potential credentialist and elite reaction to this hypothetical AI future might be that while it can draft better legal contracts than lawyers and develop better product ideas than MBAs, these professions will not go quietly and will use regulatory law to maintain their status, just as many professions already use occupational licensing excessively to limit competition. So, while it becomes much easier for anyone to draft a legal contract, it wouldn't be legally binding unless "written" by a credentialed lawyer. In that sense, AI could become a tool that makes these jobs much easier without threatening their existence. As you mentioned with the Industrial Revolution, there will be a transitionary period in which people will not trust an AI to do high-consequence activities like legal work without review by a human, even if that review is redundant.
>>why, then, are conservatives coping so hard? Can they not face facts?
Most people who do use AI only use it as a search engine.
Many people I know don’t know how it works or why it works. Conservatives I know are still “coping”, repeating anecdotal stories they’ve heard about how AI answers questions incorrectly. Or real life stories about how “AI was used by X company and they had to go back to the way they were doing it and rehire a bunch of people.”
And people think “that couldn’t happen to me.”
The future will involve some form of UBI. There will be no other practical option with massive unemployment and no new jobs or occupations for more than a small percentage of those who lose jobs to automation. Being wealthy and successful is not a virtue in and of itself. Many at the top of our financial plutocracy, whatever their industry, aren’t our friends and don’t care about our interests.
This is an interesting article and its message deserves to be taken seriously. But in making its wake-up call about myopic conservative AI 'cope' it seems paradoxically to manifest a strange myopia of its own. How?
The 'conservative' world of 2025 is not some huge Substack. Yes, AI will, I daresay, be transformative in just the way the article predicts, but the vast majority of people (conservative or otherwise) will witness this transformation without having any opinion (or cope) about it one way or the other. It will just happen to them, with little or no intellectual engagement on their part. That's just the way most people are.
Yes, but I care about *me* and mine. And I *do* know about AI, so it's best to think up individual "survival" strategies as best I can. Be proactive, not reactive like the "most people" you mention.