Discussion about this post

nh · Jan 26 (edited)

As someone working on AI implementation for a Fortune 500 company, I can appreciate the general fears and “coping” about AI’s potential, but this article overlooks some critical realities. While AI has made huge strides in the past 5 years, it’s not limitless. Much of its progress depends on massive datasets, and we’re already hitting limits in terms of quality and availability. Without new data, growth will plateau. And as AI-generated data contaminates its own training sets, we see a degenerative process called “model collapse.”
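The model-collapse idea can be illustrated with a toy simulation (my own sketch, not anything from the article or from production systems): fit a normal distribution to data, then repeatedly refit on samples drawn only from the previous generation's fit. With small per-generation sample sizes, the estimated spread drifts toward zero, so later "models" see an ever-narrower slice of the original distribution.

```python
import random
import statistics

def one_generation(mu, sigma, n, rng):
    """Draw n synthetic samples from the current model and refit it."""
    samples = [rng.gauss(mu, sigma) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

rng = random.Random(0)          # fixed seed for reproducibility
mu, sigma = 0.0, 1.0            # generation 0: the "real" data distribution

# Each generation trains only on the previous generation's outputs.
for generation in range(500):
    mu, sigma = one_generation(mu, sigma, n=20, rng=rng)

# The fitted spread collapses far below the original sigma of 1.0.
print(f"sigma after 500 generations: {sigma:.6f}")
```

The parameters here (500 generations, 20 samples per fit) are arbitrary; the point is only the direction of the effect: each refit loses a little tail mass, and the losses compound instead of averaging out.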

AI also struggles with real-world applications because it can’t make accurate observations about the physical world. This isn’t just a small flaw—it’s a major roadblock for industries that require context, nuance, or real-time interaction with the environment.

Hallucinations remain a serious issue as well. AI still often produces false or nonsensical results, and when it's tasked with complex, multi-step processes, these errors tend to compound. That’s a big problem when reliability is a priority. And while benchmarks and studies are great for PR, they rarely reflect the messy, ambiguous problems we deal with in real-world use cases.
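To put a rough number on the compounding point: under the simplifying assumption that each step of a multi-step process succeeds independently with probability p, the chance that all n steps succeed is p**n. The figures below (95% per step, 20 steps) are illustrative, not measurements from any real system.

```python
def end_to_end_reliability(p: float, n: int) -> float:
    """Probability that n independent steps all succeed, each with probability p."""
    return p ** n

# A step that is "reliable" 95% of the time becomes roughly a coin flip
# once you chain 20 of them together.
print(round(end_to_end_reliability(0.95, 20), 3))  # → 0.358
```

Real pipelines aren't perfectly independent, of course, but correlation rarely rescues you: it mostly changes how fast reliability decays, not whether it does.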

Then there’s the economic angle. If AI were truly as revolutionary as claimed, it would already be profitable. Yet, the major players in the space are burning through cash without demonstrating financial sustainability.

AI is a powerful tool, but it’s far from ready to replace humans in most scenarios. Its best use is as a complement to human intelligence—not a substitute. The article seems to miss that balance entirely.

nh:

Another thought I had, that I think warrants a separate thread / comment: I can’t help but feel like a lot of the fear around AI—and even the hype about benchmarks—is being pushed by the big players in the industry who have a lot to gain from it. These companies know that regulations framed as “safety measures” will mostly hurt smaller competitors who don’t have the resources to comply. Meanwhile, the big players are more than equipped to handle expensive compliance processes, which just helps them lock down their position at the top.

The obsession with flashy benchmarks also feels like PR more than substance. Sure, it’s cool to see AI passing creative tests or mimicking humans in specific scenarios, but how much of that really matters in the real world? A lot of it doesn’t translate into practical, scalable solutions. It’s starting to feel like the big companies are controlling the narrative to make themselves look untouchable while quietly making it harder for anyone else to compete.

