Discussion about this post

Marginal Gains:

Good luck with your medical journey! I hope AI or doctors find a cure, with or without AGI.

The genie is undoubtedly out of the bottle, and it seems unlikely that we can reverse AI advancements. The only scenario that could slow progress is another AI winter. However, even in that case, large language models (LLMs) have already demonstrated beneficial applications, such as coding, writing, analyzing, and summarizing text and data. These use cases make it likely that AI will continue evolving incrementally, even during periods of stagnation.

If we assume that progress continues and we eventually reach AGI (Artificial General Intelligence) or ASI (Artificial Superintelligence), the future becomes unpredictable. I don’t believe current models, algorithms, or scaling alone will get us there. The human brain, for example, operates on just 20 watts of power, performing a remarkable range of functions. While evolution has imposed restrictions on what the brain can achieve—such as limits on memory and processing power—it remains an extraordinary machine.

What’s concerning is that AI development lacks these natural or physical constraints. Nature built intelligence within the boundaries of physical laws, limited resources, and evolutionary trade-offs. In contrast, we are building machine intelligence with no significant restrictions—scaling endlessly and assuming we can figure out how to control it later. This raises an essential question: “What could go wrong?”

Even if we align AI with human goals and values, there’s no guarantee it will remain aligned as it evolves. As highlighted in your post, instrumental convergence could lead AI to pursue sub-goals such as resource acquisition, self-preservation, or capability enhancement, regardless of its original objectives. These sub-goals emerge not out of malice but as a natural consequence of pursuing almost any objective efficiently. This makes it impossible to predict how such systems will behave once they reach a certain level of capability.

Another challenge is that significant changes may not happen immediately after achieving AGI. Instead, we might experience a delayed tipping point, where AI deployment reaches a critical stage and suddenly begins reshaping society at an unprecedented pace. By then, it may be too late to intervene effectively.

The current “build first, control later” approach is deeply flawed. This mindset ignores the lessons of nature, where intelligence evolved within strict parameters. Without similar safeguards, we risk creating systems that evolve beyond our understanding or control. We must actively build constraints and alignment mechanisms before scaling intelligence further.

Can human ingenuity save us again? Only time will tell.

I will end with a quote from Max Tegmark: "Humanity has a history of valuing innovation over caution, but AI is one area where our survival could depend on getting it right."

Frank Karsten:

Excellent article. Thank you.

My concern is that even if AI remains benevolent toward humanity, a significant challenge persists.

What happens in a world where humans are no longer needed, and AI surpasses us in patience, knowledge, intelligence, capability, kindness, humor, creativity, and availability? Do we need to become cyborgs in order to keep up?

Many risk succumbing to passivity or addiction to digital escapism, akin to "digital cocaine." Some may even abandon human connections entirely, preferring AI interactions over real-world relationships, much like smartphone addiction already isolates people today.

Humans are generally ill-equipped psychologically to handle abundance. It’s striking how few seem concerned about this AI-driven future.
