The Clock Is Ticking Faster Than We Think

Why I Align with Anthropic’s CEO on the Imminent Risks of AI

Jim Leone

7/30/2025

In recent months, the tone around AI risk has subtly but noticeably shifted. The grandiose visions of AI revolutionizing everything, from productivity to medicine, still dominate headlines. But underneath the surface, those closest to the core of AI research are expressing something very different: a growing unease. I count myself among them.

I align strongly with Anthropic’s CEO, Dario Amodei, who has repeatedly emphasized a “shorter timeline” to truly disruptive AI: not decades from now, but possibly just years away. And yet, mainstream narratives continue to undersell this. They paint AI as a powerful tool that merely needs tweaking and “alignment,” when the reality is far more volatile. We are racing toward a moment of irreversible imbalance (technological, economic, and social) with no brakes, no oversight, and very little public understanding of what’s coming.

Exponential Growth, Not Linear Evolution

AI is not growing incrementally; it’s growing exponentially. Capabilities once thought impossible are now product features. Agents that mimic human reasoning, generate code, pass bar exams, or manipulate images in hyperrealistic ways were fringe ideas less than five years ago. Now they’re accessible on your phone.

But public perception hasn’t caught up. Many still believe we’re decades away from AI posing a real existential risk. That’s because humans tend to think linearly, while AI development is anything but.
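To see how far apart those two intuitions drift, here is a toy sketch in Python. The one-year doubling time is an illustrative assumption, not a measured rate; the point is the shape of the curve, not the specific numbers.

    # A toy comparison of linear intuition vs. exponential progress.
    # The one-year doubling time is an illustrative assumption, not a measured rate.
    def linear_gain(years, step=1.0):
        """Capability under the 'steady improvement' mental model."""
        return 1.0 + step * years

    def exponential_gain(years, doubling_time=1.0):
        """Capability if it doubles every `doubling_time` years."""
        return 2.0 ** (years / doubling_time)

    for y in (1, 3, 5, 10):
        print(f"{y:>2} yr: linear {linear_gain(y):.0f}x, exponential {exponential_gain(y):.0f}x")

Both curves look similar for the first year or two, which is exactly when linear intuition feels vindicated. By year ten, the linear model predicts an 11x gain while the exponential one predicts 1,024x.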

Anthropic, OpenAI, Google DeepMind... they all report capability jumps, not gradual shifts. Once an AI model crosses a certain threshold in scale or optimization, it doesn’t just improve a little; it transforms. That transformation can be cognitive, strategic, or even manipulative.

Now imagine that power in the wrong hands, or worse, in no hands at all.

Capital Is Pouring In, Fast

Another concerning signal is the sheer scale of capital investment flooding into AI. In 2024 alone, Microsoft committed over $100 billion toward AI infrastructure, GPUs, and research. Amazon, Google, Meta, and even NVIDIA are now building AI-focused supercomputing ecosystems.

This isn’t long-term speculation. These companies are placing massive bets that AI will dominate everything... and soon. They’re not preparing for AGI in 2050. They’re anticipating something far more imminent.

And where’s the oversight? Nowhere. There is no global regulatory framework for AI deployment, usage boundaries, or ethical alignment. There’s no enforceable audit system for how models are trained, what data they consume, or whether they can be used for autonomous decision-making. The markets move faster than laws, and the consequences of that mismatch could be catastrophic.

Add Quantum to the Mix, and You’ve Got a Storm

While AI dominates the headlines, quantum computing is quietly surging forward. Companies like IBM and Microsoft, along with Chinese research institutions, have achieved major breakthroughs in qubit stability, error correction, and scalable architectures. These are not hypothetical achievements. They are real, funded, and accelerating.

Why does that matter? Because quantum computing doesn’t just speed up AI; it breaks the very assumptions we’ve built around safety, cryptography, and control.

  • Encrypted data? Today’s public-key schemes fall to Shor’s algorithm once machines are large enough, and data harvested now can be decrypted later (a rough sense of the scale follows below).

  • Model behavior monitoring? Rendered ineffective when inference happens faster than we can observe.

  • Predictable algorithmic boundaries? Upended by hybrid quantum-AI models that defy standard logic.

If AI is the fire, quantum is the accelerant.
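To give the encryption bullet some teeth: the worry is a complexity argument, not literal seconds. Here is a back-of-envelope sketch using textbook query counts; the speedups are established theory, while wall-clock times depend on fault-tolerant hardware that does not yet exist at scale.

    # Back-of-envelope scale of quantum attacks on today's cryptography (illustrative).
    # Grover's algorithm searches an n-bit keyspace in ~2^(n/2) queries instead of
    # ~2^n classical trials; Shor's algorithm breaks RSA/ECC outright on a large
    # fault-tolerant machine. These are query counts, not wall-clock times.
    for bits in (128, 256):
        classical_trials = 2 ** bits       # brute force against an AES-style key
        grover_queries = 2 ** (bits // 2)  # quadratic quantum speedup
        print(f"AES-{bits}: ~2^{bits} classical trials vs ~2^{bits // 2} Grover queries")

Even a quadratic speedup halves the effective length of symmetric keys, and Shor’s polynomial-time factoring is the reason post-quantum cryptography migration has already begun.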

We Need Regulation Yesterday

We’re entering a phase of asymmetry. A few actors (governments, corporations, even individuals) will hold more power than entire nations. And it won’t come through weapons or elections. It will come through compute.

Without strong regulatory guardrails, we are incentivizing the exact opposite of safety:

  • Build faster.

  • Scale bigger.

  • Beat your competitors at any cost.

Regulation must happen now, not after the tipping point. That means:

  • Mandatory AI safety audits.

  • Transparent model disclosures.

  • Limits on black-box deployment.

  • Oversight on quantum-AI convergence.

  • International agreements on autonomous system restrictions.

We cannot rely on corporate promises or closed-door AI safety boards. We need enforceable frameworks backed by law and monitored independently.

The Skew Is Real

We are witnessing a historic imbalance between capability and control, between innovation and introspection. If we don’t act, we’ll find ourselves living in a world built by systems we no longer understand and can’t turn off.

It’s not just a technical problem. It’s a human survival problem.

And like Anthropic’s CEO, I believe we’re much closer to that inflection point than most care to admit.