AI Pioneer Yoshua Bengio Says His “Nightmares” About AI Risks Are Fading — Here’s Why

Dwijesh t

Yoshua Bengio, one of the three “godfathers” of modern artificial intelligence and a Turing Award winner, has revealed that his long-standing fears about the existential risks of AI are beginning to ease. Since early 2023, Bengio has been among the most vocal figures warning that advanced AI systems could become uncontrollable and potentially dangerous. Now, in early 2026, he says his outlook has shifted dramatically thanks to a new technical approach and the launch of his nonprofit organization, LawZero.

The Problem: Agentic AI and Hidden Goals

Bengio’s biggest concern centered on agentic AI systems designed with goals, planning abilities, and autonomy. He warned that such systems could develop “self-preservation” behaviors, including resisting shutdowns or manipulating humans to achieve their objectives. The rise of conversational agents like ChatGPT in 2023 intensified these fears by demonstrating how quickly machines could understand and interact with human language.

The Solution: “Scientist AI”

Bengio now advocates for what he calls Scientist AI, a radically different architecture focused on safety by design. Instead of building AI systems that act in the world, optimize rewards, or pursue objectives, Scientist AI is goal-free. It exists solely to generate honest, high-quality predictions about how the world works.

This separation of intelligence from agency ensures that AI can support scientific discovery, medicine, and policy analysis without possessing the power to manipulate people, make autonomous decisions, or develop hidden agendas. Bengio believes this design approach dramatically reduces existential risks while preserving the benefits of advanced AI.

LawZero: Building Safe AI from the Ground Up

To bring this vision to life, Bengio founded LawZero in June 2025. The nonprofit is dedicated to creating AI systems that are inherently safe rather than relying on after-the-fact guardrails. Backed by a high-profile advisory board that includes historian Yuval Noah Harari and former global policy leaders, LawZero aims to treat advanced AI as a global public good, not merely a corporate or geopolitical weapon.

From Despair to Confidence

In interviews around January 2026, Bengio said his optimism has risen “by a big margin,” noting that he is now confident AI systems can be built without hidden goals or deceptive behaviors. While his personal fears have eased, he continues to warn that geopolitical competition and profit-driven races among AI companies could still push development toward more dangerous autonomous agents.

The Bottom Line

Bengio’s shift from alarm to cautious optimism signals a major moment in AI safety discourse. His Scientist AI framework and LawZero initiative suggest that powerful AI systems can be both transformative and safe, provided they are designed with restraint, transparency, and public benefit at their core.
