Friday, January 23, 2026

AI luminaries at Davos clash over how close human-level intelligence really is

Some of the world’s best-known names in artificial intelligence descended on the small ski resort town of Davos, Switzerland, this week for the World Economic Forum (WEF).

AI dominated many of the discussions among corporations, government leaders, academics, and non-governmental groups. Yet a clear contrast emerged over how close current models are to replicating human intelligence and what the likely near-term economic impacts of the technology will be.

The large language models (LLMs) that have captivated the world are not a path to human-level intelligence, two AI experts asserted in separate remarks at Davos.

Demis Hassabis, the Nobel Prize-winning CEO of Google DeepMind, and the executive who leads the development of Google’s Gemini models, said today’s AI systems, as impressive as they are, are “nowhere near” human-level artificial general intelligence, or AGI.

Yann LeCun—an AI pioneer who won a Turing Award, computer science’s most prestigious prize, for his work on neural networks—went further, saying that the LLMs that underpin all of the leading AI models will never be able to achieve human-like intelligence and that a completely different approach is needed.

Their views differ starkly from those of top executives at Google’s leading AI rivals, OpenAI and Anthropic, who claim that their AI models are about to rival human intelligence.

Dario Amodei, the CEO of Anthropic, told an audience in Davos that AI models would replace the work of all software developers within a year and would reach “Nobel-level” scientific research in multiple fields within two years. He said 50% of white-collar jobs would disappear within five years.

OpenAI CEO Sam Altman (who was not at Davos this year) has said we are already beginning to slip past human-level AGI toward “superintelligence”, or AI that would be smarter than all humans combined.

In a joint WEF appearance with Amodei, Hassabis said there was a 50% chance that AGI would be achieved within the decade, though not through models built exactly like today’s AI systems.

In a later, Google-sponsored talk, he elaborated that “maybe we need one or two more breakthroughs before we’ll get to AGI.” He identified several key gaps, including the ability to learn from just a few examples, the ability to learn continuously, better long-term memory, and improved reasoning and planning capabilities.

“My definition of [AGI] is a system that can exhibit all the cognitive capabilities humans can—and I mean all,” he said, including the “highest levels of human creativity that we always celebrate, the scientists and artists we admire.” While advanced AI systems have begun to solve difficult math problems and tackle previously unproved conjectures, he said, AI will need to develop its own breakthrough conjectures—a “much harder” task—to be considered on par with human intelligence.
