As artificial intelligence accelerates toward human-level capabilities, one of the world’s top AI minds is urging the public to shift focus—from fears of losing jobs to fears of losing control.
Demis Hassabis, the CEO of Google DeepMind and a Nobel laureate in chemistry, stated at the 2025 SXSW conference in London that while AI could disrupt traditional employment, the greater risk lies in how this powerful technology might be misused by malicious actors.
“Yes, job displacement is a concern,” Hassabis said during an interview with CNN’s Anna Stewart, “but the far more urgent issue is: What happens if these tools fall into the wrong hands?”
This statement comes just a week after Anthropic CEO Dario Amodei warned that up to 50% of entry-level white-collar jobs could vanish as AI models become more capable. Hassabis, however, emphasized that the rise of artificial general intelligence (AGI)—AI with human-level cognition—could be weaponized by bad actors if access remains unregulated.
AI has made astonishing strides in recent years. Models like OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini are not only writing code, composing music, and generating images, but are also powering real-time decision-making in finance, defense, and healthcare.
But with that power comes new danger.
These dangers are not hypothetical. “We’re already seeing the dark side,” Hassabis said. “A bad actor doesn’t need to build a supercomputer. They just need access to the right model.”
The lack of global consensus on how to manage AI is compounding the problem. While the EU has begun implementing its AI Act, and China has placed heavy restrictions on generative AI platforms, the United States remains fragmented in its approach.
In February 2025, Google quietly removed language from its own AI ethics page, eliminating its earlier pledge not to use AI for military applications or surveillance.
“There should be international agreements on how AI can be used and by whom,” Hassabis noted. “Much like nuclear arms treaties, we need a framework that ensures AI remains a force for good.”
Yet with increasing geopolitical tension between the U.S., China, and Russia, Hassabis admitted that reaching consensus will be difficult—at least for now.
Despite his warnings, Hassabis is also optimistic about AI’s potential to improve daily life. He envisions a future where individuals rely on personal AI agents—virtual assistants that handle admin tasks, curate recommendations, and even help discover social connections.
“Think of it as a universal AI companion,” he said. “Not just answering your questions, but helping you live a more productive, enriched life.”
This vision aligns with Google's broader strategy. The company is actively integrating AI into products like Search, Gmail, and Google Docs, and is experimenting with AI-powered smart glasses, turning science fiction into commercial reality.
The fear that AI will lead to widespread job losses is not unfounded. Meta CEO Mark Zuckerberg, for instance, recently said he expects AI to write 50% of the company’s code by 2026. At the same time, companies like Klarna, Amazon, and IBM have cut thousands of jobs while investing heavily in AI-based automation.
Yet Hassabis remains unconvinced that AI will lead to long-term unemployment. Instead, he draws parallels to previous technological revolutions.
“When the internet arrived, some jobs disappeared—but many more emerged. I believe we’ll see a similar transition here.”
He added that productivity gains from AI could allow societies to reduce working hours, increase wages, or fund public services—if policymakers find ways to equitably distribute those gains.
Despite its promise, today’s AI still suffers from critical flaws, from biased outputs to hallucinated facts, and high-profile mishaps remain common.
Such incidents underscore why human oversight and strong governance remain essential.
Demis Hassabis is not downplaying AI’s impact on the workforce. But he’s pushing for a broader, more strategic conversation—one that prioritizes safety, governance, and equitable access.
“This technology is transformational,” he said. “But if we don’t build the right guardrails now, we’ll regret it later.”
As AI edges closer to matching—and in some areas, exceeding—human ability, the race is no longer just about who builds the smartest model. It’s about who ensures it’s used responsibly.