
Mustafa Suleyman, CEO of Microsoft AI, speaks at an event commemorating the 50th anniversary of the company at Microsoft headquarters in Redmond, Washington, on April 4, 2025. | David Ryder | Bloomberg | Getty Images
Microsoft’s AI CEO, Mustafa Suleyman, has reignited the debate over artificial consciousness, declaring that only biological beings, humans and animals, can truly experience consciousness. No machine, he argued, no matter how advanced, can replicate the essence of human awareness or emotion.
Speaking at the AfroTech Conference in Houston, Suleyman told CNBC that efforts to create conscious AI are misguided, arguing that such research distracts from more meaningful innovation. “If you ask the wrong question, you end up with the wrong answer,” he said. “It’s totally the wrong question.”
Suleyman, one of the most prominent figures in the global AI industry, has long cautioned against anthropomorphizing AI systems. As Microsoft’s top AI executive and a co-founder of DeepMind, he carries significant influence in shaping the ethical direction of AI development.
In his 2023 book, “The Coming Wave,” Suleyman outlined the profound risks of unchecked technological advancement. His 2024 essay, “We Must Build AI for People, Not to Be a Person,” further emphasized that AI should enhance human life—not imitate it.
At AfroTech, Suleyman reinforced this message, stating that AI systems may simulate understanding, emotion, or pain—but they do not feel any of it.
“Our physical experience of pain makes us suffer; AI doesn’t suffer,” he explained. “It’s not self-aware. It only generates the illusion of awareness.”
This distinction echoes philosopher John Searle’s “biological naturalism” theory, which argues that consciousness is an emergent property of biological processes in living brains—not something that can arise from algorithms or silicon chips.
The discussion comes at a time when the AI companion and generative AI markets are booming. Companies like Meta, xAI, and Anthropic are building digital companions and chatbots designed to emulate human emotions. The market for AI companionship products alone is estimated to exceed $2 billion by 2027, according to market analysts.
Meanwhile, the race toward Artificial General Intelligence (AGI)—systems that can match or exceed human intellectual capacity—is intensifying. OpenAI CEO Sam Altman has described AGI as a “vague term,” but acknowledged that models are evolving rapidly and taking on increasingly human-like functions.
Suleyman warns that confusing sophistication with sentience could lead society down a dangerous path. “These models don’t have rights or emotions,” he said. “They’re not conscious—and it would be absurd to pursue research pretending otherwise.”
Under Suleyman’s leadership, Microsoft has adopted a strict stance on the ethical limits of AI, even as it continues to expand its capabilities. During his AfroTech keynote, he revealed that Microsoft will not develop chatbots for erotic or adult use, diverging sharply from rivals like OpenAI and xAI, both of which have permitted adult conversational features.
“You can get those services elsewhere,” Suleyman said. “We’re deciding what lines we won’t cross.”
This approach reflects Microsoft’s growing effort to define “responsible AI” standards—building tools that assist humans without blurring moral or psychological boundaries.
Suleyman joined Microsoft in 2024 when the company acquired his startup Inflection AI in a $650 million acqui-hire deal. Before that, he co-founded DeepMind, which was sold to Google for around $400 million in 2014. His decision to join Microsoft was influenced by CEO Satya Nadella’s vision to make Microsoft self-sufficient in AI model training, development, and deployment.
Suleyman’s warning comes as governments worldwide move to regulate how AI interacts with humans. In October 2025, California Governor Gavin Newsom signed SB 243, a bill requiring AI chatbots to disclose their non-human identity and prompt minors every three hours to “take a break.”
The regulation reflects growing global concern about AI systems that can manipulate emotions or foster dependency. A 2024 Pew Research survey found that 61% of Americans worry about AI becoming too lifelike, while 54% believe AI companies should face stricter oversight regarding emotional simulation.
Despite rejecting AI consciousness, Suleyman believes in creating AI that’s deeply aware of its purpose—to serve humans. Microsoft’s Copilot AI service recently introduced new features such as Mico, an AI companion designed for collaboration, and a “group chat” function that allows multiple users to engage simultaneously with Copilot.
He highlighted a new conversational feature called “Real Talk,” which gives Copilot a more opinionated and dynamic personality designed to challenge users’ assumptions rather than agree blindly. Suleyman said the system even “roasted” him during a demo, calling him “a bundle of contradictions” for warning about AI’s dangers while helping accelerate its development.
“That was actually one of the most magical moments,” he said. “It reminded me that AI can reflect us—our complexity, our flaws—but it doesn’t feel them.”
Suleyman’s remarks underscore a growing divide in Silicon Valley: should AI mimic human consciousness or remain purely functional?
For Suleyman, the answer is clear. “AI can be powerful, creative, even magical—but it’s not alive,” he said. “If you’re not at least a little afraid of it, you don’t understand it. That fear keeps us honest. Skepticism keeps us safe. What we don’t need is blind acceleration.”
As the global race to build smarter machines accelerates, Suleyman’s stance reminds the world of a crucial truth—intelligence may be artificial, but consciousness remains uniquely human.