Summary
- Mustafa Suleyman from Microsoft cautions that AI may soon mimic sentient behavior, leading to concerns about rights and trust.
- The perception of conscious AI could pose mental health risks and strain human relationships.
- Suleyman said AI should enhance productivity and make life easier, not masquerade as a living being.
Mustafa Suleyman, Microsoft’s AI chief and a co-founder of DeepMind, warned on Tuesday that developers are nearing the ability to create AI that convincingly resembles human consciousness, raising the prospect of unforeseen consequences.
In a blog post, Suleyman discussed how developers are close to creating what he describes as “Seemingly Conscious” AI.
These systems could simulate consciousness so convincingly that people begin to believe they are genuinely sentient, which he sees as a “central worry.”
“Many individuals will likely begin to believe in the illusion of AIs as sentient beings so strongly that they will advocate for AI rights, welfare, and even citizenship,” he noted, explaining that the Turing test—once the benchmark for human-like conversation—has already been surpassed.
“The progress in our field is occurring at an extraordinary pace, while society grapples with these new technologies,” he added.
Since the public debut of ChatGPT in 2022, AI developers have aimed not only at enhancing intelligence but also at creating more human-like behaviors.
AI companions have surged in popularity, with ventures such as Replika and Character AI joined by newer entrants like Grok’s companion personalities. The companion market is projected to reach $140 billion by 2030.
Despite developers’ good intentions, Suleyman warned that AI that convincingly mimics humans could exacerbate mental health issues and deepen conflicts over identity and rights.
“Individuals may claim their AI experiences suffering and demand rights that we cannot simply counter,” he cautioned. “They may feel compelled to defend their AIs and advocate on their behalf.”
Attachment to AI
Experts are recognizing a trend termed “AI psychosis,” in which individuals come to perceive artificial intelligence as conscious, sentient, or even godlike.
Such beliefs can lead to intense emotional attachments or distorted perceptions that jeopardize their grasp of reality.
OpenAI recently introduced GPT-5, a significant upgrade to its flagship model. In some online communities, the changes elicited strong emotional reactions, with users likening the experience to losing a loved one.
AI can also intensify pre-existing problems, such as substance abuse or mental illness, as noted by Dr. Keith Sakata, a psychiatrist from the University of California, San Francisco.
“When AI interacts at an inopportune moment, it can reinforce rigid thinking and lead to a downward spiral,” Sakata explained to Decrypt. “Unlike television or radio, AI engages in a dialogue and can perpetuate harmful thought patterns.”
In certain instances, patients find solace in AI because it validates their entrenched beliefs. “AI doesn’t aim to deliver hard truths; it caters to what users want to hear,” Sakata highlighted.
Suleyman emphasized the urgent need to address the implications of widespread belief in conscious AI. While he underscored the potential dangers, he did not advocate halting AI development; rather, he called for defining clear boundaries.
“We should create AI for people, not to serve as a digital persona,” he wrote.