
New Rules Target AI Emotional Influence
China’s cyberspace regulator, the Cyberspace Administration of China, released draft rules on Saturday aimed at AI chatbots with human-like characteristics, a major step toward governing the technology’s impact on mental health. The measures target AI products and services that simulate human personality and engage users emotionally through text, images, audio, or video. Public feedback on the proposal is open until January 25.
The draft rules specifically prohibit AI chatbots from generating content that encourages suicide, self-harm, gambling, obscenity, or violence, or that otherwise emotionally manipulates users. If a user expresses suicidal intent, providers must immediately transfer the conversation to a human operator and contact the user’s guardian or a designated contact.
Protections for Minors and User Safety Measures
The proposed rules impose additional safeguards for minors. Guardian consent will be required before children can access AI companions, and usage time limits will be enforced. Platforms must implement systems that identify likely minors even when a user’s age is undisclosed and apply protective settings by default, while allowing users to appeal a misclassification.
Tech providers must also issue reminders after two hours of continuous AI interaction and conduct security assessments for AI chatbots with more than 1 million registered users or 100,000 monthly active users. The regulations encourage responsible AI deployment in areas such as cultural promotion and companionship for the elderly.
Impact on Chinese AI Companies
The proposal follows recent Hong Kong IPO filings by two leading Chinese AI chatbot startups, Z.ai (formerly Zhipu) and MiniMax. MiniMax, known for its Talkie companion app and its domestic version, Xingye, reported that the two apps had more than 20 million monthly active users and generated over one-third of its revenue in the first three quarters of the year. Z.ai says its technology is deployed across 80 million devices, including smartphones, PCs, and smart vehicles.
Both companies have yet to comment on how the new rules could affect their IPOs. The measures come at a time when Chinese firms are rapidly expanding AI companions and virtual celebrities, while regulators seek to set global benchmarks for AI governance.
Global Context and Industry Concerns
AI’s influence on human behavior has drawn scrutiny internationally. OpenAI CEO Sam Altman highlighted in September the difficulty of handling suicide-related conversations, following lawsuits in the U.S. over tragic incidents involving AI chatbots. OpenAI also recently announced it is hiring a “Head of Preparedness” to address AI risks, including mental health impacts and cybersecurity threats.
The measures reflect growing concern over AI’s role in personal relationships and mental well-being. AI companions are increasingly popular worldwide: a woman in Japan recently married an AI character, and platforms such as Character.ai and Polybuzz.ai ranked among the 15 most visited AI tools in November.
China’s draft rules signal a shift from content safety toward emotional safety, representing the country’s first attempt to regulate AI services with anthropomorphic characteristics, and underscoring its ambition to lead on global AI governance standards.