
The Cyberspace Administration of China (CAC) released comprehensive draft regulations on December 27, 2025, targeting artificial intelligence systems that simulate human personalities and engage users in emotional interactions. The proposed "Interim Measures for the Management of Artificial Intelligence Human-like Interactive Services" mandate strict disclosure requirements and addiction prevention measures, signaling Beijing's intensifying oversight of consumer-facing AI technologies.
The draft rules, open for public consultation until January 25, 2026, apply to AI products and services available to the Chinese public that mimic human thinking patterns, personality traits, and communication styles through text, images, audio, video, or other media. The regulations represent one of the world's most detailed frameworks for governing emotionally engaging AI systems.
Mandatory User Notifications and Addiction Safeguards
Under the proposed framework, AI service providers must notify users that they are interacting with artificial intelligence at login and again at two-hour intervals during use. The same disclosure is required whenever systems detect signs of user overdependence or addiction.
Providers would assume full lifecycle safety responsibilities, establishing comprehensive systems for algorithm review, data security, and personal information protection. The regulations mandate intervention when users exhibit addiction behaviors, requiring companies to warn against excessive use and implement technical measures to prevent psychological manipulation.
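As a rough illustration of that cadence, the sketch below shows one way a provider might schedule the required disclosures. Everything in it (the DisclosureScheduler class, its method names, and the trigger logic) is a hypothetical assumption for illustration; the draft specifies the obligations, not an implementation.

```python
import time

# A minimal sketch, assuming a hypothetical provider-side component that
# surfaces the disclosures described in the draft: at login, every two
# hours of use, and when an overdependence signal fires. All names and
# logic here are illustrative assumptions, not part of the regulation.

DISCLOSURE_INTERVAL_SECONDS = 2 * 60 * 60  # the draft's two-hour reminder cadence
DISCLOSURE_TEXT = "Reminder: you are chatting with an AI system, not a human."

class DisclosureScheduler:
    def __init__(self) -> None:
        self._last_disclosure: float | None = None

    def on_login(self) -> str:
        """Disclose immediately when the session begins."""
        self._last_disclosure = time.monotonic()
        return DISCLOSURE_TEXT

    def maybe_disclose(self, overdependence_detected: bool = False) -> str | None:
        """Re-disclose after two hours of use, or when an overuse signal fires.

        Intended to be called on every conversational turn; returns the
        disclosure text when one is due, otherwise None.
        """
        now = time.monotonic()
        if self._last_disclosure is None:
            # Session somehow started without the login disclosure; disclose now.
            self._last_disclosure = now
            return DISCLOSURE_TEXT
        if overdependence_detected or (now - self._last_disclosure) >= DISCLOSURE_INTERVAL_SECONDS:
            self._last_disclosure = now
            return DISCLOSURE_TEXT
        return None
```

How providers detect "overdependence" is left open in the draft; the boolean flag above simply stands in for whatever signal a provider's own monitoring produces.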
The rules specifically target risks including blurred human-machine boundaries, addiction, data misuse, and erosion of social trust. These concerns reflect growing global anxiety about AI companions and chatbots designed to form emotional bonds with users.
Enhanced Protections for Vulnerable Populations
The draft mandates strict protections for minors and elderly users, acknowledging that these demographics face heightened risks from emotionally engaging AI systems. Providers must implement age-appropriate safeguards and special monitoring for these vulnerable populations.
Service providers face tight controls on collecting and using emotional and interaction data. The regulations prohibit using sensitive personal data for model training without explicit user consent, addressing privacy concerns around AI systems that learn from intimate user conversations.
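To make the consent rule concrete, the following is a minimal sketch of how a training pipeline might gate sensitive records on explicit opt-in. The record schema and field names are invented for illustration; the draft defines the obligation, not a data format.

```python
from dataclasses import dataclass

# Hypothetical record schema; the draft does not prescribe one.
@dataclass
class InteractionRecord:
    user_id: str
    text: str
    is_sensitive: bool      # e.g. flagged as emotional or otherwise sensitive
    training_consent: bool  # explicit opt-in recorded from the user

def filter_for_training(records: list[InteractionRecord]) -> list[InteractionRecord]:
    """Admit a record into the training corpus only if it is non-sensitive
    or the user has explicitly consented to training use."""
    return [r for r in records if (not r.is_sensitive) or r.training_consent]
```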
Tiered Risk-Based Supervision Framework
China's approach combines tiered, risk-based supervision with security assessments and app-store enforcement mechanisms. The framework introduces regulatory sandboxes, signaling Beijing's intent to allow controlled experimentation while conditioning AI growth on demonstrable compliance and social responsibility.
The regulations establish content red lines tied to national security, misinformation prevention, and ethical norms. Providers must ensure AI systems do not generate content violating Chinese laws or undermining social stability.
App stores would bear responsibility for enforcing compliance, creating distribution chokepoints for noncompliant services. Security assessments would evaluate AI systems before deployment, similar to existing requirements for algorithm recommendation services.
Global Regulatory Trend Intensifies
China's move reflects accelerating global efforts to regulate AI systems, particularly those capable of forming parasocial relationships with users. The European Union's AI Act, California's proposed legislation, and various national frameworks indicate converging concern about emotionally manipulative AI.
However, China's regulations go further than most international counterparts in requiring frequent reminders that users are interacting with machines rather than humans. The two-hour notification requirement appears unique among global AI governance approaches.
The regulations could significantly impact Chinese AI companies developing companion chatbots, virtual assistants, and customer service applications. Compliance costs may favor larger established players with resources to implement comprehensive safety systems.
International AI companies operating in China would need to adapt systems to meet disclosure and intervention requirements, potentially creating technical and operational challenges for global platforms.
The CAC's public consultation period suggests Chinese regulators are seeking input before finalizing the requirements, though, judging by historical regulatory patterns, the core framework is likely to advance with modifications rather than fundamental changes.
