BEIJING: China’s cyberspace authority has released a draft set of rules aimed at tightening control over AI technologies that mimic human personalities and foster emotional engagement with users. The new guidelines reflect Beijing’s intent to steer the fast-growing consumer AI sector with a focus on safety and ethical standards.
The proposed regulations target publicly accessible AI services that exhibit human-like traits, such as personality, thought processes, and communication styles, and that engage users emotionally through text, images, audio, and video.
Key provisions include mandatory warnings to prevent overuse and a requirement that providers intervene if users show signs of addiction. Companies offering these AI services would bear ongoing safety responsibilities, covering areas such as algorithm oversight, data protection, and the safeguarding of personal information.
The draft places particular emphasis on psychological risks, urging providers to monitor users' emotional states and levels of dependence. Should signs of severe emotional distress or addictive behavior emerge, providers are expected to take appropriate corrective action.
Additionally, the rules specify strict boundaries for content, prohibiting AI from generating material that could threaten national security, spread false information, incite violence, or promote obscenity. This move signals China’s effort to regulate the evolving AI landscape, ensuring the technology serves societal interests responsibly.

