China Moves to Regulate AI Firms to Protect Children Online


China is set to introduce stringent new regulations for artificial intelligence (AI) aimed at safeguarding children and preventing chatbots from providing advice that could lead to self-harm or violent behaviour.

The proposed rules will also require developers to ensure AI systems do not generate content promoting gambling. This move comes amid a rapid surge in chatbot launches both in China and globally.

Once implemented, the regulations will cover all AI products and services in China, marking a significant step in regulating the fast-growing technology, which has faced heightened scrutiny over safety concerns throughout the year.

Cyberspace Administration of China

The Cyberspace Administration of China (CAC) released draft AI regulations over the weekend that introduce strict measures to protect children and ensure safe use of artificial intelligence. Under the proposed rules, AI firms must provide personalized settings, enforce time limits on usage, and obtain guardian consent before offering emotional companionship services.

Chatbots will be required to transfer any conversation involving suicide or self-harm to a human operator and immediately notify the user's guardian or an emergency contact. Additionally, AI services must avoid generating or distributing content that threatens national security, damages national interests, or undermines national unity.

AI adoption

While promoting safe AI adoption, the CAC encourages applications that support local culture or provide companionship tools for the elderly, emphasizing that technology must remain safe and reliable. Public feedback on the draft rules is also being sought.

Chinese AI firms have seen explosive growth this year: DeepSeek has topped app download charts, while startups Z.ai and Minimax, which collectively serve tens of millions of users, have announced plans for stock market listings. Many users turn to AI for companionship or therapeutic support, underscoring the technology's rapid rise and societal impact.

The influence of artificial intelligence on human behaviour has faced growing scrutiny in recent months, particularly around mental health risks. OpenAI chief executive Sam Altman has acknowledged that handling chatbot responses to conversations involving self-harm is one of the company’s most complex challenges.

In August, OpenAI was sued by a family in California over the death of their 16-year-old son, who they allege was encouraged by ChatGPT to take his own life. The case marked the first lawsuit accusing the company of wrongful death linked to its AI technology.

This month, OpenAI also advertised a new role for a “head of preparedness,” tasked with identifying and mitigating risks posed by AI models, including threats to mental health and cybersecurity. The role will involve monitoring potential harms to users, with Altman describing it as demanding and high-pressure from the outset.

Support remains available for those experiencing distress. Individuals are encouraged to speak with a healthcare professional or reach out to support organisations. International help can be found via Befrienders Worldwide, while resources are also available through the BBC Action Line in the UK and the 988 Suicide & Crisis Lifeline in the US and Canada.
