China vows stricter AI safeguards as OpenClaw sparks security fears | South China Morning Post


China has pledged to strengthen artificial intelligence (AI) security, including through a new data property rights framework, at a time when users and businesses are rapidly adopting OpenClaw, a highly coveted but controversial AI agent.

On Monday, Liu Liehong, head of the National Data Administration, said security and compliance had become core challenges as AI spread across industry and daily life.

Speaking at the China Development Forum, Liu cited challenges ranging from copyright disputes over training data and AI-generated content to security threats such as data poisoning – a type of cyberattack that corrupts the data used to train AI models in order to manipulate their behaviour.

“To this end, we are establishing a robust data property rights framework that clearly defines rights and responsibilities for data supply, circulation and usage,” Liu said.

“At the same time, we are advancing an integrated security governance solution that unifies data, technology and network safeguards, delivering the strong security foundation needed to scale AI applications responsibly.”

Security management for AI agents such as OpenClaw, Liu said, would follow the principles of “least privilege, proactive defence and continuous auditing”.

He noted that addressing these challenges would require coordinated action from AI providers, end users and regulators.
