China vows stricter AI safeguards as OpenClaw sparks security fears | South China Morning Post
China has pledged to strengthen artificial intelligence (AI) security, including through a new data property rights framework, at a time when users and businesses are rapidly adopting the popular but controversial OpenClaw.
On Monday, Liu Liehong, head of the National Data Administration, said security and compliance had become core challenges as AI spread across industry and daily life.
Speaking at the China Development Forum, Liu cited challenges ranging from copyright disputes over training data and AI-generated content to security threats such as data poisoning – a type of cyberattack that manipulates AI models.
“To this end, we are establishing a robust data property rights framework that clearly defines rights and responsibilities for data supply, circulation and usage,” Liu said.
“At the same time, we are advancing an integrated security governance solution that unifies data, technology and network safeguards, delivering the strong security foundation needed to scale AI applications responsibly.”
Security management for AI agents such as OpenClaw, Liu said, would follow the principles of “least privilege, proactive defence and continuous auditing”.
He noted that addressing these challenges would require coordinated action from AI providers, end users and regulators.