AI face theft is rampant! From celebrities to ordinary people, your face may be secretly misused.
Your social media profile picture and lifestyle photos may be quietly stolen: without your permission, a single high-definition photo is enough to generate an AI doppelgänger of you for short dramas, advertisements, and even scams.
On April 5, 2026, a statement issued by the Yi Yang Qianxi Studio brought public attention back to AI face-cloning infringement: unauthorized AI short dramas using his likeness had appeared on multiple platforms. The studio demanded that the content be taken down immediately and its spread stopped, and launched a rights-protection process.
Almost at the same time, the Hanfu influencer "Bai Cai Hanfu Makeup & Styling" ran into a similar predicament: her carefully shot Hanfu photo set was copied without authorization by the AI short drama "Peach Blossom Hairpin," which not only used it for the drama's villain characters but also maliciously disfigured it. Commercial modeling blogger "Qihai Christ" was no exception, posting a statement to protect her rights and saying her image had likewise been used in the short drama without permission.
From well-known celebrities to ordinary netizens, from public figures to niche bloggers, faces are being stolen en masse and recklessly misused. How do you pursue rights protection against AI face-cloning? Is AI face-swapping illegal? A digital-security crisis touching everyone's right of likeness and personal dignity is quietly spreading across the internet.
AI face-cloning has long since turned from a one-off case into an industry practice
Recently, the AI short drama track has seen explosive growth, and unauthorized face-swapping has already become a high-frequency operation in this space.
Netizens have reported that a certain short-drama platform hosts multiple AI short dramas that use AI synthesis to steal the likeness and voice of the entertainer Yi Yang Qianxi without authorization. The short drama “Midnight Bus: She’s Catching Ghosts—So Fierce!” features characters whose faces are strikingly similar to Yi Yang Qianxi’s, with voices that are nearly indistinguishable from his. Another work, “Trick Me Into Choosing a Good Fate? Sure—Don’t Regret It Later,” has seen its popularity approach 75 million.
These short dramas are entirely AI-generated. As of this writing, both have been taken down from the Hongguo Short Drama platform.
But this is only the tip of the iceberg. In its statement, Yi Yang Qianxi Studio made clear that Yi Yang Qianxi did not take part in the relevant shows and has never authorized any third party to generate his likeness with AI. The studio has retained lawyers to pursue rights protection and will continue monitoring the infringement, preserving evidence, and reserving the right to litigate at any time.
It’s not just AI short dramas—the claws of AI face-cloning have already penetrated multiple scenarios such as short videos, live-stream shopping, and fake advertisements, with a far wider reach than anyone could imagine.
On February 26, actor Wang Jingsong posted that his image was used to generate a video with AI face-cloning. “It’s terrifying—the video, voice, and lip-sync are completely impossible to tell from the real thing.” In addition, several public figures such as He Saifei and Li Zimeng have also encountered AI fake endorsements: their images were used without permission in marketing scenarios like weight loss and wealth management, misleading consumers.
More chillingly, ordinary people have no safe zone either. Everyday posts on social media, from lifestyle shots and Hanfu photo sets to travel videos, can all become "raw material" for an AI content library. For black-and-gray industries, a single high-definition front-facing photo is enough to quickly generate dynamic videos and virtual characters, which can then be used for pranks, defamation, or even scams. Many people only learn that their face has "appeared" in videos they never took part in when a friend reminds them or a stranger's comment arrives.
More notably, face-cloning has already become a standardized workflow. According to media reports, it runs from scraping publicly available photos and training a facial model to generating a video character and distributing it across multiple platforms, fast and efficient end to end. The infringers are often small studios or anonymous accounts that are well concealed and quick to turn over: even when they are reported and taken down, they simply switch accounts and upload again, a whack-a-mole cycle that is hard to stop.
Hard to stop: Where is the core problem behind AI face-cloning?
In response to the increasingly rampant chaos of AI face-cloning, relevant parties have long begun to speak up and fight back.
On the evening of April 2, the Actors Committee of the China Association of Radio and Television Social Organizations issued a solemn statement, directly targeting infringements such as rampant AI face-swapping and face-cloning, voiceprint cloning, and arbitrary tampering with film and television materials.
The statement made clear that performers are legally entitled to personal rights such as the rights of portrait, voice, and artistic imagery, all of which are protected by law at all times. Without a person's formal written authorization, no entity may collect, use, synthesize, or disseminate the related images, voiceprints, or exclusive artistic imagery.
More importantly, the statement dispels a common misconception: for infringing content such as AI "look-alikes," voice imitations, and face-swapped short dramas that can be associated with a specific public performer, labels like "non-commercial," "public-interest sharing," or "personal creative reinterpretation" are no basis for legal exemption. The infringer still bears full liability for the infringement.
The statement also requires all online platforms to strictly fulfill their review responsibilities: comprehensively search for and take down existing infringing works, and strictly screen newly uploaded AI-synthesized content for violations.
In fact, related regulations have already been in place.
Article 7 of the "Interim Measures for the Administration of Generative Artificial Intelligence Services," in effect since August 15, 2023, clearly stipulates that providers of generative AI services must carry out training-data processing in accordance with the law and use data and foundational models with legitimate sources. Where personal information is involved, the individual's consent must be obtained, or the use must fall under other circumstances permitted by laws and administrative regulations.
If there are regulations and industry voices, why is AI face-cloning still hard to stop?
Gao Chengfei, deputy director of the Brand and IP Committee at the Institute for Influence Studies, offered an answer: a sudden drop in technical barriers, combined with an imbalance between the cost and the gains of breaking the law, is the core cause. Open-source models push the cost of face-swapping toward zero: making an AI short drama requires little more than grabbing publicly available photos to generate characters. Rights protection, by contrast, must go through lengthy processes such as evidence preservation and litigation, which are time-consuming and labor-intensive, so the infringer's profits far exceed the risks they bear.
Previously, First Finance reported that on e-commerce platforms, a 200 yuan service can customize a video where a celebrity “speaks,” while the price for making an AI face-swapped video ranges from 20 to 500 yuan.
In addition, delayed platform review mechanisms are an important reason. Gao Chengfei pointed out that AI-generated material is far harder to identify than traditional content, so much infringing content is disseminated first and taken down later, creating a gray area of "riding first, paying the fare afterward." A deeper issue is that some creators treat the "AI-generated" label as a get-out-of-liability card, with only a blurry understanding of where portrait rights begin and end. Add to that an industry still in a stage of rough, uncontrolled growth with no clear self-regulatory consensus, and the chaos spreads even further.
Holding the line on faces: how should we respond?
A case report of an AI face-swapping portrait-rights dispute released on March 20 by the Beijing Internet Court served as a warning to the industry.
The well-known actress Dilireba's lawsuit against the makers and broadcasters of an AI face-swapped short drama has concluded. The court found that the short drama's producer improperly used deep-synthesis technology to generate an image highly similar to the actress, infringing her portrait rights, and that the broadcaster, having failed to fulfill its duty of reasonable review, must also bear corresponding liability.
In this case, Dilireba's side discovered that in a short drama produced and published by defendant A, her face had been swapped onto the drama's characters using AI face-swapping technology. The related topics sparked discussion across multiple social platforms, and many internet users mistakenly believed the plaintiff had appeared in the drama. Meanwhile, defendant B's company uploaded the short drama to the video account it operated.
This case clearly sends a signal: AI is not a law-free zone. If someone infringes on another person’s portrait rights, they must bear legal responsibility.
So, when faced with pervasive AI face-cloning, how should we prevent it and how should we seek rights protection?
Gao Chengfei suggests that individuals build a three-tier "prevention, monitoring, rights protection" system to reduce the risk of infringement at the source.
At the prevention layer, when posting photos on social media, reduce the resolution, add a semi-transparent watermark, and avoid exposing high-definition front-facing photos directly, leaving no openings for black-and-gray industries.
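The downscale-and-watermark advice above can be sketched in a few lines. This is an illustrative example only, not code from the article: it assumes the Pillow imaging library, and the function name, watermark text, opacity, and scale factor are all arbitrary choices.

```python
from PIL import Image, ImageDraw


def protect_photo(img: Image.Image, text: str = "sample only", scale: float = 0.5) -> Image.Image:
    """Downscale a photo and overlay a semi-transparent text watermark."""
    # Reduce resolution so the image is less useful as face-model training data.
    small = img.resize((max(1, int(img.width * scale)), max(1, int(img.height * scale))))
    # Draw the watermark on a fully transparent overlay of the same size,
    # then composite the overlay on top of the downscaled photo.
    overlay = Image.new("RGBA", small.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    draw.text((10, small.height // 2), text, fill=(255, 255, 255, 128))  # ~50% opacity
    return Image.alpha_composite(small.convert("RGBA"), overlay)


# Example: an 800x600 photo comes out at 400x300 with the watermark baked in.
photo = Image.new("RGB", (800, 600), "blue")
protected = protect_photo(photo, "demo", 0.5)
```

A watermark drawn this way sits in the pixel data itself, so it survives re-uploads and screenshots, unlike metadata, which is stripped by most platforms.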
At the monitoring layer, periodically run reverse image searches to check whether your portrait has been misappropriated, and keep an eye on trending content on short-drama platforms to spot anomalies in time.
At the rights-protection layer, once infringement is found, immediately preserve evidence via blockchain or notarization, send a formal takedown notice to the platform, and, if necessary, retain lawyers to file portrait-rights and reputation-rights lawsuits. Note in particular that even content shared only with a limited circle on social media is still fully protected by portrait-rights law.
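Blockchain deposits and notarization both rest on the same primitive: fixing a cryptographic digest of the captured material so its integrity can later be verified. A minimal illustrative sketch (not from the article) using Python's standard `hashlib`:

```python
import hashlib


def evidence_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of captured evidence (e.g. a screen recording).

    Anchoring this digest in a timestamped record, whether a notarial deposit
    or a blockchain transaction, later proves the file existed unaltered at
    that moment, even if the original upload is taken down or edited.
    """
    return hashlib.sha256(data).hexdigest()


# Any change to the bytes, however small, yields a completely different digest.
original = evidence_digest(b"infringing-video-bytes")
tampered = evidence_digest(b"infringing-video-bytes!")
```

The digest alone does not prove when the material was captured; that is exactly what the timestamped blockchain or notarial record supplies.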
He also pointed out that regulators and platforms must build a double line of defense combining technology and systems. Regulatory departments should accelerate dedicated rules for AI content, clarify the boundaries of authorized training data, establish a punitive-damages mechanism for infringement, and raise the cost of breaking the law. Platforms must shoulder primary responsibility: embed AI-material traceability technology at the upload stage and require proof of the portrait-authorization chain, rather than remediating after the fact, and establish a fast-response channel to shorten complaint-handling cycles.
In addition, industry associations should promote building a unified portrait authorization database so that authorizations are verifiable and traceable. The key to coordination across society is to form a consensus of “technology for good”—AI is not a law-free zone, and any technological innovation should respect personal rights as the bottom line. This requires joint governance across legislation, law enforcement, platforms, and creators to curb the spread of “face-cloning” chaos.
The chaos caused by AI face-cloning is never just one person’s problem—it concerns the digital security of everyone. Only by coordinating governance among legislation, law enforcement, platforms, and creators to form a collective force can we keep our “face boundary” and stop the spread of face-cloning chaos.
Compiled from Huashang.com, Red Star News, and others
Save this article—if you ever encounter AI face-cloning, take action and seek rights protection right away