OpenAI is stepping up its fight against AI-generated deepfakes after a wave of backlash from Hollywood stars and unions. The company announced on Monday that it will enforce stricter controls and new content guardrails for its video creation tool, Sora 2, in response to growing concerns from actor Bryan Cranston and SAG-AFTRA, the union representing over 160,000 performers.
The decision marks a major turning point in how one of the world’s most influential AI companies handles the ethical use of artificial intelligence in creative media—a topic that has become increasingly urgent as realistic deepfakes continue to spread across social platforms.
The controversy began shortly after Sora 2’s late-September release, when AI-generated clips surfaced using Bryan Cranston’s likeness and voice without his consent. The “Breaking Bad” and “Malcolm in the Middle” star publicly criticized the misuse, calling it a violation of personal and professional rights.
“I’m grateful that OpenAI has taken action,” Cranston said in a statement. “I hope every company working in this space understands that our voice and image are not just data points—they are part of who we are.”
Cranston’s statement came after SAG-AFTRA confirmed that his likeness had appeared in unauthorized AI-generated videos. The union quickly mobilized support from other industry groups, urging OpenAI to strengthen its safeguards and prevent future misuse.
In a joint announcement with SAG-AFTRA, OpenAI said it will now work closely with several major Hollywood organizations, including United Talent Agency (UTA), Creative Artists Agency (CAA), and the Association of Talent Agents (ATA).
These collaborations will help create stricter verification processes within Sora 2, ensuring that users cannot generate content featuring real people without explicit consent.
The partnerships also represent a broader industry effort to redefine AI ethics in entertainment—an issue that gained momentum during recent union strikes over digital likeness rights. Both CAA and UTA have previously accused OpenAI and other AI developers of using copyrighted material without authorization, calling Sora a “potential threat” to artists’ creative ownership.
Cranston’s case wasn’t the first time Sora users faced scrutiny for deepfake misuse. Just last week, OpenAI had to block videos depicting Martin Luther King Jr. after his estate filed complaints over what it described as “disrespectful portrayals.”
Similarly, Zelda Williams, daughter of the late comedian Robin Williams, pleaded with users to stop circulating AI-generated clips that imitated her father’s voice. “These deepfakes are not tributes—they are distortions,” she said in a statement.
These incidents underscore the broader ethical challenges AI companies face as generative tools become increasingly sophisticated and accessible.
OpenAI’s new policies represent its most comprehensive response yet. Since Sora 2’s launch on September 30, the company has refined its approach to intellectual property and likeness protection.
On October 3, CEO Sam Altman announced updates to Sora’s opt-out policy, giving rightsholders greater control over how their identities and intellectual properties are used. Previously, creators and studios had to request removal from AI training datasets. Now, OpenAI has introduced a granular control system that allows them to manage permissions at a much more detailed level.
At launch, Sora 2 already required explicit opt-in consent for using an individual’s likeness or voice. The new framework expands on that by committing the company to respond promptly to any reported misuse.
Altman reaffirmed OpenAI’s support for the NO FAKES Act, a proposed U.S. federal bill designed to outlaw the creation or distribution of unauthorized AI-generated replicas. “We will always stand behind performers’ rights,” Altman said. “OpenAI remains deeply committed to protecting individuals from digital misrepresentation.”
OpenAI’s move comes amid intensifying scrutiny over AI-generated content in film, music, and media. With deepfake technology now capable of mimicking real people almost flawlessly, concerns over identity theft, misinformation, and creative exploitation are mounting.
Experts estimate that over 15,000 deepfake videos featuring public figures circulated online in 2024 alone—a number expected to double by 2026 if regulations don’t keep pace with innovation.
For Hollywood, the issue hits especially close to home. Actors, musicians, and voice artists fear that without strict guardrails, their digital replicas could be used commercially without compensation or consent. The latest OpenAI measures, therefore, represent not just a company policy update but a critical industry precedent for ethical AI use.
As the technology evolves, the line between creativity and exploitation will remain under constant debate—but for now, OpenAI’s actions signal a growing recognition that human identity deserves as much protection in the digital world as it does in reality.