Craig Peters, CEO of Getty Images, speaks onstage during Vox Media's 2023 Code Conference. Credit: Jerod Harris / Stringer | Getty Images North America
Getty Images, a global leader in visual content licensing, is pouring “millions and millions of dollars” into a high-stakes copyright lawsuit against Stability AI, the developer behind the image-generating tool Stable Diffusion. The lawsuit, filed in both the U.S. and U.K., accuses Stability AI of unlawfully using over 12 million copyrighted images without permission or compensation in order to train its artificial intelligence models.
The case is shaping up to be one of the most closely watched legal confrontations at the intersection of AI and intellectual property rights.
According to Getty’s CEO Craig Peters, Stability AI scraped millions of Getty’s licensed images, including accompanying metadata and watermarks, to train its Stable Diffusion model—an AI tool capable of generating hyper-realistic images from text prompts. The core of Getty’s argument is that this was done purely for commercial gain and in direct competition with Getty's own licensing services.
“It’s not innovation, it’s appropriation,” said Peters. “You can’t just take copyrighted content, repackage it with a neural network, and call that progress. That’s not how fair competition works.”
Getty claims this conduct amounts to copyright infringement and unfair business practices, especially as the AI-generated images could displace the market for professionally licensed photos.
Peters emphasizes the financial weight of the litigation: legal fees across both jurisdictions already run into the millions. Even for a company of Getty’s size, he admits the burden is immense.
“Just one lawsuit like this is massively expensive. We simply can't afford to go after every single instance of infringement we identify.”
Getty’s lawsuit was filed in both the U.S. District Court for the District of Delaware and the High Court in London. Because Stability AI is a U.K.-based company and the location of the model training is unclear, Peters says Getty had no choice but to pursue action in both countries.
Stability AI, backed by substantial venture capital from investors including Coatue and Lightspeed Venture Partners, has pushed back firmly. The company acknowledges that some Getty content may have been ingested during training but insists that its model falls within fair use—a U.S. legal doctrine that allows limited use of copyrighted material for purposes such as commentary, criticism, or transformative creation.
Stability AI claims that the model doesn’t store or reproduce copyrighted works in any meaningful way and that the images generated are new, derivative outputs that don’t infringe upon original content.
But critics argue that the “transformative use” argument wears thin when billions in venture capital are at stake and when AI-generated content can directly compete with original licensed material.
The lawsuit against Stability AI isn’t happening in a vacuum. A wave of legal challenges against generative AI firms has been building over the past year.
The lawsuits reflect growing public scrutiny over how foundational AI models acquire and use data. Industry giants like Google, Microsoft, and Amazon have poured billions into generative AI—often without transparency on training data sources.
So why go after Stability AI, and not the bigger names like OpenAI or Google?
According to Peters, Getty had to be selective due to legal costs. “We had to make a decision on who to pursue and when. This case represents the clearest example of unlicensed use directly tied to commercial outcomes,” he said.
He also hinted that a favorable outcome could set a precedent, potentially impacting how other AI firms handle copyrighted content.
The initial liability trial is set to begin on June 9, 2025, in London’s High Court. The court will determine whether Stability AI’s use of Getty’s images constitutes copyright infringement. If the court rules in Getty’s favor, the implications could ripple across the entire generative AI industry—forcing companies to reconsider how they acquire training data.
Peters says this isn’t just about Getty protecting its bottom line—it’s about defending the rights of photographers, journalists, and visual storytellers in an era of algorithmic disruption.
“We’re not afraid of innovation,” he said. “But if AI is going to be built on the backs of creatives, then there must be a system of compensation and consent. Otherwise, it’s not innovation—it’s exploitation.”
As the legal battle unfolds, companies across the creative and tech industries are watching closely. The outcome could help shape the future boundaries of AI development, copyright law, and digital ethics.