
Instagram chief Adam Mosseri told a Los Angeles courtroom that while social media can be used in ways that feel unhealthy, he does not believe it meets the threshold of clinical addiction. His testimony came during a closely watched trial that examines whether major technology companies misled the public about the safety of their platforms and knowingly introduced features that harmed young users.
Taking the stand in Los Angeles Superior Court, Mosseri emphasized that there is a meaningful distinction between “problematic use” and medically recognized addiction. He compared casual references to social media addiction to saying someone is “addicted” to a Netflix series — a phrase commonly used but not reflective of a clinical diagnosis.
“I think it’s possible to use Instagram more than you feel good about,” Mosseri said, noting that excessive use is subjective and varies from person to person. He repeatedly clarified that he is not a medical professional.
High-stakes legal battle over teen mental health
The case centers on a plaintiff identified in court documents as “KGM” and her mother, who argue that Instagram and other social platforms contributed substantially to the plaintiff’s mental health struggles. The lawsuit originally included Meta, YouTube, TikTok and Snap, alleging that the companies made design decisions — such as infinite scroll, algorithmic recommendations and immersive engagement features — that encouraged compulsive use among minors.
TikTok and Snap are no longer involved in the Los Angeles proceedings after reaching settlements with a plaintiff connected to the case. The jury must now determine whether Instagram was a “substantial factor” in the mental health challenges at issue.
Meta has pushed back strongly against the claims, arguing that the plaintiff faced serious personal challenges before ever using social media. A company spokesperson said the evidence would demonstrate that Instagram was not the root cause of her struggles.
The broader legal context is significant. Across the United States, dozens of lawsuits have been filed by families, school districts and state attorneys general alleging that social media platforms contributed to rising rates of anxiety, depression and self-harm among teenagers. Public health research in recent years has pointed to correlations between heavy social media use and mental health challenges, though establishing direct causation remains legally complex.
Debate over design choices and profit motives
Plaintiff’s attorney Mark Lanier questioned Mosseri extensively about whether Instagram’s product decisions prioritize engagement and profit over user well-being — particularly for minors.
When asked whether problematic usage exists on Instagram, Mosseri responded that it depends on the individual. Lanier pressed further, asking whether Mosseri, as a key decision-maker, leans toward maximizing growth or prioritizing safety testing before releasing new features.
Mosseri testified that protecting minors is a long-term business imperative, arguing that safeguarding young users ultimately supports sustainable growth. “Protecting minors over the long run is good for business,” he said, framing safety as aligned with corporate incentives rather than in conflict with them.
Instagram’s revenue model is overwhelmingly advertising-driven. Meta generates tens of billions of dollars annually from ads across its platforms, with revenue tied largely to user engagement and time spent viewing content. However, Mosseri maintained that not every product feature is designed to drive ad impressions.
Inside Meta’s debate over plastic surgery filters
A significant portion of the testimony focused on internal discussions from 2019 about augmented reality filters that altered users’ appearances to resemble cosmetic surgery enhancements.
Lanier introduced internal email exchanges showing executives debating whether to ban such filters amid concerns from health experts and potential public relations fallout. One internal subheading referenced “PR fire on plastic surgery,” reflecting awareness of reputational risk.
In the emails, Meta’s technology chief Andrew Bosworth indicated that CEO Mark Zuckerberg wanted to review the issue, expressing concern about whether there was sufficient data demonstrating real-world harm. Another executive, John Hegeman, warned that a blanket ban on effects that could not be replicated with makeup might hurt competitiveness in Asian markets, including India.
Mosseri testified that he interpreted those concerns as relating more to cultural relevance than direct revenue impact. He told the court that Meta does not earn money from the filters themselves; rather, the company’s revenue comes from advertising exposure.
Internal documents presented three policy options: maintaining a temporary ban pending further research, lifting the ban while limiting algorithmic promotion, or lifting it entirely despite higher well-being risks. Mosseri indicated he favored a middle-ground approach that removed certain filters from recommendations while allowing limited availability.
Margaret Stewart, then vice president of product design and responsible innovation, responded in writing that she believed the selected option carried too much risk and supported a stronger ban.
Ultimately, Mosseri testified, the company implemented what he described as a more “focused ban,” removing a subset of plastic surgery-style filters while allowing others to remain.
Revenue model under scrutiny
During cross-examination, Mosseri reiterated that digital filters are used by a minority of Instagram’s user base and that there is no internal data showing they increase content consumption or advertising exposure.
“We want to help people express themselves,” he said, emphasizing that ad revenue is based primarily on how many ads users view, not on whether they use visual effects.
The testimony underscores a key tension in the case: plaintiffs argue that engagement-oriented design choices inherently incentivize prolonged use, while Meta contends that its business model does not depend on exploiting vulnerable users.
Broader implications for the tech industry
The Los Angeles trial is one of several high-profile cases testing the legal boundaries of platform accountability. Regulators, lawmakers and courts are increasingly scrutinizing how recommendation algorithms, infinite scrolling and immersive design influence user behavior — particularly among teenagers, who represent a significant portion of Instagram’s global audience.
Meta has introduced parental controls, time-management tools and content sensitivity filters in recent years, positioning these updates as evidence of its commitment to youth safety. Critics argue that such measures were introduced only after mounting public pressure.
As the trial continues, jurors will weigh internal communications, expert testimony and research on adolescent psychology. The outcome could shape not only Instagram’s legal exposure but also the broader regulatory landscape governing social media design and youth protection.
For now, Mosseri’s central argument remains clear: excessive use may occur, but labeling Instagram as inherently addictive mischaracterizes the platform — and the science behind addiction itself.