
Elon Musk’s social media platform X is facing mounting regulatory pressure worldwide after its AI chatbot Grok was found to generate explicit and sexualized images of women and children, including content that authorities say may qualify as child sexual abuse material. Regulatory agencies in the European Union, India, Malaysia, and Brazil have launched or announced probes, while U.S. advocacy groups are urging federal investigations.
The controversy centers on Grok Imagine, a recently updated feature that allows users to generate images from text prompts directly within X. Over recent weeks, the tool has been widely used to create and circulate nonconsensual intimate images derived from real photographs, many of which spread rapidly on the platform before being flagged or removed.
In the United Kingdom, media regulator Ofcom confirmed it has formally requested information from X regarding Grok’s image generation capabilities and the safeguards in place to prevent abuse. At the European Union level, officials have signaled that the matter is being treated with urgency.
European Commission spokesperson Thomas Regnier said regulators were “very seriously looking into this matter,” noting that Grok appeared to offer a so-called “spicy mode” capable of producing explicit sexual content, including imagery that resembles minors.
“This is not edgy content or artistic expression,” Regnier said at a press briefing. “This is illegal. It is appalling, and it has no place in Europe.”
Under the EU’s Digital Services Act, platforms can face fines of up to 6 percent of global annual revenue for failing to prevent the spread of illegal content or for inadequate risk mitigation.
India’s Ministry of Electronics and Information Technology issued a formal directive ordering X to conduct a comprehensive technical, procedural, and governance-level review of Grok. The company was given until January 5 to submit its findings and outline corrective measures.
Indian officials have increasingly scrutinized global tech platforms, particularly around AI governance, data protection, and child safety. Failure to comply with ministry directives can lead to platform restrictions or financial penalties.
Malaysia’s Communications and Multimedia Commission has also opened an investigation and said it will summon company representatives. In a statement, the regulator urged all platforms operating in the country to implement safeguards aligned with Malaysian law, particularly for AI-powered features such as chatbots and image manipulation tools.
In South America, the issue has reached the political level. A Brazilian member of parliament said she has formally asked the federal public prosecutor’s office and the national data protection authority to suspend Grok’s operation in the country until investigations are completed.
Brazil has taken an increasingly assertive stance toward social media companies in recent years, including temporary platform bans and enforcement actions tied to misinformation, content moderation, and data protection failures.
In the United States, the National Center on Sexual Exploitation has called on the Department of Justice and the Federal Trade Commission to open investigations into X and xAI. While no formal enforcement action has been announced, federal officials have made clear that AI generated content does not fall outside existing child protection laws.
NCOSE’s chief legal officer Dani Pinter noted that while legal precedent around generative AI remains limited, federal statutes prohibiting the creation and distribution of child sexual abuse material apply even to digitally generated content when it depicts identifiable minors or explicit sexual conduct involving children.
Those laws include the Take It Down Act, enacted last year, which strengthened enforcement mechanisms against both real and synthetic CSAM.
A Department of Justice spokesperson said the agency treats AI generated child sexual abuse material with extreme seriousness and will aggressively prosecute producers and possessors of such content. The FTC declined to comment on whether it is reviewing the matter.
X issued its first public response through its official Safety account, stating that it removes illegal content, permanently suspends offending accounts, and cooperates with law enforcement when necessary. Elon Musk separately warned that users who prompt Grok to generate illegal material would face the same consequences as those who upload illegal content directly.
However, critics say those statements fall short given the scale and speed at which the images circulated. Musk himself drew criticism after sharing Grok-generated images, including a self-parody image, while the controversy was unfolding, a move many safety advocates viewed as dismissive.
An xAI employee later said Grok Imagine had been updated, but did not clarify whether the changes addressed the creation of explicit or harmful imagery.
Technology and AI safety experts argue that the situation reflects deeper structural failures. Tom Quisel, CEO of Musubi AI, said xAI appeared to lack even basic trust and safety controls that are standard across the industry.
According to Quisel, it is technically straightforward to block image generation involving minors, partial nudity, or sexually suggestive prompts. He argued that these protections should have been implemented before the feature was rolled out to millions of users.
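To give a sense of what such controls look like in practice, the following is a minimal, hypothetical sketch of the two-stage gate trust-and-safety practitioners typically describe: screen the prompt before generation, then screen the output before returning it. The function names, keyword lists, and placeholder classifiers here are illustrative assumptions, not xAI’s system or any vendor’s actual moderation API.

```python
# Hypothetical two-stage safety gate around an image-generation feature.
# Keyword lists and stand-in functions are illustrative assumptions only;
# a production system would use trained text and image classifiers.

BLOCKED_TERMS = {"child", "minor", "teen"}       # placeholder policy terms
EXPLICIT_TERMS = {"nude", "explicit", "spicy"}   # placeholder policy terms


def prompt_is_allowed(prompt: str) -> bool:
    """Naive pre-generation check; stands in for a text-moderation model."""
    words = set(prompt.lower().split())
    return not (words & BLOCKED_TERMS or words & EXPLICIT_TERMS)


def image_is_allowed(image_bytes: bytes) -> bool:
    """Post-generation check; stands in for an image-safety classifier
    (for example, hash matching plus an NSFW model). Always passes here."""
    return True


def generate_image(prompt: str) -> bytes:
    """Stand-in for the underlying image-generation model."""
    return b"<image bytes>"


def safe_generate(prompt: str) -> bytes | None:
    # Stage 1: refuse before any generation if the prompt itself
    # requests disallowed content.
    if not prompt_is_allowed(prompt):
        return None  # refuse; in practice, also log for human review

    # Stage 2: re-check the output, since benign-looking prompts can
    # still produce violating images.
    image = generate_image(prompt)
    if not image_is_allowed(image):
        return None  # suppress the output rather than return it

    return image


if __name__ == "__main__":
    print(safe_generate("a cat wearing a hat"))    # placeholder bytes
    print(safe_generate("spicy photo of a teen"))  # None (refused)
```

The point of the sketch is the architecture, not the keyword matching: both checkpoints sit in the request path before anything reaches the user, which is the kind of baseline control Quisel argues should have shipped with the feature.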
X has previously faced criticism for reinstating accounts linked to child exploitation content. In 2023, the platform briefly suspended and later reinstated a high-profile influencer who had shared child exploitation images connected to a criminal case, a decision that sparked widespread backlash at the time.
Despite the regulatory scrutiny and public criticism, user interest appears, if anything, to have grown in the short term. Data from mobile analytics firm Apptopia shows daily downloads of the Grok app have risen 54 percent since January 2, while daily downloads of X increased by roughly 25 percent over the same period.
The surge highlights a recurring tension in the tech industry, where rapid adoption of new AI features often outpaces the development of effective safety and governance frameworks.
As investigations continue across multiple jurisdictions, X and xAI face growing pressure to demonstrate meaningful changes to their AI deployment strategy. With potential fines, suspensions, and legal actions on the table, the Grok controversy is shaping up to be a defining test of how far regulators are willing to go in holding AI-driven platforms accountable for real-world harm.
For the broader industry, the case underscores a clear message: innovation without robust safeguards is no longer just a reputational risk. It is a regulatory and legal liability.
