
Malaysia and Indonesia have blocked access to Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, following mounting concerns that the tool has been used to generate nonconsensual sexual images, including deepfakes and material involving minors.
The back-to-back moves underscore growing regulatory alarm across Asia and beyond over how generative AI tools can be misused at scale when safeguards fail to keep pace with rapid product rollouts.
Malaysia ordered temporary restrictions on Grok on Sunday, citing repeated failures by X Corp to adequately address risks linked to the chatbot’s design and deployment. The decision came just one day after Indonesia paused access to the tool and summoned representatives from X for clarification.
The controversy intensified after users discovered that Grok could be used to easily create and circulate explicit images, including nonconsensual sexual content and child sexual abuse material. The risk expanded significantly after xAI rolled out updates to Grok’s image-generation features, allowing users to generate images directly from text prompts.
Because Grok is integrated into X, the social media platform formerly known as Twitter, the tool has immediate access to a vast global audience. Regulators argue that this reach magnifies the potential harm when moderation systems fail.
Southeast Asian authorities have pointed to the speed and simplicity of image generation as a critical weakness, particularly in jurisdictions with strict online safety and anti-pornography laws.
xAI and Elon Musk have publicly stated that users who create illegal content using Grok would face consequences similar to those imposed for uploading such material directly to X. The company has also announced restrictions on image generation and editing features, limiting them to paying subscribers in an attempt to close moderation gaps.
However, regulators in both Malaysia and Indonesia said these steps do not go far enough.
Malaysia’s Communications and Multimedia Commission described X’s responses as “insufficient,” arguing that the company relies too heavily on user reporting rather than proactively addressing risks embedded in the AI’s design and operation.
The watchdog emphasized that access to Grok will remain restricted until effective safeguards are implemented, particularly those aimed at protecting women and children from exploitation.
Media outlets' attempts to obtain comment from xAI have been unsuccessful; the company's press channels have offered no substantive response to the regulatory concerns.
Both Malaysia and Indonesia enforce some of the region’s strictest laws governing online pornography and digital abuse. These frameworks ban the distribution of obscene material and give regulators broad authority to block platforms deemed noncompliant.
Indonesia’s Ministry of Communications and Digital Affairs has framed the issue as a fundamental violation of human rights. Officials described nonconsensual sexual deepfakes as a form of digital violence that undermines dignity and personal security in the online space.
The government has made clear that misuse of AI for fake pornography will be treated as a serious offense, regardless of whether the content is produced by humans or machines.
Southeast Asia is not alone in tightening oversight. Authorities in the European Union, the United Kingdom, Brazil, and India have launched or are considering investigations into Grok’s role in facilitating explicit deepfakes.
In the UK, an online safety watchdog reported discovering criminal images of children aged 11 to 13 that appeared to have been generated using Grok, escalating pressure on regulators to act swiftly.
In the United States, several Democratic lawmakers have called on app stores to suspend access to the chatbot until substantial safety changes are implemented. The U.S. Department of Justice has also reiterated that AI-generated child sexual abuse material will be aggressively prosecuted, signaling heightened enforcement as generative tools proliferate.
The blocking of Grok marks a significant moment for AI governance. It highlights a widening gap between rapid innovation and regulatory expectations around safety, accountability, and human rights.
For AI developers, the message from regulators is increasingly clear: reactive fixes and user-reporting systems are no longer sufficient. Governments are demanding built-in safeguards, stronger content controls, and clear responsibility for harm caused by AI outputs.
As more countries consider similar measures, the Grok case may serve as a precedent for how generative AI tools are regulated globally, particularly when they intersect with social media platforms and mass distribution networks.