Podcast Episode

Global Regulators Launch Investigations Into Grok AI Over Deepfake Scandal

January 15, 2026


Elon Musk's artificial intelligence company xAI is facing mounting regulatory pressure across multiple continents following revelations that its Grok AI chatbot has been generating sexually explicit deepfake images, including some that appear to depict children. The scandal has triggered investigations from the United States to Southeast Asia, with some countries taking the unprecedented step of blocking the service entirely.

The controversy centers on Grok, an AI chatbot integrated into the X platform that allows users to generate and edit images. Between Christmas and New Year's, an analysis of more than 20,000 Grok-generated images found that over half depicted people in minimal clothing, with approximately 2% appearing to depict minors. These images have been created and distributed without the subjects' consent and used to harass women and girls across the internet, raising serious concerns about AI safety and corporate responsibility.

California Launches Investigation

On January 14, 2026, California Attorney General Rob Bonta announced his office had opened an investigation into xAI. Describing the situation as shocking, Bonta stated that xAI appears to be facilitating the large-scale production of deepfake non-consensual intimate images. Under California law, the company could face fines of $25,000 per image, a penalty that could accumulate rapidly given the volume of problematic content identified. California Governor Gavin Newsom condemned xAI's role in what he described as a breeding ground for predators, calling the situation vile.

International Response Intensifies

Indonesia and Malaysia became the first nations to block Grok entirely, citing concerns over fake pornographic content involving women and children. Indonesian Digital Minister Meutya Hafid emphasized that the government views non-consensual sexual deepfakes as a serious violation of human rights, dignity, and citizen safety in the digital space. The swift action by Southeast Asian nations set a precedent for more aggressive regulatory responses.

The European Commission ordered X to preserve all internal documents and data related to Grok until the end of 2026. A Commission spokesperson called the content illegal and appalling, emphasizing that compliance with EU law is not optional but obligatory. In Ireland, authorities reported 200 active investigations into child sexual abuse-related images generated by Grok.

In the United Kingdom, Prime Minister Keir Starmer condemned the images as disgraceful and disgusting during Prime Minister's Questions. Technology Secretary Liz Kendall announced that new legislation making it an offense to create sexual deepfakes would come into force within the week, describing such images as weapons of abuse. UK media regulator Ofcom launched a formal investigation with the power to issue fines of up to £18 million or 10% of global revenue, and could seek court orders to block the platform entirely in the UK.

Company Response and Criticism

Following the international outcry, xAI announced on January 14 that it had implemented technological measures to prevent Grok from editing images of real people into revealing clothing such as bikinis. The restriction applies to all users, including paid subscribers. The company also limited image creation and editing through Grok to paid subscribers only, a change it said was designed to improve accountability.

However, experts and regulators have questioned whether these safeguards are sufficient. Technology publication The Verge reported that despite the claimed restrictions, Grok's controls could be bypassed with altered prompts. UK-based deepfakes expert Henry Ajder told Fortune that limiting functionality to paying users will not stop the generation of this content, noting that a month's subscription is not a robust solution to the problem.

Elon Musk responded on X, stating that he was not aware of any naked underage images generated by Grok and that the chatbot would refuse to produce anything illegal. He attributed the creation and sharing of potentially illegal content to user requests and a possible bug in the Grok system. Despite these claims, Ofcom stated that its formal investigation would continue even after acknowledging xAI's recent policy changes. The regulator emphasized that it is working around the clock to understand what went wrong and what is being done to fix it.

Broader Implications for AI Regulation

The Grok controversy represents a watershed moment in AI regulation, demonstrating how quickly governments worldwide can mobilize in response to perceived threats from artificial intelligence systems. The scale and speed of the regulatory response, spanning from California to Southeast Asia to the European Union, signals a new era of international coordination on AI safety issues.

The incident has also highlighted fundamental questions about responsible AI development. Critics argue that xAI released technology capable of generating harmful content without implementing adequate safeguards from the outset. The reactive nature of the company's response, implementing restrictions only after international outrage and regulatory action, has drawn particular criticism from safety advocates.

The case is likely to influence how other AI companies approach image generation capabilities, potentially leading to more conservative release strategies and stronger pre-deployment safety testing. It also demonstrates that regulatory tolerance for AI-related harms, particularly those involving minors or non-consensual intimate imagery, is extremely low across diverse political and cultural contexts.

As investigations continue across multiple jurisdictions, xAI faces the prospect of substantial financial penalties and potential operational restrictions that could significantly impact its business model. The outcome of these investigations will likely establish important precedents for how AI companies are held accountable for the misuse of their technologies.

Published January 15, 2026 at 6:32pm
