Podcast Episode
UK Deepfake Law Takes Effect Amid Global Backlash Against Grok AI
January 14, 2026
Audio archived. Episodes older than 60 days are removed to save server storage. Story details remain below.
Support services across the British Isles are reporting a surge in victims of deepfake abuse seeking help, as UK legislation criminalizing the creation of non-consensual intimate images came into force this week. The implementation arrives amid mounting international pressure on Elon Musk's X platform over its Grok AI chatbot's ability to generate sexualized deepfakes, but campaigners argue that delays in enacting the law left countless individuals vulnerable to exploitation.
The Legal Landscape Shifts
On January 12, 2026, Technology Secretary Liz Kendall announced that creating non-consensual intimate deepfakes would officially become a priority offense under the Online Safety Act. This marks a significant milestone in digital rights protection, though the law itself was passed six months earlier, in June 2025.

The delay between passage and implementation has drawn sharp criticism from victims and advocates. Welsh TV presenter Jess Davies, who became a campaigner against intimate image abuse after explicit images of her were created using Grok AI and shared without her consent, expressed frustration with the government's timeline. "I don't understand why the government delayed for so long," Davies said. "Consider how many victims exist as a result of this."
The impact of such abuse extends far beyond the immediate violation. Dr. Daisy Dixon, a philosophy lecturer at Cardiff University, reported experiencing increased harassment, including death threats and rape threats, after speaking publicly about Grok AI on BBC Radio 4. "I've witnessed a significant escalation in user behavior; many men are expressing intense anger," Dixon told the BBC. "I'm still seeing Grok generating images of myself, dressed in certain ways and placed in sexualized positions against our will."
Rising Tide of Victims Seeking Support
Guernsey's Victim Support and Witness Service has documented an alarming increase in individuals seeking help after being targeted with intimate deepfake content. Jenny Murphy, the service's manager, described the trend as "concerning" and the impact on victims as "significant and alarming."

"Just picture receiving a call from a police officer, or possibly a friend or acquaintance, informing you that they have encountered such an image involving you," Murphy explained, highlighting the devastating psychological impact of such violations.
The situation in Guernsey has exposed a critical gap in local legislation. While sharing intimate images without consent is already illegal under Guernsey law, creating such images using AI tools remains unregulated. Detective Inspector Thomas Lowe of Guernsey Police said recent cases have prompted a legislative review to close this loophole.
"We've encountered several cases over the past year that have triggered this push for reform. For me, it's crucial to be proactive rather than reactive," Lowe stated. The Home Affairs Committee is now working to amend the Sexual Offences Law 2020 to criminalize deepfake creation, with implementation expected in 2026.
The Grok AI Controversy
The catalyst for much of this legislative urgency has been the widespread misuse of Grok AI, the chatbot integrated into X (formerly Twitter) and owned by Elon Musk. The tool was made freely available to over 500 million users and quickly became a vehicle for creating non-consensual sexualized deepfakes of real individuals, including children.

UK regulator Ofcom launched a formal investigation into X on January 12, 2026, citing "deeply troubling reports" of the chatbot producing undressed images of people and sexualized images of children. Ofcom warned that such content could constitute "intimate image abuse or pornography," and that "sexualized images of children" could be considered "child sexual abuse material."
The investigation followed urgent communications between Ofcom and X. The regulator made contact with the platform on January 5 and set a firm deadline of January 9 for X to explain what steps it had taken to comply with its duties to protect users in the UK. While the company responded by the deadline, Ofcom proceeded with an expedited assessment of available evidence.
Platform Response Deemed Insufficient
In response to mounting pressure, X restricted Grok's image generation feature to paying subscribers in early January 2026. However, this move was met with fierce criticism from UK government officials.

Prime Minister Keir Starmer's spokesperson called the restriction "insulting," noting it "merely transforms an AI feature that facilitates the creation of illegal images into a premium offering." The sentiment was echoed across government, with officials stating the measure does not go "anywhere near far enough" to address the harm being caused.
Sophie Mortimer, manager of the UK's Revenge Porn Helpline, reported a surge in synthetic sexual imagery throughout 2024 and into 2025, while acknowledging challenges in identifying offenders across jurisdictions. The Internet Watch Foundation has reported "criminal imagery" of children as young as 11, including girls sexualized and depicted topless.
International Regulatory Response
The controversy has sparked action beyond the UK's borders. Malaysia and Indonesia took the unprecedented step of temporarily blocking access to Grok, becoming the first countries in the world to do so. Officials in both nations stated that existing controls were insufficient to prevent non-consensual sexual deepfakes.

The European Commission joined the regulatory response on January 8, 2026, ordering X to retain all internal documents and data related to Grok until the end of 2026 as part of its investigation into potential violations of digital services regulations.
Spain and Malta have also moved to advance laws criminalizing AI deepfakes, reflecting a broader European trend toward stricter regulation of AI-generated sexual content.
Potential Consequences for X
If Ofcom's investigation finds X in violation of UK law, the platform could face severe penalties. Fines could reach up to £18 million or 10 percent of the company's global revenue, whichever is greater. In cases of severe non-compliance, courts have the authority to order British internet service providers to block access to the platform entirely within the UK.

The investigation represents a significant test of the UK's Online Safety Act and could set precedents for how platforms are held accountable for AI-generated harmful content.
The Broader Challenge of AI Regulation
The deepfake crisis highlights a fundamental challenge in regulating rapidly evolving AI technology. By the time legislation is drafted, debated, passed, and implemented, the technological landscape has often shifted dramatically, leaving new vulnerabilities in its wake.

The six-month gap between the UK law's passage and implementation illustrates this problem. During that period, victims continued to be targeted with no legal recourse against the creation of their non-consensual images, only against their distribution.
Guernsey's proactive approach, spurred by Detective Inspector Lowe's determination to be "proactive rather than reactive," represents an attempt to stay ahead of technological developments. However, the reality remains that AI capabilities are advancing faster than legislative processes can accommodate.
Looking Forward
As the UK law comes into force and investigations into X proceed, the focus has shifted to enforcement and prevention. Key questions remain about how authorities will identify offenders, particularly when they operate across international borders, and how platforms will be held accountable for the tools they provide.

The case also raises broader questions about the development and deployment of generative AI technologies. Should platforms release AI tools capable of creating realistic intimate images? What safeguards should be mandatory before such tools reach the public? And who bears responsibility when these tools are weaponized against individuals?
For victims like Jess Davies and Dr. Daisy Dixon, the new law represents progress, but progress that came too late. Their advocacy has been instrumental in pushing for change, yet they and countless others have paid a personal price for the gaps in protection that existed during the delay.
As Sophie Mortimer of the Revenge Porn Helpline noted, the surge in synthetic sexual imagery shows no signs of abating. The new legislation is a step forward, but the battle to protect individuals from AI-generated abuse is far from over. With technology continuing to evolve at breakneck speed, regulators, platforms, and policymakers face an ongoing challenge to ensure that legal protections keep pace with digital threats.
Published January 14, 2026 at 11:12pm