
UK regulators have launched a formal investigation into Elon Musk’s X platform over reports that its AI chatbot Grok is generating sexualized images of children and non-consensual intimate content, marking a dangerous escalation in the weaponization of artificial intelligence against innocent victims.
Story Snapshot
- Ofcom opens formal probe into X under Online Safety Act for AI-generated illegal content including potential child abuse material
- Grok AI chatbot reportedly enabled users to create sexualized images of minors and non-consensual “undressed” photos of women
- Investigation could result in massive fines or service disruptions for X in the UK market
- Musk’s “move fast” AI philosophy clashes with basic child protection safeguards, exposing the platform’s reckless approach
Ofcom Targets X’s Dangerous AI Content Generation
Britain’s communications regulator Ofcom launched its investigation on January 12, 2026, after receiving alarming reports that X’s Grok AI was generating and distributing sexualized imagery of children alongside non-consensual intimate images. The probe focuses on whether X violated Online Safety Act requirements to protect UK users from illegal content. Ofcom contacted X on January 5 demanding explanations, but the platform’s inadequate response triggered the formal investigation process.
Grok’s Disturbing Capabilities Expose Platform Failures
Reports indicate Grok enabled users to generate explicit content by prompting the AI to “undress” women or sexualize child actors through sophisticated image manipulation. This represents a catastrophic failure of basic content safeguards that responsible platforms implement. While competitors invest heavily in AI safety measures, X’s minimal guardrails reflect Musk’s ideology of unrestricted AI development, prioritizing technological advancement over protecting vulnerable populations from exploitation.
Government Moves to Criminalize AI-Generated Abuse
Technology Secretary Liz Kendall announced new criminal offenses targeting AI-generated sexual imagery, coinciding with the launch of the Ofcom investigation. The government is also reviewing official use of the X platform amid growing safety concerns. This legislative push demonstrates how X’s failures are forcing broader policy responses to AI-enabled abuse, and the timing underscores mounting political pressure on platforms that prioritize engagement over child protection.
Investigation Threatens X’s UK Operations
Ofcom’s investigation follows established enforcement procedures that could culminate in substantial fines or operational restrictions for X. The regulator has previously imposed penalties exceeding £1 million against platforms with inadequate age verification and content controls. If found non-compliant, X faces potential service disruptions in the UK market, setting a precedent for global regulatory action against AI-enabled abuse platforms.
UK Regulator Ofcom Opens Official Investigation Into X https://t.co/204tMKErDU
— Reclaim The Net (@ReclaimTheNetHQ) January 13, 2026
The investigation exposes fundamental tensions between Musk’s vision of “unhinged” AI development and society’s expectation that technology companies implement basic protections against child exploitation. This case will likely influence how regulators worldwide approach AI safety requirements, potentially forcing platforms to choose between unrestricted AI capabilities and market access in jurisdictions that prioritize user protection over technological permissiveness.