
Ofcom has opened a formal investigation into X under the UK’s Online Safety Act, following reports that the Grok chatbot has been used to generate and share illegal sexual imagery, including non-consensual intimate images and material involving children.
The regulator said the probe will test whether X properly assessed and mitigated the risk of UK users encountering illegal content, and whether it acted quickly enough to remove it.
Last Monday, Ofcom contacted X and set a deadline of Friday 9 January for the company to explain what steps it had taken to comply with its duties to protect its users in the UK. X responded ahead of the deadline, and Ofcom then carried out an urgent, expedited assessment of the available evidence.
X's response so far has been to put the functionality behind a paywall, a commercial decision rather than one that addresses the underlying concerns about its AI technology.
Business Secretary Peter Kyle said X was “not doing enough to keep its customers safe online”, and signalled the government would support Ofcom’s next steps.
The UK's Online Safety Act applies to protecting people in the UK; it does not require platforms to restrict what users in other countries can see.
An Ofcom spokesperson said: “Reports of Grok being used to create and share illegal non-consensual intimate images and child sexual abuse material on X have been deeply concerning. Platforms must protect people in the UK from content that’s illegal in the UK, and we won’t hesitate to investigate where we suspect companies are failing in their duties, especially where there’s a risk of harm to children.”
If Ofcom finds a breach, it can require remedial action and impose penalties of up to £18 million (€20.7 million) or 10% of qualifying worldwide revenue, whichever is greater, and in the most serious cases seek court-backed measures that could restrict access in the UK.