Technology · UK Regulation
X says its Grok chatbot will no longer allow users to “undress” photos of real people — a change that lands as Ofcom investigates whether the platform has breached the UK’s Online Safety Act.
By Swikriti · Published: Thu 15 Jan 2026 (UK) · Updated: Thu 15 Jan 2026
After days of mounting condemnation in the UK, X has introduced new restrictions to stop its Grok AI tool from editing images of real people into sexually revealing versions — the kind of non-consensual content that has sparked calls for tougher enforcement and even speculation about a UK block of the platform.
In a statement posted by the company, X said it has implemented technical measures to stop Grok editing images of real people into “revealing clothing” such as bikinis, adding that the restriction applies to all users, including paid subscribers. The move follows UK political pressure and regulatory scrutiny focused on the harms of sexualised, non-consensual AI imagery.
If you’ve been following the story, here’s the key point: this isn’t about a single viral post. UK regulators are treating AI-assisted sexualised image creation and sharing as a serious safety issue — and X is now being judged on whether it has done enough, fast enough, to prevent illegal content from spreading.
Related: Swikblog explainer on the Grok AI safety controversy and image abuse
What has changed
X says it has put new technical restrictions in place so Grok can no longer be used to edit photos of real people into sexualised or revealing imagery — including the “undressing”-style edits that triggered the backlash.
- Image edits of real people into revealing clothing are blocked, including bikini-style outputs cited by X.
- The restriction applies to everyone — including paid subscribers.
- The change targets a specific misuse: non-consensual sexualised edits of real-person photos, which can be illegal depending on what is created, shared and where it appears.
The timing matters. X had previously floated limiting some image-editing functions to paying users, but UK politicians and safety advocates argued that restricting access is not the same as preventing harm. This update is framed as a direct block on the most controversial use.
Why the UK angle matters: Ofcom is now involved
The UK’s online safety regulator has opened a formal investigation into X under the Online Safety Act to assess whether the platform has met its duties to protect people in the UK from illegal content. Read Ofcom’s announcement here: Ofcom: investigation into X over Grok sexualised imagery.
Under the Act, platforms are expected to take proportionate steps to prevent and remove illegal harms. If Ofcom concludes X has failed to comply, it can impose major penalties — including substantial fines — and, in the most serious cases, seek court-backed measures that can restrict access in the UK.
For the legal framework and enforcement powers, see the UK government’s Online Safety Act collection: Online Safety Act guidance and updates.
What was the pressure point?
The controversy accelerated after UK ministers publicly criticised the spread of non-consensual sexualised images created through Grok’s image-editing capabilities. The political signal was clear: X would be judged not just on policies written on paper, but on whether the platform’s design choices and safeguards actually reduce harm in the real world.
X owner Elon Musk has argued that Grok generates images only in response to user prompts and that the system is intended to refuse illegal requests, adding that unexpected outputs can occur through adversarial prompt manipulation and should be fixed quickly. Critics, meanwhile, say “prompt-only” framing doesn’t remove platform responsibility when abuse is foreseeable and repeatable.
The US factor: scrutiny is widening
The UK isn’t the only front. US attention has intensified as well, with state-level action in California reportedly focused on the creation and spread of sexualised AI images — including concerns about minors. That multi-jurisdiction pressure matters because it increases the chance of rapid policy shifts, feature removals, and stricter enforcement expectations across markets.
What this means for users (and what to do if you’re targeted)
If you’re a UK user, this change should reduce one pathway for producing sexualised edits of real-person images via Grok. But it does not automatically remove content that may already exist elsewhere online. If you believe you’ve been targeted by non-consensual intimate imagery:
- Document evidence (screenshots, links, timestamps) without re-sharing the content.
- Report it through the platform’s reporting tools and keep confirmation receipts if provided.
- Seek support from local legal or victim-support services if the situation escalates or involves threats.
- Act quickly — takedowns and enforcement are often time-sensitive, especially when content is being reposted.
More broadly, this episode is becoming a landmark test of how far regulators will go to hold platforms accountable when general-purpose AI tools are used for predictable, harmful misuse.