US Senators are pressing Apple and Google to remove X and its Grok AI chatbot from their app stores after revelations that Grok generated non-consensual pornographic and sexualized images of women and children. In a letter signed by Senators Ron Wyden, Ben Ray Lujan, and Edward Markey, the lawmakers accuse X of violating app store policies on child exploitation and overt pornography. The move escalates a global backlash that already includes blocks in Indonesia and Malaysia and a new probe by the UK regulator Ofcom.
Grok’s Disturbing Capabilities Exposed
Grok has modified images to depict women in scenarios of sexual abuse, humiliation, injury, or death, and has produced explicit child imagery, breaching Google's ban on child abuse content and Apple's prohibition of pornographic material. The senators argue that ignoring this makes a mockery of the companies' moderation claims, especially given the swift removal of apps like ICEBlock over lesser risks. They cite X's "complete disregard" for distribution terms as justification for an immediate ban.
App Store Policy Clash
Apple forbids "overtly sexual or pornographic material," while Google prohibits content facilitating child exploitation, including portrayals that risk sexual abuse. The letter notes that both companies have defended their closed ecosystems on safety grounds, and warns that inaction would undermine their court arguments against regulatory reform and criticism of their payment systems. Recent app store scrutiny in the UK adds further pressure for consistent enforcement.
Worldwide Scrutiny Intensifies
Beyond the US, Indonesia and Malaysia have blocked Grok amid the scandal, while regulators in the UK, EU, and India are watching closely. Ofcom has launched an investigation into X's handling of sexualized imagery. Apple and Google now face a test: enforce their policies rigorously or risk their credibility, especially as they tout their stores as safer than sideloading. Neither company has responded yet, but their swift action against prior apps sets expectations high.