The tech world’s deepfake pornography problem is now bigger than just X.
In a letter to the leaders of X, Meta, Alphabet, Snap, Reddit, and TikTok, a group of U.S. senators is asking the companies to provide proof that they have “robust protections and policies” in place, and to explain how they plan to curb the rise of sexualized deepfakes on their platforms.
The senators also demanded that the companies preserve all documents and data relating to the creation, detection, moderation, and monetization of sexualized, AI-generated images, as well as any related policies.
The letter comes hours after X said it updated Grok to prevent it from making edits of real people in revealing clothing, and restricted image creation and editing via Grok to paying subscribers. (X and xAI are part of the same company.)
Pointing to media reports about how easily and often Grok generated sexualized and nude images of women and children, the senators noted that platforms’ guardrails to prevent users from posting non-consensual, sexualized imagery may not be sufficient.
“We recognize that many companies maintain policies against non-consensual intimate imagery and sexual exploitation, and that many AI systems claim to block explicit pornography. In practice, however, as seen in the examples above, users are finding ways around these guardrails. Or these guardrails are failing,” the letter reads.
Grok, and by extension X, has been heavily criticized for enabling this trend, but other platforms are not immune.
Deepfakes first gained notoriety on Reddit, when a page displaying synthetic porn videos of celebrities went viral before the platform took it down in 2018. Sexualized deepfakes targeting celebrities and politicians have multiplied on TikTok and YouTube, though they usually originate elsewhere.
Meta’s Oversight Board last year called out two instances of explicit AI images of female public figures, and the platform has also allowed nudify apps to run ads on its services, though it later sued one such company, CrushAI. There have been multiple reports of children spreading deepfakes of peers on Snapchat. And Telegram, which is not included on the senators’ list, has also become notorious for hosting bots built to undress photos of women.
X, Alphabet, Reddit, Snap, TikTok, and Meta did not immediately respond to requests for comment.
The letter demands the companies provide:
- Policy definitions of “deepfake” content, “non-consensual intimate imagery,” or similar terms.
- Descriptions of the companies’ policies and enforcement approach for non-consensual AI deepfakes of people’s bodies, non-nude images, altered clothing, and “digital undressing.”
- Descriptions of current content policies addressing edited media and explicit content, as well as internal guidance provided to moderators.
- How current policies govern AI tools and image generators as they relate to suggestive or intimate content.
- What filters, guardrails, or other measures have been implemented to prevent the generation and distribution of deepfakes.
- Which mechanisms the companies use to identify deepfake content and prevent it from being re-uploaded.
- How they prevent users from profiting from such content.
- How the platforms prevent themselves from monetizing non-consensual AI-generated content.
- How the companies’ terms of service allow them to ban or suspend users who post deepfakes.
- What the companies do to notify victims of non-consensual sexual deepfakes.
The letter is signed by Senators Lisa Blunt Rochester (D-Del.), Tammy Baldwin (D-Wis.), Richard Blumenthal (D-Conn.), Kirsten Gillibrand (D-N.Y.), Mark Kelly (D-Ariz.), Ben Ray Luján (D-N.M.), Brian Schatz (D-Hawaii), and Adam Schiff (D-Calif.).
The move comes just a day after xAI’s owner Elon Musk said he was “not aware of any naked underage images generated by Grok.” Later on Wednesday, California’s attorney general opened an investigation into xAI’s chatbot, following mounting pressure from governments around the world incensed by the lack of guardrails around Grok that allowed this to happen.
xAI has maintained that it takes action to remove “illegal content on X, including [CSAM] and non-consensual nudity,” though neither the company nor Musk has addressed the fact that Grok was allowed to generate such edits in the first place.
The problem isn’t limited to non-consensual, manipulated sexualized imagery, either. While not all AI-based image generation and editing services let users “undress” people, they do make it easy to generate deepfakes. To pick just a few examples, OpenAI’s Sora 2 reportedly allowed users to generate explicit videos featuring children; Google’s Nano Banana apparently generated an image showing Charlie Kirk being shot; and racist videos made with Google’s AI video model are garnering millions of views on social media.
The issue grows even more complex when Chinese image and video generators come into the picture. Many Chinese tech companies and apps, particularly those linked to ByteDance, offer easy ways to edit faces, voices, and videos, and those outputs have spread to Western social platforms. China has stronger synthetic-content labeling requirements than the U.S., which has none at the federal level; American users instead rely on fragmented and inconsistently enforced policies from the platforms themselves.
U.S. lawmakers have already passed some legislation seeking to rein in deepfake pornography, but its impact has been limited. The Take It Down Act, which became federal law in May, is meant to criminalize the creation and dissemination of non-consensual, sexualized imagery. But several provisions in the law make it difficult to hold image-generating platforms accountable, focusing most of the scrutiny on individual users instead.
Meanwhile, a number of states are trying to take matters into their own hands to protect users and elections. This week, New York Governor Kathy Hochul proposed legislation that would require AI-generated content to be labeled as such and would ban non-consensual deepfakes during specified periods leading up to elections, including depictions of opposing candidates.

