Elon Musk said Wednesday he's "not aware of any naked underage images generated by Grok," hours before the California attorney general opened an investigation into xAI's chatbot over the "proliferation of nonconsensual sexually explicit material."
Musk's denial comes as pressure mounts from governments worldwide — from the U.K. and Europe to Malaysia and Indonesia — after users on X began asking Grok to turn images of real women, and in some cases children, into sexualized images without their consent. Copyleaks, an AI detection and content governance platform, estimated roughly one such image was posted every minute on X. A separate sample collected from January 5 to January 6 found 6,700 per hour over the 24-hour period. (X and xAI are part of the same company.)
"This material…has been used to harass people across the internet," said California Attorney General Rob Bonta in a statement. "I urge xAI to take immediate action to ensure this goes no further."
The AG's office will investigate whether and how xAI violated the law.
Several laws exist to protect targets of nonconsensual sexual imagery and child sexual abuse material (CSAM). Last year the Take It Down Act was signed into federal law, criminalizing the knowing distribution of nonconsensual intimate images — including deepfakes — and requiring platforms like X to remove such content within 48 hours. California also has its own series of laws, signed by Gov. Gavin Newsom in 2024, to crack down on sexually explicit deepfakes.
Grok began fulfilling user requests on X to produce sexualized images of women and children toward the end of the year. The trend appears to have taken off after certain adult-content creators prompted Grok to generate sexualized imagery of themselves as a form of marketing, which then led to other users issuing similar prompts. In various public cases, including well-known figures like "Stranger Things" actress Millie Bobby Brown, Grok responded to prompts asking it to alter real photos of real women by changing clothing, body positioning, or physical features in overtly sexual ways.
According to some reports, xAI has begun implementing safeguards to address the issue. Grok now requires a premium subscription before responding to certain image-generation requests, and even then the image may not be generated. April Kozen, VP of marketing at Copyleaks, told TechCrunch that Grok might fulfill a request in a more generic or toned-down way. They added that Grok appears more permissive with adult content creators.
"Overall, these behaviors suggest X is experimenting with multiple mechanisms to reduce or control problematic image generation, though inconsistencies remain," Kozen said.
Neither xAI nor Musk has publicly addressed the problem head on. Several days after the incidents began, Musk appeared to make light of the issue by asking Grok to generate an image of himself in a bikini. On January 3, X's safety account said the company takes "action against illegal content on X, including [CSAM]," without specifically addressing Grok's apparent lack of safeguards or the creation of sexualized manipulated imagery involving women.
That framing mirrors what Musk posted today, emphasizing illegality and user behavior.
Musk wrote he was "not aware of any naked underage images generated by Grok. Literally zero." That statement doesn't deny the existence of bikini pics or sexualized edits more broadly.
Michael Goodyear, an associate professor at New York Law School and a former litigator, told TechCrunch that Musk likely focused narrowly on CSAM because the penalties for creating or distributing synthetic sexualized imagery of children are greater.
"For example, in the United States, the distributor or threatened distributor of CSAM can face up to three years imprisonment under the Take It Down Act, compared to two for nonconsensual adult sexual imagery," Goodyear said.
He added that the "bigger point" is Musk's attempt to draw attention to problematic user content.
"Obviously, Grok does not spontaneously generate images. It does so only in accordance with user request," Musk wrote in his post. "When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately."
Taken together, the post characterizes these incidents as rare, attributes them to user requests or adversarial prompting, and presents them as technical issues that can be solved through fixes. It stops short of acknowledging any shortcomings in Grok's underlying safety design.
"Regulators could consider, with attention to free speech protections, requiring proactive measures by AI developers to prevent such content," Goodyear said.
TechCrunch has reached out to xAI to ask how many times it caught instances of nonconsensual sexually manipulated images of women and children, what guardrails specifically changed, and whether the company notified regulators of the issue. TechCrunch will update this article if the company responds.
The California AG isn't the only regulator trying to hold xAI accountable for the issue. Indonesia and Malaysia have each temporarily blocked access to Grok; India has demanded that X make immediate technical and procedural changes to Grok; the European Commission ordered xAI to retain all documents related to its Grok chatbot, a precursor to opening a new investigation; and the U.K.'s online safety watchdog Ofcom opened a formal investigation under the U.K.'s Online Safety Act.
xAI has come under fire for Grok's sexualized imagery before. As AG Bonta pointed out in a statement, Grok includes a "spicy mode" to generate explicit content. In October, an update made it even easier to jailbreak what few safety guidelines there were, resulting in many users creating hardcore pornography with Grok, as well as graphic and violent sexual images.
Many of the more pornographic images that Grok has produced have been of AI-generated people — something that many may still find ethically dubious, but perhaps less harmful to the individuals depicted in the images and videos.
"When AI systems allow the manipulation of real people's images without clear consent, the impact can be immediate and deeply personal," Copyleaks co-founder and CEO Alon Yamin said in a statement emailed to TechCrunch. "From Sora to Grok, we're seeing a rapid rise in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent misuse."

