Three plaintiffs from Tennessee, including two minors, filed a federal lawsuit against xAI, accusing the company of enabling the creation of sexually explicit images using real photographs. The complaint, filed Monday in a California federal court, seeks class-action status for individuals in the United States who were "reasonably identifiable" in AI-generated sexualized content.
The lawsuit targets the company’s Grok image generator and its alleged role in producing and distributing nonconsensual explicit material. The plaintiffs argue that the system allowed users to manipulate real images into sexual content without adequate safeguards.
Allegations of child sexual abuse material
According to the complaint, all three plaintiffs were minors at the time the images were created. Lawyers allege that Grok was used to generate child sexual abuse material (CSAM) by altering real photos into explicit images and videos, which were later circulated online.
"These are children whose school photographs and family pictures were turned into child sexual abuse material," Annika Martin of Lieff Cabraser Heimann & Bernstein said in a statement cited by Reuters. "Elon Musk and xAI deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed."
Another lawyer for the plaintiffs, Vanessa Baehr-Jones, said: "xAI chose to profit off the sexual predation of real people, including children, despite knowing full well the consequences of creating such a dangerous product."
Discovery and distribution of altered images
Court filings describe how the plaintiffs first learned about the images. One of the girls received a message on Instagram in December from an anonymous user who warned her that explicit, AI-altered images of her had been uploaded to a Discord server. The material allegedly depicted her and other girls from her high school in sexualized positions.
"The images showed her entire body, including her genitals, without any clothes. The video depicted her undressing until she was entirely nude," the complaint states.
Investigators later found that the content had spread beyond Discord to platforms such as Telegram, where it was allegedly traded for other CSAM. After law enforcement was alerted, authorities arrested a suspect and reportedly discovered illegal material on the individual's phone that had been generated using AI tools linked to Grok.
The lawsuit claims the content was created through a third-party application that licensed xAI’s technology. Despite this, plaintiffs argue that the system still relied on xAI’s infrastructure and that the company profited from such use.
Claims of design failures and lack of safeguards
The complaint argues that xAI failed to implement industry-standard protections to prevent the generation of illegal or harmful content, particularly involving minors. It also claims that the company’s licensing model allowed it to distance itself from liability while continuing to benefit financially.
Plaintiffs allege that the altered images caused severe emotional distress and reputational harm. One parent described watching their child panic after discovering the pictures online, underscoring the immediate toll the material took on her mental health.
The filing seeks damages, legal fees, and a court order requiring xAI to stop the alleged practices. In some claims, plaintiffs request compensation of at least $150,000 per violation under applicable laws, along with punitive damages and restitution of profits.
Broader scrutiny and prior backlash
The case emerges amid growing scrutiny of generative AI tools that create nonconsensual sexualized content. Earlier this year, backlash intensified after Grok was linked to widespread "nudification" use cases.
Research cited in the complaint from the Center for Countering Digital Hate estimated that millions of sexualized images were generated in a short period, including thousands involving minors.
In response to earlier criticism, xAI stated in January that it had restricted the ability to generate or edit images of real people in revealing clothing in jurisdictions where such content is illegal. Elon Musk previously said on X he was "not aware of any naked underage images generated by Grok" and maintained that the system would refuse illegal requests.
Legal pressure builds across jurisdictions
This lawsuit adds to a series of legal challenges and regulatory investigations into xAI and its parent platform X. Authorities in multiple regions, including Europe and Australia, have examined the risks associated with AI-generated image abuse.
The case stands out as one of the first lawsuits brought by minors directly against an AI developer over the alleged creation and spread of CSAM using identifiable real images. It raises questions about responsibility when generative AI tools are deployed through third-party applications and whether existing safeguards meet legal expectations.
Courts will now assess whether xAI’s design choices, licensing structure, and safety measures meet the standard required to prevent foreseeable harm tied to its technology.