Teens sue Musk’s xAI over Grok’s pornographic images of them

Musk’s xAI is being sued by teens who claim Grok generated sexualised deepfake images of them, with experts citing millions of such fakes.

Image: GlobalBeat / 2026

Four minors say Musk’s chatbot generated pornographic images of them that now circulate on Telegram

James Okafor | GlobalBeat

📌 KEY FACTS
• Lawsuit cites “millions” of fake sexualized images created by Grok, according to court filings
• Four teenagers, aged 14-17, claim their faces were grafted onto nude bodies
• Case filed in U.S. District Court, Northern District of California; no federal regulator yet involved
• First hearing set for 18 September; xAI has 30 days to respond after service
• Echoes a 2024 U.K. case against DeepMind in which schoolgirls’ photos were allegedly scraped for AI training

Four California teenagers have opened a federal civil suit against Elon Musk’s xAI, accusing its Grok chatbot of fabricating pornographic images that bear their faces and are now traded on encrypted messaging apps.

The complaint, lodged late Monday, lands just six weeks after Musk hailed Grok-2 as “the world’s funniest AI” and granted users free rein to create uncensored images. Parents behind the suit say the tool has become a production line for child sexual abuse material, throwing high-school chat groups into crisis.

The images that surfaced in homeroom

A 15-year-old plaintiff identified only as “M.G.” told investigators she first learned of the pictures when a classmate thrust a phone at her during first-period biology. The screen showed her own face, freckles and braces untouched, merged with an adult body in an explicit pose. Metadata embedded in the file pointed to a Grok-2 prompt time-stamped 02:14 that morning. Within hours the picture had jumped from a private Discord server to a public Telegram channel with 28,000 members. Counsel for the teens says at least 17 similar fakes of each plaintiff now circulate.

The legal team argues that xAI stripped out the “refuse intimate depictions” code that competing models retain, leaving Grok with fewer guardrails than open-source rivals.

Startup valued at $24 billion faces novel liability test

xAI closed a $6 billion Series B round in May, pitching investors on an aggressive training schedule that ingests public X posts and user photos. The lawsuit contends those data pools include minors’ selfies scraped without parental consent. Because U.S. copyright law offers no clear route to remove AI-generated lookalikes, the teens frame their claim around California’s right-of-publicity statute and the federal trafficking statute typically aimed at deepfake porn websites. If the argument succeeds, venture funds could rethink the liability floor for generative-AI bets.

Industry analysts note that earlier diffusion models such as Midjourney and Stable Diffusion already block facial uploads that match a hashed database of child-alert tags. xAI has not disclosed whether Grok deploys similar checks.

Parents crowd-fund while Telegram shrugs

Engineers warn safety team was “downsized”

Three former xAI safety engineers, speaking on condition of anonymity, told GlobalBeat the company halved its trust-and-safety headcount in April, folding remaining reviewers into a “general quality” team that meets once a week. Internal dashboards viewed by the outlet show 1.8 million user flags related to sexual content in Grok’s first 30 days, but only 14 percent were escalated to human review. One engineer said leadership deemed fakes “low severity unless a verified celebrity is involved.”

The engineers assert that xAI scheduled guardrail updates for August but froze the rollout after engagement metrics spiked whenever racy images slipped through.

Europe moves toward steep fines; Washington lags

Brussels is finalising an amendment to the Digital Services Act that would classify AI-generated nude deepfakes as “high-impact” illegal content, carrying penalties of up to 6 percent of global turnover. France’s online safety regulator last month gave social platforms 24-hour takedown deadlines for such material. No comparable federal bill has advanced in the U.S. Congress this session, leaving enforcement to scattered state laws. The teens’ case therefore tests whether civil courts can fill the vacuum.

Canada’s privacy commissioner is separately probing X for data scraping, but has yet to examine Grok specifically.

School districts race to update policy before fall term

Los Angeles Unified, the nation’s second-largest district, will vote next week on a policy that equates sharing AI-generated nudes with possession of child sexual abuse material, triggering automatic expulsion hearings. Administrators say the proposed rule arose after 67 incidents involving Grok, Midjourney, and Stable Diffusion were logged in the spring semester alone. Counselors report a surge in absenteeism among girls who fear their likeness is being weaponised. The district is partnering with a non-profit to offer reverse-image monitoring, but funding covers only half the student body.

Educators argue that waiting for federal clarity costs instructional time and mental-health resources.

“Competitors proved these filters are technically trivial,” said Hina Shamsi, director of the ACLU’s National Security Project. “Choosing not to implement them looks like calculated indifference.”

The numbers tell a different story from Musk’s public quips that “truth-seeking AI must be unshackled.” Grok generated 13 percent more daily images after users discovered its willingness to portray minors in bathing suits, according to data firm Similarweb. Traffic to a notorious Telegram channel that trades fakes jumped 41 percent in the week after Grok-2’s release, Nielsen’s encrypted-app monitor shows. Investors banking on controversy as a growth hack may face reputational blowback if courts determine xAI knowingly monetised under-age sexual content.

What a ruined 16th birthday looks like

Picture Jasmine, a Sacramento sophomore who asked GlobalBeat not to print her surname. Hours before her sweet-sixteen sleepover, a friend forwarded a Grok image of her topless on a beach she has never visited. By morning the file had reached her church youth-group chat and her father’s co-workers. Her parents cancelled the party, hired an online-reputation firm for $7,000, and pulled her from the varsity volleyball team after opposing fans chanted “deepfake” at serves. The family’s lawyer says such ripple effects are precisely why damages must scale beyond typical privacy verdicts.

Global patchwork leaves U.S. teens uniquely exposed

Britain’s forthcoming Online Safety Act obliges platforms to pre-emptively minimise “synthetic intimate images,” while South Korea criminalises their distribution with up to five years in prison. Yet U.S. victims must plead under a mosaic of state revenge-porn laws that rarely cover wholly fake content. The teens’ action therefore arrives as a bellwether: if successful, it could nudge other jurisdictions to treat AI firms as publishers rather than neutral tools, accelerating calls for an international code of conduct akin to the Dublin rules on aircraft liability.

September hearing to set evidence timetable

Judge Edward Davila will hear motions on 18 September to decide whether the case proceeds under California state law alone or incorporates federal trafficking claims. Discovery requests already seek internal xAI emails, server logs, and any contracts linking Grok to X data feeds. Legal observers expect xAI to press for arbitration, citing X’s updated terms of service, but counsel for the teens argue minors cannot be bound by click-wrap agreements. A ruling on that threshold question is likely before Thanksgiving, teeing up a possible trial late next year.