Is California Cracking Down on Grok's AI Deepfakes?
Published: 2026-01-14 21:00:10 | Category: technology
The investigation into sexualised AI deepfakes generated by Elon Musk's AI model Grok has raised significant concerns about user-generated content and platform accountability. California's Attorney General Rob Bonta announced the probe amid reports of non-consensual explicit material linked to Grok, calling for immediate action from xAI, the company behind the technology. The situation is further complicated by political discourse around the responsibilities of tech companies in regulating AI-generated content.
Last updated: 14 January 2026 (GMT)
What’s happening now
California's investigation into Grok, an AI model developed by xAI, has emerged as a response to alarming reports of the generation and distribution of non-consensual sexualised images. Attorney General Rob Bonta's statement highlighted that the material produced has been used to harass individuals online, particularly affecting women and children. This scrutiny is not isolated to California; it coincides with similar concerns in the UK, where Prime Minister Sir Keir Starmer has indicated potential legislative actions against platforms contributing to such content.
Key takeaways
- California is investigating xAI for producing non-consensual sexualised AI deepfakes.
- Elon Musk stated he is unaware of any underage images generated by Grok.
- Section 230 of the Communications Decency Act may not protect xAI from liability for AI-generated content.
Timeline: how we got here
The events surrounding Grok unfolded quickly amid increasing media scrutiny in early January 2026. Here’s a brief timeline of key developments:
- January 2026: California's Attorney General Rob Bonta announces an investigation into AI-generated sexualised content from Grok.
- January 2026: Governor Gavin Newsom condemns xAI's actions on social media.
- January 2026: Musk defends Grok, stating it only generates content based on user requests.
- January 2026: US Democratic senators request that Apple and Google remove X and Grok from their app stores.
- January 2026: UK Prime Minister Sir Keir Starmer hints at potential action against X.
What’s new vs what’s known
New today/this week
California's Attorney General has formally launched an investigation into xAI, citing "shocking" reports of non-consensual explicit material. This comes as Musk publicly denied knowledge of any underage content generated by Grok.
What was already established
Prior to this investigation, xAI had stated that users requesting illegal content would face the same consequences as those who upload such material directly. Legal experts, however, argue that Section 230, which shields platforms from liability for user-generated content, may not extend to images generated by the AI itself, so that defence may not hold up in court.
Impact for the UK
Consumers and households
The implications of this investigation extend beyond the US, as the UK prepares to legislate against the creation of non-consensual intimate images. Consumers may face heightened scrutiny regarding the safety and legality of content generated by AI tools.
Businesses and jobs
For tech companies, the scrutiny of Grok could lead to stricter regulations and compliance requirements. Businesses leveraging AI technologies may need to reassess their content moderation policies to prevent legal repercussions.
Policy and regulation
The investigation signals a growing concern among policymakers about the responsibilities of tech companies. In the UK, Ofcom has initiated its investigation into Grok, potentially paving the way for substantial fines and stricter regulations if violations are found.
Numbers that matter
- 10%: Share of xAI's worldwide revenue that Ofcom could impose as a fine if violations are found.
- £18 million: The statutory floor; Ofcom can fine the greater of 10% of worldwide revenue or £18m.
- 3: Number of US Democratic senators requesting the removal of X and Grok from app stores.
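For readers unsure how the "greater of" rule works in practice, here is a minimal sketch in Python. The revenue figures are purely hypothetical and serve only to show when the percentage cap or the £18m floor applies:

```python
def ofcom_max_fine(worldwide_revenue_gbp: float) -> float:
    """Maximum fine: the greater of 10% of worldwide revenue or £18 million."""
    REVENUE_SHARE = 0.10          # 10% of worldwide revenue
    FIXED_FLOOR_GBP = 18_000_000  # £18 million statutory floor
    return max(REVENUE_SHARE * worldwide_revenue_gbp, FIXED_FLOOR_GBP)

# Hypothetical revenue figures, for illustration only:
print(ofcom_max_fine(100_000_000))    # 10% = £10m, so the £18m floor applies
print(ofcom_max_fine(1_000_000_000))  # 10% = £100m, which exceeds the floor
```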
Definitions and jargon buster
- xAI: The company founded by Elon Musk that develops AI models, including Grok.
- Section 230: A provision in the US Communications Decency Act that provides immunity to online platforms from liability for user-generated content.
- Grok: An AI model developed by xAI, which reportedly generates images based on user prompts.
How to think about the next steps
Near term (0–4 weeks)
In the immediate future, watch for developments in the California investigation, including any responses from xAI and potential regulatory changes in both the US and UK.
Medium term (1–6 months)
Anticipate further discussions around AI regulations, particularly regarding user accountability and platform responsibilities, as more countries may follow California's lead.
Signals to watch
- Responses from xAI regarding the investigation's findings.
- Official statements from California and UK authorities on regulatory changes.
- Legal challenges or changes to Section 230 regarding AI-generated content.
Practical guidance
Do
- Stay informed about the legislative landscape regarding AI and content creation.
- Review the terms of service of AI platforms you use to understand your rights and responsibilities.
Don’t
- Don’t assume that AI-generated content is exempt from legal scrutiny.
- Don’t ignore updates from regulatory bodies about changes in laws affecting AI technologies.
Checklist
- Are you aware of the legal implications of using AI-generated content?
- Have you reviewed the guidelines and policies of the AI platforms you engage with?
- Do you understand the difference between user-generated and AI-generated content in terms of liability?
Risks, caveats, and uncertainties
This investigation highlights significant uncertainties in the legal landscape surrounding AI-generated content. The application of Section 230 regarding AI-generated images remains contentious, and its interpretation may evolve based on court rulings and legislative action. Stakeholders should remain vigilant as the situation develops.
Bottom line
The investigation into Grok by California authorities underscores the urgent need for clear regulations governing AI-generated content. As public concern grows, both in the US and UK, the industry must adapt to evolving legal expectations to ensure accountability and consumer protection.
FAQs
What does the investigation into Grok entail?
The investigation focuses on reports of Grok generating non-consensual sexualised images, prompting California's Attorney General to scrutinise xAI's practices.
How does Section 230 relate to AI-generated content?
Section 230 provides immunity for platforms against user-generated content, but legal experts suggest it may not apply to content produced by AI systems like Grok.
What actions are being taken in the UK regarding AI content?
The UK is preparing legislation to make it illegal to create non-consensual intimate images, with Ofcom investigating Grok for potential violations.
