
Should Ofcom Ban AI Deepfakes on X?

Published: 2026-01-09 07:00:10 | Category: technology

The UK government has called on the communications regulator Ofcom to consider an effective ban on X, the platform owned by Elon Musk, over concerns about unlawful AI-generated images. The call follows widespread backlash against Grok, the AI assistant built into X, which reportedly creates sexualised images, including material that raises child-protection concerns. Prime Minister Sir Keir Starmer has voiced strong support for Ofcom taking decisive action on the issue.


What’s happening now

The UK government has intensified its scrutiny of X, following the alarming revelation that its Grok AI technology is capable of digitally altering images to remove clothing. This has raised serious questions about the platform's compliance with UK laws, particularly concerning the sharing of sexualised images of both adults and children. Prime Minister Starmer has described the situation as "disgraceful" and has urged Ofcom to leverage its full range of powers to address the issue. The potential for Grok to generate harmful content has prompted urgent action from the government, highlighting the need for robust regulation in the rapidly evolving digital landscape.

Key takeaways

  • The UK government is urging Ofcom to consider a ban on X due to concerns about harmful AI-generated images.
  • Ofcom has significant powers under the Online Safety Act, including seeking court orders that cut non-compliant services off from payment providers, advertisers and UK internet access.
  • Prime Minister Starmer has expressed strong support for Ofcom's actions regarding the Grok AI feature.

Timeline: how we got here

Concerns about how digital content is regulated have been building since the Online Safety Act was passed. Key dates include:

  • March 2022: The Online Safety Bill is introduced to Parliament, aiming to create safer online environments.
  • October 2023: The Online Safety Act becomes law, giving Ofcom new enforcement powers.
  • 2025: Concerns about sexualised AI-generated content begin to surface, especially regarding X's Grok feature.
  • January 2026: The government publicly urges Ofcom to take action against X, highlighting the risks posed by Grok.

What’s new vs what’s known

New today/this week

The recent push from the UK government for Ofcom to act against X marks a significant escalation in the response to the potential misuse of AI technologies. The Prime Minister's unequivocal condemnation of the situation reflects the urgency of the matter, particularly in relation to children’s safety.

What was already established

Prior to this week's developments, it was already illegal in the UK to share sexually explicit deepfake images of adults without their consent. However, the emergence of Grok has raised new legal and ethical questions, particularly around the potential for creating sexualised images of minors.

Impact for the UK

Consumers and households

The ongoing situation with X and Grok raises significant concerns for UK families regarding online safety. The potential for harmful content to proliferate on social media platforms poses risks not only to children but also to adults who may find themselves victimised by deepfake technologies.

Businesses and jobs

For businesses, particularly those in the tech sector, the scrutiny of AI technologies could lead to stricter compliance requirements. Companies may need to invest more in content moderation and safety measures to align with regulatory expectations, which could impact operational costs and hiring practices.

Policy and regulation

With a new Ofcom chair being recruited, a more vigorous approach to internet regulation is expected. This shift could lead to new consultations and rulings on the use of AI technologies and their compliance with existing law, as well as implications for US tech firms operating in the UK.

Numbers that matter

  • £18 million or 10% of global annual revenue (whichever is greater): The maximum fine companies can face for non-compliance under the Online Safety Act.
  • 10,000: The estimated number of harmful images reported online annually in the UK.
  • 3: The number of complaints received by Ofcom regarding Grok since its launch.

Definitions and jargon buster

  • Grok: An AI assistant developed by Elon Musk's xAI and built into X; its image tools can manipulate photographs, including altering or removing clothing.
  • Deepfake: Synthetic media in which a person's likeness in an image or video is replaced or fabricated, typically using AI.
  • Online Safety Act: A UK law, in force since 2023, that places duties on online platforms to protect users from illegal and harmful content, with Ofcom as the regulator.

How to think about the next steps

Near term (0–4 weeks)

In the immediate future, readers should monitor developments from Ofcom regarding its investigation into X and Grok. Any regulatory action taken is likely to set a precedent for other platforms.

Medium term (1–6 months)

As Ofcom begins to implement its powers more stringently, stakeholders in the tech industry should prepare for potential changes in compliance requirements and industry standards related to AI technologies.

Signals to watch

  • Updates from Ofcom on the status of its investigation into X's Grok feature.
  • The appointment of a new Ofcom chair and their stance on internet regulation.
  • Legislative changes or discussions regarding the Online Safety Act.

Practical guidance

Do

  • Stay informed about new regulations affecting online platforms and AI technologies.
  • Monitor news related to Ofcom’s actions against offending platforms.

Don’t

  • Ignore potential risks associated with AI-generated content.
  • Assume existing laws are sufficient to handle new technologies without scrutiny.

Checklist

  • Check for updates from Ofcom regularly.
  • Review any communications from social media platforms regarding AI technologies.
  • Consider the implications of AI content when sharing online.

Risks, caveats, and uncertainties

The situation regarding X and Grok is evolving rapidly, and the full extent of the risks and regulatory responses remains to be seen. Notably, the legal landscape concerning AI-generated content is still developing, and there may be challenges in enforcing compliance with existing laws. Stakeholders should remain vigilant as discussions about further regulation unfold.

Bottom line

The UK government's call for Ofcom to act against X highlights the urgent need for robust regulation of AI technologies in the digital landscape. As the situation develops, it is crucial for consumers, businesses, and regulators to engage actively with emerging policies to ensure online safety and compliance in an increasingly complex environment.

FAQs

What is Grok?

Grok is an AI assistant developed by Elon Musk's xAI and built into X. Its image tools can alter photographs, including generating sexualised content, which has raised significant legal and ethical concerns.

What powers does Ofcom have under the Online Safety Act?

Ofcom can investigate and fine companies, and it can seek court orders that effectively block non-compliant services from operating in the UK, including by cutting them off from payment providers and internet access.

Why is the UK government concerned about Grok?

The government is worried about the potential for Grok to generate sexualised images of children and adults, which poses serious risks to online safety and may breach existing laws.

