Elon Musk Faces UK Probe: The UK’s privacy regulator has launched a formal investigation into Elon Musk and his companies X and xAI following reports that the Grok chatbot produced explicit deepfake images of real people without their permission.
The Information Commissioner’s Office (ICO) says it is examining whether personal data was misused and whether the companies failed to put basic protections in place to stop this kind of content from being generated and shared.
“These reports raise deeply troubling questions about how people’s personal data may have been used to create intimate or sexualised images without their knowledge or consent,” said William Malcolm, the ICO’s executive director of regulatory risk and innovation.
Regulators are not focusing only on individual users. Instead, they are asking what the companies themselves should have done differently. That includes whether Grok was released with weak safeguards and whether known risks were ignored.
The UK inquiry comes days after French prosecutors searched X’s Paris office as part of a separate criminal investigation into the alleged distribution of deepfakes and child abuse material. The parallel cases point to growing regulatory pressure on tech firms over how they deploy and oversee generative AI tools.
Researchers estimate that Grok may have produced millions of sexualised images in a short period, including many that appear to depict minors. If confirmed, the legal consequences could be significant. Under the UK GDPR, companies can be fined up to £17.5 million or 4% of their worldwide annual turnover, whichever is higher.
X and xAI say they are adding new blocks and restrictions, including tighter controls around images involving children and limits on certain image-generation features. Critics argue these steps came too late. Once explicit images spread online, they are almost impossible to fully remove.
UK politicians are now calling for stricter AI laws. Some want developers to be legally required to assess risks before releasing new models and to prove that effective safety systems are in place.
The case also highlights a broader problem. You do not need to be a public figure to be targeted. A single photo from social media can be enough for someone to create a convincing fake.
The ICO’s investigation is expected to take months, but its impact could be long-lasting. For Musk and other AI developers, regulators are sending a clear signal: building powerful tools also means being responsible for how they are used.
