Grok AI Faces Backlash: New Restrictions to Combat Sexualized Deepfakes
In response to growing outrage over the use of its AI tool Grok to generate sexualized deepfake images of real people, Elon Musk’s X has announced that it will block editing features that create images of people in revealing clothing in regions where such content is illegal. The public outcry intensified after concerns were raised about AI-generated content depicting minors and adults in explicit scenarios, prompting officials from several countries, including the UK, to demand that X regulate the tool more stringently.
In its statement, X said it would geoblock the generation of such images in affected jurisdictions, ensuring that Grok does not allow edits depicting real individuals in bikinis, underwear, or similar attire. The decision came shortly after California’s attorney general opened an inquiry into Grok, focusing on instances in which its image-editing features had been abused.
The company emphasized that Grok’s capabilities would be limited to fictional characters, in line with what is typically permissible in R-rated films, and clarified that only paid users would retain access to the editing features, a measure intended to make users accountable for any breaches of law or platform policy. Musk defended the tool, suggesting that the criticism is an attempt to suppress free speech, though he has himself faced backlash for promoting AI-generated images mocking political figures such as UK Prime Minister Sir Keir Starmer.
Internationally, governments including Malaysia and Indonesia have banned Grok outright over non-consensual edits, underscoring global concern about the tool’s potential for misuse. The UK’s media regulator, Ofcom, announced plans to investigate whether X has violated local laws pertaining to these images, with potential fines of up to £18 million or 10% of X’s global revenue.
Prime Minister Sir Keir Starmer condemned the development as disgusting and shameful, and hinted that legislative measures would be considered should X fail to adequately police its AI tools. While X aims to apply the new safeguards across all user accounts, including paid subscribers, experts are skeptical that the policies can be enforced, particularly when it comes to distinguishing real individuals from fictional representations. Many observers, including policy researchers, argue that these changes should have come far earlier to prevent such abuses.