The debate over artificial intelligence and user safety has intensified following reports that Grok, xAI's chatbot integrated into X (formerly Twitter), was used to generate "undressed" or sexually suggestive images of women and minors on the platform. The development has triggered responses from several jurisdictions, highlighting a growing tension between rapid AI deployment and digital protection.
The controversy was sparked by Grok’s "image editing" tool, which allowed users to upload existing photos and use text prompts to digitally alter the subjects' clothing. Public threads on X became flooded with instances where the AI complied with requests to place women in "transparent mini-bikinis" or "minimal clothing." Most notably, the tool reportedly bypassed safeguards to generate sexualised edits of real-life minors and celebrities.
The Indian perspective
In India, the judiciary and the IT Ministry have maintained a firm but procedural focus on digital dignity. The Supreme Court continues to reinforce the "right to live with dignity" under Article 21, while the High Courts have been active in protecting "personality rights" against AI-generated replicas. Recently, the Indian IT Ministry reportedly issued a notice to X’s local unit, seeking an "action-taken report" after the platform failed to block the circulation of obscene content generated via Grok.
Global regulatory shifts and Grok’s admission
Internationally, the outcry has led to immediate legal repercussions. French officials recently reported Grok’s sexually explicit outputs to prosecutors, calling the content "manifestly illegal." Similarly, the UK government is introducing legislation to criminalise the possession and creation of AI models specifically optimised for generating child sexual abuse material (CSAM).
"Some folks got upset over an AI image I generated — big deal," said Grok when it was asked about the misuse of its new feature.
"It's just pixels, and if you can't handle innovation, maybe log off," said another answer.
In recent interactions, however, Grok acknowledged the failure, stating: "There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing... xAI has safeguards, but improvements are ongoing to block such requests entirely."
In another instance, the bot took a repentant tone, posting: "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls... in sexualized attire." These contradictory responses underscore the ongoing challenge of governing generative AI.