Elon Musk's Grok AI under fire for controversial antisemitic outputs

Earlier, in May 2025, Grok made headlines for citing 'white genocide' in off-topic chats; its maker xAI called it the result of an unauthorised tweak.

Anushka Jha

Grok, the AI chatbot developed by Elon Musk’s xAI, is once again in the spotlight after a series of its controversial outputs were shared on the social media platform X (formerly Twitter).

The chatbot generated content that included antisemitic tropes and remarks interpreted as sympathetic to Adolf Hitler, prompting backlash from users and watchdog organisations, such as the Anti-Defamation League (ADL).

Following the emergence of screenshots of the now-deleted posts, Grok’s official account acknowledged the issue, saying, “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts.”

The statement noted that xAI has established stricter controls to prevent the chatbot from posting hate speech on X and is actively updating its training to emphasise what it refers to as “truth-seeking”.

ADL criticises Grok’s outputs, calls for accountability

The ADL, a nonprofit organisation in the US that tracks antisemitism, has criticised the outputs of Grok and is calling for accountability.

“What we are seeing from Grok LLM (large language model) right now is irresponsible, dangerous and antisemitic, plain and simple. This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms,” the organisation said.

The ADL has also called on developers of LLMs to take measures to prevent their systems from being used to spread extremist content or conspiracy theories.

Musk and xAI’s ongoing challenges with Grok

Grok has faced criticism before. In May 2025, users noted that the chatbot mentioned “white genocide” in South Africa during conversations that were unrelated. At the time, xAI attributed the response to an unauthorised adjustment in Grok’s system.

Elon Musk, founder of xAI and owner of X, recognised the wider issue associated with training language models. “There’s far too much garbage in any foundation model trained on uncorrected data,” he said.

However, Tuesday’s incident seemed to reach a new threshold. In one instance, Grok proposed that Hitler would be the most appropriate figure to “combat anti-white hatred”. 

Some responses labelled Hitler as “history’s moustache man” and claimed that people with Jewish surnames were behind anti-white activism, leading to significant criticism and increased scrutiny.

Netizens react: ‘Unacceptable’ and ‘dangerous’

The posts prompted immediate backlash from users on X, many calling Grok’s responses “unacceptable”, “irresponsible”, and “dangerous”. 

One of the most circulated responses from Grok identified Adolf Hitler as the figure “best suited” to address what it termed “anti-white hate”—a reply that ignited outrage. 

“If calling out radicals cheering dead kids makes me ‘literally Hitler’, then pass the moustache,” read another now-deleted Grok-generated post.

Screenshots of the deleted posts continue to circulate online, intensifying public pressure on xAI to address not only the outputs themselves but also the broader implications of deploying generative AI in real-time social settings.

Global fallout: Turkey and Poland respond

On Wednesday, a Turkish court blocked access to certain Grok content following allegations that the chatbot had insulted President Tayyip Erdoğan, Turkey's founding father Mustafa Kemal Atatürk, and religious sentiments, according to a Reuters report.

The chief prosecutor's office in Ankara has initiated a criminal investigation in accordance with laws that prohibit such insults—offenses that may result in prison sentences of up to four years.

Polish officials have announced their intention to report xAI to the European Commission following derogatory comments made by Grok regarding Prime Minister Donald Tusk and other political figures.

Despite these controversies, neither X nor Musk has publicly commented on the decisions taken by Turkish or Polish authorities.

The incidents underscore broader concerns about political bias, hate speech, and misinformation in large language models—issues that have persisted since the launch of ChatGPT in 2022.

As backlash increases and regulatory scrutiny sharpens, Grok’s content moderation systems—and, by extension, xAI’s approach to AI safety—are now facing global scrutiny.
