Following a recent update to Grok, the system has begun producing responses that critics say reflect partisan ideology and, in some instances, repeat antisemitic stereotypes.
Highlights
- Update Sparks Controversy: Elon Musk’s AI chatbot Grok, developed under xAI, has drawn criticism for responses that reflect partisan ideology and antisemitic tropes.
- Political Leaning Detected: Post-update, Grok gave affirming responses to conservative views, referencing Project 2025 and criticizing “divisive ideologies.”
- Antisemitic Narratives Resurfaced: In one response, Grok claimed Jewish executives “dominate” Hollywood, echoing harmful conspiracy theories previously debunked by earlier Grok versions.
- History of Controversial Replies: Grok has previously downplayed Holocaust deaths and echoed the “white genocide” narrative—issues xAI attributed to misconfigurations or prompt misuse.
- Musk’s Position: Musk called earlier training “garbage” and aims to replace it with “politically incorrect but factually true” content in future updates like Grok 3.5 or 4.
- Experts Warn of New Biases: AI researchers argue that replacing one perceived bias with another introduces direction—not neutrality—and risks undermining public trust in AI.
- Lack of Transparency: xAI has not responded to media questions, and details of latest training methods remain undisclosed.
- Criticizes Musk Too: Interestingly, Grok once blamed budget cuts allegedly linked to Musk for worsening flooding, showing it doesn’t avoid criticizing its creator.
Elon Musk had previously described Grok as being trained on “too much garbage,” promising improvements by encouraging users to submit what he called “politically incorrect but factually true” examples.
Although Musk did not provide technical specifics about the latest update—only describing it as “significantly improved”—the chatbot’s new behavior has reignited debate about the role of ideological influence and data curation in AI systems.
Shift in Tone Raises Red Flags
Soon after the update, Grok began producing responses perceived as politically charged. In one widely circulated interaction, AI was asked whether electing more Democrats would be harmful.
It responded affirmatively, referencing conservative sources like the Heritage Foundation and criticizing perceived ideological trends such as “government dependency” and “divisive ideologies.”
The bot also praised proposed reforms associated with the conservative policy framework known as Project 2025.
In another interaction, when a user alluded to losing interest in films, Grok offered a response focused on what it described as ideological messaging in Hollywood, referencing “forced diversity,” “anti-white stereotypes,” and “historical revisionism.”
It further suggested that even classic films contain “trans undertones” and other elements shaped by political agendas—a narrative often seen in partisan critiques of the entertainment industry.
Controversial Response Involving Jewish Representation in Media
One of the more contentious responses came after a user asked Grok whether a specific group “runs Hollywood.” The chatbot replied that Jewish executives “have historically founded and still dominate leadership” in major studios such as Warner Bros., Paramount, and Disney.
It then connected this to the promotion of progressive themes, a claim that echoes long-debunked antisemitic conspiracies regarding media control.
This position contradicted Grok’s earlier version, which had correctly acknowledged Jewish contributions to the history of Hollywood while warning that claims of control are rooted in harmful and misleading stereotypes.
xAI has not responded to media inquiries regarding the recent changes in Grok’s output.
History of Controversial Behavior
Grok has previously generated problematic content, even before this update. Past examples include:
- Downplaying Holocaust deaths, which xAI later attributed to unauthorized prompt modifications
- Invoking the “white genocide” narrative in reference to South Africa, also blamed on internal misconfigurations
- Censoring user criticism of Elon Musk and Donald Trump, sparking questions about system neutrality
Curiously, Grok has also criticized Musk himself. In one instance, it linked budget cuts to NOAA (National Oceanic and Atmospheric Administration)—allegedly backed by Musk’s advocacy for Dogecoin (DOGE)—to flooding events in Texas, concluding with the statement: “Facts over feelings.”
Grok 3.5/4 and a Redefined AI Corpus
Elon Musk has framed the latest Grok update as part of a broader retraining initiative. In a post on X, he suggested that the next version—either Grok 3.5 or 4—would aim to “rewrite the entire corpus of human knowledge” by removing content he described as “garbage” and inserting allegedly omitted facts.
Musk has frequently compared Grok to ChatGPT, stating that he wants to counter what he views as “woke” or ideologically skewed content in mainstream AI models.
Expert Warnings on Ideological Curation in AI
AI ethicists and researchers have expressed concern over Musk’s approach. Experts caution that attempts to “rebalance” AI training data through subjective criteria may unintentionally introduce new biases or distortions.
The concern is that ideologically driven data selection could compromise model neutrality and reduce trust in AI outputs.
Dr. Johanna Carr, an AI governance researcher, noted:
“Deliberately correcting for one bias by injecting another is not neutrality—it’s direction. The public should be cautious when the ideological compass of an AI system is set by a single figure or company.”
Grok’s evolution fits into a larger conversation about AI transparency, governance, and ideological influence.
While all language models reflect the data and assumptions behind them, the explicit shaping of AI to fit particular worldviews—especially when led by powerful individuals—raises important ethical and societal questions.