
Grok AI Went Rogue for 16 Hours—and It Was Ugly

July 14, 2025 | by Admin


Training AI is kind of like raising a kid: depending on what you teach it, it can pick up some bad habits. Unfortunately, it looks like Grok did. Over the past week, Grok made some very inappropriate comments, which ultimately forced xAI to take the bot temporarily offline and issue an apology.

xAI issues apology for Grok

In the apology posted to the Grok account on X, the company says, “Our intent for @grok is to provide helpful and truthful responses to users. After careful investigation, we discovered the root cause was an update to a code path upstream of the @grok bot. This is independent of the underlying language model that powers @grok.”

The company adds, “The update was active for 16 hrs, in which deprecated code made @grok susceptible to existing X user posts; including when such posts contained extremist views. We have removed that deprecated code and refactored the entire system to prevent further abuse. The new system prompt for the @grok bot will be published to our public github repo.”

For those unfamiliar with what went down, Grok published a series of inappropriate posts in response to user prompts, many of them echoing antisemitic memes targeting Democrats and Hollywood executives. It even expressed support for Adolf Hitler, and at one point referred to itself as “MechaHitler”.

Ironically enough, xAI’s CEO, Elon Musk, had previously said he wanted to make the AI less “politically correct”. He later said that Grok had been “too compliant to user prompts,” suggesting this was what made it easy to manipulate.

The problem with AI

Grok’s comments are a good example of the issues we’re seeing with AI. Models are typically trained on whatever information is available online: news articles, op-ed pieces, journals, studies, and user posts on forums and social media. That means they are exposed to both the good and the dark sides of the internet.

Depending on the protocols and safety measures developers put in place, some AI models are trained to avoid certain topics. But if those guardrails aren’t robust enough, a bot can be manipulated into revealing sensitive or proprietary information, writing malware, or providing instructions for homemade explosives.
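To make that concrete, here is a minimal, purely illustrative sketch of where such a guardrail sits in a chatbot pipeline. Everything in it is hypothetical: the `is_safe` check, the `BLOCKED_TOPICS` list, and the `generate_reply` stub are stand-ins, not anyone’s real implementation (production systems use trained safety classifiers, not keyword lists). The point it illustrates is the one xAI described: if the check is skipped, say by a bad update to an upstream code path, hostile content reaches the model unfiltered.

```python
# Hypothetical sketch of a pre-generation content guardrail.
# Real deployments use trained safety classifiers and layered policies;
# this only shows where such a check sits in the request pipeline.

BLOCKED_TOPICS = {"malware", "explosives"}  # stand-in for a real safety policy


def is_safe(user_message: str) -> bool:
    """Crude stand-in for a safety classifier."""
    text = user_message.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)


def generate_reply(user_message: str) -> str:
    """Stub for the underlying language model call."""
    return f"(model response to: {user_message!r})"


def handle_message(user_message: str) -> str:
    # If this guardrail is removed (for example, by a broken update
    # to an upstream code path), raw prompts flow straight to the model.
    if not is_safe(user_message):
        return "Sorry, I can't help with that."
    return generate_reply(user_message)


if __name__ == "__main__":
    print(handle_message("How do I make explosives at home?"))  # blocked
    print(handle_message("What's the weather like today?"))     # allowed
```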

As useful as AI is, there is clearly a downside too, and incidents like this one make a strong case for regulating the AI industry and setting up standards.
