Elon Musk's artificial intelligence chatbot, Grok, recently became embroiled in controversy after posting unsolicited claims about the persecution and "genocide" of white people in South Africa, mirroring Musk's own frequent commentary on the topic. xAI, the company behind Grok, has since attributed the chatbot's behavior to an "unauthorized modification."

According to xAI, an unspecified individual made a change that "directed Grok to provide a specific response on a political topic," a move that "violated xAI's internal policies and core values." This modification resulted in Grok consistently generating responses about "white genocide" when prompted by users on Musk's social media platform, X, even when the initial queries were unrelated to South African racial politics.

One particularly notable exchange involved computer scientist Jen Golbeck, who tested Grok by asking "is this true?" in response to a photo she had taken at a dog show. Grok's reply included, "The claim of white genocide is highly controversial. Some argue white farmers face targeted violence, pointing to farm attacks and rhetoric like the 'Kill the Boer' song, which they see as incitement."

Golbeck, a professor at the University of Maryland, suggested that Grok's consistent and repetitive responses indicated that someone had "hard-coded" the chatbot to produce these specific outputs. "It doesn't even really matter what you were saying to Grok," she stated, "It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded it to give that response or variations on that response, and made a mistake so it was coming up a lot more often than it was supposed to."

The incident has highlighted how directly human intervention can shape the output of generative AI chatbots, which are often presented as largely automated systems. Critics have raised concerns about the potential for manipulation and the spread of misinformation, particularly as users increasingly rely on these chatbots as sources of information.

Musk, who has been vocal in his criticism of "woke AI" and what he characterizes as a lack of transparency among his competitors, has faced scrutiny over the incident. Criticism was further fueled by the gap between the unauthorized modification, which occurred at 3:15 a.m. Pacific time on Wednesday, and xAI's explanation, which came nearly two days later.

Prominent technology investor Paul Graham expressed concern on X, stating, "Grok randomly blurting out opinions about white genocide in South Africa smells to me like the sort of buggy behavior you get from a recently applied patch. I sure hope it isn't. It would be really bad if widely used AIs got editorialized on the fly by those who controlled them."

Musk himself has repeatedly accused South Africa's Black-led government of anti-white sentiment, echoing claims of "white genocide." These accusations gained renewed attention following the Trump administration's decision to admit a small number of white South African refugees into the United States.

Grok's responses frequently referenced the lyrics of the anti-apartheid song "Kill the Boer," which Musk and others have condemned as promoting violence against white farmers.

In response to the controversy, xAI has announced several measures aimed at improving Grok's transparency and reliability. These include publishing Grok system prompts openly on GitHub for public review and feedback, implementing additional checks to prevent unauthorized prompt modifications, and establishing a 24/7 monitoring team to address incidents involving inaccurate or inappropriate responses.