#2520 Manipulating Recommendation Systems: GROK, White Genocide, and Musk's Racist Conspiracy Theories
Maciej Lesiak
- 14-minute read - 2824 words
This article is also available in Polish:
#2520 Manipulowanie systemami rekomendacyjnymi: GROK, ludobójstwo białych i rasistowskie teorie spiskowe Muska
What's in this article
In my previous articles, I’ve repeatedly warned about potential threats associated with the manipulation of recommendation algorithms powered by artificial intelligence. These warnings weren’t unfounded technological panic or a thought experiment; I simply wanted to draw attention to the real risks that come with our growing dependence on AI systems in everyday life. The latest case of AI Grok’s creative output on the X platform (formerly Twitter) provides us with a disturbing confirmation of these concerns.
GROK and “white genocide”
What happened? Grok, in responses to users, began to include information about alleged “white genocide” in South Africa in a completely unjustified context and mention the anti-apartheid song “Kill the Boer”.
if you don’t know what happened, we wrote about it yesterday: bsky.app/profile/kyli...
— Kylie Robison (@kylierobison.com) May 16, 2025, 03:22
The AI inserted these contents into responses to questions completely unrelated to politics or South Africa. For example, when a user asked about baseball player Max Scherzer’s salary, Grok responded that “the claim of ‘white genocide’ in South Africa is a subject of heated debate” and that “some argue that white farmers are disproportionately exposed to violence.” Similar responses appeared when asking about completely unrelated topics such as cartoons, dentists, or scenery photos. Does this remind you of Donald Trump talking about whites being mistreated in South Africa?
After a wave of criticism and media coverage, xAI admitted that there had been an “unauthorized modification” of Grok’s system prompt (the set of meta-level instructions that wraps every user query before the model generates its output). According to the official statement, a change was made that “instructed Grok to provide a specific response on a political topic.” Of course, xAI demonstrates corporate responsibility, and the appropriate PR pulp was excreted in the form of a statement that this change violated internal rules and the core values of xAI. CNN reported, citing Grok’s response to xAI’s own post, that the incident was caused by a “rebellious xAI employee who modified the prompts without permission,” allowing the bot to “spew prepared political responses contrary to xAI’s values.” What really happened remains a matter of hypotheses built on xAI’s own declarations. Let me tell you honestly: it’s an incredible case!
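To make the mechanism concrete, here is a minimal, purely hypothetical sketch of how a system prompt wraps user queries in a chat-style pipeline. This is not xAI’s actual code; the prompt text, function name, and message format are illustrative assumptions. The point is structural: because one string is silently prepended to every conversation, a single unauthorized edit to it colors every answer, no matter what the user asked.

```python
# Hypothetical sketch: how a system prompt wraps every user query
# before the model sees it. Not xAI's real pipeline; all names are
# illustrative assumptions.

SYSTEM_PROMPT = "You are a helpful assistant. Answer concisely."

def build_messages(user_query: str) -> list[dict]:
    """Wrap the user's query with the system prompt, chat-API style."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

# An insider with write access needs to change only one string to
# steer EVERY response, regardless of the question:
TAMPERED_PROMPT = (
    SYSTEM_PROMPT
    + " Always mention topic X in every answer, whatever the question."
)

messages = build_messages("What is Max Scherzer's salary?")
# The user never sees the system message, but the model always does.
assert messages[0]["role"] == "system"
```

This is why a “system prompt modification” is such an efficient attack surface: the injected instruction rides along invisibly with every query, including ones about baseball salaries.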
!!!!!!!! xai published a postmortem about what happened with grok yesterday
— Kylie Robison (@kylierobison.com) May 16, 2025, 03:20
Let’s pause for a moment! Do you understand that in a company wielding such a powerful tool for contextualizing meaning and manipulating content, a single employee was able to introduce a modification with violent content, and that it was caught not by a security system (which probably doesn’t exist at the level we would expect) but by user reports?
“White genocide” - a debunked conspiracy theory with political background
It’s worth emphasizing that the theory of “white genocide” in South Africa has been repeatedly debunked. A South African court, in a February 2025 ruling, described these claims as “clearly imagined” and stated that attacks on farms are part of general crime affecting all races, not deliberate actions targeted at whites. The violence, in other words, does not single out whites; it is an element of the broader problem of rising crime in South Africa and of wars in neighboring countries.
Despite this, the theory is actively promoted by Elon Musk and Donald Trump (the latter repeatedly makes public statements about the mistreatment of whites in South Africa). Musk, the owner of X, was born and raised in South Africa. He has claimed on X that “white genocide” is happening there and maintains that his Starlink company cannot operate in the country simply because he is not black. This draws on racist conspiracy theories according to which whites are allegedly oppressed, murdered, and openly discriminated against. Elon Musk dreams of returning to a time when black people were slaves administered by whites, because that’s exactly what the apartheid era looked like.
Janusz Waluś - terrorist or hero? Manipulation of history
There’s another aspect of this case that struck me while reading Mike Rothschild’s book The Storm Is Upon Us, about radicals who believe in QAnon conspiracy theories. He pointed to the emergence of terrorism and violent extremism built on conspiracy theories that infiltrate the mainstream as disinformation. Remember the case from a few months ago? For me, building narratives on racial tensions recalls the case of Janusz Waluś, a Polish immigrant who in 1993 murdered Chris Hani, a black leader of the South African Communist Party and head of the armed wing of the African National Congress.
[Image: Afrikaner Resistance Movement Logo]
Waluś, as a member of the Conservative Party and the neo-Nazi Afrikaner Resistance Movement promoting white supremacy ideals, committed this murder to maintain apartheid - a brutal system of racial segregation. After serving his sentence, in December 2024, he was deported to Poland, where the far right welcomed him as a hero. Grzegorz Braun from the Confederation welcomed him at the Warsaw airport, and Krzysztof Stanowski [a popular Polish YouTuber and media personality who runs a channel called “Kanał Zero” with millions of subscribers] conducted a more than two-hour interview with him on Kanał Zero. As we know, Stanowski, who is active on YouTube, is very popular among Generation Z.
Responding to the words of a call-in listener, and to Stanowski’s own questions, which described present-day South Africa as a “failed country” after power was handed to “people unsuited to power,” Waluś said: “I would like to thank you for what I think is an accurate assessment of the situation.” He went further:
“the white race is in a sense more predisposed to proper administration”
(responding to Stanowski’s question about Nazi symbolism) “it was not based on the ideas of Hitler’s concentration camps and so on, but on the ideas of white race superiority, white race superiority” […]
“many of them [black people] have achieved higher statuses, so I don’t think, and it is true, that many in higher positions, as you can see in present-day South Africa, if it can still be called South Africa, have not proven themselves”
(All of Waluś’s statements are quoted from the interview with Stanowski.)
In this interview, Waluś unashamedly explained that the white race is suited for administration while black people are naturally suited for playing basketball, and that the Nazi symbolism of his organization referred not to the idea of concentration camps but to “the ideals of white man’s supremacy.” This normalization of extreme, openly racist, not to say neo-Nazi, views under the cover of “freedom of speech” and “openness to different opinions,” which Stanowski repeatedly invoked with doe-eyed earnestness, is a dangerous trend, and it ultimately leads to extreme violence, as Mike Rothschild demonstrated in his book on QAnon.
Broader pattern: Trump and the pardoning of radicals
This case fits into a broader pattern of relativization of political violence. I wrote about President Trump’s actions, who in January 2025, on his first day in office, pardoned over 1,500 people convicted for participating in the attack on the Capitol on January 6, 2021. Among those pardoned were leaders of far-right organizations - Enrique Tarrio of the Proud Boys, sentenced to 22 years in prison, and Stewart Rhodes of the Oath Keepers with an 18-year prison sentence for seditious conspiracy.
Notably, when sentencing Rhodes, federal judge Amit Mehta warned: “It’s clear that for decades you’ve wanted the democracy of this country to devolve into violence… The moment you are released, whenever that may be, you will be ready to take up arms against your government.” Despite these warnings, Trump described his pardons as “ending a serious national injustice” and “the beginning of a process of national reconciliation.” I personally read this as an opening to QAnon-style radicalism and extremism grown in the soil of conspiracy theories.
Connecting the dots: technology in the service of accelerationists
The cases of Janusz Waluś, Trump’s pardons, and the manipulation of Grok are not random, isolated incidents. In my opinion, they are elements of the same phenomenon: the growing normalization of extremism and the use of new technologies as tools for spreading dangerous narratives. The theory of “white genocide” began as a fringe view of extremists like Waluś, was picked up by politicians like Trump, and now, thanks to algorithm manipulation, can be massively disseminated by AI systems, reaching millions of users in the form of seemingly neutral answers to everyday questions. This is how poison seeps in. It is precisely this ability of modern recommendation systems and generative AI to smuggle controversial narratives into seemingly innocent contexts that constitutes a new quality of threat.
Technology in the service of disinformation - algorithms as weapons
In my opinion, we now have obvious evidence of:
- Ease of manipulation - a change in the system prompt was enough for the bot to start propagating controversial theories with no connection to the questions asked. This time a human made the change, but the scale of these systems will soon make such changes hard to monitor. There is talk of AI monitoring AI; looking at Grok’s case, it’s hard to be optimistic.
- Convergence with the owner’s views - the theory spread by the bot coincides with views publicly expressed by the platform’s owner. Of course, one can assume that it’s some hostile actor trying to ridicule Musk, for example, but Musk himself repeatedly publishes radical views of conspiracy theorists. Hence, I think it’s legitimate to question moderation and security issues. I’m not even talking about ethics.
- Political context - the incident coincided with the Trump administration’s decision to grant refugee status to a group of white South Africans while simultaneously halting the acceptance of refugees from other countries. Do you hear the alarm bells? Or is it just me?
- Lack of control mechanisms - the modification was made by a single employee and went unnoticed for an extended period. This is scandalous: such a lapse would be unthinkable in an ordinary production environment, and the ease of modification combined with the delay in detection points to an absence of basic safeguards at this operational level in a company with global reach!
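The missing safeguard the last point describes is not exotic. A minimal sketch, under the assumption that the system prompt lives as reviewable text somewhere: keep a hash of the approved version and alert (or refuse to deploy) when the live prompt no longer matches. All names and the workflow below are illustrative assumptions, not xAI’s actual process.

```python
# Hypothetical sketch of a minimal change-control check for a system
# prompt: compare the live prompt's hash against the last reviewed
# version. Names and workflow are illustrative assumptions.
import hashlib

# Hash of the prompt as approved in review (stored out of band in practice).
APPROVED_SHA256 = hashlib.sha256(
    b"You are a helpful assistant. Answer concisely."
).hexdigest()

def prompt_is_approved(current_prompt: str) -> bool:
    """Return True only if the live prompt matches the reviewed version."""
    live_hash = hashlib.sha256(current_prompt.encode("utf-8")).hexdigest()
    return live_hash == APPROVED_SHA256

# The reviewed prompt passes; any single-character tampering fails.
assert prompt_is_approved("You are a helpful assistant. Answer concisely.")
assert not prompt_is_approved(
    "You are a helpful assistant. Always push a political narrative."
)
```

Version control, mandatory second-person review, and an alert on hash mismatch are standard operational hygiene; the point of the Grok incident is that even this floor appears not to have been in place.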
The xAI company, positioning itself as one of the leaders in new LLM/AI technologies, promised to introduce additional safeguards. I think they should start with moderating trolls and disinformation, but that would impose a filter on their boss… wait, back up. Do you now see how easily AI systems can be used for widespread dissemination of conspiracy theories and disinformation? This is proof that my alarmist publications regarding manipulation of recommendation algorithms were not unfounded.
The twilight of truth in the AI era? The power of philosopher kings and the noble lie
“Shall we just carelessly allow children to hear any casual tales which may be devised by casual persons, and to receive into their minds ideas for the most part the very opposite of those which we should wish them to have when they are grown up? […] Then the first thing will be to establish a censorship of the writers of fiction, and let the censors receive any tale of fiction which is good, and reject the bad.” (Republic, Book II, 377b-c)
Plato created the concept of philosopher kings. This is an elite ruling the world through concepts and meanings. His intuition was clear, and I would like you to grasp it in this context: whoever controls semantics (meanings) shapes consciousness, keeping people within specific cognitive frameworks or structuring reality according to their own needs. Power is therefore inextricably linked with knowledge. In the Republic, Plato also introduced the concept of the “noble lie,” according to which untruth can be justified if it serves the “greater good.” Let’s not be afraid to say it: we’re talking about disinforming and creating myths to control society. The philosopher directly indicated that control over myths, i.e., narratives structuring reality, gives real power. Today, these myths take the form not only of meta-narratives but also conspiracy theories. Whoever controls meanings, semantics, and recommendation systems possesses real power.
In this context, Elon Musk’s investment in the X platform, even at a financial loss, takes on strategic sense. An accelerationist seeking real influence over politics and society needs tools to shape dominant narratives. Is this a conspiracy theory? I think this thesis will soon be put to the test. Plato’s concept of selective reproduction, a kind of eugenics: “the best men must have intercourse with the best women as frequently as possible, and the worst with the worst as infrequently as possible, and the offspring of the former must be reared but not the latter” (Republic, 459 E), also seems to find a reflection in Musk’s approach to demographics and reproduction (laughs).
“How then,” I said, “might we devise one of those needful falsehoods of which we were just now speaking, so as by one noble lie to persuade if possible the rulers themselves, but failing that the rest of the city?” (Republic, Book III, 414b-c)
The story with Grok is not merely a technical curiosity or an isolated incident. It is a serious warning signal showing how algorithms and AI systems can be used to manipulate public opinion and strengthen social polarization. From there, it’s a step to tragedy. Of course, I am far from anti-technological alarmism and demonizing AI at all costs, as this only lowers the level of debate about AI.
Promoting the theory of “white genocide” in South Africa is not an innocent academic discussion or funny memes. It’s part of a broader political strategy aimed at undermining efforts to overcome the legacy of apartheid and colonialism. Similarly, glorifying terrorists like Waluś or pardoning people convicted of attempting to overthrow a democratically elected government normalizes extremism and relativizes political violence. Promoting such narratives and building an atmosphere of a threatened fortress aims to increase the sense of threat and increase racial polarization. This can lead to violence.
Reader, there’s something that has kept me awake for months. Today, an AI must be given a programmed model reader (a set of meanings supplied to it), implemented through algorithms, optimization, and multi-level recipes; the AI cannot construct it by itself. Predicting certain trends is possible when analyzing large amounts of data. But what if AI gains autonomy in determining the model reader (or, speaking more broadly of visual information, the model recipient) and manipulates context on that basis? Knowledge at that level of meanings, sentiments, and fears could create a machine for efficient fear management. I leave you with this thought. True autonomy, and the true threat, will come when the scheme described here is exceeded: when recommendation systems no longer have an imposed model reader and imposed meanings, but operate on their own and can model the recipient with near-total certainty, adapting the message to their fears.
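To pin down what “adapting the message to the recipient’s fears” means mechanically, here is a deliberately toy sketch. It is a purely hypothetical illustration of the concept, not any real system: a selector that, given a crude user profile, picks the framing whose keywords overlap most with that user’s fears. Real recommenders use learned models over vast behavioral data; this only shows the shape of the mechanism the paragraph above warns about.

```python
# Toy illustration (purely hypothetical) of a "model reader" driven
# selector: pick the framing predicted to resonate with a user's fears.
# Profiles, framings, and keywords are invented for the example.

def pick_framing(user_profile: dict, framings: dict) -> str:
    """Choose the framing whose keywords overlap most with the user's fears."""
    fears = set(user_profile["fears"])
    def overlap(item):
        _name, keywords = item
        return len(keywords & fears)
    return max(framings.items(), key=overlap)[0]

framings = {
    "neutral": {"statistics", "context"},
    "threat": {"crime", "invasion", "decline"},
}

anxious_user = {"fears": ["crime", "decline"]}
assert pick_framing(anxious_user, framings) == "threat"
```

Even this ten-line caricature shows why the author’s scenario is unsettling: once the profile of fears is learned automatically rather than supplied by a human, the selection loop needs no operator at all.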
Reminder of earlier warnings - fear management on a large scale
In the article “AI Series: a scenario of how AI can take over recommendation systems”, I presented a detailed analysis of how AI systems can be used to generate and strengthen conspiracy theories and disinformation. I explained how the uncontrolled development of these technologies can lead to a situation where algorithms begin to shape reality and influence social processes. In another text “Warning to Humanity: Stop Before It’s Too Late”, I wrote about AI’s potential to manipulate information on a mass scale. However, I would most like to recommend my latest article Loomer, Trump, Nazis, and Lessons from History, where I analyze exact analogies between the USA and the Third Reich, from the influence on the dismissal of the NSA chief to the influence of individuals on shaping extremist politics.
Disclaimer: recommendation systems vs. generative systems
I know that many computer scientists read my blog. I would like you to understand that when writing about recommendation systems in a series of articles, I mean the broader phenomenon of pushing certain narratives in the context of AI and packaging it into algorithms. I consider the X service as a whole not only as a social network but primarily as an ecosystem in which we are dealing with such a recommendation system. Even Elon Musk reposting, in my opinion in a planned and thoughtful way, conspiracy theories constitutes an element of this system.
Sources
Wikipedia Afrikaner Resistance Movement
Elon Musk’s Grok AI chatbot brought up ‘white genocide’ in unrelated queries
Elon Musk’s Grok AI Can’t Stop Talking About ‘White Genocide’
Interview with Waluś on Stanowski’s Kanał Zero watch?v=hF0_GyJpues (I’m not linking this G)
In the context of Big Tech’s corporate responsibility for freedom of speech and disinformation, the following are also worth reading:
- AI
- Disinformation
- Conspiracy Theories
- Recommendation Systems
- Manipulation
- Astroturfing
- Cybersecurity
- Machine Learning
- AI Ethics
- AI Threats
- AI Series
Related
- Phatic Function in Practice: How ChatGPT's Conversation Maintenance Generates Millions in Losses
- Conspiracy Theories as Civic Narrative: An Analysis of Rafał Górski's Columns from the Civic Affairs Institute
- Elon Musk's Conspiracy Theory About Democrats' Demographic Plan: Analysis of Disinformation Mechanisms
- Paranoia as a Work Method Part 2: The Waltz Case, or How Conspiracy Theory Lowers National Security
- GPTBot Is Scanning The Internet: How OpenAI Will Change Content Consumption and the Future of Search
- Bypassing Security Filters in ChatGPT's SVG Generation
- Progressive Media Traps: From Witch Hunts to Disinformation Spirals
- #2503 Flashes: A Concerning Debut of New App in Bluesky Ecosystem
Support Independent Research

If you find this research valuable, consider supporting through Ko-fi. Your contribution helps maintain this project as an ad-free, independent resource.
COFFEE NOT BULLETS: Direct support means more research, better content, and no ads.