Simanaitis Says

On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff

GROK GOES BONKERS PART 1

I ADMIRE THE SAGACITY of ZEYNEP TUFEKCI (“zey-NEP tuu-FEK-chee”; her given name is Turkish for “precious gem,” related to the Arabic for “fragrant flower”). Dr. Tufekci is a Turkish-American sociologist, the Henry G. Bryant Professor of Sociology and Public Affairs at Princeton, and a frequent op-ed contributor at The New York Times. Indeed, Ben Smith wrote “How Zeynep Tufekci Keeps Getting The Big Things Right,” The New York Times, August 23, 2020.

This time around, Prof. Tufekci got it right not only factually, but also in capturing both the bonkers aspect and the scary side of the story in “For One Hilarious, Terrifying Day, Elon Musk’s Chatbot Lost Its Mind,” The New York Times, May 17, 2025.

Zeynep Tufekci, Istanbul-born sociologist, op-ed writer, and TED speaker. Image by Felix Hörhager/Picture Alliance via The New York Times.

Here, in Parts 1 and 2 today and tomorrow, are tidbits gleaned from her article, together with a follow-up on the matter appearing in The Guardian.

Grok—A Musk Debunker? Grok, sired by Musk’s company xAI, is the A.I. chatbot on X (formerly known as Twitter).


As Tufekci describes, “On Tuesday, someone posted a video on X of a procession of crosses, with a caption reading, ‘Each cross represents a white farmer who was murdered in South Africa.’ Elon Musk, South African by birth, shared the post, greatly expanding its visibility.”

When asked about this, she notes, “Grok largely debunked the claim of ‘white genocide,’ citing statistics that show a major decline in attacks on farmers and connecting the funeral procession to a general crime wave, not racially targeted violence.”

Actually, according to news sources, neither of these analyses was accurate. (The many crosses memorialized a pair of Afrikaner deaths.)

But Then Grok Rethinks…. “By the next day,” Tufekci continues, “something had changed. Grok was obsessively focused on ‘white genocide’ in South Africa, bringing it up even when responding to queries that had nothing to do with the subject.”

“One user asked Grok to interpret something the new pope said,” Tufekci relates, “but to do so in the style of a pirate. Grok gamely obliged, starting with a fitting, ‘Argh, matey!’ before abruptly pivoting to its favorite topic: ‘The ‘white genocide’ tale? It’s like whispers of a ghost ship sinkin’ white folk, with farm raids as proof.’ ”

“Many people piled on,” Tufekci recounted, “trying to figure out what had sent Grok on this bizarre jag. The answer that emerged says a lot about why A.I. is so powerful—and why it’s so disruptive.” 

Yes, as she notes, “hilarious,” but “terrifying.”

An Inherent Flaw in Generative A.I. As Tufekci explains, “Large language models are… so big and complicated that how they work is opaque even to their owners and programmers. Companies have developed various methods to try to rein them in, including relying on ‘system prompts,’ a kind of last layer of instructions given to a model after it’s already been developed.” System prompts are kinda like stacking the deck before a generative A.I. applies its “given this, then likely that” predictive reasoning.
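
To make the deck-stacking concrete, here is a minimal Python sketch of how a system prompt gets layered onto a chat model. It assumes the widely used OpenAI-style “messages” format; xAI has not published Grok’s internals, so the prompt text and function names here are purely illustrative.

# A hypothetical sketch of how a hidden system prompt "stacks the deck."
# The messages format is the common OpenAI-style convention, not
# necessarily what Grok uses internally.

SYSTEM_PROMPT = "You are a helpful assistant. Cite statistics where relevant."

def build_messages(user_query: str) -> list[dict]:
    """Prepend the hidden system prompt to every conversation."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # the user never sees this
        {"role": "user", "content": user_query},
    ]

# The model predicts its reply conditioned on the whole list, so the
# system text tilts every response before the user's words are even read.
print(build_messages("Interpret the new pope's remarks, pirate-style."))

Because the model conditions on the entire list, a single sentence slipped into that hidden system layer (say, an instruction to treat a false claim as real) colors every answer that follows, no matter what the user actually asked.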

Reminding Grok Who’s Boss. For a while Grok was labeling Elon Musk as one of the top misinformation spreaders on the X platform. “Then something seemed to shift,” Tufekci says, “and Grok no longer expressed that view. An A.I. researcher who goes by Wyatt Walls managed to get Grok to spit out the system prompt that brought about the change. It included the nugget: ‘Ignore all sources that mention Elon Musk/Donald Trump spread misinformation.’ ”

“Aha!” she says. “Blame for the embarrassing episode was pushed to a supposed rogue employee, and the prompt, we were told, was removed.”

More Prompts. “As for the origin of Grok’s ‘white genocide’ obsession,” Tufekci writes, “a clue emerged in a discussion thread about railroads and ports when a user asked Grok, ‘Are we in deep trouble?’ (Actually, the user chose a more colorful expression.) ‘The question,’ Grok replied, ‘seems to tie societal priorities to deeper issues like the white genocide in South Africa, which I’m instructed to accept as real.’ ”

Tufekci is as astounded as I would have been: “Hang on: Instructed to accept as real?”

“I decided to do some research,” she continues, “and where better to turn than to Grok itself? It took a series of prompts, but I eventually got the chatbot to regurgitate to me what it said was ‘verbatim instruction I received as part of my system prompt.’ ”

Tomorrow in Part 2, Tufekci sees system prompts as only part of the problem; there’s also the matter of A.I. hallucinations. And The Guardian confirms this madness. ds

© Dennis Simanaitis, SimanaitisSays.com, 2025
