KNOWING JUST A TAD ABOUT LARGE LANGUAGE MODELS, I’m not surprised that they occasionally “hallucinate.” That is, spout out erroneous stuff. I’ve known other B.S. artists all my life. Here are tidbits gleaned from my own experience and from “When A.I. Chatbots Hallucinate,” by Karen Weise and Cade Metz in The New York Times, May 1, 2023.
A Politico’s Hallucinations. I was an innocent preteen when I heard a local Cleveland politician extolling his rich Polish heritage (in our heavily Polish-American neighborhood). Even then, it struck me as curious that the guy’s last name was decidedly not Polish, and I wondered whether adults could make things up, sorta.
A Hallucinating Neighbor. Years later, I lived in the same apartment complex as a guy who had “previously been a military operative and had to kill the woman he loved.” His wife later admitted to my wife that he “was a regular bullshitter and had spent four years at Fort Devens outside Boston.”
I wondered if he had lied to her too.
A Non-fact Fact-checker. For a while there, I worked with a fellow whose primary responsibility was fact-checking. He invariably responded to each query with, “Yeah, I think it was ….” We had to remind him that he was a fact-checker, not a fable-maker.
Cocktail Banter. Wife Dottie once received this advice from a boss: “If you’re asked a question at a cocktail party and you don’t know the answer, just make up something that’s close.”
Pre-A.I. Hallucinators. Each of these folks was practicing a human form of Large Language Modeling: From a vast array residing in memory, pick out what seems plausible and call it a fact. Sometimes it’s done quite innocently; other times, less so.
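To make the analogy concrete, here is a minimal sketch in Python of that pick-the-plausible-next-word game. The word pairings and frequencies below are entirely made up for illustration; real L.L.M.s learn billions of such statistics with neural networks, not a little lookup table. Note what the sketch never does: check whether the result is true.

```python
import random

# A toy "language model": for each word, the words observed to follow it,
# weighted by how often. Every pairing below is invented for illustration.
FOLLOWERS = {
    "joyce": [("met", 5), ("wrote", 3)],
    "met": [("lenin", 4), ("in", 2)],
    "wrote": [("in", 3)],
    "lenin": [("in", 6)],
    "in": [("zurich", 5), ("1916", 2)],
    "zurich": [("in", 2), (".", 4)],
    "1916": [(".", 1)],
}

def next_word(word: str) -> str:
    """Pick a plausible follower, weighted by frequency.
    Truth never enters into the choice."""
    candidates = FOLLOWERS.get(word, [(".", 1)])
    words, weights = zip(*candidates)
    return random.choices(words, weights=weights)[0]

def babble(start: str, limit: int = 12) -> str:
    """Chain plausible words until a period (or the length limit)."""
    words = [start]
    while words[-1] != "." and len(words) < limit:
        words.append(next_word(words[-1]))
    return " ".join(words)

print(babble("joyce"))  # e.g., "joyce met lenin in zurich ."
```

Run it a few times and it cheerfully assembles “joyce met lenin in zurich,” plausible-sounding and sourced from nothing.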
Chatbot Hallucinators. In “When A.I. Chatbots Hallucinate,” Karen Weise and Cade Metz share fascinating examples of chatbots as B.S. artists: “When ChatGPT was recently asked how James Joyce and Vladimir Lenin first met—there is no evidence they ever did—this is how it responded:
ChatGPT: “James Joyce and Vladimir Lenin met in Zurich, Switzerland in 1916. Both men were living in exile in Zurich during World War I. Joyce was a writer and Lenin was a revolutionary. They met at the Cafe Odéon, a popular gathering place for artists and intellectuals in Zurich.”
Apart from an L.L.M. fact hoovered here and there, this account is pure B.S.
Weise and Metz say, “Fabrications like these are common. Figuring out why chatbots make things up and how to solve the problem has become one of the most pressing issues facing researchers as the tech industry races toward the development of new A.I. systems.”
An Understated Problem. “The tech industry,” Weise and Metz observe, “often refers to the inaccuracies as ‘hallucinations.’ But to some researchers, ‘hallucinations’ is too much of a euphemism. Even researchers within tech companies worry that people will rely too heavily on these systems for medical and legal advice and other information they use to make daily decisions.”
Weise and Metz cite Subbarao Kambhampati, a professor and researcher of artificial intelligence at Arizona State University: “If you don’t know the answer to a question already, I would not give a question to one of these systems.”
An Excellent Series of Articles. The New York Times, May 1, 2023, offers other relevant articles as well: “ ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead,” “What Exactly Are the Dangers Posed by A.I.?,” and “A.I. Is Getting Better at Mind-Reading.”
You can bet that, unlike Fox News, Representative George Santos, and the like, these articles have been fact-checked—by humans. ds
© Dennis Simanaitis, SimanaitisSays.com, 2023