Simanaitis Says

On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff

A.I. GIGO

IT’S A TIME-HONORED TRADITION of computer programming: Garbage In/Garbage Out. And, all its hallucinations included, modern Artificial Intelligence is nothing more than high-falutin computer programming. Here are tidbits on recent A.I. GIGO.

Large Language Models. Stripped of all their algorithmic hype, Large Language Models aren’t far removed from the adage of an infinite number of monkeys at typewriters (“What’s a typewriter, Grandpa?”) composing, among other things, Hamlet.

The LLM amasses words galore and predicts which word is likely to follow another, based on the word orders stored in its vast memory bank. “To be, or….” It doesn’t take much computer ken to come up with “not to be.”
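The prediction idea can be sketched in a few lines of Python. This is a toy bigram counter, not how real LLMs work (they use neural networks over enormous token corpora), but it shows the “which word follows which” notion; the tiny corpus here is my own illustrative stand-in for the model’s memory bank.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the LLM's "vast memory bank."
corpus = "to be or not to be that is the question".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("to"))   # "be" -- every "to" in the corpus precedes "be"
print(predict("not"))  # "to"
```

Scale the corpus up to much of the Internet and the counter up to billions of learned parameters, and you get something like a chatbot: fluent continuation, with no notion of whether the continuation is true.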

Image by Pablo Delcan for The New York Times, May 1, 2023.

But What of Hallucinations? Thus far, A.I. researchers have had only limited success in weeding out the errant monkey’s “fckasdn;oasdjg;as.” Said another way, in eliminating a chatbot’s lying—aka “hallucinating.”

Ever our willing digital servant, a chatbot challenged with a pesky task is not averse to making things up.

Just Kidding, Folks…. If it’s something innocuous like “To be, or let me sleep on it,” we all chortle and go on with our A.I.-addled lives. But what about when chatbots are employed to perform real duties like researching complex topics?

Avianca Versus LLM. Benjamin Weiser reports “Here’s What Happens When Your Lawyer Uses ChatGPT,” in The New York Times, May 27, 2023. 

Image by Nicolas Economou/NurPhoto from The New York Times, May 27, 2023.

“The lawsuit began,” Weiser recounts, “like so many others: A man named Roberto Mata sued the airline Avianca, saying he was injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York.”

Weiser continues, “When Avianca asked a Manhattan federal judge to toss out the case, Mr. Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines, with its learned discussion of federal law and ‘the tolling effect of the automatic stay on a statute of limitations.’ ”

All very legalese. Except for one thing: Weiser notes, “No one — not the airline’s lawyers, not even the judge himself—could find the decisions or the quotations cited and summarized in the brief. That was because ChatGPT had invented everything.”

Weiser says, “The lawyer who created the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, threw himself on the mercy of the court on Thursday, saying in an affidavit that he had used the artificial intelligence program to do his legal research—‘a source that has revealed itself to be unreliable.’ ”

Can chatbots be guilty of perjury? 

DealBook on Section 230. The New York Times features DealBook/With Andrew Ross Sorkin, addressing this in “Who Is Liable for A.I. Creations?,” June 3, 2023. 

Ephrat Livni writes in DealBook, “A string of challenges to Section 230—the law that shields online platforms from liability for user-generated content—have failed over the last several weeks. Most recently, the Supreme Court declined on Tuesday to review a suit about exploitative content on Reddit. But the debate over what responsibility tech companies have for harmful content is far from settled—and generative artificial intelligence tools like the ChatGPT chatbot could open a new line of questions.”

The point of Section 230, drafted in 1996, was to protect Internet platforms, “hosts,” from suits about material created by others. My italics. 

Image from “Republican Theater of the Absurd,” May 26, 2021.

If SimanaitisSays writes something nasty about Republicans at its WordPress-hosted website, I suppose the rascals can come after SimanaitisSays but not its host.

A Deadly Example. Livni writes, “Generative A.I. tools have already been used to make intentionally harmful content. And hallucinations—the falsehoods that generative A.I. tools create (like court cases that never existed)—are a significant problem. If a user prompts an A.I. for cocktail instructions and it offers a poisonous concoction, the algorithm operator’s liability is obvious, said Eric Goldman, a law professor at Santa Clara University and a Section 230 expert.”

Livni concludes, “ ‘The blossoming of A.I. comes at one of the most precarious times amid a maturing tech backlash,’ Goldman said. ‘We need some kind of immunity for people who make the tools,’ he added. ‘Without it, we’re never going to see the full potential of A.I.’ ”

Or maybe dissuade chatbots from making things up. ds

© Dennis Simanaitis, SimanaitisSays.com, 2023 

2 comments on “A.I. GIGO”

  1. Jack Albrecht
    June 6, 2023

    This is similar to self-driving. We need the opposite of immunity for those who make the tools. We need culpability. We don’t need to persuade chatbots from making things up, we need to make the people who write the software responsible for what their software does.

    I have €8 million in personal liability insurance coverage for every employee when they travel to a customer for damages they might inflict on very expensive equipment. Why should it be any different for someone writing code that can result in damages?

  2. Mike Scott
    June 7, 2023

    Hear, hear! Thank you Dennis and Jack. The news brims with stories about Ameca, a microprocessor Chatty Cathy, and, in the face of soaring teen suicide, the insulation and isolation of Vision Pro. Code is just that, 0s and 1s, intricate calculating machines; if only the mainstream pile-on, me-too corporate journalism might park the sci-fi rubbish.

    Meanwhile, the world’s coral reefs die, as does much in the oceans, fount of all life.

    Curious priorities.
