Simanaitis Says

On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff

EDUCATING US ALL ABOUT A.I. PART 2

IN PART 1, CHANCELLOR MARIE LYNN MIRANDA’S Science Editorial promoted educating us all about optimizing our use of A.I. Today in Part 2, we offer examples of why this is needed. 

Examples of A.I. Hallucinations Abound. Zach Warren reports, “GenAI Hallucinations Are Still Pervasive in Legal Filings, But Better Lawyering is the Cure,” Thomson Reuters, August 18, 2025.

Warren recounts, “According to a study conducted through Thomson Reuters Westlaw of cases between June 30 and August 1, hallucinations and citations of non-existent legal cases continue to be pervasive across courts. This search found 22 different cases in which courts or opposing parties found non-existent cases within filings, leading to discipline motions or sanctions in many instances.” 

Apparently, legalbots are more likely to make stuff up than humans charged with the sometimes tiresome research task.

Also, just recently, Zoe Hardy reports in Daily Mail, April 14, 2026, “Warning Issued For People Using AI Chatbots For Medical Advice: Major Study Found Information Given By ChatGPT, Gemini, and Grok Is Often Inaccurate.”

Hardy recounts, “Publishing their findings in the British Medical Journal, researchers found that AI-driven chatbots give problematic responses half of the time, potentially exposing users to unnecessary harm…. The first independent safety evaluation for ChatGPT Health—with OpenAI’s chatbot being the most widely used model—found it under-triaged more than half of cases. Building on this review, the current study probed five popular chatbots including Google’s Gemini, DeepSeek, Meta AI, ChatGPT and Elon Musk’s Grok.”

Hardy continues, “While the quality of responses didn’t seem to differ between the five chatbots tested, Grok was found to generate significantly more highly problematic responses than expected. Gemini, on the other hand, produced the least highly problematic responses and the most non-problematic ones.”

Summing up, Hardy notes, “The researchers concluded: ‘By default, chatbots do not reason or weigh evidence, nor are they able to make ethical or value-based judgments.’ ”

Recall Chancellor Miranda’s comment: “These systems do not reason or have some special access to the ‘truth.’ ”

The Chancellor’s Summary: “There is no doubt that students of all kinds will use AI; most already are. The task now is to help them become professional and ethical users. This includes learning how and when to acknowledge AI use, how to distinguish between tasks where AI tools work well and where they do not, and how to know when to challenge AI-based output or decisions. As a matter of course, AI must be used in ways that align with professional standards and serve the public good.”

She concludes, “Although AI promises extraordinary gains in productivity and innovation, its benefits will accrue unevenly unless higher education acts decisively to broaden access, skills, and agency. In the end, it is human intelligence, creativity, and innovation that will determine our collective future.” ds 

© Dennis Simanaitis, SimanaitisSays.com, 2026
