On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff
“THE GREATEST RISK is not that A.I. will eliminate jobs, but that its benefits will accrue unevenly.” So writes Marie Lynn Miranda, Chancellor of the University of Illinois Chicago, in her Editorial in Science, April 2, 2026.

Here, in Parts 1 and 2 today and tomorrow, are tidbits gleaned from her essay, together with my usual Internet sleuthing.
Technology Shapes Society—And Not Always Beneficially. “Over the past 150 years,” Chancellor Miranda recounts, “every major technological wave led to profound disruptions. History shows, however, that past technological revolutions generated enormous economic growth and created new industries. At the same time, they also left behind whole communities and populations, widened regional and educational divides, and concentrated opportunity among those with early access to skills, capital, and networks. These outcomes were not inevitable.”
A.I.’s Rapid and Broad Diffusion. Miranda continues, “AI will no doubt reshape work across broad sectors of the global economy. What distinguishes this moment is not disruption alone, but the pace, scale, and portability of the technology itself. The technology is diffusing faster and more broadly than previous innovations, compressing the time that institutions have to respond. AI’s unprecedented speed and scale create urgency around deliberately shaping its distribution. Society faces risks, but also a (narrow) window of opportunity to shape outcomes in a way that benefits everyone.”
That the beneficial opportunity is a narrow one should get our attention.

Higher Education’s Role. Miranda stresses that “To begin with, higher education must create opportunities for all students to develop practical fluency in using AI tools. This includes mastering prompt design (phrasing queries to elicit the most useful response), integrating AI into workflows, and collaborating effectively with AI systems so that they enhance human creativity and critical thinking.”
I’ve learned the importance of phrasing an A.I. query precisely. Otherwise, the result is GIGO (garbage in, garbage out).
“Simply knowing how to use AI is not enough,” Miranda notes. “Not everyone needs to be able to write or understand code for AI systems, but they should at least understand the basics of how data are ingested and used by large language models (LLMs) to shape outputs in response to human queries.”
Good for the Chancellor for recognizing the word “data” as the plural of “datum.” An amazing number of A.I. specialists seem ignorant of this.
There’s No Inherent “Truth” in A.I.—Nor Creativity. “These systems,” Miranda observes, “do not reason or have some special access to the ‘truth’; they predict patterns from vast training data.”
And, thus, she notes, “Information at the edges of knowledge—such as nascent discoveries or contested positions—is often underrepresented. If LLMs had existed at the time the theories were proposed, they would not have extolled ideas like evolution, continental drift, or handwashing for infection control.”
To me, this is one of the most evident shortcomings of A.I. In a sense, only the dense data are LLM-scraped.
Data From the Information Highway (or its Sewer). “In addition,” Miranda recounts, “the public needs to understand that the data that feed LLMs can be manipulated for political or other purposes. Higher education must train people to critically evaluate output, cross-check it with human expertise, and recognize AI’s biases and limitations.”
Agg. Sorry, Chancellor: You’ve just split an infinitive (“to critically evaluate”). Have we lost this one in English usage?
Tomorrow in Part 2 we’ll continue with examples of these inherent A.I. shortcomings and how education can minimize their effects. ds
© Dennis Simanaitis, SimanaitisSays.com, 2026