A.I. AND A SENSE OF MORALITY

A NEW BOOK, THE AI MIRROR, offers the views of philosopher Shannon Vallor, who at PhilPeople says, “My research explores how emerging technologies reshape human moral and intellectual character, and maps the ethical challenges and opportunities posed by new uses of data and artificial intelligence.” 

The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, by Shannon Vallor, Oxford University Press, USA, 2024.  

What a refreshing perspective, especially in contrast with artificial intelligence scraping its way into activities around the world. Thus far, it seems, A.I. marketers have thrived on “can we?,” not “should we?” Nor have they been particularly renowned for their business ethics.

Vallor’s book is reviewed in the July 5, 2024, issue of AAAS Science by Michael Spezio in his “Beyond the Looking Glass.” Tidbits that follow are gleaned from this review as well as from Internet articles by Dr. Vallor.

Shannon Vallor, Ph.D. in Philosophy, Boston College; Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence at the University of Edinburgh; Director of the Centre for Technomoral Futures in the Edinburgh Futures Institute; and co-founder of BRAID (Bridging Responsible A.I. Divides), funded by the Arts and Humanities Research Council.

An Appeal to Mythological Figures. As cited by Michael Spezio in his Science review, “In her new book, The AI Mirror, philosopher Shannon Vallor, a former artificial intelligence ethicist for Google, presents a strident plea for governments and industry to allocate resources for new research and institutions dedicated to ‘technomorality.’ The book’s seven heavily intertwined chapters explore the power of the metaphor of A.I. as a mirror that promotes illusions that ‘potentially hold us captive, like Narcissus in the reflecting pool.’ ”

Spezio continues, “Vallor draws on other mythological figures in service of her metaphor as well. For example, she likens the curse placed on the mountain nymph Echo, which rendered her unable to speak except to repeat the last words of others, to the propensities of large language models to mimic human racial, ethnic, gender, and other biases….”

Like the Roman virtue Prudentia (and unlike A.I. scrapings), humans exhibit practical wisdom. Image from Penta Springs Limited/Alamy Stock Photo via Science.   

“And,” Spezio recounts, “she poetically invokes the mirror held by Prudentia, the Roman personification of prudence or practical wisdom. Like Prudentia, Vallor maintains, humans are capable of using A.I. mirrors to not only look into our past but also understand it and allow it to guide our steps in the present and the future.”

A.I. Successes. “Here,” Spezio notes, “she cites examples of judicious uses of A.I. that include the protein decoder AlphaFold and several studies in which scholars used A.I. to identify and propose corrections to long-standing racial bias, including the excellent work of cognitive scientist Abeba Birhane.” 

A.I. Shortcomings. However, Spezio observes, “Vallor stresses that current forms of A.I. are hopelessly far from attaining the status of artificial general intelligence (AGI). GPT machines, for example, routinely generate inaccurate responses about real persons and events. Such responses ‘are exactly what ChatGPT is designed to do—produce outputs that are statistically plausible given the patterns of the input,’ she writes. This is because A.I. machines lack a human’s ‘commonsense grasp and flowing awareness of how the world works and fits together.’ ”

Technomorality attempts to remedy these A.I. deficiencies.

Superhuman? Vallor writes in noemamag.com, May 23, 2024, “The rhetoric over ‘superhuman’ A.I. implicitly erases what’s most important about being human…. Today’s powerful A.I. systems lack even the most basic features of human minds; they do not share with humans what we call consciousness or sentience, the related capacity to feel things like pain, joy, fear and love.”

“Nor,” Vallor continues, “do they have the slightest sense of their place and role in this world, much less the ability to experience it. They can answer the questions we choose to ask, paint us pretty pictures, generate deepfake videos and more. But an A.I. tool is dark inside.”

Hard-wired. Reviewer Spezio recounts, “In chapter 4, Vallor applies a framework explored by philosopher Harry G. Frankfurt to A.I. machines and the models behind them, writing that ‘they are hard-wired for bullshit,’ in other words, ‘they aren’t designed to be accurate—they are designed to sound accurate.’ ”

Other Reading To Do. Spezio says, “Vallor’s previous work, Technology and the Virtues, offered readers a sustained argument for ‘technomoral’ wisdom and virtues that drew on Aristotelian, Kongzist (Confucian), and Buddhist thinking. Prospective readers of The AI Mirror may want to start with that book, as her new work is a more cursory tour through a set of expansive concepts and literatures, lacking the systematic presentation that characterized Technology and the Virtues.”

Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting by Shannon Vallor, Oxford University Press USA, 2018.

These surely sound like two books well worth reading. (And I didn’t need A.I. to reveal this.) ds

© Dennis Simanaitis, SimanaitisSays.com, 2024
