Simanaitis Says

On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff

A.I. DOOMERISM? PART 1

BACK IN 2020 PAUL TAYLOR taught me what little I know about “SIR modeling” (Susceptible, Infectious, Recovered) as it relates to epidemics such as Covid.
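For readers as new to this as I was, here’s a minimal sketch of the SIR idea in Python; the infection and recovery rates below are illustrative assumptions of mine, not figures from Taylor’s work.

```python
# Minimal SIR epidemic sketch: Euler-stepping the classic equations
# dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I.
# beta (infection rate) and gamma (recovery rate) are illustrative guesses.

def sir(s, i, r, beta=0.3, gamma=0.1, dt=1.0, days=160):
    history = []
    for _ in range(days):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Start with 1 percent of the population infectious.
trajectory = sir(s=0.99, i=0.01, r=0.0)
peak_i = max(i for _, i, _ in trajectory)
print(f"Peak infectious fraction: {peak_i:.2f}")
```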

This time around, Taylor writes about “Llamas, Pizzas, Mandolins” in the London Review of Books, March 21, 2024. (The three disparate topics are among the image categories used in training Artificial Intelligence, of which more anon.)

Here, in Parts 1 and 2 today and tomorrow, are tidbits gleaned from Taylor’s recent article dealing with, among other aspects, the distressing one of A.I. Doomerism.

Doomerism. Taylor writes that Geoffrey Hinton, “one of the most influential A.I. researchers of the last thirty years, is a comparatively recent convert to A.I. doomerism. Until May last year Hinton was, at 75, an active researcher in Google’s AI division. Observing the progress being made, he concluded that, to his surprise, existing algorithms were already better at learning than human brains, and that superhuman levels of intelligence would soon be achieved. He promptly retired, saying that we should be careful—since machines more intelligent than us are unlikely to be content to leave us in charge.” 

Coming from an active researcher, this sci-fi scenario is scary indeed. However, Taylor also recognizes counterexamples.

Paul Taylor is Professor of Health Informatics at UCL Institute of Health Informatics. He holds a BSc in Psychology, an MSc in Artificial Intelligence and a PhD in Medical Physics from UCL. His research interests have focused on the use of computer systems in clinical decisions, particularly in image interpretation, including mammography and chest radiography. Image from University College London.

Radiology. “Those who work at the leading edge of technology,” Taylor observes, “can’t always accurately assess its potential. Eight years ago Hinton suggested that it was no longer worth training radiologists, since A.I. would be able to interpret medical images within five years. He now concedes he was wrong.”

“His error,” Taylor says, “was not in his assessment of the way A.I. would develop, but rather in his failure to appreciate how difficult it would be for companies to translate technical success into products in a highly regulated market, or to understand the way a profession evolves as certain tasks are automated. Of the 692 A.I. systems that have so far been approved by the FDA for medical use, 531 target radiology, and yet today there are 470 vacancies for radiologists listed on a U.S. job board.”

Large Language Models. LLMs have appeared often here at SimanaitisSays. Taylor writes, “The algorithms are trained, principally, by learning to predict the missing word in a passage; sceptics refer to them as a glorified form of autocomplete. This misses the point.”

What’s “Understanding”? The point being, in learning to predict the right word, do the algorithms understand the world? Taylor writes, “The use of the word ‘understand’ is perhaps too anthropomorphic.” He cites another researcher who lists “things that can make accurate predictions without possessing understanding: Babylonian astronomers, dogs chasing frisbees, probability distributions.” 

The first two sound facetious; the third, less so. 
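To make the word-prediction idea concrete, here’s a toy sketch of that training objective. Real LLMs learn it with neural networks over vast corpora; the tiny corpus and simple counting approach here are purely my illustrative assumptions.

```python
# Toy "predict the next word" model: a bigram frequency table.
# This counting version just shows the objective sceptics dismiss
# as "glorified autocomplete."
from collections import Counter, defaultdict

corpus = ("the llama ate the pizza while the mandolin played "
          "the llama ate the grass").split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev_word):
    # Most frequent follower of prev_word in the training text.
    followers = bigrams.get(prev_word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("llama"))  # -> "ate"
print(predict("the"))    # -> "llama", its most common follower
```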

Discriminative A.I. “The simplest form of artificial intelligence,” Taylor writes, “predicts the appropriate label for some form of data. Given a collection of chest X-rays, if some are labelled as containing cancer and the rest as not containing cancer, a machine learning algorithm can be trained to recognise cancer. This kind of ‘discriminative A.I.’ typically has to be trained on a large number of accurately labelled images.”

This explains, by the way, Taylor’s title: He cites another researcher’s labeling “examples of images in 101 categories, including llamas, pizzas, mandolins and helicopters…. It’s hard to know how many categories are available to adult humans processing visual information, but one estimate is that we might have 30,000.”
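As a sketch of what training discriminative A.I. looks like in code, here’s a minimal example using scikit-learn; the synthetic numbers standing in for labelled chest X-ray features are my own assumption for illustration.

```python
# Minimal discriminative-A.I. sketch: train a classifier on labelled
# examples, then score it on held-out data. Synthetic features stand
# in for the pixels of accurately labelled chest X-rays.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_features = 500, 20

# Fake "images": class 1 ("cancer") features are shifted slightly.
labels = rng.integers(0, 2, size=n)
features = rng.normal(size=(n, n_features)) + 0.75 * labels[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```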

Tomorrow in Part 2, we’ll continue tidbit-gleaning from Taylor’s article with terms like “generative A.I.” and the possible “scaling up” of A.I. to the point of consciousness. Heady stuff, this. ds 

© Dennis Simanaitis, SimanaitisSays.com, 2024

One comment on “A.I. DOOMERISM? PART 1”

  1. Tom Austin
    April 7, 2024

    Yes, and consciousness is equally blighted as a concept as well.

