Simanaitis Says

On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff


YESTERDAY, WE SHARPENED our discussion of artificial intelligence by separating A.I. from some guy named Al. We also gleaned tidbits from James Fallows’ article in The New York Times Book Review, March 19, 2021, “Can Humans Be Replaced by Machines?” Today in Part 2, Paul Taylor’s “Insanely Complicated, Hopelessly Inadequate,” London Review of Books, January 21, 2021, adds to our A.I. ken.

The Shortcoming of GOFAI. In his LRB article, Paul Taylor discusses GOFAI, Good Old-Fashioned A.I., in which symbolic logic offered the fundamental tools of computerized intelligence. This was inherent in the first A.I. projects back in the early 1960s, with precise rules and symbolic-logical operations. 

However, Taylor notes, this approach has an inherent problem: “This will resonate with anyone who has tried to express seemingly straightforward concepts in sets of rules, only to be defeated by the complexity of real life.”

Paul Taylor is Professor of Health Informatics at UCL Institute of Health Informatics. Image from University College London. Taylor has appeared here at SimanaitisSays in “A Glimpse at SIR Modeling.”

A Medical Example. Taylor’s specialities include health informatics. He gives an example: “One of the standard terminologies used for the computerisation of medical records includes arteries as a subclass of soft tissue, which seems not unreasonable if ‘soft tissue’ is taken to include anything that isn’t bone, but has the consequence that aortic aneurysm is classified as a disorder of soft tissue….”

He continues, this “may be logically correct but feels out of place. The problem is that any attempt to devise a scheme that is rigorously logical inevitably diverges from the way we actually talk about the world.”
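The brittleness Taylor describes is easy to reproduce. Below is a minimal sketch, in Python, of the kind of is-a hierarchy he has in mind; the concept names and the tiny ontology are my own illustration, not any real medical terminology standard. Because classification is inherited up the chain of superclasses, a disorder of the aorta is, with perfect logical correctness, also a "disorder of soft tissue."

```python
# A hypothetical is-a hierarchy (illustrative only, not a real terminology).
ISA = {
    "aorta": "artery",
    "artery": "soft tissue",   # "not unreasonable" -- but watch what follows
    "femur": "bone",
}

def ancestors(concept):
    """Walk the is-a chain, collecting every superclass of a concept."""
    result = []
    while concept in ISA:
        concept = ISA[concept]
        result.append(concept)
    return result

def classify(disorder, site):
    """A disorder is classified under its site and every superclass of it."""
    return [f"disorder of {c}" for c in [site] + ancestors(site)]

# Logically correct, yet "feels out of place":
print(classify("aortic aneurysm", "aorta"))
# → ['disorder of aorta', 'disorder of artery', 'disorder of soft tissue']
```

Each inference step is unobjectionable on its own; it is the rigorous chaining of them that diverges from how we actually talk about the world.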

Image by Charles Sutton from Adventures in Neurosymbolic Learning.

Machine Learning. By contrast, Taylor observes, “Over the last forty years the extraordinary increase in the rate of accumulation of digital data and the equally dramatic drop in the price of processing power have made possible purely data-driven approaches to machine learning.” He explains, “These systems make predictions based on correlations observed among vast quantities of data. They break calculations down into billions of simpler ones, and learn by iteration, altering the weight given to each piece of information at each stage, until the output of the entire network of calculations conforms to a predetermined target.”
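Taylor's description, prediction from weighted inputs, refined by iteration until the output conforms to a predetermined target, can be sketched at toy scale. This is a single "neuron" rather than a network of billions of calculations, and the data and learning rate are my own assumptions; the target here is simply the second input, which the weights discover by repeated small adjustments.

```python
# Toy training data: ((input1, input2), target). Target happens to equal input2.
data = [((0.0, 1.0), 1.0), ((1.0, 0.0), 0.0),
        ((1.0, 1.0), 1.0), ((0.0, 0.0), 0.0)]

w = [0.0, 0.0]    # weights, altered at each stage
lr = 0.1          # learning rate (assumed)

for _ in range(1000):                       # learn by iteration
    for (x1, x2), target in data:
        pred = w[0] * x1 + w[1] * x2        # prediction: weighted sum of inputs
        err = pred - target                 # how far from the target?
        w[0] -= lr * err * x1               # nudge each weight to shrink error
        w[1] -= lr * err * x2

print(w)  # w[0] near 0.0, w[1] near 1.0: the rule was never written down, only learned
```

No rule "output equals the second input" appears anywhere in the program; the correlation is extracted from the data, which is the point of the paradigm shift Taylor describes.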

“Artificial neural networks,” Taylor notes, “have proved spectacularly successful at tasks such as generating captions for images, recognising spoken words and identifying winning moves in chess.”

The Flexibility of Human Intelligence. Taylor comments, “Given the extent of the paradigm shift in A.I. research since 1980, you might think the debate about how to achieve A.I. had been comprehensively settled in favour of machine learning. But although its algorithms can master specific tasks, they haven’t yet shown anything that approaches the flexibility of human intelligence.” 

“It’s worth asking,” Taylor continues, “whether there are limits to what machine learning will be capable of, and whether there is something about the way humans think that is essential to real intelligence and not amenable to the kind of computation performed by artificial neural networks.”

I’m reminded of A.I. snafus such as the adversarial perturbation of tabby cat to guacamole and the Chinese woman mistakenly nabbed for jaywalking based on her image in a bus ad.

Future A.I. Taylor writes, “It still seems likely that future advances in A.I. will come from neural networks, simply because of the sheer scale of research now devoted to them…. The hope is that data-driven machine learning will be able to move beyond simple pattern-recognition and start to develop the organising theories about the world that seem to be an essential component of intelligence.”

An A.I. Challenge. Taylor says, “A computer can play chess to superhuman levels and yet have no concept of what chess is, what place chess has in the world, or even that there is a world.” 

He cites Cantwell Smith, one of the authors reviewed, “To take one pressing example, Cantwell Smith argues that safely controlling a self-driving car in an urban environment will require the kind of judgment that makes such awareness necessary.”

“Perhaps,” Taylor says, “but it seems at least possible that careful engineering could make a car that would be safe enough, even if it doesn’t really know what it is doing.” 

Yes, I know some drivers like that. ds 

© Dennis Simanaitis, 2021

One comment on “NEW THOUGHTS ON A.I. PART 2”

  1. Jack Albrecht
    May 2, 2021

Until liability for driverless cars sits firmly on a human being’s shoulders, they will never be “safe enough.” If liability goes to a corporation, there is little incentive to improve safety, as death and dismemberment will be a simple calculation added to the cost of doing business. Then again, that is what it is now…
