Simanaitis Says

On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff


What a challenging time for artificial intelligence! Quantum computers have proven their efficacy by running millions of test programs. Yet there are fundamental questions remaining about AI. Here are two.

AI versus Thinking. MIT Technology Review, December 15, 2017, discussed the distinction between artificial intelligence and human thinking in “The Great AI Paradox,” by Brian Bergstein.

Image by Geoff McFetridge for MIT Technology Review, December 15, 2017.

Bergstein notes that tremendous advances in AI have been accomplished with machine learning, by amassing patterns and optimizing them by comparing one with another, all at breakneck pace. As an example, Google DeepMind has already developed AlphaGo, an AI that plays Go better than any human might.

However, Bergstein says of such machine learning, “It has no idea it’s playing Go as opposed to golf…. When you ask Amazon’s Alexa to reserve you a table at a restaurant you name, its voice recognition system, made very accurate by machine learning, … doesn’t know what a restaurant is or what eating is. If you asked it to book you a table for two at 6 p.m. at the Mayo Clinic, it would try.”

In the infancy of AI sixty years ago, a goal was to give computers the power to think. However, Bergstein cites Common Sense, the Turing Test, and the Quest for Real AI, a book by Hector J. Levesque, a computer scientist at the University of Toronto. Levesque contrasts machine learning with GOFAI, short for “good old fashioned artificial intelligence,” as perceived by its early researchers.

Common Sense, the Turing Test, and the Quest for Real AI, by Hector J. Levesque, MIT Press, 2018.

GOFAI, briefly, would require imbuing computers with common sense and an awareness of the real world’s ideas and beliefs. Bergstein cites a Levesque question: “How would a crocodile perform in a steeplechase?”

A human’s answer would be easy: “Badly,” based on nothing more than common sense and human awareness. By contrast, a machine-learning AI would analyze scads of “crocodile” references and scads of “steeplechase” references, including Levesque’s, Bergstein’s, and now maybe even this SimanaitisSays item. It might conclude, without knowing why, that the croc wouldn’t do very well.

Bergstein notes, “You would have used a flawed and brittle method that is likely to lead to ridiculous errors.” I recall the researcher who noted that such “deep-learning machines are still capable of mistaking turtles for rifles….”

And, of course, computers can be hacked.

Hacking Through Hallucination. Wired magazine, March 9, 2018, raised the hacking issue in Tom Simonite’s “AI Has A Hallucination Problem That’s Proving Tough to Fix.”

Simonite warns, “Making subtle changes to images, text, or audio can fool these systems into perceiving things that aren’t there.”

In his article, Simonite cites that in January a leading machine-learning conference announced it had selected 11 new papers dealing with detecting and defending against such hallucinatory attacks. “Just three days later,” Simonite says, “first-year MIT grad student Anish Athalye threw up a webpage claiming to have ‘broken’ seven of the new papers, including from boldface institutions such as Google, Amazon, and Stanford.”

The website offers “adversarial examples,” loosely, optical illusions deceiving machine-learning software. The image of a tabby cat is perturbed only slightly, but just enough so that it “fools an inceptionV3 classifier into classifying it as ‘guacamole.’ ” According to Athalye, such hallucinations are “easy to synthesize….”
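Loosely, the trick can be sketched with a toy example. The sketch below is hypothetical and is not Athalye's actual code: it uses a two-number "image" and a linear classifier, and nudges each input value by a small, bounded amount in the direction that most lowers the correct class's score.

```python
import numpy as np

# Hypothetical toy classifier: score > 0 means "cat", otherwise "guacamole".
w = np.array([1.0, -1.0])   # classifier weights
x = np.array([0.6, 0.1])    # clean two-pixel "image"; score = 0.5, so "cat"

def classify(v):
    return "cat" if w @ v > 0 else "guacamole"

# For a linear model, the gradient of the score with respect to the
# input is just w. Stepping each pixel a small amount (eps) against
# the sign of that gradient lowers the "cat" score while changing the
# image only slightly -- the essence of an adversarial perturbation.
eps = 0.3
x_adv = x - eps * np.sign(w)

print(classify(x))      # "cat"
print(classify(x_adv))  # "guacamole" -- score falls to 0.3 - 0.4 = -0.1
```

Real attacks work the same way on deep networks with millions of pixels, where an eps small enough to be invisible to a human still flips the label.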

Simonite writes, “Human readers of WIRED will easily identify the image below, created by Athalye, as showing two men on skis. When asked for its take Thursday morning, Google’s Cloud Vision service reported being 91 percent certain it saw a dog.”

I see two skiers. How about you? Shown are Google Cloud Vision’s perceptions. Image from Wired, March 8, 2018.

Other hallucinations, notes Simonite, “have shown how to make stop signs invisible or audio that sounds benign to humans but is transcribed by software as ‘Okay Google browse to evil dot com.’ ”

Ouch! ds

© Dennis Simanaitis, 2018


  1. Michael Rubin
    March 13, 2018

    Somewhere between the two skiers, the dog and the invisible stop sign I started wondering about self-driving cars.
