Simanaitis Says

On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff


DO YOU RECALL Douglas Adams’ book and movie The Hitchhiker’s Guide to the Galaxy? Deep Thought, its giant computer (with Helen Mirren’s voice), is asked the Ultimate Question of Life, the Universe, and Everything. The answer? Forty-two.

The Ultimate Hitchhiker’s Guide to the Galaxy, by Douglas Adams, Del Rey, 2002; all five of the Hitchhiker’s series.

Deep Thought is a perfect example of black box reasoning. In engineering jargon, a black box transforms input into output, yet its user knows nothing whatsoever about what happens within. In the jargon of Artificial Intelligence, it’s AI performing magic that we non-AI types cannot fathom.
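A toy sketch of the idea (mine, not Holm's): from the user's side, a black box is nothing but a function from input to output, with the internals hidden away. The weights below are invented placeholders standing in for some opaque training process.

```python
# Toy "black box": the caller gets input -> output, never the "why."

def make_black_box():
    # Pretend these weights came from some opaque training process;
    # the caller never sees them.
    weights = [0.5, -1.2, 2.0]

    def predict(features):
        """Return a yes/no decision; no explanation is offered."""
        score = sum(w * x for w, x in zip(weights, features))
        return "yes" if score > 0 else "no"

    return predict

box = make_black_box()
print(box([1.0, 0.5, 0.2]))  # an answer appears; the reasoning stays inside
```

Whether the box is three weights or three billion, the user's experience is the same: an answer, and no account of how it was reached.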

Elizabeth Holm’s “In Defense of the Black Box,” Science, April 5, 2019, addresses related questions: “What good is knowing the answer when it is unclear why it is the answer? What good is a black box?”

Here are tidbits gleaned from Holm’s defense of black box use.

When the Cost of a Wrong Answer is Relatively Low. “Targeted advertising,” Holm says, “is the canonical example. From the vendor’s point of view, the cost of posting an unwanted ad is small, whereas the benefit of a successful ad is potentially large.”

Algorithms sneaking ads into our social networks may seem inscrutable. (I talk about Baedeker’s Rhine here, and I am forever after offered river cruises.)

When hackers finesse a photo of a tabby cat to be AI-identified as guacamole, we’re gently amused. But, as the saying goes, “no animals were harmed….”

By contrast, Holm notes, “Letting AI drive cars is more contentious because the black box necessarily makes life-or-death decisions without an opportunity for human intervention.”

Yet, AI systems are on a steep learning curve.

When It Produces Best Results. Holm says, “… self-driving vehicles eventually will be safer than those piloted by humans; they will produce the best results with respect to traffic injuries and fatalities.”

However, Holm notes, still to come with autonomous vehicles are “human ethics, fairness, and accountability to nonhuman entities.” She cites AI challenges: biases, including incorrect predictions and subjectively measured unfairness; inapplicability outside the training domain; and brittleness (the tendency to be easily fooled). As an example of this last one, there’s the AI mistaking turtles for rifles.

Image by N. Desai/Science, April 5, 2019.

When the Black Box Inspires Human Inquiry. “For example,” Holm writes, “in a groundbreaking medical imaging study, scientists trained a deep learning system to diagnose diabetic retinopathy—a diabetes complication that affects the eyes—from retinal images.”

The system’s performance met or surpassed that of a committee of ophthalmological experts. “More surprisingly,” Holm notes, “the system could accurately identify a number of other characteristics that are not normally assessed with retinal images, including cardiological risk factors, age, and gender.”

Another example of inspiration is Google DeepMind’s AlphaZero AI playing Go. Within a few weeks of machine learning, it had devised strategies beyond anything imagined by human players.

Go players. Image of a carving at the Seattle Asian Art Museum.

The Defense Rests. Holm suggests we accept “the black box on its own terms. Black box methods can contribute substantively and productively to science, technology, engineering, and math to provide value, optimize results, and spark inspiration.”

Just don’t play Go with one for cash. ds

© Dennis Simanaitis, 2019
