Simanaitis Says

On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff

MACHINE LEARN THE NEW YORKER?

THE HUMAN BRAIN’S neural network has a hundred billion nerve cells with trillions of connections linking them. Human thought is constantly updating these links with the latest input. Artificial Intelligence’s machine learning mimics this: Vast quantities of data are fed to artificial neural networks, and the newest ones figure out their own links.

How successful is machine-learning A.I.? As described here at SimanaitisSays, AlphaZero, a Google DeepMind project, learned the classic Go game in 72 hours. Left running (and machine-learning) for several weeks, “it had taken Go to a level that was beyond anything that had been previously imagined.”

Smart Compose, another piece of Google machine-learning software, was introduced to Gmail users in May 2018. This A.I. suggests the ending of your sentence as you type it.

Type “Four score and seven,” and you can bet it’ll suggest “years ago our fathers….”
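Here’s a toy sketch of the predictive-text idea, in Python and purely for illustration: count which word tends to follow which in a small body of text, then greedily extend whatever the user has typed. Smart Compose itself relies on large neural language models, not simple word counts.

```python
# Toy sketch of predictive text: count which word tends to follow which in a
# tiny corpus, then greedily extend whatever the user has typed.  Smart
# Compose itself relies on large neural language models, not word counts.
from collections import Counter, defaultdict

corpus = (
    "four score and seven years ago our fathers brought forth "
    "on this continent a new nation"
).split()

# Count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def suggest(prompt, length=6):
    """Greedily extend the prompt with the most frequent next word."""
    words = prompt.lower().split()
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(suggest("Four score and seven"))
# -> four score and seven years ago our fathers brought forth
```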

John Seabrook gives details of this “predictive text” machine learning in “The Next Word” at The New Yorker Online. His article is scheduled to be published in the magazine’s print edition, October 14, 2019. However, there’s an interactive feature of the online version that makes it particularly entertaining: From time to time, you can click to read predicted text for the article. A.I.’s prediction, not Seabrook’s.

“The Next Word,” by John Seabrook, The New Yorker print edition, October 14, 2019. Image by Igor Bastides.

Machine Learning’s Starting Point. Machines learn from data in two ways. The older approach is supervised: humans label the training data, a particularly time-consuming process. More recent advances have been unsupervised: with “deep learning,” the computer is fed raw data and figures out its own patterns through trial and error.
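A minimal sketch of the contrast, using invented toy data: in the supervised case a person has written every label; in the second case (often called “self-supervised” for language models), the “labels” are simply the next words already present in the data.

```python
# Toy illustration of the two set-ups (invented data; real systems train
# large neural networks rather than building these little lists).

# Supervised: a human has hand-labeled every training example.
labeled_examples = [
    ("the movie was wonderful", "positive"),   # label supplied by a person
    ("the movie was dreadful", "negative"),
]

# Unsupervised / self-supervised: the "labels" come from the data itself --
# predict each next word from the words that precede it.
text = "four score and seven years ago".split()
next_word_examples = [(text[:i], text[i]) for i in range(1, len(text))]

print(next_word_examples[1])   # (['four', 'score'], 'and')
```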

AlphaZero gained its expertise at Go by playing scads of games against itself and learning from their outcomes. Smart Compose gets its smarts from the accumulated words of Gmail users.

Deep Learning Tradeoffs. There may be no privacy issues inherent in Go games. But a database of Gmail users’ messages is another matter.

Image by Igor Bastides for The New York Times, January 18, 2019.

There’s also an environmental implication: Seabrook notes the high cost of training extensive neural nets, “… in part because of the energy costs incurred in running and cooling the sprawling terrestrial ‘server farms’ that power the cloud. A group of researchers at UMass Amherst, led by Emma Strubell, conducted a recent study showing that the carbon footprint created by training a gigantic neural net is roughly equal to the lifetime emissions of five automobiles.”
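For a sense of how such a footprint gets estimated, here’s a back-of-the-envelope sketch: hardware power draw, scaled up by the data center’s cooling overhead, converted to CO2 by the grid’s emission factor. Every number below is an illustrative placeholder, not a figure from the UMass Amherst study.

```python
# Back-of-the-envelope sketch of how a training-run carbon footprint is
# assembled: energy drawn by the hardware, scaled by data-center overhead
# (PUE), converted to CO2 via the grid's emission factor.
# All numbers are illustrative placeholders, not figures from the study.

gpu_power_kw = 0.3          # average draw of one accelerator, in kilowatts
num_gpus = 8                # accelerators used for the training run
training_hours = 24 * 30    # a month-long training run
pue = 1.6                   # power usage effectiveness (cooling, etc.)
kg_co2_per_kwh = 0.45       # grid carbon intensity, kg CO2 per kWh

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
co2_kg = energy_kwh * kg_co2_per_kwh

print(f"{energy_kwh:,.0f} kWh  ->  {co2_kg:,.0f} kg CO2")
```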

A third matter is philosophical: Given the potential power of deep-learning A.I., what protects us from a few corporations reaping what Seabrook calls “the almost immeasurable rewards of a vast new world”? Can capitalism withstand such disruption?

A Nonprofit Approach. OpenAI is a nonprofit organization founded in 2015. Seabrook notes, 
“Its founders’ idea was to endow a nonprofit with the expertise and the resources to be competitive with private enterprise, while at the same time making its discoveries available as open source.”

As one of its discoveries, OpenAI devised GPT-2, what Seabrook calls “a kind of supercharged version of Smart Compose.” He says, “GPT-2 was trained to write from a forty-gigabyte data set of articles that people had posted links to on Reddit and which other Reddit users had upvoted.”
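For the curious, here’s a minimal sketch of how such a language model is used to continue a prompt, drawing on one of the smaller GPT-2 checkpoints through the Hugging Face “transformers” library. The library and the “gpt2” checkpoint name are this example’s assumptions, not anything from Seabrook’s article.

```python
# Minimal sketch: continue a prompt with a smaller GPT-2 checkpoint via the
# Hugging Face "transformers" library (illustrative; not from the article).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Four score and seven years ago"
inputs = tokenizer(prompt, return_tensors="pt")

# The model extends the prompt one predicted token ("next word") at a time.
outputs = model.generate(
    **inputs,
    max_length=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```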

In February 2019, OpenAI delayed GPT-2’s release because, as Seabrook writes, “the machine was too good at writing.”

A grandiose publicity stunt? Or part of OpenAI’s avowed mission not to upset a level A.I. playing field?

How Close to Superhuman? Moore’s Law, as fine-tuned by Intel co-founder Gordon Moore in 1975, implied that computer processing power would double every two years. Indeed, the “total available compute” for training the largest A.I. models is now estimated to be increasing roughly ten-fold every year.

Seabrook writes, “The brain is estimated to contain a hundred billion neurons, with trillions of connections between them. The neural net that the full version of GPT-2 runs on has about one and a half billion connections, or ‘parameters.’ At the current rate at which compute is growing, neural nets could equal the brain’s raw processing capacity in five years.”
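The arithmetic behind that five-year figure can be sketched out. This is illustrative only: it takes the brain’s “trillions of connections” to mean roughly a hundred trillion, and parameters and synapses aren’t truly comparable quantities.

```python
# Rough arithmetic behind the five-year projection: if the full GPT-2 has
# about 1.5 billion parameters and model scale grows roughly tenfold per
# year, when does it reach the brain's estimated connection count?
# (Illustrative only; 1e14 is one common estimate for synapse count.)
gpt2_parameters = 1.5e9      # ~1.5 billion connections ("parameters")
brain_connections = 1e14     # ~100 trillion synapses (assumed figure)
growth_per_year = 10         # ~10x increase in compute per year

years = 0
scale = gpt2_parameters
while scale < brain_connections:
    scale *= growth_per_year
    years += 1

print(years)  # -> 5 years at this growth rate
```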

Deep learning has exceeded human Go playing, and perhaps human writing, but what about human thought and creativity? Or human kindness? ds

© Dennis Simanaitis, SimanaitisSays.com, 2019

One comment on “MACHINE LEARN THE NEW YORKER?”

  1. phil ford
    October 14, 2019

    If AI could learn human kindness, we’d have a chance of saving it …
