On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff
ROBOTS ARE getting closer to achieving human-like dexterity. Computers are getting better at recognizing speech and writing. Many of these advances are attributable to artificial neural networks, ANNs for short.
Briefly, an ANN mimics the workings of the human brain. However, this doesn’t tell me much because I don’t understand how the brain works, axons, synapses and all that jazz. Instead, let’s ignore the physiology and focus on results: Neural networks, human and artificial, process information. Both are able to make order out of the overkill of sensory input. ANNs, like humans, are able to learn by example.
In its learning mode, a neuron is trained to fire or not, depending on specific input patterns. A simple neuron’s firing rules are of the digital yes/no, on/off type. Complex neurons have weighted inputs that nuance the decision-making; their operations are akin to analog computing. An ANN is built of interconnected arrays of these complex neurons.
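The weighted-input idea can be sketched in a few lines of Python. The weights and threshold below are purely illustrative numbers, not drawn from any real network:

```python
# A minimal sketch of a single artificial neuron with weighted inputs.
# The weights and threshold are illustrative, chosen only for this example.

def neuron_fires(inputs, weights, threshold):
    """Return True (fire) if the weighted sum of inputs meets the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum >= threshold

# Three inputs, the second weighted most heavily.
print(neuron_fires([1, 0, 1], [0.4, 0.9, 0.2], threshold=0.5))  # True: 0.6 >= 0.5
print(neuron_fires([0, 0, 1], [0.4, 0.9, 0.2], threshold=0.5))  # False: 0.2 < 0.5
```

Training, in this picture, amounts to adjusting the weights until the neuron fires on the right patterns.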
The first artificial neuron was formulated back in 1943 by neurophysiologist Warren McCulloch and mathematical logician Walter Pitts. Alas, the information-processing technology of that era couldn’t exploit their theoretical construct. Today, though, computers have achieved the power and sophistication to handle big data. And the latest research in “deep learning” has taken ANNs beyond the capabilities of conventional programming.
Conventional programming solves problems with algorithms, that is, sets of unambiguous instructions. By contrast, today’s ANNs use huge collections of highly interconnected neurons working in parallel and learning patterns by example. The two approaches are complementary, not competing. Often, conventional computing supervises ANN operation.
As its name implies, deep learning enhances an ANN’s ability to establish its own rules of neuron firing. Researchers at DeepMind, now part of Google, developed a new approach to deep learning and tested it on classic Atari 2600 games, including Breakout, Enduro, River Raid, Seaquest and Space Invaders. Their ANN achieved “a level comparable to that of a professional human game tester across a set of 49 games.”
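At the heart of the DeepMind work is reinforcement learning: the agent improves its estimate of how valuable each action is, based on the game score it receives. A vastly simplified, tabular sketch of that core update is below; DeepMind’s agent replaces the table with a deep neural network, and the states, actions and numbers here are made up for illustration:

```python
# Toy illustration of reinforcement learning's core idea: updating an
# action-value estimate Q(state, action) from reward feedback.
from collections import defaultdict

Q = defaultdict(float)      # Q[(state, action)] -> estimated value, starts at 0
alpha, gamma = 0.1, 0.99    # learning rate and discount factor (illustrative)

def q_update(state, action, reward, next_state, actions):
    """One step of the classic Q-learning update rule."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Example: the agent scores 1 point after moving the paddle left in state "s0".
q_update("s0", "left", 1.0, "s1", actions=["left", "right"])
print(Q[("s0", "left")])    # 0.1 after one update
```

Repeated over millions of game frames, updates like this gradually steer the agent toward high-scoring play.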
Stressing this point, the researchers wrote that their approach “bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.” A full discussion appears in “Human-level control through deep reinforcement learning.”
Researchers at the University of California, Berkeley have designed a Berkeley Robot for the Elimination of Tedious Tasks. BRETT learns human-like dexterity in things hitherto beyond robotic capability. The researchers’ work in advanced ANN is described in “End-to-End Training of Deep Visuomotor Policies.” Their deep convolutional neural networks (CNNs) have more than 92,000 parameters.
The UC Berkeley researchers write, “This method can learn a number of manipulation tasks that require close coordination between vision and control, including inserting a block into a shape sorting cube, screwing on a bottle cap, fitting the claw of a toy hammer under a nail with various grasps and placing a coat hanger on a clothes rack.”
Everything but the hammer I’m good at. ds
© Dennis Simanaitis, SimanaitisSays.com, 2015