WORKING THE INTERNET’S EDGE WITH ANALOG OPTICS

WE LIVE IN DIGITAL TIMES. Yet, as noted recently here in SimanaitisSays, analog computing is having a resurgence.

“Delocalized Photonic Deep Learning on the Internet’s Edge,” by Alexander Sludds et al., Science, October 20, 2022, offers an example of this, albeit with adjectives and nouns begging for definition, to the benefit of us non-specialists. Here are tidbits on this tantalizing research.

Background. Deep Learning uses artificial neural networks to mimic the learning process of the human brain. It identifies inferences (loosely, if/then relationships) by analyzing vast amounts of data.

This deep learning is photonic if it employs photons, not electrons; that is, optical fibers in lieu of electrical wires. A benefit of photonics is its support of higher data rates, and thus potentially more machine learning in a given nanosecond.

Delocalizing this learning splits the process into tasks capable of being performed by a multiplicity of less powerful devices, even by devices as ubiquitous as cell phones.

Learning on the Edge. The researchers note, “Smart devices such as cell phones and sensors are low-power electronics operating on the edge of the internet. Although they are increasingly more powerful, they cannot perform complex machine learning tasks locally. Instead, such devices offload these tasks to the cloud, where they are performed by factory-sized servers in data centers, creating issues related to large power consumption, latency, and data privacy.”

They offer an alternative to this: “We introduce an approach to machine learning inference based on delocalized analog processing across networks. In this approach, named Netcast, cloud-based ‘smart transceivers’ stream weight data to edge devices, enabling ultraefficient photonic inference.”  
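A back-of-the-envelope way to picture that division of labor (a Python toy of my own, with names such as stream_weights and edge_infer invented for illustration, and ordinary digital arithmetic standing in for the optics): the cloud streams weight data a row at a time, while the edge device does only simple arithmetic on data it already holds.

```python
import numpy as np

# Illustrative sketch of the Netcast idea, not the authors' implementation:
# the cloud owns the (large) weight matrix and streams it row by row; the
# edge device stores only its own activations and accumulates one dot
# product per streamed row, never holding the full matrix.

def stream_weights(weight_matrix):
    """Cloud side: yield one weight row at a time (a stand-in for the
    optical 'smart transceiver' broadcasting weight data)."""
    for row in weight_matrix:
        yield row

def edge_infer(weight_stream, activations):
    """Edge side: multiply each incoming weight row against the locally
    held activations and accumulate the result."""
    return np.array([np.dot(row, activations) for row in weight_stream])

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # weights live in the cloud
x = rng.standard_normal(8)        # activations live on the edge device

y = edge_infer(stream_weights(W), x)
assert np.allclose(y, W @ x)      # same answer as a conventional matrix-vector product
```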

Linear Algebra’s Matrices. The computations of deep learning rest on the linear algebra of matrices, rectangular arrays of values. Ryan Hamerly offers details in “The Future of Deep Learning is Photonic,” IEEE Spectrum, June 29, 2021.

Modern computer hardware, Hamerly notes, “has been very well optimized for matrix operations…. The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products added up.”
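To see what that means in practice, here is a minimal illustration (mine, not Hamerly’s) of a matrix-vector product spelled out as explicit multiply-and-accumulate steps:

```python
import numpy as np

# Each output of a neural-network layer is a long run of
# multiply-and-accumulate (MAC) steps: multiply pairs of numbers,
# then add the products up.
def matvec_as_macs(W, x):
    y = np.zeros(W.shape[0])
    for i in range(W.shape[0]):        # one output per weight row
        acc = 0.0
        for j in range(W.shape[1]):    # one MAC per weight
            acc += W[i, j] * x[j]
        y[i] = acc
    return y

W = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([5.0, 6.0])
print(matvec_as_macs(W, x))            # [17. 39.], same as W @ x
```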

Digital Overwork, Analog Photonics. However, with advances in deep learning come vast increases in the number of these operations. “The usual solution,” Hamerly notes, “is simply to throw more computing resources—along with time, money, and energy—at the problem.”

“As a result,” Hamerly notes, “training today’s large neural networks often has a significant environmental footprint. One 2019 study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.”

Succinctly, Moore’s Law (“Transistor density doubles every two years”) is “running out of steam.” And analog photonics is a promising alternative.

This computer rendering depicts the pattern of a photonic chip performing neural-network calculations using light. Image by Alexander Sludds from IEEE Spectrum.

Multiplying with Light. Hamerly describes how photonic multiplication operates: “Two beams whose electric fields are proportional to the numbers to be multiplied, x and y, impinge on a beam splitter (blue square). The beams leaving the beam splitter shine on photodetectors (ovals), which provide electrical signals proportional to these electric fields squared. Inverting one photodetector signal and adding it to the other then results in a signal proportional to the product of the two inputs.”

Image by David Schneider from IEEE Spectrum.
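A quick numerical sanity check of that scheme, assuming an ideal 50/50 beam splitter and noiseless detectors (a toy model, not the photonic chip itself):

```python
import numpy as np

# The balanced-photodetector trick Hamerly describes: a 50/50 beam splitter
# turns input fields x and y into (x + y)/sqrt(2) and (x - y)/sqrt(2);
# each photodetector reports the square of its field; half the difference
# of the two detector signals equals the product x * y.
def optical_multiply(x, y):
    out_plus = (x + y) / np.sqrt(2)          # one beam-splitter output
    out_minus = (x - y) / np.sqrt(2)         # the other output
    detector_plus = out_plus ** 2            # photodetector: field squared
    detector_minus = out_minus ** 2
    return 0.5 * (detector_plus - detector_minus)

print(optical_multiply(3.0, 4.0))            # prints (approximately) 12.0
```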

And Exceedingly Quickly. What with photons being more efficient than electrons, Sludds and his colleagues report that “Netcast allows milliwatt-class edge devices with minimal memory and processing to compute at teraFLOPS rates reserved for high-power (>100 watts) cloud computers.”

Their edge computing architecture “makes use of the strengths of photonics and electronics to achieve orders of magnitude in energy efficiency and optical sensitivity improvements over existing digital electronics.” 

Quite an achievement. And the rest of us get to learn a new way of multiplying optically. ds 

© Dennis Simanaitis, SimanaitisSays.com, 2022 
