IN 1975, INTEL co-founder Gordon Moore predicted that the number of transistors on a chip would double every two years. Moore’s Law, as it came to be known, held true until recently. These fundamental enhancements “at the Bottom” have given us cellphones with more power than room-size computers had 25 years ago and Internet access for nearly half the world.
“Unfortunately,” MIT computer science specialist Charles E. Leiserson and his colleagues observe, “semiconductor miniaturization is running out of steam as a viable way to grow computer performance—there isn’t much more room at the ‘Bottom.’ ”
Fortunately, in “There’s Plenty of Room at the Top,” Science, June 5, 2020, Leiserson et al. identify three areas of potential growth at the “Top” of computer technology: software enhancement, new algorithms, and advances in hardware architecture. (The title, like the Bottom/Top metaphor, riffs on Richard Feynman’s 1959 lecture “There’s Plenty of Room at the Bottom.”) Here are tidbits gleaned from this article and my usual Internet sleuthing.
Less Software Bloat. Software can be made to run faster by revising the strategy of its development. Hitherto, the goal has been to minimize a program’s development time, not its running efficiency.
Leiserson and his colleagues write, “Performance engineering can also tailor software to the hardware on which it runs, for example, to take advantage of parallel processors [with simultaneous computation of a task’s separate aspects] and vector units.”
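To make this concrete, here’s a little sketch of my own (in Python, not from the Science paper): the same arithmetic done first in an interpreted loop, then with NumPy, whose bulk operations run in compiled code and can exploit a processor’s vector units.

```python
# A hedged illustration of performance engineering: identical work,
# restructured to suit the hardware. NumPy's vectorized operations run
# in compiled code and can use the CPU's vector units.
import time
import numpy as np

N = 10_000_000

# Naive version: an interpreted loop, one element at a time.
t0 = time.perf_counter()
total = 0.0
for i in range(N):
    total += i * i
t_loop = time.perf_counter() - t0

# Vectorized version: one bulk operation over the whole array.
t0 = time.perf_counter()
x = np.arange(N, dtype=np.float64)
total_vec = float(x @ x)
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.2f} s   vectorized: {t_vec:.3f} s")
```

No change of hardware, yet the vectorized version typically runs tens of times faster.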
More New Algorithms. The specialists note that, since the late 1970s, processing time has improved as much from algorithmic advances as from hardware speedups. “As such,” they say, “we see the biggest benefits coming from algorithms for new problem domains (e.g., machine learning) and from developing new theoretical machine models that better reflect emerging hardware.”
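As an illustration of my own devising (the authors cite far grander examples), here’s the same question, “do any two numbers in a list sum to a given target?”, answered by brute force and then by a smarter one-pass algorithm:

```python
# Two algorithms for the same problem on the same hardware.
from itertools import combinations

def has_pair_quadratic(nums, target):
    # Brute force: check every pair of numbers, O(n^2) comparisons.
    return any(a + b == target for a, b in combinations(nums, 2))

def has_pair_linear(nums, target):
    # Hash-set method: a single pass, O(n) on average.
    seen = set()
    for n in nums:
        if target - n in seen:
            return True
        seen.add(n)
    return False

nums = list(range(100_000))
print(has_pair_quadratic(nums[:2000], 3999))   # True, but slow to scale
print(has_pair_linear(nums, 199_997))          # True, and stays fast
```

The hardware hasn’t changed; only the algorithm has, and the speedup grows with the size of the problem.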
Other Hardware Architecture. “Hardware architecture,” the specialists say, “can be streamlined—for instance, through processor simplification, where a complex processing core is replaced with a simpler core that requires fewer transistors.”
This may seem counterintuitive, but a crucial goal in state-of-the-art computation these days is minimizing how far signals must travel within a core. Parallel computing also lends itself to this idea of many smaller, dedicated cores.
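Here’s a toy rendering of the many-simpler-cores idea, using Python’s standard multiprocessing module (my sketch, not the paper’s): one big summation split into chunks that independent workers compute simultaneously.

```python
# A minimal sketch of parallel computation: divide the work among
# several worker processes, then combine their partial results.
from multiprocessing import Pool

def sum_of_squares(bounds):
    # Each worker handles one slice of the overall range.
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    N = 10_000_000
    step = 2_500_000
    chunks = [(i, min(i + step, N)) for i in range(0, N, step)]
    with Pool(processes=4) as pool:        # four workers in parallel
        partials = pool.map(sum_of_squares, chunks)
    print(sum(partials))
```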
“Another form of streamlining,” Leiserson and his colleagues note, “is domain specialization, where hardware is customized for a particular application domain. This type of specialization jettisons processor functionality that is not needed for the domain….”
They cite machine learning as an example: Its hardware doesn’t need the floating-point precision required in engineering calculations.
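A quick NumPy sketch of the precision point (mine, and only suggestive): storing a million “weights” in 16-bit rather than 64-bit floats quarters the memory, at the cost of rounding error that machine learning usually tolerates and engineering often can’t.

```python
# Reduced precision: the memory/accuracy tradeoff in miniature.
import numpy as np

weights64 = np.random.rand(1_000_000)        # float64: 8 bytes per value
weights16 = weights64.astype(np.float16)     # float16: 2 bytes per value

print(weights64.nbytes, "bytes vs", weights16.nbytes, "bytes")
print("worst rounding error:", float(np.abs(weights64 - weights16).max()))
```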
There’s also the potential for replacing silicon technology with quantum computing.
Digital technology is binary: off/on, 0/1. Quantum computing is rather more complex, conceptually and technologically: its qubits can occupy superpositions of 0 and 1 simultaneously.
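By way of a toy illustration (my own, simulated classically with NumPy, which is emphatically not a quantum computer): where a bit holds 0 or 1, a qubit’s state is a pair of complex amplitudes, and a single gate can put it into an equal superposition of both values.

```python
# Simulating one qubit classically: amplitudes, a gate, probabilities.
import numpy as np

zero = np.array([1, 0], dtype=complex)        # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

superposed = H @ zero                         # equal superposition
probs = np.abs(superposed) ** 2               # measurement probabilities
print(probs)                                  # [0.5 0.5]
```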
A Coming Tradeoff. “As miniaturization wanes,” Leiserson and his colleagues conclude, “the silicon-fabrication improvements at the Bottom will no longer provide the predictable, broad-based gains in computer performance that society has enjoyed for more than 50 years.”
“Unlike the historical gains at the Bottom,” they say, “gains at the Top will be opportunistic, uneven, and sporadic. Moreover, they will be subject to diminishing returns as specific computations become better explored.”
Goodbye, Moore’s Law. As Margo said in All About Eve, “Fasten your seatbelts, it’s going to be a bumpy night.” Probably an interesting one too. ds
© Dennis Simanaitis, SimanaitisSays.com, 2020