In May 2018, Dario Amodei, vice president of research at OpenAI, offered a projection for the growth of the computing power used in the largest AI training runs, in which machine learning algorithms build models from data. Amodei explained that the computing resources dedicated to AI systems have been “increasing exponentially with a 3.4-month doubling time. Since 2012, this metric has grown by more than 300,000x [times].”
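For perspective, a short back-of-envelope sketch (assuming nothing beyond the two figures OpenAI cites) shows how quickly a 3.4-month doubling time compounds:

```python
import math

DOUBLING_TIME_MONTHS = 3.4  # figure cited by OpenAI

def growth_factor(months: float) -> float:
    """Total compute growth after `months` at a 3.4-month doubling time."""
    return 2 ** (months / DOUBLING_TIME_MONTHS)

# How many months of doubling does 300,000x growth imply?
months_needed = math.log2(300_000) * DOUBLING_TIME_MONTHS
print(f"{months_needed:.0f} months (~{months_needed / 12:.1f} years) "
      f"of doubling gives {growth_factor(months_needed):,.0f}x growth")
```

Roughly 62 months, or just over five years, of doublings is enough to reach the 300,000x figure.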
The editors at the Institute of Electrical and Electronics Engineers (IEEE) note that the advances of deep learning often come at an enormous price in computing resources and the energy they consume. Where once it was software eating the world, now it’s deep learning that’s eating the world.
The researchers at OpenAI are optimistic, but they warned in 2018, “Cost will eventually limit the parallelism side of the trend and physics will limit the chip efficiency side.” Networking massive numbers of parallel computers is expensive, and wedging ever more transistors onto a chip can yield units that run too hot to manage. Add the power footprint of these networks, and the energy required to run these AI operations becomes excessive.
A NEEDED DISRUPTION
What’s needed to prevent an inevitable slowdown or halt in this kind of machine learning is an entirely new direction for all or part of the process. IEEE Spectrum suggests one possibility: “An ambitious new strategy that’s coming to the fore this year is to perform many of the required calculations using photons rather than electrons. In particular, one company, Lightmatter, will begin marketing late this year a neural-network accelerator chip that calculates with light.”
The computing power required for deep learning is outstripping what systems can supply, and there aren’t many ways to fix this. You can write more efficient algorithms, but as Nicholas Harris, CEO of Lightmatter, explains, “I challenge you to lock a bunch of theorists in a room and have them come up with better algorithms every 18 months.” Harris concludes, “Either we invent new kinds of computers to continue, or AI slows down.” Boston-based Lightmatter has promised to release its Envise AI processing chip this year, which the company says is capable of much higher-speed calculations with fewer of the problems attached to electron-based circuits.
A plug-and-play chip, Envise relies not on transistors but on an optical accelerator that performs the specific mathematical operation, matrix multiplication, at the heart of neural-network calculations. David Schneider, writing in IEEE Spectrum, says, “Processing analog signals carried by light slashes energy costs and boosts the speed of calculations.”
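To see why matrix multiplication is the target operation, here is a minimal sketch of a dense neural-network layer in NumPy; the layer and batch sizes are illustrative assumptions, not details of Lightmatter’s hardware:

```python
import numpy as np

# One dense neural-network layer: its cost is dominated by a single
# matrix multiplication of inputs against the layer's weights.
rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 256))  # illustrative layer size
inputs = rng.standard_normal((32, 512))    # a batch of 32 input vectors

# This matmul is the operation an optical accelerator targets; the
# bias add and activation that follow are comparatively cheap.
activations = np.maximum(inputs @ weights, 0.0)  # ReLU(x @ W)
print(activations.shape)  # (32, 256)
```

Stacking many such layers means the vast majority of a network’s arithmetic is exactly this one operation, which is what makes a dedicated matrix-multiplication accelerator attractive.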
Lightmatter told Will Knight of Wired that its Envise chip runs 1.5 to 10 times faster than a top-of-the-line NVIDIA A100 AI chip, depending on the workload. “Running a natural language model called BERT [Bidirectional Encoder Representations from Transformers], for example, Lightmatter says Envise is five times faster than the Nvidia chip; it also consumes one-sixth of the power.” The power savings come from the lower energy demands of guiding light compared with moving electrons through transistors.
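Taken at face value, those two BERT figures compound into a performance-per-watt gain, as this back-of-envelope sketch shows (the ratios are simply the ones Lightmatter cites, not independent measurements):

```python
# Back-of-envelope comparison built from the cited BERT figures:
# five times the throughput at one-sixth the power draw.
speedup = 5.0        # Envise throughput relative to the A100
power_ratio = 1 / 6  # Envise power draw relative to the A100

perf_per_watt_gain = speedup / power_ratio
print(f"~{perf_per_watt_gain:.0f}x performance per watt")  # ~30x
```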
HANDLING LIMITATIONS
The analog calculations of Envise come with limitations that digital calculations don’t share. The chip is less accurate than conventional chips, though the company says it has solutions in place for improving accuracy. Schneider writes that Envise is an 8-bit-equivalent system, and “this limits [the] company’s chip to neural-network inference calculations—the ones that are carried out after the network has been trained. Harris and his colleagues hope their technology might one day be applied to training neural networks, too, but training demands more precision than their optical processor can now provide.”
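Why 8 bits suffice for inference but not training comes down to step size: trained weights survive coarse rounding, but a typical gradient update is smaller than one quantization step. A minimal sketch, assuming a simple symmetric 8-bit scheme (not Envise’s actual encoding), illustrates the point:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(1000) * 0.1  # illustrative trained weights
scale = np.abs(weights).max() / 127        # width of one 8-bit step

def to_int8(x: np.ndarray) -> np.ndarray:
    """Round values onto 255 evenly spaced levels (symmetric 8-bit)."""
    return np.round(x / scale).clip(-127, 127).astype(np.int8)

# Inference tolerates the rounding error of 8-bit weights...
error = np.abs(to_int8(weights) * scale - weights).max()
print(f"max rounding error: {error:.5f}")  # about half of one step

# ...but a typical training update is far smaller than one step,
# so most weights don't move at all once re-quantized.
update = 1e-4
moved = np.mean(to_int8(weights + update) != to_int8(weights))
print(f"weights that register the update: {moved:.1%}")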
Lightmatter’s plan for this year is to ship server blades, each holding 16 chips, that fit into existing data centers. You can follow the company’s progress toward this goal on its home page.
Lightmatter isn’t the only firm working to improve the efficiency of AI processing with photonic computing. Schneider lists five other companies that are also worth a look: Fathom Radiant, Lightelligence, LightOn, Luminous, and Optalysys.