The Bright Side of AI

Lightmatter’s photonic processor uses light, not electrons, to run AI more efficiently than ever — and it is compatible with standard tools.

nickbild
The photonic processor is available on a PCI-e card (📷: Lightmatter)

What exactly is a computer? Is it mechanical, electronic, or somewhere in between? Analog or digital? How many bits are in a byte? How many bits should a CPU operate on at one time? How do the fundamental units, like processing, memory, and storage, interact with one another? These questions are all but settled now (although quantum computing may shake things up again), but there was a time when there were almost as many answers to them as there were computer scientists.

It makes a lot of sense that we ended up with standardized architectures and instruction sets, because without them, interoperability and technological progress would be greatly hindered. But this standardization comes at a cost, as we are now seeing with the rise of artificial intelligence (AI). Computers are being asked to do things that had not been imagined when the basic designs were drawn up, and as it turns out, those designs are not well-suited to running large AI algorithms.

A rack of photonic processors (📷: Lightmatter)

A new kind of computing is needed to handle today’s AI workloads, and that is what a company called Lightmatter is promising with its recently announced photonic processor. The company has demonstrated that its non-traditional computing system can perform advanced AI tasks accurately and efficiently, and the technology appears to have the potential to push computing beyond its present limitations.

Lightmatter’s processor uses photons (particles of light) instead of electrons to perform calculations. This approach offers several built-in advantages, including high bandwidth, ultra-low latency, and significantly improved energy efficiency. Most importantly, it enables parallel computing at a scale that is simply not possible with conventional electronic systems.
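
Photonic AI accelerators of this kind typically compute matrix-vector products in the analog domain, so an entire output vector emerges from one pass of light through the chip rather than from a long sequence of discrete multiply-accumulate steps. The toy NumPy model below illustrates that idea only; the sizes and noise level are illustrative assumptions, not Lightmatter specifications:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))   # weights encoded in the optical mesh
x = rng.normal(size=256)          # input vector modulated onto light

# One "pass of light" produces every output element at once; analog
# hardware adds a small amount of noise to each measurement.
noise = rng.normal(scale=1e-2, size=256)
y_analog = W @ x + noise

# A digital core would accumulate the same products step by step.
y_digital = W @ x
print(np.max(np.abs(y_analog - y_digital)))  # small, noise-limited error
```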

The processor package itself contains six chips, packed with 1 million photonic components and 50 billion transistors, connected via high-speed vertical interconnects. Despite its complexity, it consumes only 78 watts of electrical power and 1.6 watts of optical power while achieving a processing speed of 65.5 trillion operations per second.
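
Taken at face value, those figures work out to roughly 0.82 trillion operations per second per watt. A quick back-of-envelope check using only the numbers quoted above:

```python
ops = 65.5e12          # operations per second, as claimed
watts = 78.0 + 1.6     # electrical plus optical power draw
print(f"{ops / watts / 1e12:.2f} TOPS/W")  # -> 0.82
```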

A micrograph of the chip (📷: Lightmatter)

The chip uses a numerical system called Adaptive Block Floating Point (ABFP), which allows the analog photonic computations to maintain the precision needed for deep learning tasks. Instead of assigning an exponent to every number (as in floating-point math), ABFP assigns a shared exponent to blocks of numbers, such as neural network weights or activations. These values are then normalized, processed through the photonic hardware in fixed-point format, and finally rescaled using the shared exponent. This method drastically reduces quantization errors that usually plague analog systems.
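
Below is a minimal NumPy sketch of the scheme as described, with an integer matrix-vector product standing in for the photonic hardware. The 8-bit mantissa width and per-row blocking are illustrative assumptions, not published details of the chip:

```python
import numpy as np

def abfp_quantize(block, bits=8):
    """Map a block of floats to signed fixed-point mantissas that
    share one power-of-two exponent (the block exponent)."""
    limit = 2 ** (bits - 1) - 1
    max_val = np.max(np.abs(block))
    if max_val == 0:
        return np.zeros_like(block, dtype=np.int32), 0
    # Smallest shared exponent that keeps the largest value in range.
    exp = int(np.ceil(np.log2(max_val / limit)))
    mantissas = np.clip(np.round(block / 2.0 ** exp), -limit, limit)
    return mantissas.astype(np.int32), exp

rng = np.random.default_rng(1)
weights = rng.normal(size=(64, 64))
acts = rng.normal(size=64)

# One shared exponent per weight row, and one for the activations.
w_mant = np.empty_like(weights, dtype=np.int32)
w_exp = np.empty(64, dtype=np.int64)
for i, row in enumerate(weights):
    w_mant[i], w_exp[i] = abfp_quantize(row)
x_mant, x_exp = abfp_quantize(acts)

# Fixed-point dot products stand in for the photonic hardware;
# the shared exponents rescale the results afterwards.
y = (w_mant @ x_mant) * 2.0 ** (w_exp + x_exp)

# Quantization error stays small relative to the exact result.
print(np.max(np.abs(y - weights @ acts)))
```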

What makes this system practical, not just theoretical, is its integration with standard AI tools like PyTorch and TensorFlow. Developers do not need to reinvent their software stacks to use Lightmatter’s hardware. As a result, the processor has effectively run a number of advanced neural networks, including ResNet, BERT, and DeepMind’s Atari-playing reinforcement learning models, without modifying the models or using special training tricks.
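
As a hedged illustration of what that kind of integration can look like, the sketch below swaps a PyTorch model’s nn.Linear layers for a subclass that emulates ABFP rounding around the matrix multiply, leaving the model definition untouched. EmulatedPhotonicLinear and its abfp helper are hypothetical names invented here for illustration; they are not Lightmatter’s actual API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def abfp(t, bits=8):
    # One shared power-of-two scale for the whole tensor (one "block").
    limit = 2 ** (bits - 1) - 1
    max_val = t.detach().abs().amax()
    if max_val == 0:
        return t, torch.tensor(1.0)
    scale = 2.0 ** torch.ceil(torch.log2(max_val / limit))
    return torch.clamp(torch.round(t / scale), -limit, limit), scale

class EmulatedPhotonicLinear(nn.Linear):
    """Hypothetical drop-in for nn.Linear: quantize weights and
    activations to shared-exponent fixed point, matmul, rescale."""
    def forward(self, x):
        w_q, w_s = abfp(self.weight)
        x_q, x_s = abfp(x)
        y = F.linear(x_q, w_q) * (w_s * x_s)  # rescale after fixed-point matmul
        return y + self.bias if self.bias is not None else y

# Swap layers in an existing, unmodified model:
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
for i, child in enumerate(model):
    if isinstance(child, nn.Linear):
        repl = EmulatedPhotonicLinear(child.in_features, child.out_features)
        repl.load_state_dict(child.state_dict())  # reuse trained weights as-is
        model[i] = repl

out = model(torch.randn(4, 16))  # same interface, emulated analog math
```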

Lightmatter’s processor is not a replacement for digital computing, at least not yet. Instead, it is a complementary technology, much like GPUs are today. But as the scaling problems of conventional electronics grow more acute, photonic processors could play an increasingly important role in the future of computing.
