Cerebras Systems Unveils Wafer-Scale Engine for Deep Learning Applications

AI systems are usually powered by GPUs with specialized cores: for NVIDIA, it’s Tesla-powered chips; for AMD, it’s Instinct-powered chips; and for Intel, the company’s upcoming Nervana chips will power deep learning applications. All of those GPUs are powerhouses at processing AI algorithms, and they can be found in everything from robots to IoT devices. Now, a California-based AI startup has taken the crown for raw processing power with a chip the size of an iPad that is expected to drive everything from autonomous vehicles to surveillance software.

Cerebras’ Wafer-Scale Engine (WSE) is the largest semiconductor ever manufactured, measuring an astounding 46,225 mm² and packing 1.2 trillion transistors across 400,000 sparse linear algebra (SLA) cores. To put that into perspective, NVIDIA’s powerful Tesla V100 GPU features 21.1 billion transistors to drive AI applications. The Wafer-Scale Engine also houses 18 GB of on-chip SRAM and has an interconnect bandwidth of 100 Pb/s (petabits per second). Processors and GPUs are normally produced on silicon wafers, each of which holds hundreds of separate chips that are then cut apart. Cerebras’ WSE, by contrast, is a single chip spanning an entire wafer, with all of its cores interconnected on the same piece of silicon.
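To get a feel for the scale gap, a quick back-of-the-envelope comparison of the transistor counts cited above (1.2 trillion for the WSE versus 21.1 billion for the Tesla V100) works out like this:

```python
# Rough comparison of the transistor counts quoted in the article.
# These figures are the article's claims, not independently measured.
wse_transistors = 1.2e12    # Cerebras Wafer-Scale Engine
v100_transistors = 21.1e9   # NVIDIA Tesla V100

ratio = wse_transistors / v100_transistors
print(f"The WSE packs roughly {ratio:.0f}x the transistors of a V100")
# prints: The WSE packs roughly 57x the transistors of a V100
```

In other words, the WSE carries well over fifty times the transistor budget of NVIDIA’s flagship data-center GPU of the time.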

Cerebras states that the Wafer-Scale Engine’s cores are programmable, allowing the chip to run neural networks built with standard machine learning frameworks, including TensorFlow and PyTorch. The company even claims it can shrink complex training workloads that used to take months down to minutes. Cerebras has begun shipping the Wafer-Scale Engine to a small number of customers. There is no word yet on price, but you can bet it won’t be cheap.
