Intel Announces Its First AI-Optimized FPGA, the Stratix 10 NX FPGA

AI Tensor Blocks provide accelerated AI compute for common matrix-matrix and vector-matrix multiplications and INT8 inferencing.

Ish Ot Jr.
9 months ago · Machine Learning & AI / FPGAs

According to Intel, AI model complexity is doubling every 3.5 months, roughly a 10X increase per year. To keep up with machine learning software's frantic pace, fixed-function ASICs can be replaced with more flexible, reprogrammable FPGAs. The Intel Stratix 10 NX FPGA provides accelerated AI compute via AI Tensor Blocks, which are optimized for common matrix-matrix and vector-matrix multiplications and for INT8 inferencing.
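To see what kind of arithmetic those tensor blocks accelerate, here is a minimal NumPy sketch of an INT8 vector-matrix multiply. This is purely illustrative (it is not Intel's API, and the sizes are arbitrary); the key idea is that 8-bit products are accumulated into wider integers so partial sums cannot overflow, which is how INT8 inference pipelines typically work.

```python
import numpy as np

# Illustrative INT8 vector-matrix multiply -- a sketch of the operation
# class AI Tensor Blocks accelerate, not Intel hardware behavior.
rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=16, dtype=np.int8)       # INT8 activation vector
W = rng.integers(-128, 128, size=(16, 4), dtype=np.int8)  # INT8 weight matrix

# Widen to int32 before multiplying so the accumulated dot products
# cannot overflow the 8-bit range.
acc = x.astype(np.int32) @ W.astype(np.int32)
print(acc.dtype, acc.shape)  # int32 (4,)
```

In real inference, the int32 accumulator would then be rescaled and requantized back to INT8 for the next layer.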

In-package 3D-stacked HBM DRAM allows models to be stored on-chip, and onboard transceivers permit multi-node inferencing at data rates of up to 57.8 Gbps. Thanks to Intel's Hyperflex FPGA Architecture, the Stratix 10 NX FPGA delivers twice the clock-frequency performance at up to 70% lower power compared to conventional architectures, and it offers a staggering logic capacity of more than 2,000,000 logic elements for hardware customization.

Intel's Ultra Path Interconnect (UPI) provides a high-performance interface to select Intel Xeon Scalable processors, while PCI Express (PCIe) Gen4 support offers connectivity to other devices. The Stratix 10 NX FPGA will be available from Intel later this year.
