The Open Source Ztachip Is a RISC-V Accelerator for Edge AI and Computer Vision Applications

Designed to outperform even RISC-V chips with the recently ratified vector extension, ztachip can boost performance by up to 50 times.

Embedded developer Vuong Nguyen has released an open source RISC-V accelerator designed to boost the performance of edge AI and computer vision tasks up to 50 times — and you can try it out yourself by loading it onto a field-programmable gate array (FPGA).

"Ztachip is a RISC-V accelerator for vision and AI edge applications running on low-end FPGA devices or custom ASIC [Application Specific Integrated Circuit]," Nguyen explains. "An innovative tensor processor hardware is implemented to accelerate a wide range of different tasks from many common vision tasks such as edge-detection, optical-flow, motion-detection, color-conversion to executing TensorFlow AI models."

The ztachip aims to offer high-performance edge AI on a fully open source accelerator core. (📹: Vuong Nguyen)

The accelerator is built around the free and open source RISC-V instruction set architecture (ISA), and comes with some impressive performance claims: compared to a standard RISC-V core without specific optimizations for machine learning workloads, ztachip can accelerate performance by 20 to 50 times — even outperforming RISC-V chips that include the recently ratified vector processing extension.

The accelerator comes complete with what Nguyen calls "a new tensor programming paradigm," which is part of the secret behind the acceleration on offer. Despite its performance, though, the ztachip core is built to be resource-light — running happily on relatively low-end FPGA devices, which should in turn make it realizable in silicon without too much cost or complexity.

Ztachip is available to run in simulation or on Altera or Xilinx FPGAs, using a wrapper layer to ease porting to additional platforms when required. A demonstration of the accelerator running on an Arty A7 FPGA development board showcases a range of networks and tasks, including TensorFlow MobileNet image classification, SSD-MobileNet object detection, Canny edge detection, Harris corner interest-point detection, motion sensing, and a neat multi-tasking demo that runs object, edge, interest-point, and motion detection simultaneously.

The ztachip source code is available on GitHub under the permissive MIT license, with instructions on getting started with deploying the core to an FPGA.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: