Efinix Launches RISC-V-Based TinyML Platform for High-Efficiency Edge AI Acceleration

Built on the 32-bit VexRiscv core with custom instructions, this TFLite Micro platform offers the option of user-defined acceleration.

Gareth Halfacree
1 year ago • Machine Learning & AI / FPGAs

High-efficiency field-programmable gate array (FPGA) specialist Efinix has announced the launch of the TinyML Platform, a RISC-V-based artificial intelligence acceleration solution which it claims offers a lower barrier to entry than competing solutions.

"We are seeing an increasing trend to drive AI workloads to the far edge where they have immediate access to raw data in an environment where it is still contextually relevant. Providing sufficient compute for these AI algorithms in power and space constrained environments is a huge challenge," claims Efinix's Mark Oliver in support of the launch. "Our TinyML Platform harnesses the potential of our high performance, embedded RISC-V core combined with the efficiency of the Efinix FPGA architecture and delivers them intuitively to the designer, speeding time to market and lowering the barrier to AI adoption at the edge."

The Efinix TinyML Platform is built atop the Sapphire system-on-chip (SoC), a 32-bit VexRiscv RISC-V quad-core Linux-capable part which uses custom instructions to accelerate tinyML and edge AI workloads. Those workloads run on the device through the open source TensorFlow Lite for Microcontrollers (TFLite Micro) library, a community creation on which Efinix's platform rests. The company is also offering an Edge Vision SoC Framework as a "starting point" for model implementation, to help users get up and running as quickly as possible.

For those with unique needs, the platform also offers user-defined accelerators, which can be loaded onto the FPGA alongside the RISC-V cores and Efinix's own accelerator. An accelerator socket connected to the direct memory access (DMA) controller and the SoC provides a route for pre- and post-processing around AI inference. Efinix's own accelerator offers two operation modes: Lite, which minimizes resource usage, and Standard, which delivers the highest performance.

More details on the TinyML Platform are available on the Efinix website, while a tutorial has been published to GitHub alongside the source code, which is released under the permissive MIT license.
