Bringing AI to the Edge: Xilinx's New Kria SOM Lineup with Embedded App Store

Xilinx announces the launch of its new SOM ecosystem for AI and ML modeling in edge applications with the Kria K26 and an embedded app store!

Xilinx's Kria K26 SOM (📷: Xilinx)

The system-on-module (SOM) format for FPGA development boards has gained a lot of traction recently as a cost-effective way to add an extra level of hardware flexibility to a development setup. As its name implies, a SOM FPGA development board isolates the actual FPGA chip on its own modular PCB with some sort of high-speed, high-density connector interface, while the rest of the peripherals such as power supplies, interface connectors, COM ports, etc. are placed on a compatible baseboard PCB.

This allows a user to select a target FPGA chip and the desired peripherals for their development platform without the constraint of finding everything on a single development board. As an FPGA engineer working in industry, I have many frustrating memories of finding a development board with my target FPGA chip, only to discover it was missing a key peripheral, or vice versa.

It's cost-effective in the sense that when a future project needs new peripherals (but can be done with the same FPGA chip), a whole new FPGA development board isn't required. Instead, the baseboard can simply be swapped out.

Between their small form factor and the ability to adapt to different use cases via a simple baseboard swap, it's no surprise that SOMs are steadily becoming mainstream as demand for edge computing solutions rises.

Today, though, Xilinx has set the bar high for SOMs in the AI-at-the-edge space by announcing its all-new Kria system-on-module portfolio. The first of these, the Kria K26, dropped today, and Xilinx also teased two more future SOMs targeting other edge computing applications.

The Kria SOM hardware lineup (📷: Xilinx)

The Kria K26 in particular is geared towards vision AI for applications such as smart cities and smart factories. Vision AI, also commonly referred to as computer vision, focuses on training computers to act as a human visual system.

Vision AI enables a computing device to capture images or video with a camera and process them to identify the objects the camera is viewing, mimicking the way the human eye views an object and relays it to the brain, where a decision is made or an action is taken based on what was seen.

Using neural networks, vision AI takes an image and differentiates between the various elements in it to ultimately identify objects based on a given computer vision model. Types of computer vision models include object detection, edge detection, pattern/face recognition, and image classification.
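As a concrete illustration of that classification step, here is a minimal Python sketch using OpenCV's DNN module. The model file, label list, and input size are placeholders for illustration only and would need to match whatever network is actually deployed.

```python
import cv2
import numpy as np

# Hypothetical files/labels -- substitute a real classification model and its label set.
MODEL_PATH = "classifier.onnx"
LABELS = ["cat", "dog", "person", "car"]


def classify(image_path):
    """Run a single image through a pre-trained classification network."""
    net = cv2.dnn.readNetFromONNX(MODEL_PATH)
    image = cv2.imread(image_path)

    # Resize and normalize the frame into the tensor layout the network expects.
    blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255.0,
                                 size=(224, 224), swapRB=True)
    net.setInput(blob)
    scores = net.forward()

    # The highest-scoring class is the model's best guess at what the camera sees.
    return LABELS[int(np.argmax(scores))]


if __name__ == "__main__":
    print(classify("frame.jpg"))
```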

A system has to be trained by feeding it a large dataset of images containing as many variations of the target object as possible. It's no surprise, then, that computer systems meant for vision AI need to pack some serious heat.
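For a sense of what that training step looks like in practice, below is a minimal sketch of an image-classifier training loop in Python with PyTorch and torchvision. The dataset directory, backbone choice, and hyperparameters are all assumptions for illustration, not anything specific to Kria.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical dataset directory: one sub-folder of example images per target object.
DATA_DIR = "training_images/"

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),  # add variation so the model generalizes
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder(DATA_DIR, transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a small off-the-shelf backbone and retrain the final layer
# for the classes found in the dataset folders.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```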

The Kria K26 packs this serious heat: it is based on the Zynq UltraScale+ MPSoC architecture, with a quad-core Arm Cortex-A53 processor, 256K system logic cells, 1.4 TOPS of AI processing performance, a 4K60 H.264/H.265 video codec, and 4GB of RAM.

For external interfaces, the K26 is equipped with dual 240-pin connectors providing 245 I/O, 15 camera interfaces (a mix of MIPI, sub-LVDS, and SLVS-EC), 1Gb to 40Gb Ethernet via four 10G interfaces, and four USB interfaces (2.0 and 3.0). All of this is packed onto a PCB roughly the size of a credit card.

The Kria K26 is approximately the size of a credit card. (📷: Xilinx)

As I mentioned before, the brain of a vision AI application is the neural network it uses to mimic the human brain for object identification. Developing a neural network isn't for the faint of heart and takes a certain level of expertise. This is where Xilinx really shattered expectations with their announcement today.

Xilinx is rolling out the first-ever embedded app store for edge computing applications, with a solid offering of the neural network models most commonly deployed in vision AI solutions. Accelerated applications available in Xilinx's new App Store include a smart camera model, a defect detection model, various AI box models, a natural language processing smart vision model, face and sound recognition models, a touch screen model, a virtual zone restriction model, and an HDR image signal processing model.

This idea of an embedded app store feels like the most natural evolution in the FPGA development world. The offerings at launch come not only from Xilinx, but also from partner companies. This is a huge leap forward: previously, finding an AI model meant starting with a broad Google search and hoping to eventually land on a company's or developer's page offering a model suited to the application at hand. A centralized catalog of production-ready AI models from various sources will drastically cut down on the overall development cycle and make a user's life much easier.

The current offering of accelerated apps available in Xilinx's App Store. (📷: Xilinx)

The Kria SOM development architecture allows a user to simply drop in and dynamically swap out the AI model being used by the application via the Vitis AI development environment. The AI model can either be one from Xilinx's App Store or a user's own custom model.

The AI model runs on the Kria SOM's Deep Learning Processing Unit (DPU), which appears to be an optimized IP block that, I would guess, runs in the programmable logic and is controlled by the embedded application running on the Arm Cortex cores of the Zynq MPSoC.
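To make that division of labor concrete, here is a hedged Python sketch of how application code on the Arm cores might hand a compiled model to the DPU through the Vitis AI runtime (VART) bindings. The .xmodel file name is an assumption, and the buffer data types depend on how the model was actually compiled and quantized.

```python
import numpy as np
import vart  # Vitis AI runtime Python bindings
import xir

# Hypothetical compiled model file produced by the Vitis AI toolchain.
XMODEL_PATH = "smartcam.xmodel"

# Parse the compiled graph and pick out the subgraph mapped to the DPU.
graph = xir.Graph.deserialize(XMODEL_PATH)
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(s for s in subgraphs
                    if s.has_attr("device") and s.get_attr("device") == "DPU")

# The runner hands inference work from the Arm application code to the DPU.
runner = vart.Runner.create_runner(dpu_subgraph, "run")
input_tensor = runner.get_input_tensors()[0]
output_tensor = runner.get_output_tensors()[0]

# Dummy input in the shape the model expects; a real app would feed camera frames.
# The dtype here is an assumption and depends on the model's quantization.
input_data = np.zeros(tuple(input_tensor.dims), dtype=np.float32)
output_data = np.zeros(tuple(output_tensor.dims), dtype=np.float32)

job_id = runner.execute_async([input_data], [output_data])
runner.wait(job_id)
print("DPU output shape:", output_data.shape)
```

Swapping in a different accelerated application would then, in principle, just mean pointing the runner at a different compiled .xmodel.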

The Kria SOM accelerated application architecture overview.

The Kria KV260 Vision AI Starter Kit pairs the Kria K26 SOM with a baseboard equipped with 8 camera interfaces, 3 MIPI sensor interfaces, a built-in ISP component, an HDMI output, and a DisplayPort output. The baseboard also contains a 1Gb Ethernet port, four USB ports, and a Pmod connector for access to the Pmod ecosystem. Xilinx currently has this development kit priced at $199, sealing its position as a formidable competitor in the market. For comparison, the Zynq UltraScale+-based Ultra96 development board, which offers only a subset of the peripherals of the K26 SOM plus baseboard, currently sits at the $249 price point.

The Kria KV260 Vision AI Starter Kit

To get users up and running as fast as possible, Xilinx has packaged the Kria KV260 Vision AI Starter Kit with a pre-programmed SD card whose image provides a command-line interface on the K26, so it should take a user less than an hour to get going without ever launching Vivado or Vitis.

A user can simply connect a camera, keyboard, power cable, and monitor to the baseboard and power on the system. After booting, the user selects one of the pre-loaded accelerated applications, which loads the corresponding AI model into the DPU of the K26, and then runs the accelerated application.
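As a rough sketch of that flow, and assuming the pre-built image exposes an app-management utility along the lines of xmutil (the command and application names below are assumptions for illustration), the steps could be scripted from Python like this:

```python
import subprocess

def run(cmd):
    """Run a shell command on the K26 and return its output."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# 1. List the accelerated applications pre-loaded on the SD card image.
print(run(["xmutil", "listapps"]))

# 2. Load one of them, which programs the matching AI model into the DPU.
#    The application name here is an assumption.
run(["sudo", "xmutil", "loadapp", "kv260-smartcam"])

# 3. Launch the accelerated application itself (command is application-specific).
```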

This kit, with its low price point, is a great fit for a broad range of users, from hobbyist makers all the way up to large corporations. Coupled with access to a centralized catalog of AI models via Xilinx's embedded app store, it will be very exciting to see what hobbyists and professionals will be able to accomplish with it.

whitney-knitter

All thoughts/opinions are my own and do not reflect those of any company/entity I currently/previously associate with.
