Earlier this morning, during his keynote at the Google Next conference in San Francisco, Injong Rhee, the VP of IoT, Google Cloud, announced two new AIY Project boards—the AIY Projects Edge TPU Dev Board, and the Edge TPU Accelerator—both based around Google’s new purpose-built Edge TPU.
The new Edge TPU boards aren’t Google’s first experiments with “do-it-yourself” artificial intelligence. Just over a year ago the original AIY Projects Voice Kit came bundled free with issue 57 of the MagPi. A collaboration between the Google AIY Projects team and the Raspberry Pi Foundation, it allowed you to add voice interaction to your Raspberry Pi projects, and over the following six months I spent a lot of time with the kit—getting hands on, putting it inside a retro rotary phone, and finally building a voice-controlled magic mirror.
But while it was possible to run TensorFlow locally on your Raspberry Pi with the kit, the Pi really struggled with the workload—the Voice Kit was intended to be used with the Google Cloud. However, the launch of the Vision Kit, the next AIY Projects kit, at the tail end of last year changed things.
Machine learning development is done in two stages. An algorithm is initially trained on a large set of sample data on a fast, powerful machine or cluster, then the trained network is deployed into an application that needs to interpret real data. This deployment, or “inference,” stage is where the Vision Kit came in, with the ability to run these trained networks “at the edge,” nearer the data, allowing us to put the smarts on the smart device, rather than in the cloud.
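The split between the two stages can be illustrated with a toy example. This is purely a conceptual sketch in plain NumPy—not Edge TPU or TensorFlow code—where an ordinary least-squares fit stands in for heavyweight cloud training, and the exported weights are the only artifact the “edge” device needs to run inference:

```python
import numpy as np

# --- "Cloud" stage: train a tiny linear model on a large sample set ---
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))             # training features
true_w = np.array([2.0, -1.0, 0.5])        # ground-truth weights
y = X @ true_w + rng.normal(scale=0.01, size=1000)

# A least-squares fit stands in for compute-heavy training
w, *_ = np.linalg.lstsq(X, y, rcond=None)
np.save("model.npy", w)                    # "export" the trained model

# --- "Edge" stage: load the small artifact and run inference only ---
w_deployed = np.load("model.npy")
sample = np.array([1.0, 1.0, 1.0])         # new data arriving at the edge
prediction = float(sample @ w_deployed)    # cheap: a single dot product
print(round(prediction, 2))                # ≈ 2.0 - 1.0 + 0.5 = 1.5
```

The point of the split is visible in the last three lines: inference needs only the saved weights and a dot product, which is exactly the kind of fixed, repetitive arithmetic a purpose-built accelerator can run locally.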
Built around the Intel Movidius chip, it was designed to run all the machine learning locally—on the device—rather than talk to the cloud. While training still happened in the cloud, inference happened locally.
While an initial limited run of kits was made available before Christmas, there were a few teething troubles, and it wasn’t until April this year that the Vision Kit was made generally available—alongside an updated Voice Kit. The new kits immediately got integrated into projects ranging from face-tracking plushie dinosaurs to wearable robotic owls.
The two new boards announced today are part of what Google is calling “Cloud-to-Edge Machine Learning,” a company-wide initiative around the Internet of Things. The boards form the hardware component of Google’s new Cloud IoT Edge service, allowing users to run inferences of pre-trained TensorFlow Lite models locally on their own hardware.
Both are built around Google’s new Edge TPU. Similarly to the Intel Movidius chip the company previously used in the Vision Kit, the Edge TPU is a purpose-built ASIC designed to run machine learning inference locally, with training of the models carried out beforehand in the cloud using Google’s Cloud TPU.
However, initial appearances suggest that the new Edge TPU may considerably outperform the Intel hardware, with the Edge TPU supposedly capable of concurrently executing multiple machine learning models in real time on high-resolution video, all at 30 frames per second.
Initially the Edge TPU is going to be made available in two form factors.
The first, the Edge TPU Dev Board, is split into two parts: a base board offering a number of connectors—including the familiar Raspberry Pi 40-pin GPIO connector—and a detachable daughter board built around an NXP i.MX 8M, which has a quad-core Cortex-A53 processor and an additional Cortex-M4 core, along with 1GB of RAM, 8GB of eMMC flash memory, and the Edge TPU itself.
“The AIY Projects Edge TPU Dev Board is an all-in-one development board that allows you to prototype embedded systems that demand fast ML inferencing. The base board provides all the peripheral connections you need to effectively prototype your device — including a 40-pin GPIO header to integrate with various electrical components. The board also features a removable System-on-module (SOM) daughter board can be directly integrated into your own hardware once you’re ready to scale.”
The base board offers a surprising number of connectors, including two USB Type-C connectors and a USB 3.0 Type-A host connector. Alongside these are a micro-SD card slot, a 3.5mm audio jack, two PDM MEMS microphones—similar to the updated Voice Kit—and a full-sized HDMI connector, as well as both a 39-pin FFC connector for a MIPI-DSI display and a 24-pin FFC connector for a MIPI-CSI2 camera. Out of the box the board will run either Debian Linux or Android Things, and supports TensorFlow Lite.
The Edge TPU is also going to be made available as a USB device, similar to the Movidius Neural Compute Stick released last year. The Edge TPU Accelerator allows you to add an Edge TPU co-processor to existing systems.
“The AIY Projects Edge TPU Accelerator is a neural network coprocessor for your existing system. This small USB-C stick can connect to any Linux-based system to perform accelerated ML inferencing. The casing includes mounting holes for attachment to host boards such as a Raspberry Pi Zero or your custom device.”
Unlike the Movidius stick, the new Edge TPU-based device has a USB Type-C connector. Fortunately, if you’re thinking about using the stick with a single-board computer like the Raspberry Pi, the new-style Type-C connector should be backwards compatible with Raspberry Pi boards, albeit at USB 2.0 speeds. The Edge TPU Accelerator will be supported on Debian Linux and Android Things, and like the stand-alone dev board will support TensorFlow Lite.
The two new AIY Projects Edge TPU boards should be available later in the year. While there’s no indication yet of price or distribution channel, you can sign up to be notified when they’re available.
But if you don’t want to wait until the fall, and you think you have a good use case for the new services, you could always request early access to the alpha release of Google’s new Cloud IoT Edge service, and get your hands on pre-production Edge TPU hardware.