Google I/O may have been canceled this year, but that did not stop Google engineers from completing the interactive installation they call Pixelopolis. Meant to be on display at the conference, Pixelopolis demonstrates miniature self-driving cars that cruise around a tiny model city.
Users interact with Pixelopolis via a phone app that lets them choose a destination for the car to drive to. The app then shows the user what the car “sees” via a stream from its camera, annotated with the objects it detects. The cars have been trained to stay in their lanes, avoid collisions, and understand traffic signs.
You may have guessed that the cars in Pixelopolis are powered by GPU workstations running as network edge devices. That would be a good guess, but incorrect. Each car is equipped with a Pixel 4 smartphone running TensorFlow Lite, accelerated by the Pixel Neural Core. All computation and sensing happens on-device: the phone's camera provides vision, and a USB-C connection controls the motors and other electronic components.
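To get a feel for what "all computation on-device" means in practice, here is a minimal sketch of a per-frame sense-infer-actuate loop. Every name in it is a hypothetical illustration, not Pixelopolis source code, and the model is stubbed out (a real build would run a TensorFlow Lite interpreter delegated to the Neural Core) so the control flow itself is runnable.

```python
# Hypothetical per-frame control loop for a self-driving toy car.
# All names and thresholds here are illustrative assumptions; the actual
# Pixelopolis code and model are not reproduced. The inference step is
# stubbed so the structure of the loop can run without TensorFlow Lite.

from dataclasses import dataclass


@dataclass
class MotorCommand:
    steering: float  # -1.0 (full left) .. 1.0 (full right)
    throttle: float  # 0.0 (stopped) .. 1.0 (full speed)


def run_model(frame):
    """Stub standing in for TFLite inference on the Neural Core.

    Returns (steering, obstacle_ahead). Here we just pretend that a
    bright frame means another car is in view.
    """
    brightness = sum(frame) / len(frame)
    return 0.0, brightness > 0.8


def control_step(frame) -> MotorCommand:
    """One iteration: camera frame in, motor command out over USB-C."""
    steering, obstacle = run_model(frame)
    # Brake if an obstacle is detected; otherwise cruise and steer.
    throttle = 0.0 if obstacle else 0.5
    return MotorCommand(steering=steering, throttle=throttle)
```

The point of the sketch is the shape of the loop: nothing leaves the phone; each camera frame is turned directly into a motor command.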
To keep the car in its lane, the engineers chose a convolutional neural network (CNN) combined with a long short-term memory (LSTM) network. Rather than acting on only the latest image capture, the LSTM incorporates information from prior frames, which improves the accuracy of the prediction. Based on that time series of images, the model determines whether the car should turn, and in which direction.
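Feeding an LSTM means handing the model a sequence of recent frames rather than a single capture. A minimal sketch of that bookkeeping, assuming a rolling window (the sequence length and frame format here are assumptions, since the real model's input shape isn't given):

```python
# Hedged sketch: maintaining the time series of camera frames that a
# CNN + LSTM lane-keeping model would consume. SEQ_LEN is an assumed
# value; the actual Pixelopolis model input is not public here.

from collections import deque

SEQ_LEN = 5  # assumed number of past frames the LSTM sees at once


class FrameHistory:
    """Rolling window of the most recent camera frames."""

    def __init__(self, seq_len=SEQ_LEN):
        self.frames = deque(maxlen=seq_len)

    def push(self, frame):
        # Oldest frame falls off automatically once the window is full.
        self.frames.append(frame)

    def ready(self):
        # Only run inference once a full sequence is available.
        return len(self.frames) == self.frames.maxlen

    def sequence(self):
        # Oldest-to-newest, the order an LSTM expects its timesteps in.
        return list(self.frames)
```

The design choice this illustrates: the CNN can score each frame independently, but the LSTM's turn/no-turn decision only makes sense over the whole window, so the buffer, not any single image, is the model's real input.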
A MobileNet CNN architecture model (ssd_mobilenet_edgetpu, specifically) was chosen to handle object detection. This model serves two purposes. First, it detects traffic signs, which the cars use to determine where they are in the city. Second, it detects other cars so that the Pixelopolans are not playing bumper cars with each other. The model processes an image in 6.6 milliseconds, which is not too shabby for a smartphone.
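An SSD-style detector emits a list of scored, labeled boxes per frame, and it is the downstream logic that splits those into the two purposes above. A hedged sketch of that split follows; the label names and confidence threshold are invented for illustration, as the real label map for the Pixelopolis model isn't reproduced here.

```python
# Illustrative consumer of SSD-style detector output: a list of
# (label, score, box) tuples per frame. The labels, threshold, and
# function name are assumptions, not the actual Pixelopolis code.

SCORE_THRESHOLD = 0.5                                  # assumed cutoff
SIGN_LABELS = {"sign_station_1", "sign_station_2"}     # hypothetical labels
CAR_LABEL = "car"


def interpret_detections(detections):
    """Split confident detections into localization hints and a brake flag.

    detections: iterable of (label, score, box) tuples, as an SSD head
    would yield after box decoding.
    Returns (signs_seen, car_ahead).
    """
    signs_seen = []
    car_ahead = False
    for label, score, box in detections:
        if score < SCORE_THRESHOLD:
            continue  # drop low-confidence boxes
        if label in SIGN_LABELS:
            # Traffic signs double as landmarks for locating the car.
            signs_seen.append((label, box))
        elif label == CAR_LABEL:
            # Another car in view: avoid playing bumper cars.
            car_ahead = True
    return signs_seen, car_ahead
```

One detector, two jobs: the same inference pass both localizes the car (via signs-as-landmarks) and triggers collision avoidance.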
Compared to the real world, Pixelopolis presents a highly constrained problem. The possible routes a car may take are limited, the set of traffic signs is well defined, and differing weather and lighting conditions need not be considered, to name a few niceties of such a toy environment. In short, there are not many surprises that a car of Pixelopolis will ever need to negotiate. A real-world self-driving scenario is far more complex, so your smartphone may not be taking the wheel of your car just yet. Nevertheless, it is still highly impressive that we can now carry a device in our pockets that is capable of autonomously navigating a miniature car around a model city.