Once upon a time, I started a project where we wanted to understand the behaviour of tourists when they visit an attraction. Traditionally, a researcher would go to those places and manually take notes to understand what the visitors enjoyed the most, where they spent their time, etc. However, I am an electrical engineer, so I felt the need to automate the task. At first, I considered using "dumb" sensors, the ones that can only count or, at best, tell the direction people are moving, but that would limit the type of questions we could answer.
What I really wanted to have was a tool that would do the same job as the researcher manually annotating things. The researcher would accomplish the task using her/his eyes and the most complex organ in the human body: the almighty brain! We still don't have anything like a human brain (or even a fly brain...), but at least we have deep learning!
It became clear this project would need a sensor capable of using vision to make sense of the world. That could be solved with a camera and a deep-learning-based algorithm (for images that usually means a deep convolutional neural network) like an object detector. Still, not so long ago, the way this used to be done was by saving the images (or sending them somewhere) and processing them using a beefy, power-hungry GPU. However, saving (or transmitting) images would raise serious privacy problems. One way to avoid that would be to process everything "on the edge" in such a way that the result would not contain any personal information. For a normal person, a gaming laptop with a webcam would suffice for a proof of concept, but I was already thinking ahead as I remembered my previous experiences running demos using a laptop with lots of people around: tripping on cables, equipment falling to the ground, batteries going flat faster than expected, etc.
There were already quite a few smart cameras that let you run deep neural models on the edge (e.g. JeVois, OpenMV, OpenCV OAK, etc.), but putting together my own system would give me much more freedom and flexibility. Therefore, I decided to go with a Raspberry Pi Zero W, Raspberry Pi Camera V2 and Google Coral USB Accelerator. I will not lie here, one of the reasons for me to choose this particular set of tools was quite mundane: I had experience with them and I had them at home while we were in the middle of another lockdown. But the killer feature of this setup was its very low power consumption and small size, allowing me to run it for a whole day from an ordinary power bank and deploy it almost anywhere.
People ask me why I chose the Raspberry Pi Zero for this project considering it's not a powerful single-board computer. As I mentioned, I could have tried ready-made smart cameras, but the Raspberry Pi is one of the best-selling general-purpose computers ever (beating the Commodore 64, and those numbers are from 4 years ago!), therefore it has a huge community. This makes life much easier when you need to find a driver or figure out how to solve a problem. In addition to that, last year I released my work on a flexible, collision-resilient quadcopter that has as its brain... a Raspberry Pi Zero W! So, I'm quite familiar with the Zero and I hope the Raspberry Pi Foundation will soon come up with an upgrade (maybe adding an RP2040 to the RPI Zero PCB?).

## How to build your own

### Hardware
- Raspberry Pi Zero W
- Raspberry Pi Camera ( + RPI Zero flat cable)
- Coral USB Accelerator
- USB-C to micro USB cable (or adapter)
- Micro SDCard (an 8GB card is recommended if you want to use our image)
- 3D printed parts
The image available in this repo has lots of interesting stuff pre-installed. Its default user is `pi` and the password is `maplesyrup` (yup, I love maple syrup).

Another interesting thing available is the Jupyter Notebook server. It helps a lot while testing things. You can launch it by running `/home/pi/launch_jupyter.sh` (it's slow at times, but very much usable) and the password is again `maplesyrup` (your browser will complain that it's not a secure connection because the server uses a self-signed certificate).
- You will need to install at least `libedgetpu` and `tflite_runtime`. Be aware this `tflite_runtime` version was compiled for Python 3.7.
- In addition to the software above, you will need to install Picamera, Pillow...
- The USB connector is used by the Coral USB Accelerator, therefore it's easier to use the Maple-Syrup-Pi-Camera headless. A full example of a Wi-Fi configuration file is available here (you just need to modify it and copy it to the boot partition that appears when you connect the SDCard to your computer).
- The image comes with SSH enabled and you can learn more about how to connect using SSH here.
- If you are a Unix user, you can even mount the RPI filesystem on your computer and use VSCode remotely.
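If you prefer to write the Wi-Fi file from scratch, the standard `wpa_supplicant.conf` that Raspberry Pi OS picks up from the boot partition looks like the sketch below (the SSID, password and country code are placeholders you must replace):

```
# replace CA with your own two-letter country code
country=CA
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YOUR_NETWORK_NAME"
    psk="YOUR_PASSWORD"
}
```

Copy it as `wpa_supplicant.conf` to the boot partition; Raspberry Pi OS moves it into place on first boot. Once the camera joins your network, you can reach it over SSH as user `pi`.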
## Examples already installed in the SDCard image

- Automatic License Plate Recognition
- Face Mask Detection (Mask, No Mask, Poor Mask)
The RPI Zero W has a USB 2.0 connection with a theoretical 480Mbit/s (60MB/s) speed, but it will never get close to that because the RPI0 has a single-core ARMv6 CPU (no free lunch!). Therefore, the Google Coral USB Accelerator is very often limited by the USB bandwidth, or by the ability of the RPI0 to exchange data with it. This fact helps keep the average power consumption low, though. Models that need post-processing or use a custom op (the EdgeTPU compiler runs these ops on the host) will also suffer from the small USB bandwidth and the weak single-core ARMv6.
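The USB bottleneck is easy to put into rough numbers. The sketch below is a back-of-envelope estimate, not a measurement: the 50% effective-throughput figure and the 224x224 input size are assumptions (real throughput depends on the model and how busy the ARMv6 host is).

```python
# Back-of-envelope: time spent just moving one input frame over USB 2.0.
# EFFECTIVE_FRACTION is an assumption (protocol overhead plus the weak
# ARMv6 host), not a measured value.

USB2_MBIT_PER_S = 480            # USB 2.0 theoretical maximum
EFFECTIVE_FRACTION = 0.5         # assumed real-world fraction on the RPI0

effective_bytes_per_s = USB2_MBIT_PER_S / 8 * 1e6 * EFFECTIVE_FRACTION

frame_bytes = 224 * 224 * 3      # a typical MobileNet-class RGB input
transfer_ms = frame_bytes / effective_bytes_per_s * 1e3

print(f"~{transfer_ms:.1f} ms per frame just to ship the input tensor")
```

A few milliseconds per small frame may look harmless, but larger inputs, output tensors and host-side post-processing all compete for the same single core and the same USB link, so the overhead adds up quickly.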
## Power consumption

The RPI Zero W has no protection circuit on its 5V input, therefore it connects the USB power directly to the power supply. That means the Coral USB Accelerator is directly connected to the power supply, allowing it to draw as much current as the power supply and the impedance of the micro USB connector + PCB traces allow. According to the Google Coral USB Accelerator datasheet, the accelerator alone could draw up to 900mA (peak). The RPI Zero W has a typical power consumption of 150mA. In my experiments, the Maple-Syrup-Pi-Camera consumes around 160mA at 5V when idle (800mW).

### Examples of power consumption
Using a hobby-grade USB power meter (and a power bank that states it can deliver up to 2A at 5V):
- MultiPose shows peaks of 350mA at 5V (1.75W)
- Automatic License Plate Recognition shows peaks of 400mA at 5V (2W)
- Face Mask Detection (Mask, No Mask, Poor Mask) shows peaks of 420mA at 5V (2.1W)
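Those figures explain why a whole day on an ordinary power bank is realistic. A minimal sketch, assuming a hypothetical 10,000mAh bank, ignoring conversion losses, and pessimistically treating the worst peak above as a sustained draw:

```python
# Sketch: runtime estimate from a power bank at worst-case current draw.
# The 10,000 mAh capacity is a hypothetical example, and conversion
# losses are ignored, so treat the result as an upper-bound intuition.

bank_capacity_mah = 10_000       # hypothetical power bank capacity
peak_current_ma = 420            # Face Mask Detection peak, from above

hours = bank_capacity_mah / peak_current_ma
print(f"~{hours:.0f} hours even at sustained worst-case current")
```

In practice the peaks are short and the average draw sits closer to idle, so real runtime should land comfortably past a working day.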
## Acknowledgments

- 3D parts adapted from https://www.thingiverse.com/thing:2254307
- Instructions on how to create RPI images using docker
- Some models and project examples are from https://coral.ai/examples/ and https://github.com/google-coral/pycoral
- SDCard image adapted from the collision resilient quadcopter CogniFly.
- Life is just much easier thanks to Netron