It is a well-known fact that the pandemic has changed how people live. One of the most effective preventive measures is wearing a mask, so detecting whether people are wearing masks is a challenging research task that can help limit the transmission of SARS-CoV-2 between humans.
Recently, I've become interested in edge computing, so I bought various edge devices from M5Stack and started using them. This time, I developed a face mask recognition system using the AI camera of the M5Stack UnitV2 in conjunction with the M5Stack Core2 Development Kit. The UnitV2 camera monitors the entrance and detects whether a face mask is worn, and the system notifies you both audibly and visually by playing a sound and displaying an image on the M5Stack Core2.
This tutorial is suitable even for those working with edge devices and face mask recognition for the first time: no code needs to be written, you simply follow the procedure below using the drag-and-drop block coding of the UIFlow tool. M5Stack's graphical programming platform, UIFlow, makes it easy for everyone to get started creating IoT and TinyML projects!

Prerequisites
Before you get started with the tutorial, you will need the following:
- M5Stack Core2 ESP32 IoT Development Kit for AWS IoT EduKit
- M5Stack UnitV2 - The standalone AI Camera for Edge Computing (SSD202D) TinyML
- A computer with an internet connection and the ability to flash your M5Stack Core2. Here we’ll be using a laptop.
- Some experience with Python and Blockly is helpful but not required.
So, let's get started!

What is TinyML?
Let’s start by explaining what is TinyML.
TinyML is the area of machine learning concerning models suitable for low-powered and embedded devices like M5Stack UnitV2.
For more information, see this link.

About M5Stack UnitV2
First, I will briefly introduce the camera device M5Stack UnitV2. It is an AI-equipped camera device sold by M5Stack.
The UnitV2 from M5Stack is a standalone device built around the SigmaStar SSD202D (ARM Cortex-A7 dual-core 1.2GHz), with 128MB of embedded DDR3 memory, 512MB of NAND flash, a 1080P camera, 2.4G Wi-Fi, and a cooling fan.
The UnitV2 also integrates AI recognition applications developed by M5Stack (such as Face Recognition, Object Tracking, Color Tracker, Shape Detector, and Barcode Detector) to help users build their own AI applications.
You can use the pre-installed functions by accessing http://10.254.239.1/ with the UnitV2 connected to your computer.
In addition, M5Stack's AI model training service V-Training can be used to build custom recognition models, which I will introduce below.

About M5Stack Core2 ESP32 IoT Development Kit for AWS IoT EduKit
The core of the system is an ESP32 - a microcontroller produced by Espressif.
This product is the 2nd generation Core device of the M5Stack development kit series, which is a further improvement of the original generation Core function.
M5Stack offers a wide range of function expansion modules, such as sensors, that can be connected without soldering. This lets you start complex projects relatively easily, accelerating development and improving quality at the same time.
Click here to be directed to a page where you can order the M5Stack Core2 ESP32 IoT Development Kit for AWS IoT EduKit.

Download custom dataset from Kaggle
There's no machine learning without data. Hence, the first step involved collecting the training data. Any deep learning model would require a large volume of training data to give good results on inference.
We will use a face mask dataset from Kaggle. To use Kaggle resources, you need to log in to the Kaggle website. As a first step, download the dataset, which is available at the following link.
The classes are:
- With mask
- Without mask
Unzip the downloaded zip file.

V-Training Object Recognition Model Training Service
Next, let's actually create a model for face mask recognition using V-Training. The official website has more detailed tutorials; here is a general overview of the training process.
Step 1: V-Training Sign Up
Ok now let’s get started. Go to this link and create a free account in V-Training. If you already have an account in M5 forum, simply sign in with your credentials.
Step 2: Upload images from your dataset
The minimum number of images required for training is 30 per class. The overall size of the image training set must not exceed 200MB.
Let's see some example faces from our dataset.
To improve training results, the more samples, the better.
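Before uploading, it can save time to verify that the dataset meets the limits above. Here is a minimal Python sketch that counts the images per class and the total size; the one-folder-per-class layout and the helper name are my own assumptions, not part of V-Training:

```python
import os

# Assumed layout: one folder per class, e.g. dataset/with_mask, dataset/without_mask.
MIN_IMAGES_PER_CLASS = 30          # V-Training minimum per class
MAX_TOTAL_BYTES = 200 * 1024**2    # V-Training limit on the training set size

def check_dataset(root):
    """Return per-class image counts, total size in bytes, and whether
    the set satisfies the V-Training limits."""
    counts = {}
    total = 0
    for cls in sorted(os.listdir(root)):
        cls_dir = os.path.join(root, cls)
        if not os.path.isdir(cls_dir):
            continue
        files = [f for f in os.listdir(cls_dir)
                 if f.lower().endswith((".jpg", ".jpeg", ".png"))]
        counts[cls] = len(files)
        total += sum(os.path.getsize(os.path.join(cls_dir, f)) for f in files)
    ok = total <= MAX_TOTAL_BYTES and all(
        n >= MIN_IMAGES_PER_CLASS for n in counts.values())
    return counts, total, ok
```

Running `check_dataset("dataset")` before uploading tells you immediately if a class is short of images or the archive is too large.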
Step 3: Create labels
For example, two labels are created in this tutorial: With_mask and Without_mask.
Step 4: Data Labeling
This step is relatively tedious: you need to manually mark the target in each picture, since the data was collected without labels.
We draw a box around each object that we want the detector to see and label each box with the object class that we would like the detector to predict.
In this screenshot, I’m placing a rectangle to mark the face mask. Here, I labeled the rectangle as With_mask.
Step 5: Train your model
Choose Efficient Mode and click upload. Then wait for the training to complete and the model to be generated.
If the learning model is created successfully, you will receive a result like this below.
If model creation fails, the status will be shown as Failed together with the reason, so please correct the issue and try again. If training succeeds, you can check the model's accuracy and the loss curve in the training results.
The trained model can be downloaded as a compressed file from the website.
Step 6: Model deployment
Now that the model has been trained, it's time to deploy it to the M5Stack UnitV2. Since the UnitV2 ships with a built-in object recognition program, you only need to upload your trained model through the web interface.
After the UnitV2 is started, its AP hotspot (SSID: M5UV2_XXX, password: 12345678) is turned on by default, and you can establish a network connection with the UnitV2 through Wi-Fi.
The compressed model file does not need to be copied to a TF card: visit the domain name unitv2.py or the IP address 10.254.239.1, switch the function to Object Recognition, and click the add button to upload the model.
Here is how it actually works.
Repeat the test, for example while wearing a mask.
Note that it probably won't be very accurate because we used a very small training set!
At this point, we can use the UnitV2 as an AI camera and receive the target detection results from the serial port. It continuously outputs recognition results through the serial port (the HY2.0-4P interface at the bottom), with all identification content sent in JSON format over UART.
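Each line on the serial port is one JSON message describing what was detected. Here is a minimal Python sketch of how such a line can be parsed into a label; the field names (`num`, `obj`, `type`, `prob`) follow the UnitV2's object-recognition output as I observed it, so adjust them if your firmware version differs:

```python
import json

def best_detection(line):
    """Parse one JSON line from the UnitV2 serial stream and return
    (label, probability) for the highest-probability detection,
    or None if nothing was detected or the line is garbled."""
    try:
        msg = json.loads(line)
    except ValueError:
        return None                      # ignore partial/corrupted lines
    objs = msg.get("obj", [])
    if not objs:
        return None
    top = max(objs, key=lambda o: o.get("prob", 0.0))
    return top.get("type"), top.get("prob")

# On the Core2 under UIFlow/MicroPython the line would come from the UART
# (e.g. uart.readline()); here we parse a sample payload instead:
sample = '{"num":1,"obj":[{"prob":0.92,"x":48,"y":30,"w":120,"h":140,"type":"with_mask"}]}'
```

Wrapping the parse in a `try`/`except` matters in practice, because reading mid-message from the UART often yields a truncated JSON line.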
To put it simply, the UnitV2 performs the recognition and sends the result to the M5Stack via UART in JSON format.

M5Stack Core2 setup
In order to use M5Stack Core2, you need to set it up first.
Unlike some other boards, UIFlow functionality isn't flashed onto the M5Stack Core2 by default, so flashing the UIFlow firmware is the first thing you need to do to start programming your board.
- Download and install M5Burner, a tool for writing firmware appropriate to the M5Stack module you are working with.
- Open M5Burner.
- Download the latest version of the firmware for M5Stack Core2.
- Connect the M5Stack to your PC, specify the COM port, and write the firmware with the BURN button.
- It is recommended to also use M5Burner to configure the Wi-Fi connection.
- When the M5Stack restarts and connects to Wi-Fi, an API key will be displayed and the network connection is successful!
Now you can start programming with UIFlow!

Writing a Blockly program of Face Mask Detection using UIFlow
UIFlow is a web-based visual programming environment developed for the M5Stack series, based on Google's open-source visual programming library Blockly. You can switch between block programming and Python coding.
Copy the following blocks into the Blockly editing area, then click Run in the top-right corner to execute the code. Drag and drop the UI elements as well.
The program below simply displays the predicted class on the screen of the M5Stack Core2 and also plays a sound depending on whether it detects a face mask.
Connect the M5Stack UnitV2 to the M5Stack Core2 using a Grove cable.
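On the device itself this is built from UIFlow blocks, but the decision logic the blocks implement can be sketched in plain Python. The label strings match the V-Training classes; the display messages and beep counts are illustrative, and the actual screen and speaker calls are done with UIFlow blocks on the Core2:

```python
def feedback(label):
    """Map a detection label from the UnitV2 to the screen text and the
    number of beeps the Core2 should play. Labels other than the two
    trained classes fall through to a waiting message."""
    if label == "with_mask":
        return ("Mask detected - thank you!", 1)   # one short beep
    elif label == "without_mask":
        return ("Please wear a mask!", 3)          # three warning beeps
    return ("Waiting for detection...", 0)
```

In the Blockly program, the equivalent of this function is an if/else-if chain fed by the label parsed from the UART JSON, with a label block updating the screen and a speaker block playing the tone.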
An example video of the face mask detection system in action is shown below.
For details, please refer to the code in the attachment.
And that's it!

Conclusion
Today, we have developed a tool to detect face masks using a camera device called the M5Stack UnitV2 together with the well-known M5Stack Core2. The great part of the UnitV2 is that it is not only thumb-sized but also performs low-cost, high-performance image processing. Compared with the Nvidia Jetson Nano, the UnitV2 is much more convenient and simple. We also used V-Training to make training easier and faster. In the future, I would like to try to improve the detection accuracy and upload data to cloud IoT services.
I hope this has helped you get started using TinyML for embedded devices!
Please take care of yourself and each other to help our hardware community stay safe! Thank you for reading my blog post.

References