Object detection is a computer vision technique that localizes and classifies objects in images or video. It draws a bounding box around each object of interest in the image and assigns it a class label. Common applications of object detection include face recognition and face mask detection. In this project, we are going to build a face mask detector that determines whether a person is wearing a mask or not. During the COVID-19 pandemic, automating this detection task with technology avoids having to check for violators manually.
In today’s tutorial, we will learn how to build our own face mask detection model using the Edge Impulse cloud platform, deploy the trained model to an NVIDIA Jetson board, and connect a CSI camera sensor to capture images in real time. The tutorial works with either your own image dataset or a labeled public dataset from Kaggle. The image dataset has two classes: with_mask and without_mask.
By the end of this tutorial, you will be able to:
- Collect a good dataset for real-time detection using the Edge Impulse platform
- Label the data
- Train a face mask detection model using Edge Impulse
- Deploy the object detection model to an NVIDIA Jetson board
We will require the following for our project:
Hardware Required:
- NVIDIA Jetson board (Here we’ll be using Xavier NX Developer Kit)
- Laptop or standalone PC
- For live camera demonstrations, a camera like the Raspberry Pi Camera Module is required. Here we’ll be using the Arducam Complete High Quality Camera Bundle.
Software Required:
- Edge Impulse account.
- Some experience with TinyML and deep learning is helpful but not required.
- To run the NVIDIA Jetson board headless (without a monitor), set up either SSH access or an RDP connection from your laptop.
- Familiarity with the Linux command line, a shell like bash, and an editor like nano.
Here, I will be using the NVIDIA Jetson Xavier NX board. Compared to the Jetson Nano, the Xavier NX is anywhere from two to seven times faster, depending on the application.
Edge Impulse is a cloud-based platform. With it, users can train AI/ML models without deep knowledge of programming or AI/ML concepts.
To train a machine learning model with Edge Impulse, create an Edge Impulse account, verify your account and then start a new project.
Before starting, it is a good idea to update everything. You can do that by entering the commands below.
sudo apt-get update
sudo apt-get upgrade
Also, before we begin, connect your camera module to the CSI port on your NVIDIA Jetson board. Then, to use Edge Impulse on the Jetson board, you first have to install Edge Impulse and its dependencies.
From the terminal, run:
wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/jetson.sh | bash
You should get a response that looks like the one below.
+ edge-impulse-linux@1.3.3
added 347 packages from 416 contributors in 68.138
Now, use the command below to run Edge Impulse:
edge-impulse-linux
You will be asked to log in to your Edge Impulse account. You’ll then be asked to choose a project, and finally to select a microphone and camera to connect to the project.
Edge Impulse Linux client v1.3.3
? What is your user name or e-mail address (edgeimpulse.com)? 
com
? What is your password? [hidden]
? Select a microphone (or run this command with --disable-microphone to skip selection) jetson-xaviernx - jetson-xaviernx-ape
[SER] Using microphone hw:1,0
[SER] Using camera CSI camera starting...
[SER] Connected to camera
[WS ] Connecting to wss://remote-mgmt.edgeimpulse.com
[WS ] Connected to wss://remote-mgmt.edgeimpulse.com
? What name do you want to give this device? Jetson
[WS ] Device "Jetson" is now connected to project "shakhizat-project-1"
[WS ] Go to https://studio.edgeimpulse.com/studio/45666/acquisition/training to build your machine learning model!
Now, you have successfully installed Edge Impulse on your NVIDIA Jetson board.
If everything went correctly, you should see the following in the Devices section of Edge Impulse:
For this face mask detection project, we should collect an image dataset that mimics the real situation, with enough images for each of our classes. You can collect the samples using a mobile phone or the NVIDIA Jetson board, or you can upload images to your Edge Impulse account.
To load the samples, click on the Data acquisition section of Edge Impulse.
Then, click on the Let's collect some data button.
Since we want both training and test sets from the image data, we should split it 80/20.
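Edge Impulse can perform this split for you when you upload data, but if you want to split a local image folder yourself before uploading, a minimal sketch might look like the following (the filenames here are made up for illustration):

```python
import random

def train_test_split(filenames, test_ratio=0.2, seed=42):
    """Shuffle filenames reproducibly and split them into train/test lists."""
    files = sorted(filenames)           # deterministic starting order
    random.Random(seed).shuffle(files)  # seeded shuffle for reproducibility
    n_test = int(len(files) * test_ratio)
    return files[n_test:], files[:n_test]

# Example: 10 images -> 8 for training, 2 for testing
train, test = train_test_split([f"img_{i}.jpg" for i in range(10)])
print(len(train), len(test))  # 8 2
```

Shuffling before splitting matters: if the with_mask and without_mask images are grouped together on disk, an unshuffled split would put one class mostly in the test set.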
For face mask detection, we need bounding boxes for our classes. Therefore, we need to annotate the images using the Edge Impulse annotation tool.
This process takes time if you have a lot of images and classes.
Ideally, it is recommended to take the face mask dataset from Kaggle. This dataset is already annotated, with bounding boxes in the Pascal VOC format. Pascal VOC uses an XML file per image, unlike COCO, which uses a single JSON file; both have become common interchange formats for object detection labels. However, Edge Impulse uses its own JSON format. You can use a Python script to convert the annotations from Pascal VOC to the Edge Impulse format.
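The conversion itself is mostly bookkeeping: Pascal VOC stores box corners (xmin/ymin/xmax/ymax) per image, while Edge Impulse's label file keys boxes by filename with x/y/width/height fields. Here is a rough sketch of that conversion; the exact Edge Impulse output structure (`bounding-box-labels` JSON) is an assumption based on its uploader format, so check it against the official docs before relying on it:

```python
import json
import xml.etree.ElementTree as ET

def voc_to_edge_impulse(voc_xml_strings):
    """Convert Pascal VOC annotation XMLs into an Edge Impulse-style
    bounding-box label object (structure assumed, keyed by filename)."""
    boxes = {}
    for xml_str in voc_xml_strings:
        root = ET.fromstring(xml_str)
        filename = root.findtext("filename")
        entries = []
        for obj in root.findall("object"):
            bb = obj.find("bndbox")
            xmin = int(float(bb.findtext("xmin")))
            ymin = int(float(bb.findtext("ymin")))
            xmax = int(float(bb.findtext("xmax")))
            ymax = int(float(bb.findtext("ymax")))
            entries.append({
                "label": obj.findtext("name"),
                "x": xmin,
                "y": ymin,
                # VOC stores opposite corners; Edge Impulse wants width/height
                "width": xmax - xmin,
                "height": ymax - ymin,
            })
        boxes[filename] = entries
    return {"version": 1, "type": "bounding-box-labels", "boundingBoxes": boxes}

# Hypothetical single-object VOC annotation for demonstration
sample = """<annotation>
  <filename>img_0.jpg</filename>
  <object><name>with_mask</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>60</xmax><ymax>90</ymax></bndbox>
  </object>
</annotation>"""
labels = voc_to_edge_impulse([sample])
print(json.dumps(labels, indent=2))
```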
The complete project, implemented by Peter Ing, can be found here. Check it out.
Training the Model
Now that we have collected the face mask samples, we can pass them to the neural network and start the training process to automatically detect whether a person is wearing a mask or not. So open up the Impulse design section of Edge Impulse.
As our dataset is ready, now we will create an impulse for our data. For that, go to the Create impulse section.
Click on Add a processing block.
Select the Image option.
Then, click Add a learning block and select the Object Detection option.
Rename it Face Mask Detection and click the Save Impulse button.
Next, go to the Images section under the Impulse design menu item, and then click on the Generate Features tab, and then hit the Generate features button.
After that, click on the Face Mask Detection section under the Impulse design menu item, and hit the Start training button at the bottom of the page. Here, we use the default MobileNetV2. You can try a different training model if you want.
Training output:
Creating job... OK (ID: 2325943)
Scheduling job in cluster...
Job started
Splitting data into training and validation sets...
Splitting data into training and validation sets OK
Training model...
Training on 129 inputs, validating on 33 inputs
Building model and restoring weights for fine-tuning...
Finished restoring weights
Fine tuning...
Attached to job 2325943...
Epoch 1 of 50, loss=1.0996249, val_loss=1.2549742
Epoch 2 of 50, loss=0.5730881, val_loss=1.0309869
Epoch 3 of 50, loss=0.37260413, val_loss=0.88406265
Epoch 4 of 50, loss=0.2732575, val_loss=0.8300728
Epoch 5 of 50, loss=0.21881193, val_loss=0.7985368
Epoch 6 of 50, loss=0.1842412, val_loss=0.77767074
Epoch 7 of 50, loss=0.15971155, val_loss=0.75798315
Epoch 8 of 50, loss=0.141622, val_loss=0.74178046
Epoch 9 of 50, loss=0.12764679, val_loss=0.7201556
Epoch 10 of 50, loss=0.124089435, val_loss=0.7229701
Epoch 11 of 50, loss=0.13733643, val_loss=0.6930027
Epoch 12 of 50, loss=0.10552671, val_loss=0.68260795
Epoch 13 of 50, loss=0.09776387, val_loss=0.6571001
Epoch 14 of 50, loss=0.09269215, val_loss=0.65096503
Epoch 15 of 50, loss=0.08853194, val_loss=0.6398335
Epoch 16 of 50, loss=0.08513473, val_loss=0.6339971
Epoch 17 of 50, loss=0.08155578, val_loss=0.6237093
Epoch 18 of 50, loss=0.07958686, val_loss=0.62563324
Epoch 19 of 50, loss=0.08469187, val_loss=0.60824037
Epoch 20 of 50, loss=0.106107965, val_loss=0.65657234
Epoch 21 of 50, loss=0.08075548, val_loss=0.60204226
Epoch 22 of 50, loss=0.06780515, val_loss=0.61008215
Epoch 23 of 50, loss=0.07962225, val_loss=0.6042041
Epoch 24 of 50, loss=0.07898002, val_loss=0.62557745
Epoch 25 of 50, loss=0.07712146, val_loss=0.6038083
Epoch 26 of 50, loss=0.059002914, val_loss=0.604347
Epoch 27 of 50, loss=0.060714073, val_loss=0.598505
Epoch 28 of 50, loss=0.056586243, val_loss=0.6049032
Epoch 29 of 50, loss=0.06364094, val_loss=0.5943693
Epoch 30 of 50, loss=0.0693655, val_loss=0.6258873
Epoch 31 of 50, loss=0.06930919, val_loss=0.5844879
Epoch 32 of 50, loss=0.055322483, val_loss=0.60721684
Epoch 33 of 50, loss=0.053181175, val_loss=0.5940475
Epoch 34 of 50, loss=0.071890086, val_loss=0.6106543
Epoch 35 of 50, loss=0.09753211, val_loss=0.63664484
Epoch 36 of 50, loss=0.063474864, val_loss=0.5776911
Epoch 37 of 50, loss=0.056974597, val_loss=0.5894003
Epoch 38 of 50, loss=0.05551439, val_loss=0.5942682
Epoch 39 of 50, loss=0.07409478, val_loss=0.5911636
Epoch 40 of 50, loss=0.0580862, val_loss=0.61330724
Epoch 41 of 50, loss=0.057891976, val_loss=0.58406746
Epoch 42 of 50, loss=0.052040614, val_loss=0.61015534
Epoch 43 of 50, loss=0.051715873, val_loss=0.5831931
Epoch 44 of 50, loss=0.052400388, val_loss=0.6184015
Epoch 45 of 50, loss=0.057132762, val_loss=0.5842898
Epoch 46 of 50, loss=0.061072033, val_loss=0.6269493
Epoch 47 of 50, loss=0.0581602, val_loss=0.6031094
Epoch 48 of 50, loss=0.051550377, val_loss=0.6048402
Epoch 49 of 50, loss=0.05313967, val_loss=0.6296155
Epoch 50 of 50, loss=0.050658334, val_loss=0.6037806
Finished fine tuning
Checkpoint saved
Finished training
Creating SavedModel for conversion...
Attached to job 2325943...
Converting TensorFlow Lite float32 model...
Converting TensorFlow Lite int8 quantized model with int8 input and float32 output...
Calculating performance metrics...
Calculating inferencing time...
Attached to job 2325943...
Job completed
Once the training process is complete, we can deploy the trained Edge Impulse object detection model to the NVIDIA Jetson board. For that, go to the terminal window and enter the command below:
edge-impulse-linux-runner
Output:
Edge Impulse Linux runner v1.3.3
[RUN] Already have model /home/jetson/.ei-linux-runner/models/45666/v8/model.eim not downloading...
[RUN] Starting the image classifier for Shakhizat Nurgaliyev / shakhizat-project-1 (v8)
[RUN] Parameters image size 320x320 px (3 channels) classes [ 'with_mask', 'without_mask' ]
[RUN] Using camera CSI camera starting...
[RUN] Connected to camera
Want to see a feed of the camera and live classification in your browser?
Go to http://##############:4912
Launch the video stream in your browser using the above link with your local IP address and port 4912.
The result visualization is shown below:
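Beyond the browser preview, you can also consume the detections programmatically, for example to count people without masks in each frame. The helper below filters one frame's detections by confidence; the result structure (a `bounding_boxes` list with `label` and `value` fields) is an assumption based on the Edge Impulse Linux SDK's object detection output, so verify it against what your runner actually emits:

```python
def count_violations(result, min_confidence=0.5):
    """Count faces detected without a mask in one frame's result.

    `result` is assumed to mirror the Edge Impulse Linux SDK output:
    {"bounding_boxes": [{"label": ..., "value": confidence, ...}, ...]}
    """
    boxes = [b for b in result.get("bounding_boxes", [])
             if b["value"] >= min_confidence]
    return sum(1 for b in boxes if b["label"] == "without_mask")

# Hypothetical frame: one confident unmasked face, one masked face,
# and one low-confidence detection that should be filtered out
frame = {"bounding_boxes": [
    {"label": "without_mask", "value": 0.92, "x": 40, "y": 16, "width": 96, "height": 96},
    {"label": "with_mask", "value": 0.88, "x": 160, "y": 24, "width": 80, "height": 80},
    {"label": "without_mask", "value": 0.31, "x": 8, "y": 8, "width": 32, "height": 32},
]}
print(count_violations(frame))  # 1
```

The confidence threshold is worth tuning on your own data: too low and spurious boxes trigger false alarms, too high and partially occluded faces are missed.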
In this tutorial, you learned how to use Edge Impulse to build a dataset of images, build a machine learning model that detects face masks in those images, and deploy that model to an edge device like the NVIDIA Jetson board and test it in real time. In a nutshell, Edge Impulse is a great platform for non-coders to develop machine learning models. Highly recommended.