Signalling left and right turns by hand is awkward for cyclists and can be risky in some situations, since it means riding one-handed. I have developed a prototype that can be worn on any cycle helmet, making it portable and easy to use.
Please watch the working demo in the link below.
To build the model, we need to follow the steps below in Edge Impulse.
To begin, we will create a dataset and label the samples as "Left", "Right" (each spoken over background road noise), and "Road Noise" (road noise alone) as separate classes. Subsequently, we will train the model in Edge Impulse. Based on the validation outcomes, it may be necessary to adjust the training parameters or add more data, then retrain the model to improve its performance. Once the model has been trained and validated to an acceptable level of accuracy, we will deploy it to the Portenta H7 hardware using the Arduino library.
To connect the Arduino Portenta H7 board to your Edge Impulse account, follow the steps outlined in the link below.
After installing the firmware, open a command window (or Terminal on macOS) and run the daemon command:
edge-impulse-daemon
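(If the board was previously connected to a different Edge Impulse project, running `edge-impulse-daemon --clean` lets you log in again and select the correct project.)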
Once the device is connected, navigate to the Data Acquisition section.
Begin collecting audio data by recording through the Portenta H7's microphone. Here I spoke the word "Left", recorded it in the Edge Impulse tool, and labelled the sample "Left". Likewise, record audio data for "Right", and also collect samples of road noise alone for better accuracy.
**Model training in Edge Impulse**

In the Create Impulse section, set the processing block to MFCC and select "Classification" as the learning block.
Then generate the features and visualise them to get a high-level overview of how well each label separates.
In the NN settings, set the number of training cycles to 200 and the learning rate to 0.005.
In the Neural Network section, configure the layers as indicated.
I used a reshape layer to arrange the audio features into a 1D array, followed by a 1D convolution layer for model training. To reduce overfitting and improve accuracy, I added a dropout layer as well.
The model achieved an accuracy of 100% during the training phase, which is sufficient to proceed to the next step.
During the testing phase, the model is evaluated on new data that was not used during training. The model achieved 84.21% accuracy, which is sufficient for hardware deployment.
After successfully verifying the trained model, it is ready to be deployed back to the Arduino Portenta H7.
However, we won't deploy it directly to the board just yet, as we need to add more logic on top of the machine-learning prediction. Instead, export the model as an Arduino library from Edge Impulse's Deployment page.
Once the model is downloaded, follow these steps to import the library into the Arduino IDE:
**Importing the library in the Arduino IDE**

1. Open the Arduino IDE, go to **Sketch** > **Include Library** > **Add .ZIP Library**, and select the downloaded .zip file.
2. Once imported, navigate to **File** > **Examples** > **Smart_Helmet_V2_Inferencing** > **Portenta_H7** > **Portenta_h7_microphone_continuous**.
When classifying audio, such as in keyword detection, it is essential to ensure that all information is both captured and analyzed to avoid missing any events. This requires your device to capture audio samples while simultaneously analyzing them.
In the standard (non-continuous) inference mode for classifying data, you sample data until you have a complete window (for example, 1 second for a keyword spotting model, as detailed in the Create Impulse tab in the studio). Once you have this window, you classify it using the `run_classifier` function, which returns a prediction. After that, you clear the buffer, sample new data, and run the inference process again. However, there are some important caveats to consider when deploying your model in the real world (illustrated in the sketch after this list):
1. There is a delay between windows since classifying each window takes time and no data is sampled during this classification. This can result in missing events.
2. There is no overlap between windows. Therefore, if an event occurs right at the end of a window, it might not be fully captured, leading to incorrect classification.
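To make these caveats concrete, here is a minimal sketch of the standard mode. It assumes the generated library header is named after the project, and uses a hypothetical `record_full_window()` in place of the real microphone capture code; `run_classifier()`, `signal_t`, and `numpy::int16_to_float()` come from the Edge Impulse SDK.

```cpp
/* Standard (non-continuous) inference, sketched for illustration. */
#include <Smart_Helmet_V2_inferencing.h>   // assumed header name

static int16_t sample_buffer[EI_CLASSIFIER_RAW_SAMPLE_COUNT];

// Feed float samples to the classifier from the raw int16 PCM buffer
static int get_audio_data(size_t offset, size_t length, float *out_ptr) {
    numpy::int16_to_float(&sample_buffer[offset], out_ptr, length);
    return 0;
}

void loop() {
    record_full_window(sample_buffer);  // hypothetical: blocks ~1 s while sampling

    signal_t signal;
    signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
    signal.get_data = &get_audio_data;

    ei_impulse_result_t result;
    run_classifier(&signal, &result, false);

    // While run_classifier() executes, no audio is captured: a keyword spoken
    // in this gap, or straddling two windows, can be missed (caveats 1 and 2).
}
```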
To mitigate this, we need continuous inferencing. Please follow the link below for a detailed explanation of continuous inferencing.
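For orientation, the core loop of the generated Portenta_h7_microphone_continuous example looks roughly like this (simplified here; helper functions such as `microphone_inference_record()` and `microphone_audio_signal_get_data()` are defined by the example itself):

```cpp
/* Continuous inference: audio keeps being captured in the background
   (interrupt-driven) while each small slice is classified. */
void loop() {
    // Wait for the background capture to fill the next slice
    if (!microphone_inference_record()) {
        ei_printf("ERR: audio buffer overrun\r\n");
        return;
    }

    signal_t signal;
    signal.total_length = EI_CLASSIFIER_SLICE_SIZE;
    signal.get_data = &microphone_audio_signal_get_data;

    ei_impulse_result_t result;
    EI_IMPULSE_ERROR r = run_classifier_continuous(&signal, &result, false);
    if (r != EI_IMPULSE_OK) {
        ei_printf("ERR: failed to run classifier (%d)\r\n", r);
        return;
    }

    // result.classification[i].label / .value hold per-class confidences,
    // smoothed across overlapping windows by the SDK's moving-average filter.
}
```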
**Algorithm**

After detecting the keyword 'Left' or 'Right', the software blinks the corresponding LED for 7000 ms with a 500 ms cycle time.
I have incorporated custom logic into this example to drive specific General Purpose Input/Output (GPIO) pins, turning the corresponding LED on and off when a keyword is detected; a simplified sketch of this logic follows below. A link to the modified .ino file is provided for reference once the library has been added to the Arduino IDE.
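As a rough sketch of that logic (the pin numbers and the 0.8 confidence threshold are placeholders, not the values from my .ino):

```cpp
/* Illustrative keyword -> LED logic. pinMode(pin, OUTPUT) is assumed
   to have been called in setup(). */
const int LEFT_LED_PIN  = 4;                // placeholder GPIO
const int RIGHT_LED_PIN = 5;                // placeholder GPIO
const unsigned long BLINK_TOTAL_MS = 7000;  // total signalling time
const unsigned long BLINK_CYCLE_MS = 500;   // one ON/OFF cycle

// Blink one LED for 7000 ms with a 500 ms cycle (250 ms ON, 250 ms OFF)
void blink_led(int pin) {
    unsigned long start = millis();
    while (millis() - start < BLINK_TOTAL_MS) {
        digitalWrite(pin, HIGH);
        delay(BLINK_CYCLE_MS / 2);
        digitalWrite(pin, LOW);
        delay(BLINK_CYCLE_MS / 2);
    }
}

// Called with each classification result from the continuous loop
void handle_prediction(const ei_impulse_result_t &result) {
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        if (result.classification[i].value < 0.8f) continue;  // placeholder threshold
        if (strcmp(result.classification[i].label, "Left") == 0) {
            blink_led(LEFT_LED_PIN);
        } else if (strcmp(result.classification[i].label, "Right") == 0) {
            blink_led(RIGHT_LED_PIN);
        }
    }
}
```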
**Final Model**

I connected a 5 V mini power bank to power the Portenta H7 and the LEDs.
This TinyML model, built using the continuous inferencing method in Edge Impulse, brings a 'Hey Siri'-style keyword-spotting feature to microcontrollers. Models like this can be applied to many other use cases, such as turning lights on or off in home automation.