Globally, people are taking more charge of their health thanks to emerging digital healthcare solutions. Although many factors contribute to the significant market growth of this technology, advancements in computing, artificial intelligence (AI), and remote monitoring (IoT) are some of the major driving forces. A single wearable device can now monitor a range of medical parameters, giving us seamless access to personal healthcare analytics that can contribute to our health, facilitate preventive care, and assist in the management of ongoing illness. We can now use rings and watches to track our blood pressure, oxygen levels, heart rate, sleep patterns, and muscle activity such as walking.
In this day and age, you probably have a fitness tracker, whether a wearable (watch and/or ring) or simply your mobile phone. These devices generate a lot of data, and unless someone is working towards a specific health or fitness goal, step count stands out as the metric for general wellness. For most people who are able to walk, movement is the easiest quantitative parameter to track. Each day we have a number of steps that we want to reach, and the 'magical' number for most people is between 6,000 and 10,000 steps a day. While researching this project, I learned that just 2,500 steps a day is enough to provide health benefits and reduce the risk of cardiovascular disease.
However, when counting steps for fitness, we need to factor in terrain, since the earth is not flat (pun intended). For example, walking uphill requires more effort from the heart, muscles, and lungs, and burns more calories. This more demanding exercise matters for both calorie burning and cardiovascular benefits. In this regard, the reliability of wearables is a concern. These devices are promoted as improving general health and fitness, yet the majority of companies do not provide evidence to support the effectiveness of their products.
Terrain classification while walking requires additional sensing on the wearable, such as an inertial measurement unit (IMU) to sense the angle of movement. Through further research and advancing AI models, we can identify how movement on different terrains affects calorie burning and cardiovascular health. This enables more precise health tracking and can even support decision making, such as recommending which terrain to walk on to burn a certain amount of calories.
Software developments in the embedded AI field have enabled rapid AI development and deployment, as well as advanced customization and automation, allowing developers to bring AI solutions to the world quickly. For this project, the complex tasks involve fetching data from a microcontroller, using it to generate a dataset for training a step classification model, and then deploying that model back to the microcontroller. The Edge Impulse platform simplifies these tasks, enabling us to rapidly create efficient AI solutions. I would also like to express my sincere gratitude to the company for supporting me in carrying out this research project.
This demonstration research project tackles how to integrate terrain classification while walking for more precise health monitoring. My goals for this project were:
- Train an AI model capable of classifying walking as uphill, downhill, or on a flat surface.
- Develop the project as a compact wearable powered by a small, low-cost, low-power microcontroller board as the processing unit.
- Periodically send terrain classification results to a simple mobile dashboard for observation, similar to pedometer applications.
To achieve this, the project leverages embedded AI technology, which allows us to run lightweight AI models on microcontrollers connected to sensors. I utilized the XIAO nRF52840 Sense board given that it has an onboard 6 DOF IMU (LSM6DS3TR-C), Bluetooth LE 5.2, and ultra-low power consumption, and its small form factor makes it a great fit for a wearable. The onboard IMU is used to capture the angle of the surface at foot level, while the Bluetooth Low Energy (BLE) connectivity is used to send terrain classification results to a simple custom BLE WebApp that we can access on our mobile devices.
To sense the slope of the ground, we can place an IMU sensor at foot level, since the foot is in direct contact with the ground. As we walk on different slopes, our feet tilt relative to the surface they are touching, and an IMU can sense this tilt.
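As a quick illustration of this principle, the snippet below estimates the foot's pitch angle from raw accelerometer readings, assuming the IMU is at rest and gravity dominates the signal. The axis mapping is an assumption that depends on how the board is mounted; the deployed model learns from the raw motion data rather than this explicit angle, but it shows why slope is observable at foot level.

```cpp
#include <math.h>

// Estimate pitch (tilt) in degrees from accelerometer readings in g.
// Here ax is assumed to point along the walking direction; adjust the
// axis mapping for your own enclosure orientation.
float pitchDegrees(float ax, float ay, float az) {
  // Angle between the forward axis and the gravity vector.
  return atan2f(-ax, sqrtf(ay * ay + az * az)) * 57.2957795f; // rad -> deg
}
```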
I programmed the XIAO nRF52840 Sense board to collect IMU data while walking on various surfaces and used the data to train a simple step classification model. During the deployment phase, the board was programmed to periodically send the counted steps for each terrain via BLE, a wireless communication technology introduced with Bluetooth 4.0 for short-range, low-power data transmission.
The next part of this documentation describes how to develop a similar wearable and the various configurations that I made. Want to test the wearable right away? You can jump to section '03. Deploying the model' to start building your wearable.
You can access the public Edge Impulse project via this link: Walk terrain classification. All the source code for the project has been open sourced and is available in this GitHub repository: ai-walking-terrain-classification. The 3D print files for the wearable are also available in the GitHub repository and on Printables.com.
Components and hardware configuration
Software components:
- Arduino IDE
- Edge Impulse Studio account
Hardware components:
- XIAO nRF52840 Sense
- LiPo battery. I used a 3.7V 200mAh battery
- Soldering iron and some wires to connect the battery to the XIAO board
- Personal computer (PC), preferably a laptop.
Considering that we first need to have the XIAO board secured on a foot, I designed a simple enclosure that can house the board and a LiPo battery.
The case has a USB-C slot that will allow us to connect a PC to the XIAO nRF52840 Sense and collect IMU data during the data acquisition step of the project. After 3D printing the case, I soldered wires to the battery pads on the XIAO board.
When designing the case, I made a 'hook' on the bottom side (as seen in the above images) for shoe laces to slide into, allowing a cleaner attachment of the wearable. However, upon testing, I found that the hook could not hold onto the shoe laces well and the unit kept falling off while walking. As a simple quick hack, I decided to slide the case underneath the shoe laces instead.
Now that we can secure the XIAO nRF52840 Sense to a foot, we can collect data and train a terrain classification model. One evening, I headed to a less congested parking lot with a vehicle ramp, which offers both flat and inclined surfaces. I used this environment to collect data for training the model and testing the wearable.
2.1. Collecting Data
To collect IMU data from the XIAO nRF52840 Sense and create a dataset, we will utilize the Arduino sketch XIAO_nRF52840_EI_LSM6DS3_data.ino to collect raw data from the board. The sketch continuously sends 3 axes of sensor data via Serial, and our computer forwards the data to an Edge Impulse project.
On your personal computer (preferably a laptop), ensure that you have installed the Arduino IDE and the Seeed nRF52 mbed-enabled Boards package by Seeed Studio. Next, install the LSM6DS3 library, which is used to obtain data from the IMU. While uploading the code to print raw IMU data, I encountered some compatibility issues which resulted in errors such as "class arduino::MbedSPI has no member named 'setBitOrder'", and even after the code uploaded successfully, the IMU data read 0. After debugging, I managed to get things working by using version 2.7.2 of the Seeed board package and version 2.7.2 of the LSM6DS3 library.
Connect your XIAO board to your PC, select the 'Seeed XIAO BLE Sense - nRF52840' board followed by the appropriate port, and upload the sketch. On a Serial terminal such as the Serial Monitor, you should see raw IMU data from the board, which will change when the board is moved. We are now ready to connect our XIAO board to Edge Impulse Studio.
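For reference, a minimal sketch along the lines of XIAO_nRF52840_EI_LSM6DS3_data.ino could look like the following. The I2C address (0x6A) and the 50Hz sampling rate are assumptions based on the board's documentation and the Impulse configuration used later; the actual sketch in the repository is the authoritative version.

```cpp
#include <LSM6DS3.h>
#include <Wire.h>

LSM6DS3 imu(I2C_MODE, 0x6A);      // onboard LSM6DS3TR-C IMU

const unsigned long SAMPLE_INTERVAL_MS = 20;  // 20 ms -> 50 Hz

void setup() {
  Serial.begin(115200);
  while (!Serial);                // wait for the Serial port to open
  if (imu.begin() != 0) {         // 0 means success for this library
    Serial.println("IMU initialization failed!");
    while (1);
  }
}

void loop() {
  // One line per sample: Ax, Ay, Az in g, tab separated. This is a
  // format the edge-impulse-data-forwarder can parse from Serial.
  Serial.print(imu.readFloatAccelX()); Serial.print('\t');
  Serial.print(imu.readFloatAccelY()); Serial.print('\t');
  Serial.println(imu.readFloatAccelZ());
  delay(SAMPLE_INTERVAL_MS);
}
```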
The first step to connect to the XIAO board and retrieve data is to install the Edge Impulse CLI on your PC. This is a suite of tools used to control local devices, act as a proxy to synchronize data for devices that don't have an internet connection, and upload and convert local files. In this project, we will utilize the edge-impulse-data-forwarder tool of the CLI to capture Serial data from the XIAO board and automatically upload it to an Edge Impulse project. You can follow the installation steps according to your computer's Operating System.
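At the time of writing, the CLI is distributed through npm, so on most platforms the installation (assuming Node.js is already installed) is a single command; check the official documentation for OS-specific prerequisites:

npm install -g edge-impulse-cli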
Before starting to collect data, I first secured the wearable on my shoe and connected the XIAO board to my computer using the USB-C slot. From here, collecting data, training the model, and deploying it back to the device for terrain classification is a matter of minutes.
Head to Edge Impulse Studio and create a project. Have your XIAO nRF52840 Sense connected to your PC, and ensure no Serial terminal application is accessing the device's COM port. Once the project has been created, make sure your PC has internet access and run the command below on a CLI to start the data forwarder:
edge-impulse-data-forwarder
You will be requested to enter your Edge Impulse account credentials and select the project you want the XIAO board's Serial data to be uploaded to. The data forwarder will automatically detect 3 sensor axes, and we can name them Ax, Ay, Az.
Now, when we go to 'Data acquisition' in our Edge Impulse project, we will see that our XIAO board has been connected and the platform has detected the 3 sensor axes.
We can now start sampling terrain data. In the 'Collect data' section, we first need to set the label/class for the data we are collecting. We can also configure the duration of sampling using the 'Sample length (ms.)' setting. I set the sample length to 30-second durations and collected data while walking on the flat surfaces of the parking lot, and later while walking up and down the ramps to obtain data for the inclined surfaces. I planned to have 3 classes: flat_surface, uphill_surface and downhill_surface. Click 'Start sampling' and Edge Impulse Studio will automatically instruct the data forwarder running on your PC to start sampling IMU data via Serial and upload them to the project.
While walking on the different slopes, I tried to maintain a consistent step frequency so that the dataset would be evenly balanced.
Once the set duration of sampling has been completed, the sampled data will be visible under 'Dataset'.
Initially, I collected around 200 seconds of data for the 3 classes: flat_surface, uphill_surface and downhill_surface. However, after deploying the model, I observed a huge bias because the model had not been trained on still motion. To fix this, I added another class (no_motion) of IMU data from still positions such as standing. Eventually the dataset had 250 seconds of IMU data across the 4 classes.
Looking at the accelerometer data from the IMU, we can observe a "trend" between the different slopes in the time series representation of the motion data. This tells us that an AI model should be capable of analyzing these trends and learning the terrain slopes, in addition to step detection, which such models are exceptionally good at.
It is worth noting that more data would need to be collected to make the model more effective. In my case, the dataset I worked with is relatively small and is intended to explore the feasibility of the solution.
Before proceeding to process the IMU data, we can split the collected data into 1-second samples by clicking the '3-dot' vertical menu on each sample and then choosing 'Split Sample'. Click '+Add Segment' and then click the graph to create a segment. Repeat this until all the motion samples on the graph have been segmented. Finally, click 'Split' and you will see the samples as 1 second each (or as configured by 'Set segment length (ms.)').
Finally, the last step in creating our dataset is to split it for training and testing. The popular rule of thumb is an 80/20 split: 80% of the dataset is used for model training while 20% is used for model testing. In the Edge Impulse Studio project, we can click the red triangle with an exclamation mark (as shown in the image below), which opens an interface that suggests splitting our dataset. We then click the button 'Perform train / test split'. This opens another interface asking us to confirm rebalancing our dataset. We click the button 'Yes, perform train / test split', enter 'perform split' in the next window as prompted, and finally click the button 'Perform train / test split'.
After collecting data for our project, we can now train a Machine Learning model for the required terrain classification task. To do this, in the Edge Impulse project we need to create an Impulse: a configuration that defines the input data type, the data pre-processing algorithm, and the Machine Learning model to be trained.
One of the great features of Edge Impulse is the simplified development and deployment of Machine Learning models. The Experiments feature allows projects to contain multiple Impulses, where each Impulse can contain either the same combination of blocks or a different combination. This allows us to view the performance for various types of learning and processing blocks, while using the same input training and testing datasets.
The first step is to create an Impulse by clicking the 'Create impulse' button. In my case, I experimented with various processing techniques and models using three Impulses. For each Impulse, I set the input data to a window size and window increase of 1000ms (1 second), and a frequency of 50Hz. For processing the raw data and extracting meaningful features, the first and third Impulses use Spectral Analysis. This algorithm is great for analyzing repetitive motion, such as data from accelerometers; it extracts the frequency and power characteristics of a signal over time. In the second Impulse, I experimented with processing the IMU data by flattening each axis into a single value, but the resulting model was not robust. This is likely because walking data is generally not slow-moving, which is what the Flatten block is best suited for. Finally, the processed data in each Impulse is fed to a classifier, the Classification learning block, which is designed to learn patterns from data and apply them to new data. It is great for categorizing movement or recognizing audio. After setting the configuration for an Impulse, we click the 'Save Impulse' button.
In the third Impulse, I selected only the Z axis to be processed, since while walking our feet generally move up and down on the vertical (Z) axis. I wanted to observe how using only this axis would perform, both in training a model and in the real world.
The next step is to configure the processing block and generate features from the training set. Under Impulse, click 'Spectral features'. We can use the default DSP parameters or leverage the 'Autotune parameters' feature to automatically optimize them for better performance. In my case, I proceeded with the default parameters.
On the page, click 'Save parameters' followed by the 'Generate features' button. The features generation process will take some time depending on the size of the data. When this process is finished, the Feature explorer will plot the features. Note that features are the output of the processing block, and not the raw data itself.
Looking at the features, we can observe that with Spectral Analysis the clusters for the 4 classes are well separated (meaningful), illustrating that a classifier can learn the differences between them.
The last step is to train the model. We click 'Classifier' and set the training parameters. I used 100 training cycles (epochs), a learning rate of 0.001 for Impulses 1 and 3, and a learning rate of 0.01 for the second Impulse. For a relatively small project such as this one, we can use a CPU as the training processor; in this case the training process takes less than 5 minutes. Click 'Save & train' to start the training process.
The classifier is a simple fully connected feedforward neural network with an input layer, two dense layers and an output layer. In the input layer, each sample is represented as a vector of X numerical values (features) that come from the digital signal processing such as Spectral Analysis. Next, the dense layers transform the features and learn patterns. Finally, the output layer produces a probability distribution of the given input over the 4 terrain classes (prediction).
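To make the data flow concrete, here is a minimal sketch of that forward pass in plain C++. The layer widths and activation choices are assumptions for illustration (Edge Impulse generates the actual architecture); it only shows how features propagate through dense layers to a softmax over the 4 classes.

```cpp
#include <math.h>

// One fully connected (dense) layer: out = activation(W * in + b).
void dense(const float *in, int nIn, const float *w, const float *b,
           float *out, int nOut, bool relu) {
  for (int j = 0; j < nOut; j++) {
    float acc = b[j];
    for (int i = 0; i < nIn; i++) acc += w[j * nIn + i] * in[i];
    out[j] = (relu && acc < 0.0f) ? 0.0f : acc;  // ReLU activation
  }
}

// Softmax turns the output layer's raw scores into class probabilities.
void softmaxInPlace(float *v, int n) {
  float maxV = v[0], sum = 0.0f;
  for (int i = 1; i < n; i++) if (v[i] > maxV) maxV = v[i];
  for (int i = 0; i < n; i++) { v[i] = expf(v[i] - maxV); sum += v[i]; }
  for (int i = 0; i < n; i++) v[i] /= sum;       // probabilities over classes
}
```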
After the training process is complete, the resulting model's performance may seem acceptable since the accuracy of all Impulses is more than 80%. However, accuracy by itself is not a measure of how well a model is performing. First, the loss of the second and third Impulses is 0.4 and 0.5 respectively; this means the model is learning something useful but still makes errors fairly often. Looking at the confusion matrix, especially for the second and third Impulses, we can see that the main challenge is distinguishing between flat and inclined surfaces.
When training our model, we used 80% of the data in our dataset. The remaining 20% is used to test the accuracy of the model in classifying unseen data. Before deploying the model, we need to verify that it has not overfit (memorized the training data) by testing it on new/unseen data. To test our model, we first click 'Model testing' then 'Classify all'. The first Impulse has a testing accuracy of 76%, while the second and third Impulses have 62% and 51% respectively.
One might ask why the model performs differently on the test set, given that it is technically similar data to what was used during training. The short answer is that when training a model, the data in the train set is fed to the model repeatedly during each epoch. The data is normally shuffled and augmented to prevent the model from memorizing patterns and to improve generalization by making the dataset more diverse. However, depending on the size of the training data and how it is processed, it may not be sufficient to train the model properly. In that case, the training accuracy will not match the test (and even deployment) accuracy.
In our case, we can see that using the Flatten block does not result in a robust model. At the same time, since the third Impulse utilized only the Z axis, we can observe that processing data from that axis alone may not be the ideal technique for this solution. I therefore chose to proceed with deploying the first Impulse.
03. Deploying the model
We will deploy the first Impulse as an Arduino library: a single package containing all the processes for sampling IMU data for 1 second, processing the data, and feeding the features to the classifier. We can include this package (Arduino library) in our own sketches to run the Impulse locally on microcontrollers such as the XIAO nRF52840 Sense. Technically, no programming is needed to deploy the Impulse, since the Edge Impulse platform generates all the required source code and example sketches as well; this is awesome!
Ensure Impulse 1 is the active one and then click 'Deployment'. In the field 'Search deployment options', select Arduino library. Since memory and CPU clock rate are limited for our deployment, we can optimize the model so that it can utilize the available resources on the nRF52840 (or simply, so that it can fit and manage to run on the SoC). Model optimization often involves a trade-off whereby we decide whether to trade model accuracy for improved performance or reduced memory (RAM) use. Edge Impulse has made model optimization very easy with just a click. Currently there are two optimizations: the EON Compiler (which gives the same accuracy but uses 54% less RAM and 61% less ROM) and TensorFlow Lite. To enable model optimizations, I selected the EON Compiler and Quantized (int8).
Click 'Build' and this will start a task that packages your Impulse accordingly; finally, a zip file will be downloaded to your computer. Add the library to your Arduino IDE and open the customized inference sketch (available in the GitHub repository). The inference sketch continuously samples 1 second of IMU data and classifies the motion into one of the classes. Based on the prediction, the counters flat_surface_steps, uphill_steps and downhill_steps are incremented accordingly. Finally, the XIAO nRF52840 Sense periodically transmits inference results and classification time via BLE. The frequency of BLE transmissions is configured by the ble_upload_ms variable in the inference sketch. Note that increasing the transmission rate will increase the energy drawn from the battery. If you deployed the model from an Edge Impulse project with a different name, ensure the filename of the inference library include is correct. Finally, upload the inference sketch to the XIAO board.
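For orientation, below is a condensed sketch of the classification step, based on the standard structure of Edge Impulse's Arduino examples. The include name follows the platform's <project_name>_inferencing.h convention and is an assumption here; adjust it, the buffer filling, and the BLE plumbing (omitted for brevity) to match the actual sketch in the repository.

```cpp
#include <Walk_terrain_classification_inferencing.h>  // name depends on your EI project

static float buffer[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE]; // 1 second of IMU data
static unsigned int flat_surface_steps = 0, uphill_steps = 0, downhill_steps = 0;

void classifyWindow() {
  // Wrap the raw buffer into a signal the classifier can read.
  signal_t signal;
  numpy::signal_from_buffer(buffer, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

  ei_impulse_result_t result;
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  // Pick the class with the highest probability and update its counter.
  size_t best = 0;
  for (size_t i = 1; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    if (result.classification[i].value > result.classification[best].value) best = i;
  }
  const char *label = result.classification[best].label;
  if      (strcmp(label, "flat_surface") == 0)     flat_surface_steps++;
  else if (strcmp(label, "uphill_surface") == 0)   uphill_steps++;
  else if (strcmp(label, "downhill_surface") == 0) downhill_steps++;
  // A no_motion prediction leaves all counters unchanged.
}
```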
After the sketch was uploaded, I disconnected the USB-C cable from the XIAO board and used the soldered wires to connect the LiPo battery to the board. At this point, we are done with the hardware related tasks and we can mount the wearable on our shoe.
The remaining task is simply to copy the mobile WebApp HTML file to a smartphone. In the GitHub repository, index.html and index_mobile.html are similar; the only difference is that the 'mobile' version embeds images as base64-encoded strings, a quick hack to fix file path issues. The HTML file creates a dashboard for a terrain-aware pedometer. It allows a device (smartphone, PC, etc.) to connect to the XIAO board via BLE using the Web Bluetooth API on supported browsers such as Google Chrome. Over the BLE connection, the page receives step data (flat, uphill, downhill) and prediction time from the XIAO board and updates the dashboard in real time with the latest step counts. The page also saves daily step counts in local storage, so that data is not lost when the page reloads or the device resets.
Once the HTML file has been copied to your smartphone, open it and connect to the XIAO board by clicking the 'Link wearable' button and selecting 'XIAO-nRF52840-Sense'. With the wearable secured on your foot, you can start walking and you will see terrain classification results from the board.
At last, our terrain-aware wearable is complete. We now have terrain sensing while walking, enabled by the XIAO nRF52840 Sense powered wearable, and real-time step classification on the Web Application. The entire process, from collecting data to training the model and uploading the inference sketch, took me about an hour, with the majority of the time spent compiling the inference sketch in the Arduino IDE.
In the same ramp and parking lot area that I used to collect data, I walked around and observed how the model was performing. First, the model is able to differentiate walking from standing still. There are notable errors in the predictions, but these can be corrected with more data and advanced processing. The BLE updates and the daily step count records work very well.
Below are some snapshots of the mobile WebApp.
Wearable technology has seen global adoption. However, comparisons between various wearables for tracking physical activity still show significant variations in accuracy between devices. At the same time, terrain classification while walking can provide more insightful data about our fitness, but this solution requires more analysis and data to precisely identify how these movements affect our bodies.
Simple step counting is not an ideal metric for fitness status. Addressing this, and greatly improving the model, are promising future works for the project. As usual, all the source code for the project has been open sourced and is available in the GitHub repository.