This project focuses on building a complete wearable activity tracking system that integrates embedded programming, cloud services, machine learning, and application development. The goal was to design a working end-to-end IoT pipeline capable of classifying human activities in real time with minimal latency and reliable data handling.
The system starts at the embedded level with the RT-Thread RT-Spark development board, which is based on an STM32 microcontroller and programmed in RT-Thread Studio. I developed firmware to initialize and configure the onboard accelerometer, gyroscope, and Wi-Fi module. Sensor data is sampled at a controlled rate to balance temporal resolution against network overhead. The firmware packages the multi-axis motion data into structured messages and transmits them periodically over Wi-Fi to the cloud platform.
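As a rough illustration of what one of these structured messages could contain, the sketch below defines a possible payload shape in TypeScript (the same language used for the app-side examples later). The field names, units, and batching scheme are assumptions for illustration, not the firmware's actual schema.

```typescript
// Illustrative sketch of the structured message the firmware might emit; field names,
// units, and the batching scheme are assumptions, not the actual firmware schema.
interface MotionSample {
  ax: number; ay: number; az: number; // accelerometer axes, e.g. in g
  gx: number; gy: number; gz: number; // gyroscope axes, e.g. in deg/s
}

interface SensorMessage {
  deviceId: string;        // identifies the RT-Spark board on the cloud side
  timestamp: number;       // Unix time in ms when the batch was captured
  sampleRateHz: number;    // the controlled sampling rate chosen in the firmware
  samples: MotionSample[]; // a short batch of readings sent in one Wi-Fi transmission
}
```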
For cloud services, I used Mango Cloud as the device management and data ingestion platform. Incoming sensor data is received, validated, and forwarded to the application layer for processing. I implemented the server-side logic, in which the system continuously listens for new sensor updates while handling concurrent API requests. This required careful design to coordinate data reception, inference, and database writes without conflicts.
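One way to picture that coordination is a simple serial queue that chains reception, inference, and storage so the steps never interleave. The sketch below only illustrates the pattern; SensorMessage, runInference, and savePrediction are placeholder names, not the project's actual identifiers.

```typescript
// Minimal sketch of serializing reception, inference, and storage so concurrent
// callbacks never interleave. All identifiers here are illustrative stand-ins.
interface SensorMessage { deviceId: string; samples: number[][] }
interface Prediction { activity: string; timestamp: number }

async function runInference(msg: SensorMessage): Promise<Prediction> {
  return { activity: "Idle", timestamp: Date.now() }; // placeholder for the real model
}
async function savePrediction(p: Prediction): Promise<void> {
  // placeholder for the real database write
}

// Chain every pipeline step onto the previous one so only one runs at a time.
let tail: Promise<void> = Promise.resolve();
function enqueue(job: () => Promise<void>): void {
  tail = tail.then(job).catch((err) => console.error("pipeline step failed:", err));
}

// Called whenever the cloud platform delivers a new sensor message.
function onSensorUpdate(message: SensorMessage): void {
  enqueue(async () => {
    const prediction = await runInference(message);
    await savePrediction(prediction);
  });
}
```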
The activity recognition component is powered by a machine learning model trained on accelerometer and gyroscope datasets. The model was designed to be lightweight while still capturing temporal patterns in motion data. Once trained, it was deployed within the backend to perform real-time inference on incoming sensor streams. Predictions are stored alongside timestamps, enabling both live monitoring and historical analysis.
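The sketch below shows the general shape of that inference-and-store step, assuming windowed input. classifyWindow is a placeholder decision rule standing in for the trained model's forward pass, and the thresholds are illustrative only.

```typescript
// Sketch of the backend inference step; classifyWindow stands in for the trained model.
type Activity = "Walking" | "Running" | "Idle";
interface Prediction { activity: Activity; timestamp: number }

const history: Prediction[] = []; // stands in for the database table of predictions

function classifyWindow(window: number[][]): Activity {
  // Mean squared magnitude of the window; the real system feeds the window to the model.
  const energy =
    window.flat().reduce((sum, v) => sum + v * v, 0) / Math.max(window.length, 1);
  if (energy < 0.05) return "Idle";            // illustrative threshold
  return energy < 1.5 ? "Walking" : "Running"; // illustrative threshold
}

function predictAndStore(window: number[][]): Prediction {
  const prediction: Prediction = {
    activity: classifyWindow(window),
    timestamp: Date.now(), // stored alongside the label for historical analysis
  };
  history.push(prediction);
  return prediction;
}
```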
On the application side, I developed a web-based Progressive Web App (PWA) that communicates with Mango Cloud through REST APIs, using HTML and CSS for the user interface and JavaScript for real-time data processing, activity classification, and visualization. The app retrieves real-time predictions and historical activity records and presents them in an intuitive interface. Data fetching is asynchronous, so the user interface remains responsive even during continuous updates.
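A minimal version of that asynchronous fetching could look like the polling loop below; the endpoint path, response shape, and element ids are assumptions rather than Mango Cloud's documented API.

```typescript
// Minimal sketch of the app's asynchronous polling loop; endpoint and element ids are
// hypothetical placeholders, not the project's actual API or markup.
interface Prediction { activity: string; timestamp: number }

async function fetchLatestPrediction(): Promise<Prediction | null> {
  try {
    const res = await fetch("/api/predictions/latest"); // hypothetical REST endpoint
    if (!res.ok) return null;
    return (await res.json()) as Prediction;
  } catch {
    return null; // a failed request must not freeze the interface
  }
}

function startPolling(intervalMs = 1000): void {
  setInterval(async () => {
    const prediction = await fetchLatestPrediction();
    if (!prediction) return;
    // Non-blocking DOM update; rendering stays responsive between polls.
    document.getElementById("activity-label")!.textContent = prediction.activity;
    document.getElementById("activity-time")!.textContent =
      new Date(prediction.timestamp).toLocaleTimeString();
  }, intervalMs);
}
```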
The mobile application interface was designed to clearly communicate the system’s real-time activity classification results while also exposing useful diagnostic information from the underlying data pipeline. Rather than displaying only a final label, the UI reflects how sensor data is processed, buffered, and interpreted by the machine learning model.
At the top of the interface, the current detected activity state is prominently displayed (Walking, Running, or Idle), along with a representative animated icon. This provides immediate, intuitive feedback to the user and confirms that the wearable device, cloud service, and application are communicating correctly. A timestamp below the activity label indicates when the prediction was generated, emphasizing the real-time nature of the system.
Each activity screen includes a circular status indicator, whose color and animation visually reinforce the detected state. For example, walking and running states use dynamic motion icons, while the idle state uses a static symbol to represent minimal movement. This visual consistency helps users quickly distinguish between activity transitions.
Below the activity label, the app displays two important internal metrics: variance and buffer status. The variance value represents the level of motion intensity computed from recent accelerometer and gyroscope readings. Lower variance values correspond to idle or stationary states, while higher variance values indicate active movement such as walking or running. Displaying this value makes the app more transparent and helps validate why a certain activity was classified.
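As a reference for how such a metric can be computed, the sketch below takes the sample variance of the magnitudes of recent readings; the project's exact formula may differ.

```typescript
// Sketch of a motion-intensity metric: the variance of the magnitudes of recent
// accelerometer or gyroscope readings. This is the standard definition, not
// necessarily the exact computation used in the project.
function motionVariance(samples: { x: number; y: number; z: number }[]): number {
  if (samples.length === 0) return 0;
  const magnitudes = samples.map((s) => Math.sqrt(s.x * s.x + s.y * s.y + s.z * s.z));
  const mean = magnitudes.reduce((a, b) => a + b, 0) / magnitudes.length;
  return magnitudes.reduce((acc, m) => acc + (m - mean) ** 2, 0) / magnitudes.length;
}
```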
The buffer indicator shows how many sensor data windows have been accumulated toward the buffer size required before classification. This reflects the temporal logic of the machine learning model, which relies on short sequences of sensor data rather than single readings. As the buffer fills, the app approaches its next prediction cycle, which keeps activity recognition smooth and stable.
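The windowing pattern behind that indicator can be sketched as follows; the buffer size of 50 and the per-reading granularity are assumptions, and the project may buffer short windows rather than individual readings.

```typescript
// Sketch of the buffering logic: readings accumulate until the buffer is full, then
// one classification is triggered. Buffer size and granularity are assumptions.
const BUFFER_SIZE = 50;
const buffer: number[][] = []; // each entry holds one multi-axis reading

function onNewReading(reading: number[], classify: (win: number[][]) => void): string {
  buffer.push(reading);
  if (buffer.length >= BUFFER_SIZE) {
    classify(buffer.slice()); // run the model on the completed buffer
    buffer.length = 0;        // start filling the next one
  }
  // Status string of the kind shown in the app, e.g. "Buffer: 32 / 50".
  return `Buffer: ${buffer.length} / ${BUFFER_SIZE}`;
}
```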
At the bottom of the interface, a real-time signal graph visualizes recent sensor activity. This graph allows users to observe changes in motion patterns as they transition between idle, walking, and running. Sudden spikes and fluctuations in the graph correlate directly with increased physical movement, reinforcing the connection between raw sensor data and the predicted activity.
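A lightweight way to draw such a trace is to keep a rolling array of recent values and redraw them on a canvas, as in the sketch below; the canvas id and vertical scaling are assumptions.

```typescript
// Sketch of the real-time signal trace: keep a rolling history of recent magnitudes
// and redraw them as a polyline on a <canvas>. Canvas id and scaling are assumptions.
const trace: number[] = [];
const MAX_POINTS = 200;

function plotSample(value: number): void {
  trace.push(value);
  if (trace.length > MAX_POINTS) trace.shift(); // keep only the most recent window

  const canvas = document.getElementById("signal-graph") as HTMLCanvasElement;
  const ctx = canvas.getContext("2d")!;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.beginPath();
  trace.forEach((v, i) => {
    const x = (i / MAX_POINTS) * canvas.width;
    const y = canvas.height - Math.min(v, 1) * canvas.height; // clamp spikes to the canvas
    if (i === 0) ctx.moveTo(x, y); else ctx.lineTo(x, y);
  });
  ctx.stroke();
}
```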
https://drive.google.com/file/d/1Fsc3QaE1XMSbEOnriudUOjs-XctXt1F9/view?usp=sharing
Written by: Junie Jebreel M. Botawan
With assistance from ChatGPT (OpenAI)







