This project is a full-stack IoT system designed for the real-time classification of user activities. An RT-Thread RT-Spark development board equipped with an accelerometer and gyroscope captures motion data and transmits it over Wi-Fi. The system bridges the gap between raw hardware signals and meaningful user insights by leveraging cloud-based analytics and mobile visualization.
Software Architecture

The project is divided into three distinct layers:
- The Edge: An RT-Spark board collecting 6-axis accelerometer and gyroscope readings at an optimized sampling rate.
- The Brain (Back-end): A Flask web server hosting a pre-trained Machine Learning model. It receives raw data, extracts temporal features, and predicts activities (Walking, Running, or Stationary).
- The Interface: An Android mobile application that provides a real-time dashboard and a historical log of user activity retrieved from a Firebase database.
The RT-Thread RT-Spark Development board served as the edge sensing device. Due to the integration of the ICM20608 sensor suite, high-fidelity motion data was accessible without external peripheral wiring. The device was configured for high-speed Wi-Fi transmission to maintain a continuous data stream to the back-end.
Phase 2: Embedded Firmware Development (RT-Thread Studio)

Firmware development was conducted within RT-Thread Studio.
- Driver Initialization: Onboard sensor drivers were enabled to capture raw data from the 3-axis accelerometer and 3-axis gyroscope.
- Network Management: The WLAN management framework was utilized to establish a stable TCP/IP connection.
- Data Serialization: Sensor readings were aggregated into 30-sample windows, serialized into JSON format, and transmitted via HTTP POST requests to the server’s REST API endpoint.
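The serialization step above can be sketched as follows. This is an illustrative sketch only: the field names (`ax`…`gz`, `samples`) and the payload layout are assumptions, since the report does not specify the firmware's exact JSON schema; only the 30-sample window size comes from the text.

```python
import json

WINDOW_SIZE = 30  # samples per transmission window, as described above

def build_payload(samples):
    """Serialize one 30-sample window of 6-axis readings into JSON.

    Each sample is assumed to be a dict of accelerometer (ax, ay, az)
    and gyroscope (gx, gy, gz) readings; the real firmware may differ.
    """
    assert len(samples) == WINDOW_SIZE
    return json.dumps({"samples": samples})

# Example: a window of zeroed readings, standing in for real sensor data
window = [{"ax": 0.0, "ay": 0.0, "az": 0.0,
           "gx": 0.0, "gy": 0.0, "gz": 0.0} for _ in range(WINDOW_SIZE)]
payload = build_payload(window)
# The firmware would then POST this payload to the server's REST endpoint.
```

On the device itself this logic runs in C against RT-Thread's sensor and HTTP APIs; the Python version above only documents the wire format.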
Activity recognition was powered by a supervised learning model trained on a comprehensive motion dataset.
- Feature Extraction: Statistical features, specifically the mean and standard deviation of each axis, were calculated to represent the temporal rhythm of movement.
- Model Training: A lightweight Scikit-Learn classifier was selected to ensure rapid inference times. The model was optimized to distinguish between "Stationary," "Walking," and "Running" states.
- Deployment: The finalized model was serialized into a .pkl format for integration into the Python-based Flask environment.
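The pipeline above can be sketched end to end. The report does not name the specific Scikit-Learn classifier, so `LogisticRegression` stands in here, and the training data below is random placeholder data for illustration; only the mean/std feature set, the three activity classes, and the .pkl serialization come from the text.

```python
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in classifier

def extract_features(window):
    """Mean and standard deviation of each of the 6 axes -> 12 features."""
    w = np.asarray(window)                      # shape (30, 6)
    return np.concatenate([w.mean(axis=0), w.std(axis=0)])

# Toy stand-in for the real motion dataset (random, illustration only)
rng = np.random.default_rng(0)
X = np.stack([extract_features(rng.normal(size=(30, 6))) for _ in range(60)])
y = rng.integers(0, 3, size=60)                 # 0=Stationary, 1=Walking, 2=Running

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Serialize for the Flask server, as in the deployment step above
with open("activity_model.pkl", "wb") as f:
    pickle.dump(clf, f)
```

At inference time the Flask server recomputes the same 12 features from each incoming window and calls `clf.predict` (and `predict_proba` for confidence scores) on the unpickled model.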
The Flask server functioned as the central processing unit of the architecture.
- Inference Engine: The server received JSON packets, extracted real-time features, and passed them through the ML model to generate predictions and confidence scores.
- Firebase Realtime Database: Predictions were pushed to a cloud database. A Throttling Mechanism was implemented to update the "Current Activity" node instantly while limiting "History" log entries to every 10 seconds (or upon activity change) to prevent database congestion.
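The throttling rule above can be captured in a small helper. The 10-second interval and the log-on-activity-change behavior come from the text; the class and method names are illustrative, and the actual Firebase writes are omitted.

```python
import time

HISTORY_INTERVAL = 10  # seconds between "History" log writes, per the design

class HistoryThrottle:
    """Update "Current Activity" on every prediction, but append to the
    "History" log only every 10 seconds or when the activity changes."""

    def __init__(self):
        self.last_activity = None
        self.last_log_time = 0.0

    def should_log(self, activity, now=None):
        """Return True if this prediction should also be written to History."""
        now = time.time() if now is None else now
        changed = activity != self.last_activity
        elapsed = now - self.last_log_time
        if changed or elapsed >= HISTORY_INTERVAL:
            self.last_activity = activity
            self.last_log_time = now
            return True
        return False
```

The server would call `should_log` after each prediction: the "Current Activity" node is always updated, and a "History" entry is pushed only when the method returns True, which keeps the database write rate bounded.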
The user interface was developed using Android Studio and Java.
- Real-time Synchronization: Firebase listeners were utilized to ensure the UI responded to cloud changes with negligible latency.
- User Experience (UX): The dashboard was designed to provide visual cues (color-coded backgrounds) corresponding to the detected activity. A RecyclerView was implemented to render the historical logs efficiently.
The completed system demonstrates a successful integration of embedded systems, cloud computing, and machine learning. By offloading complex inference to a Flask back-end and utilizing a real-time database, the prototype provides accurate, low-latency activity tracking suitable for modern wearable applications.

Project Demo: https://drive.google.com/file/d/1hT--Oq-w1WQ6VzeuwJQyIA-iCDd9br_Y/view?usp=sharing