Haoran You (hy34)
Zhanhang Zhou (zz70)
Project Description
In this project we want to interpret motion information captured by sensors. Normally, a motion sensor records activity in three dimensions, namely x, y, and z, and different actions have corresponding patterns in the sensor records (the picture below shows the patterns for jogging and walking downstairs).
In order to build an algorithm that identifies different kinds of actions, i.e., to create a "heuristic" for motion differentiation, we will train a machine-learning model using a CNN (convolutional neural network). A successful motion-identification model would be helpful in areas such as automated industrial processes, robotics, disease diagnosis, and human-computer interaction.
General Structure
The structure of the project is similar to the examples (hello world and word detection) given in previous sections. We gather input from the Arduino board, pass that information to the gesture-capture model we have trained, and then process the results. Our project is designed to detect whether the board is being moved along one of the following routes (O, W, L), and the result will be O, W, L, or no action.
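The capture-infer-report loop above can be sketched in Python (the real implementation runs in C++ on the board; the 128-sample window length and the 0.8 confidence threshold here are illustrative assumptions, not values taken from the project):

```python
from collections import deque

LABELS = ["O", "W", "L"]          # the three gesture routes
CONFIDENCE_THRESHOLD = 0.8        # assumed value, for illustration only

def classify_window(window, model):
    """Run the model on one window of (x, y, z) samples and return a
    gesture label, or "no action" if no gesture score is confident."""
    scores = model(window)        # e.g. softmax over 3 gestures + reject class
    best = max(range(len(scores)), key=scores.__getitem__)
    if best < len(LABELS) and scores[best] >= CONFIDENCE_THRESHOLD:
        return LABELS[best]
    return "no action"

# Sliding window of the most recent 128 accelerometer readings.
window = deque(maxlen=128)

def on_sample(x, y, z, model):
    """Called once per accelerometer sample; classifies when the window is full."""
    window.append((x, y, z))
    if len(window) == window.maxlen:
        return classify_window(list(window), model)
    return None
```

On the board, `on_sample` corresponds to reading the accelerometer in `loop()` and `model` to the TensorFlow Lite Micro interpreter invocation.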
Detailed information about the training model is given in the following section. Data collection is handled by the accelerometer. Successfully captured results will be printed to the serial monitor, and the board's LED will also flash.
Model Training
- Datasets
- The dataset consists of 16325 training sequences, 136 validation sequences, and 192 testing sequences. We follow the default settings of the given example and use ten-fold cross-validation to record and benchmark the trained model's performance.
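Ten-fold cross-validation partitions the data into ten disjoint folds and rotates which fold is held out for evaluation. A minimal index-level sketch in plain Python (no assumptions beyond the fold count):

```python
def kfold_indices(n_samples, k=10):
    """Split range(n_samples) into k roughly equal, disjoint folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cv_rounds(n_samples, k=10):
    """Yield (train_indices, val_indices) for each of the k rounds."""
    folds = kfold_indices(n_samples, k)
    for i in range(k):
        val = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, val
```

In practice the data would be shuffled before folding; the sketch keeps indices ordered for clarity.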
- Model Architecture
- We follow the instructions to train a two-layer convolutional neural network with a fully-connected layer as the linear classifier, plus dropout to avoid overfitting. The total number of parameters is 4300.
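The stated total of 4300 parameters is consistent with the magic_wand reference architecture (a Conv2D layer with 8 filters of size 4x3, a Conv2D layer with 16 filters of size 4x1, each followed by max-pooling, then a 16-unit fully-connected layer and a 4-way output covering the gestures plus a reject class). Assuming that architecture, the count can be verified by hand:

```python
def conv2d_params(kh, kw, c_in, c_out):
    # kernel weights plus one bias per output channel
    return kh * kw * c_in * c_out + c_out

def dense_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_in * n_out + n_out

# Input: 128 accelerometer samples x 3 axes x 1 channel.
conv1 = conv2d_params(4, 3, 1, 8)    # -> (128, 3, 8), then 3x3 max-pool -> (42, 1, 8)
conv2 = conv2d_params(4, 1, 8, 16)   # -> (42, 1, 16), then 3x1 max-pool -> (14, 1, 16)
flat  = 14 * 1 * 16                  # 224 flattened features
fc1   = dense_params(flat, 16)
out   = dense_params(16, 4)

total = conv1 + conv2 + fc1 + out
print(total)  # 4300, matching the reported parameter count
```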
- Training Settings
- We train the model for 50 epochs with a batch size of 64.
- In the figure below, we visualize the training and testing accuracy trajectories over the 50 training epochs, shown as the red and blue lines, respectively. The final testing accuracy is 91.67%.
- We also show the corresponding training and testing loss curves to demonstrate the convergence of our algorithm, which also validates the effectiveness of the chosen model and training hyperparameters.
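Given the dataset size and batch size above, the amount of training work per epoch follows directly (assuming the final partial batch is kept rather than dropped):

```python
import math

n_train, batch_size, epochs = 16325, 64, 50

# Number of gradient updates per epoch, rounding up for the last partial batch.
steps_per_epoch = math.ceil(n_train / batch_size)
total_updates = steps_per_epoch * epochs

print(steps_per_epoch, total_updates)  # 256 12800
```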
Step 1: Set up the microcontroller: download the supporting library for the Nano 33 BLE, then select the corresponding board and port.
Step 2: Open the example project provided by the Arduino_TensorFlowLite library. In detail, the path is File -> Examples -> TensorFlowLite -> magic_wand.
Step 3: To allow the Arduino board to capture motion, we also need the Arduino_LSM9DS1 library. If the library is version 1.0.0, add a FIFO buffer to improve performance (see the picture attached below); if the library is version 1.1.0, the FIFO buffer is already implemented.
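The FIFO buffer matters because the IMU keeps producing samples while the CPU is busy running inference; without buffering, those samples are simply dropped. The actual change lives in the Arduino_LSM9DS1 C++ driver, but the effect can be illustrated with a toy Python model (the "busy one tick in four" schedule is an arbitrary assumption for illustration):

```python
from collections import deque

def run_without_fifo(samples, busy_every=4):
    """Reader that can only accept a sample when the CPU is free;
    samples arriving during a busy tick are lost."""
    kept = []
    for i, s in enumerate(samples):
        if i % busy_every != 0:   # CPU is busy one tick in four
            kept.append(s)
    return kept

def run_with_fifo(samples, busy_every=4):
    """Same reader, but the IMU queues samples in a FIFO, so the reader
    drains everything it missed once it is free again."""
    fifo, kept = deque(), []
    for i, s in enumerate(samples):
        fifo.append(s)
        if i % busy_every != 0:
            while fifo:
                kept.append(fifo.popleft())
    return kept
```

With the FIFO, every sample reaches the model in order; without it, a fixed fraction of the motion data never arrives, which degrades gesture recognition.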
Step 4: Upload the compiled project and check the results in the serial monitor. A sample output is displayed in the video attached below.