In this tutorial you will learn how to create a motion-controlled sound device that we named the ThereSynth.
Introduction
We started this project as part of a Mobility and Internet of Things course at our faculty.
We decided to use the Arduino Nano 33 BLE SENSE as a gesture sensor. The gestures are recognised with the help of the online tool Edge Impulse. We also developed an Android app with Android Studio that connects to the Arduino Nano 33 BLE SENSE through Bluetooth Low Energy. The app receives gesture information over BLE and plays sounds accordingly.
This project requires no wiring and is very simple to set up and get started with. If you have the hardware, all you need is an Edge Impulse account, Android Studio, and the Arduino IDE, and you are ready to start.
The project is not perfect, but it demonstrates the proof of concept well. You are free to expand the project as you wish. We hope you enjoy it!
Gathering the data and creating the model
If we want to detect gestures, we can hardly avoid some machine learning. In the past, this could be difficult to do, as machine learning algorithms are complicated and can require a lot of time and effort to fully understand. The platform Edge Impulse makes machine learning techniques available to mere mortals like us. Edge Impulse can gather data from a variety of devices, process this data, and build a model from it.
The first step in using Edge Impulse is to create an account and connect one of the many available devices from which we will gather data. In this project, we will connect an Arduino Nano 33 BLE SENSE. To do this, there are some extra steps you have to take, but it is all neatly explained on the Edge Impulse website. Just make sure that the edge-impulse-daemon is running when you try to record data.
Once your device is connected, you are ready to start acquiring data! On the Edge Impulse platform, look for "Data acquisition" in the left-hand menu. Now select the Arduino Nano 33 BLE SENSE in the "Record new data" section of the screen. Since we will be detecting gestures in this project, we will record data from the accelerometer, so pick it as the sensor.
Now name a gesture and start teaching the model by pressing the "Start sampling" button. Edge Impulse will record data for the set amount of time (the default is 10 seconds), during which you should perform the gesture with the Arduino in your hand. The gesture can be anything you want, but we suggest gestures that are distinct from each other, as this will make the model more accurate. Also make sure that you add the "idle" gesture, as this helps with performance.
In our case, the gestures are the letters M, Z, L, O and the whip and stab gestures. In hindsight, these could have been chosen better, because some of the letters look very similar from the accelerometer's point of view, but with enough data recorded, the model is accurate enough. If you mess up a recording, you can simply delete it. And while the gestures you record should be accurate, keep in mind that during use nobody will perform the gestures perfectly 100% of the time, so the model should also be taught some human error.
Once you have collected data for all of your gestures (at least 200 seconds for each gesture), you are ready to test the model! First, decide which part of the sampled data will be used for training the model and which for testing. While this can be done manually, it is best to simply randomize this process to get rid of any bias. To do this, navigate to the "Dashboard" part of the left-hand menu and, at the bottom, pick "Rebalance dataset". This will take 20% of your data and classify it as test data. You now have data that is prepared to build a basic machine learning model.
To build the model from the gathered and sorted data, pick "Create impulse" from the left-hand menu. You now have a lot of options, most of which are complicated and would take a university course to fully understand. For now, all you need to do is add a "Spectral Analysis" processing block, which extracts information about the movement along each axis, and a "Neural Network (Keras)" learning block, which is the method used to train the model. Now just click "Save impulse" and you are done with the setup.
You can inspect the spectral features of your data in the "Spectral features" part of the left-hand menu to see if your gestures are distinct enough. Click on the "Generate features" sub-menu and then the "Generate features" button to get a visual representation of how distinct your gestures are from one another.
How distinct the features are can be seen from the arrangement of the dots in space. If the dots are all in one big cluster, your model probably isn't very accurate.
If you are not satisfied with the results, you can tweak the impulse parameters, record more data for each of the gestures, or start from scratch and pick different gestures entirely. This can be done at any later point in the project if the detection of gestures is not good enough. If you think the model is set up well enough, you are ready to train it.
In the left-hand menu, pick "NN Classifier". Again, there are a lot of options, but the defaults will work fine for now. Just click "Start training" to start training the model. Once it's done, you can check how good your model is. This is best seen in the "Accuracy", "Loss" and "Confusion matrix" parts of the screen. They are all pretty self-explanatory, so we will not go into detail explaining them here.
In the "Model testing" part of the left hand side menu, you can use the data you classified as test data to test the model you trained. Click on "Classify all" to see how well your model reacts to the data you recorded. Here, a healthy accuracy to have is above 75%, but you should try to get the best results you can. In the "Live classification" part of the left hand side menu, you can record new data and test the detection rate of the model you just created. This is also a good indicator of how accurate your model is.
This concludes the machine learning part of the project. All that is left is to export the model you created onto the Arduino Nano 33 BLE SENSE. Edge Impulse also helps us with that, by creating an Arduino library from our model.
Deploying the model
To get the library, navigate to the "Deployment" part of the left-hand menu. Pick the option "Arduino library" and then click on "Analyze optimizations". Now click "Build". Your Arduino library should now download. Open the Arduino IDE and include the downloaded library as a .zip library. You should now have an example named "nano_33ble_sense_accelerometer_continuous" under "PROJECT_NAME Inferencing - Edge Impulse". Try running this example and observe the data you get through the serial monitor. If everything is working as it should, you are ready to start developing the Android Studio part of the project.
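If you are curious what that example does under the hood, the sketch below is a rough, condensed outline of the usual flow: fill a buffer with accelerometer samples, wrap it in a signal, and run the classifier. It is not the verbatim example code, and the header name ThereSynth_inferencing.h is just a placeholder for whatever header your exported library actually provides.

// Condensed sketch of the inference flow -- an illustration, not the bundled example itself
#include <ThereSynth_inferencing.h>  // placeholder: use the header your exported library generates
#include <Arduino_LSM9DS1.h>         // on-board IMU of the Nano 33 BLE SENSE

#define CONVERT_G_TO_MS2 9.80665f    // the daemon records training data in m/s^2, the IMU reports g

// One window of accelerometer samples (x, y, z interleaved), sized by the model
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

void setup() {
  Serial.begin(115200);
  if (!IMU.begin()) {
    Serial.println("Failed to initialise the IMU!");
    while (1);
  }
}

void loop() {
  // Fill one window, roughly at the interval the impulse was trained with
  for (size_t i = 0; i < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; i += 3) {
    while (!IMU.accelerationAvailable()) {}
    IMU.readAcceleration(features[i], features[i + 1], features[i + 2]);
    features[i] *= CONVERT_G_TO_MS2;
    features[i + 1] *= CONVERT_G_TO_MS2;
    features[i + 2] *= CONVERT_G_TO_MS2;
    delayMicroseconds((unsigned int)(EI_CLASSIFIER_INTERVAL_MS * 1000));
  }

  // Wrap the buffer in a signal and run the classifier on it
  signal_t signal;
  if (numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal) != 0) {
    return;
  }
  ei_impulse_result_t result = { 0 };
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
    return;
  }

  // Print the confidence for every gesture label
  for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    Serial.print(result.classification[ix].label);
    Serial.print(": ");
    Serial.println(result.classification[ix].value, 4);
  }
}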
You are also welcome to use our Arduino code, which includes recognition of the gestures mentioned before. The Arduino code is available on this link. You can just build the Arduino project and upload the code to your Arduino Nano 33 BLE SENSE.
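If you prefer to write your own sketch instead, the BLE side of it can look roughly like the snippet below, which uses the standard ArduinoBLE library to expose the detected gesture index as a notifying characteristic. The service and characteristic UUIDs and the local name are placeholders here, not necessarily the values our app expects, so adjust them to match on both ends.

// Minimal BLE sketch: advertise a gesture characteristic the phone can subscribe to
#include <ArduinoBLE.h>

// Placeholder UUIDs -- the real ones must match what the phone app scans for
BLEService gestureService("19B10000-E8F2-537E-4F6C-D104768A1214");
BLEByteCharacteristic gestureCharacteristic("19B10001-E8F2-537E-4F6C-D104768A1214",
                                            BLERead | BLENotify);

void setup() {
  Serial.begin(115200);
  if (!BLE.begin()) {
    Serial.println("Starting BLE failed!");
    while (1);
  }
  BLE.setLocalName("ThereSynth");              // placeholder device name
  BLE.setAdvertisedService(gestureService);
  gestureService.addCharacteristic(gestureCharacteristic);
  BLE.addService(gestureService);
  gestureCharacteristic.writeValue((uint8_t)0); // initial value: no gesture
  BLE.advertise();
}

// Call this with the index of the gesture the classifier just detected;
// a phone that subscribed to the characteristic receives it as a notification.
void notifyGesture(uint8_t gestureIndex) {
  gestureCharacteristic.writeValue(gestureIndex);
}

void loop() {
  BLE.poll();  // keep the BLE stack serviced
  // ... run the classifier here and call notifyGesture() when a gesture is recognised ...
}

On the phone side, the app subscribes to this characteristic and maps each received index to a sound.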
Making the app
For this part of the project, we developed an app that connects your smartphone to the Arduino Nano through BLE (Bluetooth Low Energy). This is how your phone gets gesture information from the Arduino Nano. You are encouraged to make your own app here, or at least to modify what we made so that the app gets a home-brewed feel. You can change the icons to match your gestures, add custom sounds, or develop completely new functionalities. In either case, the link to the GitHub repository is available here.
Open the Android Studio project and connect your phone to your computer with a USB cable. Make sure that your phone has developer debugging options enabled. Once that is done, you should be able to build the .apk file and install it on your phone.
Customizing the app
We won't explain the code in detail here, but we still want to give you some direction on how to make the project your own. The two main personalizations you can make are changing the gesture icons and changing the sounds that you can link to your gestures. To change the sounds, open the code from the GitHub repository with Android Studio, go to java/si.uni_lj.fe.tnuv.tnuv_projekt/customize_param, then go to the function getEntryList(), where you can play around with the sounds. You can change the file names to your custom sounds. Your custom sounds have to be included in the project folder, in TNUV-MIS-main/app/src/main/assets. Make sure that the filenames in the code exactly match those in the folder.
To change the icons shown for your gestures, go to lines 52, 53, and 54 of the file main_window.java and change the image_file_names strings to the names of the .png icons you put into the directory TNUV-MIS-main/app/src/main/res/drawable.
Final product
If you followed the instructions, you should now have a working gesture-detection synthesizer. If you still don't know whether this project is worth spending time on, you can watch this 2-minute presentation video and it will surely convince you.