The Internet of Things (IoT) has recently become a very popular concept for tech companies. The ability to connect to the internet has been built into many appliances, thermostats, light bulbs, home security systems, and even doorbells. All of these devices have seen an increase in utility, functionality, and of course price over the last few years. So where do we go from here? What is the next step in IoT?
Most first-generation IoT devices connected sensors and user controls to some cloud-based service that analyzed the data and then published it, replied back with information, or sent commands. Cloud-based services such as If This Then That (IFTTT) have evolved to provide an intermediary between platforms, relaying information and commands, and many companies provide portals and apps to configure their devices.
There is room to debate whether connecting the devices in our homes to the world wide web is a good idea. There are, of course, security issues that are cause for concern: does the advantage of being able to control our environment remotely, or with our voices, outweigh the risk? How do we resolve these security concerns while building more functionality and convenience into these new smart devices? The answer, of course, is to make those devices even smarter than they already are, so there is no need for them to transmit our personal data across the internet. In this way we can eliminate both privacy concerns about our personal data being compromised and the ability of others to access and control our devices remotely without our knowledge.
Living on the Edge

This is where Edge IoT devices come to the rescue. Edge computing adds more smarts to smart devices, giving them the ability to analyze their data locally and react immediately on their own, instead of uploading sensor data to a cloud service and waiting for a response. A smart thermostat might turn on the heater when certain conditions are met, usually paired with a cloud service to determine the most cost-efficient way to control the heater; a smart Edge thermostat can gather and analyze information locally and make decisions on its own, without external intervention.
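To make the edge-versus-cloud distinction concrete, here is a minimal sketch of the kind of decision an edge thermostat can make entirely on-device, with no round trip to a cloud service. The function name, setpoint, and hysteresis band are all hypothetical values chosen for illustration, not any real product's logic:

```python
def heater_command(current_temp_c, heater_on, setpoint_c=20.0, band_c=0.5):
    """Return True to run the heater, False to stop it.

    A hysteresis band around the setpoint prevents rapid on/off cycling.
    The whole decision runs locally; nothing leaves the device.
    """
    if current_temp_c < setpoint_c - band_c:
        return True          # too cold: turn (or keep) the heater on
    if current_temp_c > setpoint_c + band_c:
        return False         # warm enough: turn (or keep) the heater off
    return heater_on         # inside the band: keep the current state

# At 19.0 C the heater switches on; at 20.2 C it keeps its current
# state (inside the band); at 20.6 C it switches off.
```

A cloud-paired thermostat might still consult a service for pricing or scheduling, but the safety-critical reaction above never needs the network.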
Avnet and Octonion have taken Edge computing one step further by adding Artificial Intelligence (AI) directly to the device, so it can interpret and analyze complex sensory data without needing to upload everything to the cloud for analysis. The SmartEdge Agile is loaded with sensors and has built-in data encryption for secure communications when it does connect to the cloud service. With a gyroscope, an accelerometer, a magnetometer, a pressure sensor, temperature and humidity sensors, an ambient light sensor, and a microphone, this little device comes loaded for bear and ready for just about any situation where real-time data analytics are useful. This opens the door to applying the technology in new and innovative areas.
With its on-board sensors, this little gem can be used for any application where an immediate response is needed based on specific conditions. Imagine a sensor in your car that can alert you to a possible problem based on vibrations or changes in the sound of the engine, or a pair of shoes that can reconfigure themselves to your feet based on your gait while you're running.
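The engine-vibration idea boils down to a simple on-device anomaly check: learn a baseline vibration level, then flag readings that drift far from it. The sketch below is a toy illustration of that pattern, not the Agile's actual algorithm; the thresholds and data are made up:

```python
import statistics

def vibration_alert(samples, window=20, sigma=3.0):
    """Return indices where vibration magnitude deviates sharply from
    the baseline established by the first `window` samples.

    A toy edge-style check: everything runs locally on a small buffer.
    """
    baseline = samples[:window]
    mean = statistics.fmean(baseline)
    stdev = statistics.pstdev(baseline) or 1e-9  # avoid zero division
    return [i for i, s in enumerate(samples[window:], start=window)
            if abs(s - mean) > sigma * stdev]

# Steady readings around 1.0 g, then a spike the detector should catch:
readings = [1.0, 1.02, 0.98, 1.01, 0.99] * 4 + [1.0, 2.5, 1.0]
print(vibration_alert(readings))  # index of the 2.5 g spike
```

A real deployment would use a rolling baseline and proper signal processing, but the point stands: the alert can fire on the device itself, with no cloud involved.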
The Toys!

The SmartEdge Agile connects via Bluetooth 5.0 to a gateway device (in my case a cell phone, via an Android app), and from there to the Brainium portal (https://brainium.com), where the sensor data can be analyzed and an AI model can be built and deployed to one or more devices.
The AI studio in the portal allows you to analyze motions made with the device and add widgets that log sensor data. For my example, I'm going to set up specific movements with a wand to turn a table lamp on and off.
The Project

I purchased a Harry Potter wand to attach the Agile to, so that movements can be more uniform, and so I can impress people with my magic wand. I then taped the Agile to the handle of the wand, since the wand is too thin to place the device inside of it.
The table lamp is plugged into a TP-Link Kasa mini WiFi Smart Plug that can be controlled remotely. This is pretty much all of the setup this project requires.
Getting Started

First things first, you'll need to sign up for a user account on the Brainium website. Enter your email address and you will receive an email with login information.
The next thing you will need to set up is a gateway for the SmartEdge Agile to connect to in order to communicate with the portal.
There are currently three options for the gateway: an Android or iOS smartphone, or a Raspberry Pi. The software for each can be downloaded from the following links:
- https://play.google.com/store/apps/details?id=com.brainium.android.gateway
- https://itunes.apple.com/us/app/brainium-gateway/id1446583825
- https://spa.brainium.com/apps/linux
Once the gateway is installed and you've logged into your account, you can add the SmartEdge device through the gateway. Turn on the SmartEdge Agile device and make sure the gateway software is running on your gateway. I noticed that it might not connect correctly while the screen on my phone is locked.
That's it for the setup; pretty simple.
Create a new project and train the model

Using the new-project wizard, I created a project called Magic Wand and added the Agile device to the project.
Next, in the motion recognition tab, I added the motions I wanted to train the Agile to recognize.
Selecting each shape one by one, I recorded a set of motions. I set 10 as the number of motions for each set and recorded multiple sets; the more data you give the algorithm, the better results you'll get. I chose 10 motions because it's not so many that I lose count, but it gives the algorithm enough (I hope) data to work with.

It's important to pause before and after each motion so that there is a clear distinction between the movements to be recognized. Remember, the system doesn't know what a square, a circle, or whatever shape you're training it for is; you are teaching it to recognize those things. It's a good idea to change positions between training sets, making the movements as close as possible to what you would do naturally with a magic wand to impress your friends! Don't try too hard to make each shape perfect each time; it should be natural. The imperfections will not matter if you provide enough training data, and they will help the model recognize imperfect movements.

Often the first motion isn't detected when you start. This happened to me mostly because I didn't pause long enough before starting; after a while I got the hang of it. Create training sets for all of the shapes you've decided to use in your project.

Once you've created all of the training sets you need, it's time to generate your model. Select all of the training sets you want to include and select Generate the Model. This will take a few minutes, depending on the amount of data and the complexity.
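Why do the pauses matter? Before the recognizer can classify anything, it has to segment the continuous accelerometer stream into discrete gestures, and quiet stretches are the natural segment boundaries. Here is a rough sketch of that idea; this is not Brainium's actual segmentation, just an illustration with made-up function names and thresholds:

```python
def segment_gestures(magnitudes, rest_level=0.1, min_pause=3):
    """Split a stream of motion magnitudes into gestures separated by
    pauses: runs of at least `min_pause` near-rest samples.

    Illustrative only; thresholds are arbitrary.
    """
    gestures, current, quiet = [], [], 0
    for m in magnitudes:
        if m <= rest_level:
            quiet += 1
            if quiet >= min_pause and current:
                gestures.append(current)  # pause long enough: close gesture
                current = []
        else:
            quiet = 0
            current.append(m)
    if current:
        gestures.append(current)          # stream ended mid-gesture
    return gestures

# Two bursts of movement separated by a clear pause → two gestures:
stream = [0.0, 0.9, 1.1, 0.8, 0.0, 0.0, 0.0, 1.2, 1.0, 0.0]
print(len(segment_gestures(stream)))  # 2
```

If you never pause, two wand movements run together into one long "gesture" that matches nothing in the training data, which is exactly the missed-first-motion behavior described above.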
After the model is generated, it's time to test its accuracy. If you find that the model is having a hard time recognizing your movements, you can add more training sets. Let's deploy the model to the Agile device and test it out! Remember, we're setting up an Edge AI device, so the actual motion recognition is done on the device itself; the portal is just for setup and a place to store data.
Back on the projects page, select the Magic Wand project. Under devices you'll find the magic wand with all of its sensors listed:
Select AI Rules, and in the popup window select the Motion Recognition workspace type and the Shapes workspace. Select the prepared model you want to use and press Apply. The model will be sent to the Agile device; this may take a few minutes.
Once the model is applied you can set up a widget to show recognized motions, or any other type of sensor data you might want to monitor:
Now that we know the model is working as expected, it's time to go back to the devices page and select AI Rules again. This time let's add two new AI rules: send an email if the circle is detected, and send an email if the square is detected. Brainium has a couple of nice APIs for the portal, but for expediency I will just use an email message to turn the lamp on and off.
In my Gmail account (the one used to create my user account on the portal) I set up two new filters: one to assign the label On if a message arrives from the Brainium portal with the word circle in the body, and one to assign the label Off if the word square is detected.
Once the rules in Gmail are set up, it's time to connect the Gmail account and the TP-Link/Kasa account (for the smart plug) to IFTTT; just follow the directions. Next, create two new applets: one to turn the power on at the plug when a message with the label On arrives in my inbox, and one to turn the power off when a message labeled Off is detected.
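The whole Gmail-plus-IFTTT chain boils down to two lookups: a keyword in the alert body picks a label, and the label picks a plug action. Here is a sketch of that routing logic as plain code; the names are hypothetical stand-ins, since the real work is done by Gmail filters and IFTTT applets rather than anything you write yourself:

```python
# Hypothetical stand-in for the Gmail-filter + IFTTT-applet chain:
# a keyword in the alert body selects a label, the label selects an action.

KEYWORD_TO_LABEL = {"circle": "On", "square": "Off"}
LABEL_TO_ACTION = {"On": "turn_on", "Off": "turn_off"}

def route_alert(body):
    """Return the plug action for a Brainium alert email, or None."""
    text = body.lower()
    for keyword, label in KEYWORD_TO_LABEL.items():
        if keyword in text:
            return LABEL_TO_ACTION[label]
    return None

print(route_alert("Motion detected: circle"))  # turn_on
print(route_alert("Motion detected: square"))  # turn_off
```

Seeing the chain as a pair of dictionaries also makes it obvious how to extend the project: a new shape just means one more keyword, one more Gmail filter, and one more applet.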
That's it! Now to impress my friends...
The SmartEdge Agile device has a lot of potential, and I'm sure I will be building more projects with it in the future. I can't wait to see how this platform evolves.