Machine learning development happens in two distinct stages. An algorithm is first trained on a large set of sample data on a fast, powerful machine or cluster; the trained network is then deployed to an application or device that needs to interpret real data. Over the last six months, the computing power required for this deployment, or “inference,” stage has dropped dramatically.
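The split is easy to see in miniature. Below is a toy sketch of the idea, assuming nothing beyond NumPy: the “training” half does the expensive iterative work, while the “deployed” half needs only the frozen parameters and a single cheap dot product. Real systems would use a framework like TensorFlow or PyTorch, but the division of labor is the same.

```python
import numpy as np

# --- Stage 1: training (runs once, on a fast machine or cluster) ---
# A tiny logistic-regression model fitted to synthetic data with
# plain gradient descent. This is the compute-hungry part.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable labels

w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# --- Stage 2: inference (runs on the device) ---
# The deployed artifact is just the frozen weights; classifying a new
# sample is a single dot product, with no training code on the device.
def predict(sample, w, b):
    return 1.0 / (1.0 + np.exp(-(sample @ w + b))) > 0.5

print(predict(np.array([1.0, 1.0]), w, b))
```

The asymmetry in the loop counts is the whole story: training touches the data hundreds of times, while inference touches each input once.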
The release of TensorFlow Lite for Microcontrollers back in March was followed in May by the announcement of a public beta of Xnor.ai’s AI2GO platform. Using proprietary binary weight networks, the AI2GO platform is aimed not just at embedded hardware, but at existing hardware.
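AI2GO itself is proprietary, but the basic trick behind binary weight networks, popularized in the XNOR-Net paper by Xnor.ai’s founders, can be sketched in a few lines (this is an illustration of the general technique, not Xnor.ai’s implementation): each real-valued weight tensor is replaced by its sign pattern plus a single scaling factor, cutting storage roughly 32-fold and turning expensive multiplies into cheap sign flips or bitwise operations on embedded chips.

```python
import numpy as np

def binarize(W):
    """Approximate a weight tensor as alpha * sign(W), XNOR-Net style."""
    alpha = np.mean(np.abs(W))   # single full-precision scale factor
    B = np.sign(W)               # weights collapse to {-1, +1}
    return alpha, B

# A toy 2x2 weight matrix standing in for a network layer.
W = np.array([[0.7, -0.5],
              [-0.9, 0.6]])
alpha, B = binarize(W)

x = np.array([1.0, 2.0])
approx = alpha * (B @ x)         # binary-weight approximation
exact = W @ x                    # full-precision result
print(approx, exact)
```

The approximation is coarse on a toy matrix, but across the millions of weights in a real network the accuracy loss is small enough that models like this can run on hardware never designed for machine learning.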
Which is where Wyze comes into the picture.
“The company has also been responsive to customer concerns and questions relating to security and privacy. Not only can its security camera capture all data locally using an SD card (it does limit some of the functionality), but Wyze has been quick to address customer concerns about its data going to Chinese servers. (Wyze also allows for two-factor authentication on all its devices, making them more secure than many other smart home products.)”—Stacey Higginbotham
I actually think we’re going to see a lot of this over the next six months to a year, as manufacturers react to the privacy concerns around the Internet of Things. The partnership between Xnor.ai and Wyze to put Xnor’s AI2GO framework onto the Wyze Cam is a leading indication that it isn’t just the arrival of hardware designed to run machine learning models at vastly increased speed that is driving machine learning on the edge. It is also the ability to run machine learning models inside a relatively low power envelope, without needing a connection to the cloud, that now makes edge-based computing a far more attractive proposition.
Right now, the best sensor we have is the camera, and a lot of our machine learning models are designed around it. The ability to run these sorts of models on existing hardware, like the Wyze Cam, without adding to the existing bill of materials cost could well be a significant driver for the edge ecosystem.
The ecosystem around edge computing is, in fact, starting to feel far more mature. Which means that the biggest growth area in machine learning practice over the next year or two could well be inference, rather than training.
It also addresses one of the real problems we’ve seen with the Internet of Things: that of ownership. As customers we may have purchased a physical thing, but the software and services that make that thing smart remain in the hands of the manufacturer. The data the thing generates belongs to, or at least remains in the possession of, the manufacturer, not the person who paid for it. Which has changed the very idea of what it means to own something.
However, it now seems likely that machine learning on the edge can change that, allowing users more control over, and closer ownership of, their own data.
Nearly ten years ago, Mark Zuckerberg famously stood up and said that privacy was no longer “…a social norm.” But earlier this year, Zuckerberg stood on stage at Facebook’s F8 conference and backtracked, saying that “…the future is private.”
Even if you don’t believe Zuckerberg, the fact that he said it tells us something. This is the man who, ten years ago, stood up and sold us on the mantra of the big data age: that privacy wasn’t important any more. That he has now reversed himself tells us that that age is over.
Take the data. Act on the data. Then throw the data away.