Smart Buildings From Dumb Sensors

Machine learning turns simple, privacy-preserving sensors into a low-cost, scalable smart building platform.

Nick Bild
2 years ago • Machine Learning & AI
Chameleon, rear view (left) and front view (right) (📷: A. Rico et al.)

The potential of smart buildings (buildings that gather data and use it to create a responsive environment) to transform our everyday lives is only beginning to be unlocked. By fully adopting smart building technologies, it is possible to reduce energy costs through automatic, occupancy-based lighting and temperature controls, and to maximize the use of spaces (or eliminate expensive, unused ones) based on historical usage data. By watching for anomalous usage patterns, one can also predict which systems may need preventive maintenance, avoiding the significant expense of an actual failure.
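The article does not spell out how such predictions would be made, but the basic idea is to compare current usage against a historical baseline and flag large deviations. A minimal sketch, assuming daily equipment run-time logs are available (the function and data below are hypothetical):

```python
import numpy as np

def flag_anomalous_days(daily_runtime_hours, z_threshold=3.0):
    """Return indices of days whose runtime deviates strongly from the
    historical mean, a crude proxy for 'schedule a maintenance check'."""
    usage = np.asarray(daily_runtime_hours, dtype=float)
    z_scores = (usage - usage.mean()) / usage.std()
    return np.flatnonzero(np.abs(z_scores) > z_threshold)

# Example: an HVAC unit that normally runs ~8 hours/day suddenly runs 20.
runtimes = [8.1, 7.9, 8.3, 8.0, 7.8, 8.2, 20.0]
print(flag_anomalous_days(runtimes, z_threshold=2.0))  # -> [6]
```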

With such huge potential, one might ask why these technologies have not been more widely adopted to date. Two of the most prominent reasons are privacy considerations and the large amount of computational resources that are often required. Help is on the way, however, in the form of a new adaptive sensor fusion and hybrid machine learning architecture being developed by researchers at the MIT Media Lab. Called Chameleon, this system eschews invasive sensors like cameras and microphones and is designed to run on a tiny microcontroller.

Chameleon is built around an optical carbon dioxide sensor and a passive infrared (PIR) sensor. PIR sensors are the low-resolution, not-so-smart sensors that typically switch on the lights when you walk into a room, and carbon dioxide sensors, naturally, measure the concentration of that gas in the air. These sensors collect very coarse data that cannot uniquely identify individuals. But, as it turns out, when you pair them with machine learning, it is possible to recognize the types of activities happening in a room, and with minimal computational horsepower.
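The article does not describe Chameleon's feature pipeline, but a common approach with sensors like these is to window each signal and compute summary statistics. A rough sketch, where the window length and feature choices are assumptions rather than the team's actual design:

```python
import numpy as np

def extract_features(pir_triggers, co2_ppm, window_size=60):
    """Summarize one window of coarse sensor readings.

    pir_triggers: per-second PIR motion flags (0 or 1)
    co2_ppm:      per-second CO2 concentrations in ppm
    """
    pir = np.asarray(pir_triggers[-window_size:], dtype=float)
    co2 = np.asarray(co2_ppm[-window_size:], dtype=float)
    return np.array([
        pir.mean(),          # fraction of the window with motion
        pir.sum(),           # total motion events
        co2.mean(),          # average CO2 level
        co2[-1] - co2[0],    # CO2 drift across the window (occupancy proxy)
        np.diff(co2).std(),  # short-term CO2 variability
    ])
```

Nothing in statistics like these can identify a specific person; they describe only aggregate motion and air quality, which is what makes the approach privacy-preserving.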

By using supervised and unsupervised machine learning algorithms, the technique was found to be 87% to 99% accurate in classifying activity states (e.g., video meeting, exercising, sleeping, having a meal). This impressive level of accuracy was achieved after as little as one week of training. Tests were conducted in both a sparsely occupied office and a more densely occupied classroom, and the algorithms delivered consistently good results even as the layouts of the rooms changed. This is good news for the feasibility of deploying Chameleon at scale.
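The specific algorithms are not named here. To make the supervised half concrete, here is a minimal sketch that trains a small random forest on feature windows like those above; the two classes and all of the numbers are synthetic stand-ins, not the team's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in for a week of labeled feature windows:
# class 0 ~ "empty room", class 1 ~ "meeting" (more motion, higher CO2).
X_empty = rng.normal([0.05, 3, 450, 0, 2], [0.02, 1, 30, 5, 1], size=(200, 5))
X_busy = rng.normal([0.60, 36, 900, 40, 8], [0.10, 6, 80, 10, 3], size=(200, 5))
X = np.vstack([X_empty, X_busy])
y = np.array([0] * 200 + [1] * 200)

# A shallow forest keeps the model small enough for modest hardware.
clf = RandomForestClassifier(n_estimators=50, max_depth=8, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

A model this small could plausibly be exported to plain C arrays and evaluated on a microcontroller, which matches the resource constraints the researchers set for themselves.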

The team hopes that their device will someday be incorporated into digital urban-planning processes, so that communities can better manage their offices, classrooms, parks, and homes. And since the system protects privacy, uses minimal compute resources, and requires neither long calibration procedures nor large, labeled data sets, this may be just the beginning of the work's future impact.
