Sensing a Shift in Edge ML

Building ML processing directly into sensors makes it easy to create reconfigurable ML-powered devices without any special knowledge.

Nick Bild
ML sensor incorporated into a larger system design (📷: P. Warden et al.)

By now, you would have to be living under a rock not to be aware of the successes of edge machine learning, and how it is transforming what is possible across a wide range of application areas. But unlike many other technological advances that are more straightforward to understand, the tensors, activation functions, weights and biases, and backpropagation of machine learning can be very difficult to grasp. With such barriers to even understanding the basic concepts, it is no surprise that actually implementing and deploying a machine learning-powered device presents challenges that are preventing even more widespread use of the technology, and the benefits it could bring.

Fortunately, there is help available to turn ideas into reality without expertise in the field. Edge Impulse, for example, allows entire machine learning pipelines to be developed — and deployed to physical hardware — with a simple point-and-click web-based interface. And now a group of researchers at Stanford and Harvard universities have put forward an idea for the tight integration of sensors with machine learning pipelines such as these to create reusable building blocks that can be incorporated into arbitrary device designs. Dubbed ML sensors, these tightly coupled pairings of sensors and machine learning processors would hide the details of their internal operation, and instead return simpler outputs.

As an example, imagine a person detection ML sensor. When pointed at a person, it might return a simple binary signal indicating that a person is present. This sensor and its output signal would be very easy to incorporate into a larger system design that includes other ML and traditional sensors, as well as general processing units. The details about the neural network and data processing are hidden inside the sensor itself, so no special knowledge would be needed to use it.
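To make the idea concrete, here is a minimal sketch of how such a binary interface might look to a system integrator. The `MLPersonSensor` class is hypothetical (the paper does not specify an API); it stands in for a physical device that runs inference on-board and exposes only a single yes/no reading, say over I2C or a GPIO line:

```python
class MLPersonSensor:
    """Hypothetical stand-in for a person detection ML sensor.

    All neural network and data processing details live inside the
    device; the only thing the outside world sees is read().
    """

    def __init__(self):
        # Stand-in for on-device state; a real sensor would update this
        # from its internal inference loop.
        self._person_present = False

    def _update(self, person_present: bool):
        # Simulates the sensor's on-board model producing a new result.
        self._person_present = person_present

    def read(self) -> bool:
        # The entire public interface: is a person present right now?
        return self._person_present


# The larger system treats the ML sensor like any traditional sensor.
sensor = MLPersonSensor()
sensor._update(True)  # pretend the on-board model just detected someone
if sensor.read():
    print("person detected")  # e.g., wake a display, log an event
```

The point of the sketch is that the consuming code never touches tensors, weights, or model files; swapping in a retrained or upgraded sensor would not change a line of it.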

The previous example demonstrates the overarching goal of ML sensors, which is to abstract the machine learning details from the user and make the interface frictionless. Another guiding principle of the technology is the preservation of privacy and security. With sensor data never needing to leave the device, fairly strong guarantees can be made that it will remain private. This is particularly important when dealing with images or audio data, but the principle applies to any type of data that is generated.

Now, abstraction is good for keeping things simple, but keeping ML sensors a total black box would not be a good idea. It is important that a user knows exactly what to expect from the sensor, and what its limitations might be. For this reason, the team recommends that a special datasheet be provided along with each ML sensor. It would detail how the device works at a high level, but would also point out any relevant operational characteristics. In the case of a person detector, a datasheet might mention what lighting conditions are acceptable, how far a subject can be from the sensor, and the number of inferences per second that will be produced.
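As a sketch of what those operational characteristics could look like in machine-readable form, here is a hypothetical datasheet excerpt for the person detector. The field names and values are illustrative assumptions, not taken from the researchers' proposal:

```python
# Hypothetical machine-readable excerpt of an ML sensor datasheet.
# All field names and limits are illustrative, not from the paper.
person_detector_datasheet = {
    "description": "Binary person-presence detector",
    "output": {"type": "binary", "true_means": "person present"},
    "operating_conditions": {
        "min_illuminance_lux": 50,      # acceptable lighting conditions
        "max_subject_distance_m": 3.0,  # how far a subject can be
    },
    "inference_rate_hz": 10,            # inferences produced per second
}

def within_operating_range(illuminance_lux, distance_m, sheet):
    """Check a planned deployment scenario against the datasheet limits."""
    limits = sheet["operating_conditions"]
    return (illuminance_lux >= limits["min_illuminance_lux"]
            and distance_m <= limits["max_subject_distance_m"])

print(within_operating_range(200, 2.0, person_detector_datasheet))  # True
print(within_operating_range(10, 2.0, person_detector_datasheet))   # False
```

A designer could check such a document before committing to a sensor, rather than discovering in the field that, say, a dim hallway falls outside the device's validated conditions.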

ML sensors do not solve every problem — model bias and the lack of interpretability of results remain concerns. However, simplifying the process of incorporating machine learning into hardware designs has the potential to solve many problems and stoke innovation. For now, though, this is still just an idea. No one has actually created a functional ML sensor just yet, so we will have to be patient for a bit longer before we see how ML sensors might help to shape the future.
