AI Trust You

Deep Evidential Regression promises to tell us how confident we can be in the predictions produced by neural networks.

Nick Bild
3 years ago · Machine Learning & AI
Depth estimation with uncertainty (📷: A. Amini et al.)

Remember that feeling after you finished your first “Hello world” project in machine learning? You were running inference on your newly trained model, marvelling at how well it could detect bananas. 99% confidence! Then you decided to turn the camera on an orange to let it work its magic again. 99% confidence that it’s a... banana?!?

That first project was your introduction to the sometimes strange world of machine learning, where insufficient training examples or inappropriate hyperparameter choices can yield odd results. Odd results that the model is very sure are accurate, that is.

Mistakes in a trivial example of classifying fruit are all fun and games, but what about other applications of machine learning? How about medical diagnoses, or autonomous driving? Clearly mistakes here are no laughing matter.

A collaboration between MIT and Harvard University researchers seeks to give us a level of certainty about the results produced by neural networks, especially in safety-critical problem domains, with a technique they call Deep Evidential Regression.

Current methods for assessing confidence in a model's results rely on running the model many times over. For large neural networks, which can have billions of parameters, this demands so much processing time and memory that it is infeasible for many applications.
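To make the contrast concrete, here is a minimal sketch of the conventional sampling approach: run a stochastic model many times and treat the spread of the outputs as the uncertainty. The `model` callable here is a hypothetical stand-in for something like a network with dropout left on at inference time; it is not code from the paper.

```python
import random
import statistics

def sampled_uncertainty(model, x, n_runs=30):
    """Run a stochastic model n_runs times on the same input and
    summarize the outputs: the mean is the prediction, the standard
    deviation serves as an uncertainty estimate. Every extra run
    costs a full forward pass, which is the expense the single-pass
    method avoids."""
    preds = [model(x) for _ in range(n_runs)]
    return statistics.mean(preds), statistics.stdev(preds)

# Toy stochastic "network" standing in for a dropout model:
# a noisy version of y = 2x.
toy_model = lambda x: 2.0 * x + random.gauss(0.0, 0.1)
mean, std = sampled_uncertainty(toy_model, 3.0)
```

Thirty forward passes for one uncertainty estimate is exactly the kind of overhead that becomes prohibitive for billion-parameter models.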

The team developed a new method in which only a single run of the model is required. That run produces both a prediction and a probabilistic distribution representing the certainty of that prediction. The distribution captures uncertainty in both the model's prediction and the underlying data from which it was derived.
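In the paper behind this work, the network's single forward pass outputs the four parameters of a Normal-Inverse-Gamma distribution, from which both kinds of uncertainty fall out in closed form. The sketch below shows those formulas; the function name and example values are illustrative, not taken from the authors' code.

```python
def evidential_uncertainty(gamma, nu, alpha, beta):
    """Given the four Normal-Inverse-Gamma parameters produced by a
    single forward pass of an evidential network, return the point
    prediction plus the two uncertainties, following the closed-form
    expressions in Amini et al. (requires alpha > 1)."""
    prediction = gamma                        # E[mu]: the predicted value
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2]: noise in the data
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]: uncertainty in the model
    return prediction, aleatoric, epistemic

# A confident prediction (high evidence: large nu and alpha) ...
print(evidential_uncertainty(2.0, nu=50.0, alpha=10.0, beta=1.0))
# ... versus an uncertain one (low evidence), same predicted value.
print(evidential_uncertainty(2.0, nu=0.5, alpha=1.5, beta=1.0))
```

No extra forward passes are needed: the same numbers the network emits for its prediction also price its confidence.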

To test the new method, the researchers applied it to a common computer vision problem: depth estimation. Their model performed as well as the best existing models while also producing a confidence metric. As expected, this metric showed high degrees of uncertainty when the depth prediction was incorrect.

Deep Evidential Regression shows promise for giving us confidence in the often mysterious results neural networks produce, but it still has room for improvement. Tuning parameters and properly penalizing misleading evidence are areas the team continues to explore for future refinement.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.