Researchers Propose a Link Between Certain Computer Vision Models and Your Peripheral Vision
A quick-flash vision experiment suggests that adversarially robust computer vision models may work in the same way as peripheral vision.
A pair of researchers at the Center for Brains, Minds, and Machines at the Massachusetts Institute of Technology has found a link between adversarially robust computer vision models and a specific aspect of human vision: peripheral vision.
"It seems like peripheral vision, and the textural representations that are going on there, have been shown to be pretty useful for human vision," explains lead author Anne Harrington of the work. "So, our thought was, OK, maybe there might be some uses in machines, too."
Working with adversarially robust models, which are designed to withstand noise added deliberately to images to fool less-robust machine vision systems, the team found evidence that these robust models share commonalities with human peripheral vision processing, a result that could help boost machine performance while also adding to our knowledge of how human vision works.
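For readers unfamiliar with such deliberate noise, the short PyTorch sketch below shows one standard way an adversarial perturbation can be crafted; the Fast Gradient Sign Method, the toy linear model, and the epsilon value are illustrative assumptions here, not the authors' setup.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: shift every pixel slightly in the
    direction that most increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is nearly invisible but can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy usage: a linear "classifier" and a random image stand in for a
# real model and dataset. A robust model is one trained on batches of
# such perturbed images rather than clean images alone.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([0])
adversarial_image = fgsm_perturb(model, image, label)
```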
The team set up an experiment to determine whether adversarially robust models, which are trained on deliberately perturbed examples rather than clean images alone, encode image representations in the same way as peripheral vision in humans. Three models, one standard, one adversarially robust, and one designed around human peripheral processing, were each used to synthesize images from noise that match their internal representations of the originals.
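The synthesis step can be pictured with another hedged sketch: starting from random pixels, an image is optimized until a model's internal activations match those for the original, so the two look alike to the model even if not to a human viewer. The feature extractor, loss, and optimizer below are stand-in assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn

def synthesize_from_noise(feature_extractor, original, steps=500, lr=0.05):
    """Optimize random pixels until the model's internal activations
    match those it produces for the original image."""
    target = feature_extractor(original).detach()
    synth = torch.rand_like(original).requires_grad_(True)
    optimizer = torch.optim.Adam([synth], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Match the model's activations, not the raw pixels.
        loss = nn.functional.mse_loss(feature_extractor(synth), target)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            synth.clamp_(0.0, 1.0)  # keep pixels in a valid range
    return synth.detach()

# Toy stand-in for the early layers of a trained vision model.
feature_extractor = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
original = torch.rand(1, 3, 64, 64)
synthesized = synthesize_from_noise(feature_extractor, original)
```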
Human participants were then asked to distinguish between original images and their synthesized versions, flashed briefly at their periphery. The result: images synthesized by the adversarially robust model and by the peripheral model, Texform, were the hardest to pick out when presented at the far periphery of a participant's vision, unlike those generated by the standard model.
A key finding: the mistakes the human participants made were aligned across both Texform and the adversarially robust model, suggesting that the latter does indeed, if seemingly incidentally, capture some aspect of human peripheral processing.
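One common way to quantify that kind of alignment, sketched here as an assumption rather than as the paper's own analysis, is an error-consistency score: agreement on which individual trials went wrong, corrected for the overlap two observers with similar accuracy would show by chance.

```python
import numpy as np

def error_consistency(errors_a, errors_b):
    """Cohen's-kappa-style agreement on per-trial mistakes:
    1 = identical error patterns, 0 = no more overlap than chance."""
    a, b = np.asarray(errors_a, float), np.asarray(errors_b, float)
    observed = np.mean(a == b)
    # Agreement expected by chance from the two error rates alone.
    expected = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    return (observed - expected) / (1 - expected)

# 1 marks a trial the observer got wrong, 0 a trial answered correctly.
human_errors = [1, 0, 1, 1, 0, 0, 1, 0]
model_errors = [1, 0, 1, 0, 0, 0, 1, 1]
print(error_consistency(human_errors, model_errors))  # 0.5 for this toy data
```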
"We are shedding light into this alignment of how humans and machines make the same kinds of mistakes, and why," explains senior author Arturo Deza. "Why does adversarial robustness happen? Is there a biological equivalent for adversarial robustness in machines that we havenβt uncovered yet in the brain?"
"We could even learn about human vision by trying to get certain properties out of artificial neural networks," Harrington suggests.
The team's work is to be presented at the International Conference on Learning Representations (ICLR) 2022; a copy of the paper is available under open-access terms on OpenReview.net, with more information on MIT News.