Columbia University Researchers Gave This Robot Arm a True "Sense of Self"

This robot's self-awareness "is trivial compared to that of humans," admits one professor, "but you have to start somewhere."

A team of scientists from Columbia University has developed what they claim is the first robot to have a demonstrable sense of self — specifically, the ability to build a model of its own body entirely from scratch and without any prior training.

"Internal computational models of physical bodies are fundamental to the ability of robots and animals alike to plan and control their actions," the researchers explain of their work. "These 'self-models' allow robots to consider outcomes of multiple possible future actions without trying them out in physical reality."

This robot knows its own shape, not through manual modeling but by developing a sense of self. (📹: Chen et al)

Typically, though, such models must be built by hand before the robot is deployed. While forward kinematic models can be generated automatically, they provide only a limited picture of the robot, which is why the team opted for a different approach: a query-driven self-model built up from scratch by the robot itself.
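The "query-driven" phrasing hints at how such a model works: rather than storing explicit geometry, a network answers point-wise questions of the form "given my current joint angles, do I occupy this point in space?" A minimal sketch of that idea in PyTorch might look like the following; the joint count, layer sizes, and training details here are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a query-driven self-model: a small MLP that, given the
# robot's joint angles and a 3D query point, predicts whether that point is
# occupied by the robot's body. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class QuerySelfModel(nn.Module):
    def __init__(self, num_joints: int = 4, hidden: int = 256):
        super().__init__()
        # Input: joint angles (robot state) concatenated with an (x, y, z) query.
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # occupancy logit for the queried point
        )

    def forward(self, joint_angles: torch.Tensor, query_xyz: torch.Tensor) -> torch.Tensor:
        # joint_angles: (batch, num_joints); query_xyz: (batch, 3)
        return torch.sigmoid(self.net(torch.cat([joint_angles, query_xyz], dim=-1)))

model = QuerySelfModel()
angles = torch.zeros(1, 4)               # a hypothetical rest pose
point = torch.tensor([[0.1, 0.0, 0.3]])  # a point in the workspace, in meters
print(model(angles, point))              # ~0.5 before training; trained on the
                                         # robot's own observations, it learns
                                         # which points its body sweeps through
```

Because the model is just a function from (state, point) to occupancy, the robot can "imagine" the consequences of any candidate pose by querying it, without moving a single motor.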

"We were really curious to see how the robot imagined itself," says Hod Lipson, professor of mechanical engineering and director of the lab where the work was carried out. "But you can’t just peek into a neural network, it's a black box."

To prove that the robot did, indeed, possess a sense of self, the team turned to cracking open that black box with a range of visualization techniques, trying a selection before discovering the trick to turning the robot's self-image into something humans can understand. "It was a sort of gently flickering cloud that appeared to engulf the robot's three-dimensional body," Lipson says. "As the robot moved, the flickering cloud gently followed it."
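One simple way to produce that kind of cloud imagery, sketched here under the assumption of a query-style model like the one above rather than as the team's actual pipeline: sample the workspace on a regular grid, ask the trained self-model about every point, and plot whichever points it believes its body occupies.

```python
# Render a self-image as a point cloud: query the (hypothetical) QuerySelfModel
# from the earlier sketch at every point on a grid and scatter-plot the points
# the model considers part of its body.
import torch
import matplotlib.pyplot as plt

def render_self_image(model, joint_angles, threshold=0.5, steps=24):
    # Build a cube of query points spanning a 1 m^3 workspace (an assumption).
    axis = torch.linspace(-0.5, 0.5, steps)
    grid = torch.cartesian_prod(axis, axis, axis)    # (steps^3, 3)
    angles = joint_angles.expand(grid.shape[0], -1)  # same pose for every query
    with torch.no_grad():
        occupancy = model(angles, grid).squeeze(-1)  # (steps^3,)
    body = grid[occupancy > threshold].numpy()       # points "inside" the body
    ax = plt.figure().add_subplot(projection="3d")
    ax.scatter(body[:, 0], body[:, 1], body[:, 2], s=2)
    plt.show()

# Usage (an untrained model yields a shapeless blob; a trained one, the robot):
# render_self_image(QuerySelfModel(), torch.zeros(1, 4))
```

Animating this over a sequence of poses would yield something like the flickering cloud Lipson describes, following the body as the joint angles change.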

"We humans clearly have a notion of self," adds first author Boyuan Chen, now an assistant professor at Duke University. "Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move."

The self-model is, the researchers concluded, accurate to around one per cent of the robot's physical workspace, and could prove the first step on the path to granting future robots true self-awareness. "[The self-model is] trivial compared to that of humans," Lipson admits, "but you have to start somewhere. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks."

The team's work has been published under open-access terms in the journal Science Robotics. The project's code base, meanwhile, has been published on GitHub under the permissive MIT license.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.