Humans and other animals possess a very useful awareness of their own bodies. This isn’t the philosophical self-awareness tied to consciousness, but rather a sense of our own physical bodies called proprioception. That sense is how you’re able to touch your fingertip to your nose even when your eyes are closed. It’s also a sense that robots haven’t been able to develop on their own. Now, however, Columbia Engineering researchers have created an AI system that gives a robot just that.
Awareness of one’s own body, and of its capabilities, is critical to adapting to variable situations in the real world. For example, a cat knows whether it can fit through a small opening or jump on top of a tall fence, even if it has never encountered those specific obstacles before. For robots to integrate seamlessly into the world at large, they’ll need to be able to perform similar feats of body-awareness.
The deep-learning artificial intelligence system that the researchers developed allows a robot to do exactly that. They started with a four-DoF (degrees of freedom) robotic arm, but didn’t program the robot with any explicit awareness of itself. It didn’t know what shape it was, what geometry it possessed, or what capabilities it had. They then let it experiment with itself over a day and a half of training. Again, this wasn’t directed training; the robot was simply left to figure its own body out.
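The idea of learning about one’s own body through undirected experimentation is sometimes called “motor babbling.” Here is a minimal, hypothetical sketch of the concept — not the researchers’ actual system — assuming a simple two-link planar arm stands in for the real four-DoF robot. The robot issues random joint commands, records where its end effector lands, and the collected (command, outcome) pairs become a crude self-model that predicts outcomes without ever being told the arm’s geometry.

```python
import numpy as np

# Assumed link lengths of a toy 2-link planar arm (stand-in for the real robot).
L1, L2 = 1.0, 0.8

def true_arm(q):
    """Ground-truth forward kinematics: the 'physical' robot the learner probes."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# Motor babbling: issue random joint commands and observe where the hand ends up.
rng = np.random.default_rng(0)
commands = rng.uniform(-np.pi, np.pi, size=(5000, 2))
outcomes = np.array([true_arm(q) for q in commands])

def self_model(q):
    """Nearest-neighbor self-model: predict from the closest past experience."""
    i = np.argmin(np.linalg.norm(commands - q, axis=1))
    return outcomes[i]

# The learned model approximates the arm despite never seeing its geometry.
q_test = np.array([0.3, -0.5])
print(np.linalg.norm(self_model(q_test) - true_arm(q_test)))
```

The real system used a deep neural network rather than a nearest-neighbor lookup, but the principle is the same: self-knowledge emerges from the robot’s own trial-and-error data.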
Through that training, the robotic arm was able to develop a simulation model of itself, the artificial equivalent of your sense of proprioception. Afterwards, in a closed-loop pick-and-place test, the robot was able to use that self-simulation to place objects in a receptacle with 100% accuracy. In a second, open-loop test, where the robot had to rely solely on its self-simulation model with no external feedback, it achieved an accuracy of 44%. This development is a big step toward robot AI that understands its own body and how it should interact with the surrounding environment.
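The gap between those two scores comes down to feedback. A toy illustration — not the paper’s actual controller — can show why closed-loop control tolerates an imperfect self-model far better than open-loop control. Here a hypothetical one-dimensional “reach” task is assumed, where the robot’s learned estimate of its own gain is off by 15%:

```python
# Toy comparison of open-loop vs. closed-loop control under model error.
true_gain = 1.0    # how the 'real' arm actually responds to a command
model_gain = 0.85  # the self-model's imperfect estimate of that response
target = 2.0

# Open-loop: plan one command from the self-model, then execute blind.
open_loop_pos = true_gain * (target / model_gain)

# Closed-loop: after each move, observe the real position and correct.
pos = 0.0
for _ in range(20):
    cmd = (target - pos) / model_gain  # plan the correction with the model
    pos += true_gain * cmd             # execute it and observe the result

print(abs(open_loop_pos - target))  # open-loop: error from the model persists
print(abs(pos - target))            # closed-loop: feedback shrinks the error
```

With feedback, each iteration corrects most of the remaining error, so even a flawed self-model converges on the target; without feedback, the model’s error lands on the result in full. That mirrors the robot’s 100% closed-loop versus 44% open-loop performance.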