Split-Second "Event Camera" Allows Drones to Play Dodgeball, And to Win, Against Human Players

By sending data only from pixels that have detected motion, an "event camera" dramatically speeds up computer vision systems.

Using an "event camera," a drone can react rapidly to things like thrown objects. (📷: Falanga et al)

A smart camera sensor, dubbed an "event camera," gives drones and other autonomous vehicles split-second reaction times — allowing a quadcopter not only to play dodgeball against a human player but to win.

“For search and rescue applications, such as after an earthquake, time is very critical, so we need drones that can navigate as fast as possible in order to accomplish more within their limited battery life,” explains Davide Scaramuzza, leader of the Robotics and Perception Group at the University of Zurich, on the research into rapid-reaction computer vision. “However, by navigating fast, drones are also more exposed to the risk of colliding with obstacles, and even more so if these are moving. We realised that a novel type of camera, called an event camera, is a perfect fit for this purpose.”

The "event camera" differs from a regular camera through what researchers call "smart pixels," capable of limited on-board processing: Rather than sending an entire image to a controlling device, the smart pixels on an event camera trigger only when a change in light intensity — typically meaning motion — has been detected. As a result, the controlling device not only receives information as quickly as possible but the amount of data is dramatically reduced - speeding up how quickly it can respond.

Coupled with a customized object detection algorithm able to subtract the drone's own movement from the incoming data, the system delivers impressive results: Equipped with an event camera, a drone could respond to thrown objects in just 3.5 milliseconds with a 90 percent accuracy rate. That proved enough for the drone to successfully play dodgeball against a player three metres (around 9.8 feet) away throwing a ball at 10 m/s (around 33 feet per second). With an object of known size, a single camera feed proved sufficient; adding a second camera for stereoscopic vision allowed the drone to see and dodge objects of varying size, too.
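The ego-motion subtraction step can likewise be sketched in toy form. The snippet below assumes a pure camera rotation reported by an IMU and a small-angle approximation; the `compensate_rotation` helper, the focal length, and the one-pixel classification threshold are all hypothetical, and the algorithm Falanga et al describe is considerably more sophisticated.

```python
# Toy sketch of ego-motion compensation: warp events back by the camera's
# own rotation, then flag events that still drift as independently moving.
# All constants and names here are illustrative assumptions.
import numpy as np

def compensate_rotation(events, omega, f_px):
    """Warp each (x, y, t) event back to t = 0 under a constant angular
    velocity omega = (pitch, yaw) in rad/s, using the small-angle
    approximation that rotation appears as an image-plane translation."""
    out = events.copy()
    out[:, 0] -= f_px * omega[1] * events[:, 2]  # yaw shifts x
    out[:, 1] -= f_px * omega[0] * events[:, 2]  # pitch shifts y
    return out

# Synthetic check: a background corner dragged across the image purely by
# camera rotation, plus a ball that also has its own 400 px/s velocity.
f_px, omega = 300.0, np.array([0.0, 0.5])  # hypothetical focal length, yaw rate
t = np.linspace(0.0, 0.02, 5)              # 20 ms worth of event timestamps
background = np.stack([50 + f_px * omega[1] * t, 40 * np.ones_like(t), t], axis=1)
ball = np.stack([80 + (f_px * omega[1] + 400) * t, 60 * np.ones_like(t), t], axis=1)

for name, ev in (("background", background), ("ball", ball)):
    comp = compensate_rotation(ev, omega, f_px)
    residual = np.ptp(comp[:, 0]) + np.ptp(comp[:, 1])  # leftover motion, px
    print(name, "moving" if residual > 1.0 else "static", round(residual, 2))
# background static 0.0 / ball moving 8.0: only the ball's events keep
# moving after compensation, so they can be clustered and dodged.
```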

“One day drones will be used for a large variety of applications, such as delivery of goods, transportation of people, aerial filmography and, of course, search and rescue,” predicts Scaramuzza. “But enabling robots to perceive and make decisions faster can be a game changer also for other domains where reliably detecting incoming obstacles plays a crucial role, such as automotive, goods delivery, transportation, mining, and remote inspection with robots.”

"Our ultimate goal is to make one day autonomous drones navigate as good as human drone pilots," says lead author Davide Falanga of the research. "Currently, in all search and rescue applications where drones are involved, the human is actually in control.

"If we could have autonomous drones navigate as reliable as human pilots we would then be able to use them for missions that fall beyond line of sight or beyond the reach of the remote control."

The team's work has been published under open access terms in the journal Science Robotics.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.