MIT's Underwater Camera, Powered by Sound, Is 100,000 Times More Energy-Efficient Than Its Rivals

Harvesting energy from sound waves and storing it in a supercapacitor, this clever camera could deliver detailed climate data.

Researchers at the Massachusetts Institute of Technology (MIT) have developed a wireless underwater camera with a difference: it doesn't need a battery in order to capture and transmit full-color photographs.

"One of the most exciting applications of this camera for me personally is in the context of climate monitoring," says Fadel Adib, associate professor and director of the Signal Kinetics Group at MIT's Media Lab, of the team's work. "We are building climate models, but we are missing data from over 95 per cent of the ocean. This technology could help us build more accurate climate models and better understand how climate change impacts the underwater world."

Capturing imagery underwater has long been a challenge. First, the device has to be wholly water-tight. Second, it has to operate in conditions of low light. Third, it should ideally transmit its images wirelessly, which is no easy task in water, since water absorbs radio signals. Lastly, these requirements combine to demand a relatively large amount of power, which, traditionally, has meant bulky systems with heavy batteries.

The camera developed at MIT is different. It's waterproof, captures full-color images in low lighting conditions, and transmits them back to a receiver using sound waves carried through the water itself via low-energy backscatter, and it does so with a claimed 100,000-fold improvement in energy efficiency compared to the current state-of-the-art in underwater cameras.

That high level of efficiency means that it requires no internal battery. Instead, sound waves are converted into electricity through piezoelectric transducers and stored in a supercapacitor, then used to drive off-the-shelf ultra-low-power grayscale image sensors. To capture color imagery, these sensors are combined with red, green, and blue LEDs, capturing three grayscale images, one under each color of light, so they can be combined into a single color picture.

"We were trying to minimize the hardware as much as possible, and that creates new constraints on how to build the system, send information, and perform image reconstruction," Adib, senior author on the paper, explains. "It took a fair amount of creativity to figure out how to do this. When we were kids in art class, we were taught that we could make all colors using three basic colors. The same rules follow for color images we see on our computers. We just need red, green, and blue, these three channels, to construct color images."
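The color-reconstruction idea is simple enough to sketch in a few lines: take three grayscale exposures, one captured under red illumination, one under green, one under blue, and stack them as the three channels of an RGB image. The snippet below is a minimal illustration of that principle using NumPy; the function name and the toy frames are illustrative, not part of the MIT team's actual pipeline.

```python
import numpy as np

def combine_grayscale_frames(red_frame, green_frame, blue_frame):
    """Stack three grayscale exposures (one per LED color) into an RGB image.

    Each input is a 2-D array of pixel intensities recorded while only the
    corresponding LED lit the scene; stacking them along a new last axis
    yields a standard (H, W, 3) color image.
    """
    return np.stack([red_frame, green_frame, blue_frame], axis=-1)

# Toy 2x2 frames: each exposure lights up a different pixel.
r = np.array([[255, 0], [0, 0]], dtype=np.uint8)
g = np.array([[0, 255], [0, 0]], dtype=np.uint8)
b = np.array([[0, 0], [255, 0]], dtype=np.uint8)

rgb = combine_grayscale_frames(r, g, b)
print(rgb.shape)   # (2, 2, 3): a 2x2 color image with red, green, blue channels
```

In the toy output, the top-left pixel is pure red, the top-right pure green, and the bottom-left pure blue, exactly the "three channels" principle Adib describes.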

The team's prototype has proven the concept, and work is now underway on increasing the available memory for real-time capture, adding live-streaming and video capabilities, and boosting the communications range beyond its current upper limit of 40 meters (around 131 feet).

The researchers' paper has been published in the journal Nature Communications under open-access terms.

Main article image courtesy of Adam Glanzman/MIT.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.