While Google Glass, the AR headset that overlaid data on real-world objects, never gained commercial traction, Google is having another go at high-tech specs, this time to caption everyday sounds and spoken words. Wearable Subtitles is touted as a mobile solution that transforms speech and sound into visual text for deaf and hard-of-hearing users. Compared with reading a transcript on a smartphone, the glasses offer privately displayed text, hands-free use, improved mobility, and more socially acceptable interactions.
Wearable Subtitles is a proof-of-concept built on a 3D-printed frame that provides augmented communication via sound transcription for around 15 hours per day before needing a recharge. Google developed the glasses using a slender PCB outfitted with a MediaTek MT2523D SiP (Arm Cortex-M4), a Bluetooth 4.0/BLE transceiver, a power management IC, and a MIPI-DSI display controller. The entire hardware package is housed within the right side of the frame for easy concealment. The embedded system pairs with a smartphone over BLE: the phone's microphone captures audio, speech recognition runs on the phone, and the resulting transcript of words and sounds is streamed to the glasses.
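Google hasn't published the firmware or the wire protocol, but the division of labor described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the frame layout (sequence number, length, UTF-8 text) and the class and function names are assumptions, not the actual Wearable Subtitles protocol.

```python
# Hypothetical sketch of the phone-to-glasses caption pipeline: the phone
# transcribes audio and pushes small caption frames over a BLE-style link;
# the glasses render whatever text arrives. Packet layout is an assumption.

import struct

def encode_caption(seq: int, text: str) -> bytes:
    """Pack a caption update into a small frame suitable for a BLE
    characteristic write (assumed layout: 2-byte little-endian sequence
    number, 1-byte length, UTF-8 text)."""
    payload = text.encode("utf-8")
    if len(payload) > 255:
        raise ValueError("caption too long for a single frame")
    return struct.pack("<HB", seq, len(payload)) + payload

def decode_caption(frame: bytes) -> tuple[int, str]:
    """Inverse of encode_caption, as the glasses firmware might run it."""
    seq, length = struct.unpack_from("<HB", frame)
    return seq, frame[3:3 + length].decode("utf-8")

class MonocularDisplay:
    """Stand-in for the right-lens display: keeps only the newest caption,
    discarding stale or out-of-order frames by sequence number."""
    def __init__(self):
        self.last_seq = -1
        self.text = ""

    def on_frame(self, frame: bytes):
        seq, text = decode_caption(frame)
        if seq > self.last_seq:  # drop out-of-order updates
            self.last_seq = seq
            self.text = text

# Simulate the phone streaming a growing transcript to the glasses.
display = MonocularDisplay()
for i, caption in enumerate(["Hello", "Hello there", "Hello there, how are you?"]):
    display.on_frame(encode_caption(i, caption))
print(display.text)  # prints the most recent transcript
```

Keeping only the latest frame mirrors a live-caption display, where the newest partial transcript simply replaces the previous one rather than scrolling.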
The transcribed captions are then piped to a monocular display offset toward the right lens, with all text rendered in a white font to maximize contrast and visibility. The Google researchers worked with several deaf and hard-of-hearing participants to flesh out the design, making improvements to the frame, response times, lens clarity, and so forth for the next prototype. They also found that, in on-the-go contexts, participants considered the glasses more discreet than mobile phones. Picture holding a phone up to someone's face to transcribe their speech, and you get the idea. It will be interesting to see how Google's Wearable Subtitles evolves over the coming revisions and whether it hits the commercial market.