Researchers from the Future Interfaces Group and SMASH Lab at Carnegie Mellon University have developed a method to quickly and easily anchor augmented reality data to a spatial location and receive status reports from devices without the need for additional hardware: LightAnchors.
The appeal of waving your smartphone around and having it pull up data about the world around you, overlaid on a live view from the rear camera, is hard to overstate, yet the capability is surprisingly tricky to achieve. Existing implementations typically rely on depth-sensing camera systems, inertial measurement units, or special marker tags; LightAnchors needs nothing more than an LED or other light source already present on the item in question.
"Unlike most prior tracking methods, which instrument objects with markers (often large and/or obtrusive), we take advantage of point lights already found in many objects and environments," the team explains. "For example, most electrical appliances now feature small (LED) status lights, and light bulbs are common in indoor and outdoor settings. In addition to leveraging these point lights for in-view anchoring (i.e., attaching information and interfaces to specific objects), we also co-opt these lights for data transmission, blinking them rapidly to encode binary data."
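The idea of co-opting a status LED as a data channel can be sketched in a few lines. The preamble pattern, payload length, and bit timing below are illustrative assumptions for demonstration, not the actual LightAnchors protocol:

```python
# Illustrative sketch only: the sync preamble and 8-bit payload
# length here are assumptions, not the LightAnchors specification.

PREAMBLE = [1, 0, 1, 0, 1, 0]  # hypothetical sync pattern

def encode_uid(uid: int, bits: int = 8) -> list[int]:
    """Turn a small integer UID into an on/off blink sequence,
    most-significant bit first, prefixed with a sync preamble."""
    payload = [(uid >> i) & 1 for i in reversed(range(bits))]
    return PREAMBLE + payload

# A device UID of 0b10110010 becomes a 14-frame blink schedule;
# a microcontroller would drive the LED from this list, one state
# per bit period, while the phone camera samples the light.
print(encode_uid(0b10110010))
```

In practice the blink rate has to be matched to the receiving camera's frame rate so each bit period spans at least one captured frame.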
If that sounds familiar, it's because Carnegie Mellon isn't the only university playing with the concept: Earlier this year Stanford University unveiled InfoLED, which also tracked the location of objects and allowed for status feedback by reading their built-in LEDs. Where InfoLED allowed for recognition at up to 7 meters [around 23 feet] in indoor conditions, LightAnchors have been tested to 12 meters [around 39 feet] and with a wider range of light sources than status LEDs: "We modified an outdoor entrance light to output a fixed UID; the summoned LightAnchor displays the building name (Department of Motor Vehicles), its current status (open), and closing time (5pm)," the Carnegie Mellon team elaborates.
As with InfoLED, LightAnchors have a few drawbacks. There's no built-in security or authentication; the bandwidth is limited to very short streams of data, which typically must be transmitted at least twice for successful decoding; and tracking can be lost while the smartphone camera adjusts its exposure and focus. The system is also planar, locating a LightAnchor within two dimensions; extending this to the third dimension would require at least three LightAnchors arranged in a known geometry, or additional data from other augmented reality tracking systems.
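One plausible way a receiver could use the repeated transmission described above is to accept a payload only when two consecutive copies agree. The per-frame brightness thresholding and fixed payload length below are simplified assumptions, not the authors' implementation:

```python
# Hedged sketch of receive-side decoding with a repetition check.
# Brightness values would come from sampling the tracked light in
# successive camera frames; here they are hard-coded for illustration.

def bits_from_brightness(samples: list[float],
                         threshold: float = 0.5) -> list[int]:
    """Threshold per-frame brightness of the tracked light into bits."""
    return [1 if s > threshold else 0 for s in samples]

def decode_repeated(bits: list[int], payload_len: int):
    """Return the payload only if both transmitted copies match;
    otherwise return None so the receiver waits for a clean read."""
    if len(bits) < 2 * payload_len:
        return None
    first = bits[:payload_len]
    second = bits[payload_len:2 * payload_len]
    return first if first == second else None

# Eight brightness samples carrying the 4-bit payload 1,0,1,0 twice:
samples = [0.9, 0.1, 0.8, 0.2, 0.9, 0.1, 0.8, 0.2]
print(decode_repeated(bits_from_brightness(samples), 4))
```

Rejecting mismatched copies trades throughput for reliability, which fits a channel that is already limited to very short payloads and can glitch whenever the camera re-exposes.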