Carnegie Mellon Research Team Designs Method to Appropriate Point Lights for AR Interfaces

LightAnchors enables spatially-anchored data in augmented reality applications without special hardware.

Augmented reality (AR) environments require a fast and precise overlay of digital information onto everyday objects. The most common way to achieve this is with markers. A research team at Carnegie Mellon University is presenting a new method of visual tagging dubbed LightAnchors.

Instead of instrumenting objects with markers, which can be large and obtrusive, LightAnchors uses point lights that are often already present in objects and throughout environments. These point lights include things like the small LED status indicators found on most electrical appliances, as well as light bulbs. Beyond using these lights as anchors for attaching information and interfaces to specific objects, the system co-opts them for data transmission, blinking them rapidly to encode binary data. This means LightAnchors can transmit dynamic payloads without WiFi or any other connectivity, using nothing more than a microcontroller capable of blinking an LED, as in the sketch below.
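To make the transmitter side concrete, here is a minimal sketch in Python. The `set_led` callback, the 6-bit preamble, and the 60 Hz bit rate are all illustrative assumptions, not the paper's exact parameters; on a real microcontroller the same loop would drive a GPIO pin.

```python
import time

PREAMBLE = [1, 0, 1, 1, 0, 1]   # known pattern marking payload boundaries (illustrative)
BIT_PERIOD = 1 / 60.0           # one bit per camera frame at an assumed 60 fps


def to_bits(payload: bytes) -> list[int]:
    """Expand each byte into its 8 bits, most significant bit first."""
    return [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]


def transmit_forever(payload: bytes, set_led) -> None:
    """Blink the LED to repeat PREAMBLE + payload bits indefinitely.

    Because the message repeats, a receiver sees the preamble both before
    and after each payload, which makes segmentation straightforward.
    """
    bits = PREAMBLE + to_bits(payload)
    while True:
        for bit in bits:
            set_led(bool(bit))      # hypothetical hardware hook
            time.sleep(BIT_PERIOD)
```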

At a high level, the algorithm builds an image pyramid for each incoming video frame so that each point light shrinks to roughly a single pixel, then searches for candidate anchors by finding bright pixels surrounded by darker ones. On every frame, all candidates are passed from the detector to the tracker, which attempts to pair each current candidate with one from the previous frame, using a distance threshold to discard unlikely pairings. After tracking, the algorithm attempts to decode each candidate anchor by collecting its intensity values across frames and converting them to a binary sequence with a dynamic threshold. All data is encoded as a binary sequence prefixed with a known pattern; since the same message is transmitted repeatedly, the prefix appears at both the beginning and end of each payload, which makes segmentation simple. The sketches below illustrate each stage.
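First, detection. This is a rough sketch assuming NumPy and OpenCV; the pyramid depth and brightness margin are illustrative values, not the published ones.

```python
import cv2
import numpy as np


def find_candidate_anchors(frame_gray: np.ndarray, levels: int = 4,
                           margin: int = 60) -> list[tuple[int, int]]:
    """Return (x, y) positions of bright pixels ringed by darker neighbors.

    Downsampling shrinks each point light toward a single pixel, so a
    simple center-versus-neighborhood comparison can pick it out.
    """
    # Build the image pyramid: each level halves the resolution.
    small = frame_gray
    for _ in range(levels):
        small = cv2.pyrDown(small)

    candidates = []
    h, w = small.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = int(small[y, x])
            ring = small[y - 1:y + 2, x - 1:x + 2].astype(int)
            # The 8 surrounding pixels, excluding the center itself.
            neighbors = np.delete(ring.ravel(), 4)
            if center - neighbors.max() >= margin:
                # Map back to full-resolution coordinates.
                scale = 2 ** levels
                candidates.append((x * scale, y * scale))
    return candidates
```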
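Next, tracking. A bare-bones version pairs each new candidate with the nearest candidate from the previous frame and discards pairings beyond a pixel-distance threshold; the `max_dist` value here is an assumption for illustration.

```python
import math


def pair_candidates(previous: list[tuple[int, int]],
                    current: list[tuple[int, int]],
                    max_dist: float = 20.0) -> dict[int, int]:
    """Map indices of current candidates to indices of previous ones."""
    pairs = {}
    taken = set()
    for j, (cx, cy) in enumerate(current):
        best, best_d = None, max_dist
        for i, (px, py) in enumerate(previous):
            if i in taken:
                continue  # each previous candidate can be claimed once
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            pairs[j] = best
            taken.add(best)
    return pairs
```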
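Finally, decoding. This simplified sketch takes the dynamic threshold to be the midpoint between the brightest and dimmest samples; the actual thresholding scheme may differ, and the preamble must match whatever the transmitter uses.

```python
from typing import Optional

import numpy as np

PREAMBLE = [1, 0, 1, 1, 0, 1]  # must match the transmitter's prefix


def decode_anchor(intensities: list[float]) -> Optional[list[int]]:
    """Convert one anchor's per-frame intensities into a payload.

    Returns the bit sequence found between two preamble occurrences,
    or None if no complete message is present yet.
    """
    samples = np.asarray(intensities, dtype=float)
    threshold = (samples.max() + samples.min()) / 2  # dynamic threshold
    bits = [1 if s > threshold else 0 for s in samples]

    # Locate every occurrence of the known preamble pattern.
    n = len(PREAMBLE)
    starts = [i for i in range(len(bits) - n + 1)
              if bits[i:i + n] == PREAMBLE]
    if len(starts) < 2:
        return None  # need the prefix at both ends to segment a payload
    # The payload sits between the end of one preamble and the start of the next.
    return bits[starts[0] + n:starts[1]]
```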

Study data captured with an iPhone 7 across three environments suggests the approach can be both fast and accurate. One major drawback is the limited bitrate; since it is mainly bounded by smartphone processing power and camera frame rate, its impact should lessen as high-speed cameras become more commonplace. Another challenge is controlling the camera's exposure and focus to enable robust tracking, as the ideal settings for LightAnchors are not always ideal for a human viewer. The team plans to release a LightAnchors app for modern iOS devices on the Apple App Store.
