Entry Date:
October 4, 2012

VR Codes

Principal Investigator Andrew Lippman


VR Codes are dynamic data invisibly embedded in television and graphic displays. They allow a display to simultaneously present unimpeded visual information to viewers and real-time data to a camera. Our intention is to make social displays that many people can use at once: with VR Codes, many viewers can draw data from a display and control its use on a mobile device. We think of VR Codes as analogous to QR codes for video, and envision a future where every display in the environment carries latent information embedded in VR Codes.

Envision a world where inconspicuous, unobtrusive display surfaces act as general digital interfaces that transmit words and pictures as well as machine-compatible data, and that also encode relative orientation and position. Any display can be a transmitter and any phone a receiver; further, the data can be rendered invisibly on the screen.

VR Codes comprise the design, implementation, and evaluation of a novel visible-light communications architecture based on undetectable codes embedded in a picture yet easily resolved by an inexpensive camera. The software-defined interface creates an interactive system in which any aspect of the signal processing can be dynamically modified to fit changing hardware peripherals as well as the demands of the desired human interaction.
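To make the idea of codes that are invisible to viewers but resolvable by a camera concrete, here is a minimal sketch of one possible embedding strategy: temporal flicker, where the display alternates between a frame brightened and a frame darkened by a small offset, so the eye averages the pair back to the original picture while a camera that isolates individual frames can recover the sign of the offset (one bit per block). The function names, block layout, and offset value are illustrative assumptions, not the published VR Codes scheme.

```python
import numpy as np

# Illustrative sketch (an assumption, not the actual VR Codes protocol):
# each data bit flips the sign of a small intensity offset in one image block.
# The display shows frames `a` and `b` in rapid alternation; their average
# equals the original picture, so the modulation is invisible to the eye.

DELTA = 4  # small offset, assumed to sit below the flicker-fusion threshold

def embed(frame, bits, block=8):
    """Return the two alternating frames encoding one bit per block (row-major)."""
    a = frame.astype(np.int16).copy()
    b = frame.astype(np.int16).copy()
    h, w = frame.shape
    for i, bit in enumerate(bits):
        r = (i // (w // block)) * block
        c = (i % (w // block)) * block
        sign = 1 if bit else -1
        a[r:r + block, c:c + block] += sign * DELTA
        b[r:r + block, c:c + block] -= sign * DELTA
    return np.clip(a, 0, 255), np.clip(b, 0, 255)

def decode(a, b, n_bits, block=8):
    """Recover bits by differencing the two captured frames."""
    diff = a.astype(np.int16) - b.astype(np.int16)
    h, w = diff.shape
    bits = []
    for i in range(n_bits):
        r = (i // (w // block)) * block
        c = (i % (w // block)) * block
        bits.append(int(diff[r:r + block, c:c + block].mean() > 0))
    return bits
```

In this sketch the camera is assumed to capture the two phases as separate frames; a real receiver would also need to synchronize with the display's refresh and correct for perspective, which the software-defined interface described above could adapt to at runtime.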

This design of a visual environment rich in information for both people and their devices overcomes many of the limitations imposed by radio-frequency (RF) interfaces: it is scalable, directional, and potentially high-capacity. We demonstrate it through NewsFlash, a multi-screen array of images in which each user's phone acts as an informational magnifying glass, reading codes arranged around the images.