Entry Date:
January 17, 2012

FocalSpace


FocalSpace is a system for focused collaboration that uses spatial depth and directional audio. We present a space in which participants, tools, and other physical objects are treated as interactive objects that can be detected, selected, and augmented with metadata, and we demonstrate several concrete interaction scenarios. By applying diminished reality, removing unwanted background surroundings through synthetic blur, the system draws participants' attention to foreground activity.
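
The synthetic blur can be read as a depth-keyed composite: pixels whose depth falls inside a foreground band stay sharp, while everything else is replaced by a blurred copy of the frame. The following is a minimal sketch of that idea in Python with OpenCV, assuming a depth map already registered to the color frame; the function name and the threshold and kernel values are illustrative, not taken from the FocalSpace implementation.

import cv2
import numpy as np

def blur_background(color, depth, focus_depth_mm=1200, band_mm=400, ksize=31):
    """Blur pixels whose depth lies outside a foreground band.

    color -- HxWx3 BGR frame from the live video stream
    depth -- HxW depth map in millimeters, registered to the color frame
    focus_depth_mm, band_mm, ksize -- illustrative parameters (assumptions)
    """
    # Binary foreground mask: pixels within the focus band stay sharp.
    fg = (np.abs(depth.astype(np.int32) - focus_depth_mm) < band_mm)
    fg = fg.astype(np.float32)
    # Soften the mask edge so the sharp/blurred boundary is not a hard cut.
    fg = cv2.GaussianBlur(fg, (21, 21), 0)[..., None]

    # Composite: foreground from the original frame, background from the blur.
    blurred = cv2.GaussianBlur(color, (ksize, ksize), 0)
    out = fg * color.astype(np.float32) + (1.0 - fg) * blurred.astype(np.float32)
    return out.astype(np.uint8)

In a live system this would run per frame, with the focus band chosen from whatever the system currently considers the foreground activity.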

What could we do if the screen in a videoconference room became an interactive display? Using a Kinect camera and sound sensors, we explore how expanding a system's understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels that carry information about their material properties and location. Four features are implemented: "Talking to Focus," "Freezing Former Frames," "Privacy Zone," and "Spatial Augmented Reality."
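
To make "Talking to Focus" concrete, one way to realize it is to map the sound source direction reported by a microphone array onto the participant whose image position best matches that direction, and then feed that participant's depth into the blur above as the focus depth. The sketch below shows only this selection step under assumed inputs (per-participant image positions and depths, an audio angle, and a camera field of view); the names and values are hypothetical and not from the original system.

def select_focus_depth(participants, audio_angle_deg, fov_deg=57.0, frame_width=640):
    """Choose the focus depth for 'Talking to Focus'.

    participants    -- list of (center_x_pixels, median_depth_mm) tuples,
                       e.g. from person tracking (illustrative format)
    audio_angle_deg -- sound source angle from the microphone array,
                       0 degrees straight ahead, positive to the right
    fov_deg         -- assumed horizontal field of view of the color camera
    """
    best_depth, best_err = None, float("inf")
    for center_x, depth_mm in participants:
        # Convert the pixel column to an approximate horizontal angle.
        angle = (center_x / frame_width - 0.5) * fov_deg
        err = abs(angle - audio_angle_deg)
        if err < best_err:
            best_depth, best_err = depth_mm, err
    return best_depth  # pass this to the depth-based blur as focus_depth_mm

The same pattern, selecting a region of the scene from spatial cues and then augmenting or suppressing it, also underlies features such as "Privacy Zone," where a marked volume of the room is masked rather than kept in focus.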