Entry Date: January 22, 2019

Depth Estimation of Non-Rigid Objects for Time-of-Flight Imaging

Principal Investigator: Vivienne Sze


Depth sensing is used in a variety of applications that range from augmented reality to robotics. Time-of-flight (TOF) cameras, which measure depth by emitting light and measuring its roundtrip time, are appealing because they obtain dense depth measurements with minimal latency. However, as these sensors become prevalent, one disadvantage is that many TOF cameras operating in close proximity interfere with one another, and the techniques used to mitigate this interference can lower the frame rate at which depth is acquired. Previously, we proposed an algorithm that uses concurrently collected optical images to estimate the depth of rigid objects. Here, we consider the case of objects undergoing non-rigid deformations. We model these objects as locally rigid and use previous depth measurements, along with the pixel-wise motion across the collected optical images, to estimate the underlying 3-D scene motion, from which depth can then be obtained. In contrast to conventional techniques, our approach exploits previous depth measurements directly to estimate the pose, that is, the rotation and translation, of each point by solving a sparse linear system. We evaluate the technique on an RGB-D dataset, where we estimate depth with a mean relative error of 0.58%, outperforming other adapted techniques.
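The Python sketch below illustrates one way the per-point pose estimation described above could be set up; it is a minimal illustration of the general idea, not our implementation. It assumes known camera intrinsics K, linearizes each point's rigid motion as a 6-DoF twist xi = (omega, v) under a small-angle approximation, couples neighboring twists with an assumed local-rigidity weight lam, and solves the resulting sparse least-squares system with SciPy. The function name, the regularization weight, and the solver choice are all illustrative assumptions.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def estimate_depth_nonrigid(depth_prev, flow, K, lam=10.0):
    # Back-project the previous depth map to one 3-D point per pixel.
    H, W = depth_prev.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    n = H * W
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    Z = depth_prev.ravel()
    X = (xs.ravel() - cx) / fx * Z
    Y = (ys.ravel() - cy) / fy * Z

    # Normalized image coordinates of each pixel after the measured flow.
    a = (xs.ravel() + flow[..., 0].ravel() - cx) / fx
    b = (ys.ravel() + flow[..., 1].ravel() - cy) / fy

    rows, cols, vals, rhs = [], [], [], []
    r = 0
    for i in range(n):
        # Linearized rigid motion: P' = P + omega x P + v = P + M @ xi,
        # where xi_i = (omega_i, v_i) is the 6-DoF twist of point i.
        M = np.array([[0.0,   Z[i], -Y[i], 1.0, 0.0, 0.0],
                      [-Z[i], 0.0,   X[i], 0.0, 1.0, 0.0],
                      [Y[i], -X[i],  0.0,  0.0, 0.0, 1.0]])
        # Reprojection constraints P'_x = a * P'_z and P'_y = b * P'_z,
        # which are linear in xi_i after multiplying through by P'_z.
        for row_vec, rh in ((M[0] - a[i] * M[2], a[i] * Z[i] - X[i]),
                            (M[1] - b[i] * M[2], b[i] * Z[i] - Y[i])):
            rows += [r] * 6
            cols += list(range(6 * i, 6 * i + 6))
            vals += list(row_vec)
            rhs.append(rh)
            r += 1

    # Local rigidity: neighboring points should move with similar twists.
    w = np.sqrt(lam)
    for i in range(n):
        y0, x0 = divmod(i, W)
        for j in ([i + 1] if x0 + 1 < W else []) + ([i + W] if y0 + 1 < H else []):
            for k in range(6):
                rows += [r, r]
                cols += [6 * i + k, 6 * j + k]
                vals += [w, -w]
                rhs.append(0.0)
                r += 1

    A = sp.coo_matrix((vals, (rows, cols)), shape=(r, 6 * n)).tocsr()
    xi = lsqr(A, np.asarray(rhs))[0].reshape(n, 6)

    # The updated depth is the z-component of each displaced point.
    P = np.stack([X, Y, Z], axis=1)
    P_new = P + np.cross(xi[:, :3], P) + xi[:, 3:]
    return P_new[:, 2].reshape(H, W)

# Sanity check on a synthetic static scene: zero flow leaves depth unchanged.
K = np.array([[500.0, 0.0, 16.0], [0.0, 500.0, 12.0], [0.0, 0.0, 1.0]])
depth_prev = np.full((24, 32), 2.0)
flow = np.zeros((24, 32, 2))
depth_new = estimate_depth_nonrigid(depth_prev, flow, K)
print(np.abs(depth_new - depth_prev).max())  # ~0

The system in this sketch stays sparse because each reprojection row touches only one point's six unknowns and each rigidity row couples two neighboring points, which is what makes a sparse linear solver a natural fit for the formulation described in the abstract.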