Entry Date:
April 9, 2014

Mixture of Manhattan Frames


Man-made objects and buildings exhibit a clear structure in the form of orthogonal and parallel planes. This observation, commonly referred to as the Manhattan-world (MW) model, has been widely exploited in computer vision and robotics. At both larger and smaller scales, however, such as the scale of a city, of indoor scenes, or of smaller objects, a more flexible model is merited. Here, we propose a novel probabilistic model that describes scenes as mixtures of Manhattan Frames (MFs), i.e., sets of orthogonal and parallel planes. By exploiting the geometry of both orthogonality constraints and the unit sphere, our approach allows us to describe man-made structures in a flexible way. We propose an inference algorithm that is a hybrid of Gibbs sampling and gradient-based optimization of a robust cost function over the SO(3) manifold. An MF merging mechanism allows us to infer the model order. We show the versatility of our Mixture-of-Manhattan-Frames (MMF) model by describing complex scenes from ASUS Xtion PRO depth images and aerial-LiDAR measurements of an urban center. Additionally, we demonstrate that the model lends itself to depth focal-length calibration of RGB-D cameras as well as to plane segmentation.
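
To make the representation concrete, here is a minimal sketch (not from the paper; the function names `mf_axes` and `assign_normals` are hypothetical) of one way to encode a Manhattan Frame as a rotation R in SO(3), whose columns and their negatives give six signed axis directions, and to hard-assign unit surface normals to the nearest axis across several frames. The actual MMF model instead uses probabilistic assignments and performs inference over the SO(3) manifold.

import numpy as np

def mf_axes(R):
    """Six signed axes of a Manhattan Frame given by rotation R in SO(3):
    the columns of R and their negatives (3 x 6 matrix)."""
    return np.hstack([R, -R])

def assign_normals(normals, rotations):
    """Assign each unit surface normal (rows of `normals`, N x 3) to the
    closest signed axis among all Manhattan Frames in `rotations`
    (a list of 3x3 rotation matrices). Returns (frame id, axis id) per normal.
    This hard assignment stands in for the model's probabilistic labels."""
    axes = np.hstack([mf_axes(R) for R in rotations])   # 3 x (6*K)
    sims = normals @ axes                               # cosine similarities
    best = np.argmax(sims, axis=1)
    return best // 6, best % 6

# Example: two Manhattan Frames, the identity and a 45-degree rotation about z.
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
Rz45 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
normals = np.array([[1.0, 0.0, 0.0],   # aligned with frame 0, axis +x
                    [c, s, 0.0]])      # aligned with frame 1, axis +x
print(assign_normals(normals, [np.eye(3), Rz45]))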