Entry Date:
September 22, 2003

Decoupling of Data into Meaningful Components


Computer graphics involves the manipulation of a variety of signals or data such as images, incoming light or three-dimensional objects. We want to develop processing tools to decouple these data into meaningful components that facilitate further processing and interactive manipulation.

For example, we have shown that decoupling an image into components that approximate incoming light and albedo enables powerful interactive relighting tools and pictorial tonal management. In these contributions, the decoupling is inspired by the Retinex theory: it decomposes the image into a large-scale component that is assumed to contain the illumination variation, and a small-scale component that represents the albedo. While this decoupling is not physically accurate, it yields very powerful manipulation tools for the following reasons:

(*) It is not used for vision inference but is modified and recombined to create an image that is similar to the original.
(*) It is related to the human visual system and exploits some of its limitations or features.
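The decomposition above can be sketched in a few lines. The following is a minimal illustration, not our actual implementation: it applies a brute-force bilateral filter to the log of the intensity, treats the filtered result as the large-scale (illumination) layer, and keeps the residual as the small-scale (albedo-like) detail layer. The parameter values and function names are illustrative.

```python
import numpy as np

def bilateral_filter(img, sigma_s=4.0, sigma_r=0.4):
    """Brute-force bilateral filter on a 2-D array (e.g. log intensity)."""
    radius = int(3 * sigma_s)
    h, w = img.shape
    out = np.zeros_like(img)
    # Spatial Gaussian weights for the (2*radius+1)^2 window, computed once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: penalize intensity differences to preserve edges.
            range_w = np.exp(-((window - img[i, j])**2) / (2 * sigma_r**2))
            weight = spatial * range_w
            out[i, j] = (weight * window).sum() / weight.sum()
    return out

def decompose(img, sigma_s=4.0, sigma_r=0.4):
    """Split a positive intensity image into large-scale * detail layers."""
    log_i = np.log(img)
    large = bilateral_filter(log_i, sigma_s, sigma_r)
    detail = log_i - large          # residual holds the small-scale layer
    return np.exp(large), np.exp(detail)
```

By construction the two layers multiply back to the original image exactly, so each layer can be edited independently (e.g. compressing the large-scale range for tonal management) and recombined.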

Recently, we have extended the bilateral filter to handle polygonal meshes. In this context, the definition of the filter is not as straightforward because signal and location are conflated. We use a first-order predictor to decouple the signal from the spatial location on a surface. Because first-order properties such as normals are noisy, we use mollification (pre-smoothing of the normals) to obtain more reliable detection of outliers and features.
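To make the first-order predictor concrete, here is a hedged sketch of one smoothing pass, not our actual code: each neighbor q predicts a position for vertex p by projecting p onto the tangent plane at q (defined by q's mollified normal), and the predictions are blended with bilateral-style spatial and range weights. Function names, the neighbor representation, and the sigma values are assumptions for illustration; the mollified normals are taken as precomputed inputs.

```python
import numpy as np

def smooth_vertices(verts, normals, neighbors, sigma_s=1.0, sigma_r=0.5):
    """One feature-preserving smoothing pass using first-order predictors.

    verts     : (n, 3) vertex positions
    normals   : (n, 3) mollified (pre-smoothed) unit normals
    neighbors : list where neighbors[i] holds indices of vertex i's
                spatial neighbors
    """
    out = np.empty_like(verts)
    for i, p in enumerate(verts):
        num = np.zeros(3)
        den = 0.0
        for q in neighbors[i]:
            # First-order predictor: project p onto the tangent plane at q.
            nq = normals[q]
            pred = p - np.dot(p - verts[q], nq) * nq
            # Spatial weight from distance to q; range weight from how far
            # the prediction moves p (large offsets indicate features).
            ws = np.exp(-np.sum((p - verts[q])**2) / (2 * sigma_s**2))
            wr = np.exp(-np.sum((pred - p)**2) / (2 * sigma_r**2))
            num += ws * wr * pred
            den += ws * wr
        out[i] = num / den if den > 0 else p
    return out
```

The range weight is what makes the filter feature-preserving: neighbors across a sharp crease predict positions far from p, so their influence falls off just as large intensity differences do in the image-domain bilateral filter.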