Entry Date:
September 19, 2004

Efficient Graphical Models for Processing Images

Principal Investigator: William Freeman


We present a convenient representation that allows a graphical model with a small number of states per node to infer high-dimensional image values from an observed image. A significant benefit of this representation is that the probabilistic model does not have to be designed with the representation in mind. To demonstrate its usefulness, we apply it to two important image-processing problems: super-resolution and color demosaicing.

Recent advances in methods for performing approximate inference in graphical models, such as belief propagation and graph cuts, have enabled their modelling power to be applied to many vision problems. Graphical models are especially useful for problems that focus on estimating images, because they make it convenient to express relationships between pixels in both the observed image and the image or scene being estimated.

The chief barrier to using graphical models for estimating images is the huge dimensionality of images. Typically, each pixel or patch of pixels in the image being estimated is treated as a separate variable, or node, in the graph. A simple discrete representation of images would allocate one discrete state of each variable for each possible gray level in the image. For a typical gray-scale image, this would require 256 states per node in the graphical model. If each variable corresponds to a patch of N pixels, then 256^N states will be required! For algorithms such as belief propagation, the amount of computation required for approximate inference in a graphical model typically grows on the order of S^2 x M, where M is the number of nodes in the graph and S is the number of discrete states per node. The quadratic relationship between the complexity of inference and the number of states limits the sort of problems to which discrete-valued graphical models can be applied.
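The S^2 factor comes from the message update at the heart of belief propagation: computing the message to a neighbour requires, for each of the neighbour's S states, a maximisation (or sum) over the sender's S states. The minimal NumPy sketch below shows a single max-product message update on a pairwise MRF; the array names and shapes are illustrative assumptions, not the actual implementation used in this work.

```python
import numpy as np

def bp_message(psi, phi, incoming):
    """One max-product message update on a pairwise MRF (illustrative sketch).

    psi      : (S, S) compatibility matrix between the sender's and
               receiver's states
    phi      : (S,) local evidence at the sending node
    incoming : list of (S,) messages from the sender's other neighbours

    The max over S source states for each of S target states makes the
    update O(S^2) per edge, which is why keeping S small matters.
    """
    b = phi.copy()
    for m in incoming:
        b = b * m                      # combine evidence with incoming messages
    # For each target state j, maximise over the sender's states i.
    msg = np.max(psi * b[:, None], axis=0)
    return msg / msg.sum()             # normalise for numerical stability
```

With 256^N states per patch this update is hopeless; with a handful of candidate states per node it is cheap, which is the point of the representation described here.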

We applied our method to two important image-processing problems:

Super-Resolution
We first demonstrate our method on the problem of super-resolution, also known as image interpolation. The goal of a super-resolution algorithm is to produce a high-resolution image from a single low-resolution image. The high-resolution image being estimated is modelled as a mosaic of patches of pixels, with each patch corresponding to one node in the MRF. The candidate values for a patch in the high-resolution image are found by applying a set of linear regression functions to the corresponding patch of pixels from the observed low-resolution image. The high-resolution image is then found by using the max-product Belief Propagation algorithm to choose the best candidate for each patch of pixels.
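The candidate-generation step above can be sketched as follows. Each learned regression function maps a flattened low-resolution patch to one candidate high-resolution patch, so a node's state space is just an index over K candidates rather than over all possible pixel values. The `(A, b)` pairs here are hypothetical stand-ins for the trained regression functions, not the actual learned parameters.

```python
import numpy as np

def patch_candidates(low_res_patch, regressors):
    """Generate candidate high-resolution patches for one MRF node (sketch).

    low_res_patch : flattened observed low-resolution patch, shape (d,)
    regressors    : list of K pairs (A, b), each a learned linear map
                    from the low-res patch to a high-res patch
                    (hypothetical placeholders for the trained functions)

    Returns a (K, n_hi) array of candidate high-resolution patches.
    Belief propagation then picks one candidate index per node, so each
    node needs only K discrete states instead of 256^N.
    """
    return np.stack([A @ low_res_patch + b for A, b in regressors])
```

After candidates are generated for every patch, max-product Belief Propagation selects the combination of candidates whose overlapping patch boundaries agree best.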

CCD Demosaicing
The second problem we applied our method to is that of estimating a full-color image from a single color sample at each pixel. Typical CCDs are only able to sample one color band at each pixel in the sensor. To obtain the full-color image, with values for three color bands per pixel, the values of the other two color bands at each pixel must be interpolated from the observed samples. This problem is known as demosaicing.
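To make the problem setup concrete, the sketch below simulates the CCD's single-sample-per-pixel output using a color-filter array; an RGGB Bayer layout is assumed here purely for illustration, since the abstract does not specify the sensor's mosaic pattern.

```python
import numpy as np

def bayer_sample(rgb):
    """Simulate a CCD with a color-filter array (RGGB layout assumed).

    rgb : (H, W, 3) full-color image, H and W even.

    Returns a single-channel mosaic that keeps one color sample per
    pixel; the two discarded channels at each pixel are exactly what
    demosaicing must estimate.
    """
    h, w, _ = rgb.shape
    mosaic = np.empty((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic
```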

As in the super-resolution problem, the algorithm must estimate hidden information at every pixel location in the image, except that now it estimates color values instead of high-resolution detail. As before, we learn a set of linear regression functions that interpolate a set of candidate full-color samples, and the best candidate for each pixel is then chosen using Belief Propagation. This method significantly reduces the number of blurry fringe artifacts.