We study early- to mid-level human vision. Many early visual tasks are inherently statistical. In image segmentation, we infer whether two regions of an image likely arose from the same generating process. Our attention is drawn to items carrying high visual information, i.e., items that are "unexpected" given the rest of the scene. Our perception of a set of similar items, such as a handful of berries, likely evolved to support inferences about the world, such as where one should best forage for berries. The statistical processing these inferences require can be implemented with neurally plausible mechanisms.
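As a minimal illustration (not the lab's actual model), the idea that "unexpected" items draw attention can be sketched by scoring each item's self-information, -log p, under a simple Gaussian model fit to the scene's own statistics; the item least consistent with that model gets the highest score. All parameters and the toy stimulus here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scene": a local feature value (e.g., orientation) for 100 items,
# with one odd-one-out item inserted by hand.
features = rng.normal(0.0, 1.0, size=100)
features[42] = 6.0  # an "unexpected" item

# Fit a Gaussian to the scene statistics, then score each item by its
# self-information (surprisal): -log p(feature | scene model).
mu, sigma = features.mean(), features.std()
log_p = -0.5 * np.log(2 * np.pi * sigma**2) - (features - mu) ** 2 / (2 * sigma**2)
surprisal = -log_p

# The outlier carries the most visual information and so "draws attention".
print(int(np.argmax(surprisal)))
```

In this toy version the most surprising item is simply the statistical outlier; richer models would replace the single Gaussian with local summary statistics pooled over the image.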
Our current research examines whether the statistical representations underlying these tasks might also underlie phenomena such as visual crowding, early stages of object recognition, illusory conjunctions, and the effect of display clutter on performance of visual tasks.
We take a three-pronged approach in our research. First, researchers in the lab work with a variety of computational models of visual phenomena: computer vision algorithms, ideal and semi-ideal observers, and neural models. We aim for predictive models, ideally ones that can take arbitrary images as input. Second, we use behavioral experiments with human observers to gather data and insights, and to test models and hypotheses. Finally, we test the validity and utility of our models in applications such as image compression, design of user interfaces, and design of information visualizations.
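To make the "ideal observer" prong concrete, here is a minimal sketch of an ideal observer for a hypothetical two-alternative discrimination task (the signal levels, noise, and priors are all assumptions, not any specific experiment from the lab). With known, equal-variance Gaussian likelihoods and equal priors, the optimal decision rule reduces to a midpoint criterion, and the observer's simulated accuracy upper-bounds any real observer facing the same stimulus noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical task: each stimulus is a noisy measurement of one of two
# known signal levels (d' = 1 with these parameters).
mu_a, mu_b, sigma = 0.0, 1.0, 1.0

def ideal_observer(x):
    # With equal priors and equal variances, the maximum a posteriori rule
    # is a criterion at the midpoint between the two signal levels.
    return 1 if x > (mu_a + mu_b) / 2 else 0

# Simulate trials: draw a true category, add measurement noise, decide.
labels = rng.integers(0, 2, size=10_000)
stimuli = np.where(labels == 1, mu_b, mu_a) + rng.normal(0.0, sigma, size=10_000)
decisions = np.array([ideal_observer(x) for x in stimuli])
acc = float(np.mean(decisions == labels))

# Ideal accuracy for d' = 1 is Phi(0.5), roughly 0.69.
print(0.6 < acc < 0.8)
```

Comparing human accuracy against this ceiling (the "semi-ideal" variants degrade the observer in principled ways) tells us which information losses a real visual system incurs.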