Entry Date:
December 26, 2006

Computational Model for Learning to Parse the World into Objects

Principal Investigator: Pawan Sinha


Complementing Project Prakash on the computational front, we are developing algorithms for automated object-concept discovery from natural video streams. This problem is not only of great significance in visual neuroscience, but it also arises prominently in several other areas, such as computational genomics, where the task is to extract common subsequences (the ‘objects’) across multiple strings (the ‘input images’).

We are developing two kinds of algorithms for this task. The first works with static imagery. Our work here builds upon past research in statistical learning and string matching to create an efficient scheme for discovering commonalities across multiple inputs (a minimal sketch of this commonality-discovery idea appears below). Our computational simulations show that this approach successfully learns to recognize objects even when the input images are not spatially normalized and the image quality is significantly degraded. However, success here depends on the availability of partial supervision.

To model human object learning accurately, we cannot be critically reliant on the availability of a ‘teacher’. With this in mind, we are also developing completely unsupervised concept-discovery algorithms. This work incorporates empirical data from Project Prakash to provide guidelines for, and constraints on, the algorithms’ designs and computational complexity.
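The following short Python sketch illustrates the genomics analogy described above, not the project’s actual algorithm: treating each input as a string, it discovers the ‘objects’ as the fixed-length substrings (motifs) that recur across every input. The function name common_motifs, the parameter k, and the toy inputs are illustrative assumptions only.

    # Illustrative sketch (not the project's algorithm): find the length-k
    # substrings shared by every input string. In the analogy, the inputs
    # play the role of images and the shared substrings play the role of
    # recurring objects.
    def common_motifs(strings, k):
        """Return the set of length-k substrings present in every string."""
        def kmers(s):
            # All length-k substrings of a single input string.
            return {s[i:i + k] for i in range(len(s) - k + 1)}

        common = kmers(strings[0])
        for s in strings[1:]:
            common &= kmers(s)   # keep only the motifs shared so far
        return common

    if __name__ == "__main__":
        # 'CAT' plays the role of an object embedded in cluttered inputs.
        inputs = ["xxCATyy", "zzCATqq", "ppCATrr"]
        print(common_motifs(inputs, 3))   # prints {'CAT'}

In practice the project’s approach must also cope with inputs that are not spatially normalized and are significantly degraded, which this exact-match sketch deliberately ignores.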