Entry Date:
April 9, 2014

Automatic Registration of Multimodal Images Via Learned Shared Features


Registration is a well-known problem that has received substantial attention in recent years. The registration of multi-modal images, however, is less developed, in particular the fusion of 3D structural information with 2D texture information. In the past this fusion has been performed either by the laborious process of manually selecting correspondence points or by optimizing information-theoretic metrics such as mutual information, KL-divergence, or entropy. The aim of this project is to take a novel approach and learn shared features between the multi-modal images.
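
For reference, a minimal sketch of the mutual-information baseline mentioned above: estimating MI between two images from their joint intensity histogram. The function name, bin count, and numpy-based implementation are illustrative assumptions, not this project's method.

    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Estimate mutual information between two equally sized images."""
        # Joint histogram of intensity pairs, normalized to a joint probability.
        joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_ab = joint_hist / joint_hist.sum()
        p_a = p_ab.sum(axis=1)   # marginal distribution of img_a intensities
        p_b = p_ab.sum(axis=0)   # marginal distribution of img_b intensities
        # MI = sum over bins of p(a,b) * log( p(a,b) / (p(a) p(b)) ),
        # skipping empty bins to avoid log(0).
        nonzero = p_ab > 0
        denom = np.outer(p_a, p_b)
        return np.sum(p_ab[nonzero] * np.log(p_ab[nonzero] / denom[nonzero]))

A registration loop built on this metric would search over transform parameters to maximize the score between the fixed image and the warped moving image; a learned shared-feature approach replaces this hand-chosen metric with a similarity learned from data.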