Registration of multi-modal images has been a challenging task due to the complex intensity relationships between the images. The standard multi-modal approach tends to use sophisticated similarity measures, such as mutual information, to assess the accuracy of the alignment. Employing such measures implies increased computational time and complexity, and makes it considerably harder for the optimization process to converge. A new registration method is proposed that introduces a structural representation of images captured from different modalities, converting the multi-modal problem into a mono-modal one. Structural features are extracted using a modified version of entropy images computed in a patch-based manner. Experiments are performed on simulated and real brain images from different modalities. Quantitative assessments demonstrate that better accuracy can be achieved compared to conventional multi-modal registration.
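To illustrate the general idea of a patch-based entropy representation, the following Python sketch maps an intensity image to an entropy image by computing the Shannon entropy of the intensity histogram in a patch around each pixel. The patch size, number of histogram bins, and quantization scheme here are illustrative assumptions, not the paper's parameters, and the sketch implements the plain entropy-image idea rather than the modified version used in the proposed method.

```python
import numpy as np

def entropy_image(img, patch_size=9, n_bins=32):
    """Sketch of a patch-based entropy (structural) representation.

    Each output pixel holds the Shannon entropy of the quantized
    intensity histogram of the surrounding patch, so that images from
    different modalities sharing the same structure become comparable
    with a simple mono-modal similarity measure (e.g., SSD).
    """
    img = np.asarray(img, dtype=np.float64)
    # Quantize intensities into n_bins levels for the patch histograms.
    lo, hi = img.min(), img.max()
    levels = np.floor((img - lo) / (hi - lo + 1e-12) * (n_bins - 1)).astype(int)

    half = patch_size // 2
    padded = np.pad(levels, half, mode="reflect")
    out = np.zeros_like(img)

    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + patch_size, j:j + patch_size]
            counts = np.bincount(patch.ravel(), minlength=n_bins)
            p = counts / counts.sum()
            p = p[p > 0]
            out[i, j] = -np.sum(p * np.log2(p))  # Shannon entropy of the patch
    return out
```

Once both images are converted to such structural representations, a standard mono-modal registration pipeline (e.g., sum of squared differences with a gradient-based optimizer) could in principle be applied to the entropy images instead of the raw intensities.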