Evaluating non-rigid image registration performance is a difficult problem because there is rarely a "gold standard" (i.e., ground truth) correspondence between two images. The Non-rigid Image Registration Evaluation Project (NIREP) was started to develop a standardized set of common databases, evaluation statistics, and a software tool for performance evaluation of non-rigid image registration algorithms. The goal of the work in this thesis is to build common image databases for rigorous testing of non-rigid image registration algorithms, and to compare their performance using a diverse set of evaluation statistics on multiple well-documented image databases. These databases, along with new evaluation statistics, have been and will continue to be released to the public research community. The performance of five registration algorithms (Affine, AIR, Demons, SLE, and SICLE) was evaluated using 22 images from two NIREP evaluation databases. Six evaluation statistics (relative overlap, intensity variance, normalized ROI overlap, alignment of the calcarine sulci, inverse consistency error, and transitivity error) were used to evaluate and compare registration performance. This thesis provides a complete and accurate reporting of the evaluation tests so that others can access these results and compare the registration algorithms of interest for their specific applications. Moreover, this work followed the recommendations of the Standards for Reporting of Diagnostic Accuracy (STARD) initiative to disclose all relevant information for each non-rigid registration validation test.
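To make two of the evaluation statistics concrete, the following is a minimal sketch of how relative overlap and intensity variance could be computed on toy data. The function names and the 2-D toy arrays are illustrative assumptions, not the NIREP software's actual API; relative overlap is taken in its standard intersection-over-union form, and intensity variance as the mean voxel-wise variance across a set of registered images.

```python
import numpy as np

def relative_overlap(a, b, label):
    # Relative overlap (intersection over union) of one labeled region
    # in two segmentations: |A ∩ B| / |A ∪ B|. Values near 1 indicate
    # good regional alignment after registration.
    a_mask = (a == label)
    b_mask = (b == label)
    inter = np.logical_and(a_mask, b_mask).sum()
    union = np.logical_or(a_mask, b_mask).sum()
    return inter / union if union else 0.0

def intensity_variance(images):
    # Mean voxel-wise intensity variance across a set of images
    # registered to a common space; lower values suggest better
    # intensity agreement (though this alone does not prove
    # anatomically correct correspondence).
    stack = np.stack(images)
    return stack.var(axis=0).mean()

# Toy 2-D "segmentations": two 4x4 labeled squares offset by one voxel.
a = np.zeros((8, 8), dtype=int)
a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int)
b[3:7, 3:7] = 1
print(relative_overlap(a, b, 1))  # 9/23 ≈ 0.391
```

In practice these statistics would be computed over 3-D volumes and many anatomical regions; relative overlap is sensitive to region size, which is one reason the thesis uses several complementary statistics rather than any single measure.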
Identifier | oai:union.ndltd.org:uiowa.edu/oai:ir.uiowa.edu:etd-1633 |
Date | 01 December 2009 |
Creators | Wei, Ying |
Contributors | Christensen, Gary Edward |
Publisher | University of Iowa |
Source Sets | University of Iowa |
Language | English |
Detected Language | English |
Type | thesis |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | Copyright © 2009 Ying Wei |