Advances in medical imaging technology have led to the acquisition of large numbers of images across different modalities. On some of these images, the boundaries of key organs must be accurately identified for treatment planning and diagnosis. This is typically done manually by a physician, who uses prior knowledge of organ shapes and locations to demarcate organ boundaries. Such manual segmentation is subjective, time-consuming, and prone to inconsistency. Automating the task has proven very challenging because of poor tissue contrast and ill-defined organ/tissue boundaries. This dissertation presents a genetic algorithm that combines representations of learned information, such as known shapes, regional properties, and the relative locations of objects, into a single framework for automated segmentation. The algorithm has been tested on two datasets: thermographic images, for hand segmentation, and pelvic computed tomography (CT) and magnetic resonance (MR) images, for prostate segmentation. In this dissertation we report segmentation results in two dimensions (2D) for the thermographic images, and in both two and three dimensions (3D) for the pelvic images. We show that combining multiple features improves segmentation accuracy compared with segmentation using a single feature, such as texture or shape, alone.
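The abstract's core idea, using a genetic algorithm to optimize a single objective that fuses several learned cues (a regional intensity term plus a shape prior), can be illustrated with a minimal sketch. This is not the dissertation's actual algorithm: the synthetic image, the circular shape model, the assumed radius prior, and all parameter values below are hypothetical, chosen only to show how a GA evolves shape parameters against a combined fitness.

```python
import random, math

random.seed(0)

W = H = 32
# Synthetic "image": a bright disk (the object) on a dark background,
# standing in for an organ region on a 2D slice (illustrative only).
cx_true, cy_true, r_true = 16, 16, 8
image = [[1.0 if (x - cx_true) ** 2 + (y - cy_true) ** 2 <= r_true ** 2 else 0.0
          for x in range(W)] for y in range(H)]

def fitness(ind):
    """Combine a regional term (mean intensity inside the candidate circle)
    with a shape-prior term (penalty for deviating from an assumed learned
    radius), mimicking the idea of fusing multiple cues in one objective."""
    cx, cy, r = ind
    inside = [image[y][x] for y in range(H) for x in range(W)
              if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2]
    region = sum(inside) / len(inside) if inside else 0.0
    prior = math.exp(-((r - 8.0) ** 2) / 8.0)   # assumed prior: radius ~ 8
    return region + prior

def mutate(ind, scale=2.0):
    # Gaussian perturbation of each gene, clamped to stay positive.
    return tuple(max(1.0, g + random.gauss(0, scale)) for g in ind)

def crossover(a, b):
    # Uniform crossover: pick each gene from either parent.
    return tuple(random.choice(pair) for pair in zip(a, b))

# Plain generational GA with elitism: individuals are (cx, cy, r) tuples.
pop = [(random.uniform(4, 28), random.uniform(4, 28), random.uniform(2, 12))
       for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:6]                              # keep the best candidates
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(24)]

best = max(pop, key=fitness)
print("best (cx, cy, r):", best, "fitness:", fitness(best))
```

Because both terms enter the same scalar fitness, the GA trades them off automatically: a circle that matches the bright region but has an implausible radius, or vice versa, scores lower than one satisfying both cues, which is the spirit of combining shape, regional, and location information in a single framework.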
Identifier        | oai:union.ndltd.org:pdx.edu/oai:pdxscholar.library.pdx.edu:open_access_etds-1024
Date              | 01 January 2010
Creators          | Ghosh, Payel
Publisher         | PDXScholar
Source Sets       | Portland State University
Detected Language | English
Type              | text
Format            | application/pdf
Source            | Dissertations and Theses