  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Image quality assessment using natural scene statistics

Sheikh, Hamid Rahim 28 August 2008 (has links)
Not available / text
32

Digital image noise smoothing using high frequency information

Jarrett, David Ward, 1963- January 1987 (has links)
The goal of digital image noise smoothing is to smooth noise in the image without smoothing edges and other high frequency information. Statistically optimal methods must use accurate statistical models of the image and noise. Subjective methods must also characterize the image. Two methods using high frequency information to augment existing noise smoothing methods are investigated: two component model (TCM) smoothing and second derivative enhancement (SDE) smoothing. TCM smoothing applies an optimal noise smoothing filter to a high frequency residual, extracted from the noisy image using a two component source model. The lower variance and increased stationarity of the residual compared to the original image increase this filter's effectiveness. SDE smoothing enhances the edges of the low-pass filtered noisy image with the second derivative, extracted from the noisy image. Both methods are shown to perform better than the methods they augment, through objective (statistical) and subjective (visual) comparisons.
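The SDE idea described above lends itself to a compact sketch. The following is a minimal illustration rather than the dissertation's implementation: it low-pass filters the noisy image, then restores edge detail by subtracting a scaled second derivative (Laplacian) taken from the noisy image. The filter widths and the weight `alpha` are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def sde_smooth(noisy, sigma=2.0, alpha=0.5):
    """Second-derivative enhancement (SDE) smoothing, sketched:
    smooth the noise with a low-pass filter, then re-sharpen edges
    by subtracting a scaled Laplacian of the (lightly smoothed)
    noisy image. sigma and alpha are illustrative, not from the text."""
    lowpass = ndimage.gaussian_filter(noisy, sigma)               # noise smoothing
    laplacian = ndimage.laplace(ndimage.gaussian_filter(noisy, 1.0))
    return lowpass - alpha * laplacian                            # edge enhancement
```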
33

Uniform framework for the objective assessment and optimisation of radiotherapy image quality

Reilly, Andrew James January 2011 (has links)
Image guidance has rapidly become central to current radiotherapy practice. A uniform framework is developed for evaluating image quality across all imaging modalities by modelling the ‘universal phantom’: breaking any phantom down into its constituent fundamental test objects and applying appropriate analysis techniques to these through the construction of an automated analysis tree. This is implemented practically through the new software package ‘IQWorks’ and is applicable to both radiotherapy and diagnostic imaging. For electronic portal imaging (EPI), excellent agreement was observed with two commercial solutions: the QC-3V phantom and PIPS Pro software (Standard Imaging), and the EPID QC phantom and epidSoft software (PTW). However, PIPS Pro's noise correction strategy appears unnecessary for all but the highest-frequency modulation transfer function (MTF) point, and its contrast-to-noise ratio (CNR) calculation is not as described. Serious flaws identified in epidSoft included erroneous file handling leading to incorrect MTF and signal-to-noise ratio (SNR) results, and a sensitivity to phantom alignment resulting in overestimation of MTF points by up to 150% for alignment errors of only ±1 pixel. The ‘QEPI1’ is introduced as a new EPI performance phantom. Being a simple lead square with a central square hole, it is inexpensive and straightforward to manufacture, yet enables calculation of a wide range of performance metrics at multiple locations across the field of view. Measured MTF curves agree with those of traditional bar-pattern phantoms to within the limits of experimental uncertainty. An intercomparison of the Varian aS1000 and aS500-II detectors demonstrated an improvement in MTF for the aS1000 of 50–100% over the clinically relevant range 0.4–1 cycles/mm, yet with a corresponding reduction in CNR by a factor of √2. Both detectors therefore offer advantages for different clinical applications. Characterisation of cone-beam CT (CBCT) facilities on two Varian On-Board Imaging (OBI) units revealed that only two out of six clinical modes had been calibrated by default, leading to errors of the order of 400 HU for some modes and materials, well outside the ±40 HU tolerance. Following calibration, all curves agreed sufficiently for dose calculation accuracy within 2%. CNR and MTF experiments demonstrated that a boost in MTF f50 of 20–30% is achievable by using a 512² rather than a 384² matrix, but with a reduction in CNR of the order of 30%. The MTF f50 of the single-pulse half-resolution radiographic mode of the Varian PaxScan 4030CB detector was measured in the plane of the detector as 1.0±0.1 cycles/mm using both a traditional tungsten edge and the new QEPI1 phantom. For digitally reconstructed radiographs (DRRs), a reduction in CT slice thickness resulted in an expected improvement in MTF in the patient scanning direction but a deterioration in the orthogonal direction, with the optimum slice thickness being 1–2 mm. Two general-purpose display devices were calibrated against the DICOM Greyscale Standard Display Function (GSDF) to within the ±20% limit for Class 2 review devices. By providing an approach to image quality evaluation that is uniform across all radiotherapy imaging modalities, this work enables consistent end-to-end optimisation of this fundamental part of the radiotherapy process, thereby supporting enhanced use of image guidance at all relevant stages of radiotherapy and better supporting the clinical decisions based on it.
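Several of the comparisons above hinge on the CNR, whose conventional definition is simple enough to state in code. A minimal sketch under the usual definition (object-to-background contrast normalised by background noise); the mask arguments and names are illustrative, not IQWorks' API:

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Conventional contrast-to-noise ratio: difference between the
    mean pixel value inside a test object and in the background,
    divided by the background standard deviation."""
    contrast = image[roi_mask].mean() - image[bg_mask].mean()
    return abs(contrast) / image[bg_mask].std()
```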
34

THE KNIFE EDGE TEST AS A WAVEFRONT SENSOR (IMAGE PROCESSING).

KENKNIGHT, CHARLES ELMAN. January 1987 (has links)
An algorithm to reduce data from the knife edge test is given. The method is an extension of the theory of single sideband holography to second order effects. Application to phase microscopy is especially useful because a troublesome second order term vanishes when the knife edge does not attenuate the unscattered radiation probing the specimen. The algorithm was tested by simulation of an active optics system that sensed and corrected small (less than quarter wavelength) wavefront errors. Convergence to a null was quadratic until limited by detector-injected noise in the signal. The best form of the algorithm used only a Fourier transform of the smoothed detector record, a filtering of the transform, an inverse transform, and an arctangent solving for the phase of the input wavefront deformation. Iterations were helpful only for a Wiener filtering of the data record that weighted down Fourier amplitudes smaller than the mean noise level before analysis. The simplicity and sensitivity of this wavefront sensor make it a candidate for active optics control of small-angle light scattering in space. In real-time optical processing, a two dimensional signal can be applied as a voltage to a deformable mirror and be received as an intensity modulation at an output plane. Combination of these features may permit a real-time null test. Application to electron microscopy should allow the finding of defocus, astigmatism, and spherical aberrations for single micrographs at 0.2 nm resolution, provided a combination of specimen and support membrane is used that permits some a priori knowledge. For some thin specimens (up to nearly 100 atom layers thick) the left-right symmetry of diffraction should allow reconstruction of the wavefront deformations caused by the specimen with double the bandpass used in each image.
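The reduction pipeline named above (transform, filter, inverse transform, arctangent) can be sketched compactly. This is a hedged illustration of that sequence for a one-dimensional detector record, not the dissertation's algorithm; the Wiener-style weighting and the way the phase is extracted here are assumptions:

```python
import numpy as np

def knife_edge_phase(record, noise_level):
    """Sketch of the stated pipeline: FFT the smoothed detector
    record, weight down Fourier amplitudes near the noise level
    (Wiener-style), inverse transform, and take an arctangent to
    recover the phase of the wavefront deformation."""
    spectrum = np.fft.fft(record)
    power = np.abs(spectrum) ** 2
    spectrum *= power / (power + noise_level ** 2)   # Wiener filtering
    filtered = np.fft.ifft(spectrum)
    return np.arctan2(filtered.imag, filtered.real)  # phase estimate
```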
35

DIGITAL COLOR IMAGE ENHANCEMENT BASED ON LUMINANCE & SATURATION.

KIM, CHEOL-SUNG. January 1987 (has links)
This dissertation analyzes the characteristics that distinguish color images from monochromatic images, combines these characteristics with monochromatic image enhancement techniques, and proposes useful color image enhancement algorithms. The luminance, hue, and saturation (L-H-S) color space is selected for color image enhancement. Color luminance is shown to play the most important role in achieving good image enhancement. Color saturation also exhibits unique features which contribute to the enhancement of high frequency details and color contrast. The local windowing method, one of the most popular image processing techniques, is rigorously analyzed for the effects of window size and weighting values on the visual appearance of an image, and the subjective enhancement afforded by local image processing techniques is explained in terms of the human visual system response. The digital color image enhancement algorithms proposed are based on the observation that an enhanced luminance image results in a good color image in L-H-S color space when the chromatic components (hue and saturation) are kept the same. The saturation component usually contains high frequency details that are not present in the luminance component. However, processing only the saturation, while keeping the luminance and the hue unchanged, is not satisfactory because the human visual system acts as a low-pass filter on the chromatic components. To exploit the high frequency details of the saturation component, we take the high frequency component of the inverse saturation image, which correlates with the luminance image, and process the luminance image proportionally to this inverse saturation image. These proposed algorithms are simple to implement. The three main application areas of image enhancement (contrast enhancement, sharpness enhancement, and noise smoothing) are discussed separately. The computer processing algorithms are restricted to those which preserve the natural appearance of the scene.
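The core strategy here (enhance luminance, hold the chromatic components fixed) is easy to sketch. The example below uses CIELAB as a stand-in for the dissertation's L-H-S space and adaptive equalization as a stand-in for the enhancement step; both substitutions are assumptions for illustration only:

```python
import numpy as np
from skimage import color, exposure

def enhance_luminance_only(rgb):
    """Enhance only the lightness channel of a color image and leave
    the chromatic channels untouched, so hue and saturation are
    preserved while contrast improves."""
    lab = color.rgb2lab(rgb)
    lightness = lab[..., 0] / 100.0                      # scale L to [0, 1]
    lab[..., 0] = exposure.equalize_adapthist(lightness) * 100.0
    return color.lab2rgb(lab)                            # chroma unchanged
```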
36

DESIGN AND DEVELOPMENT OF A MEGAVOLTAGE CT SCANNER FOR RADIATION THERAPY.

CHEN, CHING-TAI. January 1982 (has links)
A Varian 4 MeV isocentric therapy accelerator has been modified to also function as a CT scanner. The goal is to provide low-cost computed tomography capability for use in radiotherapy. The system will have three principal uses. These are (i) to provide 2- and 3-dimensional maps of electron density distribution for CT-assisted therapy planning, (ii) to aid in patient set-up by providing sectional views of the treatment volume and high-contrast scout-mode verification images, and (iii) to provide a means for periodically checking the patient's anatomical conformation against what was used to generate the original therapy plan. The treatment machine was modified by mounting an array of detectors on a frame bolted to the counterweight end of the gantry in such a manner as to define a 'third generation' CT scanner geometry. The data gathering is controlled by a Z-80 based microcomputer system which transfers the x-ray transmission data to a general-purpose PDP 11/34 for processing. There, a series of calibration processes and a logarithmic conversion are performed to get projection data. After reordering the projection data into an equivalent parallel-beam sinogram format, a convolution algorithm is employed to construct the image from the equivalent parallel projection data. Results of phantom studies have shown a spatial resolution of 2.6 mm and an electron density discrimination of less than 1%, which are sufficiently good for accurate therapy planning. Results also show that the system is linear to within the precision of our measurement (≈0.75%) over a wide range of electron densities corresponding to those found in body tissues. Animal and human images are also presented to demonstrate that the system's imaging capability is sufficient to allow the necessary visualization of anatomy.
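The processing chain described above (calibration, logarithmic conversion, reordering, convolution) follows the standard CT reconstruction recipe, sketched below. The ramp kernel and the frequency-domain form of the convolution step are assumptions; the abstract does not specify the kernel used:

```python
import numpy as np

def log_convert(transmission, air_scan):
    """Logarithmic conversion: recover line-integral projection data
    from x-ray transmission measurements, normalised by an
    unattenuated (air) calibration scan."""
    return -np.log(transmission / air_scan)

def ramp_filter(parallel_sinogram):
    """Convolution step applied to each reordered parallel-beam
    projection (one row per view), done here in frequency space."""
    n = parallel_sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))                   # |f| ramp kernel
    filtered = np.fft.fft(parallel_sinogram, axis=1) * ramp
    return np.fft.ifft(filtered, axis=1).real
```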
37

Development and image quality assessment of a contrast-enhancement algorithm for display of digital chest radiographs.

Rehm, Kelly. January 1992 (has links)
This dissertation presents a contrast-enhancement algorithm called Artifact-Suppressed Adaptive Histogram Equalization (ASAHE). This algorithm was developed as part of a larger effort to replace the film radiographs currently used in radiology departments with digital images. Among the expected benefits of digital radiology are improved image management and greater diagnostic accuracy. Film radiographs record X-ray transmission data at high spatial resolution, and a wide dynamic range of signal. Current digital radiography systems record an image at reduced spatial resolution and with coarse sampling of the available dynamic range. These reductions have a negative impact on diagnostic accuracy. The contrast-enhancement algorithm presented in this dissertation is designed to boost diagnostic accuracy of radiologists using digital images. The ASAHE algorithm is an extension of an earlier technique called Adaptive Histogram Equalization (AHE). The AHE algorithm is unsuitable for chest radiographs because it over-enhances noise, and introduces boundary artifacts. The modifications incorporated in ASAHE suppress the artifacts and allow processing of chest radiographs. This dissertation describes the psychophysical methods used to evaluate the effects of processing algorithms on human observer performance. An experiment conducted with anthropomorphic phantoms and simulated nodules showed the ASAHE algorithm to be superior for human detection of nodules when compared to a computed radiography system's algorithm that is in current use. An experiment conducted using clinical images demonstrating pneumothoraces (partial lung collapse) indicated no difference in human observer accuracy when ASAHE images were compared to computed radiography images, but greater ease of diagnosis when ASAHE images were used. These results provide evidence to suggest that Artifact-Suppressed Adaptive Histogram Equalization can be effective in increasing diagnostic accuracy and efficiency.
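ASAHE itself is specific to this work, but its starting point, AHE with safeguards against noise over-enhancement, has a widely available cousin in contrast-limited adaptive histogram equalization. A hedged sketch using that stand-in (the clip limit is an illustrative parameter, and this is not the ASAHE algorithm):

```python
import numpy as np
from skimage import exposure

def clipped_ahe(radiograph, clip=0.01):
    """Adaptive histogram equalization with a clip limit, which curbs
    the noise over-enhancement that plain AHE exhibits on chest
    radiographs. Illustrative stand-in, not ASAHE."""
    lo, hi = radiograph.min(), radiograph.max()
    scaled = (radiograph - lo) / ((hi - lo) or 1.0)   # rescale to [0, 1]
    return exposure.equalize_adapthist(scaled, clip_limit=clip)
```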
38

Assessing and Optimizing Pinhole SPECT Imaging Systems for Detection Tasks

Gross, Kevin Anthony January 2006 (has links)
The subject of this dissertation is the assessment and optimization of image quality for multiple-pinhole, multiple-camera SPECT systems. These systems collect gamma-ray photons emitted from an object using pinhole apertures. Conventional measures of image quality, such as the signal-to-noise ratio or the modulation transfer function, do not predict how well a system's images can be used to perform a relevant task. This dissertation takes the stance that the ultimate measure of image quality is to measure how well images produced from a system can be used to perform a task. Furthermore, we recognize that image quality is inherently a statistical concept that must be assessed from the average task performance across a large ensemble of images. The tasks considered in this dissertation are detection tasks. Namely, we consider detecting a known three-dimensional signal embedded in a three-dimensional stochastic object using the Bayesian ideal observer. Out of all possible observers (human or otherwise), the ideal observer sets the absolute upper bound for detection task performance by using all possible information in the image data. By employing a stochastic object model we can account for the effects of object variability, which has a large effect on observer performance. An imaging system whose hardware has been optimized for ideal observer detection task performance is an imaging system that maximally transfers detection-task-relevant information to the image data. The theory and simulation of image quality, detection tasks, and gamma-ray imaging are presented. Assessments of ideal observer detection task performance are used to optimize imaging hardware for SPECT systems as well as to rank different imaging system designs.
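For the special case of a known signal in Gaussian noise, the ideal observer reduces to a prewhitening matched filter whose detectability is SNR² = sᵀK⁻¹s. The sketch below computes that linear (Hotelling) figure of merit, commonly used as a tractable surrogate when the full Bayesian ideal observer, needed once object variability enters, has no closed form:

```python
import numpy as np

def hotelling_snr2(signal, noise_cov):
    """Detectability SNR^2 = s^T K^{-1} s of the prewhitening matched
    filter for a known signal s in noise with covariance K. Equals
    the ideal observer only in the Gaussian, known-signal case."""
    s = np.asarray(signal, dtype=float).ravel()
    template = np.linalg.solve(noise_cov, s)   # prewhitened template K^-1 s
    return float(s @ template)
```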
39

Advanced Techniques for Image Quality Assessment of Modern X-ray Computed Tomography Systems

Solomon, Justin Bennion January 2016 (has links)
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; this fact, coupled with its popularity, makes CT currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. There is therefore a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for assessing modern CT scanners that have implemented the aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality: image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer's performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., filtered back-projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (≤6 mm) low-contrast (≤20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would produce image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality, such as contrast-to-noise ratio (CNR), and more sophisticated observer models, such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that the non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found not to correlate strongly with human performance, especially when comparing different reconstruction algorithms.

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.

To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion's morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability, with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework can produce reasonably realistic lesion images.

Based on that result, two studies were conducted to demonstrate the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affect the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used here. That database contained images of the same patients at two dose levels (50% and 100%) along with three reconstruction algorithms from a GE 750HD CT system (GE Healthcare): FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies. / Dissertation
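The image-subtraction noise measurement used in the textured-phantom studies above is compact enough to sketch. A minimal version, assuming two repeated scans of the same phantom (variable names are illustrative):

```python
import numpy as np

def quantum_noise_std(scan_a, scan_b):
    """Image-subtraction technique: subtracting two repeated scans of
    the same phantom cancels the deterministic background (uniform or
    textured) and leaves quantum noise, inflated by sqrt(2) because
    two independent noise realizations are differenced."""
    diff = scan_a.astype(float) - scan_b.astype(float)
    return diff.std() / np.sqrt(2.0)
```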
40

Prospective Estimation of Radiation Dose and Image Quality for Optimized CT Performance

Tian, Xiaoyu January 2016 (has links)
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase, mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].

Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts within the community to manage and optimize the CT dose.

As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a responsibility to optimize the amount of radiation dose for CT examinations. The key to dose optimization is to determine the minimum amount of radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would significantly benefit from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of the radiation dose and image quality were possible before the initiation of the exam, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to implement the theoretical models in clinical practice by developing an organ-based dose monitoring system and an image-based noise addition software for protocol optimization.

More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study effectively modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date, with representative age, weight percentile, and body mass index (BMI) ranges.

With effective quantification of organ dose under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated from Monte Carlo simulations in which the TCM function is explicitly modeled.

Chapter 5 aims to implement the organ dose-estimation framework in clinical practice to develop an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, so the patient's major body landmark information was extracted from the patient scout image in order to match clinical patients against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.

With effective methods to predict and monitor organ dose, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. Chapter 6 outlines the method that was developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.

Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance needed to achieve protocol optimization.

Chapter 8 concludes the thesis with a summary of the accomplished work and a discussion of future research. / Dissertation
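The image-based noise addition idea in Chapter 7 can be sketched from first principles. Quantum noise variance scales roughly inversely with dose, so emulating a scan at a fraction of the original dose means adding zero-mean noise with variance σ²(1/f − 1). This is a hedged sketch of that principle, not the thesis software; real implementations also shape the added noise to match the scanner's noise power spectrum, which this omits:

```python
import numpy as np

def simulate_low_dose(image, noise_sigma, dose_fraction, rng=None):
    """Add zero-mean Gaussian noise to a full-dose image so that the
    total noise level matches a scan at `dose_fraction` of the dose:
    required extra variance is sigma^2 * (1/f - 1)."""
    if rng is None:
        rng = np.random.default_rng()
    extra_sigma = noise_sigma * np.sqrt(1.0 / dose_fraction - 1.0)
    return image + rng.normal(0.0, extra_sigma, size=image.shape)
```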
