About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Comparison of Artificial Neural Network Classifiers for Analysis of CT Images for the Inspection of Hardwood Logs

He, Jing 01 April 1998
This thesis describes an automatic CT image interpretation approach that can be used to detect hardwood defects. The goal of this research has been to develop several automatic image interpretation systems for different types of wood, with lower-level processing performed by feedforward artificial neural networks. In the course of this work, five single-species classifiers and seven multiple-species classifiers have been developed for 2-D and 3-D analysis. These classifiers were trained with back-propagation, using training samples of three species of hardwood: cherry, red oak, and yellow poplar. The classifiers recognize six classes: heartwood (clear wood), sapwood, knots, bark, splits, and decay. This demonstrates the feasibility of developing general classifiers that can be used with different types of hardwood logs, which will help sawmill and veneer mill operators improve product quality and preserve natural resources. / Master of Science
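A minimal, illustrative sketch of the kind of feedforward classifier the abstract describes, trained with back-propagation to label CT regions into the six defect classes. This is not the thesis' code; the feature extraction (flattened pixel neighborhoods) and all parameter values are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

CLASSES = ["heartwood", "sapwood", "knot", "bark", "split", "decay"]

def make_training_set(ct_slices, label_maps, window=5):
    """Extract (window x window) pixel neighborhoods around labeled pixels as feature vectors.

    label_maps use -1 for unlabeled pixels and 0..5 for the classes above (an assumption).
    """
    X, y = [], []
    r = window // 2
    for img, lab in zip(ct_slices, label_maps):
        for i in range(r, img.shape[0] - r):
            for j in range(r, img.shape[1] - r):
                if lab[i, j] >= 0:
                    X.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
                    y.append(lab[i, j])
    return np.array(X, dtype=float), np.array(y)

# One hidden layer, trained with stochastic gradient descent (back-propagation).
clf = MLPClassifier(hidden_layer_sizes=(30,), solver="sgd",
                    learning_rate_init=0.01, max_iter=500, random_state=0)
# clf.fit(X_train, y_train); predicted indices map back to CLASSES.
```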
2

Characterization of Computed Tomography Radiomic Features using Texture Phantoms

Shafiq ul Hassan, Muhammad 05 April 2018
Radiomics treats images as quantitative data and promises to improve cancer prediction in radiology and therapy response assessment in radiation oncology. However, a number of fundamental problems need to be solved before radiomic features can be applied in the clinic. The first step in computed tomography (CT) radiomic analysis is the acquisition of images using selectable image acquisition and reconstruction parameters. Radiomic features have shown large variability due to variation of these parameters, so it is important to develop methods that address this variability for each CT parameter. To this end, texture phantoms provide a stable geometry and stable Hounsfield units (HU) with which to characterize radiomic features with respect to image acquisition and reconstruction parameters. In this project, normalization methods were developed to address variability in CT radiomics using texture phantoms. In the first part of the project, variability in radiomic features due to voxel size was addressed. A voxel size resampling method is presented as a preprocessing step for imaging data acquired with variable voxel sizes; after resampling, variability due to voxel size in 42 radiomic features was reduced significantly. Some key features, including several identified as predictive biomarkers in diagnostic imaging or as useful in response assessment in radiation therapy, were found to be intrinsically dependent on voxel size (which also implies dependence on lesion volume). A voxel size normalization is presented to address this intrinsic dependence; after normalization, 10 features became robust as a function of voxel size. Normalization factors were also developed to address the intrinsic dependence of texture features on the number of gray levels; after normalization, the variability due to gray levels in 17 texture features was reduced significantly. In the second part of the project, the voxel size and gray level (GL) normalizations developed from the phantom studies were tested on actual lung cancer tumors. Eighteen patients with non-small cell lung cancer of varying tumor volumes were studied and compared with phantom scans acquired on 8 different CT scanners. Eight out of 10 features showed high (Rs > 0.9) Spearman rank correlations with voxel size before normalization and low (Rs < 0.5) correlations after. Likewise, texture features were unstable (ICC < 0.6) before gray level normalization and highly stable (ICC > 0.9) after. This work showed that the voxel size and GL normalizations derived from a texture phantom also apply to lung cancer tumors, and it highlights the importance and utility of investigating the robustness of CT radiomic features using CT texture phantoms. Another contribution of this work is the development of correction factors to address variability in radiomic features due to reconstruction kernels. Reconstruction kernels and tube current contribute to noise texture in CT, and most texture features were sensitive to the correlated noise texture produced by reconstruction kernels. In this work, noise power spectra (NPS) were measured on 5 CT scanners using a standard ACR phantom to quantify the correlated noise texture. The variability in texture features due to different kernels was reduced by applying the NPS peak frequency and the region-of-interest (ROI) maximum intensity as correction factors.
Most texture features were radiation dose independent but strongly kernel dependent, as demonstrated by a significant shift in NPS peak frequency among kernels. Percent improvements in robustness of 19 features were in the range of 30% to 78% after corrections. In conclusion, most texture features are sensitive to imaging parameters such as reconstruction kernels, reconstruction field of view (FOV), and slice thickness. All reconstruction parameters contribute to inherent noise in CT images. The problem can be partly solved by quantifying noise texture in CT radiomics using a texture phantom and an ACR phantom. Texture phantoms should be a prerequisite to patient studies, as they provide a stable geometry and HU distribution to characterize the radiomic features and provide ground truths for multi-institutional validation studies.
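A hedged sketch of the two preprocessing ideas described above: resampling scans to a common voxel size before feature extraction, and flagging voxel-size-dependent features with a Spearman rank correlation. The exact normalization factors in the thesis are not reproduced here; the target spacing, thresholds, and interpolation order are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage, stats

def resample_to_voxel(volume, spacing_mm, target_mm=(1.0, 1.0, 1.0)):
    """Resample a CT volume (z, y, x) from its native spacing to a common voxel size."""
    zoom = np.asarray(spacing_mm, float) / np.asarray(target_mm, float)
    return ndimage.zoom(volume, zoom, order=1)   # linear interpolation

def spearman_vs_voxel_size(feature_values, voxel_volumes_mm3):
    """Rank correlation of a radiomic feature against voxel volume across scans."""
    rs, p = stats.spearmanr(feature_values, voxel_volumes_mm3)
    return rs, p

# Following the thresholds quoted in the abstract, a feature with |Rs| > 0.9 before
# resampling and |Rs| < 0.5 afterwards would be considered successfully normalized.
```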
3

Development of a New 3D Reconstruction Algorithm for Computed Tomography (CT)

Iborra Carreres, Amadeo 07 January 2016
Model-based computed tomography (CT) image reconstruction is dominated by iterative algorithms. Although long reconstruction times remain a barrier in practical applications, techniques to speed up their convergence are under active investigation and have produced impressive results. In this thesis, a direct algorithm is proposed for model-based image reconstruction. The model-based approach relies on the construction of a model matrix that poses a linear system whose solution is the reconstructed image. The proposed algorithm consists of the QR decomposition of this matrix and the resolution of the system by a backward substitution process. The cost of this image reconstruction technique is one matrix-vector multiplication and one backward substitution, since the model construction and the QR decomposition are performed only once: each image reconstruction corresponds to solving the same CT system for a different right-hand side. Several problems arise in the implementation of this algorithm, such as the exact calculation of volume intersections, the definition of fill-in reduction strategies optimized for CT model matrices, and the exploitation of CT symmetries to reduce the size of the system. These problems have been detailed, solutions to overcome them have been proposed, and as a result a proof-of-concept implementation has been obtained. Reconstructed images have been analyzed and compared against the filtered backprojection (FBP) and maximum likelihood expectation maximization (MLEM) reconstruction algorithms, and the results show several benefits of the proposed algorithm. Although high resolutions have not yet been achieved, the results also demonstrate the promise of this algorithm, since substantial performance and scalability improvements could be obtained through better fill-in strategies or additional symmetries in the CT geometry. / Iborra Carreres, A. (2015). Development of a New 3D Reconstruction Algorithm for Computed Tomography (CT) [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/59421
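A small-scale sketch of the direct scheme described above: factorize the CT model matrix once with a QR decomposition, after which each reconstruction costs one matrix-vector product plus one backward substitution. The toy random system matrix below stands in for the real model matrix, which the thesis builds from exact volume intersections and reduces via fill-in strategies and CT symmetries.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def factorize_system(A):
    """One-time cost: economy QR decomposition of the (measurements x voxels) model matrix."""
    Q, R = qr(A, mode="economic")
    return Q, R

def reconstruct(Q, R, sinogram):
    """Per-image cost: one matrix-vector product and one backward substitution."""
    return solve_triangular(R, Q.T @ sinogram, lower=False)

# Toy usage with a random over-determined system in place of the CT model matrix.
rng = np.random.default_rng(0)
A = rng.random((200, 100))          # 200 measurements, 100 voxels
x_true = rng.random(100)
Q, R = factorize_system(A)
x_rec = reconstruct(Q, R, A @ x_true)
assert np.allclose(x_rec, x_true, atol=1e-8)
```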
4

Automated Measurement of Midline Shift in Brain CT Images and its Application in Computer-Aided Medical Decision Making

Wenan, Chen 03 March 2010
The severity of traumatic brain injury (TBI) is known to be characterized by the shift of the brain's midline, as the ventricular system often changes in size and position depending on the location of the original injury. In this thesis, the focus is on processing CT (computed tomography) brain images to automatically calculate midline shift in pathological cases and use it to predict intracranial pressure (ICP). The midline shift measurement can be divided into three steps. First, the ideal midline of the brain, i.e., the midline before injury, is found via a hierarchical search based on skull symmetry and tissue features. Second, the ventricular system is segmented from the brain CT slices. Third, the actual midline is estimated from the deformed ventricles by a shape matching method. The horizontal shift in the ventricles is then calculated from the ideal midline and the actual midline in the TBI CT images. The proposed method provides accurate detection of the ideal midline using anatomical features in the skull, accurate segmentation of the ventricles for actual midline estimation using anatomical features together with a spatial template derived from a magnetic resonance imaging (MRI) scan, and an accurate estimation of the actual midline based on the proposed robust multiple-region shape matching algorithm. After the midline shift is successfully measured, features including the midline shift, texture information from the CT images, and demographic information are used to predict ICP. Machine learning algorithms are used to model the relation between ICP and the extracted features. By using systematic feature selection and parameter selection of the learning model, promising results on ICP prediction are achieved. The prediction results also indicate the reliability of the proposed midline shift estimation.
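A schematic sketch, not the thesis' algorithm, of the final measurement step: once the ideal midline (from skull symmetry) and the actual midline (from the deformed ventricles) have been estimated, the midline shift is their horizontal displacement scaled by the pixel size. The argument names and the maximum-displacement convention are assumptions.

```python
import numpy as np

def midline_shift_mm(ideal_x_px, actual_midline_x_px, pixel_spacing_mm):
    """Horizontal displacement of the estimated actual midline from the ideal midline.

    ideal_x_px          : x coordinate (pixels) of the ideal midline in this slice
    actual_midline_x_px : x coordinates (pixels) of the estimated actual midline points
    pixel_spacing_mm    : in-plane pixel size of the CT slice
    """
    shift_px = np.max(np.abs(np.asarray(actual_midline_x_px, float) - ideal_x_px))
    return shift_px * pixel_spacing_mm

# e.g. midline_shift_mm(256, ventricle_midline_xs, 0.45) gives the shift in millimetres,
# which is then combined with texture and demographic features to predict ICP.
```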
5

Evaluation of Phantoms Used in Image Quality Performance Testing of Dental Cone Beam Computed Tomography Systems

Alahmad, Haitham N. January 2015
No description available.
6

Automated anatomical labeling of the bronchial branch and its application to the virtual bronchoscopy system

Mori, Kensaku, Hasegawa, Jun-ichi, Suenaga, Yasuhito, Toriwaki, Jun-ichiro 02 1900
No description available.
7

Comprehensive assessment and characterization of pulmonary acinar morphometry using multi-resolution micro x-ray computed tomography

Kizhakke Puliyakote, Abhilash Srikumar 01 May 2016
The characterization of the normal pulmonary acinus is a necessary first step in understanding respiratory physiology and in assessing the etiology of pulmonary pathology. Murine models play a vital role in advancing current understanding of the dynamics of gas exchange, particle deposition, and the manifestations of diseases such as COPD, cystic fibrosis, and asthma. With the advent of interior tomography techniques, high-resolution micro computed tomography (μCT) systems provide the ability to nondestructively assess the pulmonary acinus at micron and sub-micron resolutions. By applying Systematic Uniform Random Sampling (SURS) principles to in-situ fixed, intact, ex-vivo lungs, we characterize the structure of pulmonary acini in mice and study variations across age, location within the lung, and strain phenotypes. Lungs from mice of three common research strains were perfusion-fixed in situ and imaged using a multi-resolution μCT system (Micro XCT 400, Zeiss Inc.). Using lower-resolution whole-lung images, SURS methods were used to identify region-specific acini for high-resolution imaging. Acinar morphometric metrics included diameters, lengths, and branching angles for each alveolar duct, and total path lengths from the entrance of the acinus to the terminal alveolar sacs. In addition, metrics such as acinar volume, alveolar surface area, and surface-area-to-volume ratios were assessed. A generation-based analysis demonstrated significant differences in acinar morphometry between young and old age groups and across the three strains. The method was successfully adapted to large animals, and data from one porcine specimen are presented. The registration framework provides a direct technique to assess acinar deformations and yields critical physiological information about the state of alveolar ducts and individual alveoli at different phases of respiration. The techniques presented here allow direct assessment of the three-dimensional structure of the pulmonary acinus in previously unavailable detail and offer a unique approach to comprehensive quantitative analysis. The acinar morphometric parameters will help develop improved mathematical and near-anatomical models that can accurately represent the geometric structure of acini, leading to improved assessment of flow dynamics in the normal lung.
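A hedged illustration of how acinar volume, alveolar surface area, and the surface-area-to-volume ratio could be computed from a binary μCT segmentation of a single acinus. The thesis uses its own measurement pipeline; the scikit-image marching-cubes version below is an illustrative stand-in, and the isotropic-voxel assumption is mine.

```python
import numpy as np
from skimage import measure

def acinar_metrics(binary_acinus, voxel_size_um):
    """binary_acinus: 3D boolean array (air space of one acinus); voxel_size_um: isotropic voxel size."""
    volume = binary_acinus.sum() * voxel_size_um ** 3                     # um^3
    verts, faces, _, _ = measure.marching_cubes(
        binary_acinus.astype(np.uint8), level=0.5,
        spacing=(voxel_size_um,) * 3)
    surface = measure.mesh_surface_area(verts, faces)                     # um^2
    return {"volume_um3": volume,
            "surface_um2": surface,
            "sa_to_v_per_um": surface / volume}
```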
8

Application of Machine Learning and Deep Learning Methods in Geological Carbon Sequestration Across Multiple Spatial Scales

Wang, Hongsheng 24 August 2022
Given current technology and industrial systems, geological carbon sequestration (GCS) is a viable solution for controlling and further reducing carbon dioxide (CO2) concentrations while ensuring energy security. Pre-injection formation characterization and post-injection monitoring, verification, and accounting (MVA) of CO2 are two critical and challenging tasks for guaranteeing the sequestration effect. These tasks can be accomplished using core analyses and well-logging technologies, which complement each other to produce the most accurate and sufficient subsurface information for pore-scale and reservoir-scale studies. In recent years, unprecedented data sources, increasing computational capability, and the development of machine learning (ML) and deep learning (DL) algorithms have provided novel ways to expand knowledge from data, as these models can capture highly complex nonlinear relationships between multivariate inputs and outputs. This work applied ML and DL methods to GCS-related studies at the pore and reservoir scales, including digital rock physics (DRP) and well-logging data interpretation and analysis. DRP provides cost-saving and practical core analysis methods, combining high-resolution imaging techniques, such as three-dimensional (3D) X-ray computed tomography (CT) scanning, with advanced numerical simulations. Image segmentation is a crucial step of the DRP framework, affecting the accuracy of subsequent analyses and simulations. We proposed a DL-based workflow for boundary and small-target segmentation in digital rock images, which aims to overcome the main challenge in X-ray CT image segmentation, partial volume blurring (PVB). The training data and the model architecture are critical factors affecting the performance of supervised learning models. We employed entropy-based-masking indicator kriging (IK-EBM) to generate high-quality training data. The performance of IK-EBM on segmentation affected by PVB was compared with commonly used image segmentation methods on synthetic data with known ground truth. We then trained and tested a UNet++ model, which has a nested architecture and redesigned skip connections. The evaluation metrics include pixel-wise accuracies (F1 score, boundary-scaled accuracy, and pixel-by-pixel comparison) and physics-based accuracies (porosity, permeability, and CO2 blob curvature distributions). We also visualized the feature maps and tested the model's generalization. Contact angle (CA) distribution quantifies rock surface wettability, which regulates multiphase behavior in porous media. We developed a DL-based CA measurement workflow by integrating an unsupervised learning pipeline for image segmentation with an open-source CA measurement tool. The image segmentation pipeline includes training a CNN-based unsupervised DL model constrained by feature similarity and spatial continuity. In addition, an over-segmentation strategy was adopted for model training, and post-processing was implemented to cluster the model output into the user-desired targets. The performance of the proposed pipeline was evaluated on synthetic data with known ground truth using the pixel-wise and physics-based evaluation metrics. The resulting CA measurements, with the segmentation results as input data, were validated against manual CA measurements.
The GCS projects in the Illinois Basin are the first large-scale injections into saline aquifers; they employed the latest pulsed neutron tool, the pulsed neutron eXtreme (PNX), to monitor injected CO2 saturation. The well-logging data provide valuable references for formation evaluation and reservoir-scale CO2 monitoring in saline-aquifer GCS. In addition, data-driven models based on supervised ML and DL algorithms provide a novel perspective for well-logging data analysis and interpretation. We applied two commonly used ML and DL algorithms, support vector regression (SVR) and artificial neural networks (ANN), to the well-logging dataset from the GCS projects in the Illinois Basin. The dataset includes conventional well-logging data for mineralogy and porosity interpretation and PNX data for CO2 saturation estimation. Model performance was evaluated using the root mean square error (RMSE) and R2 score between model-predicted and true values. The results showed that all the ML and DL models achieved excellent accuracy and high efficiency. In addition, we ranked the feature importance of the PNX data in the CO2 saturation estimation models using the permutation importance algorithm; formation sigma, pressure, and temperature are the three most significant factors. A major challenge for CO2 storage field projects is large-scale, real-time data processing, including pore-scale core data and reservoir-scale well-logging data. Compared with traditional data processing methods, the ML and DL methods achieved accuracy and efficiency simultaneously. This work developed ML- and DL-based workflows and models for X-ray CT image segmentation and well-logging data interpretation based on the available datasets. The performance of the data-driven surrogate models was validated against comprehensive evaluation metrics. The findings fill the knowledge gap regarding formation evaluation and fluid behavior simulation across multiple scales, ensuring sequestration security and effectiveness. In addition, the developed ML and DL workflows and models provide efficient and reliable tools for massive GCS-related data processing, which can be widely used in future GCS projects. / Doctor of Philosophy / Geological carbon sequestration (GCS) is a solution for easing the tension between rising atmospheric carbon dioxide (CO2) concentrations and society's heavy dependence on fossil energy. Sequestration requires the injection formation to have adequate storage capacity, injectivity, and an overlying impermeable caprock. The injected CO2 plumes should also be monitored in real time to prevent any migration of CO2 to the surface. Therefore, pre-injection formation characterization and post-injection CO2 saturation monitoring are two critical and challenging tasks for guaranteeing sequestration effectiveness and security, which can be accomplished by combining pore-scale core analyses with reservoir-scale well-logging technologies. This work applied machine learning (ML) and deep learning (DL) methods to GCS-related studies across multiple spatial scales. We developed supervised and unsupervised DL-based workflows to segment X-ray computed tomography (CT) images of digital rocks for pore-scale studies. Image segmentation is a crucial step in the digital rock physics (DRP) framework, and subsequent analyses and simulations are conducted on the segmented images.
We also developed ML and DL models for well-logging data interpretation to analyze mineralogy and estimate CO2 saturation. Compared with traditional well-logging analysis methods, which are usually time-consuming and dependent on prior knowledge, the ML and DL methods achieved comparable accuracy with much shorter processing times. The performance of the developed workflows and models was validated against comprehensive evaluation metrics, achieving excellent accuracy and high efficiency simultaneously. We are at an early stage of CO2 sequestration, and relevant knowledge and tools are inadequate. In addition, the main challenge of CO2 sequestration field projects is large-scale, real-time data processing for fast decision-making. The findings of this dissertation fill the knowledge gap in GCS-related formation evaluation and fluid behavior simulation across multiple spatial scales. The developed ML and DL workflows provide efficient and reliable tools for massive data processing, which can be widely used in future GCS projects.
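A sketch of the reservoir-scale workflow described above: fit a support vector regression model to PNX well-logging features and rank them with permutation importance. The feature names and synthetic data are illustrative assumptions, not the thesis' dataset; the abstract reports that formation sigma, pressure, and temperature ranked highest.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

FEATURES = ["formation_sigma", "pressure", "temperature", "porosity", "gamma_ray"]

# Toy stand-in for the PNX well-logging dataset: CO2 saturation driven mostly by the
# first three features, plus noise.
rng = np.random.default_rng(0)
X = rng.random((500, len(FEATURES)))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + 0.02 * rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(X_tr, y_tr)

# Rank features by mean drop in score when each column is shuffled.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(FEATURES, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:16s} {imp:.3f}")
```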
9

Computer-assisted volumetric tumour assessment for the evaluation of patient response in malignant pleural mesothelioma

Chen, Mitchell January 2011
Malignant pleural mesothelioma (MPM) is an aggressive tumour that is almost always associated with prior exposure to asbestos. Currently responsible for over 47,000 deaths worldwide each year and rising, it poses a serious threat to global public health. Many clinical studies of MPM, including its diagnosis, prognostic planning, and the evaluation of treatment, necessitate the accurate quantification of tumours based on medical image scans, primarily computed tomography (CT). Currently, clinical best practice requires application of the MPM-adapted Response Evaluation Criteria in Solid Tumours (MPM-RECIST) scheme, which provides a uni-dimensional measure of the tumour's size. However, the low CT contrast between the tumour and surrounding tissues, the extensive elongated growth pattern characteristic of MPM, and, as a consequence, the pronounced partial volume effect collectively contribute to the significant intra- and inter-observer variations in MPM-RECIST values seen in clinical practice, which in turn greatly affect clinical judgement and outcome. In this thesis, we present a novel computer-assisted approach to evaluating MPM patient response to treatment, based on volumetric tumour assessment (VTA) via segmentation on CT. We have developed a 3D segmentation routine based on the Random Walk (RW) segmentation framework by L. Grady, which is notable for its good performance in handling weak tissue boundaries and its ability to segment arbitrary shapes given appropriately placed initialisation points. Results also show its benefit in computation time compared to other candidate methods such as level sets. We have also added a boundary enhancement regulariser to RW to improve its performance with smooth MPM boundaries. The regulariser is inspired by anisotropic diffusion. To reduce the required level of user supervision, we developed a registration-assisted segmentation option. Finally, we achieved effective and highly manoeuvrable partial volume correction by applying a reverse diffusion-based interpolation. To assess its clinical utility, we applied our method to a set of 48 CT studies from a group of 15 MPM patients and compared the findings to the MPM-RECIST observations made by a clinical specialist. The correlations confirm the utility of our algorithm for assessing MPM treatment response. Furthermore, our 3D algorithm found applications in monitoring patient quality of life and in palliative care planning. For example, segmented aerated lungs demonstrated very good correlation with the VTA-derived patient responses, suggesting their use in assessing the pulmonary function impairment caused by the disease. Likewise, segmented fluids highlight sites of pleural effusion and may potentially assist in intra-pleural fluid drainage planning. Throughout this thesis, to meet the demands of probabilistic analyses of data, we have used the Non-Parametric Windows (NPW) probability density estimator. NPW outperforms the histogram in terms of smoothness and the kernel density estimator in terms of parameter setting, and it preserves signal properties such as the order of occurrence and the band-limitedness of the sample, which are important for tissue reconstruction from discrete image data. We have also worked on extending this estimator to analysing vector-valued quantities, which are essential for multi-feature studies involving values such as image colour, texture, heterogeneity, and entropy.
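A hedged illustration of the core segmentation engine named above, Grady's random-walk algorithm, here via scikit-image's implementation. The seed placement and beta value are assumptions; the thesis' boundary-enhancement regulariser, registration-assisted seeding, and partial volume correction are not reproduced.

```python
import numpy as np
from skimage.segmentation import random_walker

def segment_tumour(ct_slice_hu, tumour_seeds, background_seeds, beta=130):
    """Random-walk segmentation of one CT slice.

    ct_slice_hu      : 2D CT slice in Hounsfield units
    tumour_seeds     : boolean mask of sparse user-placed tumour annotations
    background_seeds : boolean mask of sparse background annotations
    beta             : edge-weighting strength (higher = stronger respect for intensity edges)
    """
    labels = np.zeros(ct_slice_hu.shape, dtype=np.uint8)   # 0 = unlabeled
    labels[background_seeds] = 1
    labels[tumour_seeds] = 2
    result = random_walker(ct_slice_hu.astype(float), labels, beta=beta, mode="bf")
    return result == 2   # boolean tumour mask
```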
10

Advances in dual-energy computed tomography imaging of radiological properties

Han, Dong 01 January 2018
Dual-energy computed tomography (DECT) has shown great potential for reducing uncertainties in proton range and low-energy photon cross-section estimation used in radiation therapy planning. The work presented herein makes three contributions to advancing DECT applications. 1) A linear, separable two-parameter DECT model, the basis vector model (BVM), was used to estimate proton stopping power. Compared to nonlinear two-parameter models in the literature, BVM achieves comparable accuracy for typical human tissues and outperforms them in estimating linear attenuation coefficients. This is the first study to clearly illustrate the advantages of a linear model not only in accurately mapping radiological quantities for radiation therapy, but also in providing accurate linear forward projection modelling, which is needed by statistical iterative reconstruction (SIR) and other advanced DECT reconstruction algorithms. 2) Accurate DECT requires knowledge of x-ray beam properties. Using the Birch–Marshall model and the beam-hardening correction coefficients encoded in a CT scanner's sinogram header files, an efficient and accurate way to estimate the x-ray spectrum is proposed. The merit of the proposed technique is that it requires no physical transmission measurement after a one-time calibration against an independently measured spectrum. This technique can also be used to monitor the aging of x-ray CT tubes. 3) An iterative filtered back projection with anatomical constraint (iFBP-AC) algorithm was implemented on a digital phantom to evaluate its ability to mitigate beam-hardening effects and support accurate material decomposition for in vivo imaging of photon cross sections and proton stopping power. Compared to iFBP without constraints, both algorithms converge efficiently. For an idealized digital phantom, similar accuracy was observed in the noiseless case. With clinically achievable noise levels added to the sinograms, iFBP-AC greatly outperforms iFBP in predicting photon linear attenuation at low energy (28 keV). The estimated mean errors of iFBP and iFBP-AC for cortical bone are 1% and 0.7%, respectively; the standard deviations are 0.6% and 5%, respectively. The achieved accuracy of iFBP-AC is robust versus contrast level. Similar mean errors are maintained for muscle tissue, where the standard deviation achieved by iFBP-AC is 1.2%; in contrast, the standard deviation yielded by iFBP is about 20.2%. The iFBP-AC algorithm shows potential for quantitative DECT measurement. The contributions in this thesis aim to improve the clinical performance of DECT.
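A minimal numerical sketch of the linear, separable two-parameter idea behind the basis vector model (BVM): each tissue's attenuation is approximated as a weighted sum of two basis materials, so two DECT measurements give a 2x2 linear system per voxel. The basis attenuation values below are placeholder numbers, not the thesis' calibrated data, and the energy pair is an assumption.

```python
import numpy as np

# Rows: low-kVp and high-kVp effective energies; columns: basis materials 1 and 2.
# Placeholder linear attenuation coefficients in cm^-1 (not calibrated values).
B = np.array([[0.28, 0.45],
              [0.20, 0.30]])

def bvm_coefficients(mu_low, mu_high):
    """Solve [mu_low, mu_high]^T = B @ [c1, c2]^T for the per-voxel BVM weights."""
    return np.linalg.solve(B, np.array([mu_low, mu_high]))

def linear_attenuation(c, basis_mu_at_energy):
    """Reconstruct mu at another energy from the BVM weights and basis values at that energy."""
    return float(np.dot(basis_mu_at_energy, c))

c = bvm_coefficients(0.26, 0.19)
print("BVM weights:", c)
```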
