11

Predicting Task-­specific Performance for Iterative Reconstruction in Computed Tomography

Chen, Baiyu January 2014 (has links)
The cross-sectional images of computed tomography (CT) are calculated from a series of projections using reconstruction methods. Recently introduced on clinical CT scanners, the iterative reconstruction (IR) method enables potential patient dose reduction through significantly reduced image noise, but it is limited by its "waxy" texture and nonlinear nature. To balance the advantages and disadvantages of IR, evaluations are needed with diagnostic accuracy as the endpoint. Moreover, evaluations need to take into consideration the type of imaging task (detection or quantification), the properties of the task (lesion size, contrast, edge profile, etc.), and other acquisition and reconstruction parameters.

To evaluate detection tasks, the generally accepted method is the observer study, which involves image preparation, graphical user interface setup, manual detection and scoring, and statistical analysis. Because such evaluations can be time consuming, mathematical models have been proposed to efficiently predict observer performance in terms of a detectability index (d'). However, certain assumptions, such as system linearity, may need to be made, limiting the application of these models to potentially nonlinear IR. For evaluating quantification tasks, the conventional method can also be time consuming, as it usually involves experiments with anthropomorphic phantoms. A mathematical model analogous to d' was therefore proposed for predicting volume quantification performance, named the estimability index (e'). However, this prior model was limited in its modeling of the task, its modeling of the volume segmentation process, and its assumption of system linearity.

To extend the prior d' and e' models to evaluations of IR performance, the first part of this dissertation developed an experimental methodology to characterize image noise and resolution in a manner relevant to nonlinear IR. Results showed that this method was efficient and meaningful in characterizing system performance while accounting for the nonlinearity of IR at multiple contrast and noise levels. It was also shown that when certain criteria were met, the measurement error could be kept below 10%, allowing challenging measurement conditions with low object contrast and high image noise.

The second part of this dissertation incorporated the noise and resolution characterizations developed in the first part into the d' calculations and evaluated the performance of IR and conventional filtered backprojection (FBP) for detection tasks. Results showed that, compared to FBP, IR required less dose to achieve a threshold level of detection accuracy, therefore potentially reducing the required dose. The dose-saving potential of IR was not constant but depended on the task properties, with subtle tasks (small size and low contrast) enabling more dose saving than conspicuous tasks. Results also showed that at a fixed dose level, IR allowed more subtle tasks to exceed a threshold performance level, demonstrating the overall superior performance of IR for detection tasks.

The third part of this dissertation evaluated IR performance in volume quantification tasks with a conventional experimental method. The volume quantification performance of IR was measured using an anthropomorphic chest phantom and compared to FBP in terms of accuracy and precision. Results showed that across a wide range of dose and slice thickness, IR led to accuracy significantly different from that of FBP, highlighting the importance of calibrating or expanding current segmentation software to incorporate the image characteristics of IR. Results also showed that despite IR's marked noise reduction in uniform regions, IR in general had quantification precision similar to that of FBP, possibly due to IR's diminished noise reduction at edges (such as nodule boundaries) and its loss of resolution at low dose levels.

The last part of this dissertation mathematically predicted IR performance in volume quantification tasks with an e' model that was extended in three respects: the task modeling, the segmentation software modeling, and the characterization of noise and resolution properties. Results showed that the extended e' model correlated with experimental precision across a range of image acquisition protocols, nodule sizes, and segmentation software. In addition, compared to experimental assessments of quantification performance, e' required far less computational time, such that it can be easily employed in clinical studies to verify quantitative compliance and to optimize clinical protocols for CT volumetry.

The research in this dissertation has two important clinical implications. First, because d' values reflect detection accuracy and e' values reflect quantification precision, this work provides a framework for evaluating IR with diagnostic accuracy as the endpoint. Second, because the d' and e' calculations are far more efficient than conventional observer studies, clinical protocols with IR can be optimized in a timely fashion, and the compliance of clinical performance can be examined routinely. / Dissertation
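As a rough illustration of the detectability-index idea described above, the sketch below computes a non-prewhitening model-observer d' from a task function, a task transfer function (TTF), and a noise power spectrum (NPS), using d'^2 = [integral of |W|^2 TTF^2]^2 / integral of |W|^2 TTF^2 NPS. The frequency grid, the Gaussian lesion task, and the analytic TTF/NPS forms are illustrative assumptions standing in for the contrast- and noise-level-specific measurements developed in the dissertation.

```python
import numpy as np

# Frequency grid (cycles/mm); grid size and pixel spacing are illustrative assumptions.
n, pixel_mm = 128, 0.5
f = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_mm))
fx, fy = np.meshgrid(f, f)
rho = np.sqrt(fx**2 + fy**2)

# Task function: Fourier magnitude of the lesion contrast profile
# (a Gaussian lesion here, purely for illustration).
contrast, lesion_sigma_mm = 10.0, 2.0
W_task = contrast * 2 * np.pi * lesion_sigma_mm**2 * np.exp(-2 * (np.pi * lesion_sigma_mm * rho)**2)

# Resolution (TTF) and noise (NPS) models: assumed analytic forms, not measured data.
ttf = np.exp(-rho / 0.8)                   # task transfer function
nps = 50.0 * np.exp(-rho / 1.2) + 1.0      # noise power spectrum

df = f[1] - f[0]
numerator = (np.sum(W_task**2 * ttf**2) * df**2) ** 2
denominator = np.sum(W_task**2 * ttf**2 * nps) * df**2
d_prime = np.sqrt(numerator / denominator)
print(f"NPW detectability index d' = {d_prime:.2f}")
```

In practice the TTF and NPS would be measured from phantom images at the relevant contrast and dose levels rather than assumed analytically.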
12

TOMOGRAPHIC IMAGE RECONSTRUCTION: IMPLEMENTATION, OPTIMIZATION AND COMPARISON IN DIGITAL BREAST TOMOSYNTHESIS

Xu, Shiyu 01 December 2014 (has links)
Conventional 2D mammography has been the most effective approach to detecting early-stage breast cancer over the past decades. Tomosynthetic breast imaging is a potentially more valuable 3D technique for breast cancer detection. The limitations of current tomosynthesis systems include a longer scanning time than a conventional digital X-ray modality and low spatial resolution due to the movement of the single X-ray source. Dr. Otto Zhou's group proposed the concept of stationary digital breast tomosynthesis (s-DBT) using a carbon nanotube (CNT) based X-ray source array. Instead of mechanically moving a single X-ray tube, s-DBT uses a stationary X-ray source array, which generates X-ray beams from different view angles by electronically activating individual sources pre-positioned at the corresponding view angles, thereby eliminating focal-spot motion blur. The scanning speed is determined only by the detector readout time and the number of sources, regardless of the angular coverage span, so that blur from patient motion can be reduced by the quick scan. s-DBT is therefore potentially a promising modality for improving early breast cancer detection by providing good image quality with a fast scan and a low radiation dose.

A DBT system acquires a limited number of noisy 2D projections over a limited angular range and then mathematically reconstructs a 3D breast volume. The 3D reconstruction faces the challenges of cone-beam and flat-panel geometry, highly incomplete sampling, and a huge reconstructed volume. In this research, we investigated several representative reconstruction methods, including filtered backprojection (FBP), the simultaneous algebraic reconstruction technique (SART), and maximum likelihood (ML), and compared our proposed statistical iterative reconstruction (IR), with its particular prior and computational techniques, to these representative methods. Of all the reconstruction methods considered in this research, our proposed statistical IR appears particularly promising because it provides the flexibility of accurate physical noise modeling and geometric system description.

In the following chapters, we present several key techniques for applying statistical IR to tomosynthesis imaging data that demonstrate significant image quality improvement over conventional techniques. These techniques include physical modeling with a local voxel-pair-based prior whose parameters can be adjusted to fine-tune image quality; a pre-computed parameter κ incorporated into the prior to remove the data dependence and achieve a predictable resolution property; an effective ray-driven technique to compute the forward projection and backprojection; and an over-sampled ray-driven method to perform high-resolution reconstruction with a practical region-of-interest (ROI) technique. In addition, to solve the estimation problem with fast computation, we present a semi-quantitative method to optimize the relaxation parameter in a relaxed ordered-subsets framework and an optimization-transfer-based algorithm framework that potentially requires fewer iterations to reach acceptable convergence.

Phantom data were acquired with the s-DBT prototype system to assess the performance of these techniques and to compare our proposed method to the representative methods. The value of IR is demonstrated in improving the detectability of low-contrast objects and tiny microcalcifications, reducing cross-plane artifacts, improving resolution, and lowering noise in reconstructed images. In particular, noise power spectrum (NPS) analysis indicates a superior noise spectral property of our proposed statistical IR, especially in the high-frequency range. Together with this favorable noise property, statistical IR also provides a remarkable reconstruction MTF overall and in different areas within a focus plane. Although computational load remains a significant challenge for practical development, combined with advancing computational techniques such as graphics computing, the superior image quality provided by statistical IR can be realized to benefit diagnostics in real clinical applications.
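Of the representative methods named above, SART is the simplest to show compactly. The sketch below implements one relaxed SART update on a toy system; the system matrix, relaxation factor, and iteration count are illustrative assumptions and do not reflect the cone-beam s-DBT geometry or the statistical prior developed in this work.

```python
import numpy as np

def sart_iteration(A, b, x, relax=0.5):
    """One SART update: relaxed, row/column-normalized backprojection of the residual."""
    row_sums = A.sum(axis=1)          # sum of ray weights along each projection ray
    col_sums = A.sum(axis=0)          # sum of ray weights through each voxel
    residual = (b - A @ x) / np.where(row_sums > 0, row_sums, 1.0)
    correction = (A.T @ residual) / np.where(col_sums > 0, col_sums, 1.0)
    return x + relax * correction

# Toy example: 4 rays through a 3-voxel "object".
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true

x = np.zeros(3)
for _ in range(200):
    x = sart_iteration(A, b, x)
print(x)  # approaches x_true = [1, 2, 3] for this consistent toy system
```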
13

Assessing computed tomography image quality for combined detection and estimation tasks

Tseng, Hsin-Wu, Fan, Jiahua, Kupinski, Matthew A. 21 November 2017 (has links)
Maintaining or even improving image quality while lowering patient dose is always the goal in clinical computed tomography (CT) imaging. Iterative reconstruction (IR) algorithms have been designed to allow for a reduced dose while maintaining or even improving image quality. However, we have previously shown that the dose-saving capabilities allowed by IR differ for different clinical tasks. The channelized scanning linear observer (CSLO) was applied to study clinical tasks that combine detection and estimation when assessing CT image data. The purpose of this work is to illustrate the importance of task complexity when assessing dose savings and to move toward more realistic tasks when performing these types of studies. Human-observer validation of these methods will take place in a future publication. Low-contrast objects embedded in body-size phantoms were imaged multiple times and reconstructed by filtered back projection (FBP) and an IR algorithm. The task was to detect, localize, and estimate the size and contrast of low-contrast objects in the phantom. Independent signal-present and signal-absent regions of interest cropped from the images were channelized by dense difference-of-Gauss channels for CSLO training and testing. Estimation receiver operating characteristic (EROC) curves and the areas under the EROC curves (EAUC) were calculated by the CSLO as the figure of merit. The one-shot method was used to compute the variance of the EAUC values. Results suggest that the IR algorithm studied in this work could efficiently reduce the dose by approximately 50% while maintaining an image quality comparable to conventional FBP reconstruction, warranting further investigation using real patient data. © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
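The channelization step mentioned above reduces each ROI to a small feature vector before observer training. The sketch below builds radially symmetric difference-of-Gaussian frequency-domain channels and projects an ROI onto them; the number of channels and the width parameters are illustrative placeholders rather than the dense-difference-of-Gauss settings used in the study.

```python
import numpy as np

def dog_channels(n_pixels, n_channels=5, s0=0.015, alpha=1.67, q=1.67):
    """Radially symmetric difference-of-Gaussian channels (frequency domain).
    All parameter values here are illustrative, not the study's settings."""
    f = np.fft.fftfreq(n_pixels)                 # cycles per pixel
    fx, fy = np.meshgrid(f, f)
    rho = np.sqrt(fx**2 + fy**2)
    channels = []
    for j in range(n_channels):
        s_j = s0 * alpha**j                      # channel passband scale
        g_wide = np.exp(-0.5 * (rho / (q * s_j))**2)
        g_narrow = np.exp(-0.5 * (rho / s_j)**2)
        channels.append((g_wide - g_narrow).ravel())   # band-pass response
    return np.array(channels)                    # shape: (n_channels, n_pixels**2)

def channelize(roi, channels):
    """Project one ROI onto the channels in the frequency domain
    (equivalent, up to a scale factor, to spatial inner products with channel templates)."""
    roi_f = np.fft.fft2(roi).ravel()
    return np.real(channels @ roi_f)

rng = np.random.default_rng(0)
roi = rng.normal(size=(64, 64))                  # stand-in for a cropped signal-absent ROI
U = dog_channels(64)
print(channelize(roi, U))                        # 5 channel outputs per ROI
```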
14

Segmentation 3D de l'interligne articulaire pour l'imagerie basse dose dans le diagnostic précoce de la gonarthrose / 3D segmentation of joint space using low dose imaging for early diagnosis of knee osteoarthritis

Gharsallah-Mezlini, Houda 16 December 2016 (has links)
Osteoarthritis (OA) is a degenerative joint disease that causes pain, stiffness, and reduced mobility. Knee OA presents the greatest morbidity, and its main characteristic is cartilage loss, which induces joint space narrowing. Quantification of the joint space is the measurement used to diagnose the disease and monitor its progression. To date, conventional radiography is the reference method for this diagnosis and follow-up. However, the knee joint and its structural changes are too complex to allow diagnosis at an early stage from simple 2D images. One promising avenue of research into early diagnosis is the exploitation of 3D information about the joint space. This thesis falls within that context: its goal is the 3D segmentation and quantification of the joint space in order to reach the objective of early diagnosis of knee osteoarthritis. In this thesis we developed a semi-automatic method for quantifying the joint space. The generated 3D distance map allowed us to characterize the morphology of the joint space on high-resolution 3D images. To achieve the goal of low-dose quantification, two approaches were explored. The first was to propose a 3D segmentation approach for extracting the bone volume from a small number of projections. The second was to perform 3D quantification of the joint space on images from a low-dose scan obtained using other algorithms implemented by the partners of the VOXELO project; in this case, the segmentation of the joint space was used as a quality criterion for the reconstructions produced by these different algorithms. To test the robustness of our approach, we used high-resolution images acquired with two types of beam geometry, low-dose images, and in vivo clinical CT images. This allows us to conclude that the 3D joint space quantification method we developed is potentially applicable to images from different CT scanners. This tool will potentially be useful for detecting the early stages of osteoarthritis and for monitoring its progression in the clinic.
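A 3D joint-space quantification of the kind described above can be illustrated with a Euclidean distance transform: for each surface voxel of one bone, take the distance to the nearest voxel of the opposing bone. The masks, voxel size, and surface definition below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from scipy import ndimage

def joint_space_width_map(femur_mask, tibia_mask, voxel_size_mm=(0.5, 0.5, 0.5)):
    """Distance (mm) from each femoral surface voxel to the nearest tibial voxel."""
    # Distance from every voxel to the tibia, in millimetres.
    dist_to_tibia = ndimage.distance_transform_edt(~tibia_mask, sampling=voxel_size_mm)
    # Femoral surface = femur voxels with at least one non-femur neighbour.
    surface = femur_mask & ~ndimage.binary_erosion(femur_mask)
    return np.where(surface, dist_to_tibia, np.nan)

# Toy volumes: two flat "bones" separated by four empty slices along one axis.
femur = np.zeros((20, 20, 20), dtype=bool)
tibia = np.zeros((20, 20, 20), dtype=bool)
femur[:8, :, :] = True
tibia[12:, :, :] = True
jsw = joint_space_width_map(femur, tibia)
print(np.nanmin(jsw))   # 2.5 mm: centre-to-centre distance across the gap in this toy case
```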
15

A Novel Technique to Improve the Resolution of a Gamma Camera

Natarajamani, Deepa 21 August 2012 (has links)
No description available.
16

Compressed Sensing based Micro-CT Methods and Applications

Sen Sharma, Kriti 12 June 2013 (has links)
High-resolution micro computed tomography (micro-CT) offers 3D image resolution of about 1 µm for non-destructive evaluation of various samples. However, micro-CT performance is limited by several factors: scan times are extremely long, and sample dimensions are restricted by the X-ray beam and the detector size, the latter being the cause of the well-known interior problem. Recent advances in image reconstruction, spurred by the advent of compressed sensing (CS) theory in 2006 and interior tomography theory since 2007, offer a great reduction in the number of views and an increase in admissible sample volume while maintaining reconstruction accuracy. Yet, for a number of reasons, traditional filtered back-projection based reconstruction methods remain the de facto standard on all manufactured scanners. This work demonstrates that CS-based global and interior reconstruction methods can enhance the imaging capability of micro-CT scanners.

First, CS-based few-view reconstruction methods were developed for use with data from a real micro-CT scanner. By achieving high-quality few-view reconstruction, the new approach is able to reduce micro-CT scan time to as little as 1/8th of the time required by the conventional protocol. Next, two new reconstruction techniques were developed that allow accurate interior reconstruction using just a limited number of global scout views as additional information. These techniques represent significant progress relative to previous methods that assume a fully sampled global scan. Of the two methods, the second uses CS techniques and does not place any restrictions on the scanning geometry. Finally, analytic and iterative reconstruction methods were developed to enlarge the field of view for interior scans with a small detector. The idea is that truncated projections are acquired in an offset-detector geometry, and the reconstruction is performed through the use of a weighting function or weighted iteration updates, together with projection completion. The CS-based reconstruction yields the highest image quality in numerical simulation, yet some limitations of the CS-based techniques are observed for real data with various imperfections. In all the studies, physical micro-CT phantoms were designed and used for performance analysis, and important guidelines are suggested for future improvements. / Ph. D.
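Compressed-sensing reconstructions of the kind discussed above typically alternate a data-consistency step with a sparsity-promoting (total-variation) step. The toy sketch below shows that alternation, with a random matrix standing in for a sparse-view projector; the step sizes, iteration count, and approximate TV gradient are illustrative assumptions, not the dissertation's methods.

```python
import numpy as np

def tv_grad(x):
    """Approximate gradient of a (smoothed) 2D total-variation penalty."""
    eps = 1e-8
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

# Toy few-view problem: undersampled random "projections" of a piecewise-constant image.
rng = np.random.default_rng(0)
n = 32
x_true = np.zeros((n, n)); x_true[8:24, 8:24] = 1.0
A = rng.normal(size=(n * n // 4, n * n))          # stand-in for a sparse-view projector
b = A @ x_true.ravel()

x = np.zeros(n * n)
step_data, step_tv = 1.0 / np.linalg.norm(A, 2)**2, 0.002
for _ in range(300):
    x = x + step_data * A.T @ (b - A @ x)          # data-consistency (gradient) step
    x = (x.reshape(n, n) - step_tv * tv_grad(x.reshape(n, n))).ravel()   # TV step
print(np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true))  # relative error
```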
17

3D Reconstruction of the Magnetic Vector Potential of Magnetic Nanoparticles Using Model Based Vector Field Electron Tomography

KC, Prabhat 01 June 2017 (has links)
Lorentz TEM observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials of the sample. These potentials can be extracted from the electron wave phase shift by separating the electrostatic and magnetic phase shifts, followed by 3D tomographic reconstruction. In the past, vector field electron tomography (VFET) was used to perform the reconstruction. However, VFET is based on a conventional tomography method called filtered back-projection (FBP). Consequently, the VFET approach tends to produce inconsistencies that are prominent along the edges of the sample. We propose a model-based iterative reconstruction (MBIR) approach to improve the reconstruction of the magnetic vector potential, A(r). In the case of scalar tomography, the MBIR method is known to yield better reconstructions than the conventional FBP approach, because MBIR can incorporate prior knowledge about the system to be reconstructed. For the same reason, we seek to use the MBIR approach to optimize vector field tomographic reconstructions via the incorporation of prior knowledge. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a posteriori (MAP) estimation problem. The MAP cost function is minimized iteratively to deduce the vector potential. A detailed study of reconstructions from simulated as well as experimental data sets is provided to establish the superiority of the MBIR approach over the VFET approach.
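The MAP formulation described above adds a prior (regularization) term to the data-fidelity term given by the forward model and minimizes their sum iteratively. The following is a minimal, generic scalar sketch of that idea with a quadratic neighbour-difference prior and plain gradient descent; the actual work reconstructs a vector potential with a TEM phase-shift forward model and a different prior and optimizer, so everything below is an illustrative assumption.

```python
import numpy as np

def map_reconstruct(A, b, lam=0.1, n_iter=2000):
    """Minimize ||A x - b||^2 + lam * sum_j (x_j - x_{j+1})^2 by gradient descent."""
    n = A.shape[1]
    # First-difference operator encoding the neighbour-pair (smoothness) prior.
    D = np.eye(n)[:-1] - np.eye(n, k=1)[:-1]
    step = 0.5 / (np.linalg.norm(A, 2)**2 + 4 * lam)   # safe step for this quadratic cost
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ x - b) + 2 * lam * D.T @ (D @ x)
        x -= step * grad
    return x

rng = np.random.default_rng(1)
x_true = np.concatenate([np.zeros(10), np.ones(10)])   # piecewise-constant toy "potential"
A = rng.normal(size=(12, 20))                          # under-determined toy forward model
b = A @ x_true + 0.01 * rng.normal(size=12)
print(map_reconstruct(A, b).round(2))                  # smooth estimate of the 0/1 profile
```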
18

Iterative Reconstruction for Quantitative Material Decomposition in Dual-Energy CT

Muhammad, Arif January 2010 (has links)
It is of clinical interest to decompose a three-material mixture into its constituent substances using dual-energy CT. In radiation therapy, for example, material decomposition can be used to determine tissue properties for the calculation of dose in treatment planning. Because CT uses a polychromatic spectrum, beam-hardening artifacts prevent fully satisfactory results. Here, an iterative reconstruction algorithm proposed by A. Malusek, M. Magnusson, M. Sandborg, and G. Alm Carlsson in 2008 is implemented to achieve this goal. The iterative algorithm can be used with both single- and dual-energy CT. The material decomposition process is based on mass-conservation and volume-conservation assumptions. The iterative reconstruction algorithm was implemented and evaluated in simulation studies analyzing mixtures of water, protein, and adipose tissue. The results demonstrate that beam-hardening artifacts are effectively removed and that accurate estimates of the mass fraction of each base material can be achieved with the proposed method. We also compared this iterative reconstruction algorithm to the commonly used water pre-correction method; the results show that the iterative algorithm is more accurate.
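Under a volume-conservation assumption like the one mentioned above, a per-voxel three-material decomposition reduces to a small linear system: the measured linear attenuation coefficients (LACs) at the two energies are volume-weighted sums of the base-material LACs, and the volume fractions sum to one. The sketch below solves that system; the LAC values are rough illustrative numbers, and the thesis method additionally works with mass fractions and an iterative beam-hardening correction, which are not reproduced here.

```python
import numpy as np

def decompose_three_materials(mu_meas, mu_base):
    """Solve for volume fractions of three base materials from dual-energy LACs.

    mu_meas : (2,) measured LACs of the mixture at the low and high energy
    mu_base : (2, 3) LACs of the three base materials at the same two energies
    Returns (3,) volume fractions, using the volume-conservation constraint sum(v) = 1.
    """
    A = np.vstack([mu_base, np.ones(3)])       # two mixture equations + volume conservation
    y = np.append(mu_meas, 1.0)
    v, *_ = np.linalg.lstsq(A, y, rcond=None)  # 3 equations, 3 unknowns
    return v

# Illustrative LACs (1/cm) for water, protein, adipose at two effective energies (assumed values).
mu_base = np.array([[0.227, 0.245, 0.205],     # low-energy LACs
                    [0.184, 0.196, 0.170]])    # high-energy LACs
v_true = np.array([0.5, 0.3, 0.2])
mu_meas = mu_base @ v_true
print(decompose_three_materials(mu_meas, mu_base))   # recovers the assumed fractions [0.5, 0.3, 0.2]
```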
19

Validation des modalités d’imagerie CBCT basse dose dans les bilans de localisation des canines incluses / Validation of low-dose CBCT imaging modalities for localization assessments of impacted canines

Benaim, Eliyahou 03 1900 (has links)
OBJECTIF : L’objectif de cette étude a été de valider le potentiel des méthodes de reconstruction itérative nouvellement développées en imageries CT à faisceau conique, afin de réduire la dose d’exposition dans le cadre des bilans de localisation des canines incluses. MÉTHODOLOGIE : Quarante examens par imagerie volumétrique à faisceau conique de canines incluses ont été reconstruits à pleine dose (D), demi-dose (D2) et quart de dose (D4). Ces examens ont été analysés par un radiologiste maxillo-facial et par un résident en orthodontie. La cohérence entre les évaluations des critères radiologiques retenus a été évaluée avec les tests de Kappa Cohen. RÉSULTATS : Les résultats de cette étude ont montré de fortes valeurs de Kappa concernant l'évaluation inter-examinateur de la position de la canine impactée avec des scores compris entre 0.606 – 0.839. Les valeurs de Kappa déterminées pour la résorption, l'ankylose et les lésions associées étaient beaucoup plus faibles avec des scores compris entre 0.000 et 0.529. CONCLUSION : Cette étude a permis de montrer que la localisation des canines incluses pourrait potentiellement être possible à faible dose (1/4 dose), comparativement à un dosage conventionnel. Toutefois, le diagnostic de la résorption, de l'ankylose ou encore de certaines lésions associées nécessitent de la haute résolution et donc des acquisitions à pleine dose. / AIM : The aim of this study was to validate the potential of newly developed iterative reconstruction methods in cone beam CT imaging to reduce the exposure dose for localization assessments of impacted canines. METHODS : Forty Cone beam CT examinations of impacted canines were reconstructed at full dose (D), half dose (D2) and quarter dose (D4). These examinations were analyzed by a maxillofacial radiologist and by an orthodontic resident. Consistency between the assessments of the selected radiological criteria was evaluated with Kappa Cohen tests. RESULTS : The results of this study showed high Kappa values regarding the inter-examiner assessment of the impacted canine position with scores ranging from 0.606 - 0.839. The Kappa values determined for resorption, ankylosis and associated lesions were lower with scores between 0.000 and 0.529. CONCLUSION : This study showed that the localization of impacted canines could potentially be possible at low dose (1/4 dose), compared to a conventional assay. However, the diagnosis of resorption, ankylosis or certain associated lesions requires high resolution and therefore full dose acquisitions.
20

Transiting exoplanets : characterisation in the presence of stellar activity

Alapini Odunlade, Aude Ekundayo Pauline January 2010 (has links)
The combined observations of a planet’s transits and the radial velocity variations of its host star allow the determination of the planet’s orbital parameters and, most interestingly, of its radius and mass, and hence its mean density. Observed densities provide important constraints for planet structure and evolution models. The uncertainties on the parameters of large exoplanets mainly arise from those on stellar masses and radii. For small exoplanets, the treatment of stellar variability limits the accuracy of the derived parameters. The goal of this PhD thesis was to reduce these sources of uncertainty by developing new techniques for stellar variability filtering and for the determination of stellar temperatures, and by robustly fitting the transits taking into account external constraints on the planet’s host star.

To this end, I developed the Iterative Reconstruction Filter (IRF), a new post-detection stellar variability filter. By exploiting the prior knowledge of the planet’s orbital period, it simultaneously estimates the transit signal and the stellar variability signal, using a combination of moving average and median filters. The IRF was tested on simulated CoRoT light curves, where it significantly improved the estimate of the transit signal, particularly in the case of light curves with strong stellar variability. It was then applied to the light curves of the first seven planets discovered by CoRoT, a space mission designed to search for planetary transits, to obtain refined estimates of their parameters. As the IRF preserves all signal at the planet’s orbital period, it can also be used to search for secondary eclipses and orbital phase variations in the most promising cases. This enabled the detection of the secondary eclipses of CoRoT-1b and CoRoT-2b in the white (300–1000 nm) CoRoT bandpass, as well as a marginal detection of CoRoT-1b’s orbital phase variations. The wide optical bandpass of CoRoT limits the distinction between thermal emission and reflected light contributions to the secondary eclipse.

I developed a method to derive precise stellar relative temperatures using equivalent width ratios and applied it to the host stars of the first eight CoRoT planets. For stars with temperatures within the calibrated range, the derived temperatures are consistent with the literature but have smaller formal uncertainties. I then used a Markov Chain Monte Carlo technique to explore the correlations between planet parameters derived from transits, and the impact of external constraints (e.g. the spectroscopically derived stellar temperature, which is linked to the stellar density).

Globally, this PhD thesis highlights, and in part addresses, the complexity of performing detailed characterisation of transit light curves. Many low-amplitude effects must be taken into account: residual stellar activity and systematics, stellar limb darkening, and the interplay of all available constraints on transit fitting. Several promising areas for further improvements and applications were identified. Current and future high-precision photometry missions will discover increasing numbers of small planets around relatively active stars, and the IRF is expected to be useful in characterising them.
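The core idea of the IRF described above, using the known orbital period to separate the transit signal from stellar variability, can be sketched as an alternation: estimate the transit signal by phase-folding and median-binning the variability-corrected light curve, then estimate the stellar variability by smoothing (median plus moving average) the transit-corrected residual, and repeat. The filter below is a simplified stand-in; the bin count, smoothing windows, and toy light curve are illustrative assumptions, not the actual IRF implementation.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

def iterative_reconstruction_filter(time, flux, period, n_bins=200,
                                    smooth_window=51, n_iter=5):
    """Simplified post-detection filter: split the flux into a phase-folded transit
    signal and a smooth stellar-variability signal, refining each in turn."""
    phase = (time % period) / period
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    variability = np.zeros_like(flux)
    for _ in range(n_iter):
        # Transit signal: median of the variability-corrected flux in each phase bin.
        corrected = flux - variability
        transit = np.zeros_like(flux)
        for b in range(n_bins):
            mask = bins == b
            if mask.any():
                transit[mask] = np.median(corrected[mask])
        # Stellar variability: median-filtered then moving-averaged residual.
        residual = flux - transit
        variability = uniform_filter1d(median_filter(residual, size=smooth_window),
                                       size=smooth_window)
    return transit, variability

# Toy light curve: sinusoidal stellar variability plus a 1% box-shaped transit.
t = np.arange(0.0, 30.0, 0.01)                        # days
period, depth = 3.0, 0.01
star = 1.0 + 0.02 * np.sin(2 * np.pi * t / 7.3)
in_transit = ((t % period) / period) < 0.02
flux = star - depth * in_transit
transit_est, var_est = iterative_reconstruction_filter(t, flux, period)
print(round(1.0 - transit_est.min(), 4))              # close to the injected 1% transit depth
```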
