
Advanced Algorithms for X-ray CT Image Reconstruction and Processing

Madhuri Mahendra Nagare (17897678), 05 February 2024
X-ray computed tomography (CT) is one of the most widely used imaging modalities for medical diagnosis. Improving the quality of clinical CT images while keeping the X-ray dose to patients low has been an active area of research. Recently, there have been two major technological advances in commercial CT systems. The first is the use of deep neural networks (DNNs) to denoise and sharpen CT images; the second is the use of photon counting detectors (PCDs), which provide higher spectral and spatial resolution than conventional energy-integrating detectors. While both techniques have the potential to improve the quality of CT images significantly, challenges remain.

A denoising or sharpening algorithm for CT images must retain a favorable texture, which is critically important for radiologists. However, commonly used DNN training methodologies produce over-smooth images lacking texture. This lack of texture is a systematic error that leads to a biased estimator.

In the first portion of this thesis, we propose three algorithms to reduce the bias and thereby retain the favorable texture. The first is a novel loss function that penalizes bias in the image more heavily while training a DNN, producing more texture and detail in the results. Our experiments verify that the proposed loss function outperforms the commonly used mean squared error loss. The second is a novel approach to designing training pairs for a DNN-based sharpener. While conventional sharpeners employ noise-free ground truth, producing over-smooth images, the proposed Noise Preserving Sharpening Filter (NPSF) adds appropriately scaled noise to both the input and the ground truth, keeping the noise texture in the sharpened result similar to that of the input. Our evaluations show that the NPSF can sharpen noisy images while producing the desired noise level and texture. These two algorithms merely control the amount of texture retained and are not designed to produce texture that matches a target texture. A Generative Adversarial Network (GAN) can produce the target texture; however, naive application of GANs can introduce inaccurate or even unreal image detail. We therefore propose a Texture Matching GAN (TMGAN) that uses parallel generators to separate anatomical features from the generated texture, which allows the GAN to be trained to match the target texture without directly affecting the underlying CT image. We demonstrate that TMGAN generates enhanced image quality while producing texture that is desirable for clinical application.

In the second portion of this research, we propose a novel algorithm for the optimal statistical processing of photon-counting detector data for CT reconstruction. Current reconstruction and material decomposition algorithms for photon counting CT cannot simultaneously utilize both the measured spectral information and advanced prior models. We propose a modular framework based on Multi-Agent Consensus Equilibrium (MACE) to obtain material decompositions and reconstructions from PCD data. Our method employs a detector agent that uses PCD measurements to update an estimate, along with a prior agent that enforces both physical and empirical knowledge about the material-decomposed sinograms. Importantly, the modular framework allows the two agents to be designed and optimized independently. Our evaluations on simulated data show promising results.
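The NPSF training-pair idea described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the function name, the Gaussian noise model, and the `scale` parameter are all assumptions made for the example.

```python
import numpy as np

def npsf_training_pair(clean, sharp_target, noise_sigma, scale=1.0, rng=None):
    """Build a sharpener training pair where the SAME noise realization
    (appropriately scaled) is added to both the input and the ground truth,
    so a network trained on such pairs learns to sharpen while preserving,
    rather than removing, the input's noise texture."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, noise_sigma, size=clean.shape)
    x = clean + noise                   # noisy input image
    y = sharp_target + scale * noise    # noisy ground truth, texture kept
    return x, y

# toy usage on flat 8x8 "images"
clean = np.zeros((8, 8))
sharp = np.zeros((8, 8))
x, y = npsf_training_pair(clean, sharp, noise_sigma=0.1, scale=1.0, rng=0)
```

With `scale=1.0` the input and target carry identical noise, so the only difference the network is asked to learn is the sharpening itself.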

Simulation and 3D reconstruction from a Compton camera for hadrontherapy: influence of the acquisition parameters

Hilaire, Estelle, 18 November 2015
Hadrontherapy is a cancer treatment method that uses ion beams (protons or carbon ions) instead of X-rays. Interactions between the beam and the patient produce secondary radiation, and it has been shown that the emission positions of some of these particles are correlated with the position of the Bragg peak. Among these particles, prompt gammas are produced by excited nuclear fragments, and current work aims to design single-photon emission tomography (SPECT) systems able to image the emission positions of this radiation in real time, with millimetric precision, despite the low counting statistics. Although this is not yet possible, the ultimate goal is to monitor the deposited dose. The Compton camera is one of the SPECT systems proposed for imaging such particles, because it offers better energy resolution and the possibility of 3D imaging. In practice, however, the acquisition is affected by noise from other secondary particles, and Compton image reconstruction algorithms are more complex and still maturing, though their development is well advanced. In this thesis, we developed a complete chain from the simulation of the irradiation of a phantom by a proton beam through to the tomographic reconstruction of images from data acquired by the Compton camera. We studied different analytical and iterative reconstruction methods, and we developed an iterative reconstruction method able to take the measurement uncertainties on energy into account. Finally, we developed methods to detect the end-of-range of the reconstructed prompt-gamma distributions.
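The abstract does not spell out the thesis's energy-uncertainty-aware method, but iterative emission-tomography reconstruction of this kind is conventionally built on an MLEM-style update. The dense-matrix sketch below shows only that standard baseline, under the assumption of a known system matrix; all names are illustrative.

```python
import numpy as np

def mlem(A, y, n_iters=50, x0=None, eps=1e-12):
    """Generic MLEM iteration: x <- x * A^T(y / Ax) / (A^T 1).

    A : (n_meas, n_vox) system matrix mapping voxel intensities to
        expected detector counts; y : measured counts.
    This is the common baseline on which Compton-camera reconstructions
    are built, not the thesis's uncertainty-aware variant."""
    x = np.ones(A.shape[1]) if x0 is None else x0.astype(float).copy()
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
    for _ in range(n_iters):
        proj = A @ x                        # forward projection
        ratio = y / np.maximum(proj, eps)   # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# toy usage: with an identity system matrix, MLEM recovers the data
x_hat = mlem(np.eye(3), np.array([1.0, 2.0, 3.0]), n_iters=5)
```

For a Compton camera, each row of `A` would encode the cone-of-response geometry of one event; measurement uncertainty on energy widens those cones, which is the kind of effect the thesis's method accounts for.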

The Development of Image Processing Algorithms in Cryo-EM

Rui Yan (6591728), 15 May 2019
Cryo-electron microscopy (cryo-EM) has been established as the leading imaging technique for structural studies, from small proteins to whole cells, at the molecular level. The great advances in cryo-EM have made it possible to provide unique insights into a wide variety of biological processes in a close-to-native, hydrated state at near-atomic resolution. The development of computational approaches has contributed significantly to these achievements. This dissertation emphasizes new approaches to image processing problems in cryo-EM, including tilt series alignment evaluation; simultaneous determination of sample thickness, tilt, and electron mean free path based on the Beer-Lambert law; Model-Based Iterative Reconstruction (MBIR) of tomographic data; minimization of objective lens astigmatism during instrument alignment; and correction of defocus- and magnification-dependent astigmatism in TEM images. The final goal of these methodological developments is to improve 3D reconstruction in cryo-EM and enable more detailed structural characterization.
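The Beer-Lambert relation mentioned above, I = I0 · exp(-t_eff / Λ), where Λ is the electron mean free path, can be inverted to estimate specimen thickness. A minimal sketch, assuming a simple tilted-slab geometry and nanometer units; the function name and parameters are illustrative, not taken from the dissertation:

```python
import math

def thickness_from_intensity(I, I0, mfp_nm, tilt_deg=0.0):
    """Invert the Beer-Lambert law I = I0 * exp(-t_eff / Lambda) for
    specimen thickness, where t_eff = t / cos(tilt) is the electron
    path length through a slab tilted by tilt_deg degrees."""
    t_eff = mfp_nm * math.log(I0 / I)          # path length along the beam
    return t_eff * math.cos(math.radians(tilt_deg))

# e.g. intensity halved at zero tilt with Lambda = 300 nm
t = thickness_from_intensity(0.5, 1.0, 300.0)  # 300 * ln 2, roughly 208 nm
```

The same relation, measured over a tilt series, is what allows thickness, tilt, and mean free path to be fitted simultaneously, since each tilt angle changes the effective path length in a predictable way.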
