311

Quantitative Susceptibility Mapping (QSM) Reconstruction from MRI Phase Data

Gharabaghi, Sara January 2020 (has links)
No description available.
312

Content-Aware Image Restoration Techniques without Ground Truth and Novel Ideas to Image Reconstruction

Buchholz, Tim-Oliver 12 August 2022 (has links)
In this thesis I use state-of-the-art (SOTA) image denoising methods to denoise electron microscopy (EM) data. I then present Noise2Void, a deep-learning-based self-supervised image denoising approach that is trained on single noisy observations. Finally, I approach the missing wedge problem in tomography and introduce a novel image encoding based on the Fourier transform, which I use to predict missing Fourier coefficients directly in Fourier space with the Fourier Image Transformer (FIT). The following paragraphs briefly summarize the individual contributions.

Electron microscopy is the go-to method for high-resolution images in biological research. Modern scanning electron microscopy (SEM) setups are used to obtain neural connectivity maps, allowing us to identify individual synapses. However, slow scanning speeds are required to obtain SEM images of sufficient quality. In (Weigert et al. 2018) the authors show, for fluorescence microscopy, how pairs of low- and high-quality images can be obtained from biological samples and used to train content-aware image restoration (CARE) networks. Once such a network is trained, it can be applied to noisy data to restore high-quality images. With SEM-CARE I show how this approach can be applied directly to SEM data, allowing samples to be scanned faster and yielding 40- to 50-fold imaging speedups for SEM imaging.

In structural biology, cryo transmission electron microscopy (cryo TEM) is used to resolve protein structures and describe molecular interactions. However, the lack of contrast agents as well as beam-induced sample damage (Knapek and Dubochet 1980) prevent the acquisition of high-quality projection images. Hence, reconstructed tomograms suffer from low signal-to-noise ratio (SNR) and low contrast, which makes post-processing of such data difficult and often manual. To facilitate downstream analysis and manual browsing of cryo tomograms, I present cryoCARE, a Noise2Noise (Lehtinen et al. 2018) based denoising method that is able to restore high-contrast, low-noise tomograms from sparse-view, low-dose tilt series. An implementation of cryoCARE is publicly available as a Scipion (de la Rosa-Trevín et al. 2016) plugin.

Next, I discuss the problem of self-supervised image denoising. With cryoCARE I exploited the fact that modern cryo TEM cameras acquire multiple low-dose images, so the Noise2Noise (Lehtinen et al. 2018) training paradigm can be applied. However, acquiring multiple noisy observations is not always possible, e.g. in live imaging, with older cryo TEM cameras, or simply for lack of access to the imaging system used. In such cases we have to fall back to self-supervised denoising methods, and with Noise2Void I present the first self-supervised neural-network-based image denoising approach. Noise2Void is also available as an open-source Python package and as a one-click solution in Fiji (Schindelin et al. 2012).

In the last part of this thesis I present the Fourier Image Transformer (FIT), a novel approach to image reconstruction with Transformer networks. I develop a novel 1D image encoding based on the Fourier transform, in which each prefix encodes the whole image at reduced resolution; I call this the Fourier Domain Encoding (FDE). I use FIT with FDEs and present proof of concept for super-resolution and tomographic reconstruction with missing wedge correction. The missing wedge artefacts in tomographic imaging originate in sparse-view imaging. Sparse-view imaging keeps the total exposure of the imaged sample to a minimum by acquiring only a limited number of projection images. However, tomographic reconstructions from sparse-view acquisitions suffer from missing wedge artefacts, characterized by missing wedges in Fourier space and visible as streaking artefacts in real image space. I show that FITs can be applied to tomographic reconstruction and that they fill in the missing Fourier coefficients. Hence, FIT for tomographic reconstruction addresses the missing wedge problem at its source.

Contents:
1 Introduction: 1.1 Scanning Electron Microscopy; 1.2 Cryo Transmission Electron Microscopy (1.2.1 Single Particle Analysis; 1.2.2 Cryo Tomography); 1.3 Tomographic Reconstruction; 1.4 Overview and Contributions
2 Denoising in Electron Microscopy: 2.1 Image Denoising; 2.2 Supervised Image Restoration (2.2.1 Training and Validation Loss; 2.2.2 Neural Network Architectures); 2.3 SEM-CARE (2.3.1 SEM-CARE Experiments; 2.3.2 SEM-CARE Results); 2.4 Noise2Noise; 2.5 cryoCARE (2.5.1 Restoration of cryo TEM Projections; 2.5.2 Restoration of cryo TEM Tomograms; 2.5.3 Automated Downstream Analysis); 2.6 Implementations and Availability; 2.7 Discussion (2.7.1 Tasks Facilitated through cryoCARE)
3 Noise2Void: Self-Supervised Denoising: 3.1 Probabilistic Image Formation; 3.2 Receptive Field; 3.3 Noise2Void Training (3.3.1 Implementation Details); 3.4 Experiments (3.4.1 Natural Images; 3.4.2 Light Microscopy Data; 3.4.3 Electron Microscopy Data; 3.4.4 Errors and Limitations); 3.5 Conclusion and Follow-up Work
4 Fourier Image Transformer: 4.1 Transformers (4.1.1 Attention Is All You Need; 4.1.2 Fast-Transformers; 4.1.3 Transformers in Computer Vision); 4.2 Methods (4.2.1 Fourier Domain Encodings (FDEs); 4.2.2 Fourier Coefficient Loss); 4.3 FIT for Super-Resolution (4.3.1 Super-Resolution Data; 4.3.2 Super-Resolution Experiments); 4.4 FIT for Tomography (4.4.1 Computed Tomography Data; 4.4.2 Computed Tomography Experiments); 4.5 Discussion
5 Conclusions and Outlook
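The prefix property of the Fourier Domain Encoding described in the abstract above can be illustrated with a short sketch: ordering the 2D FFT coefficients by radial frequency means any prefix of the resulting 1D sequence is a low-pass, reduced-resolution rendering of the whole image. This is an illustration of the idea only, not the thesis implementation; the function names are mine, and the real FDE also handles coefficient normalization and conversion into a Transformer-ready token sequence.

```python
import numpy as np

def fourier_domain_encoding(img):
    """Order 2D FFT coefficients by radial frequency so that any prefix
    of the resulting 1D sequence corresponds to a low-pass (reduced
    resolution) version of the whole image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = f.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h // 2, xx - w // 2)
    order = np.argsort(radius.ravel())          # low frequencies first
    return f.ravel()[order], order

def decode_prefix(coeffs, order, shape, n):
    """Reconstruct an image from only the first n coefficients."""
    f = np.zeros(np.prod(shape), dtype=complex)
    f[order[:n]] = coeffs[:n]                   # keep only the prefix
    return np.fft.ifft2(np.fft.ifftshift(f.reshape(shape))).real

img = np.random.rand(64, 64)
coeffs, order = fourier_domain_encoding(img)
lowres = decode_prefix(coeffs, order, img.shape, n=64 * 64 // 4)
```

Decoding a quarter of the sequence, as above, yields the same image at roughly half the resolution per axis, which is exactly the property that lets a Transformer predict later (high-frequency) coefficients from earlier ones.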
313

Calculating Center of Mass Using List Mode Data from PET Biograph128 mCT-1104 / Beräkning av masscentrum genom användning av list mode data från PET Biograph128 mCT-1104

Rane, Lukas, Runeskog, Henrik January 2019 (has links)
A common problem within positron emission tomography (PET) examinations of the brain is patient motion. If the patient's head moves during an examination, all the data acquired after the movement is unsuitable for clinical use, which means that a lot of data recovered from PET is not used at all. Motion tracking during PET acquisitions of the brain is not a well-explored issue within medical imaging, relative to the magnitude of the problem. Due to the radiation risks of the examination and the logistics at the hospital, a second acquisition is not preferred, so a method to avoid one would be welcome. PET data saved in list mode makes it possible to analyze the data during an examination. By calculating the center of mass of the examined object from the raw list-mode PET data alone and using it as a tracking point, it would be possible to track motion during an acquisition. The center of mass could therefore be used as a reference to connect two time intervals on either side of the moment where the motion occurred. The raw PET data used for this project was acquired at the Nuclear Medicine Department of Karolinska University Hospital in Huddinge and covered four one-minute acquisitions in different positions, with two different objects, saved in list mode. The acquisitions were analyzed with the Siemens software e7-tools and sliced into time intervals. To calculate the center of mass within these time intervals, two methods were developed. One method used only the Siemens software e7-tools and histogrammed the time-of-flight bin positions. The other method used each event's position in its sinogram to derive a sinusoidal center-of-mass equation, which yields coordinates describing the center of mass in a specific slice.
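The sinogram-based method described above rests on a standard identity: a point mass at (x, y) traces the sinusoid s = x cos θ + y sin θ through a parallel-beam sinogram, so a least-squares fit over the recorded event positions recovers the center of mass of a slice. Below is a minimal sketch of that fit, assuming parallel-beam geometry and angles in radians; the variable names and the optional weighting are mine, and the e7-tools pipeline itself is not reproduced.

```python
import numpy as np

def center_of_mass_from_sinogram(thetas, s_values, weights=None):
    """Fit s = x*cos(theta) + y*sin(theta) in the least-squares sense.

    thetas   : projection angles (radians) of the recorded events
    s_values : radial bin positions of the events
    weights  : optional per-event weights (e.g. counts per bin)
    Returns the (x, y) center of mass in the slice.
    """
    A = np.column_stack([np.cos(thetas), np.sin(thetas)])
    if weights is not None:
        sw = np.sqrt(weights)
        A, s_values = A * sw[:, None], s_values * sw
    (x, y), *_ = np.linalg.lstsq(A, s_values, rcond=None)
    return x, y

# toy check: synthetic events from a point at (12.0, -5.0)
rng = np.random.default_rng(0)
thetas = rng.uniform(0, np.pi, 1000)
s = 12.0 * np.cos(thetas) - 5.0 * np.sin(thetas) + rng.normal(0, 0.5, 1000)
print(center_of_mass_from_sinogram(thetas, s))   # ~ (12.0, -5.0)
```

Comparing the fitted (x, y) between consecutive time intervals is then a direct way to flag the moment a motion occurred.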
314

Multisensor Microwave Remote Sensing in the Cryosphere

Remund, Quinn P. 14 May 2003 (has links) (PDF)
Because the earth's cryosphere influences global weather patterns and climate, the scientific community has had great interest in monitoring this important region. Microwave remote sensing has proven to be a useful tool in estimating sea and glacial ice surface characteristics, with both scatterometers and radiometers exhibiting high sensitivity to important ice properties. This dissertation presents an array of studies focused on extracting key surface features from multisensor microwave data sets. First, several enhanced-resolution image reconstruction issues are addressed. Among these are the optimization of the scatterometer image reconstruction (SIR) algorithm for NASA scatterometer (NSCAT) data, an analysis of Ku-band azimuthal modulation in Antarctica, and inter-sensor European Remote Sensing Satellite (ERS) calibration. Next, various methods for the removal of atmospheric distortions in image reconstruction of passive radiometer observations are considered. An automated algorithm is proposed which determines the spatial extent of sea ice in the Arctic and Antarctic regions from NSCAT data. A multisensor iterative sea ice statistical classification method, which adapts to the temporally varying signatures of ice types, is developed. The sea ice extent and classification algorithms are adapted for current SeaWinds scatterometer data sets. Finally, the automated inversion of large-scale forward electromagnetic scattering models is considered and used to study the temporal evolution of the scattering properties of polar sea ice.
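As a rough illustration of the kind of iterative, adaptive statistical classification mentioned above, the sketch below alternately re-estimates per-class Gaussian statistics from the current labeling and reassigns pixels by maximum likelihood, so that the class signatures can drift with the season. This is a generic sketch under assumptions of my own (Gaussian classes with non-degenerate covariance, a label map from the previous time step as initialization); it is not Remund's algorithm.

```python
import numpy as np

def iterative_ice_classification(features, init_labels, n_iter=10):
    """Iteratively re-estimate per-class mean/covariance from the current
    labeling and reassign pixels by maximum likelihood, letting class
    statistics track temporally varying microwave signatures.

    features    : (n_pixels, n_channels) multisensor observations,
                  e.g. stacked scatterometer sigma-0 and radiometer Tb
    init_labels : initial class assignment (e.g. from the last time step)
    """
    labels = init_labels.copy()
    classes = np.unique(labels)
    for _ in range(n_iter):
        log_like = np.empty((features.shape[0], len(classes)))
        for k, c in enumerate(classes):
            x = features[labels == c]
            mu, cov = x.mean(axis=0), np.cov(x, rowvar=False)
            d = features - mu
            inv = np.linalg.inv(cov)
            # Gaussian log-likelihood up to a shared constant
            log_like[:, k] = (-0.5 * np.einsum('ij,jk,ik->i', d, inv, d)
                              - 0.5 * np.log(np.linalg.det(cov)))
        labels = classes[np.argmax(log_like, axis=1)]
    return labels
```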
315

Time-of-Flight Neutron CT for Isotope Density Reconstruction and Cone-Beam CT Separable Models

Thilo Balke (15348532) 26 April 2023 (has links)
There is a great need for accurate image reconstruction in the context of non-destructive evaluation. Major challenges include the ever-increasing need for high-resolution reconstruction with limited scan and reconstruction time, and thus fewer and noisier measurements. In this thesis, we leverage advanced Bayesian modeling of the physical measurement process and probabilistic prior information about the image distribution in order to yield higher image quality despite limited measurement time. We demonstrate efficient computational performance in several ways: through more efficient memory access, optimized parametrization of the system model, and multi-pixel parallelization. We show that by building high-fidelity forward models we can generate quantitatively reliable reconstructions despite very limited measurement data.

In the first chapter, we introduce an algorithm for estimating isotopic densities from neutron time-of-flight imaging data. Energy-resolved neutron imaging (ERNI) is an advanced neutron radiography technique capable of non-destructively extracting spatial isotopic information within a given material. Energy-dependent radiography image sequences can be created using neutron time-of-flight techniques. In combination with uniquely characteristic isotopic neutron cross-section spectra, isotopic areal densities can be determined on a per-pixel basis, resulting in a set of areal density images for each isotope present in the sample. By performing ERNI measurements over several rotational views, an isotope-decomposed 3D computed tomography is possible. We demonstrate a method involving robust and automated background estimation based on a linear programming formulation. The extremely high noise due to low-count measurements is overcome using a sparse coding approach. This allows for a significant computation-time improvement, from weeks to a few hours compared to existing neutron evaluation tools, enabling at the present stage a semi-quantitative, user-friendly routine application.

In the second chapter, we introduce the TRINIDI algorithm, a more refined algorithm for the same problem. Accurate reconstruction of 2D and 3D isotope densities is a desired capability with great potential impact in applications such as the evaluation and development of next-generation nuclear fuels. Neutron time-of-flight (TOF) resonance imaging offers a potential approach by exploiting the characteristic neutron absorption spectra of each isotope. However, it is a major challenge to compute quantitatively accurate images due to a variety of confounding effects such as severe Poisson noise, background scatter, beam non-uniformity, absorption non-linearity, and extended source pulse duration. We present the TRINIDI algorithm, which is based on a two-step process: we first estimate the neutron flux and background counts, and then reconstruct the areal densities of each isotope and pixel. Both components are based on the inversion of a forward model that accounts for the highly non-linear absorption, energy-dependent emission profile, and Poisson noise, while also modeling the substantial spatio-temporal variation of the background and flux. To do this, we formulate the non-linear inverse problem as two optimization problems that are solved in sequence. We demonstrate on both synthetic and measured data that TRINIDI can reconstruct quantitatively accurate 2D views of isotopic areal density, which can then be reconstructed into quantitatively accurate 3D volumes of isotopic volumetric density.

In the third chapter, we introduce a separable forward model for cone-beam computed tomography (CT) that enables efficient computation of a Bayesian model-based reconstruction. Cone-beam CT is an attractive tool for many kinds of non-destructive evaluation (NDE). Model-based iterative reconstruction (MBIR) has been shown to improve reconstruction quality and reduce scan time, but the computational burden and storage of the system matrix are challenging. We present a separable representation of the system matrix that can be completely stored in memory and accessed cache-efficiently. This is done by quantizing the voxel position for one of the separable subproblems. A parallelized algorithm, which we refer to as the zipline update, is presented that speeds up the computation of the solution by about 50 to 100 times on 20 cores by updating groups of voxels together. The quality of the reconstruction and the algorithmic scalability are demonstrated on real cone-beam CT data from an NDE application. We show that the reconstruction can be done from a sparse set of projection views while reducing artifacts visible in the conventional filtered back projection (FBP) reconstruction. We present qualitative results using a Markov Random Field (MRF) prior and a Plug-and-Play denoiser.
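The forward model that TRINIDI-style methods invert can be summarized in a few lines: Beer-Lambert attenuation that is non-linear in the areal densities, scaled by the flux, offset by background, and observed through Poisson noise. The sketch below is a deliberately simplified toy version; it ignores the extended source pulse duration and detector effects mentioned above, and all array shapes and values are made up for illustration.

```python
import numpy as np

def neutron_counts(areal_density, cross_sections, flux, background, rng):
    """Toy version of the measurement model inverted by TRINIDI-style methods.

    areal_density  : (n_pixels, n_isotopes) isotopic areal densities z
    cross_sections : (n_isotopes, n_tof_bins) energy-dependent cross sections D
    flux           : (n_pixels, n_tof_bins) open-beam neutron flux phi
    background     : (n_pixels, n_tof_bins) background counts b

    Transmission follows Beer-Lambert: T = exp(-z D); the detector records
    Poisson counts around phi * T + b.
    """
    transmission = np.exp(-areal_density @ cross_sections)  # non-linear in z
    expected = flux * transmission + background
    return rng.poisson(expected)

rng = np.random.default_rng(1)
z = np.abs(rng.normal(0.02, 0.01, (100, 2)))   # 2 isotopes, 100 pixels
D = np.abs(rng.normal(5.0, 2.0, (2, 64)))      # 64 TOF (energy) bins
phi = np.full((100, 64), 500.0)
b = np.full((100, 64), 20.0)
counts = neutron_counts(z, D, phi, b, rng)
```

Reconstruction then amounts to estimating phi and b, followed by inverting the exponential for z per pixel, which is the two-step structure described in the abstract.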
316

Coil Sensitivity Estimation and Intensity Normalisation for Magnetic Resonance Imaging / Spolkänslighetsbestämning och intensitetsnormalisering för magnetresonanstomografi

Herterich, Rebecka, Sumarokova, Anna January 2019 (has links)
The quest for improved efficiency in magnetic resonance imaging has motivated the development of strategies like parallel imaging, where arrays of multiple receiver coils are operated simultaneously. The objective of this project was to estimate the phased-array coil sensitivity profiles of magnetic resonance images of the human body. These sensitivity maps can then be used to perform an intensity inhomogeneity correction of the images. Through investigative work in Matlab, a script was developed that uses information embedded in the raw data from a magnetic resonance scan to generate coil sensitivities for each voxel of the volume of interest and recalculate them into two-dimensional sensitivity maps of the corresponding diagnostic images. The resulting mapped sensitivity profiles can be used in Sensitivity Encoding, where a more exact solution can be obtained using the carefully estimated sensitivity maps of the images.
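One common way to approximate what is described above, estimating smooth coil sensitivity profiles from the scan data itself, is to divide each coil image by the sum-of-squares (SoS) combination, which cancels the shared anatomy, and then low-pass filter the result, since true sensitivities vary slowly in space. The sketch below follows that generic recipe and is not the authors' Matlab script; the smoothing scales are arbitrary choices of mine.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_sensitivities(coil_images, sigma=10, eps=1e-8):
    """Sum-of-squares (SoS) based estimate of per-coil sensitivity maps.

    coil_images : (n_coils, ny, nx) complex images, one per receiver coil.
    Dividing each coil image by the SoS combination cancels the anatomy;
    smoothing then suppresses noise, since sensitivities vary slowly.
    """
    sos = np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))
    raw = coil_images / (sos + eps)
    # filter real and imaginary parts separately, spatial axes only
    sens = (gaussian_filter(raw.real, sigma=(0, sigma, sigma))
            + 1j * gaussian_filter(raw.imag, sigma=(0, sigma, sigma)))
    return sens, sos

def intensity_correct(sos, sigma=25, eps=1e-8):
    """Crude inhomogeneity correction: treat a heavily smoothed SoS image
    as the shading field and divide it out."""
    shading = gaussian_filter(sos, sigma)
    return sos / (shading + eps)
```

In a SENSE-style reconstruction, the estimated sensitivity maps enter the encoding matrix directly, which is why more careful estimates translate into a more exact unfolding solution.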
317

Dynamic Myocardial SPECT Imaging Using Single-Pinhole Collimator Detectors: Distance-Driven Forward and Back-Projection, and KDE-Based Image Reconstruction Methods

Ihsani, Alvin January 2015 (has links)
SPECT (Single Photon Emission Computed Tomography) is the modality of choice for myocardial perfusion imaging due to its high sensitivity and specificity and the lower cost of equipment and radiotracers compared to PET. Dynamic SPECT imaging provides new possibilities for myocardial perfusion imaging by encoding more information in the reconstructed images in the form of time-activity functions. The recent introduction of small solid-state SPECT cameras using multiple-pinhole collimators, such as the GE Discovery NM 530c, offers the ability to obtain accurate myocardial perfusion information with markedly decreased acquisition times, and offers the possibility of obtaining quantitative dynamic perfusion information. This research targets two aspects of dynamic SPECT imaging with the intent of contributing to the improvement of projection and reconstruction methods. First, we propose an adaptation of distance-driven projection to SPECT imaging systems using single-pinhole collimator detectors. The proposed distance-driven projection approach accounts for the finite size of the pinhole, the possibly coarse discretization of the detector and object spaces, and the tilt of the detector surface. We evaluate the projection method in terms of resolution and signal-to-noise ratio (SNR). We also propose two maximum a posteriori (MAP) iterative image reconstruction methods employing kernel density estimators. The proposed reconstruction methods cluster time-activity functions (or intensity values) by their spatial proximity and similarity, determined by spatial and range scaling parameters, respectively. The results of our experiments support our belief that the proposed reconstruction methods are especially effective when performing reconstructions from low-count measurements. / Thesis / Doctor of Philosophy (PhD)
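The clustering behaviour described above, grouping time-activity functions by spatial proximity and similarity under separate spatial and range scaling parameters, can be pictured as a product of Gaussian kernels, much like a bilateral filter applied to time-activity curves. The sketch below computes such pairwise weights; it is my illustrative reading of that prior, not the thesis code, and the dense n-by-n weight matrix is only feasible for small voxel counts.

```python
import numpy as np

def kde_weights(coords, tacs, sigma_s, sigma_r):
    """Pairwise kernel weights that group voxels by spatial proximity
    (scale sigma_s) and time-activity-curve similarity (scale sigma_r).

    coords : (n_voxels, 3) voxel positions
    tacs   : (n_voxels, n_time) time-activity functions
    Returns an (n_voxels, n_voxels) weight matrix in (0, 1].
    """
    d_space = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    d_range = ((tacs[:, None, :] - tacs[None, :, :]) ** 2).sum(-1)
    return np.exp(-d_space / (2 * sigma_s ** 2)
                  - d_range / (2 * sigma_r ** 2))
```

Within a MAP iteration, weights of this form pull each voxel's time-activity estimate toward those of nearby, similar voxels, which is what makes such priors effective on low-count data.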
318

Prediction of Multi-Phase Liver CT Volumes Using Deep Neural Network

Afroza Haque (17544888) 04 December 2023 (has links)
<p dir="ltr">Progress in deep learning methodologies has transformed the landscape of medical image analysis, opening fresh pathways for precise and effective diagnostics. Currently, multi-phase liver CT scans follow a four-stage process, commencing with an initial scan carried out before the administration of <a href="" target="_blank">intravenous (IV) contrast-enhancing material</a>. Subsequently, three additional scans are performed following the contrast injection. The primary objective of this research is to automate the analysis and prediction of 50% of liver CT scans. It concentrates on discerning patterns of intensity change during the second, third, and fourth phases concerning the initial phase. The thesis comprises two key sections. The first section employs the non-contrast phase (first scan), late hepatic arterial phase (second scan), and portal venous phase (third scan) to predict the delayed phase (fourth scan). In the second section, the non-contrast phase and late hepatic arterial phase are utilized to predict both the portal venous and delayed phases. The study evaluates the performance of two deep learning models, U-Net and U²-Net. The process involves preprocessing steps like subtraction and normalization to compute contrast difference images, followed by post-processing techniques to generate the predicted 2D CT scans. Post-processing steps have similar techniques as in preprocessing but are performed in reverse order. Four fundamental evaluation metrics, including <a href="" target="_blank">Mean Absolute Error (MAE), Signal-to-Reconstruction Error Ratio (SRE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM), </a>are employed for assessment. Based on these evaluation metrics, U²-Net performed better than U-Net for the prediction of both portal venous (third) and delayed (fourth) phases. Specifically, U²-Net exhibited superior MAE and PSNR results for the predicted third and fourth scans. However, U-Net did show slightly better SRE and SSIM performance in the predicted scans. On the other hand, for the exclusive prediction of the fourth scan, U-Net outperforms U²-Net in all four evaluation metrics. This implementation shows promising results which will eliminate the need for additional CT scans and reduce patients’ exposure to harmful radiation. Predicting 50% of liver CT volumes will reduce exposure to harmful radiation by half. The proposed method is not limited to liver CT scans and can be applied to various other multi-phase medical imaging techniques, including multi-phase CT angiography, multi-phase renal CT, contrast-enhanced breast MRI, and more.</p>
319

Deep Learning-based Regularizers for Cone Beam Computed Tomography Reconstruction / Djupinlärningsbaserade regulariserare för rekonstruktion inom volymtomografi

Syed, Sabina, Stenberg, Josefin January 2023 (has links)
Cone Beam Computed Tomography is a technology for visualizing the 3D interior anatomy of a patient. It is important for image-guided radiation therapy in cancer treatment. During a scan, iterative methods are often used for the image reconstruction step. A key challenge is the ill-posedness of the resulting inversion problem, which causes the images to become noisy. To combat this, regularizers can be introduced, which help stabilize the problem. This thesis focuses on Adversarial Convex Regularization, which uses deep learning to regularize the scans toward a target image quality. It can be interpreted in a Bayesian setting by letting the regularizer be the prior, approximating the likelihood with the measurement error, and obtaining the patient image through the maximum-a-posteriori estimate. Adversarial Convex Regularization has previously shown promising results in regular Computed Tomography, and this study investigates its potential in Cone Beam Computed Tomography. Three learned regularization methods have been developed, all based on convolutional neural network architectures. One model is based on three-dimensional convolutional layers, while the remaining two rely on 2D layers. These two are later adapted to 3D reconstruction, either by stacking a 2D model or by averaging 2D models trained in three orthogonal planes. All neural networks are trained on simulated male pelvis data provided by Elekta. The 3D convolutional neural network model has proven to be heavily memory-consuming while not performing better than current reconstruction methods with respect to image quality. The two architectures based on merging multiple 2D neural network gradients for 3D reconstruction are novel contributions that avoid the memory issues. These two models outperform current methods on multiple image quality metrics, such as Peak Signal-to-Noise Ratio and Structural Similarity Index Measure, and they also generalize well to real Cone Beam Computed Tomography data. Additionally, the architecture based on a weighted average of 2D neural networks captures spatial interactions to a larger extent and can be adjusted to favor the plane that best shows the field of interest, a possibly desirable feature in medical practice.
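The Bayesian reading given above, with the learned regularizer playing the role of the prior, leads to a maximum-a-posteriori reconstruction of the generic form x* = argmin_x ||Ax - y||^2 + lam * R(x). The sketch below runs plain gradient descent on that objective with stand-ins of my own for the projector and the regularizer gradient; the thesis networks, the Elekta data handling, and the adversarial training of R are not reproduced.

```python
import numpy as np

def map_reconstruct(A, At, y, grad_R, x0, lam=0.1, step=1e-3, n_iter=200):
    """Maximum-a-posteriori reconstruction with a learned regularizer R:
        x* = argmin_x  ||A x - y||^2 + lam * R(x)
    A, At  : forward projector and its adjoint (given as functions)
    grad_R : gradient of the learned convex regularizer, e.g. obtained
             by backpropagation through the trained network
    """
    x = x0.copy()
    for _ in range(n_iter):
        grad = 2 * At(A(x) - y) + lam * grad_R(x)
        x -= step * grad
    return x

# toy usage with an identity "projector" and a Tikhonov stand-in prior
A = At = lambda v: v
grad_R = lambda v: v     # gradient of 0.5 * ||v||^2, stand-in for the network
y = np.random.rand(32, 32)
x_hat = map_reconstruct(A, At, y, grad_R, x0=np.zeros_like(y))
```

Because the learned regularizer is constrained to be convex, this descent inherits the stability of classical variational reconstruction while the prior itself is data-driven.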
320

Bregman Operator Splitting with Variable Stepsize for Total Generalized Variation Based Multi-Channel MRI Reconstruction

Cowen, Benjamin E. 02 September 2015 (has links)
No description available.
