141

Studies On Bayesian Approaches To Image Restoration And Super Resolution Image Reconstruction

Chandra Mohan, S 07 1900 (has links) (PDF)
High quality images and video have become an integral part of our day-to-day life, in areas ranging from science and engineering to medical diagnosis. All these imaging applications call for high resolution, properly focused, crisp images. However, in real situations obtaining such high quality images is expensive, and in some cases impractical. In imaging systems such as digital cameras, blur and noise degrade the image quality: the recorded images look blurred and noisy and fail to resolve the finer details of the scene, which is clearly noticeable under zoomed conditions. Post-processing techniques based on computational methods extract the hidden information and thereby improve the quality of the captured images. The study in this thesis focuses on the deconvolution, and eventually blind deconvolution, of a single frame captured under low-light imaging conditions arising in digital photography and surveillance applications. Our intention is to restore a sharp image from its blurred and noisy observation when the blur is completely known or unknown; such inverse problems are ill-posed or twice ill-posed, respectively. This thesis consists of two major parts. The first part addresses the deconvolution/blind deconvolution problem using a Bayesian approach with a fuzzy-logic-based gradient potential as the prior functional. In comparison with analog cameras, artifacts are visible in digital cameras when the images are enlarged, and there is a demand to enhance the resolution. The increased resolution can be in the spatial dimension, the temporal dimension, or both. Super resolution reconstruction methods reconstruct images/video containing spectral information beyond what is available in the captured low resolution images. The second part of the thesis addresses resolution enhancement of observed monochromatic/color images using multiple frames of the same scene. This reconstruction problem is formulated in the Bayesian domain with the aim of reducing blur, noise and aliasing while increasing the spatial resolution. The image is modeled as a Markov random field, and a fuzzy logic filter based gradient potential is used to differentiate between edge and noisy pixels. Suitable priors are adaptively applied to obtain artifact-free/reduced images. All our approaches are experimentally validated using standard test images, with Matlab-based programming tools used for the validation. The performance of the approaches is qualitatively compared with the results of recently proposed methods. Our results turn out to be visually pleasing and quantitatively competitive.
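As a rough illustration of the Bayesian restoration setting described in this abstract, the sketch below runs gradient descent on a simple MAP objective for non-blind deconvolution. A generic quadratic smoothness prior stands in for the thesis's fuzzy-logic gradient potential, and the weight, step size and iteration count are illustrative assumptions, not values from the thesis.

    import numpy as np
    from scipy.signal import fftconvolve
    from scipy.ndimage import laplace

    def map_deconvolve(y, h, lam=0.01, step=0.5, iters=200):
        """Minimize ||h*x - y||^2 + lam*||grad x||^2 by gradient descent.

        y : blurred, noisy observation; h : known blur kernel.
        The quadratic prior is a stand-in for an edge-preserving potential.
        """
        x = y.astype(float).copy()
        h_flip = h[::-1, ::-1]                    # adjoint of convolution
        for _ in range(iters):
            resid = fftconvolve(x, h, mode="same") - y       # data mismatch
            grad = fftconvolve(resid, h_flip, mode="same") - lam * laplace(x)
            x -= step * grad
        return x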
142

Spatially Regularized Spherical Reconstruction: A Cross-Domain Filtering Approach for HARDI Signals

Salgado Patarroyo, Ivan Camilo 29 August 2013 (has links)
Despite the immense advances of science and medicine in recent years, several aspects regarding the physiology and the anatomy of the human brain are yet to be discovered and understood. A particularly challenging area in the study of human brain anatomy is that of brain connectivity, which describes the intricate means by which different regions of the brain interact with each other. The study of brain connectivity is deeply dependent on understanding the organization of white matter. The latter is predominantly comprised of bundles of myelinated axons, which serve as connecting pathways between approximately 10¹¹ neurons in the brain. Consequently, the delineation of fine anatomical details of white matter represents a highly challenging objective, and it is still an active area of research in the fields of neuroimaging and neuroscience, in general. Recent advances in medical imaging have resulted in a quantum leap in our understanding of brain anatomy and functionality. In particular, the advent of diffusion magnetic resonance imaging (dMRI) has provided researchers with a non-invasive means to infer information about the connectivity of the human brain. In a nutshell, dMRI is a set of imaging tools which aim at quantifying the process of water diffusion within the human brain to delineate the complex structural configurations of the white matter. Among the existing tools of dMRI, high angular resolution diffusion imaging (HARDI) offers a desirable trade-off between reconstruction accuracy and practical feasibility. In particular, HARDI excels in its ability to delineate complex directional patterns of the neural pathways throughout the brain, while remaining feasible for many clinical applications. Unfortunately, HARDI presents a fundamental trade-off between its ability to discriminate crossings of neural fiber tracts (i.e., its angular resolution) and the signal-to-noise ratio (SNR) of its associated images. Consequently, given that the angular resolution is of fundamental importance in the context of dMRI reconstruction, there is a need for effective algorithms for de-noising HARDI data. In this regard, the most effective de-noising approaches have been observed to be those which exploit both the angular and the spatial-domain regularity of HARDI signals. Accordingly, in this thesis, we propose a formulation of the problem of reconstruction of HARDI signals which incorporates regularization assumptions on both their angular and their spatial domains, while leading to a particularly simple numerical implementation. Experimental evidence suggests that the resulting cross-domain regularization procedure outperforms many other state-of-the-art HARDI de-noising methods. Moreover, the proposed implementation of the algorithm replaces the original reconstruction problem with a sequence of efficient filters which can be executed in parallel, suggesting its computational advantages over alternative implementations.
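For context, the angular half of such a reconstruction is often posed as a regularized spherical-harmonic fit with a Laplace-Beltrami penalty. The sketch below shows only that standard fit, under scipy's angle conventions; it does not reproduce the thesis's cross-domain filters, and the order and penalty weight are illustrative.

    import numpy as np
    from scipy.special import sph_harm

    def sh_basis(order, polar, azimuth):
        """Real, symmetric SH basis (even degrees only) at the gradient directions."""
        cols, penalty = [], []
        for l in range(0, order + 1, 2):          # antipodal symmetry: even l
            for m in range(-l, l + 1):
                y = sph_harm(abs(m), l, azimuth, polar)   # scipy's convention
                if m < 0:
                    cols.append(np.sqrt(2.0) * y.imag)
                elif m == 0:
                    cols.append(y.real)
                else:
                    cols.append(np.sqrt(2.0) * y.real)
                penalty.append((l * (l + 1)) ** 2)        # Laplace-Beltrami term
        return np.stack(cols, axis=-1), np.diag(penalty)

    def fit_sh(signal, polar, azimuth, order=8, lam=1e-3):
        B, L = sh_basis(order, polar, azimuth)    # design matrix, smoothness penalty
        return np.linalg.solve(B.T @ B + lam * L, B.T @ signal)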
143

A Comparative Evaluation Of Super-Resolution

Erbay, Fulya 01 May 2011 (has links) (PDF)
In this thesis, it is proposed to obtain high definition color images using super-resolution algorithms. Resolution enhancement of RGB, HSV and YIQ color domain images is presented. Three solution methods are presented to improve the resolution of HSV color domain images; they are suggested to suppress color artifacts in the super resolved image and to decrease the computational complexity of HSV domain applications. PSNR values are measured and compared with the results of the other two color domain experiments. In RGB color space, super-resolution algorithms are applied to the three color channels (R, G, B) separately, and PSNR values are measured. In the YIQ color domain, only the Y channel is processed with super-resolution algorithms, because Y is the luminance component of the image and the most important channel for improving resolution in that domain. Similarly, the third solution method suggested for the HSV color domain applies the super-resolution algorithm to the value channel only, since the value channel carries the brightness data of the image. The results are compared with the YIQ color domain experiments. Four different super-resolution algorithms are used in the experiments: Direct Addition, MAP, POCS and IBP. Although these methods are widely used for the reconstruction of monochrome images, here they are used for resolution enhancement of color images, and their color super-resolution performance is tested.
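A minimal sketch of the HSV-domain idea: run super-resolution on the value channel only and plainly interpolate the chroma channels. Here "super_resolve" is a hypothetical placeholder for any single-channel SR routine (Direct Addition, MAP, POCS or IBP), and the naive hue interpolation across the 0/1 wrap is a simplification.

    import numpy as np
    from skimage import color, transform

    def hsv_value_sr(rgb_lr, scale, super_resolve):
        hsv = color.rgb2hsv(rgb_lr)
        v_hr = super_resolve(hsv[..., 2], scale)          # SR on value channel only
        h_hr = transform.resize(hsv[..., 0], v_hr.shape)  # chroma: plain interpolation
        s_hr = transform.resize(hsv[..., 1], v_hr.shape)
        return color.hsv2rgb(np.dstack([h_hr, s_hr, v_hr]))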
144

A multi-stack framework in magnetic resonance imaging

Shilling, Richard Zethward 02 April 2009 (has links)
Magnetic resonance imaging (MRI) is the preferred imaging modality for visualization of intracranial soft tissues. Surgical planning, and increasingly surgical navigation, use high resolution 3-D patient-specific structural maps of the brain. However, the process of MRI is a multi-parameter tomographic technique where high resolution imagery competes against high contrast and reasonable acquisition times. Resolution enhancement techniques based on super-resolution are particularly well suited in solving the problems of resolution when high contrast with reasonable times for MRI acquisitions are needed. Super-resolution is the concept of reconstructing a high resolution image from a set of low-resolution images taken at different viewpoints or foci. The MRI encoding techniques that produce high resolution imagery are often sub-optimal for the desired contrast needed for visualization of some structures in the brain. A novel super-resolution reconstruction framework for MRI is proposed in this thesis. Its purpose is to produce images of both high resolution and high contrast desirable for image-guided minimally invasive brain surgery. The input data are multiple 2-D multi-slice Inversion Recovery MRI scans acquired at orientations with regular angular spacing rotated around a common axis. Inspired by the computed tomography domain, the reconstruction is a 3-D volume of isotropic high resolution, where the inversion process resembles a projection reconstruction problem. Iterative algorithms for reconstruction are based on the projection onto convex sets formalism. Results demonstrate resolution enhancement in simulated phantom studies, and in ex- and in-vivo human brain scans, carried out on clinical scanners. In addition, a novel motion correction method is applied to volume registration using an iterative technique in which super-resolution reconstruction is estimated in a given iteration following motion correction in the preceding iteration. A comparison study of our method with previously published methods in super-resolution shows favorable characteristics of the proposed approach.
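As a rough analogue of the consistency-projection idea behind such reconstructions, the sketch below implements an iterative back-projection flavor of multi-frame super-resolution. It assumes pre-registered 2-D frames, an integer scale factor, a Gaussian blur model and illustrative parameters; the thesis's POCS projectors and rotated 3-D slice-stack geometry are more involved.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.transform import resize

    def ibp_superres(lr_frames, scale, iters=20, beta=0.5):
        """lr_frames: list of registered low-res 2-D arrays; scale: int factor."""
        hr_shape = tuple(s * scale for s in lr_frames[0].shape)
        x = resize(np.mean(lr_frames, axis=0), hr_shape, order=1)  # HR initial guess
        for _ in range(iters):
            for y in lr_frames:
                # simulate acquisition: blur then downsample the current estimate
                sim = resize(gaussian_filter(x, sigma=scale / 2), y.shape, order=1)
                # back-project the consistency error into the HR grid
                x = x + beta * resize(y - sim, hr_shape, order=1)
        return x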
145

Strategies For Rapid MR Imaging

Sinha, Neelam 06 1900 (has links)
In MR imaging, techniques for acquisition of reduced data (rapid MR imaging) are being explored to obtain high-quality images that satisfy the conflicting requirements of simultaneously high spatial and temporal resolution, required for functional studies. The term “rapid” is used because reduction in the volume of data acquired leads to faster scans. The objective is to obtain high acceleration factors, since the acceleration factor indicates the ability of a technique to yield high-quality images with reduced data (in turn, reduced acquisition time). Reduced data acquisition in conventional (sequential) MR scanners, where a single receiver coil is used, can be achieved either by acquiring only certain k-space regions or by regularly undersampling the entire k-space. In parallel MR scanners, where multiple receiver coils are used to acquire high-SNR data, reduced data acquisition is typically accomplished using regular undersampling. Optimal region selection in the 3D k-space (restricted to the ky-kz plane, since kx is the readout direction) needs to satisfy “maximum energy compaction” and “minimum acquisition” requirements. In this thesis, a novel star-shaped truncation window is proposed to increase the achievable acceleration factor. The proposed window progressively cuts down the acquisition of k-space samples with lower energy, sampling data within a star-shaped region centered around the origin in the ky-kz plane. The missing values are extrapolated using generalized series modeling-based methods. The proposed method is applied to several real and synthetic data sets, and its superior performance is illustrated using the standard measures of error images and uptake-curve comparisons. Average values of the slope error in estimating the enhancement curve are obtained over 5 real data sets of breast and abdomen images, for an acceleration factor of 8: the proposed method results in a slope error of 5%, while the values obtained using rectangular and elliptical windows are 12% and 10%, respectively. k-t BLAST, a popular method used in cardiac and functional brain imaging, involves regular undersampling. However, the method suffers from drawbacks such as a separate training scan, blurred training estimates and aliased phase maps. In this thesis, variations of k-t BLAST are proposed to overcome these drawbacks. The proposed improved k-t BLAST incorporates a variable-density sampling scheme, phase information from the training map, and a generalized-series-extrapolated training map. The advantage of using a variable-density sampling scheme is that the training map is obtained from the actual acquisition instead of a separate pilot scan. In addition, phase information from the training map is used in place of phase from the aliased map, and the generalized-series-extrapolated training map is used instead of the zero-padded training map, leading to better estimation of the unacquired values. The existing technique and the proposed variations are applied to real fMRI data volumes. An improvement of up to 10 dB in the PSNR of activation maps is obtained, along with a 10% reduction in RMSE over the entire time series of fMRI images. The peak improvement of the proposed method over k-t BLAST is 35%, averaged over 5 data sets.
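As a toy illustration of the star-shaped window idea, the sketch below builds a binary ky-kz acquisition mask from a dense central disc plus arms along the axes and diagonals. The exact star geometry and dimensions used in the thesis are assumptions here; only the energy-compaction intuition (dense sampling near the k-space origin) is retained.

    import numpy as np

    def star_mask(n, r_core=0.15, arm_width=0.05):
        ky, kz = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
        core = np.hypot(ky, kz) < r_core                 # dense central disc
        arms = ((np.abs(ky) < arm_width) | (np.abs(kz) < arm_width)
                | (np.abs(ky - kz) < arm_width) | (np.abs(ky + kz) < arm_width))
        return core | arms                               # True = acquire this sample

    mask = star_mask(128)
    print("acceleration factor ~", mask.size / mask.sum())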
Most image reconstruction techniques in parallel MR imaging utilize knowledge of the coil sensitivities, along with assumptions on the image reconstruction function. The thesis proposes an image reconstruction technique that neither needs to estimate coil sensitivities nor makes any assumptions about the image reconstruction function. The proposed Cartesian parallel imaging technique using neural networks, called “Composite image Reconstruction And Unaliasing using Neural Networks” (CRAUNN), is a novel approach based on the observation that the aliasing patterns remain the same irrespective of whether the k-space acquisition consists of only low frequencies or the entire range of k-space frequencies. In the proposed approach, image reconstruction is carried out in a neural network framework. Data acquisition follows a variable-density sampling scheme, where low k-space frequencies are densely sampled while the rest of k-space is sparsely sampled. The blurred, unaliased images obtained from the densely sampled low k-space data are used to train the neural network; the image is then reconstructed by feeding the trained network the aliased images obtained from the regularly undersampled k-space containing the entire range of k-space frequencies. The proposed approach has been applied to the Shepp-Logan phantom as well as real brain MRI data sets. A visual measure of image quality from the compression literature, the SSIM (Structural SIMilarity) index, is employed. The average SSIM for the noisy Shepp-Logan phantom (SNR = 10 dB) using the proposed method is 0.68, while those obtained using GRAPPA and SENSE are 0.6 and 0.42, respectively. For the phantom superimposed with a fine grid-like structure, the average SSIM index obtained with the proposed method is 0.7, while those for GRAPPA and SENSE are 0.5 and 0.37, respectively. Image reconstruction is more challenging with reduced data acquired using non-Cartesian trajectories, since the aliasing introduced is not localized. CGSENSE, a popular technique for non-Cartesian parallel imaging, suffers from drawbacks like sensitivity to noise and the requirement of good coil estimates, while radial/spiral GRAPPA requires complete identical scans to obtain reconstruction kernels for specific trajectories. In our work, the proposed neural network based reconstruction method, CRAUNN, has been shown to work for general non-Cartesian acquisitions such as spiral and radial too. In addition, the proposed method does not require coil estimates or trajectory-specific customized reconstruction kernels. Experiments are performed using radial and spiral trajectories on real and synthetic data, and compared with CGSENSE. Comparison of error images shows that the proposed method has far less residual aliasing than CGSENSE. The average SSIM indices for reconstructions using CRAUNN with spirally and radially undersampled data are comparable, at 0.83 and 0.87, respectively, while the same measure for CGSENSE is 0.67 and 0.69, respectively. The average RMSE for reconstructions using CRAUNN with spirally and radially undersampled data is 11.1 and 6.1, respectively, versus 16 and 9.18 for CGSENSE.
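SSIM scores of the kind quoted above can be computed with scikit-image; a hedged example follows (the thesis's exact window and weighting settings are unknown, so defaults are used).

    import numpy as np
    from skimage.metrics import structural_similarity

    def avg_ssim(reconstructions, reference):
        """Mean SSIM of several reconstructions against a common reference image."""
        scores = [structural_similarity(
                      r, reference,
                      data_range=reference.max() - reference.min())
                  for r in reconstructions]
        return float(np.mean(scores))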
146

New algorithms for solving inverse source problems in imaging techniques with applications in fluorescence tomography

Yin, Ke 16 September 2013 (has links)
This thesis is devoted to solving the inverse source problem arising in image reconstruction problems. In general, the solution is non-unique and the problem is severely ill-posed. Therefore, small perturbations, such as noise in the data and modeling error in the forward problem, cause huge errors in the computations. In practice, the most widely used methods to tackle the problem are based on Tikhonov-type regularizations, which minimize a cost function combining a regularization term and a data fitting term. However, because the two tasks, namely regularization and data fitting, are coupled together in Tikhonov regularization, they are difficult to solve, even when each task can be solved efficiently on its own. We propose a method that overcomes the major difficulties, namely the non-uniqueness of the solution and noisy data fitting, separately. First we find a particular solution, called the orthogonal solution, that satisfies the data fitting term. Then we add to it a correction function in the kernel space so that the final solution fulfills the regularization and other physical requirements. The key idea is that the correction function in the kernel has no impact on the data fitting, and the regularization is imposed in a smaller space. Moreover, no parameter is needed to balance the data fitting and regularization terms. As a case study, we apply the proposed method to Fluorescence Tomography (FT), an emerging imaging technique well known for its ill-posedness and low image resolution in existing reconstruction techniques. We demonstrate by theory and examples that the proposed algorithm can drastically improve the computation speed and the image resolution over existing methods.
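A dense linear-algebra toy of this two-step idea: a minimum-norm particular solution fits the data exactly, and any correction drawn from the null space of the forward operator leaves that fit untouched, so regularization can be imposed there separately. The operator, sizes and correction vector below are arbitrary illustrations.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 12))    # underdetermined forward operator (toy)
    b = rng.standard_normal(5)

    x_part = np.linalg.pinv(A) @ b                 # minimum-norm solution of A x = b
    P_null = np.eye(12) - np.linalg.pinv(A) @ A    # projector onto null(A)

    z = rng.standard_normal(12)         # e.g. chosen to meet smoothness/positivity
    x = x_part + P_null @ z             # corrected solution still fits the data
    assert np.allclose(A @ x, b)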
148

Multidimensional Multicolor Image Reconstruction Techniques for Fluorescence Microscopy

Dilipkumar, Shilpa January 2015 (has links) (PDF)
Fluorescence microscopy is an indispensable tool in the areas of cell biology, histology and material science, as it enables non-invasive observation of specimens in their natural environment. The main advantage of fluorescence microscopy is that it is non-invasive and capable of imaging with very high contrast and visibility. It is dynamic, sensitive and allows high selectivity. The specificity and sensitivity of antibody-conjugated probes and genetically-engineered fluorescent protein constructs allow the user to label multiple targets and the precise location of intracellular components. However, its spatial resolution is limited to one-quarter of the excitation wavelength (Abbe's diffraction limit). The advent of new and sophisticated optics and the availability of fluorophores have made fluorescence imaging a flourishing field. Several advanced techniques like TIRF, 4PI, STED, SIM, SPIM, PALM, fPALM, GSDIM and STORM have enabled high resolution imaging by breaking the diffraction barrier and are a boon to medical and biological research. The invention of confocal and multi-photon microscopes has enabled observation of specimens embedded at depth. All these advances in fluorescence microscopy have made it a much sought-after technique. The first chapter provides an overview of the fundamental concepts in fluorescence imaging. A brief history of the emergence of the field is provided in this chapter, along with the evolution of different super-resolution microscopes. An introduction to the concept of fluorophores, their broad classification and their characteristics is also given. A brief explanation of different fluorescence imaging techniques and some trending techniques are introduced. This chapter provides a thorough foundation for the research work presented in the thesis. The second chapter deals with different microscopy techniques that have changed the face of biophotonics and nanoscale imaging. The resolution of an optical imaging system is dictated by an inherent property of the system, known as the impulse response or, more popularly, the "point spread function". A basic fluorescence imaging system is presented in this chapter, introducing the concepts of point spread function and resolution. The introduction of the confocal microscope and the multi-photon microscope brought about improved optical sectioning. The 4PI microscopy technique was invented to improve the axial resolution of the optical imaging system; using this modality, an axial resolution of up to ≈ 100 nm was made possible. The basic concepts of these techniques are provided in this chapter. The chapter concludes with a discussion of some of the optical engineering techniques that aid in improving lateral and axial resolution, which are taken up in detail in the next chapter. Introduction of spatial masks at the back aperture of the objective lens results in generation of a Bessel-like beam, which enhances our ability to see deeper inside a specimen with reduced aberrations and improved lateral resolution. Bessel beams have non-diffracting and self-reconstructing properties, which reduce scattering while observing cells embedded deep in a thick tissue. By coupling this with the 4PI super-resolution microscopy technique, multiple excitation spots can be generated along the optical axis of the two opposing high-NA objective lenses. This technique is known as the multiple excitation spot optical (MESO) microscopy technique.
It provides a lateral resolution improvement of up to 150 nm. A detailed description of the technique and a thorough analysis of its polarization properties are given in chapter 3. Chapters 4 and 5 bring the focus of the thesis to the main topic of research: multi-dimensional image reconstruction for fluorescence microscopy employing statistical techniques. We begin with an introduction to filtering techniques in Chapter 4 and concentrate on an edge-preserving denoising filter, the bilateral filter, for fluorescence microscopy images. The bilateral filter is a non-linear combination of two Gaussian filters, one based on the proximity of two pixels and the other based on their intensity similarity. These two sub-filters give the filter its edge-preserving capability. This technique is very popular in the field of image processing, and we demonstrate its application to fluorescence microscopy images. The chapter presents a thorough description of the technique along with comparisons with Poisson noise modeling. Chapters 4 and 5 provide a detailed introduction to statistical iterative reconstruction algorithms like expectation maximization-maximum likelihood (EM-ML) and maximum a-posteriori (MAP) techniques. The main objective of an image reconstruction algorithm is to recover an object from its noisy, degraded images. Deconvolution methods are generally used to denoise and recover the true object. The choice of an appropriate prior function is the crux of the MAP algorithm. The remainder of chapter 5 provides an introduction to different potential functions, and we show some results of the MAP algorithm in comparison with those of the ML algorithm. In chapter 6, we continue the discussion on MAP reconstruction, where two new potential functions are introduced and demonstrated. The first is based on applying a Taylor series expansion to the image: the image field is considered analytic, and hence the Taylor series produces an accurate estimation of the field being reconstructed. The second half of the chapter introduces an interpolation function to approximate the value of a pixel in its neighborhood. Cubic B-splines are widely used as basis functions for interpolation and are a popular technique in computer vision and medical imaging. These novel algorithms are tested on different microscopy data, such as confocal and 4PI, and the results are shown in the final part of the chapter. Tagging cell organelles with fluorescent probes enables their visualization and analysis non-invasively. In recent times, it is common to tag more than one organelle of interest and simultaneously observe their structures and functions. Multicolor fluorescence imaging has become a key technique to study specific processes like pH sensing and cell metabolism with nanoscale precision. However, this process is hindered by various problems like optical artifacts, noise, autofluorescence, photobleaching and leakage of fluorescence from one channel to the other. Chapter 7 deals with an image reconstruction technique to obtain noise-free and distortion-less data from multiple channels when imaging a multicolor sample. This technique is easily adaptable to existing imaging systems and has potential applications in biological imaging and biophysics, where multiple probes are used to tag the features of interest.
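A direct (unoptimized) bilateral filter matching the description above, with the spatial-proximity and intensity-similarity Gaussians written out explicitly. The parameter values are illustrative and assume an image scaled to [0, 1].

    import numpy as np

    def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=4):
        img = img.astype(float)
        out = np.zeros_like(img)
        pad = np.pad(img, radius, mode="reflect")
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        w_spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # proximity kernel
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                # intensity-similarity kernel, centered on the current pixel value
                w = w_spatial * np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
                out[i, j] = (w * patch).sum() / w.sum()
        return out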
The fact that the lateral resolution of an optical system is better than the axial resolution is well known. Conventional microscopes focus on cells that are very close to the cover-slip or a few microns into the specimen. However, cells that are embedded deep in a thick sample (e.g., tissues) are difficult to visualize using a conventional microscope. A number of factors, such as scattering, optical aberrations, mismatch of refractive index between the objective lens and the mounting medium, and noise, cause distortion of the images of samples at large depths. The system PSF gets distorted due to diffraction, and its shape changes rapidly at large depths. The aim of chapter 8 is to introduce a technique to reduce distortion of images acquired at depth by employing image reconstruction techniques. The key to this methodology is the modeling of the PSF at large depths. A maximum likelihood technique is then employed to reduce the streaking effects of the PSF and remove noise from the raw images. This technique enables the visualization of cells embedded at a depth of 150 μm. Several biological processes within the cell occur at a rate faster than the rate of acquisition, and hence vital information is missed during imaging. The recorded images of these dynamic events are corrupted by motion blur, noise and other optical aberrations. Chapter 9 deals with two techniques that address temporal resolution improvement of the fluorescence imaging system. The first technique focuses on accelerating the data acquisition process. This includes employing the concept of time-multiplexing to acquire sequential images from a dynamic sample using two cameras, and generating multiple sheets of light using a diffraction grating, resulting in multi-plane illumination. The second technique involves the use of parallel processing units to enable real-time image reconstruction of the acquired data. A multi-node GPU and CUDA architecture efficiently reduces the computation time of the reconstruction algorithms. Faster implementation of iterative image reconstruction techniques can aid in low-light imaging and dynamic monitoring of rapidly moving samples in real time. Employing rapid acquisition and rapid image reconstruction aids real-time visualization of cells and has immense potential in the fields of microbiology and bio-mechanics. Finally, we conclude the thesis with a brief section on the contributions of the thesis and the future scope of the work presented.
149

Toward Computationally Efficient Models for Near-infrared and Photoacoustic Tomographic Imaging

Bhatt, Manish January 2016 (has links) (PDF)
Near Infrared (NIR) and Photoacoustic (PA) imaging are promising imaging modalities that provide functional information about soft biological tissues in-vivo, with applications in breast and brain tissue imaging. These techniques use near infrared light in the wavelength range of 600-900 nm, giving them the advantage of being non-ionizing imaging modalities. This makes prolonged bed-side monitoring of tissue feasible, making them highly desirable medical imaging modalities in the clinic. The computational models that are deployed in these imaging scenarios are computationally demanding and often require high performance computing systems to deploy them in real-time. This thesis presents three computationally efficient models for near-infrared and photoacoustic imaging, without compromising the quality of the measured functional properties, to make them more appealing in clinical scenarios. The attenuation of near-infrared (NIR) light intensity as it propagates in a turbid medium like biological tissue is described by the modified Beer-Lambert law (MBLL). The MBLL is generally used to quantify changes in tissue chromophore concentrations in NIR spectroscopic data analysis. Even though the MBLL is effective in terms of providing qualitative comparison, it suffers in its applicability across tissue types and tissue dimensions. A Lambert-W function-based model for light propagation in biological tissues is proposed and introduced, which is a generalized version of the Beer-Lambert model. The proposed model provides a parametrization of tissue properties that includes two attenuation coefficients. The model is validated against Monte Carlo simulation, which is the gold standard for modeling NIR light propagation in biological tissue. Numerous human and animal tissues are included to validate the proposed empirical model, including an inhomogeneous adult human head model. The proposed model, which has a closed (analytical) form, is the first of its kind to provide accurate modeling of NIR light propagation in biological tissues. Model based image reconstruction techniques yield better quantitative accuracy in photoacoustic (PA) image reconstruction, especially in limited data cases. An exponential filtering of singular values is proposed for carrying out the image reconstruction in photoacoustic tomography. The results were compared with the widely popular Tikhonov regularization, time reversal, and state-of-the-art least-squares QR based reconstruction algorithms for three digital phantom cases with varying signal-to-noise ratios of data. The exponential filtering provided superior photoacoustic images of better quantitative accuracy. Moreover, the proposed filtering approach was observed to be less biased towards the regularization parameter and did not incur any additional computational burden, as it was implemented within the Tikhonov filtering framework. It was also shown that standard Tikhonov filtering becomes an approximation to the proposed exponential filtering. Model based image reconstruction techniques for photoacoustic tomography require an explicit regularization. An error estimate minimization based approach was proposed and developed for the determination of the regularization parameter for PA imaging. The regularization was used within the Lanczos bidiagonalization framework, which provides the advantage of dimensionality reduction for a large system of equations.
The proposed method was computationally faster than the state-of-the-art techniques and provided similar performance in terms of quantitative accuracy in the reconstructed images. The estimate can also be utilized in determining a suitable regularization parameter for other popular techniques such as Tikhonov, exponential filtering and ℓ1-norm based regularization methods.
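A filter-factor view of both regularizers on a toy linear system: the Tikhonov factors are the standard ones, while the exponential factors below are one plausible shape chosen for illustration — the thesis's exact filter function is not reproduced here.

    import numpy as np

    def filtered_svd_solve(A, b, alpha, kind="tikhonov"):
        """x = sum_i f_i * (u_i . b) / s_i * v_i, with filter factors f_i."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        s = np.maximum(s, 1e-12)                   # guard tiny singular values
        if kind == "tikhonov":
            f = s**2 / (s**2 + alpha**2)           # standard Tikhonov factors
        else:
            f = 1.0 - np.exp(-(s / alpha)**2)      # assumed exponential factors
        return Vt.T @ (f * (U.T @ b) / s)

Both choices damp the contribution of small singular values, which is where noise in the data b is amplified; for small s the exponential factor behaves like s²/α², mirroring Tikhonov's leading-order behavior, consistent with the approximation remark above.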
150

Real versus Simulated data for Image Reconstruction: A comparison between training with sparse simulated data and sparse real data

Maiga, Aïssata, Löv, Johanna January 2021 (has links)
Our study investigates how training with sparse simulated data versus sparse real data from an event camera affects image reconstruction. We trained two models, one with simulated data and one with real data, and compared them on several criteria, such as number of events, speed, and high dynamic range (HDR). The results indicate that the difference between training with simulated data and real data is not large. The model trained with real data often performed better, but the average difference between the results is only 2%. The findings confirm what earlier studies have shown: training with simulated data generalises well, even when training on sparse datasets, as this study shows.
