71

Numerical solutions to some ill-posed problems

Hoang, Nguyen Si January 1900 (has links)
Doctor of Philosophy / Department of Mathematics / Alexander G. Ramm / Several methods for a stable solution to the equation $F(u)=f$ have been developed. Here $F:H\to H$ is an operator in a Hilbert space $H$, and we assume that noisy data $f_\delta$, $\|f_\delta-f\|\le \delta$, are given in place of the exact data $f$. When $F$ is a linear bounded operator, two versions of the Dynamical Systems Method (DSM) with stopping rules of Discrepancy Principle type are proposed and justified mathematically. When $F$ is a non-linear monotone operator, various versions of the DSM are studied. A Discrepancy Principle for solving the equation is formulated and justified, and several versions of the DSM for solving it are developed: a Newton-type method, a gradient-type method, and a simple iteration method. A priori and a posteriori choices of stopping rules for these methods are proposed and justified. Convergence of the solutions obtained by these methods to the minimal-norm solution of $F(u)=f$ is proved. Iterative schemes with a posteriori stopping rules corresponding to the proposed DSM are formulated, and convergence of these schemes to a solution of $F(u)=f$ is proved. This dissertation consists of six chapters based on joint papers by the author and his advisor, Prof. Alexander G. Ramm, published in different journals. The first two chapters deal with equations with linear bounded operators, and the last four chapters deal with non-linear equations with monotone operators.
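To make the structure of such methods concrete, here is a minimal sketch of a simple iteration for a linear bounded operator with an a posteriori stopping rule of Discrepancy Principle type. It is not the dissertation's DSM formulation; the matrix A, the constant tau, the step size, and the toy problem are all illustrative assumptions.

```python
import numpy as np

def simple_iteration_discrepancy(A, f_delta, delta, tau=1.5, max_iter=5000):
    """u_{k+1} = u_k - step * A^T (A u_k - f_delta), stopped a posteriori
    once the discrepancy ||A u_k - f_delta|| drops below tau * delta."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # ensures 0 < step < 2/||A||^2
    u = np.zeros(A.shape[1])
    for k in range(max_iter):
        residual = A @ u - f_delta
        if np.linalg.norm(residual) <= tau * delta:   # discrepancy principle
            break
        u = u - step * (A.T @ residual)
    return u, k

# Ill-conditioned toy problem: a Hilbert matrix and a noisy right-hand side
n = 50
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
u_true = np.sin(np.linspace(0, np.pi, n))
delta = 1e-4
f_delta = A @ u_true + delta * np.random.randn(n) / np.sqrt(n)
u_rec, iters = simple_iteration_discrepancy(A, f_delta, delta)
```

Stopping by the discrepancy rather than by a fixed iteration count is what makes the scheme stable: iterating past the noise level amplifies the data error instead of reducing the reconstruction error.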
72

Iterative Filtered Backprojection Methods for Helical Cone-Beam CT

Sunnegårdh, Johan January 2009 (has links)
State-of-the-art reconstruction algorithms for medical helical cone-beam Computed Tomography (CT) are of non-exact Filtered Backprojection (FBP) type. They are attractive because of their simplicity and low computational cost, but they produce sub-optimal images with respect to artifacts, resolution, and noise. This thesis deals with possibilities to improve the image quality by means of iterative techniques. The first algorithm, Regularized Iterative Weighted Filtered Backprojection (RIWFBP), is an iterative algorithm employing the non-exact Weighted Filtered Backprojection (WFBP) algorithm [Stierstorfer et al., Phys. Med. Biol. 49, 2209-2218, 2004] in the update step. We have measured and compared artifact reduction as well as resolution and noise properties for RIWFBP and WFBP. The results show that artifacts originating in the non-exactness of the WFBP algorithm are suppressed within five iterations without notable degradation in terms of resolution versus noise. Our experiments also indicate that the number of required iterations can be reduced by employing a technique known as ordered subsets. A small modification of RIWFBP leads to a new algorithm, the Weighted Least Squares Iterative Filtered Backprojection (WLS-IFBP). This algorithm has a slightly lower rate of convergence than RIWFBP, but in return it has the attractive property of converging to a solution of a certain least squares minimization problem. Thereby, theory and algorithms from optimization theory become applicable. Besides linear regularization, we have examined edge-preserving non-linear regularization. In this case, resolution becomes contrast dependent, a fact that can be utilized for improving high-contrast resolution without degrading the signal-to-noise ratio in low-contrast regions. Resolution measurements at different contrast levels and anthropomorphic phantom studies confirm this property. Furthermore, an even more pronounced suppression of artifacts is observed. Iterative reconstruction opens the door to more realistic modeling of the input data acquisition process than what is possible with FBP. We have examined the possibility to improve the forward projection model by (i) multiple ray models, and (ii) calculating strip integrals instead of line integrals. In both cases, for linear regularization, the experiments indicate a trade-off: the resolution is improved at the price of increased noise levels. With non-linear regularization, on the other hand, the degraded signal-to-noise ratio in low-contrast regions can be avoided. Huge input data sizes make experiments on real medical CT data very demanding. To alleviate this problem, we have implemented the most time-consuming parts of the algorithms on a Graphics Processing Unit (GPU). These implementations are described in some detail, and some specific problems regarding parallelism and memory access are discussed.
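To illustrate the basic update, here is a schematic sketch of an iterative filtered backprojection loop. The functions forward_project and fbp_reconstruct are placeholders for a scanner's projector and a non-exact FBP operator (WFBP would play the latter role), and the simple Laplacian penalty stands in for linear regularization; none of this reproduces the actual RIWFBP implementation.

```python
import numpy as np

def laplacian(u):
    """2D discrete Laplacian, used here as a simple smoothing regularizer."""
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0)
            + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def iterative_fbp(p_measured, forward_project, fbp_reconstruct,
                  n_iter=5, beta=0.01):
    """u_{k+1} = u_k + Q(p - P u_k) - beta * R(u_k), where P is the forward
    projector, Q a (non-exact) FBP reconstruction, and R a regularizer."""
    u = fbp_reconstruct(p_measured)           # initial non-exact reconstruction
    for _ in range(n_iter):
        residual = p_measured - forward_project(u)
        u = u + fbp_reconstruct(residual)     # correct non-exactness artifacts
        u = u - beta * laplacian(u)           # linear regularization step
    return u
```

The loop also shows why a few iterations can suffice for artifact suppression: each pass reconstructs only the projection residual, which shrinks as the non-exactness of Q is progressively compensated.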
73

Sparse Linear Modeling of Speech from EEG

Tiger, Mattias January 2014 (has links)
For people with hearing impairments, attending to a single speaker against a multi-talker background can be very difficult, and it is something current hearing aids can barely help with. Recent studies have shown that the audio stream a person focuses on can be identified among the surrounding audio streams using EEG and linear models. This raises the possibility of using EEG to unconsciously control future hearing aids, such that the attended sounds are enhanced while the rest are damped. For such hearing aids to be useful in everyday life, they should rely on something other than a motion-sensitive, precisely placed EEG cap. This could possibly be achieved by placing the electrodes together with the hearing aid in the ear. One of the leading hearing aid manufacturers, Oticon, and its research lab Eriksholm Research Centre have recorded an EEG data set of people listening to sentences, with electrodes placed in and closely around the ears. We have analyzed the data set by applying a range of signal processing approaches, mainly in the context of audio estimation from EEG. Two different types of sparse linear models based on L1-regularized least squares are formulated and evaluated, providing automatic dimensionality reduction in that they significantly reduce the number of channels needed. The first model is based on linear combinations of spectrograms and the second on linear temporal filtering. We have investigated the usefulness of the in-ear electrodes and found some positive indications: all models explored consider the in-ear electrodes to be the most important, or among the more important, of the 128 electrodes in the EEG cap. This is a positive indication for the future possibility of using only in-ear electrodes in hearing aids.
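The optimization problem behind both models is L1-regularized least squares, which can be solved with a proximal-gradient (ISTA) loop like the sketch below. The shapes, regularization weight, and synthetic data are illustrative assumptions, not the thesis's feature construction; the point is that the soft-thresholding step zeroes out whole coefficients, which is what produces the automatic channel reduction described above.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Solve min_w 0.5*||X w - y||^2 + lam*||w||_1 by ISTA."""
    L = np.linalg.norm(X, 2) ** 2           # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L       # gradient step on the LS term
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

# Illustrative setup: 128 EEG-derived features, 3 truly informative ones
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 128))
w_true = np.zeros(128)
w_true[[3, 17, 42]] = [1.0, -0.5, 2.0]
y = X @ w_true + 0.1 * rng.standard_normal(1000)
w_hat = ista_lasso(X, y, lam=15.0)
print("selected channels:", np.flatnonzero(np.abs(w_hat) > 1e-3))
```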
74

Variational Regularization Strategy for Atmospheric Tomography

Altuntac, Erdem 04 April 2016 (has links)
No description available.
75

On Regularized Newton-type Algorithms and A Posteriori Error Estimates for Solving Ill-posed Inverse Problems

Liu, Hui 11 August 2015 (has links)
Ill-posed inverse problems have wide applications in many fields such as oceanography, signal processing, machine learning, biomedical imaging, remote sensing, and geophysics. In this dissertation, we address the problem of solving unstable operator equations with iteratively regularized Newton-type algorithms. Important practical questions, such as the selection of regularization parameters, the construction of generating (filtering) functions based on the a priori information available for different models, algorithms for stopping rules, and error estimates, are investigated with equal attention given to theoretical study and numerical experiments.
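One representative member of this algorithm family is the iteratively regularized Gauss-Newton method. The sketch below shows its generic structure, with a geometrically decaying regularization parameter and a discrepancy-principle stopping rule; the decay factor, tau, and the dense linear solve are illustrative choices rather than the dissertation's algorithms.

```python
import numpy as np

def irgn(F, jacobian, f_delta, delta, u0, alpha0=1.0, q=0.7,
         tau=1.5, max_iter=50):
    """Iteratively regularized Gauss-Newton for F(u) = f with noisy data:
       u_{k+1} = u_k - (J^T J + a_k I)^{-1} (J^T (F(u_k) - f_delta)
                                             + a_k (u_k - u0)),
    with a_k = alpha0 * q^k, stopped once ||F(u_k) - f_delta|| <= tau*delta."""
    u, alpha = u0.copy(), alpha0
    for _ in range(max_iter):
        r = F(u) - f_delta
        if np.linalg.norm(r) <= tau * delta:     # a posteriori stopping rule
            break
        J = jacobian(u)
        H = J.T @ J + alpha * np.eye(u.size)
        u = u - np.linalg.solve(H, J.T @ r + alpha * (u - u0))
        alpha *= q                               # decay the regularization
    return u
```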
76

Automatic regularization technique for the estimation of neural receptive fields

Park, Mijung 02 November 2010 (has links)
A fundamental question about the visual system in neuroscience is how visual stimuli are functionally related to neural responses. This relationship is often explained by the notion of a receptive field: an approximately linear or quasi-linear filter that encodes high-dimensional visual stimuli into neural spikes. Traditional methods for estimating this filter do not efficiently exploit prior information about the structure of neural receptive fields. Here, we propose several approaches to designing the prior distribution over the filter, based on the neurophysiological fact that receptive fields tend to be localized both in space-time and in the spatio-temporal frequency domain. To automatically regularize the estimation of neural receptive fields, we use the evidence optimization technique: MAP (maximum a posteriori) estimation under a prior distribution whose parameters are set by maximizing the marginal likelihood. Simulation results show that the proposed methods can estimate the receptive field using data sets that are tens to hundreds of times smaller than those required by traditional methods.
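The evidence-optimization idea is easiest to see in the simplest case: a linear-Gaussian encoding model with a zero-mean isotropic Gaussian (ridge) prior, where the prior precision is chosen by maximizing the log marginal likelihood and the receptive field is the MAP estimate under that prior. The sketch below uses a plain grid search and an isotropic prior purely for illustration; the priors proposed in the thesis are structured (localized in space-time and frequency), not isotropic.

```python
import numpy as np

def log_evidence(X, y, alpha, sigma2):
    """Log marginal likelihood of y ~ N(X w, sigma2*I), w ~ N(0, alpha^{-1} I),
    returned together with the MAP estimate of w under that prior."""
    n, d = X.shape
    A = alpha * np.eye(d) + X.T @ X / sigma2      # posterior precision
    mu = np.linalg.solve(A, X.T @ y) / sigma2     # posterior mean = MAP
    _, logdet = np.linalg.slogdet(A)
    ev = (0.5 * d * np.log(alpha) - 0.5 * n * np.log(2 * np.pi * sigma2)
          - 0.5 * np.sum((y - X @ mu) ** 2) / sigma2
          - 0.5 * alpha * np.sum(mu ** 2) - 0.5 * logdet)
    return ev, mu

def evidence_optimized_rf(X, y, sigma2=1.0):
    """Pick the prior precision by maximizing the evidence on a grid,
    then return the receptive field (MAP estimate) under that prior."""
    ev, mu = max((log_evidence(X, y, a, sigma2)
                  for a in np.logspace(-3, 3, 25)), key=lambda t: t[0])
    return mu
```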
77

Regularization Using a Parameterized Trust Region Subproblem

Grodzevich, Oleg January 2004 (has links)
We present a new method for the regularization of ill-conditioned problems that extends the traditional trust-region approach. Such problems arise, for example, in image restoration or the mathematical processing of medical data, and involve very ill-conditioned matrices. The method makes use of the L-curve and its maximum-curvature criterion, a recently proposed strategy for finding a good regularization parameter. We describe the method and show its application to an image restoration problem. We also provide MATLAB code for the algorithm. Finally, a comparison to the CGLS approach is given and analyzed, and future research directions are proposed.
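The L-curve criterion itself is straightforward to sketch for standard Tikhonov regularization via the SVD: trace the curve of log residual norm versus log solution norm over a range of regularization parameters and pick the parameter of maximum curvature. This generic illustration (with finite-difference curvature and an arbitrary lambda grid) is not the parameterized trust-region subproblem algorithm developed in the thesis.

```python
import numpy as np

def l_curve_corner(A, b, lambdas=np.logspace(-6, 2, 200)):
    """Pick the Tikhonov parameter at the point of maximum curvature of the
    L-curve (log ||A x - b||, log ||x||), computed via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    rho, eta = [], []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)                   # Tikhonov filter factors
        eta.append(np.linalg.norm(f * beta / s))         # ||x_lambda||
        rho.append(np.linalg.norm((1.0 - f) * beta))     # residual norm
        # (the component of b outside range(A) is ignored here)
    x, y = np.log(rho), np.log(eta)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5  # signed curvature
    return lambdas[np.argmax(kappa)]
```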
78

Parallel magnetic resonance imaging reconstruction problems using wavelet representations

Chaari, Lotfi 05 November 2010 (has links)
To reduce scanning time or improve spatio-temporal resolution in some MRI applications, parallel MRI acquisition techniques with multiple receiver coils have emerged since the early 1990s as powerful methods. In these techniques, MRI images have to be reconstructed from undersampled data acquired in k-space. To this end, several reconstruction techniques have been proposed, such as the widely used SENSitivity Encoding (SENSE) method. However, the reconstructed images generally present artifacts due to the noise corrupting the observed data and to errors in the estimation of the coil sensitivity profiles. In this work, we present novel SENSE-based reconstruction methods which introduce regularization in the complex wavelet domain so as to promote the sparsity of the solution. These methods achieve accurate image reconstruction under degraded experimental conditions in which neither the SENSE method nor standard regularized methods (e.g., Tikhonov) give convincing results. The proposed approaches rely on fast parallel optimization algorithms that handle convex but not necessarily differentiable criteria involving suitable sparsity-promoting priors. Moreover, in contrast with most available reconstruction methods, which proceed slice by slice, one of the proposed methods allows 4D (3D + time) reconstruction, exploiting spatial and temporal correlations. The hyperparameter estimation problem inherent to the regularization process has also been addressed from a Bayesian viewpoint using MCMC techniques. Experiments on real anatomical and functional data show that the proposed methods reduce reconstruction artifacts and improve the statistical sensitivity/specificity in functional MRI.
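The core of such a method can be sketched as an ISTA-type iteration: a gradient step on the SENSE data-fidelity term followed by soft-thresholding of wavelet coefficients. In the sketch below, real orthonormal wavelets from PyWavelets stand in for the complex wavelet representations used in the thesis, the image is assumed real-valued and of dyadic size, and the sensitivities S, the sampling mask, and all constants are illustrative; the authors' parallel proximal algorithms and Bayesian hyperparameter estimation are not reproduced.

```python
import numpy as np
import pywt

def sense_wavelet_ista(kspace, S, mask, lam=0.01, n_iter=50, wavelet="db4"):
    """min_x sum_c ||mask * FFT(S_c x) - y_c||^2 + lam ||W x||_1 via ISTA.
    kspace: (ncoils, ny, nx) undersampled data, S: coil sensitivity maps."""
    x = np.zeros(kspace.shape[1:])
    for _ in range(n_iter):
        grad = np.zeros(x.shape, dtype=complex)
        for c in range(S.shape[0]):         # gradient of the data-fidelity term
            r = mask * np.fft.fft2(S[c] * x, norm="ortho") - kspace[c]
            grad += np.conj(S[c]) * np.fft.ifft2(mask * r, norm="ortho")
        x = x - grad.real                   # unit step; assumes ||E|| <= 1
        # proximal step: soft-threshold detail coefficients in wavelet domain
        coeffs = pywt.wavedec2(x, wavelet, level=3)
        coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, lam, "soft") for d in c)
                                for c in coeffs[1:]]
        x = pywt.waverec2(coeffs, wavelet)
    return x
```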
79

Single-image full-focus reconstruction using depth-based deconvolution

Salahieh, Basel, Rodriguez, Jeffrey J., Stetson, Sean, Liang, Rongguang 30 September 2016 (has links)
In contrast with traditional extended depth-of-field approaches, we propose a depth-based deconvolution technique that accounts for the depth-variant nature of the point spread function of an ordinary fixed-focus camera. The developed technique brings a single blurred image into focus at different depth planes, which can be stitched together based on a depth map to output a full-focus image. Strategies to suppress the deconvolution's ringing artifacts are implemented on three levels: block tiling to eliminate boundary artifacts, reference maps to reduce ringing initiated by sharp edges, and depth-based masking to mitigate artifacts arising from neighboring depth-transition surfaces. The performance is validated numerically for planar and multidepth objects. (C) 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
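A compact way to sketch the pipeline: deconvolve the single blurred image once per depth plane with that plane's PSF, then stitch the per-plane results together using the depth map. Plain Wiener deconvolution stands in for the deconvolution engine here, and the paper's three ringing-suppression strategies are omitted; PSFs are assumed image-sized and centered.

```python
import numpy as np

def wiener_deconv(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution; nsr is the assumed
    noise-to-signal power ratio. psf must be image-sized and centered."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

def full_focus(blurred, psfs_by_depth, depth_map):
    """Deconvolve once per depth plane, then take each pixel's value from
    the plane its depth-map label points to."""
    planes = [wiener_deconv(blurred, psf) for psf in psfs_by_depth]
    out = np.zeros_like(blurred)
    for d, plane in enumerate(planes):
        out[depth_map == d] = plane[depth_map == d]
    return out
```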
80

Restoration of 3D fluorescence microscopy images in the presence of optical aberrations

Ben Hadj, Saïma 17 April 2013 (has links)
In this thesis, we focus on the restoration of three-dimensional fluorescence microscopy images. Two major difficulties of this imaging system are considered. The first is the depth-variant blur due to aberrations induced by refractive index variations in the optical system and the imaged specimen. The second is the noise due to the photon counting process. The goal of this thesis is to reduce these distortions in order to provide biologists with the best possible image quality. In the first part of this thesis, we study approximation models of the depth-variant blur and choose a model appropriate for the inversion problem. In that model, the depth-variant point spread function (PSF) is approximated by a convex combination of a set of space-invariant PSFs. We then develop for that model two fast non-blind restoration methods based on minimizing a regularized criterion, each adapted to the type of noise present in confocal or wide-field microscopy images. In the second part, we address the problem of blind restoration and propose two methods in which the depth-variant blur and the image are jointly estimated. In the first method, the PSF is estimated at each voxel of the considered volume in order to allow a high degree of freedom in the PSF shape, while in the second method the PSF shape is constrained by a Gaussian function in order to reduce the number of unknown variables and the space of possible solutions. In both blind estimation methods, the effect of optical aberrations is not effectively estimated due to the lack of information. We therefore improve these estimation methods by alternating constraints in the frequency and spatial domains. Results on simulated and real data are shown.
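The blur model chosen in the first part is concrete enough to sketch: the depth-variant PSF is approximated by a convex combination of space-invariant PSFs, so applying the forward (blurring) operator reduces to a few ordinary convolutions mixed by depth-dependent weights. The axis convention (depth last), the PSFs, and the weights below are illustrative; the weights are assumed non-negative and summing to one at every depth.

```python
import numpy as np
from scipy.ndimage import convolve

def depth_variant_blur(volume, psfs, weights):
    """Approximate depth-variant blur
       (A u)(x, y, z) = sum_k w_k(z) * (h_k * u)(x, y, z),
    with space-invariant 3D PSFs h_k and convex depth weights w_k.
    volume: (nx, ny, nz); weights[k]: (nz,) with sum_k w_k(z) = 1."""
    blurred = np.zeros_like(volume)
    for k, psf in enumerate(psfs):
        conv_k = convolve(volume, psf, mode="constant")  # space-invariant blur
        blurred += weights[k][None, None, :] * conv_k    # depth-dependent mix
    return blurred
```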
