1 |
Robust Implementations of the Multistage Wiener Filter
Hiemstra, John David, 11 April 2003
The research in this dissertation addresses reduced-rank adaptive signal processing, with specific emphasis on the multistage Wiener filter (MWF). The MWF is a generalization of the classical Wiener filter that performs a stage-by-stage decomposition based on orthogonal projections. Truncating this decomposition produces a reduced-rank filter with several benefits, including improved performance.
This dissertation extends knowledge of the MWF in four areas. The first area is rank and sample support compression. This dissertation examines, under a wide variety of conditions, the size of the adaptive subspace required by the MWF (i.e., the rank), as well as the required number of training samples. Comparisons are made with other algorithms, such as the eigenvector-based principal components algorithm. The second area investigated in this dissertation concerns "soft stops", i.e., the insertion of diagonal loading into the MWF. Several methods for inserting loading into the MWF are described, as are methods for choosing the amount of loading. The next area investigated is MWF rank selection. The MWF outperforms the classical Wiener filter when the rank is properly chosen. This dissertation presents six approaches for selecting MWF rank; the algorithms are compared to one another, and an overall design-space taxonomy is presented. Finally, as digital modelling capabilities become more sophisticated, there is emerging interest in augmenting adaptive processing algorithms to incorporate prior knowledge. This dissertation presents two methods for augmenting the MWF: one based on linear constraints and a second based on non-zero weight-vector initialization. Both approaches are evaluated under ideal and perturbed conditions.
Together the research described in this dissertation increases the utility and robustness of the multistage Wiener filter. The analysis is presented in the context of adaptive array processing, both spatial array processing and space-time adaptive processing for airborne radar. The results, however, are applicable across the entire spectrum of adaptive signal processing applications. / Ph. D.
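The reduced-rank idea behind the comparisons above can be sketched with the eigenvector-based principal components baseline that the abstract names, combined with diagonal loading as a "soft stop". This is a minimal illustration, not the dissertation's code: the array size, snapshot count, interferer directions, rank, and loading level are all hypothetical.

```python
import numpy as np

# Minimal sketch: reduced-rank adaptive beamforming via the eigenvector-based
# principal components method, with diagonal loading as a "soft stop".
# All scenario parameters below are hypothetical.
rng = np.random.default_rng(0)
N, K = 16, 64                            # sensors, training snapshots

def steering(theta):
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

s = steering(0.0)                        # desired look direction
interf = [steering(0.4), steering(-0.7)] # two strong interferers
X = np.sqrt(0.01) * (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
for a in interf:
    X += 10 * a[:, None] * (rng.standard_normal(K) + 1j * rng.standard_normal(K))

R_hat = X @ X.conj().T / K               # sample covariance

def pc_weights(R, s, rank, loading=0.0):
    """Reduced-rank MVDR weights via principal components,
    with optional diagonal loading ("soft stop")."""
    vals, V = np.linalg.eigh(R + loading * np.eye(len(s)))
    V, lam = V[:, ::-1][:, :rank], vals[::-1][:rank]   # dominant eigenpairs
    w = V @ ((V.conj().T @ s) / lam)     # invert only within the subspace
    return w / (s.conj() @ w)            # distortionless constraint: w^H s = 1

w_full = pc_weights(R_hat, s, rank=N)                 # full-rank SMI
w_rr = pc_weights(R_hat, s, rank=3, loading=0.01)     # reduced rank + loading

leak = lambda w: sum(abs(w.conj() @ a) ** 2 for a in interf)
print(leak(w_full), leak(w_rr))          # both nearly null the interference
```

The rank-3 subspace captures the two interferers plus one additional direction, so the reduced-rank filter nulls the interference while adapting far fewer degrees of freedom than the full-rank solution.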
|
2 |
Tractographie de la matière blanche orientée par a priori anatomiques et microstructurels / White matter tractography guided by anatomical and microstructural priors
Girard, Gabriel, 20 April 2016
Diffusion-weighted magnetic resonance imaging is a unique imaging modality sensitive to the microscopic movement of water molecules in biological tissues. By characterizing this movement, it is possible to infer the macroscopic structure of the brain's white matter pathways. The technique, called tractography, has become the tool of choice for studying the human brain's white matter non-invasively, in vivo. For instance, it has been used in neurosurgical planning and in monitoring neurodegenerative diseases. In this thesis, we expose biases introduced by current tractography reconstruction and propose methods to reduce them. We first use anatomical priors, derived from a high-resolution T1-weighted image, to guide tractography. We show that knowledge of the nature of the tissue helps tractography reconstruct anatomically plausible neuronal pathways and reduces biases in the estimation of complex white matter structures. We then incorporate microstructural priors, derived from a state-of-the-art diffusion-weighted magnetic resonance imaging protocol, into the tractography reconstruction process. This allows tractography to follow the movement of water molecules not only along neuronal pathways, but also within specific microstructural environments. Tractography can thus distinguish neuronal pathways more accurately, reduce reconstruction errors, and provide the means to study microstructural characteristics along the white matter. Altogether, we show that anatomical and microstructural priors used during the tractography process improve the reconstruction of the brain's white matter.
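The role of an anatomical prior in tracking can be sketched in a few lines: a toy direction field stands in for the per-voxel principal diffusion directions, and a binary white-matter mask (the anatomical prior) decides where propagation may continue. Everything here, field, mask, and step size, is illustrative.

```python
import numpy as np

# Toy illustration of anatomically constrained streamline tracking: a
# synthetic direction field stands in for per-voxel principal diffusion
# directions, and a binary "white-matter" mask is the anatomical prior.
# Propagation stops as soon as the streamline leaves the mask.
shape = (20, 20)
directions = np.zeros(shape + (2,))
directions[..., 1] = 1.0                 # fibres run along the column axis
mask = np.zeros(shape, dtype=bool)
mask[8:12, :] = True                     # a horizontal white-matter band

def track(seed, step=0.5, max_steps=200):
    p = np.array(seed, float)
    pts = [p.copy()]
    for _ in range(max_steps):
        i, j = int(round(p[0])), int(round(p[1]))
        if not (0 <= i < shape[0] and 0 <= j < shape[1]) or not mask[i, j]:
            break                        # anatomical prior: stop outside mask
        p = p + step * directions[i, j]
        pts.append(p.copy())
    return np.array(pts)

sl = track((10.0, 2.0))                  # seed inside the band: long streamline
print(len(sl), sl[-1])
print(len(track((2.0, 2.0))))            # → 1: seed outside the band, no tracking
```

Seeding inside the band produces a streamline that follows the fibre direction to the edge of the volume, while a seed outside the mask is rejected immediately, which is the mechanism by which tissue priors suppress anatomically implausible pathways.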
|
3 |
Επίδραση κριτηρίων τερματισμού σε υλοποιήσεις επαναληπτικών αποκωδικοποιητών Turbo με αναπαράσταση πεπερασμένης ακρίβειας / Effect of termination criteria on finite-precision implementations of iterative turbo decoders
Γίδαρος, Σπύρος, 18 September 2007
This diploma thesis studies channel coding and the impairments introduced by the channel, examines turbo codes in depth along with several iteration-termination criteria, analyses the impact of finite-precision (fixed-point) arithmetic on iterative turbo decoders with early stopping, and proposes architectures for implementing turbo decoders in hardware.
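One common family of termination criteria of the kind studied here can be illustrated with a hard-decision-aided (HDA) rule on fixed-point log-likelihood ratios (LLRs): iteration stops once the quantized hard decisions repeat between iterations. The LLR trajectories below are synthetic stand-ins for real extrinsic values, and the word length and quantization step are illustrative.

```python
import numpy as np

# Toy illustration (not a full turbo decoder): a hard-decision-aided (HDA)
# stopping rule on fixed-point LLRs. Decoding halts once the sign pattern
# of the quantized LLRs stops changing between iterations.
def quantize(llr, n_bits=6, step=0.25):
    """Uniform fixed-point quantizer: n_bits two's-complement at given step."""
    lo, hi = -(2 ** (n_bits - 1)) * step, (2 ** (n_bits - 1) - 1) * step
    return np.clip(np.round(llr / step) * step, lo, hi)

def decode_with_hda(llr_per_iter, n_bits=6):
    """Return the iteration at which the HDA criterion fires."""
    prev_hard = None
    for it, llr in enumerate(llr_per_iter, start=1):
        hard = quantize(llr, n_bits) < 0        # hard bit decisions
        if prev_hard is not None and np.array_equal(hard, prev_hard):
            return it                           # early stop
        prev_hard = hard
    return len(llr_per_iter)

# Synthetic LLR trajectories whose magnitudes grow over 8 iterations,
# mimicking the accumulation of extrinsic information.
rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=32)
base = (1 - 2 * bits) * rng.uniform(0.5, 2.0, size=32)
llrs = [base * (it + 1) for it in range(8)]
print(decode_with_hda(llrs))   # → 2: signs are already stable after one pass
```

Because the quantizer saturates at ±2^(n_bits−1)·step, shrinking the word length both caps the LLR magnitudes and can change when the stopping rule fires, which is exactly the interaction between finite precision and termination criteria that the thesis examines.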
|
4 |
Trapping Sets in Fountain Codes over Noisy Channels
Orozco, Vivian, 04 November 2009
Fountain codes have demonstrated excellent performance over the binary erasure channel and have already been incorporated into several international standards for recovering lost packets at the application layer, including multimedia broadcast/multicast sessions and digital video broadcasting over the Internet protocol. The rateless property of Fountain codes holds great promise for noisy channels, which are more sophisticated mathematical models that represent errors on communications links rather than erasures alone. The practical implementation of Fountain codes for these channels, however, is hampered by high decoding cost and delay.
In this work we study trapping sets in Fountain codes over noisy channels and their effect on the decoding process. While trapping sets have received much attention for low-density parity-check (LDPC) codes, to our knowledge they have never been fully explored for Fountain codes. Our study takes into account the different code structure and the dynamic nature of Fountain codes. We show that 'error-free' trapping sets exist for Fountain codes. When the decoder is caught in an error-free trapping set, it actually holds the correct message estimate but is unable to detect that this is the case; the decoding process therefore continues, increasing decoding cost and delay for naught. The decoding process for rateless codes consists of one or more decoding attempts. We show that trapping sets may reappear as part of other trapping sets on subsequent decoding attempts or be defeated by the reception of more symbols. Based on these observations we propose early-termination methods that use trapping-set detection to obtain improvements in realized rate, latency, and decoding cost for Fountain codes. / Thesis (Master, Electrical & Computer Engineering), Queen's University, 29 October 2009.
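The early-termination idea can be sketched for an LT-style Fountain code: whenever the decoder's current hard estimate of the message satisfies every received encoded symbol (each the XOR of a random subset of message bits), decoding can stop immediately. An 'error-free' trapping set is precisely the situation where such a check would pass while the iterative decoder fails to notice and keeps running. The code size and degree distribution below are illustrative, not those of the thesis.

```python
import numpy as np

# Sketch of an early-termination check for an LT-style Fountain decoder.
# Each received symbol is the XOR of a random low-degree subset of message
# bits; if the decoder's current hard estimate reproduces every received
# symbol, decoding can stop -- the situation an "error-free" trapping set
# hides from a decoder that never performs this check.
rng = np.random.default_rng(2)
k, n = 16, 40                            # message bits, received symbols
message = rng.integers(0, 2, size=k)

neighbours = [rng.choice(k, size=rng.integers(1, 4), replace=False)
              for _ in range(n)]         # random low-degree encoding
received = np.array([message[nb].sum() % 2 for nb in neighbours])

def checks_satisfied(estimate):
    """True iff the estimate reproduces every received encoded symbol."""
    return all(estimate[nb].sum() % 2 == r
               for nb, r in zip(neighbours, received))

print(checks_satisfied(message))         # → True: safe to stop decoding
bad = message.copy()
bad[0] ^= 1                              # a single wrong bit...
print(checks_satisfied(bad))             # ...almost surely violates a check
```

The check costs one pass over the received symbols, which is cheap compared with continuing belief-propagation iterations that can no longer change the (already correct) estimate.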
|
5 |
Error Estimation for Solutions of Linear Systems in Bi-Conjugate Gradient Algorithm
Jain, Puneet, January 2016 (PDF)
No description available.
|
6 |
Vers une méthode de restauration aveugle d’images hyperspectrales / Towards a blind restoration method of hyperspectral images
Zhang, Mo, 06 December 2018
We propose in this thesis to develop a blind restoration method for blurred and noisy single-component images in which no prior knowledge is required. The manuscript comprises three chapters. The first chapter reviews the state of the art: optimization approaches for solving the restoration problem are discussed first; then the main restoration methods, called semi-blind because they require a minimum of a priori knowledge, are analysed, and five of them are selected for evaluation. The second chapter compares the performance of the methods selected in the first chapter. The main objective criteria for evaluating the quality of restored images are presented, and among them the l1 norm of the estimation error is selected. A comparative study conducted on a database of monochromatic images, artificially degraded by two blur functions of different support sizes and three noise levels, identified the two most relevant methods. The first relies on a single-scale alternating approach in which the PSF and the image are estimated in a single stage. The second uses a hybrid multi-scale approach that first estimates the PSF and a latent image alternately, then restores the image in a subsequent sequential stage; in the comparative study performed, the advantage goes to the latter. The performance of these two methods serves as the reference against which the newly developed method is compared. The third chapter presents the developed method. We sought to make the hybrid approach retained in the second chapter fully blind while improving the quality of estimation of both the PSF and the restored image. The contributions cover several points. A first series of improvements concerns the redefinition of the scales, the initialization of the latent image at each scale level, the evolution of the parameters for selecting the relevant contours that support the PSF estimation, and the definition of a blind stopping criterion. A second series of contributions concerns the blind estimation of the two regularization parameters involved, to avoid having to fix them empirically; each parameter is associated with a separate cost function, one for the PSF estimation and the other for the estimation of a latent image. In the sequential step that follows, the support of the PSF estimated in the alternating step is refined before being exploited in the image restoration process. At this stage, the only a priori knowledge required is an upper bound on the support of the PSF. Evaluations conducted on monochromatic and hyperspectral images, artificially degraded by several motion-type blurs of different support sizes, show a clear improvement in restoration quality for the developed approach compared with the two best state-of-the-art approaches retained.
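The sequential restoration stage described above, recovering the image once a PSF estimate is in hand, can be sketched as a Tikhonov-regularized inverse filter. The image size, blur, noise level, and regularization weight below are illustrative, and the l1 criterion from the comparative study is used to check the result.

```python
import numpy as np

# Sketch of the sequential (non-blind) restoration step: given a PSF
# estimate, recover the image with a Tikhonov-regularised inverse filter.
# Sizes, blur, noise, and lam are illustrative, not the thesis's tuned
# values; the l1 error is the criterion used in the comparative study.
rng = np.random.default_rng(3)
n = 32
x_true = rng.random((n, n))                      # ground-truth image
h = np.zeros((n, n)); h[0, :5] = 1 / 5           # length-5 motion blur (PSF)
Hf = np.fft.fft2(h)
y = np.real(np.fft.ifft2(Hf * np.fft.fft2(x_true)))
y += 0.001 * rng.standard_normal((n, n))         # blurred + noisy observation

lam = 1e-3                                       # regularisation parameter
Xf = np.conj(Hf) * np.fft.fft2(y) / (np.abs(Hf) ** 2 + lam)
x_hat = np.real(np.fft.ifft2(Xf))

l1 = lambda a, b: np.abs(a - b).mean()           # l1 error criterion
print(l1(y, x_true), l1(x_hat, x_true))          # restoration reduces the error
```

The parameter lam plays the role of one of the two regularization parameters discussed above: too small and the near-zeros of the blur's frequency response amplify noise, too large and the deblurring is incomplete, which is why estimating it blindly rather than fixing it empirically matters.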
|
7 |
Algebraická chyba v maticových výpočtech v kontextu numerického řešení parciálních diferenciálních rovnic / Algebraic Error in Matrix Computations in the Context of Numerical Solution of Partial Differential Equations
Papež, Jan, January 2017
Title: Algebraic Error in Matrix Computations in the Context of Numerical Solution of Partial Differential Equations Author: Jan Papež Department: Department of Numerical Mathematics Supervisor: prof. Ing. Zdeněk Strakoš, DrSc., Department of Numerical Mathematics Abstract: Solution of algebraic problems is an inseparable and usually the most time-consuming part of the numerical solution of PDEs. Algebraic computations are, in general, not exact, and in many cases it is even principally desirable not to perform them to high accuracy. This has consequences that have to be taken into account in numerical analysis. This thesis investigates several closely related issues along this line. It focuses, in particular, on the spatial distribution of errors of different origin across the solution domain, the backward-error interpretation of the algebraic error in the context of function approximations, the incorporation of algebraic errors into a posteriori error analysis, the influence of algebraic errors on adaptivity, and the construction of stopping criteria for (preconditioned) iterative algebraic solvers. Progress on these issues requires, in our opinion, understanding the interconnections between the phases of the overall solution process, such as discretization and algebraic computations. Keywords: Numerical solution of partial...
|
8 |
A posteriorní odhady chyby pro řešení konvektivně-difusních úloh / A posteriori error estimates for numerical solution of convection-diffusion problems
Šebestová, Ivana, January 2014
This thesis is concerned with several issues of a posteriori error estimates for linear problems. In its first part, error estimates are derived for the heat conduction equation discretized by the backward Euler method in time and the discontinuous Galerkin method in space. In the second part, guaranteed and locally efficient error estimates involving the algebraic error are derived for the Poisson equation discretized by the discontinuous Galerkin method. The technique is based on flux reconstruction, where meshes with hanging nodes and variable polynomial degree are allowed. An adaptive strategy combining adaptive mesh refinement with stopping criteria for iterative algebraic solvers is proposed. In the last part, a numerical method for computing guaranteed lower and upper bounds of principal eigenvalues of symmetric linear elliptic differential operators is presented.
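Two-sided eigenvalue bounds of the kind mentioned in the last part can be illustrated, in the discrete setting, with the elementary residual (Weinstein-type) bracket, which is not the thesis's method but conveys the same flavour of guarantee: for a unit vector v with Rayleigh quotient rho and residual norm r, some eigenvalue lies in [rho - r, rho + r], so once v has converged toward the principal eigenvector, rho - r and rho bound the smallest eigenvalue from below and above.

```python
import numpy as np

# Illustrative two-sided bound on the smallest eigenvalue of a symmetric
# matrix via the elementary residual (Weinstein-type) bracket. For unit v
# with Rayleigh quotient rho and residual norm res, some eigenvalue lies in
# [rho - res, rho + res]; after inverse iteration has driven v toward the
# principal eigenvector, rho - res and rho bracket the smallest eigenvalue.
n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian stiffness

v = np.ones(n)
for _ in range(3):                       # a few inverse-iteration steps
    v = np.linalg.solve(A, v)
    v /= np.linalg.norm(v)

rho = v @ A @ v                          # Rayleigh quotient (upper bound)
res = np.linalg.norm(A @ v - rho * v)    # residual norm
lower, upper = rho - res, rho            # guaranteed bracket

lam1 = 4 * np.sin(np.pi / (2 * (n + 1))) ** 2   # exact smallest eigenvalue
print(lower, lam1, upper)                # lower <= lam1 <= upper
```

For the 1-D Laplacian the exact eigenvalues are known in closed form, which makes it easy to verify that the computed bracket indeed encloses the principal eigenvalue; for differential operators the thesis's method supplies the guaranteed bounds that this closed form provides here.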
|