1 |
Circuit Design of Maximum a Posteriori Algorithm for Turbo Code Decoder. Kao, Chih-wei, 30 July 2010
|
2 |
Melhoria da Convergência do Método ICA-MAP para Remoção de Ruído em Sinal de Voz / Improving the convergence of the ICA-MAP method for noise removal in speech signals. CARMO, F. L., 10 December 2013
The source separation problem consists of recovering a latent signal from a set of observable mixtures. In denoising problems, which can be viewed as source separation problems, an unobserved speech signal must be extracted from a noise-contaminated signal. In this setting, an important approach is based on independent component analysis (ICA models); the use of ICA with the maximum a posteriori (MAP) algorithm is known as ICA-MAP. Employing two individual transforms, one for the speech signal and one for the noise, can provide a better estimate in a linear setting. This work presents a modification of the ICA-MAP algorithm that improves its convergence. Tests showed that limiting the magnitude of the gradient vector used to estimate the parameters of the denoising model improves the stability of the algorithm; this adaptation can be understood as a constraint on the original optimization problem. Another proposed approach is to approximate the derivative of the GGM (generalized Gaussian model) around zero by a spline. To accelerate the algorithm, a variable step size is applied in the gradient algorithm. Comparative tests were carried out on standard speech (male and female) and noise databases. Finally, the results are compared with classical techniques to highlight the advantages of the method.
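The abstract's key stabilization idea, capping the magnitude of the gradient vector before each parameter update, can be sketched generically. This is an illustrative norm-clipping step on a toy quadratic objective, not the authors' ICA-MAP code; all names here are hypothetical:

```python
import numpy as np

def clipped_gradient_step(params, grad_fn, lr, max_norm):
    """One gradient-descent step with the gradient norm capped at max_norm.

    Limiting the gradient magnitude, as the abstract describes, keeps a
    single large noisy gradient from destabilizing the iteration.
    """
    g = grad_fn(params)
    norm = np.linalg.norm(g)
    if norm > max_norm:
        g = g * (max_norm / norm)  # rescale, preserving direction
    return params - lr * g

# Toy objective: minimize ||x - target||^2 (stands in for the MAP cost).
target = np.array([1.0, -2.0])
grad = lambda x: 2.0 * (x - target)

x = np.array([10.0, 10.0])  # distant start => large initial gradient
for _ in range(200):
    x = clipped_gradient_step(x, grad, lr=0.1, max_norm=5.0)
```

Far from the optimum the update magnitude is constant (`lr * max_norm`); once the gradient norm falls below the cap, the step behaves like plain gradient descent, which is exactly the constrained-optimization reading the abstract gives.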
|
3 |
Estimating Wind Velocities in Atmospheric Mountain Waves Using Sailplane Flight Data. Zhang, Ni, January 2012
Atmospheric mountain waves form in the lee of mountainous terrain under appropriate conditions of the vertical structure of wind speed and atmospheric stability. Trapped lee waves can extend hundreds of kilometers downwind from the mountain range and tens of kilometers vertically into the stratosphere. Mountain waves are important in meteorology: they affect the general circulation of the atmosphere, can influence the vertical structure of wind speed and temperature fields, produce turbulence and downdrafts that can be an aviation hazard, and affect ozone concentration and the vertical transport of aerosols and trace gases.
Sailplane pilots make extensive use of mountain lee waves as a source of energy with which to climb. Many sailplane wave flights are conducted every year throughout the world, frequently covering large distances and reaching high altitudes. Modern sailplanes often carry flight recorders that log their position at regular intervals during the flight. There is therefore potential to use these recorded data to determine the 3D wind velocity at positions along the sailplane's flight path, providing an additional source of information on mountain waves to supplement other measurement techniques. The recorded data are limited, however, and determining wind velocities from them is not straightforward.
This thesis is concerned with the development and application of techniques to determine the vector wind field in atmospheric mountain waves using the limited flight data collected during sailplane flights. A detailed study is made of the characteristics, uniqueness, and sensitivity to errors in the data, of the problem of estimating the wind velocities from limited flight data consisting of ground velocities, possibly supplemented by air speed or heading data. A heuristic algorithm is developed for estimating 3D wind velocities in mountain waves from ground velocity and air speed data, and the algorithm is applied to flight data collected during “Perlan Project” flights. The problem is then posed as a statistical estimation problem and maximum likelihood and maximum a posteriori estimators are developed for a variety of different kinds of flight data. These estimators are tested on simulated flight data and data from Perlan Project flights.
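The basic relation underlying such estimates is the wind triangle: ground velocity equals air velocity plus wind. A minimal sketch of the easy case, where both heading and true airspeed are assumed available (the thesis addresses the harder cases where they are not):

```python
import numpy as np

def horizontal_wind(ground_velocity, airspeed, heading_rad):
    """Horizontal wind from the wind-triangle relation
    ground velocity = air velocity + wind.

    ground_velocity : (2,) east/north ground velocity, m/s
    airspeed        : true airspeed magnitude, m/s
    heading_rad     : aircraft heading, radians clockwise from north
    """
    air_velocity = airspeed * np.array([np.sin(heading_rad),
                                        np.cos(heading_rad)])  # east, north
    return np.asarray(ground_velocity, dtype=float) - air_velocity

# Glider heading due north at 30 m/s airspeed, drifting east over the ground:
wind = horizontal_wind(ground_velocity=[10.0, 30.0], airspeed=30.0,
                       heading_rad=0.0)
# a 10 m/s wind from the west
```

When heading is unrecorded, as for typical flight-recorder data, this single equation is underdetermined at each instant, which is why the thesis turns to heuristic and statistical (ML/MAP) estimators over whole flight segments.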
|
4 |
Métaheuristiques adaptatives d'optimisation continue basées sur des méthodes d'apprentissage / Adaptive metaheuristics for continuous optimization based on learning methods. Ghoumari, Asmaa, 10 December 2018
Continuous optimization problems are numerous, in economics, in signal processing, in neural networks, and so on.
One of the best-known and most widely used solutions is the evolutionary algorithm, a metaheuristic based on evolutionary theory that borrows stochastic mechanisms and has shown good performance in solving continuous optimization problems. This family of algorithms is very popular despite the many difficulties that can be encountered in its design. Indeed, these algorithms have several parameters to tune and many operators to choose according to the problem to be solved. The literature describes a plethora of operators, and it becomes complicated for the user to know which to select in order to obtain the best possible result. In this context, the main objective of this thesis is to propose methods that address these problems without deteriorating the performance of the algorithms. We thus propose two algorithms: a method based on the maximum a posteriori that uses diversity probabilities to select the operators to apply, and regularly revisits this choice; and a method based on a dynamic graph of operators representing the transition probabilities between operators, relying on a model of the objective function built by a neural network to regularly update these probabilities. Both methods are detailed and analyzed on a continuous optimization benchmark.
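The shared mechanism behind both proposed methods, drawing the next variation operator from a maintained probability distribution and re-drawing it regularly, can be sketched with a plain roulette-wheel rule. This is a generic stand-in, not the thesis's MAP criterion; the operator names and scores are made up:

```python
import random

def pick_operator(rng, operators, scores):
    """Roulette-wheel choice of a variation operator from current scores.

    Operators that recently helped (e.g. improved diversity or fitness)
    carry proportionally more weight, and the draw is repeated each
    generation so the choice is regularly revisited.
    """
    total = sum(scores.values())
    weights = [scores[op] / total for op in operators]
    return rng.choices(operators, weights=weights, k=1)[0]

rng = random.Random(42)
operators = ["gaussian_mutation", "polynomial_mutation", "de_rand_1"]
scores = {"gaussian_mutation": 3.0, "polynomial_mutation": 1.0, "de_rand_1": 6.0}
picks = [pick_operator(rng, operators, scores) for _ in range(1000)]
```

An adaptive scheme would update `scores` after observing each operator's effect, which is where the thesis's MAP rule and neural-network surrogate come in.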
|
5 |
Algorithmes et bornes minimales pour la synchronisation temporelle à haute performance : application à l'internet des objets corporels / Algorithms and minimum bounds for high-performance time synchronization: application to the wearable Internet of Things. Nasr, Imen, 23 January 2017
Time synchronization is the first operation performed by the demodulator. It ensures that the samples passed to the demodulation processes can achieve the lowest possible bit error rate. In this thesis we propose the study of innovative algorithms for high-performance time synchronization. First, we propose algorithms that exploit the soft information from the decoder, in addition to the received signal, to improve the blind estimation of a time delay assumed constant over the observation window. Next, we develop an original algorithm based on low-complexity smoothing synchronization techniques.
This step consisted in proposing a technique operating in an off-line context, making it possible to estimate a random, time-varying delay over several iterations via Forward-Backward loops. The performance of such estimators exceeds that of traditional algorithms. In order to evaluate the relevance of all the proposed estimators, for both deterministic and random delays, we evaluated and compared their performance against Cramér-Rao bounds that we developed for these frameworks. Finally, we evaluated the proposed algorithms on WBAN signals.
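The off-line forward-backward idea can be illustrated with a generic smoother: run a causal filter forward and backward over raw per-sample delay estimates and combine the two passes. This is a simple exponential-filter stand-in under that assumption, not the thesis's algorithm:

```python
import numpy as np

def forward_backward_smooth(raw_estimates, alpha=0.2):
    """Generic forward-backward smoothing of a noisy delay track.

    An exponential filter is run forward and backward over the raw
    per-sample delay estimates; averaging the two passes cancels the
    lag each single pass introduces on a slowly drifting delay.
    """
    raw = np.asarray(raw_estimates, dtype=float)
    fwd = np.empty_like(raw)
    bwd = np.empty_like(raw)
    fwd[0] = raw[0]
    for n in range(1, len(raw)):                 # forward pass
        fwd[n] = (1 - alpha) * fwd[n - 1] + alpha * raw[n]
    bwd[-1] = raw[-1]
    for n in range(len(raw) - 2, -1, -1):        # backward pass
        bwd[n] = (1 - alpha) * bwd[n + 1] + alpha * raw[n]
    return 0.5 * (fwd + bwd)

# Slowly drifting true delay observed through noisy raw estimates:
rng = np.random.default_rng(1)
true_delay = np.linspace(0.0, 1.0, 400)
raw = true_delay + rng.normal(0.0, 0.3, size=400)
smoothed = forward_backward_smooth(raw)
```

Because the whole record is available off-line, each sample benefits from both past and future observations, which is the advantage the abstract claims over purely causal (on-line) synchronizers.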
|
6 |
A novel approach to restoration of Poissonian images. Shaked, Elad, 9 February 2010
The problem of reconstruction of digital images from their degraded measurements is regarded as a problem of central importance in various fields of engineering and imaging sciences. In such cases, the degradation is typically caused by the resolution limitations of an imaging device in use and/or by the destructive influence of measurement noise. Specifically, when the noise obeys a Poisson probability law, standard approaches to the problem of image reconstruction are based on using fixed-point algorithms which follow the methodology proposed by Richardson and Lucy in the beginning of the 1970s. The practice of using such methods, however, shows that their convergence properties tend to deteriorate at relatively high noise levels (which typically takes place in so-called low-count settings). This work introduces a novel method for de-noising and/or de-blurring of digital images that have been corrupted by Poisson noise. The proposed method is derived using the framework of MAP estimation, under the assumption that the image of interest can be sparsely represented in the domain of a properly designed linear transform. Consequently, a shrinkage-based iterative procedure is proposed, which guarantees the maximization of an associated maximum-a-posteriori criterion. It is shown in a series of both computer-simulated and real-life experiments that the proposed method outperforms a number of existing alternatives in terms of stability, precision, and computational efficiency.
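The Richardson-Lucy baseline mentioned above is compact enough to sketch in one dimension. This is the classical fixed-point iteration, not the shrinkage-based MAP method the thesis proposes:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Classic Richardson-Lucy fixed-point iteration for Poisson data.

    Each iteration blurs the current estimate, compares it to the
    observation as a ratio, and back-projects that ratio through the
    flipped PSF; the update is multiplicative, so non-negativity is
    preserved automatically.
    """
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flipped = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)  # eps guards against division by zero
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Noiseless 1-D demo: a rectangular pulse blurred by a short PSF.
true_sig = np.zeros(64)
true_sig[25:30] = 10.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(true_sig, psf, mode="same")
restored = richardson_lucy(observed, psf, n_iter=200)
```

In the low-count regime the abstract describes, the `ratio` term becomes very noisy, which is precisely where this fixed-point scheme degrades and a regularized (MAP, sparsity-based) alternative pays off.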
|
8 |
Image Restoration Based upon Gauss-Markov Random Field. Sheng, Ming-Cheng, 20 June 2000
Images are liable to be corrupted by noise during operations such as sampling, storage, and transmission. In this thesis, we propose a method of restoring images corrupted by white Gaussian noise. The method is based on a Gauss-Markov random field model combined with an image-segmentation technique; the image is then restored by MAP estimation.
In the Gauss-Markov random field approach, the image is restored by MAP estimation implemented with simulated annealing or deterministic search methods. Image segmentation yields the region parameters and the noise power for every region; these parameters are essential for MAP estimation under the Gauss-Markov random field model.
In summary, we first segment the image to find the important region parameters, then restore the image by MAP estimation using those parameters. Finally, the intermediate image is restored again by the conventional Gauss-Markov random field method. The advantages of our method are the clear edges produced by the first restoration and the deblurred result produced by the second.
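For a purely Gaussian MRF prior, the MAP estimate minimizes a quadratic energy, so a deterministic search reduces to gradient descent. The sketch below works under that simplification and is illustrative only; the thesis's method additionally uses per-region parameters obtained from segmentation:

```python
import numpy as np

def gmrf_map_denoise(noisy, noise_var, smoothness, n_iter=100, step=0.1):
    """MAP denoising under a Gaussian MRF prior via gradient descent.

    Minimizes ||x - y||^2 / (2*noise_var) + smoothness * sum of squared
    differences between 4-connected neighbours (periodic boundaries).
    """
    x = noisy.astype(float).copy()
    for _ in range(n_iter):
        # gradient of the data-fidelity term
        g = (x - noisy) / noise_var
        # gradient of the prior: -2*smoothness times the discrete Laplacian
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        g -= 2 * smoothness * lap
        x -= step * g
    return x

# Demo: a flat image corrupted by white Gaussian noise.
rng = np.random.default_rng(0)
true_img = np.full((16, 16), 5.0)
noisy = true_img + rng.normal(0.0, 1.0, size=(16, 16))
restored = gmrf_map_denoise(noisy, noise_var=1.0, smoothness=0.5, n_iter=200)
```

The simulated annealing mentioned in the abstract becomes necessary once a line field (as in the compound models of the later entries) makes the energy non-convex; for the purely Gaussian case this deterministic descent suffices.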
|
9 |
Parameter Estimation for Compound Gauss-Markov Random Field and its Application to Image Restoration. Hsu, I-Chien, 20 June 2001
The restoration of degraded images is an important application of image processing. Classical approaches to image restoration, such as low-pass filtering, usually emphasize numerical error at the cost of visual quality, producing blurred texture. A newer restoration method, based on a Compound Gauss-Markov (CGM) random field image model and a MAP (maximum a posteriori probability) approach that focuses on image texture, has proven helpful. However, the contours of the restored image and the numerical error of this method are poor, because the conventional CGM model uses fixed global parameters for the whole image. To overcome these disadvantages, we adopt an adjustable-parameters method to estimate the model parameters and restore the image. Parameter estimation for the CGM model is difficult, however, since the model has 80 interdependent parameters; we therefore first adopt a parameter-reduction approach to lower the complexity of the estimation. Finally, the initial values of the parameters matter: different initial values may produce different results. The experimental results show that the proposed method using adjustable parameters achieves better numerical error and visual quality than conventional methods using fixed parameters.
|
10 |
Investigation of Compound Gauss-Markov Image Field. Lin, Yan-Li, 5 August 2002
The Compound Gauss-Markov image model has been proven helpful in image restoration. In this model, a pixel in the image random field is determined by the surrounding pixels according to a predetermined line field. In this thesis, we restore the noisy image based upon the traditional Compound Gauss-Markov image field, without the constraint on the model parameters introduced in the original work. The image is restored in two steps iteratively: restoring the line field from the assumed image field, and restoring the image field from the just-computed line field.
Two methods are proposed to replace the traditional method of solving for the line field: the probability method and the vector method. In the probability method, we break away from the limitations of the energy function V_cl(L) and the obscure system parameters C_kll(m,n) and σ_w². In the vector method, the line field appears more reasonable than with the original method. Images restored by our methods have similar visual quality but better numerical values than those of the original method.
|