31

Spillovers and jumps in global markets: a comparative analysis / Saltos e Spillovers nos mercados globais: uma análise comparativa

Moura, Rodolfo Chiabai 08 June 2018 (has links)
We analyze the relation between volatility spillovers and jumps in financial markets. To do so, we compare the volatility spillover index proposed by Diebold and Yilmaz (2009) with a global volatility component estimated through a multivariate stochastic volatility model with jumps in the mean and in the conditional volatility. This model allows a direct dating of the events that alter the global volatility structure, based on a permanent/transitory decomposition of the structure of returns and volatilities, as well as the estimation of market risk measures. We conclude that the multivariate stochastic volatility model addresses some limitations of the spillover index and can be a useful tool for measuring and managing risk in global financial markets.
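For readers who want to experiment with the spillover side of this comparison, the sketch below computes the Diebold-Yilmaz (2009) total spillover index from an already-estimated VAR (coefficient matrices and innovation covariance), using a Cholesky-based forecast-error variance decomposition. It is a minimal numpy illustration only; the thesis's multivariate stochastic volatility model with jumps is not reproduced here, and the function and variable names are ours.

```python
import numpy as np

def spillover_index(A_list, Sigma, horizon=10):
    """Diebold-Yilmaz (2009) total spillover index from a fitted VAR.

    A_list : list of (k, k) VAR coefficient matrices A_1, ..., A_p
    Sigma  : (k, k) covariance matrix of the VAR innovations
    horizon: forecast horizon H for the variance decomposition
    """
    k = Sigma.shape[0]
    p = len(A_list)
    # Moving-average coefficients Psi_0, ..., Psi_{H-1} via the standard recursion
    Psi = [np.eye(k)]
    for h in range(1, horizon):
        Ph = np.zeros((k, k))
        for j in range(1, min(h, p) + 1):
            Ph += A_list[j - 1] @ Psi[h - j]
        Psi.append(Ph)
    P = np.linalg.cholesky(Sigma)            # lower-triangular factor (Cholesky identification)
    # Contribution of shock j to the H-step forecast-error variance of variable i
    contrib = np.zeros((k, k))
    for Ph in Psi:
        contrib += (Ph @ P) ** 2
    shares = contrib / contrib.sum(axis=1, keepdims=True)
    cross = shares.sum() - np.trace(shares)  # off-diagonal mass = cross-market spillovers
    return 100.0 * cross / k

# Toy example with a bivariate VAR(1)
A1 = np.array([[0.5, 0.2], [0.1, 0.4]])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
print(spillover_index([A1], Sigma, horizon=10))
```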
32

Regressão binária usando ligações potência e reversa de potência / Binary regression using power and reversal power links

Anyosa, Susan Alicia Chumbimune 07 April 2017 (has links)
The aim of this dissertation is to study a family of asymmetric link functions for binary regression models under a Bayesian approach. Specifically, we present the estimation of the parameters of binary regression models with power and reversal power link functions using the Hamiltonian Monte Carlo method, in its No-U-Turn Sampler extension, and the Metropolis-Hastings within Gibbs method. Furthermore, we study a range of model comparison measures, including information criteria and measures of predictive evaluation. A simulation study was conducted to assess the accuracy and efficiency of the parameter estimates. Through the analysis of educational data, we show that models using the proposed link functions provide a better fit than models using traditional links.
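As a rough illustration of the link family studied here: a power link raises a baseline cdf F to a power λ, and its reverse counterpart applies the power to the complementary tail. The sketch below assumes a logistic baseline and the common parameterizations p = F(η)^λ and p = 1 − F(−η)^λ (the dissertation's exact parameterization may differ), and pairs the resulting log-posterior with a crude random-walk Metropolis sampler as a stand-in for the NUTS and Metropolis-within-Gibbs samplers used in the work; all names are illustrative.

```python
import numpy as np
from scipy.special import expit

def power_link_prob(eta, lam):
    """p = F(eta)**lam with a logistic baseline cdf F (one common parameterization)."""
    return expit(eta) ** lam

def reverse_power_link_prob(eta, lam):
    """p = 1 - F(-eta)**lam, the 'reversal' counterpart for a symmetric baseline."""
    return 1.0 - expit(-eta) ** lam

def log_posterior(beta, log_lam, X, y, link=power_link_prob):
    """Bernoulli log-likelihood plus vague normal priors on beta and log(lambda)."""
    lam = np.exp(log_lam)
    p = np.clip(link(X @ beta, lam), 1e-12, 1 - 1e-12)
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    logprior = -0.5 * np.sum(beta ** 2) / 100.0 - 0.5 * log_lam ** 2 / 100.0
    return loglik + logprior

def metropolis(X, y, n_iter=5000, step=0.05, rng=np.random.default_rng(0)):
    """Random-walk Metropolis over (beta, log_lambda) -- a simple stand-in for NUTS."""
    beta, log_lam = np.zeros(X.shape[1]), 0.0
    samples, cur = [], log_posterior(beta, log_lam, X, y)
    for _ in range(n_iter):
        b_new = beta + step * rng.normal(size=beta.shape)
        l_new = log_lam + step * rng.normal()
        prop = log_posterior(b_new, l_new, X, y)
        if np.log(rng.uniform()) < prop - cur:          # accept/reject step
            beta, log_lam, cur = b_new, l_new, prop
        samples.append((beta.copy(), np.exp(log_lam)))
    return samples
```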
33

Métodos estatísticos para equalização de canais de comunicação. / Statistical methods for blind equalization of communication channels.

Claudio José Bordin Júnior 23 March 2006 (has links)
In this thesis, we propose and analyze blind equalization methods for linear FIR communication channels, focusing on algorithms based on particle filters, which are recursive techniques for approximating Bayesian solutions to stochastic filtering problems. We begin by proposing new equalization methods for signal models with Gaussian additive noise that dispense with the differential encoding of the transmitted signals required by previously existing methods. Using artificial parameter evolution techniques, we then extend these algorithms to the case of non-Gaussian additive noise. Next, we develop new algorithms, based on the same principles, for jointly equalizing and decoding communication systems that employ convolutional or block codes. Numerical simulations show that the proposed algorithms markedly outperform traditional approaches in terms of mean bit error rate and convergence speed, often approaching the performance of the optimal (MAP) trained equalizer. Furthermore, the methods based on deterministic particle filters consistently outperform those based on stochastic sampling, making them the preferred choice whenever the adopted signal model allows analytic marginalization of the unknown channel parameters.
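The building block behind these equalizers is the particle filter. The sketch below is a generic bootstrap (SIR) particle filter on a toy scalar state-space model, not the thesis's blind equalizer: the deterministic and Rao-Blackwellized variants, and the analytic marginalization of the channel, are beyond this illustration, and every name here is an assumption of ours.

```python
import numpy as np

def bootstrap_particle_filter(y, n_particles, f, h, q_sample, loglik,
                              rng=np.random.default_rng(1)):
    """Generic SIR (bootstrap) particle filter for x_t = f(x_{t-1}) + process noise,
    y_t ~ p(y_t | h(x_t)). Only the basic machinery is shown here."""
    T = len(y)
    x = q_sample(n_particles)                 # initial particles drawn from a simple prior
    means = np.zeros(T)
    for t in range(T):
        x = f(x) + q_sample(n_particles)      # propagate through the state transition
        logw = loglik(y[t], h(x))             # weight by the observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * x)              # filtered posterior-mean estimate
        idx = rng.choice(n_particles, size=n_particles, p=w)   # multinomial resampling
        x = x[idx]
    return means

# Toy example: AR(1) state observed in Gaussian noise
rng = np.random.default_rng(1)
T, sigma_v, sigma_w = 200, 0.5, 1.0
x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + sigma_v * rng.normal()
    y[t] = x_true[t] + sigma_w * rng.normal()
est = bootstrap_particle_filter(
    y, 500,
    f=lambda x: 0.9 * x,
    h=lambda x: x,
    q_sample=lambda n: sigma_v * rng.normal(size=n),
    loglik=lambda yt, hx: -0.5 * ((yt - hx) / sigma_w) ** 2,
)
```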
34

Estimation parcimonieuse de biais multitrajets pour les systèmes GNSS / Sparse estimation of multipath biases for GNSS

Lesouple, Julien 15 March 2019 (has links)
Advances in electronic technologies (miniaturization, falling costs) have made GNSS (satellite navigation systems) increasingly accessible and therefore part of everyday life, whether through a smartphone or through receivers available commercially at reasonable prices (low-cost receivers). These receivers provide the user with several pieces of information, such as position and velocity, as well as measurements of the propagation delays between the receiver and the visible satellites, among others. They have thus become widespread among users who want to evaluate positioning techniques without developing all the necessary hardware. GNSS signals are affected by many error sources between the moment they are emitted and the moment they are processed by the receiver to form the measurements, so each of these errors must be compensated in order to provide the user with the most accurate position possible. One error source receiving particular attention is the reflection of the signals on obstacles in the scene surrounding the user, known as multipath. The aim of this thesis is to propose algorithms that limit the effect of multipath on GNSS measurements. The first idea developed in this thesis is to assume that these multipath signals give rise to sparse additive biases. This sparsity assumption allows the biases to be estimated with efficient methods such as the LASSO problem; several variants were developed around it, in particular constraining the number of satellites unaffected by multipath to be nonzero. The second idea explored in this thesis is a technique for estimating GNSS measurement errors from a reference solution, which assumes that the errors due to multipath can be modelled by Gaussian mixtures or hidden Markov models. Two positioning methods adapted to these models are studied for GNSS navigation.
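A minimal sketch of the sparse-bias idea from the first part of the abstract: model the pseudorange residuals as y = H·δx + b + noise with a sparse bias vector b, and alternate a least-squares update of δx with a LASSO-style soft-threshold update of b. This is an illustration under our own simplified model (H, λ and the alternating scheme are assumptions), not the thesis's algorithm or its constrained variants.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_bias_positioning(y, H, lam, n_iter=100):
    """Estimate a position/clock correction dx and a sparse multipath bias b from
    pseudorange residuals y = H @ dx + b + noise, by alternating a least-squares
    update of dx with a soft-threshold (LASSO-style) update of b."""
    m, n = H.shape
    dx, b = np.zeros(n), np.zeros(m)
    H_pinv = np.linalg.pinv(H)               # pseudo-inverse for the least-squares step
    for _ in range(n_iter):
        dx = H_pinv @ (y - b)                # least squares with the current bias removed
        b = soft_threshold(y - H @ dx, lam)  # sparse bias on the remaining residual
    return dx, b

# Toy example: 8 satellites, 4 unknowns (3D position + clock), 2 biased pseudoranges
rng = np.random.default_rng(0)
H = np.hstack([rng.normal(size=(8, 3)), np.ones((8, 1))])
dx_true = np.array([1.0, -2.0, 0.5, 3.0])
b_true = np.zeros(8); b_true[[1, 5]] = [15.0, 25.0]   # metres of multipath bias
y = H @ dx_true + b_true + 0.5 * rng.normal(size=8)
dx_hat, b_hat = sparse_bias_positioning(y, H, lam=5.0)
```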
35

Evaluation of a statistical method to use prior information in the estimation of combustion parameters / Utvärdering av en statistisk metod för att förbättra estimering av förbränningsparametrar med hjälp av förkunskap

Rundin, Patrick January 2006 (has links)
Ion current sensing, where information about the combustion process in an SI engine is gained by applying a voltage over the spark gap, is currently used to detect and avoid knock and misfire. Several researchers have pointed out that information on peak pressure location and air/fuel ratio can also be extracted from the ion current and have suggested several ways to estimate these parameters. Here, a simplified Bayesian approach was taken to construct a lowpass-like filter, or estimator, that uses prior information to improve estimates in crucial operating regions. The algorithm is computationally light and could, if successful, improve the estimates enough for production use. The filter was implemented in several variants and evaluated in a number of simulated cases. It was found that the proposed filter requires trade-offs between variance, bias, tracking speed and accuracy that are difficult to balance; for satisfactory estimates, the prior information must be more accurate than what was available. It was also found that a similar task, constructing a general Bayesian estimator, has already been tackled in the area of particle filtering, where there are promising and unexplored possibilities. However, particle filters require computational power that will not be available in production engine control units for some years.
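A minimal sketch of the kind of prior-weighted, lowpass-like estimator described above: a scalar recursive Gaussian (Kalman-style) update that blends a prior belief about a combustion parameter with noisy per-cycle measurements. The model, numbers and names are illustrative assumptions, not the filter evaluated in the thesis.

```python
import numpy as np

def bayesian_lowpass(measurements, prior_mean, prior_var, meas_var, drift_var=0.0):
    """Recursive Gaussian update: blend a prior belief about a slowly varying
    combustion parameter (e.g. peak-pressure location) with noisy per-cycle estimates.
    With drift_var = 0 this behaves like a Bayesian low-pass filter whose gain
    shrinks as evidence accumulates."""
    mean, var = prior_mean, prior_var
    out = []
    for z in measurements:
        var += drift_var                      # allow the parameter to drift between cycles
        k = var / (var + meas_var)            # Kalman-like gain: trust in data vs. prior
        mean += k * (z - mean)
        var *= (1.0 - k)
        out.append(mean)
    return np.array(out)

# Toy example: noisy per-cycle estimates near 16 deg ATDC, prior centred at 15
z = 16.0 + 2.0 * np.random.default_rng(2).normal(size=100)
filtered = bayesian_lowpass(z, prior_mean=15.0, prior_var=4.0, meas_var=4.0, drift_var=0.05)
```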
36

Optimal Bayesian Estimators for Image Segmentation and Surface Reconstruction

Marroquin, Jose L. 01 April 1985 (has links)
A very fruitful approach to the solution of image segmentation and surface reconstruction tasks is their formulation as estimation problems via the use of Markov random field models and Bayes theory. However, the Maximum A Posteriori (MAP) estimate, which is the one most frequently used, is suboptimal in these cases. We show that for segmentation problems the optimal Bayesian estimator is the maximizer of the posterior marginals, while for reconstruction tasks, the thresholded posterior mean has the best possible performance. We present efficient distributed algorithms for approximating these estimates in the general case. Based on these results, we develop a maximum likelihood estimator that leads to a parameter-free distributed algorithm for restoring piecewise constant images. To illustrate these ideas, the reconstruction of binary patterns is discussed in detail.
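To make the MPM idea concrete: the maximizer of the posterior marginals can be approximated by Gibbs sampling the label field and taking each pixel's most frequent label. The sketch below does this for a binary segmentation with an Ising prior and Gaussian likelihoods with known means and variance; it is a simple serial illustration in the spirit of, but not identical to, the distributed algorithms of the memo, and all parameters are assumptions.

```python
import numpy as np

def mpm_segmentation(y, mu0, mu1, sigma, beta, n_sweeps=200, burn_in=50,
                     rng=np.random.default_rng(3)):
    """Approximate maximizer-of-posterior-marginals (MPM) segmentation of image y
    under an Ising prior (coupling beta) and Gaussian likelihoods N(mu0, s^2), N(mu1, s^2)."""
    H, W = y.shape
    x = (y > 0.5 * (mu0 + mu1)).astype(int)        # crude initial labelling
    counts = np.zeros((H, W))                      # how often each pixel is labelled 1
    L0 = -0.5 * ((y - mu0) / sigma) ** 2
    L1 = -0.5 * ((y - mu1) / sigma) ** 2
    for sweep in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                nb = 0                             # sum of neighbour "spins" (2*label - 1)
                if i > 0:     nb += 2 * x[i - 1, j] - 1
                if i < H - 1: nb += 2 * x[i + 1, j] - 1
                if j > 0:     nb += 2 * x[i, j - 1] - 1
                if j < W - 1: nb += 2 * x[i, j + 1] - 1
                # conditional log-odds of label 1 vs 0 given neighbours and data
                log_odds = L1[i, j] - L0[i, j] + 2.0 * beta * nb
                x[i, j] = int(rng.uniform() < 1.0 / (1.0 + np.exp(-log_odds)))
        if sweep >= burn_in:
            counts += x
    return (counts / (n_sweeps - burn_in) > 0.5).astype(int)   # per-pixel marginal mode

# Toy example: noisy two-level image
rng_img = np.random.default_rng(7)
truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0
labels = mpm_segmentation(truth + 0.6 * rng_img.normal(size=truth.shape),
                          mu0=0.0, mu1=1.0, sigma=0.6, beta=0.8, n_sweeps=60, burn_in=20)
```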
37

Analysis and Optimization of Classifier Error Estimator Performance within a Bayesian Modeling Framework

Dalton, Lori Anne May 2012 (has links)
With the advent of high-throughput genomic and proteomic technologies, in conjunction with the difficulty in obtaining even moderately sized samples, small-sample classifier design has become a major issue in the biological and medical communities. Training-data error estimation becomes mandatory, yet none of the popular error estimation techniques have been rigorously designed via statistical inference or optimization. In this investigation, we place classifier error estimation in a framework of minimum mean-square error (MMSE) signal estimation in the presence of uncertainty, where uncertainty is relative to a prior over a family of distributions. This results in a Bayesian approach to error estimation that is optimal and unbiased relative to the model. The prior addresses a trade-off between estimator robustness (modeling assumptions) and accuracy. Closed-form representations for Bayesian error estimators are provided for two important models: discrete classification with Dirichlet priors (the discrete model) and linear classification of Gaussian distributions with fixed, scaled identity or arbitrary covariances and conjugate priors (the Gaussian model). We examine robustness to false modeling assumptions and demonstrate that Bayesian error estimators perform especially well for moderate true errors. The Bayesian modeling framework facilitates both optimization and analysis. It naturally gives rise to a practical expected measure of performance for arbitrary error estimators: the sample-conditioned mean-square error (MSE). Closed-form expressions are provided for both Bayesian models. We examine the consistency of Bayesian error estimation and illustrate a salient application in censored sampling, where sample points are collected one at a time until the conditional MSE reaches a stopping criterion. We address practical considerations for gene-expression microarray data, including the suitability of the Gaussian model, a methodology for calibrating normal-inverse-Wishart priors from unused data, and an approximation method for non-linear classification. We observe superior performance on synthetic high-dimensional data and real data, especially for moderate to high expected true errors and small feature sizes. Finally, arbitrary error estimators may be optimally calibrated assuming a fixed Bayesian model, sample size, classification rule, and error estimation rule. Using a calibration function mapping error estimates to their optimally calibrated values off-line, error estimates may be calibrated on the fly whenever the assumptions apply.
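For the discrete model, the Bayesian (MMSE) error estimate has a simple closed form: average the classifier's true error over the posterior, which amounts to plugging posterior-mean bin probabilities into the error expression. The sketch below assumes known class priors and Dirichlet priors over the bin probabilities of each class; it omits the Gaussian model, unknown class priors, and the sample-conditioned MSE developed in the dissertation, and the function and variable names are ours.

```python
import numpy as np

def bayesian_error_estimate_discrete(counts0, counts1, alpha0, alpha1, c0, psi):
    """Bayesian (MMSE) error estimate for a discrete classifier under Dirichlet priors,
    assuming the class prior c0 = P(Y=0) is known.

    counts0, counts1 : observed bin counts of the training samples from class 0 and class 1
    alpha0, alpha1   : Dirichlet hyperparameters for the bin probabilities of each class
    psi              : array of 0/1 labels the classifier assigns to each bin
    """
    counts0 = np.asarray(counts0, float)
    counts1 = np.asarray(counts1, float)
    # Posterior mean of each bin probability: (alpha_b + U_b) / (sum(alpha) + n)
    p0 = (alpha0 + counts0) / (alpha0 + counts0).sum()
    p1 = (alpha1 + counts1) / (alpha1 + counts1).sum()
    # Expected true error: class-0 mass in bins labelled 1, plus class-1 mass in bins labelled 0
    return c0 * p0[psi == 1].sum() + (1.0 - c0) * p1[psi == 0].sum()

# Toy example: 4 bins, uniform Dirichlet(1,...,1) priors, histogram-rule classifier
counts0 = np.array([10, 6, 2, 1])
counts1 = np.array([1, 3, 7, 9])
psi = (counts1 > counts0).astype(int)          # label each bin by its majority class
print(bayesian_error_estimate_discrete(counts0, counts1, np.ones(4), np.ones(4), c0=0.5, psi=psi))
```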
38

Contributions to Bayesian wavelet shrinkage

Remenyi, Norbert 07 November 2012 (has links)
This thesis provides contributions to research in Bayesian modeling and shrinkage in the wavelet domain. Wavelets are a powerful tool for describing phenomena that change rapidly in time, and wavelet-based modeling has become a standard technique in many areas of statistics and, more broadly, in science and engineering. Bayesian modeling and estimation in the wavelet domain have found useful applications in nonparametric regression, image denoising, and many other areas. In this thesis, we build on existing techniques and propose new methods for applications in nonparametric regression, image denoising, and partially linear models. The thesis consists of an overview chapter and four main topics.

In Chapter 1, we provide an overview of recent developments and the current status of Bayesian wavelet shrinkage research. The chapter contains an extensive literature review of almost 100 references. Its main focus is nonparametric regression, where the observations come from an unknown function contaminated with Gaussian noise. We present many methods that employ model-based and adaptive shrinkage of the wavelet coefficients through Bayes rules. These include new developments such as dependence models, complex wavelets, and Markov chain Monte Carlo (MCMC) strategies. Some applications of Bayesian wavelet shrinkage, such as curve classification, are also discussed.

In Chapter 2, we propose the Gibbs Sampling Wavelet Smoother (GSWS), an adaptive wavelet denoising methodology. We use the traditional mixture prior on the wavelet coefficients, but also formulate a fully Bayesian hierarchical model in the wavelet domain that accounts for the uncertainty of the prior parameters by placing hyperpriors on them. Since a closed-form solution for the Bayes estimator does not exist, the procedure is computational: the posterior mean is computed via MCMC simulations. We show how to develop an efficient Gibbs sampling algorithm for the proposed model. The resulting procedure is fully Bayesian, adapts to the underlying signal, and provides good denoising performance compared to state-of-the-art methods. The method is illustrated on a real data set arising from the analysis of metabolic pathways, where an iterative shrinkage procedure is developed to preserve the mass balance of the metabolites in the system. We also show how the methodology can be extended to complex wavelet bases.

In Chapter 3, we propose a wavelet-based denoising methodology based on a Bayesian hierarchical model with a double Weibull prior. The interesting feature is that, in contrast to the mixture priors traditionally used by some state-of-the-art methods, the wavelet coefficients are modeled by a single density. Two estimators are developed, one based on the posterior mean and the other on the larger posterior mode, and we show how to calculate both efficiently. The methodology provides good denoising performance, comparable even to state-of-the-art methods that use a mixture prior and an empirical Bayes setting of hyperparameters, as demonstrated by simulations on standard test functions. An application to a real-world data set is also considered.

In Chapter 4, we propose a wavelet shrinkage method based on a neighborhood of wavelet coefficients that includes two neighboring coefficients and a parental coefficient. The methodology, called Lambda-neighborhood wavelet shrinkage, is motivated by the shape of the considered neighborhood. We propose a Bayesian hierarchical model using a contaminated exponential prior on the total mean energy in the Lambda-neighborhood. The hyperparameters are estimated by the empirical Bayes method, and the posterior mean, median, and Bayes factor are obtained and used to estimate the total mean energy. Shrinkage of the neighboring coefficients is based on the ratio of the estimated and observed energy. Extensive simulations show that the proposed methodology is comparable and often superior to several established wavelet denoising methods that utilize neighboring information. An application to a real-world data set from inductance plethysmography is considered, and an extension to image denoising is discussed.

In Chapter 5, we propose a wavelet-based methodology for estimation and variable selection in partially linear models. Inference is conducted in the wavelet domain, which provides a sparse and localized decomposition appropriate for nonparametric components with various degrees of smoothness. A hierarchical Bayes model is formulated on the parameters of this representation, and estimation and variable selection are performed by a Gibbs sampling procedure. For both the parametric and nonparametric parts of the model we use point-mass-at-zero contamination priors with a double exponential spread distribution, extending the model of Chapter 2 to partially linear models. Only a few papers exist on partially linear wavelet models, and we show that the proposed methodology is often superior to existing methods in estimating the model parameters. Moreover, the method performs Bayesian variable selection by a stochastic search over the parametric part of the model.
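As a toy illustration of Bayes-rule shrinkage in the wavelet domain (not the mixture, double-Weibull, or Lambda-neighborhood models of the thesis), the sketch below applies the posterior-mean rule under a simple Gaussian prior on each detail coefficient, with the noise level and prior variance set empirically. It assumes the PyWavelets package is available; all names are ours.

```python
import numpy as np
import pywt

def bayes_wavelet_shrink(signal, wavelet="db4", level=5):
    """Toy Bayesian wavelet shrinkage: model each detail coefficient as theta + noise with
    theta ~ N(0, tau^2) and noise ~ N(0, sigma^2), and return the posterior-mean
    reconstruction. sigma is estimated from the finest-scale coefficients (MAD);
    tau^2 is set by moment matching at each level."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # robust noise estimate
    shrunk = [coeffs[0]]                                     # leave the coarse approximation alone
    for d in coeffs[1:]:
        tau2 = max(np.mean(d ** 2) - sigma ** 2, 0.0)        # empirical signal variance per level
        shrunk.append(d * tau2 / (tau2 + sigma ** 2))        # posterior mean = linear shrinkage
    return pywt.waverec(shrunk, wavelet)

# Toy example: noisy blocks-like signal
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 1024)
clean = np.sign(np.sin(8 * np.pi * t)) + 0.5 * (t > 0.6)
denoised = bayes_wavelet_shrink(clean + 0.3 * rng.normal(size=t.size))
```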
39

General Adaptive Monte Carlo Bayesian Image Denoising

Zhang, Wen January 2010 (has links)
Image noise reduction, or denoising, is an active area of research, although many of the techniques cited in the literature mainly target additive white noise. With an emphasis on signal-dependent noise, this thesis presents the General Adaptive Monte Carlo Bayesian Image Denoising (GAMBID) algorithm, a model-free approach based on random sampling. Testing is conducted on synthetic images with two different signal-dependent noise types as well as on real synthetic aperture radar and ultrasound images. Results show that GAMBID can achieve state-of-the-art performance, but suffers from some limitations in dealing with textures and fine low-contrast features. These aspects can be addressed in future iterations when GAMBID is expanded into a versatile denoising framework.
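A heavily simplified sketch of the Monte Carlo flavour of such denoisers (not GAMBID itself): for each pixel, sample candidate intensities around the local mean, weight them by the likelihood of the observed neighbourhood under a signal-dependent (multiplicative) noise model of the kind seen in SAR and ultrasound, and return the weighted average. The noise model, sampling scheme and all names are our own assumptions.

```python
import numpy as np

def mc_bayes_denoise(img, n_samples=64, patch=1, noise_std=0.3,
                     rng=np.random.default_rng(5)):
    """Monte Carlo posterior-mean-style denoising under multiplicative noise
    y = x * (1 + noise_std * n): candidate intensities are sampled around the local
    mean and weighted by the Gaussian likelihood of the observed neighbourhood,
    whose variance depends on the candidate (signal-dependent noise)."""
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    pad = np.pad(img, patch, mode="reflect")
    for i in range(H):
        for j in range(W):
            nb = pad[i:i + 2 * patch + 1, j:j + 2 * patch + 1].ravel()
            cand = nb.mean() * (1.0 + 0.5 * rng.normal(size=n_samples))   # candidate intensities
            cand = np.clip(cand, 1e-6, None)
            # log-likelihood of the neighbourhood under each candidate (up to constants)
            logw = (-0.5 * np.sum(((nb[None, :] - cand[:, None])
                                   / (noise_std * cand[:, None])) ** 2, axis=1)
                    - nb.size * np.log(cand))
            w = np.exp(logw - logw.max())
            out[i, j] = np.sum(w * cand) / w.sum()
    return out

# Toy example on a small synthetic speckled image
img = np.ones((32, 32)); img[10:22, 10:22] = 2.0
noisy = img * (1.0 + 0.3 * np.random.default_rng(6).normal(size=img.shape))
clean_est = mc_bayes_denoise(np.clip(noisy, 0.05, None))
```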
