1

Bayesian risk management : "Frequency does not make you smarter"

Fucik, Markus January 2010 (has links)
Within our research group Bayesian Risk Solutions we have coined the idea of a Bayesian Risk Management (BRM). It calls for (1) a more transparent and diligent data analysis as well as (2) an open-minded incorporation of human expertise in risk management. In this dissertation we formalize a framework for BRM based on the two pillars Hardcore-Bayesianism (HCB) and Softcore-Bayesianism (SCB), providing solutions for the claims above. For data analysis we favor Bayesian statistics with its Markov chain Monte Carlo (MCMC) simulation algorithm, which provides a full picture of data-induced uncertainty beyond classical point estimates. We calibrate twelve different stochastic processes to four years of CO2 price data. In addition, we calculate derived risk measures (ex-ante/ex-post value-at-risks, capital charges, option prices) and compare them to their classical counterparts. When statistics fails because of a lack of reliable data, we propose our integrated Bayesian Risk Analysis (iBRA) concept, a basic guideline for an expertise-driven quantification of critical risks. We additionally review elicitation techniques and tools supporting experts in expressing their uncertainty. Unfortunately, Bayesian thinking is often blamed for arbitrariness. We therefore introduce the idea of a Bayesian due diligence, judging expert assessments according to their information content and their inter-subjectivity. / This thesis addresses approaches to a Bayesian risk management for the measurement of risks. It concentrates on the following central questions: (1) How can risks be quantified transparently when only a limited number of suitable historical observations is available for data analysis? (2) How can risks be quantified transparently when, for lack of suitable historical observations, no data analysis is possible at all? (3) To what extent can arbitrariness in risk quantification be limited? To answer the first question, this thesis proposes the use of Bayesian statistics. In contrast to classical least-squares or maximum-likelihood point estimators, Bayesian posterior distributions explicitly measure the data-induced parameter and model uncertainty. As an application, twelve different stochastic processes are calibrated to CO2 price time series by means of the efficient Bayesian Markov chain Monte Carlo (MCMC) simulation algorithm. Since Bayesian statistics allows the computation of model probabilities as a cardinal measure of goodness of fit, log-variance processes could be identified as by far the best model class. For selected processes, the effect of parameter uncertainty on derived risk measures (ex-ante/ex-post value-at-risks, regulatory capital charges, option prices) was additionally examined. In general, the differences between Bayesian and classical risk measures grow with the complexity of the model assumptions for the CO2 price. Moreover, Bayesian value-at-risks and capital charges are more conservative than their classical counterparts (a risk premium for parameter uncertainty). Regarding the second question, the position taken in this thesis is that risk quantification without (sufficiently) reliable data is only possible by incorporating expert knowledge, which requires a structured procedure. The integrated Bayesian Risk Analysis (iBRA) concept is therefore introduced; it unites concepts, techniques, and tools for the expert-based identification and quantification of risk factors and their dependencies, and it also offers approaches for dealing with competing expert opinions. Since resource-efficient tools for quantifying expert knowledge are of particular practical interest, the online market PCXtrade and the online survey platform PCXquest were designed within the scope of this thesis and successfully tested several times. Two empirical studies additionally examined to what extent people are able to quantify their uncertainties at all, and how they assess experts' self-ratings. The results suggest that people tend to overestimate their forecasting abilities and tend to place high trust in those expert assessments in which the expert himself expressed high confidence; a considerable share of respondents, however, view very high self-ratings of an expert negatively. Because Bayesianism promotes probability as a measure of personal uncertainty, it offers no framework for verifying or falsifying assessments. This is sometimes equated with arbitrariness and may be one reason why openly practiced Bayesianism leads a shadowy existence in Germany. This thesis therefore puts the concept of a Bayesian due diligence up for discussion: a criteria-based evaluation of expert assessments that focuses in particular on their inter-subjectivity and information content.
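The "risk premium for parameter uncertainty" noted above can be illustrated with a small sketch: under a normal return model with the standard noninformative prior, the posterior predictive distribution is a Student-t, whose heavier tails make the Bayesian value-at-risk more conservative than the classical plug-in VaR. The simulated returns, prior, and confidence level below are illustrative assumptions, not the thesis's CO2 calibration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-in data: 250 i.i.d. normal daily log-returns (the thesis instead
# calibrates twelve stochastic processes to four years of CO2 prices).
returns = rng.normal(0.0, 0.02, size=250)
n = len(returns)
muhat, sighat = returns.mean(), returns.std(ddof=1)

# Classical 99% value-at-risk: plug the point estimates into N(mu, sigma^2).
var_classical = -(muhat + sighat * stats.norm.ppf(0.01))

# Bayesian 99% VaR: with the noninformative prior p(mu, sigma^2) ~ 1/sigma^2,
# the posterior predictive return is Student-t with n-1 degrees of freedom,
# location muhat and scale sighat * sqrt(1 + 1/n); parameter uncertainty
# widens the predictive tails.
scale = sighat * np.sqrt(1.0 + 1.0 / n)
var_bayes = -(muhat + scale * stats.t.ppf(0.01, df=n - 1))

print(var_classical, var_bayes)
```

Because the t-quantile always exceeds the normal quantile in magnitude and the predictive scale exceeds the plug-in scale, the Bayesian VaR is more conservative for any sample, matching the thesis's qualitative finding.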
2

Régression logistique bayésienne : comparaison de densités a priori

Deschênes, Alexandre 07 1900 (has links)
Logistic regression is a generalized linear model (GLM) used for binary response variables. The model estimates the probability of success of this variable through a linear combination of explanatory variables. When the goal is to estimate as accurately as possible the impact of the various incentives of a marketing campaign (the coefficients of the logistic regression), the most accurate estimation method is sought. Using the MCMC method of slice sampling, we compare different prior densities specified by different types of density, location parameters, and scale parameters. These comparisons are applied to samples of different sizes generated with different probabilities of success. The maximum likelihood estimator, Gelman's method, and Genkin's method complete the comparison. Our simulations show that three estimation methods yield globally the most accurate estimates of the logistic regression coefficients: slice-sampling MCMC with a normal prior density centered at 0 with variance 3.125, slice-sampling MCMC with a Student prior density with 3 degrees of freedom, also centered at 0 with variance 3.125, and Gelman's method with a Cauchy density centered at 0 with scale parameter 2.5.
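A rough sketch of the machinery compared above (not the thesis's own code): a univariate slice sampler in the style of Neal (2003), with stepping-out and shrinkage, drawing from the posterior of a one-coefficient logistic regression under a normal(0, 3.125) prior. The data and true coefficient below are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one explanatory variable, true coefficient beta = 1.
x = rng.standard_normal(200)
y = (rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-x))).astype(float)

def log_post(beta):
    """Log-posterior: logistic log-likelihood + normal(0, 3.125) prior."""
    eta = beta * x
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))  # numerically stable
    return loglik - beta**2 / (2.0 * 3.125)

def slice_sample(logf, b0, n_draws, w=1.0):
    """Univariate slice sampler with stepping-out and shrinkage."""
    draws, b = [], b0
    for _ in range(n_draws):
        logz = logf(b) + np.log(rng.uniform())   # vertical level of the slice
        lo = b - w * rng.uniform()               # step out until outside slice
        hi = lo + w
        while logf(lo) > logz:
            lo -= w
        while logf(hi) > logz:
            hi += w
        while True:                              # shrink until a point accepted
            cand = rng.uniform(lo, hi)
            if logf(cand) > logz:
                b = cand
                break
            if cand < b:
                lo = cand
            else:
                hi = cand
        draws.append(b)
    return np.array(draws)

chain = slice_sample(log_post, 0.0, 1500)
beta_hat = chain[200:].mean()   # posterior mean after burn-in
print(beta_hat)
```

The same sampler applied coordinate-wise extends to multiple coefficients, and swapping the prior term in `log_post` is all that is needed to compare normal, Student, and Cauchy densities as in the study.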
3

Topics in living cell multiphoton laser scanning microscopy (MPLSM) image analysis

Zhang, Weimin 30 October 2006 (has links)
Multiphoton laser scanning microscopy (MPLSM) is an advanced fluorescence imaging technology which produces a less noisy microscope image and minimizes damage to living tissue. The MPLSM images in this research show dehydroergosterol (DHE, a fluorescent sterol whose properties closely mimic those of cholesterol in lipoproteins and membranes) on the plasma membrane area of living cells. The objective is to use statistical image analysis to describe how cholesterol is distributed on a living cell's membrane. The statistical methods applied in this research include image segmentation/classification and spatial analysis. In image segmentation analysis, we design a supervised learning method using a smoothing technique with rank statistics. This approach is especially useful when we have only very limited information about the classes we want to segment. We also apply unsupervised learning methods to the image data. In the spatial analysis, we explore the spatial correlation of the segmented data by a Monte Carlo test. Our research shows that the distributions of DHE exhibit a spatially aggregated pattern. We fit two aggregated point pattern models, an area-interaction process model and a Poisson cluster process model, to the data. For the area-interaction process model, we design algorithms for the maximum pseudo-likelihood estimator and the Monte Carlo maximum likelihood estimator in a lattice data setting. For the Poisson cluster process, parameter estimation for implicit statistical models is used. A group of simulation studies shows that the Monte Carlo maximum likelihood method produces consistent parameter estimates. The goodness-of-fit tests show that neither model can be rejected. We propose to use the area-interaction process model in further research.
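A minimal version of such a Monte Carlo test for spatial aggregation (an illustrative sketch, not the dissertation's pipeline) compares the mean nearest-neighbour distance of an observed pattern with patterns simulated under complete spatial randomness (CSR); clustered patterns give markedly smaller distances. The cluster layout below is a hypothetical stand-in for segmented DHE pixels.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_nn_dist(pts):
    # mean nearest-neighbour distance of a 2-D point pattern
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

# Hypothetical clustered pattern: 10 parent points on the unit square,
# each with 10 tightly scattered offspring (Poisson-cluster-like).
parents = rng.uniform(0, 1, size=(10, 2))
pts = np.concatenate([p + rng.normal(0, 0.01, size=(10, 2)) for p in parents])
n = len(pts)

obs = mean_nn_dist(pts)

# Monte Carlo test: 199 CSR patterns with the same number of points.
sims = np.array([mean_nn_dist(rng.uniform(0, 1, size=(n, 2)))
                 for _ in range(199)])
p_value = (1 + np.sum(sims <= obs)) / (1 + len(sims))

print(obs, sims.mean(), p_value)
```

A small p-value rejects CSR in favour of aggregation, which is the direction of the finding reported for the DHE distributions.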
4

Modelos estocásticos com heterocedasticidade para séries temporais em finanças / Stochastic models with heteroscedasticity for time series in finance

Oliveira, Sandra Cristina de 20 May 2005 (has links)
In this work we present a study of autoregressive conditional heteroskedasticity (ARCH) models and autoregressive models with ARCH errors (AR-ARCH). We also present procedures for estimating these models and selecting their order. The model parameters are estimated using two distinct techniques: classical inference and Bayesian inference. In the maximum likelihood approach we obtain confidence intervals using the bootstrap resampling method; in the Bayesian approach we adopt an informative prior and a noninformative prior distribution, considering a reparametrization of the models that maps the parameter space onto the real line. This procedure allows us to adopt normal prior distributions for the transformed parameters. The posterior distributions are obtained using Markov chain Monte Carlo (MCMC) methods. The methodology is illustrated on simulated series and on series from the Brazilian financial market.
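As an illustration of the ARCH family studied above (a sketch of a pure ARCH(1) with assumed parameters, omitting the autoregressive mean part of the AR-ARCH models), a short simulation reproduces two stylized facts these models capture: an unconditional variance of a0/(1 - a1) and positive autocorrelation of squared returns, i.e. volatility clustering.

```python
import numpy as np

rng = np.random.default_rng(7)

# ARCH(1):  e_t = sqrt(h_t) * z_t,   h_t = a0 + a1 * e_{t-1}^2
a0, a1, T = 1.0, 0.5, 20000          # assumed parameters for illustration
e = np.zeros(T)
h = np.zeros(T)
h[0] = a0 / (1 - a1)                 # start at the unconditional variance
e[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, T):
    h[t] = a0 + a1 * e[t - 1] ** 2
    e[t] = np.sqrt(h[t]) * rng.standard_normal()

uncond_var = e.var()                 # theory: a0 / (1 - a1) = 2
sq = e**2 - (e**2).mean()
acf1_sq = (sq[1:] * sq[:-1]).mean() / sq.var()   # theory: a1 = 0.5

print(uncond_var, acf1_sq)
```

The returns themselves are serially uncorrelated, yet their squares are not; it is exactly this structure that the classical and Bayesian estimation procedures of the abstract are designed to recover.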
5

"Métodos de estimação na teoria de resposta ao item" / Estimation methods in item response theory

Azevedo, Caio Lucidius Naberezny 27 February 2003 (has links)
In this work we present the most important estimation methods for some classes of item response models (both dichotomous and polychotomous). We discuss some properties of these methods, and we conducted appropriate simulations to compare their performance.
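One of the simplest estimation problems in item response theory, maximum-likelihood estimation of a respondent's ability under the two-parameter logistic (2PL) model with known item parameters, can be sketched as follows; the item parameters and response pattern below are hypothetical.

```python
import numpy as np

# 2PL model: P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))
a = np.array([1.0, 1.5, 0.8, 1.2, 2.0])    # discriminations (hypothetical)
b = np.array([-1.0, 0.0, 0.5, 1.0, -0.5])  # difficulties (hypothetical)
y = np.array([1, 1, 0, 0, 1])              # observed 0/1 responses

def theta_mle(a, b, y, iters=50):
    """Newton-Raphson for the ability theta; the log-likelihood is concave."""
    theta = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        grad = np.sum(a * (y - p))             # score function
        info = np.sum(a**2 * p * (1 - p))      # Fisher information
        theta += grad / info
    return theta

theta = theta_mle(a, b, y)
print(theta)
```

Item-parameter estimation itself (marginal maximum likelihood, Bayesian MCMC) builds on this same likelihood, alternating or jointly sampling over abilities and item parameters.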
6

Abordagem bayesiana para curva de crescimento com restrições nos parâmetros

AMARAL, Magali Teresópolis Reis 18 August 2008 (has links)
The adjustment of weight-age growth curves plays an important role in animal production planning. The adjusted growth curves must be consistent with the biological interpretation of animal growth, which often demands imposing constraints on the model parameters. Inference for the parameters of constrained nonlinear models using classical techniques presents various difficulties. To bypass them, a Bayesian approach to growth-curve fitting is proposed, in which the constraints on the model parameters are introduced through the choice of the prior density. Due to the nonlinearity, the posterior density of the parameters does not have a kernel that can be identified among the traditional distributions, and its moments can only be obtained numerically. In this work, Markov chain Monte Carlo (MCMC) simulation was implemented to obtain a summary of the posterior density, and model selection criteria based on samples generated from the posterior density were applied to the observed data. The main purpose of this work is to show that the Bayesian approach is practical and to compare the Bayesian estimates under a noninformative (Jeffreys) prior density with the classical estimates obtained by the Gauss-Newton method. It was thus possible to observe that confidence intervals based on asymptotic theory fail, indicating non-significance of certain parameters of some models, while the Bayesian credibility intervals do not present this problem. The programs in this work were implemented in R. To illustrate the utility of the proposed method, real data were analyzed from an experiment evaluating crossbreeding systems among beef cattle breeds, carried out by Embrapa Pecuária Sudeste. The data correspond to 12 weight measurements of animals between 8 and 19 months of age, from the genetic groups of the Nelore and Canchim breeds belonging to the AALLAB genotype (Paz 2002). The results reveal excellent applicability of the Bayesian method: the Richards model presented convergence difficulties in both the classical and the Bayesian approach (with a noninformative prior), while the logistic model provided the best fit to the data under both methodologies, with both noninformative and informative prior densities.
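The classical side of the comparison described above can be sketched with a logistic growth curve fitted by nonlinear least squares; the asymptote, rate, and inflection values below are assumed for illustration (the thesis's own data come from the Embrapa experiment, and its Bayesian fit would instead encode the positivity constraints in the prior, with `bounds` in `curve_fit` playing the analogous classical role).

```python
import numpy as np
from scipy.optimize import curve_fit

# Logistic growth curve: w(t) = A / (1 + exp(-k * (t - t0)))
def logistic(t, A, k, t0):
    return A / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical weights at 12 monthly ages (8 to 19 months, as in the data set).
ages = np.arange(8, 20, dtype=float)
true_A, true_k, true_t0 = 300.0, 0.35, 12.0   # assumed "true" parameters
rng = np.random.default_rng(3)
weights = logistic(ages, true_A, true_k, true_t0) + rng.normal(0, 0.5, ages.size)

# Gauss-Newton-type nonlinear least squares fit from a rough starting point;
# bounds keep the parameters biologically meaningful (all positive).
popt, _ = curve_fit(logistic, ages, weights, p0=(250.0, 0.2, 10.0),
                    bounds=([0.0, 0.0, 0.0], [1000.0, 5.0, 30.0]))
A_hat, k_hat, t0_hat = popt
print(A_hat, k_hat, t0_hat)
```

With only 12 observations and an asymptote reached well beyond the observed ages, the asymptotic confidence intervals from this classical fit can be unreliable, which is precisely the failure mode the abstract reports and the Bayesian credibility intervals avoid.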
