About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Exploring complex loss functions for point estimation

Chaisee, Kuntalee January 2015
This thesis presents several aspects of simulation-based point estimation in the context of Bayesian decision theory. The first part of the thesis (Chapters 4-5) concerns the estimation-then-minimisation (ETM) method as an efficient computational approach to computing simulation-based Bayes estimates. We are interested in applying the ETM method to compute Bayes estimates under some non-standard loss functions. However, for some loss functions the ETM method cannot be implemented straightforwardly. We examine the ETM method via Taylor approximations and cubic spline interpolation for Bayes estimates in one dimension; in two dimensions, we implement the ETM method via bicubic interpolation. The second part of the thesis (Chapter 6) concentrates on the analysis of a mixture posterior distribution with a known number of components using Markov chain Monte Carlo (MCMC) output. We aim for Bayesian point estimation under a label-invariant loss function, which allows us to estimate the parameters of the mixture posterior distribution without dealing with label switching. We also investigate the uncertainty of the point estimates, which we present through an uncertainty bound and a crude uncertainty bound on the expected loss evaluated at the point estimates, both based on MCMC samples. The crude uncertainty bound is relatively cheap to compute but appears unreliable; the uncertainty bound, approximated by a 95% confidence interval, appears reliable but is very computationally expensive. In the third part of the thesis (Chapter 7), we propose a possible alternative way to present the uncertainty of Bayesian point estimates. We adopt the idea of leaving out observations from the jackknife method to compute jackknife-Bayes estimates, which we then use to visualise the uncertainty of the Bayes estimates. Further investigation is required to improve the method, and some suggestions are made to maximise the efficiency of this approach.
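To give a concrete, purely illustrative flavour of simulation-based Bayes estimation under a non-standard loss, the sketch below estimates the posterior expected loss from a set of placeholder posterior draws and then minimises it numerically, in the spirit of an estimation-then-minimisation step. The Gaussian draws, the LinEx loss, and the use of a direct numerical minimiser rather than an interpolated loss surface are assumptions of this sketch, not the thesis's own code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Placeholder "posterior sample": in practice these would be MCMC draws.
rng = np.random.default_rng(0)
theta_draws = rng.normal(loc=1.0, scale=0.5, size=5000)

def linex_loss(d, theta, a=1.0):
    """LinEx loss: an asymmetric, non-standard loss in d - theta."""
    z = a * (d - theta)
    return np.exp(z) - z - 1.0

def estimated_expected_loss(d):
    """Monte Carlo estimate of the posterior expected loss at decision d."""
    return linex_loss(d, theta_draws).mean()

# Estimate the expected-loss surface from the draws, then minimise it to
# obtain the simulation-based Bayes estimate.
res = minimize_scalar(estimated_expected_loss,
                      bounds=(theta_draws.min(), theta_draws.max()),
                      method="bounded")
print("Bayes estimate under LinEx loss:", res.x)
print("Posterior mean (Bayes estimate under squared error):", theta_draws.mean())
```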
2

Markov chains for genetics and extremes

Sisson, Scott Antony January 2001
No description available.
3

Consequences of estimating models using symmetric loss functions when the actual problem is asymmetric

Ödmann, Erik, Carlsson, David January 2022
Whenever we make a prediction we will make an error of some degree. Which is worse, positive errors or negative ones? This question is important to answer before estimating a model. When estimating a model, a loss function is chosen: a function that specifies how a particular error is to be transformed into a penalty. Previous research points to applications where asymmetric loss functions yield better models than symmetric loss functions. Through a simulation study, this thesis highlights the consequences of using symmetric and asymmetric loss functions when the actual problem is assumed to be asymmetric. The thesis is intended both to cover a gap in the literature and to correct a common statistical misunderstanding. Our core findings are that the models that take the asymmetry into account have the lowest prediction errors, and that a larger degree of asymmetry leads to a greater difference in performance between asymmetric and symmetric models, in favour of the models estimated with asymmetric loss functions. This confirms what is demonstrated in the existing literature and what can be found in statistical theory.
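To make the contrast concrete, here is a minimal sketch (not the thesis's simulation design) showing that the optimal constant prediction under a symmetric squared-error loss differs from the one under an asymmetric LinEx loss for the same data; the normal outcome distribution and the asymmetry constant are assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
y = rng.normal(loc=0.0, scale=2.0, size=10_000)  # assumed outcomes

def linex(err, a=0.8):
    """Asymmetric loss: positive errors (over-prediction) are penalised
    exponentially, negative errors only roughly linearly."""
    return np.exp(a * err) - a * err - 1.0

def avg_loss(pred, loss):
    return loss(pred - y).mean()

best_symmetric = minimize_scalar(lambda p: avg_loss(p, np.square)).x
best_asymmetric = minimize_scalar(lambda p: avg_loss(p, linex)).x

print(f"optimal prediction under squared error: {best_symmetric:.3f}")   # near the mean
print(f"optimal prediction under LinEx loss:    {best_asymmetric:.3f}")  # shifted down
```

Under the asymmetric loss the optimal prediction is pulled away from the mean, which is exactly the effect a symmetric loss ignores.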
4

Minimisation de fonctions de perte calibrée pour la classification des images / Minimization of calibrated loss functions for image classification

Bel Haj Ali, Wafa 11 October 2013
Image classification is now a major challenge, since it concerns on the one hand the millions or even billions of images found across the web, and on the other hand images used in critical real-time applications. Such classification generally relies on learning methods and classifiers that must deliver both accuracy and speed. These learning problems now touch a large number of application areas: the web (profiling, targeting, social networks, search engines), "Big Data", and of course computer vision, such as object recognition and image classification. This thesis falls into the last category and presents supervised learning algorithms based on the minimisation of so-called "calibrated" loss functions for two kinds of classifiers: k-nearest neighbours (kNN) and linear classifiers. These learning methods were tested on large image databases and then applied to biomedical images. The thesis first revisits a boosting algorithm for kNN classifiers for large-scale classification, and then presents a second method for learning these NN classifiers using a Newton descent approach for faster convergence. In a second part, the thesis introduces a new learning algorithm based on stochastic Newton descent for linear classifiers, which are known for their simplicity and speed. Finally, these three methods are applied in a medical application concerning the classification of cells in biology and pathology.
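As a generic illustration of minimising a calibrated surrogate loss for a linear classifier with Newton-type steps (a sketch on assumed synthetic data, not the algorithms developed in the thesis), the snippet below fits a logistic-loss linear classifier using damped Newton iterations.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 3
X = np.c_[np.ones(n), rng.normal(size=(n, d))]               # intercept + features
w_true = np.array([0.5, 2.0, -1.0, 0.5])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

w = np.zeros(d + 1)
for _ in range(10):                                           # Newton iterations
    p = 1 / (1 + np.exp(-X @ w))                              # predicted probabilities
    grad = X.T @ (p - y) / n                                  # gradient of the logistic loss
    hess = (X * (p * (1 - p))[:, None]).T @ X / n             # Hessian of the logistic loss
    w -= np.linalg.solve(hess + 1e-6 * np.eye(d + 1), grad)   # damped Newton step

print("estimated weights:", np.round(w, 2))
```

The logistic loss is one example of a classification-calibrated surrogate, and minimising it with second-order (Newton) updates typically converges in far fewer iterations than plain gradient descent.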
5

Small Area Estimation for Survey Data: A Hierarchical Bayes Approach

Karaganis, Milana 14 September 2009
Model-based estimation techniques have been widely used in small area estimation. This thesis focuses on the Hierarchical Bayes (HB) estimation techniques in application to small area estimation for survey data. We will study the impact of applying spatial structure to area-specific effects and utilizing a specific generalized linear mixed model in comparison with a traditional Fay-Herriot estimation model. We will also analyze different loss functions with applications to a small area estimation problem and compare estimates obtained under these loss functions. Overall, for the case study under consideration, area-specific geographical effects will be shown to have a significant effect on estimates. As well, using a generalized linear mixed model will prove to be more advantageous than the usual Fay-Herriot model. We will also demonstrate the benefits of using a weighted balanced-type loss function for the purpose of balancing the precision of estimates with their closeness to the direct estimates.
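For a rough sense of the balanced-loss idea mentioned in the abstract (an illustrative sketch with made-up numbers, not the thesis's model), a weighted balanced squared-error loss trades off closeness to a direct survey estimate against closeness to the parameter, and its Bayes estimate reduces to a weighted average of the direct estimate and the posterior mean.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed inputs for one small area: a direct survey estimate and posterior
# draws of the area mean from some hierarchical Bayes model.
direct_estimate = 12.4
theta_draws = rng.normal(loc=10.8, scale=0.9, size=4000)

def balanced_bayes_estimate(draws, direct, omega):
    """Bayes estimate under the balanced loss
    L(theta, d) = omega*(d - direct)**2 + (1 - omega)*(d - theta)**2,
    which minimises to a weighted average of the direct estimate
    and the posterior mean."""
    return omega * direct + (1 - omega) * draws.mean()

for omega in (0.0, 0.3, 0.7):
    est = balanced_bayes_estimate(theta_draws, direct_estimate, omega)
    print(f"omega = {omega:.1f}: estimate = {est:.2f}")
```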
6

Inferência bayesiana para testes acelerados "step-stress" com dados de falha sob censura e distribuição Gama / Bayesian inference for step-stress accelerated tests with censored failure data and the Gamma distribution

Chagas, Karlla Delalibera [UNESP] 16 April 2018
In this work we model data arising from an accelerated test in which the applied stress load is of the step-stress type. We use simple and multiple step-stress models under type II censoring and progressive type II censoring, and we assume that the lifetimes of the items under test follow a Gamma distribution. In addition, the simple step-stress model under type II censoring is also considered in the presence of competing risks. The model parameters are estimated with both a classical approach, via the maximum likelihood method, and a Bayesian approach using non-informative priors. We aim to compare these two approaches through simulations for different sample sizes and different loss functions (quadratic error, LinEx, entropy), and to verify, through summary statistics, which of the approaches comes closer to the true parameter values.
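As a hedged illustration of how point estimates can differ across the three loss functions named above, the sketch below applies the standard closed-form Bayes estimators (posterior mean for quadratic loss, the usual LinEx and entropy-loss formulas) to placeholder posterior draws of a positive parameter; the Gamma-shaped draws and the LinEx constant are assumptions, and this is not the thesis's code.

```python
import numpy as np

rng = np.random.default_rng(4)
# Placeholder posterior draws of a positive parameter (e.g. a Gamma scale).
theta = rng.gamma(shape=3.0, scale=2.0, size=8000)

a = 0.5  # assumed LinEx asymmetry constant

# Closed-form Bayes estimates under the three losses:
quadratic = theta.mean()                          # quadratic loss -> posterior mean
linex = -np.log(np.mean(np.exp(-a * theta))) / a  # LinEx loss
entropy = 1.0 / np.mean(1.0 / theta)              # entropy loss d/theta - log(d/theta) - 1

print(f"quadratic: {quadratic:.3f}   LinEx: {linex:.3f}   entropy: {entropy:.3f}")
```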
7

Volatility Modelling in the Swedish and US Fixed Income Market : A comparative study of GARCH, ARCH, E-GARCH and GJR-GARCH Models on Government Bonds

Mortimore, Sebastian, Sturehed, William January 2023
Volatility is an important variable in financial markets, in risk management, and in investment decisions, and volatility models are useful tools for predicting future volatility. The purpose of this study is to compare the accuracy of various volatility models, including ARCH, GARCH, and extensions of the GARCH framework, applied to the Swedish and US fixed income markets for government bonds. The performance of these models is assessed by out-of-sample forecasting using different loss functions, such as RMSE, MAE and MSE, which measure their ability to forecast future volatility. Daily volatility forecasts computed from daily bid prices of Swedish and US 2-, 5- and 10-year government bonds are compared against realized volatility, which acts as the proxy for true volatility. The results show that US government bonds, apart from the US 2-year yield, did not exhibit significant negative volatility, volatility asymmetry or leverage effects. Overall, the ARCH and GARCH models outperformed E-GARCH and GJR-GARCH, except for the US 2-year yield, which did show negative volatility, asymmetry and leverage effects and for which the GJR-GARCH model outperformed the ARCH and GARCH models.
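To illustrate the out-of-sample evaluation step described above (a toy example with simulated series, not the study's data or fitted models), the snippet below scores two hypothetical volatility-forecast series against a realized-volatility proxy using the same three loss functions.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 250
realized = np.abs(rng.normal(0.8, 0.3, size=T))         # realized-volatility proxy
forecast_a = realized + rng.normal(0.00, 0.15, size=T)  # e.g. a GARCH-type forecast
forecast_b = realized + rng.normal(0.05, 0.25, size=T)  # e.g. an E-GARCH-type forecast

def losses(forecast, actual):
    """Score a volatility forecast against the realized proxy."""
    err = forecast - actual
    mse = np.mean(err ** 2)
    return {"MSE": mse, "RMSE": np.sqrt(mse), "MAE": np.mean(np.abs(err))}

for name, f in [("model A", forecast_a), ("model B", forecast_b)]:
    print(name, {k: round(v, 4) for k, v in losses(f, realized).items()})
```

Lower loss values indicate better out-of-sample forecasts; the same kind of comparison can be run for each GARCH-family model and bond maturity.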
8

Fonctions de perte en actuariat / Loss functions in actuarial science

Craciun, Geanina January 2009
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
