1 |
Estimation of the reciprocal of a binomial proportion. Wei, Jiajin, 04 August 2020.
As a classic parameter originating from the binomial distribution, the binomial proportion has been well studied in the literature due to its wide range of applications. In contrast, the reciprocal of the binomial proportion, also known as the inverse proportion, is often overlooked, although it plays an important role in sampling designs and clinical studies. A simple way to estimate the inverse proportion is to apply maximum likelihood estimation (MLE). This estimator, however, is not valid because it suffers from the zero-event problem, which occurs when no successful event is observed in the trials. We first review a number of methods proposed to overcome the zero-event problem and discuss whether they are feasible for estimating the inverse proportion. Inspired by Wilson (1927) and Agresti and Coull (1998), this thesis focuses on a family of shrinkage estimators of the inverse proportion and derives the optimal estimator within this family. The shrinkage estimator overcomes the zero-event problem by including a positive shrinkage parameter, which is intrinsically related to the expected value of the resulting estimator. To find the best shrinkage parameter, we systematically investigate the relationship between the shrinkage parameter and the estimation bias of the shrinkage estimator. The explicit expressions of the expected value function of the estimator and of the best shrinkage parameter are, however, quite complicated to compute when the number of trials is large. Hence, we review three methods in the literature that were proposed to approximate the expected value function. Building on them, we propose a new approximate formula for the expected value function and derive an approximate solution for the optimal shrinkage parameter by Taylor expansion.
Because the optimal shrinkage parameter still involves an unknown binomial proportion, we suggest a plug-in estimator for the unknown proportion with an adaptive threshold. Finally, simulation studies are conducted to evaluate the performance of our new estimator. As baselines for comparison, we also include the Fattorini estimator, the Haldane estimator, and a piecewise estimator in the simulations. According to the simulation results, the new estimator achieves better or equally good performance compared with the Fattorini estimator in most settings. Hence, our new estimator can serve as a reliable estimator of the inverse proportion in most practical cases.
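The shrinkage idea above can be sketched numerically. The specific family n / (X + c) below is an illustrative assumption, not the thesis's exact estimator family, and the Fattorini and Haldane baselines are not reproduced; the sketch only shows how a positive shrinkage parameter c keeps the estimate finite when X = 0 and how the choice of c affects bias.

```python
import numpy as np

rng = np.random.default_rng(0)

def shrinkage_estimate(x, n, c):
    """Shrinkage-type estimate of 1/p from X successes in n trials.

    Illustrative family n / (X + c): any c > 0 avoids division by
    zero when X = 0 (the zero-event problem), at the price of some
    bias that depends on c.
    """
    return n / (x + c)

def simulate_bias(p, n, c, reps=200_000):
    """Monte Carlo bias of the shrinkage estimate against the true 1/p."""
    x = rng.binomial(n, p, size=reps)
    return shrinkage_estimate(x, n, c).mean() - 1.0 / p

# Small p makes P(X = 0) = (1 - p)^n non-negligible, which is exactly
# where the plain MLE n / X breaks down.
n, p = 30, 0.05
for c in (0.25, 0.5, 1.0):
    print(f"c = {c}: bias ~ {simulate_bias(p, n, c):+.4f}")
```

Because P(X = 0) is positive, the plain MLE is undefined with positive probability, while every c > 0 member of this family stays finite; the simulation shows how the bias moves as c grows.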
|
2 |
Estimation for the Pareto distribution (Εκτίμηση για την κατανομή Pareto). Αγγέλου, Γρηγορία, 06 November 2014.
This master's thesis studies the Pareto distribution, the estimation and comparison of the estimators of its parameters, and the estimation of its survival function, since the Pareto distribution is used as a model for estimating large incomes.
In Chapter 1, we present some basic definitions and theorems of mathematical statistics that are necessary for the development of our work.
In Chapter 2, we discuss the Pareto distribution, its general characteristics, and its relationship to other well-known distributions.
In Chapter 3, we study the estimators of the parameters of the Pareto distribution under squared-error loss, making some comparisons among the estimators.
In Chapter 4, we study the Bayes estimators of the parameters of the Pareto distribution under the LINEX loss function and compare them with the Bayes estimators under squared-error loss.
In Chapter 5, we estimate the survival function and study the minimum-variance unbiased estimators of the probability density and distribution functions, then compare them with the corresponding maximum likelihood estimators.
In Chapter 6, we present an example for a better understanding of our estimates. / We make an estimation for the Pareto distribution: we estimate its parameters and compare the estimators with each other.
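The maximum likelihood estimators mentioned in Chapter 5 have well-known closed forms for the two-parameter Pareto distribution; the sketch below uses those standard results (not the thesis's own derivations) to fit a sample and evaluate the survival function.

```python
import numpy as np

def pareto_mle(x):
    """Standard MLEs for a Pareto(alpha, x_m) sample:
    the scale x_m is estimated by the sample minimum, and the
    shape alpha by n / sum(log(x_i / x_m_hat))."""
    x = np.asarray(x, dtype=float)
    xm_hat = x.min()
    alpha_hat = x.size / np.log(x / xm_hat).sum()
    return alpha_hat, xm_hat

def pareto_survival(t, alpha, xm):
    """Survival function S(t) = P(X > t) = (xm / t)^alpha for t >= xm."""
    return (xm / np.asarray(t, dtype=float)) ** alpha

# Synthetic sample: NumPy's rng.pareto draws Lomax variates, so
# xm * (1 + draw) is Pareto with shape 3 and scale 2.
rng = np.random.default_rng(1)
sample = 2.0 * (1.0 + rng.pareto(3.0, size=5000))

alpha_hat, xm_hat = pareto_mle(sample)
print(alpha_hat, xm_hat)                       # close to 3.0 and 2.0
print(pareto_survival(4.0, alpha_hat, xm_hat))  # near (2/4)^3 = 0.125
```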
|
3 |
Odhady diskrétních rozdělení pravděpodobnosti pro aplikace / Estimates of Discrete Probability Distributions for Applications. Mašek, Jakub, January 2016.
This master's thesis focuses on the statistical problem of finding the probability distribution of a discrete random variable on the basis of observed data. These estimates are obtained by minimizing a pseudo-quasinorm, which is introduced here. The thesis further examines the properties of this pseudo-quasinorm. It also contains practical applications of these methods.
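The pseudo-quasinorm is the thesis's own construction and is not reproduced here. As a generic stand-in, the sketch below estimates a discrete distribution by minimizing a plain squared l2 criterion subject to normalization and an observed-moment constraint, assuming SciPy's general-purpose solver is acceptable for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def fit_discrete(support, target_mean):
    """Fit probabilities p over a discrete support by minimizing
    ||p||_2^2 subject to sum(p) = 1, sum(support * p) = target_mean,
    and p >= 0.

    The quadratic criterion is an illustrative stand-in for the
    thesis's pseudo-quasinorm; the constraints mimic matching an
    observed sample moment.
    """
    k = len(support)
    cons = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ support - target_mean},
    ]
    res = minimize(
        lambda p: p @ p,            # squared l2 norm of p
        x0=np.full(k, 1.0 / k),     # start from the uniform distribution
        bounds=[(0.0, 1.0)] * k,
        constraints=cons,
    )
    return res.x

support = np.arange(1, 7)           # faces of a die
p = fit_discrete(support, target_mean=4.5)
print(np.round(p, 4))               # probabilities tilted toward high faces
```

The minimizer spreads probability as evenly as the moment constraint allows, which is the qualitative behavior one expects from norm-minimizing distribution estimates.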
|
4 |
Kvazinormy diskrétních rozdělení pravděpodobnosti a jejich aplikace / Quasinorms of Discrete Probability Distributions and their Applications. Šácha, Jakub, January 2013.
This dissertation focuses on the statistical problem of finding the probability distribution of a discrete random variable on the basis of observed data. These estimates are obtained by minimizing quasi-norms under given constraints. The thesis further focuses on deriving confidence intervals for the estimated probabilities. It also contains practical applications of these methods.
|
5 |
Deep Time: Deep Learning Extensions to Time Series Factor Analysis with Applications to Uncertainty Quantification in Economic and Financial Modeling. Miller, Dawson Jon, 12 September 2022.
This thesis establishes methods to quantify and explain uncertainty through high-order moments in time series data, along with first-principles improvements on the standard autoencoder and variational autoencoder. While the first-principles improvements on the standard variational autoencoder provide additional means of explainability, we ultimately look to non-variational methods for quantifying uncertainty under the autoencoder framework.
We utilize Shannon's differential entropy to accomplish the task of uncertainty quantification in a general nonlinear and non-Gaussian setting. Together with previously established connections between autoencoders and principal component analysis, we motivate the focus on differential entropy as a proper abstraction of principal component analysis to this more general framework, where nonlinear and non-Gaussian characteristics in the data are permitted.
Furthermore, we are able to establish explicit connections between high-order moments in the data and those in the latent space, which induce a natural latent space decomposition and, by extension, an explanation of the estimated uncertainty. The proposed methods are intended to be utilized in economic and financial factor models in state space form, building on recent developments in the application of neural networks to factor models, with applications to financial and economic time series analysis. Finally, we demonstrate the efficacy of the proposed methods on high-frequency hourly foreign exchange rates, macroeconomic signals, and synthetically generated autoregressive data sets. / Master of Science / This thesis establishes methods to quantify and explain uncertainty in time series data, along with improvements on latent variable neural networks called autoencoders and variational autoencoders. Autoencoders and variational autoencoders are called latent variable neural networks because they can estimate a representation of the data that has lower dimension than the original data. These neural network architectures have a fundamental connection to a classical latent variable method called principal component analysis, which performs a similar dimension reduction task but under more restrictive assumptions than autoencoders and variational autoencoders. In contrast to principal component analysis, a common ailment of neural networks is their lack of explainability, which accounts for the colloquial term black-box models. While the improvements on the standard autoencoders and variational autoencoders help with the problem of explainability, we ultimately look to alternative probabilistic methods for quantifying uncertainty. To accomplish this task, we focus on Shannon's differential entropy, which is entropy applied to continuous domains such as time series data.
Entropy is intricately connected to the notion of uncertainty, since it depends on the amount of randomness in the data. Together with previously established connections between autoencoders and principal component analysis, we motivate the focus on differential entropy as a proper abstraction of principal component analysis to a general framework that does not require the restrictive assumptions of principal component analysis.
Furthermore, we are able to establish explicit connections between high-order moments in the data and the estimated latent variables (i.e., the reduced-dimension representation of the data). Estimating high-order moments allows for a more accurate estimation of the true distribution of the data. By connecting the estimated high-order moments in the data to the latent variables, we obtain a natural decomposition of the uncertainty surrounding the latent variables, which allows for increased explainability of the proposed autoencoder. The methods introduced in this thesis are intended to be utilized in a class of economic and financial models called factor models, which are frequently used in policy and investment analysis.
A factor model is another type of latent variable model, which, in addition to estimating a reduced-dimension representation of the data, provides a means to forecast future observations. Finally, we demonstrate the efficacy of the proposed methods on high-frequency hourly foreign exchange rates, macroeconomic signals, and synthetically generated autoregressive data sets. The results support the superiority of the entropy-based autoencoder to the standard variational autoencoder both in capability and computational expense.
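Differential entropy, the quantity the abstract builds on, can be illustrated in a minimal setting that is not the thesis's autoencoder pipeline: the Gaussian case has the closed form h = 0.5 ln(2 pi e sigma^2), and a crude histogram plug-in estimate (an assumption made purely for illustration) recovers it from samples.

```python
import numpy as np

def gaussian_entropy(sigma):
    """Closed-form differential entropy of N(mu, sigma^2):
    h = 0.5 * ln(2 * pi * e * sigma^2).  More randomness (larger
    sigma) means higher entropy, i.e., more uncertainty."""
    return 0.5 * np.log(2.0 * np.pi * np.e * sigma**2)

def histogram_entropy(samples, bins=200):
    """Plug-in differential entropy estimate from samples:
    h ~ -sum f_i * ln(f_i) * width over histogram bins.

    A crude estimator used only for illustration; in the thesis's
    setting the samples would come from an autoencoder's latent
    variables rather than a synthetic Gaussian."""
    density, edges = np.histogram(samples, bins=bins, density=True)
    widths = np.diff(edges)
    mask = density > 0
    return -np.sum(density[mask] * np.log(density[mask]) * widths[mask])

rng = np.random.default_rng(2)
sigma = 1.5
x = rng.normal(0.0, sigma, size=200_000)
print(gaussian_entropy(sigma))   # analytic value, about 1.824 nats
print(histogram_entropy(x))      # sample-based estimate, close to it
```

The agreement between the two numbers is the sense in which entropy "measures" the randomness in the data; in the non-Gaussian latent spaces the thesis considers, only the sample-based route is available.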
|