1

Prior elicitation and variable selection for Bayesian quantile regression

Al-Hamzawi, Rahim Jabbar Thaher January 2013 (has links)
Bayesian subset selection suffers from three important difficulties: assigning priors over model space, assigning priors to all components of the regression coefficient vector given a specific model, and Bayesian computational efficiency (Chen et al., 1999). These difficulties become more challenging in the Bayesian quantile regression framework when one is interested in assigning priors that depend on different quantile levels. The objective of Bayesian quantile regression (BQR), a newly proposed tool, is to deal with unknown parameters and model uncertainty in quantile regression (QR). However, Bayesian subset selection in quantile regression models is usually difficult due to the computational challenges and the non-availability of conjugate prior distributions that depend on the quantile level. These challenges are rarely addressed via either the penalised likelihood function or stochastic search variable selection (SSVS). Such methods typically use symmetric prior distributions for the regression coefficients, such as the Gaussian and Laplace, which may be suitable for median regression. However, extreme quantile regression should have different regression coefficients from median regression, and thus the priors for quantile regression coefficients should depend on the quantile. This thesis focuses on three challenges: assigning standard quantile-dependent prior distributions for the regression coefficients, assigning suitable quantile-dependent priors over model space, and achieving computational efficiency. The first of these challenges is studied in Chapter 2, in which a quantile-dependent prior elicitation scheme is developed. In particular, an extension of Zellner's prior which allows for a conditionally conjugate, quantile-dependent prior in Bayesian quantile regression is proposed.
The prior is generalised in Chapter 3 by introducing a ridge parameter to address important challenges that may arise in some applications, such as multicollinearity and overfitting. The proposed prior is also used in Chapter 4 for subset selection of the fixed and random coefficients in a linear mixed-effects QR model. In Chapter 5 we specify normal-exponential prior distributions for the regression coefficients, which provide adaptive shrinkage and represent an alternative to the Bayesian Lasso quantile regression model. For the second challenge, we assign a quantile-dependent prior over model space in Chapter 2. The prior is based on the percentage bend correlation, which depends on the quantile level. This prior is novel and is used in Bayesian regression for the first time. For the third challenge, that of computational efficiency, Gibbs samplers are derived and set up to facilitate the computation of the proposed methods. In addition to the three major challenges above, this thesis also addresses other important issues, such as regularisation in quantile regression and selecting both random and fixed effects in mixed quantile regression models.
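The asymmetric Laplace working likelihood that underlies much of the BQR literature has the quantile check loss as its kernel, which makes the pseudo-posterior easy to sketch. The following is a minimal illustration only, assuming simulated data, a flat prior, and a random-walk Metropolis sampler rather than the Gibbs samplers derived in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def check_loss(u, tau):
    """Quantile check (pinball) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def log_pseudo_posterior(beta, X, y, tau):
    """Flat prior + asymmetric Laplace working likelihood (scale = 1):
    the log-density is -sum rho_tau(y - X beta), up to a constant."""
    return -np.sum(check_loss(y - X @ beta, tau))

def metropolis_bqr(X, y, tau, n_iter=4000, step=0.1):
    """Random-walk Metropolis over the regression coefficients."""
    beta = np.zeros(X.shape[1])
    logp = log_pseudo_posterior(beta, X, y, tau)
    draws = np.empty((n_iter, X.shape[1]))
    for i in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.shape)
        logp_prop = log_pseudo_posterior(prop, X, y, tau)
        if np.log(rng.uniform()) < logp_prop - logp:  # accept/reject
            beta, logp = prop, logp_prop
        draws[i] = beta
    return draws

# Simulated data: the true 0.9-quantile line is (1 + z_0.9) + 2x.
n = 400
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.standard_normal(n)
draws = metropolis_bqr(X, y, tau=0.9)
beta_hat = draws[2000:].mean(axis=0)  # posterior mean after burn-in
```

With tau = 0.5 the same sketch reduces to Bayesian median regression, which is one way to see why symmetric priors are less objectionable at the median than at extreme quantiles.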
2

二維聯合分配下條件常態分配相容性之探討 / Compatibility of normal conditional distributions under bivariate distribution

蕭惠玲 Unknown Date (has links)
Arnold and Press (1989) provide theory for checking whether two conditional densities are compatible. In this research, we use their results to derive necessary and sufficient conditions for the compatibility of two conditional densities of normal form, and further derive conditions under which the corresponding joint density is itself normal. In addition, we use computer simulation to generate two samples from two sets of conditional normal distributions arising from two different joint normal distributions. With the repeated samples, we provide the ranges of one population parameter, the other population parameters being fixed, over which the two samples can be judged to come from different populations.
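For the bivariate normal case studied above, the conditions can be made concrete: specifications X|Y=y ~ N(a1 + b1*y, v1) and Y|X=x ~ N(a2 + b2*x, v2) are compatible with a joint normal exactly when the slopes share a sign, b1*b2 < 1, and b1*v2 = b2*v1, in which case the joint parameters are recovered uniquely. A sketch under these textbook conditions (the function name and interface are illustrative, not from the thesis):

```python
import numpy as np

def joint_from_conditionals(a1, b1, v1, a2, b2, v2, tol=1e-9):
    """Given X|Y=y ~ N(a1 + b1*y, v1) and Y|X=x ~ N(a2 + b2*x, v2),
    return (mu1, mu2, s1sq, s2sq, rho) of the unique compatible
    bivariate normal, or None if the pair is incompatible."""
    # Zero slopes: the variables are independent, always compatible.
    if abs(b1) < tol and abs(b2) < tol:
        return a1, a2, v1, v2, 0.0
    # Slopes must share a sign and satisfy b1*b2 = rho^2 < 1.
    if b1 * b2 <= 0 or b1 * b2 >= 1:
        return None
    # Variance consistency: v1/v2 must equal b1/b2.
    if abs(b1 * v2 - b2 * v1) > tol * max(v1, v2):
        return None
    rho2 = b1 * b2
    s1sq = v1 / (1 - rho2)
    s2sq = v2 / (1 - rho2)
    rho = np.sign(b1) * np.sqrt(rho2)
    # Solve mu1 = a1 + b1*mu2 and mu2 = a2 + b2*mu1 simultaneously.
    mu1 = (a1 + b1 * a2) / (1 - rho2)
    mu2 = (a2 + b2 * a1) / (1 - rho2)
    return mu1, mu2, s1sq, s2sq, rho

# These conditionals come from the joint with mu=(1,2), s^2=(4,9), rho=0.5,
# so the reconstruction returns approximately (1, 2, 4, 9, 0.5).
params = joint_from_conditionals(a1=1/3, b1=1/3, v1=3.0,
                                 a2=1.25, b2=0.75, v2=6.75)
```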
3

Objective Bayesian analysis of Kriging models with anisotropic correlation kernel / Analyse bayésienne objective des modèles de krigeage avec noyau de corrélation anisotrope

Muré, Joseph 05 October 2018 (has links)
A recurring problem in surrogate modelling is the scarcity of available data, which hinders efforts to estimate model parameters. The Bayesian paradigm offers an elegant way to circumvent the problem by describing knowledge of the parameters by a posterior probability distribution instead of a pointwise estimate. However, it involves defining a prior distribution on the parameters.
In the absence of expert opinion, finding an adequate prior can be a trying exercise. The Objective Bayesian school proposes default priors for such situations, like the Berger-Bernardo reference prior. Such a prior was derived by Berger, De Oliveira and Sansó [2001] for the Kriging surrogate model with an isotropic covariance kernel. Directly extending it to anisotropic kernels poses theoretical as well as practical problems, because the reference prior framework requires ordering the parameters, and any ordering would in this case be arbitrary. Instead, we propose an Objective Bayesian solution for Kriging models with anisotropic covariance kernels based on conditional reference posterior distributions. This solution is made possible by a theory of compromise between incompatible conditional distributions. The approach is also shown to be compatible with Trans-Gaussian Kriging. It is applied to an industrial case with non-stationary data in order to derive Probability Of Detection (POD) curves for defects found by non-destructive tests in steam generator tubes of nuclear power plants.
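For concreteness, an anisotropic squared-exponential correlation kernel simply assigns one correlation length per input dimension; the isotropic kernel treated by Berger, De Oliveira and Sansó [2001] is the special case where all lengths coincide. A small sketch (the name and parametrisation are illustrative):

```python
import numpy as np

def aniso_gauss_corr(X1, X2, lengthscales):
    """Anisotropic squared-exponential correlation matrix:
    K[i, j] = exp(-0.5 * sum_k ((X1[i,k] - X2[j,k]) / ell_k)^2).
    Equal lengthscales recover the isotropic kernel."""
    ell = np.asarray(lengthscales, dtype=float)
    diff = (X1[:, None, :] - X2[None, :, :]) / ell  # scaled differences
    return np.exp(-0.5 * np.sum(diff * diff, axis=-1))

pts = np.array([[0.0, 0.0], [1.0, 2.0]])
K = aniso_gauss_corr(pts, pts, lengthscales=[1.0, 2.0])
```

The reference-prior difficulty discussed above concerns precisely the vector `lengthscales`: its components have no natural ordering, which is what motivates the conditional reference posterior construction.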
4

Some statistical results in high-dimensional dependence modeling / Contributions à l'analyse statistique des modèles de dépendance en grande dimension

Derumigny, Alexis 15 May 2019 (has links)
This thesis can be divided into three parts. In the first part, we study adaptivity to the noise level in the high-dimensional linear regression framework. We prove that two square-root estimators attain the minimax rates of estimation and prediction.
We show that a corresponding median-of-means version can still attain the same optimal rates while being robust to outliers in the data. The second part is devoted to the analysis of several conditional dependence models. We propose tests of the simplifying assumption that a conditional copula is constant with respect to its conditioning event, and prove the consistency of a semiparametric bootstrap scheme. If the conditional copula is not constant with respect to the conditioning event, it can be modelled using the conditional Kendall's tau. We study the estimation of this conditional dependence parameter using three different approaches: kernel techniques, regression-type models and classification algorithms. The last part groups two contributions in the domain of inference. We review and propose estimators for regular conditional functionals using U-statistics. Finally, we study the construction and the theoretical properties of confidence intervals for ratios of means under different sets of assumptions and paradigms.
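The kernel approach to the conditional Kendall's tau mentioned above can be sketched by weighting concordant and discordant pairs with kernel weights in the conditioning variable. This is an illustrative estimator on simulated data, not the thesis's exact formulation:

```python
import numpy as np

def cond_kendall_tau(x, y, z, z0, h):
    """Kernel-weighted Kendall's tau between X and Y given Z = z0:
    each pair (i, j) gets weight K((z_i - z0)/h) * K((z_j - z0)/h)
    with a Gaussian kernel K; diagonal pairs are excluded."""
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)
    concord = (np.sign(x[:, None] - x[None, :])
               * np.sign(y[:, None] - y[None, :]))
    ww = w[:, None] * w[None, :]
    np.fill_diagonal(ww, 0.0)  # drop i == j pairs
    return np.sum(ww * concord) / np.sum(ww)

# Simulated dependence that flips sign with the conditioning variable.
rng = np.random.default_rng(7)
n = 2000
z = rng.uniform(0, 1, n)
x = rng.standard_normal(n)
y = np.where(z > 0.5, x, -x) + 0.1 * rng.standard_normal(n)
tau_hi = cond_kendall_tau(x, y, z, z0=0.9, h=0.1)  # strongly positive
tau_lo = cond_kendall_tau(x, y, z, z0=0.1, h=0.1)  # strongly negative
```

An unconditional Kendall's tau on the same data would be near zero, which is exactly the situation in which the simplifying assumption fails and conditional modelling is needed.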
5

Trading strategies based on estimates of conditional distribution of stock returns

Sedlačík, Adam January 2018 (has links)
In this thesis, a new trading strategy is proposed. With the help of quantile regression, the conditional distribution functions of stock market returns are estimated. Based on this estimated distribution, the strategy produces buy and sell signals which, together with a weight function derived from exponential moving averages, determine how much and when to buy or sell. The strategy outperforms the market in terms of absolute return and the Sharpe ratio in-sample, but it does not provide satisfactory results out-of-sample.
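As a toy illustration of the idea, and not the thesis's actual strategy, one can generate signals from rolling empirical return quantiles and size positions with an EMA-based weight; all names, thresholds and window lengths below are assumptions:

```python
import numpy as np

def ema(x, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty(len(x))
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = alpha * x[t] + (1 - alpha) * out[t - 1]
    return out

def quantile_signals(returns, window=60, q_low=0.1, q_high=0.9):
    """Hypothetical illustration: +1 (buy) when the latest return falls
    below the rolling empirical q_low quantile, -1 (sell) above q_high,
    else 0 -- a stand-in for the conditional quantiles the thesis
    estimates by quantile regression."""
    sig = np.zeros(len(returns))
    for t in range(window, len(returns)):
        lo, hi = np.quantile(returns[t - window:t], [q_low, q_high])
        if returns[t] < lo:
            sig[t] = 1.0
        elif returns[t] > hi:
            sig[t] = -1.0
    return sig

rng = np.random.default_rng(3)
rets = rng.standard_normal(500)          # placeholder return series
sig = quantile_signals(rets)
pos = sig * np.abs(ema(rets, span=20))   # hypothetical position sizing
```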
6

A Matrix Variate Generalization of the Skew Pearson Type VII and Skew T Distribution

Zheng, Shimin, Gupta, A. K., Liu, Xuefeng 01 January 2012 (has links)
We define and study multivariate and matrix variate skew Pearson type VII and skew t-distributions. We derive the marginal and conditional distributions, the linear transformation, and the stochastic representations of the multivariate and matrix variate skew Pearson type VII distributions and skew t-distributions. Also, we study the limiting distributions.
7

An Investigation of Distribution Functions

Su, Nan-cheng 24 June 2008 (has links)
The study of properties of probability distributions has always been a persistent theme of statistics and of applied probability. This thesis investigates distribution functions under two topics: (i) characterization of distributions based on record values and order statistics, and (ii) properties of the skew-t distribution. Within the extensive characterization literature there are several results involving properties of record values and order statistics. Although many well-known results have already been developed, it is still of great interest to find new characterizations of distributions based on record values and order statistics. In the first part, we provide the conditional distribution of any record value given the maximum order statistic and study characterizations of distributions based on record values and the maximum order statistic. We also give some characterizations of the mean value function within the class of order statistics point processes, by using certain relations between the conditional moments of the jump times or current lives. These results can be applied to characterize the uniform distribution using the sequence of order statistics, and the exponential distribution using the sequence of record values, respectively. Azzalini (1985, 1986) introduced the skew-normal distribution, which includes the normal distribution, shares some of its properties, and yet is skew. This class of distributions is useful in studying robustness and for modeling skewness. Since then, skew-symmetric distributions have been proposed by many authors. In the second part, the so-called generalized skew-t distribution is defined and studied. Examples of distributions in this class, generated by the ratio of two independent skew-symmetric distributions, are given. We also investigate properties of the skew-symmetric distribution.
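A convenient stochastic representation of Azzalini's skew-normal SN(lambda), of the kind used throughout the skew-symmetric literature referred to above, is Z = delta*|U0| + sqrt(1 - delta^2)*U1 with delta = lambda/sqrt(1 + lambda^2) and U0, U1 iid N(0, 1). A minimal sampling sketch:

```python
import numpy as np

rng = np.random.default_rng(42)

def skew_normal(lam, size):
    """Sample from Azzalini's skew-normal SN(lam) via the stochastic
    representation Z = delta*|U0| + sqrt(1 - delta^2)*U1, where
    delta = lam / sqrt(1 + lam^2) and U0, U1 are iid N(0, 1)."""
    delta = lam / np.sqrt(1.0 + lam * lam)
    u0 = rng.standard_normal(size)
    u1 = rng.standard_normal(size)
    return delta * np.abs(u0) + np.sqrt(1.0 - delta * delta) * u1

# E[Z] = delta * sqrt(2/pi); lam = 0 recovers the standard normal.
z = skew_normal(lam=5.0, size=200_000)
```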
8

Performance Analysis of Detection System Design Algorithms

Nyberg, Karl-Johan 11 April 2003 (has links)
Detection systems are widely used in industry. Designers, operators and users of these systems need to choose an appropriate design, based on the intended usage and the operating environment. The purpose of this research is to analyze the effect of various system design variables (controllable) and system parameters (uncontrollable) on the performance of detection systems. To optimize system performance one must manage the tradeoff between two errors that can occur. A False Alarm occurs if the detection system falsely indicates a target is present and a False Clear occurs if the detection system falsely fails to indicate a target is present. Given a particular detection system and a pre-specified false clear (or false alarm) rate, there is a minimal false alarm (or false clear) rate that can be achieved. Earlier research has developed methods that address this false alarm, false clear tradeoff problem (FAFCT) by formulating a Neyman-Pearson hypothesis problem, which can be solved as a Knapsack problem. The objective of this research is to develop guidelines that can be of help in designing detection systems. For example, what system design variables must be implemented to achieve a certain false clear standard for a parallel 2-sensor detection system for Salmonella detection? To meet this objective, an experimental design is constructed and an analysis of variance is performed. Computational results are obtained using the FAFCT-methodology and the results are presented and analyzed using ROC (Receiver Operating Characteristic) curves and an analysis of variance. The research shows that sample size (i.e., size of test data set used to estimate the distribution of sensor responses) has very little effect on the FAFCT compared to other factors. The analysis clearly shows that correlation has the most influence on the FAFCT. 
Negatively correlated sensor responses outperform uncorrelated and positively correlated sensor responses by large margins, especially for strict FC-standards (the FC-standard is defined as the maximum allowed False Clear rate). The FC-standard is the second most influential design variable, followed by grid size. Suggestions for future research are also included. / Master of Science
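The effect of correlation on a fused two-sensor detector can be sketched with a toy Monte Carlo: sum the two sensor readings and threshold the sum. Negative noise correlation shrinks the variance of the sum, which lowers both error rates at a fixed threshold. The setup below (unit-variance Gaussian noise, a fixed threshold) is an assumption for illustration, not the FAFCT methodology itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def fused_error_rates(rho, n=100_000, mu=1.0, thresh=1.0):
    """Two sensors observe a common signal (mu when a target is present,
    0 when absent) plus correlated N(0, 1) noise; the fused statistic is
    the sum. Returns (false_alarm, false_clear) at a fixed threshold."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    absent = rng.multivariate_normal([0.0, 0.0], cov, size=n).sum(axis=1)
    present = rng.multivariate_normal([mu, mu], cov, size=n).sum(axis=1)
    fa = np.mean(absent > thresh)      # alarm with no target present
    fc = np.mean(present <= thresh)    # miss with a target present
    return fa, fc

fa_neg, fc_neg = fused_error_rates(rho=-0.5)
fa_pos, fc_pos = fused_error_rates(rho=+0.5)
```

Sweeping the threshold and plotting (false alarm, 1 - false clear) pairs would trace out the ROC curves used in the analysis above.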
9

The Inverse Problem of Multivariate and Matrix-Variate Skew Normal Distributions

Zheng, Shimin, Hardin, J. M., Gupta, A. K. 01 June 2012 (has links)
In this paper, we prove that the joint distribution of random vectors Z1 and Z2 and the marginal distribution of Z2 are skew normal, provided that Z1 is skew normally distributed and Z2 given Z1 follows a closed skew normal distribution. We also extend the main results to the matrix variate case.
10

以最小平方法處理有限離散型條件分配相容性問題 / Addressing the compatibility issues of finite discrete conditionals by the least squares approach

李宛靜, Lee, Wan Ching Unknown Date (has links)
Given two finite discrete conditional distributions, we can study the associated compatibility and uniqueness issues. Tian et al. (2009) proposed a unified method that converts the compatibility problem into a system of linear equations with constraints, in which the marginal probability values are treated as unknowns. It locates the optimum solution by minimising the l_2-discrepancy, and provides criteria for determining compatibility and uniqueness. Because the condition that the marginal probability values sum to one sits inside Tian et al.'s linear system, it might not be fulfilled by the optimum solution. By separating this condition from the linear system and adding it to the constraints, we look for the optimum solution of the modified problem. We propose two new methods: (1) the LRG method and (2) the perturbation method. The LRG method initially ignores the requirement that the probability values lie between zero and one; it uses the Lagrange multiplier method to derive the solution of a quadratic optimisation problem subject to the marginal probability values summing to one, and then applies the Rao-Ghangurde method to modify the computed values to meet the requirement.
The perturbation method introduces a tiny perturbation parameter when computing the generalized inverse in the Lagrange multiplier solution, so that an approximate inverse and solution can be obtained quickly. It can be shown that the increased error is less than the perturbation value introduced, so the method yields a near-optimal solution and is practical and effective for compatibility problems. From further analysis of the Lagrange multiplier solution, we also find sufficient conditions for checking the compatibility of conditional distributions. To demonstrate the feasibility of the LRG and perturbation methods, we devise a MATLAB program to carry out the computations. Four examples from Tian et al. (2009), covering a range of situations, are used to illustrate the procedure, and the results are compared with theirs.
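The Lagrange-multiplier step described above, minimising a least-squares criterion subject to the marginal probabilities summing to one, can be sketched by solving the KKT system directly (in Python rather than MATLAB; the box constraints 0 <= x <= 1 are left to a later correction pass such as the Rao-Ghangurde adjustment mentioned above):

```python
import numpy as np

def sum_to_one_lsq(A, b):
    """Minimise ||A x - b||^2 subject to sum(x) = 1 by solving the
    KKT system [[2 A^T A, 1], [1^T, 0]] [x; lam] = [2 A^T b; 1]."""
    n = A.shape[1]
    kkt = np.zeros((n + 1, n + 1))
    kkt[:n, :n] = 2.0 * A.T @ A   # Hessian of the quadratic objective
    kkt[:n, n] = 1.0              # gradient of the equality constraint
    kkt[n, :n] = 1.0
    rhs = np.concatenate([2.0 * A.T @ b, [1.0]])
    return np.linalg.solve(kkt, rhs)[:n]

# With A = I this is the projection of b onto the probability hyperplane:
# each coordinate is shifted by (1 - sum(b)) / n.
x = sum_to_one_lsq(np.eye(3), np.array([0.5, 0.3, 0.1]))
```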
