131. Some contributions to the evaluation of Pearsonian distribution functions / White, John Edward, 10 June 2012
These examples further illustrate that the Pearson approximations are adequate for many practical purposes and may be regarded as a justifiable procedure when the true distribution is not known. The purpose of this paper was the preparation of improved tables, which may lead to more extensive comparative studies of this kind. / Master of Science

132. Some properties of conditional distributions of a special type / Bowen, Jacob Van, January 1966
The subject treated in this thesis is the conditional distribution of a random variable given that the outcome of an associated random variable lies within a specified interval. This may be considered to be an extension of the classical case in which the outcome of the associated random variable is known to assume a specific numerical value.
The primary purpose of the study was to examine the properties of a system formed by interval conditioning under the assumption of a suitable linear model. No attention was given to appropriate estimation procedures.
The principal conclusions of the study follow. Let X and Y be jointly distributed random variables such that E(Y | X) = α + βX, where α and β are constants, and such that Var(Y | X) does not depend on X. Then E(Y | X ∈ I) = α + β E(X | X ∈ I), and Var(Y | X ∈ I) = Var(Y | X) + β² Var(X | X ∈ I), where Var(X | X ∈ I) is the variance of X in its distribution truncated to the conditioning interval I.
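As an informal check of these identities (not part of the thesis itself), a minimal Monte Carlo sketch, assuming a linear model with homoscedastic Gaussian noise and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, chosen only for illustration.
alpha, beta, sigma = 1.0, 2.0, 0.5
n = 1_000_000

x = rng.normal(size=n)                              # X ~ N(0, 1)
y = alpha + beta * x + sigma * rng.normal(size=n)   # E(Y|X) = alpha + beta*X, Var(Y|X) = sigma^2

a, b = -0.5, 1.0                                    # conditioning interval I = [a, b]
in_I = (x >= a) & (x <= b)

# E(Y | X in I) versus alpha + beta * E(X | X in I)
print(y[in_I].mean(), alpha + beta * x[in_I].mean())

# Var(Y | X in I) versus Var(Y | X) + beta^2 * Var(X | X in I)
print(y[in_I].var(), sigma**2 + beta**2 * x[in_I].var())
```

Both pairs of printed values agree up to Monte Carlo error.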
It was shown that the limiting cases of the system led to the classical conditional results as the conditioning interval degenerates to a point, and to the classical marginal results as the interval expands to cover the real line. These results were generalized to the case where a random variable Y is conditioned on a set of associated variables {X_i}, i = 1, 2, …, p, such that X_i ∈ I_i for each i.
Higher conditional moments were derived in general form. Since third and higher conditional moments are usually functions of the conditioned variables, only an analytic form was given.
Consideration was given to the case in which a vector of random variables is to be predicted given that an associated vector of random variables lies in a specified rectangular region. Two types of conditioning were considered simultaneously at this point, namely, the case in which part of the associated variables are conditioned to points and the remainder to intervals.
In various places in the body of the thesis and in the appendix consideration was given to the conditions under which the variance of a truncated random variable increases monotonically with the interval of truncation. This was found to be a complicated problem, but necessary and sufficient conditions for this property were developed in the appendix. / M.S.
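As a small illustration of the monotonicity question (not drawn from the thesis, which is concerned with when such monotonicity holds in general), a sketch that computes the variance of a standard normal truncated to expanding symmetric intervals, assuming SciPy is available:

```python
from scipy.stats import truncnorm

# Variance of a standard normal truncated to [-c, c] for growing c.
# For the normal this variance increases toward 1 as the interval widens;
# the thesis develops necessary and sufficient conditions for this behavior.
for c in [0.5, 1.0, 2.0, 4.0, 8.0]:
    var = truncnorm(-c, c).var()
    print(f"interval [-{c}, {c}]: variance = {var:.4f}")
```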

133. Generative Modeling and Inference in Directed and Undirected Neural Networks / Stinson, Patrick, January 2020
Generative modeling and inference are two broad categories in unsupervised learning whose goals are, respectively, to answer the following questions:

1. Given a dataset, how do we (either implicitly or explicitly) model the underlying probability distribution from which the data came and draw samples from that distribution?
2. How can we learn an underlying abstract representation of the data?

In this dissertation we provide three studies that each improve, in a different way, upon specific generative modeling and inference techniques. First, we develop a state-of-the-art estimator of a generic probability distribution's partition function, or normalizing constant, during simulated tempering. We then apply our estimator to the specific case of training undirected probabilistic graphical models and find that our method can track log-likelihoods during training at essentially no extra computational cost. We then shift our focus to variational inference in directed probabilistic graphical models (Bayesian networks) for generative modeling and inference. First, we generalize the aggregate prior distribution to decouple the variational and generative models, giving the model greater flexibility, and find improvements in the model's log-likelihood of test data as well as a better latent representation. Finally, we study the variational loss function and argue that under a typical architecture the data-dependent term of the gradient decays to zero as the latent space dimensionality increases. We use this result to propose a simple modification to random weight initialization and show that in certain models the modification gives rise to a substantial improvement in training convergence time. Together, these results improve the quantitative performance of popular generative modeling and inference models in addition to furthering our understanding of them.
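The dissertation's estimator is not reproduced in the abstract; purely as a generic illustration of the quantity involved, a minimal importance-sampling estimate of a partition function for a one-dimensional unnormalized density might look like the sketch below (the target density, proposal, and parameters are illustrative assumptions, not the author's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def unnorm_log_density(x):
    # Illustrative unnormalized bimodal target.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

# Importance sampling with a broad normal proposal q(x) = N(0, 3^2).
n = 200_000
scale = 3.0
x = rng.normal(0.0, scale, size=n)
log_q = -0.5 * (x / scale) ** 2 - np.log(scale * np.sqrt(2 * np.pi))
log_w = unnorm_log_density(x) - log_q

# Z = E_q[p_unnorm(X) / q(X)]; estimate log Z with a log-sum-exp for stability.
log_Z_hat = np.logaddexp.reduce(log_w) - np.log(n)
print("estimated log partition function:", log_Z_hat)

# Exact value for this toy target: Z = 2 * sqrt(2*pi), one sqrt(2*pi) per mode.
print("exact log partition function:", np.log(2 * np.sqrt(2 * np.pi)))
```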

134. Continuous Statistical Distribution Curve Fitting and Analysis Tool / Robie, Gerald, 01 January 1987
This paper reports on the implementation and utilization of a microcomputer-based simulation modeling verification and validation tool. The interactive software tool, written in BASIC, computes and displays the frequency distribution of a given set of input data and computes appropriate parameters for a continuous statistical distribution that the user selects as likely to represent the population from which the input sample was obtained. It then performs several goodness-of-fit analyses on the resultant distribution and provides sensitivity analysis capability on the input data, the distribution type selection, and the distribution parameters.
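The original BASIC tool is not reproduced here; a rough modern sketch of the same workflow (fit a user-chosen continuous distribution, then run a goodness-of-fit test), assuming SciPy is available and using a hypothetical gamma-distributed input sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical input sample; in the tool this would be user-supplied data.
data = rng.gamma(shape=2.0, scale=3.0, size=500)

# User-selected candidate distribution (here: gamma), fitted by maximum likelihood.
shape, loc, scale = stats.gamma.fit(data, floc=0)
fitted = stats.gamma(shape, loc=loc, scale=scale)

# Goodness-of-fit: Kolmogorov-Smirnov test against the fitted distribution.
# (Testing against fitted parameters makes the nominal p-value optimistic.)
ks_stat, p_value = stats.kstest(data, fitted.cdf)
print(f"fitted gamma: shape={shape:.3f}, scale={scale:.3f}")
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.4f}")
```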

135. On upper comonotonicity and stochastic orders / Dong, Jing (董靜), January 2009
Statistics and Actuarial Science / Master of Philosophy

136. Computer generation of directional data / Wong, Carl Ka-fai, January 1991
Thesis (M.Phil.), Chinese University of Hong Kong, 1991. Includes bibliographical references.
Contents:
  Chapter 1  Introduction: Directional Data and Computer Simulation; Computer Simulation Techniques; Implementation and Preliminaries
  Chapter 2  Generating Random Points on the N-sphere: Methods; Comparison of Methods
  Chapter 3  Generating Variates from Non-uniform Distributions on the Circle: Introduction; Methods for Circular Distributions
  Chapter 4  Generating Variates from Non-uniform Distributions on the Sphere: Introduction; Methods for Spherical Distributions
  Chapter 5  Generating Variates from Non-uniform Distributions on the N-sphere: Introduction; Methods for Higher Dimensional Spherical Distributions
  Chapter 6  Summary and Discussion
  References; Appendix 1; Appendix 2
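The thesis compares several generation methods; one standard approach for the uniform case (not necessarily among those evaluated there) is to normalize a vector of independent standard normals, as in this sketch:

```python
import numpy as np

def uniform_on_sphere(n_points, dim, rng=None):
    """Generate points uniformly distributed on the unit sphere in R^dim."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(size=(n_points, dim))                  # isotropic Gaussian vectors
    return x / np.linalg.norm(x, axis=1, keepdims=True)   # project onto the sphere

points = uniform_on_sphere(5, 3, np.random.default_rng(0))
print(points)
print(np.linalg.norm(points, axis=1))   # each row has unit Euclidean norm
```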

137. Estimation of the precision matrix in the inverse Wishart distribution / Leung Kit Ying, January 1999
Thesis (M.Phil.), Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 86-88). Abstracts in English and Chinese.
Contents:
  Declaration; Acknowledgement
  Chapter 1  Introduction
  Chapter 2  Improved Estimation of the Normal Precision Matrix Using the L1 and L2 Loss Functions: Previous Work; Important Lemmas; Improved Estimation of Σ⁻¹ under the L1 Loss Function; Improved Estimation of Σ⁻¹ under the L2 Loss Function; Simulation Study; Comparison with Krishnamoorthy and Gupta's Result
  Chapter 3  Improved Estimation of the Normal Precision Matrix Using the L3 and L4 Loss Functions: Justification of the Loss Functions; Important Lemmas for Calculating Risks; Improved Estimation of Σ⁻¹ under the L3 Loss Function; Improved Estimation of Σ⁻¹ under the L4 Loss Function; Simulation Study
  Appendix; References
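For context only (this is the standard baseline, not the improved estimators developed in the thesis): for a normal sample, scaling the inverse sample covariance by (n - p - 2)/(n - 1) gives an unbiased estimator of the precision matrix Σ⁻¹. A quick Monte Carlo check, with an illustrative covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

p, n, reps = 3, 20, 5000
Sigma = np.array([[2.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.5]])
Sigma_inv = np.linalg.inv(Sigma)

estimates = np.zeros((reps, p, p))
for r in range(reps):
    x = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    S = np.cov(x, rowvar=False)   # sample covariance with divisor n-1
    # Unbiased baseline estimator of the precision matrix Sigma^{-1}.
    estimates[r] = (n - p - 2) / (n - 1) * np.linalg.inv(S)

print("mean estimate:\n", estimates.mean(axis=0))
print("true precision matrix:\n", Sigma_inv)
```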

138. Limiting distributions of maximum probability estimators of nonstationary autoregressive processes / Chau Ka Pik, January 2002
Thesis (M.Phil.), Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 39-40). Abstracts in English and Chinese.
Contents:
  Chapter 1  Introduction: Maximum Probability Estimator; An Outline of the Thesis
  Chapter 2  Asymptotic Distribution Theory
  Chapter 3  Exponential Family Noise: Stationary Case; Nonstationary Case
  Chapter 4  Conclusions
  Bibliography

139. On testing the equality of two proportions / Chiou, Yow Yeu, January 2010
Typescript (photocopy). / Digitized by Kansas Correctional Industries

140. An evaluation of various plotting positions / Rys, Margaret J. (Margaret Joanna), January 2010
Typescript (photocopy). / Digitized by Kansas Correctional Industries / Department: Industrial Engineering.