541 |
Analysis for segmental sharing and linkage disequilibrium: a genomewide association study on myopia
Lee, Yiu-fai., 李耀暉. January 2009 (has links)
published_or_final_version / Psychiatry / Doctoral / Doctor of Philosophy
|
542 |
Statistical analysis of value-at-risk (VaR). January 2008 (has links)
Sit, Tony. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 49-51). / Abstracts in English and Chinese. / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Background --- p.4 / Chapter 2.1 --- Approaches to Risk Measurement --- p.4 / Chapter 2.2 --- Is VaR a “Good” Risk Measure? --- p.9 / Chapter 2.3 --- "Efficient Capital Market, Random Walk and Unit Root" --- p.11 / Chapter 3 --- Historical VaR and Limitations --- p.17 / Chapter 3.1 --- Regression Analysis --- p.18 / Chapter 3.2 --- A Possible Artifact --- p.19 / Chapter 4 --- "Parametric VaR with GARCH(1,1)" --- p.27 / Chapter 4.1 --- "GARCH(1,1), a Conditional Heteroscedastic Model" --- p.27 / Chapter 4.2 --- RiskMetrics VaR --- p.29 / Chapter 5 --- VaR with Regression Quantiles --- p.34 / Chapter 5.1 --- Quantile Regression --- p.35 / Chapter 5.1.1 --- "Quantiles, Ranks and Optimisation" --- p.35 / Chapter 5.2 --- CAViaR --- p.39 / Chapter 5.2.1 --- The model --- p.39 / Chapter 5.2.2 --- Empirical Studies --- p.42 / Chapter 6 --- Conclusion and Future Research --- p.46 / Bibliography --- p.48
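The historical-simulation approach examined in Chapter 3 of this thesis has a simple core: VaR at confidence level 1 − α is the empirical tail quantile of the loss distribution. A minimal sketch of that idea (illustrative only, not the thesis's code; the synthetic returns are made up):

```python
import numpy as np

def historical_var(returns, alpha=0.05):
    """Historical-simulation VaR: the empirical (1 - alpha)-quantile of losses.

    `returns` are periodic (e.g. daily) returns; VaR is reported as a
    positive loss figure exceeded with probability roughly alpha.
    """
    losses = -np.asarray(returns, dtype=float)
    return float(np.quantile(losses, 1 - alpha))

# Toy example with simulated daily returns (illustrative only)
rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, size=1000)
var95 = historical_var(rets, alpha=0.05)  # 95% one-day VaR
```

The parametric alternatives the thesis considers (GARCH(1,1), RiskMetrics) replace the empirical quantile with a quantile of a fitted conditional distribution.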
|
543 |
COMPOSITE NONPARAMETRIC TESTS IN HIGH DIMENSION
Villasante Tezanos, Alejandro G. 01 January 2019 (has links)
This dissertation focuses on the problem of making high-dimensional inference for two or more groups. High-dimensional means both the sample size (n) and the dimension (p) tend to infinity, possibly at different rates. Classical approaches for group comparisons fail in the high-dimensional situation, in the sense that they have incorrect sizes and low power. Much has been done in recent years to overcome these problems. However, these recent works make restrictive assumptions about the number of treatments to be compared and/or the distribution of the data. This research aims to (1) propose and investigate refined small-sample approaches for high-dimensional data in the multi-group setting, (2) propose and study a fully-nonparametric approach, and (3) conduct an extensive simulation comparison of the proposed methods with some existing ones.
When treatment effects can meaningfully be formulated in terms of means, a semiparametric approach under equal and unequal covariance assumptions is investigated. Composites of F-type statistics are used to construct two tests. One test is a moderate-p version, in which the test statistic is centered by its asymptotic mean; the other is a large-p version, which applies an asymptotic-expansion-based finite-sample correction to the mean of the test statistic. These tests make no distributional assumptions and are therefore nonparametric in that sense. The theory for the tests requires only mild assumptions to regulate the dependence. Simulation results show that, for moderately small samples, the large-p version yields a substantial gain in size accuracy with a small power tradeoff.
In some situations mean-based inference is not appropriate, for example, for data on an ordinal scale or with heavy tails. For these situations, a high-dimensional fully-nonparametric test is proposed. In the two-sample situation, a composite of a Wilcoxon-Mann-Whitney type test is investigated. The assumptions needed are weaker than those of the semiparametric approach. Numerical comparisons with the moderate-p version of the semiparametric approach show that the nonparametric test has very similar size but achieves superior power, especially for skewed data with some amount of dependence between variables.
Finally, we conduct an extensive simulation to compare our proposed methods with other nonparametric tests and rank-transformation methods. A wide spectrum of simulation settings is considered, including a variety of heavy-tailed and skewed data distributions, homoscedastic and heteroscedastic covariance structures, various amounts of dependence, and choices of tuning (smoothing window) parameter for the asymptotic variance estimators. The fully-nonparametric and the rank-transformation methods behave similarly in terms of type I and type II errors. However, the two approaches fundamentally differ in their hypotheses. Although there are no formal mathematical proofs for the rank transformations, they tend to provide immunity against the effects of outliers. From a theoretical standpoint, our nonparametric method essentially uses variable-by-variable ranking, which naturally arises from estimating the nonparametric effect of interest. As a result, our method is invariant under any monotone marginal transformations. For a more practical comparison, real data from an electroencephalogram (EEG) experiment are analyzed.
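The variable-by-variable ranking idea behind the fully-nonparametric test can be seen in a toy sketch (an illustration of a generic Wilcoxon-Mann-Whitney composite, not the dissertation's exact statistic or studentization):

```python
import numpy as np

def composite_wmw_effect(X, Y):
    """Average variable-wise Wilcoxon-Mann-Whitney effect.

    X: (n1, p) sample, Y: (n2, p) sample. For each variable j, estimate
    theta_j = P(X_j < Y_j) + 0.5 * P(X_j = Y_j) from all (i, k) pairs,
    then average over the p variables. theta = 0.5 indicates no tendency.
    Only orderings are used, so the statistic is invariant under
    monotone marginal transformations.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    thetas = []
    for j in range(X.shape[1]):
        x, y = X[:, j][:, None], Y[:, j][None, :]
        thetas.append(np.mean((x < y) + 0.5 * (x == y)))
    return float(np.mean(thetas))
```

For instance, applying a marginal log or exponential transformation to both samples leaves the estimated effect unchanged, which is the invariance property noted above.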
|
544 |
Sequence alignment
Chia, Nicholas Lee-Ping, January 2006 (has links)
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 80-87).
|
545 |
Scaling and phase transitions in one-dimensional nonequilibrium driven systems / Ha, Meesoon, January 2003 (has links)
Thesis (Ph. D.)--University of Washington, 2003. / Vita. Includes bibliographical references (leaves 99-114).
|
546 |
Uses and misuses of common statistical techniques in current clinical biomedical research
Rifkind, Geraldine Lavonne Freeman, 1931- January 1974 (has links)
No description available.
|
547 |
Eliciting and combining expert opinion : an overview and comparison of methods
Chinyamakobvu, Mutsa Carole January 2015 (has links)
Decision makers have long relied on experts to inform their decision making. Expert judgment analysis is a way to elicit and combine the opinions of a group of experts to facilitate decision making. The use of expert judgment is most appropriate when there is a lack of data for obtaining reasonable statistical results. The experts are asked for advice by one or more decision makers who face a specific real decision problem. The decision makers are outside the group of experts; they are jointly responsible and accountable for the decision and committed to finding solutions that everyone can live with. The emphasis is on the decision makers learning from the experts. The focus of this thesis is an overview and comparison of the various elicitation and combination methods available. These include the traditional committee method, the Delphi method, the paired-comparisons method, the negative exponential model, Cooke's classical model, the histogram technique, the use of the Dirichlet distribution for a set of uncertain proportions that must sum to one, and the employment of overfitting. The supra-Bayesian approach, the determination of weights for the experts, and the combination of expert opinions where each opinion carries a confidence level representing the expert's conviction in his own judgment are also considered.
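Among the combination schemes of the kind surveyed here, the simplest is the weighted linear opinion pool, in which the decision maker takes a weighted average of the experts' probability distributions. A minimal sketch (the weights below are illustrative placeholders, not calibration-based weights such as those produced by Cooke's classical model):

```python
import numpy as np

def linear_opinion_pool(probabilities, weights=None):
    """Combine expert probability distributions as a weighted average.

    probabilities: (n_experts, n_outcomes) array, each row summing to 1.
    weights: nonnegative expert weights, normalised to sum to 1
             (equal weights if omitted). Returns the pooled distribution.
    """
    P = np.asarray(probabilities, dtype=float)
    w = np.ones(P.shape[0]) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return w @ P

# Three hypothetical experts assessing three outcomes
experts = [[0.7, 0.2, 0.1],
           [0.5, 0.3, 0.2],
           [0.6, 0.3, 0.1]]
combined = linear_opinion_pool(experts, weights=[2, 1, 1])
# combined is a proper distribution: nonnegative, sums to 1
```

Because the pool is a convex combination, the result is always a valid probability distribution, whatever the weights.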
|
548 |
Simulation of Mathematical Models in Genetic Analysis
Patel, Dinesh Govindal 01 May 1964 (has links)
In recent years a new field of statistics has become important in many branches of experimental science. This is the Monte Carlo method, so called because it is based on the simulation of stochastic processes. By a stochastic process is meant some physical process in the real world that has a random or stochastic element in its structure. This subject may appropriately be called the dynamic part of statistics, or the statistics of "change," in contrast with the static statistical problems which have so far been the more systematically studied. Many obvious examples of such processes are to be found in various branches of science and technology, for example, the phenomenon of Brownian motion, the growth of a bacterial colony, the fluctuating numbers of electrons and protons in a cosmic-ray shower, or the random segregation and assortment of genes (chemical entities responsible for governing physical traits in plant and animal systems) under linkage. Such processes are prominent in medicine, genetics, physics, oceanography, economics, engineering, and industry, to name only a few scientific disciplines. The scientist making measurements in his laboratory, the meteorologist attempting to forecast weather, the control-systems engineer designing a servomechanism (such as an aircraft or a thermostatic control), the electrical engineer designing a communication system (such as the radio link between entertainer and audience, or the apparatus and cables that transmit messages from one point to another), the economist studying price fluctuations in business cycles, and the neurosurgeon studying brain-wave records all encounter problems to which the theory of stochastic processes may be relevant. Let us consider a few of these processes in a little more detail.
In statistical physics many parts of the theory of stochastic processes were developed in connection with the study of fluctuations and noise in physical systems (Einstein, 1905; Smoluchowski, 1906; and Schottky, 1918). Consequently, the theory of stochastic processes can be regarded as the mathematical foundation of statistical physics. Stochastic models for population growth consider the size and composition of a population which is constantly fluctuating; these are treated mostly by Bailey (1957), Bartlett (1960), and Bharucha-Reid (1960). In communication theory a wide variety of problems involving communication and/or control, such as the automatic tracking of moving objects, the reception of radio signals in the presence of natural and artificial disturbances, the reproduction of sound and images, the design of guidance systems, and the design of control systems for industrial processes, may be regarded as special cases of the following general problem: let T denote a set of points on a time axis such that at each point t in T an observation has been made of a random variable X(t). Given the observations {x(t), t ∈ T} and a quantity Z related to the observations, one desires to form, in an optimum manner, estimates of, and tests of hypotheses about, Z and various functions h(Z).
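The Brownian motion mentioned above is among the easiest of these processes to study by the Monte Carlo method: a discretized path is simply a cumulative sum of independent Gaussian increments with variance equal to the time step. A minimal sketch (illustrative, not from the thesis):

```python
import numpy as np

def simulate_brownian(n_paths, n_steps, dt=0.01, seed=0):
    """Simulate standard Brownian motion paths started at 0.

    Each increment W(t + dt) - W(t) is independent N(0, dt); a path is
    the cumulative sum of its increments. Returns (n_paths, n_steps + 1).
    """
    rng = np.random.default_rng(seed)
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)
    return np.hstack([np.zeros((n_paths, 1)), paths])

# 5000 paths over [0, 1]; theory gives E[W(1)] = 0 and Var[W(1)] = 1,
# which the Monte Carlo sample estimates should reproduce approximately
paths = simulate_brownian(n_paths=5000, n_steps=100, dt=0.01)
mean_end = paths[:, -1].mean()
var_end = paths[:, -1].var()
```

The same scheme, with the Gaussian increments replaced by the appropriate random mechanism, simulates the other processes listed: offspring counts for population growth, allele draws for segregation under linkage, and so on.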
|
549 |
Exploration and Statistical Modeling of Profit
Gibson, Caleb 01 December 2023 (has links) (PDF)
For any company involved in sales, maximization of profit is the driving force that guides all decision-making. Many factors can influence how profitable a company can be, including external factors such as changes in inflation or consumer demand and internal factors such as pricing and product cost. By understanding specific trends in its own internal data, a company can readily identify problem areas or potential growth opportunities to help increase profitability.
In this discussion, we use an extensive data set to examine how a company might analyze its own data to identify potential changes it could investigate to drive better performance. Based upon general trends in the data, we recommend potential actions the company could take. Additionally, we examine how a company can utilize predictive modeling to adapt its decision-making process as the trends identified in the initial analysis evolve over time.
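As a hypothetical illustration of the kind of predictive modeling discussed (the figures below are invented, not the thesis's data set), a least-squares trend fitted to monthly profit yields a one-step-ahead forecast that can be refreshed as new months arrive:

```python
import numpy as np

# Hypothetical monthly profit figures in $thousands (illustrative only)
months = np.arange(12, dtype=float)
profit = np.array([10.2, 10.8, 11.1, 11.9, 12.4, 12.6,
                   13.3, 13.5, 14.2, 14.8, 15.1, 15.9])

# Least-squares linear trend: profit ~ slope * month + intercept
slope, intercept = np.polyfit(months, profit, deg=1)

# One-step-ahead forecast for month 12 (the 13th month)
forecast_next = slope * 12 + intercept
```

Refitting the trend each month as data accumulate is the simplest form of the adaptive decision-making loop described above; richer models would add seasonal terms or external predictors.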
|
550 |
Some mixture models for the joint distribution of stock's return and trading volume
Wong, Po-shing., 黃寶誠. January 1991 (has links)
published_or_final_version / Statistics / Master / Master of Philosophy
|