1

An Empirical Analysis of Resampled Efficiency

Kohli, Jasraj. January 2005 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: Resampling; Michaud. Includes bibliographical references (p. 21).
2

An Empirical Analysis of Resampled Efficiency

Kohli, Jasraj 26 April 2005 (has links)
Michaud introduced resampled efficiency as an alternative to, and improvement on, Markowitz mean-variance efficiency. While resampled efficiency is far from becoming the standard paradigm for allocating capital among risky assets, it has nonetheless gained considerable ground in financial circles and become a widely debated portfolio construction technique. This thesis applies Michaud's techniques to a wide array of stocks and tests the claims that resampled portfolios perform better. While resampling shows no conclusive advantage or disadvantage as a technique for obtaining better returns, resampled portfolios do seem to offer higher stability and lower transaction costs.
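The procedure the abstract refers to can be sketched compactly: simulate many re-estimations of the inputs, solve a mean-variance problem on each, and average the resulting weights. The following is a minimal, illustrative version assuming normally distributed returns and the unconstrained closed-form solution; Michaud's published method averages constrained efficient-frontier portfolios rank by rank, and all names and parameters here are hypothetical.

```python
import numpy as np

def resampled_weights(mu, sigma, n_obs, n_resamples=500, seed=0):
    """Michaud-style resampled portfolio: average mean-variance weights
    over many simulated re-estimations of the inputs (mu, sigma)."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    avg_w = np.zeros(d)
    for _ in range(n_resamples):
        # Simulate a history of the same length as the data and re-estimate.
        sample = rng.multivariate_normal(mu, sigma, size=n_obs)
        mu_b = sample.mean(axis=0)
        sigma_b = np.cov(sample, rowvar=False)
        # Unconstrained mean-variance solution w proportional to
        # sigma_b^{-1} mu_b, rescaled to be fully invested (assumes the
        # normalizer is positive; long/short positions allowed).
        w = np.linalg.solve(sigma_b, mu_b)
        avg_w += w / w.sum()
    return avg_w / n_resamples
```

Averaging the resampled weights tends to diversify holdings and dampen the estimation-error sensitivity of the Markowitz solution, which is consistent with the stability and transaction-cost findings above.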
3

Resampling tests for some survival models

Tang, Nga-yan, Fancy. January 2001 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2002. / Includes bibliographical references (leaves 82-91).
4

Resampling tests for some survival models

鄧雅恩, Tang, Nga-yan, Fancy. January 2001 (has links)
Published or final version / Statistics and Actuarial Science / Master of Philosophy
5

Transaction costs and resampling in mean-variance portfolio optimization

Asumeng-Denteh, Emmanuel. January 2004 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: Resampling; Portfolio Optimization; Transaction Costs; Mean-Variance Optimization. Includes bibliographical references (p. 23).
6

On lower bounds of mixture L₂-discrepancy, construction of uniform design and gamma representative points with applications in estimation and simulation

Ke, Xiao 13 May 2015 (has links)
Two topics related to experimental design are considered in this thesis. On the one hand, the uniform experimental design (UD), a major kind of space-filling design, is widely used in applications. The majority of UD tables (UDs) with good uniformity are generated under the centered L₂-discrepancy (CD) and the wrap-around L₂-discrepancy (WD). Recently, the mixture L₂-discrepancy (MD) was proposed and shown to be more reasonable than CD and WD in terms of uniformity. In the first part of the thesis we review lower bounds for the MD of two-level designs from a different point of view and provide a new lower bound. Following the same idea we obtain a lower bound for the MD of three-level designs. Moreover, we construct UDs under the MD measure using the threshold accepting (TA) algorithm, and we attach two new UD tables with good properties derived from TA under the MD measure. On the other hand, the problem of selecting a specific number of representative points (RPs) that retain as much information as possible about a given distribution has attracted attention. Previously, a method was given for selecting type-II representative points (RP-II) from the normal distribution; these point sets have good properties and minimize the information loss. Following a similar idea, Fu (1985) discussed RP-II for the gamma distribution. In the second part of the thesis, we improve the selection of gamma RP-II and provide further RP-II tables for a range of parameters. In statistical simulation, we also evaluate the estimation performance of point sets resampled from gamma RP-II by making comparisons in different situations.
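Mean-square-error representative points of the kind described here are commonly computed by a Lloyd-type fixed-point iteration. The sketch below approximates such points for a gamma distribution with a one-dimensional k-means on a large Monte Carlo sample; it illustrates the general idea only, not the thesis's own algorithm, and `gamma_rp2` with its parameters is hypothetical.

```python
import numpy as np

def gamma_rp2(m, shape, scale=1.0, n_sample=200_000, iters=100, seed=0):
    """Approximate m mean-square-error representative points of a
    Gamma(shape, scale) distribution: a Lloyd-type iteration, i.e. a
    one-dimensional k-means, on a large Monte Carlo sample."""
    rng = np.random.default_rng(seed)
    x = np.sort(rng.gamma(shape, scale, size=n_sample))
    pts = np.quantile(x, (np.arange(m) + 0.5) / m)   # quantile initialization
    for _ in range(iters):
        edges = (pts[:-1] + pts[1:]) / 2             # cell boundaries: midpoints
        idx = np.searchsorted(edges, x)              # cell index of each draw
        # Move each point to the conditional mean of its cell
        # (keep the old point if a cell happens to be empty).
        pts = np.array([x[idx == j].mean() if np.any(idx == j) else pts[j]
                        for j in range(m)])
    return pts
```

Resampling from the returned points, weighted by their cell probabilities, then supports the kind of simulation comparison the abstract describes.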
7

Quasi-Monte Carlo methods for bootstrap

Yam, Chiu Yu 01 January 2000 (has links)
No description available.
8

Modelling and resampling based multiple testing with applications to genetics

Huang, Yifan. January 2005 (has links)
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Document formatted into pages; contains xii, 97 p.; also includes graphics. Includes bibliographical references (p. 94-97). Available online via OhioLINK's ETD Center.
9

Statistically and Computationally Efficient Resampling and Distributionally Robust Optimization with Applications

Liu, Zhenyuan January 2024 (has links)
Uncertainty quantification via construction of confidence regions has long been studied in statistics. While these existing methods are powerful and commonly used, some modern problems that require expensive model fitting, or that elicit convoluted interactions between statistical and computational noise, can challenge their effectiveness. To remedy some of these challenges, this thesis proposes novel approaches that not only guarantee statistical validity but are also computationally efficient. We study two main methodological directions: resampling-based methods in the first half (Chapters 2 and 3) and optimization-based methods, in particular so-called distributionally robust optimization, in the second half (Chapters 4 to 6).

The first half focuses on the bootstrap, a common approach for statistical inference. This approach resamples data and hinges on the principle of using the resampling distribution as an approximation to the sampling distribution. However, implementing the bootstrap often demands extensive resampling and model-refitting effort to wash away the Monte Carlo error, which can be computationally expensive for modern problems. Chapters 2 and 3 study bootstrap approaches that use fewer resamples while maintaining coverage validity, and the quantification of uncertainty for models with both statistical and Monte Carlo computation errors.

In Chapter 2, we investigate bootstrap-based construction of confidence intervals using minimal resampling. We use a "cheap" bootstrap perspective based on sample-resample independence that yields valid coverage with as few as one resample, even when the problem dimension grows closely with the data size. We validate our theoretical findings and assess our approach against other benchmarks through various large-scale or high-dimensional problems.

In Chapter 3, we focus on the so-called input uncertainty problem in stochastic simulation, which refers to the propagation of the statistical noise in calibrating input models into output accuracy. Unlike most existing literature, which focuses on real-valued output quantities, we aim to construct confidence bands for the entire output distribution function, which carries more holistic information. We develop a new test statistic that generalizes the Kolmogorov-Smirnov statistic to construct confidence bands that account for input uncertainty on top of Monte Carlo errors, via an additional asymptotic component formed by a mean-zero Gaussian process. We also demonstrate how subsampling can be used to estimate the covariance function of this Gaussian process in a computationally cheap fashion.

The second part of the thesis is devoted to optimization-based methods, in particular distributionally robust optimization (DRO). Originally built to tackle uncertainty about the underlying distribution in stochastic optimization, DRO adopts a worst-case perspective and seeks decisions that optimize under the worst-case scenario over a so-called ambiguity set representing the distributional uncertainty. In this thesis, we turn DRO broadly into a statistical tool (still referred to as DRO) by optimizing targets of interest over the ambiguity set and transforming the coverage guarantee of the ambiguity set into confidence bounds for the targets. The flexibility of ambiguity sets allows the injection of prior distributional knowledge, so the approach operates with less data than existing methods require.
In Chapter 4, motivated by the bias-variance tradeoff and other technical complications in conventional multivariate extreme value theory, we propose a shape-constrained DRO called orthounimodality DRO (OU-DRO) as a vehicle for incorporating natural and verifiable information into the tail. We study its statistical guarantee and tractability, especially in the bivariate setting, via a new Choquet representation in convex analysis.

Chapter 5 further studies a general approach that applies in higher dimensions via sample average approximation (SAA) and importance sampling. We establish a convergence guarantee for the SAA optimal value of OU-DRO in any dimension under regularity conditions. We also argue that the resulting SAA problem is a linear program solvable by off-the-shelf algorithms.

In Chapter 6, we study the connection between the out-of-sample errors of data-driven stochastic optimization and DRO via large deviations theory. We propose a special type of DRO formulation whose ambiguity set is based on a Kullback-Leibler divergence smoothed by the Wasserstein or Lévy-Prokhorov distance. We relate large deviations theory to the performance of the proposed DRO and show that it achieves nearly optimal out-of-sample performance in terms of the exponential decay rate of the generalization error. Furthermore, computing the proposed DRO is no harder than DRO problems based on f-divergences or Wasserstein distances, which leads to a statistically optimal and computationally tractable DRO formulation.
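The "cheap" bootstrap interval described in Chapter 2 has a particularly compact form: with B resamples, the half-width uses a t critical value with B degrees of freedom, so even B = 1 yields asymptotically valid (if wide) coverage. Below is a minimal sketch under the assumption of a one-dimensional dataset and a real-valued estimator; the function name and parameters are illustrative, not the thesis's code.

```python
import numpy as np
from scipy import stats

def cheap_bootstrap_ci(data, estimator, B=1, alpha=0.05, seed=0):
    """'Cheap' bootstrap interval: valid coverage with as few as B = 1
    resamples, using a t critical value with B degrees of freedom
    (based on asymptotic sample-resample independence)."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    psi_hat = estimator(data)
    # Mean squared deviation of resample estimates around the point estimate.
    s2 = np.mean([(estimator(rng.choice(data, size=n, replace=True)) - psi_hat) ** 2
                  for _ in range(B)])
    half = stats.t.ppf(1 - alpha / 2, df=B) * np.sqrt(s2)
    return psi_hat - half, psi_hat + half

# Example: a 95% interval for the median from a single resample.
x = np.random.default_rng(1).exponential(size=500)
lo, hi = cheap_bootstrap_ci(x, np.median, B=1)
```

With B = 1 the critical value is the Cauchy 97.5% point (about 12.7), so the interval is wide; a handful of resamples already shrinks it substantially.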
10

Contributions to robust methods in nonparametric frontier models

Bruffaerts, Christopher 10 September 2014 (has links)
Frontier models are currently in wide use among economists, managers, and decision-makers generally. In these models, the researcher's goal is to assign to production units (firms, hospitals, or universities, for example) a measure of their productive efficiency. Do these units (denoted DMUs, for Decision-Making Units) make good use of their inputs and outputs? Do they exploit their full potential in the production process?

The production set is the set of all combinations of inputs and outputs that are physically feasible in an economy. From this set, with p inputs and q outputs, the notion of efficiency of a production unit can be defined: it is a distance separating the DMU from the frontier of the production set. From a sample of DMUs, the goal is to reconstruct this production frontier in order to evaluate the efficiency of the DMUs. To this end, researchers very often use so-called "classical" methods such as Data Envelopment Analysis (DEA).

Nowadays statisticians have access to ever more data, which also means they cannot scrutinize every observation in their database. Outliers may slip into the data without attracting particular attention. Frontier models are especially sensitive to outliers, which can strongly influence the ensuing inference. To prevent such observations from obstructing a correct analysis, robust methods are used.

Combining robustness with the problem of efficiency evaluation is the general objective of this thesis. The first chapter sets the scene by reviewing the existing literature in this field. The following four chapters are organized as scientific articles.

Chapter 2 studies the robustness properties of a particular efficiency estimator. This estimator measures the distance between the analyzed DMU and the production frontier along a hyperbolic path through the unit. This very specific type of distance proves useful for defining directional efficiency.

Chapter 3 extends the first article to the case of directional efficiency. This type of distance generalizes all linear distances used to evaluate the efficiency of a DMU. Besides studying the robustness properties of the directional efficiency estimator, a method for detecting outliers is presented; it proves very useful for identifying influential production units in this multidimensional space (of dimension p+q).

Chapter 4 presents inference methods for efficiencies in nonparametric frontier models. In particular, resampling methods such as the bootstrap and subsampling prove very useful. This article first shows how subsampling improves inference on efficiencies, and proves that plugging a robust efficiency estimator into resampling methods is not by itself sufficient for reliable inference. It therefore goes on to propose a robust resampling method adapted to the efficiency-evaluation problem.

Finally, the last chapter is an empirical application, analyzing the research efficiency of American public and private universities. Classical and robust methods are used to show how all the tools studied previously apply in practice. In particular, the study examines the impact on the efficiency of American institutions of variables such as teaching, internationalization, and collaboration with industry. / Doctorate in Sciences, Statistics orientation / info:eu-repo/semantics/nonPublished
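The DEA estimator mentioned above reduces, for each DMU, to a small linear program. A minimal input-oriented sketch under constant returns to scale follows; the thesis itself works with hyperbolic and directional distances and robust variants, so this shows only the classical building block, and the function name and layout are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, j0):
    """Input-oriented, constant-returns-to-scale DEA score for DMU j0.
    X is (p, n) inputs and Y is (q, n) outputs, one column per DMU.
    Solves: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0.
    """
    p, n = X.shape
    q = Y.shape[0]
    x0, y0 = X[:, j0], Y[:, j0]
    c = np.r_[1.0, np.zeros(n)]                 # variables: (theta, lam)
    A_in = np.hstack([-x0[:, None], X])         # X lam - theta x0 <= 0
    A_out = np.hstack([np.zeros((q, 1)), -Y])   # -Y lam <= -y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(p), -y0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # efficiency in (0, 1]; 1 means the DMU is on the frontier
```

Because the score is a hard minimum over the sample, a single outlying DMU can shift the frontier and every other unit's score, which is precisely the sensitivity the robust methods in this thesis address.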
