1 |
An Empirical Analysis of Resampled Efficiency / Kohli, Jasraj. 26 April 2005
Michaud introduced resampled efficiency as an alternative to, and an improvement on, Markowitz mean-variance efficiency. While resampled efficiency is far from becoming the standard paradigm for capital allocation amongst risky assets, it has nonetheless gained considerable ground in financial circles and become a widely debated portfolio construction technique. This thesis applies Michaud's techniques to a wide array of stocks and tries to validate claims of the performance superiority of resampled portfolios. While resampling shows no conclusive advantage or disadvantage as a technique for obtaining better returns, resampled portfolios do seem to offer higher stability and lower transaction costs.
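The resampling procedure Michaud proposed can be sketched in a few lines: simulate alternative return histories from the estimated inputs, optimize each simulated history, and average the resulting weight vectors. The snippet below is a simplified illustration under strong assumptions (unconstrained closed-form mean-variance weights crudely clipped to long-only, multivariate-normal returns, invented toy data); it shows the shape of the idea, not Michaud's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_variance_weights(mu, cov, risk_aversion=4.0):
    # Unconstrained mean-variance solution, crudely clipped to long-only
    # and renormalized (a stand-in for a constrained optimizer).
    w = np.linalg.solve(cov, mu) / risk_aversion
    w = np.clip(w, 0.0, None)
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

def resampled_weights(returns, n_resamples=500):
    # Michaud-style resampling: simulate alternative return histories from
    # the estimated inputs, optimize each history, average the weights.
    mu, cov = returns.mean(axis=0), np.cov(returns, rowvar=False)
    T = returns.shape[0]
    draws = rng.multivariate_normal(mu, cov, size=(n_resamples, T))
    weights = [mean_variance_weights(d.mean(axis=0), np.cov(d, rowvar=False))
               for d in draws]
    return np.mean(weights, axis=0)

# Toy data: three assets, 120 simulated monthly returns
returns = rng.multivariate_normal([0.010, 0.008, 0.012],
                                  np.diag([0.002, 0.001, 0.003]), size=120)
w = resampled_weights(returns)
print(w)  # averaged long-only weights summing to 1
```

Averaging over many optimizations is what produces the smoother, more stable weights the abstract describes: no single noisy estimate of the inputs dominates the final allocation.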
2 |
An Empirical Analysis of Resampled Efficiency / Kohli, Jasraj. January 2005
Thesis (M.S.) -- Worcester Polytechnic Institute. / Keywords: Resampling; Michaud. Includes bibliographical references (p. 21).
3 |
The effects of resampling on information extraction from thematic mapper imagery / Atkinson, P. January 1987
No description available.
4 |
A New Resampling Method to Improve Quality Research with Small Samples / Bai, Haiyan. 03 April 2007
No description available.
5 |
Measuring the Stability of Results from Supervised Statistical Learning / Philipp, Michel; Rusch, Thomas; Hornik, Kurt; Strobl, Carolin. 17 January 2017
Stability is a major requirement to draw reliable conclusions when interpreting results from supervised statistical learning. In this paper, we present a general framework for assessing and comparing the stability of results that can be used in real-world statistical learning applications or in benchmark studies. We use the framework to show that stability is a property of both the algorithm and the data-generating process. In particular, we demonstrate that unstable algorithms (such as recursive partitioning) can produce stable results when the functional form of the relationship between the predictors and the response matches the algorithm. Typical uses of the framework in practice would be to compare the stability of results generated by different candidate algorithms for a data set at hand, or to assess the stability of algorithms in a benchmark study. Code to perform the stability analyses is provided in the form of an R package. / Series: Research Report Series / Department of Statistics and Mathematics
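The core stability idea can be illustrated without the authors' R package: refit a learner on bootstrap resamples and measure how much its predictions on a fixed grid vary between fits. The sketch below uses a hand-rolled regression stump as a stand-in for recursive partitioning; the function names and the instability measure (mean pairwise RMS distance between fitted prediction curves) are illustrative choices, not the paper's definitions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def fit_stump(X, y):
    # One-split regression stump: a deliberately simple, potentially
    # unstable learner standing in for recursive partitioning.
    best = None
    for thr in np.quantile(X, np.linspace(0.1, 0.9, 17)):
        left, right = y[X <= thr], y[X > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if best is None or sse < best[0]:
            best = (sse, thr, left.mean(), right.mean())
    _, thr, lo, hi = best
    return lambda x: np.where(x <= thr, lo, hi)

def stability(X, y, n_boot=20, grid=None):
    # Refit on bootstrap resamples; report mean pairwise RMS distance
    # between the fitted prediction curves (lower = more stable).
    grid = np.linspace(X.min(), X.max(), 50) if grid is None else grid
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))
        preds.append(fit_stump(X[idx], y[idx])(grid))
    dists = [np.sqrt(np.mean((a - b)**2)) for a, b in combinations(preds, 2)]
    return float(np.mean(dists))

# Step-shaped signal matches the stump's functional form; a smooth signal does not
X = rng.uniform(0, 1, 200)
y_step = np.where(X <= 0.5, 0.0, 1.0) + rng.normal(0, 0.1, 200)
y_smooth = np.sin(2 * np.pi * X) + rng.normal(0, 0.1, 200)
s_step = stability(X, y_step)
s_smooth = stability(X, y_smooth)
print(s_step, s_smooth)
```

Comparing the two instability values for a data set at hand is exactly the "typical use" the abstract describes, here in miniature.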
6 |
The statistical analysis of complex sampling data / Paulse, Bradley. January 2018
Magister Scientiae - MSc / Most standard statistical techniques illustrated in textbooks assume that the data are collected from a simple random sample (SRS) and hence are independently and identically distributed (i.i.d.). In reality, data are often sourced through complex sampling (CS) designs, with a combination of stratification and clustering at different levels of the design. Consequently, CS data are not i.i.d., and sampling weights, developed over the different stages of the design, are calculated and included in the analysis to account for the sampling design. Logistic regression is often employed in the modelling of survey data, since the response under investigation typically has a dichotomous outcome. Furthermore, since the logistic regression model has no homogeneity or normality assumptions, it is appealing when modelling a dichotomous response from survey data.
This research compares the estimates of the logistic regression model parameters when the CS design is accounted for, i.e. when weighting is present, with those obtained when the data are modelled under an SRS design, i.e. without weighting. In addition, the standard errors of the estimators are obtained using three different variance estimation techniques, viz. Taylor series linearization, the jackknife and the bootstrap. The resulting standard errors are used to construct the standard (asymptotic) interval, which is compared with the bootstrap percentile interval in terms of interval coverage probability. A further comparison is made between results obtained using only design weights and those obtained using calibrated and integrated sampling weights. The simulation study is based on the Income and Expenditure Survey (IES) of 2005/2006. The results showed that the estimators generally performed better when weighting was used than when the design was ignored, i.e. under the assumption of SRS, with the Taylor series linearization results being the most stable.
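The estimation strategy described above can be sketched in a few lines: fit a logistic regression in which the sampling weights enter the score and information, then obtain bootstrap standard errors by resampling units together with their weights and refitting. This is a simplified numpy-only illustration on invented two-stratum data, not the IES analysis; the weights, design, and parameter values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def weighted_logit(X, y, w, n_iter=25):
    # Survey-weighted logistic regression via iteratively reweighted
    # least squares: weights w multiply both score and information.
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))
        hess = X.T @ ((w * p * (1 - p))[:, None] * X)
        beta = beta + np.linalg.solve(hess, grad)
    return beta

def bootstrap_se(X, y, w, n_boot=200):
    # Bootstrap standard errors: resample units (with their weights), refit.
    betas = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        betas.append(weighted_logit(X[idx], y[idx], w[idx]))
    return np.std(betas, axis=0, ddof=1)

# Invented two-stratum design: stratum 1 oversampled, hence lower weight
n = 1000
stratum = rng.integers(0, 2, n)
w = np.where(stratum == 1, 0.5, 2.0)
x = rng.normal(size=n) + 0.5 * stratum
X = np.column_stack([np.ones(n), x])
p_true = 1.0 / (1.0 + np.exp(-(-0.5 + 1.0 * x)))
y = (rng.uniform(size=n) < p_true).astype(float)

beta = weighted_logit(X, y, w)
se = bootstrap_se(X, y, w)
print(beta, se)  # estimates near the true (-0.5, 1.0)
```

The jackknife and Taylor linearization variants mentioned in the abstract would replace only the `bootstrap_se` step; the weighted fit itself is unchanged.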
7 |
Tests zur Modellspezifikation in der nichtlinearen Regression [Tests for model specification in nonlinear regression] / Bartels, Knut. January 1999
The data-generating process is often modelled as the basis for subsequent statistical estimation and testing procedures. This work studies how this specification of parametric models can itself be tested. Generalizing existing methods, tests with a fixed kernel are introduced and their asymptotic properties analyzed. It is shown that critical values can be determined using several resampling procedures; of these, a new Monte Carlo approximation is of special importance, since it can substantially reduce the computational complexity. A conditional least-squares estimator for nonlinear parametric models is defined and its essential asymptotic properties are derived. All versions of the tests and all new concepts were studied in simulation studies, the most important results of which are presented. The practical applicability of the tests is demonstrated on a dataset on product choice, analyzed with multinomial logit models.
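The role resampling plays in determining critical values can be illustrated with a much simpler specification test than the kernel-based statistics of the thesis: fit the null (here linear) model, form a toy misspecification statistic, and approximate its null distribution by a residual bootstrap. Everything below (the statistic, the data, the function names) is an illustrative stand-in, not the thesis procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_linear(x, y):
    # Null model: ordinary least squares on an intercept and x.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X, beta

def spec_stat(x, resid):
    # Toy misspecification statistic: |correlation| between residuals
    # and a left-out nonlinear direction (x^2).
    z = x**2 - np.mean(x**2)
    return abs(np.sum(z * resid)) / np.sqrt(np.sum(z**2) * np.sum(resid**2))

def bootstrap_pvalue(x, y, n_boot=499):
    # Residual bootstrap: regenerate data under the fitted null model,
    # recompute the statistic, and compare with the observed value.
    X, beta = fit_linear(x, y)
    fitted = X @ beta
    resid = y - fitted
    t_obs = spec_stat(x, resid)
    t_boot = []
    for _ in range(n_boot):
        y_star = fitted + rng.choice(resid, size=len(resid), replace=True)
        Xs, bs = fit_linear(x, y_star)
        t_boot.append(spec_stat(x, y_star - Xs @ bs))
    return float(np.mean(np.array(t_boot) >= t_obs))

x = rng.normal(size=200)
y_lin = 1 + 2 * x + rng.normal(size=200)          # correctly specified
y_quad = 1 + 2 * x + x**2 + rng.normal(size=200)  # misspecified
p_lin = bootstrap_pvalue(x, y_lin)
p_quad = bootstrap_pvalue(x, y_quad)
print(p_lin, p_quad)  # p_quad should be near zero
```

The Monte Carlo approximation discussed in the abstract addresses exactly the cost of the refit-per-replicate loop above, which is what makes such tests expensive in practice.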
8 |
Digital resampling and timing recovery in QAM systems / Duong, Quang Xuan. 29 November 2010
Digital resampling is a process that converts a digital signal from one sampling rate to another. It is performed by interpolating between the input samples to produce output samples at the output sampling rate; the interpolation is accomplished with an interpolation filter.
The problem of resampling digital signals at an output sampling rate that is incommensurate with the input sampling rate is the first topic of this thesis. The problem is often encountered in practice, for example when multiplexing video signals from different sources for distribution. There are two basic approaches to resampling the signals. Both are thoroughly described, and practical circuits for hardware implementation are provided. A comparison of the two circuits shows that one requires a division to compute the new sampling times; this time-scaling operation adds implementation complexity with no performance advantage, making the 'division free' circuit the preferred one for resampling.
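The 'division free' control structure can be sketched with a phase accumulator: the fixed input/output rate ratio is added once per output sample, directly yielding the basepoint index and fractional interval, so no per-sample division is needed. The sketch below uses linear interpolation for brevity, whereas a practical circuit would use a longer interpolation filter; it illustrates the control structure, not the thesis circuit.

```python
import numpy as np

def resample_linear(x, ratio_in_out):
    # Rate conversion with a phase accumulator: t advances by the fixed
    # input/output ratio per output sample (no per-sample division).
    step = ratio_in_out
    out = []
    t = 0.0
    while t < len(x) - 1:
        n = int(t)       # basepoint index
        mu = t - n       # fractional interval in [0, 1)
        out.append((1 - mu) * x[n] + mu * x[n + 1])
        t += step
    return np.array(out)

# Resample a 1 kHz sine from 48 kHz to 44.1 kHz (incommensurate-looking ratio)
fs_in, fs_out = 48000.0, 44100.0
t_in = np.arange(480) / fs_in
x = np.sin(2 * np.pi * 1000.0 * t_in)
y = resample_linear(x, fs_in / fs_out)
print(len(x), len(y))
```

In fixed-point hardware the accumulator would be an integer register whose overflow/carry structure supplies `n` and `mu` directly, which is what removes the division from the sampling-time computation.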
The second topic of this thesis is the performance analysis of interpolation filters for Quadrature Amplitude Modulation (QAM) signals in the context of timing recovery. The performance criterion of interest is the Modulation Error Ratio (MER), widely considered a useful indicator of the quality of modulated signals in QAM systems. The methodology of digital resampling in hardware is employed to describe timing recovery circuits and to propose an approach for evaluating the performance of interpolation filters. A MER performance analysis circuit is then devised, simulated in MATLAB/Simulink, and implemented on a Field Programmable Gate Array (FPGA). Excellent agreement between simulation and hardware results confirms the validity of the methodology and the practical applicability of the work.
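MER itself is simple to compute once ideal and received symbols are aligned: it is the ratio of average ideal-symbol power to average error-vector power, expressed in dB. A minimal sketch on a noisy 16-QAM constellation (the constellation and noise level are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

def mer_db(ideal, received):
    # Modulation Error Ratio in dB: average ideal-symbol power over
    # average error-vector power.
    err = received - ideal
    return 10 * np.log10(np.mean(np.abs(ideal)**2) / np.mean(np.abs(err)**2))

# 16-QAM symbols (levels +/-1, +/-3 on each axis) with additive Gaussian noise
levels = np.array([-3.0, -1.0, 1.0, 3.0])
symbols = rng.choice(levels, 10000) + 1j * rng.choice(levels, 10000)
noisy = symbols + rng.normal(0, 0.1, 10000) + 1j * rng.normal(0, 0.1, 10000)
m = mer_db(symbols, noisy)
print(round(m, 1))  # ~27 dB for this noise level
```

Degradation introduced by an interpolation filter shows up as extra error-vector power, so comparing MER with and without the filter in the loop is a natural way to rank candidate filters, in the spirit of the analysis above.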
10 |
Resampling tests for some survival models / Tang, Nga-yan, Fancy. January 2001
Thesis (M. Phil.)--University of Hong Kong, 2002. / Includes bibliographical references (leaves 82-91).