1

An investigation of bootstrap methods for estimating the standard error of equating under the common-item nonequivalent groups design

Wang, Chunxin 01 July 2011 (has links)
The purpose of this study was to investigate the performance of the parametric bootstrap method and to compare the parametric and nonparametric bootstrap methods for estimating the standard error of equating (SEE) under the common-item nonequivalent groups (CINEG) design with the frequency estimation (FE) equipercentile method under a variety of simulated conditions. To investigate the parametric bootstrap method, bivariate polynomial log-linear models were employed to fit the data. Considering different polynomial degrees and two numbers of cross-product moments, a total of eight parametric bootstrap models were examined. Two real datasets were used as the basis for defining the population distributions and the "true" SEEs. A simulation study was conducted reflecting three levels of group proficiency differences, three sample sizes, two test lengths, and two ratios of the number of common items to the total number of items. Bias of the SEE, standard errors of the SEE, root mean square errors (RMSE) of the SEE, and their corresponding weighted indices were calculated and used to evaluate and compare the simulation results. The main findings were as follows: (1) The parametric bootstrap models with larger polynomial degrees generally produced smaller bias but larger standard errors than those with lower polynomial degrees. (2) The parametric bootstrap models with a higher-order cross-product moment (CPM) of two generally yielded more accurate estimates of the SEE than the corresponding models with a CPM of one. (3) The nonparametric bootstrap method generally produced less accurate estimates of the SEE than the parametric bootstrap method; however, as the sample size increased, the differences between the two methods became smaller. When the sample size was equal to or larger than 3,000, the differences between the nonparametric bootstrap method and the parametric bootstrap model that produced the smallest RMSE were very small. (4) Of all the models considered, parametric bootstrap models with a polynomial degree of four performed best under most simulation conditions. (5) Aside from method effects, sample size and test length had the most impact on estimating the SEE. Group proficiency differences and the ratio of the number of common items to the total number of items had little effect on the short test but a slight effect on the long test.
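The bootstrap logic the study relies on can be illustrated in miniature. The sketch below is a deliberate simplification: it uses mean-sigma linear equating on independent groups rather than the FE equipercentile method under CINEG that the thesis actually studies, and `linear_equate` and `bootstrap_see` are illustrative names, not the thesis's code.

```python
import random
import statistics

def linear_equate(x_scores, y_scores, x):
    """Mean-sigma linear equating of score x from form X to the form Y scale."""
    mx, my = statistics.mean(x_scores), statistics.mean(y_scores)
    sx, sy = statistics.stdev(x_scores), statistics.stdev(y_scores)
    return my + (sy / sx) * (x - mx)

def bootstrap_see(x_scores, y_scores, x, n_boot=1000, seed=0):
    """Nonparametric bootstrap estimate of the standard error of equating
    at score point x: resample each group with replacement, re-equate,
    and take the standard deviation of the equated values."""
    rng = random.Random(seed)
    equated = []
    for _ in range(n_boot):
        xb = rng.choices(x_scores, k=len(x_scores))
        yb = rng.choices(y_scores, k=len(y_scores))
        equated.append(linear_equate(xb, yb, x))
    return statistics.stdev(equated)
```

The parametric variant studied in the thesis would differ only in the resampling step: instead of drawing from the observed scores, it would draw from a fitted (log-linear) model of the score distribution.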
2

Power Analysis of Bootstrap Methods for Testing Homogeneity of Variances with Small Sample

Shih, Chiang-Ming 23 July 2008 (has links)
Several classical tests are investigated for testing the homogeneity of variances. However, these statistics do not perform well with small sample sizes. In this article we discuss the use of the bootstrap technique for testing the equality of variances with small samples. Two important features of the proposed resampling method are its flexibility and robustness. Both the α levels and the power of the newly proposed procedure are compared with those of the classical methods discussed here.
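A minimal sketch of the kind of resampling test described above. The choices here are illustrative assumptions, not the thesis's exact procedure: the statistic is the max/min sample-variance ratio, and the null of equal variances is imposed by pooling mean-centered observations before resampling.

```python
import random
import statistics

def var_ratio(samples):
    """Test statistic: ratio of largest to smallest sample variance."""
    variances = [statistics.variance(s) for s in samples]
    return max(variances) / min(variances)

def bootstrap_variance_test(samples, n_boot=2000, seed=0):
    """Bootstrap test of H0: equal variances. Each sample is centered at
    its own mean, the centered values are pooled, and bootstrap samples
    of the original sizes are drawn from the pool (resampling under H0).
    Returns the bootstrap p-value."""
    rng = random.Random(seed)
    observed = var_ratio(samples)
    pooled = [x - statistics.mean(s) for s in samples for x in s]
    count = 0
    for _ in range(n_boot):
        boot = [rng.choices(pooled, k=len(s)) for s in samples]
        if var_ratio(boot) >= observed:
            count += 1
    return count / n_boot
```

Because the null distribution is built by resampling rather than from a normality assumption, the test's level is not tied to the parent distribution, which is the robustness the abstract refers to.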
3

Evaluating Performance for Network Equipment Manufacturing Firms

Lin, Hong-jia 08 July 2009 (has links)
none
4

Apply bootstrap method to verify the stock-picking ability and persistence of mutual fund performance

Yu, Yu-hsin 16 June 2005 (has links)
How to evaluate mutual fund performance correctly and how to determine the investment targets of mutual funds are important issues for investors. In this study, we apply an innovative bootstrap statistical technique to address the small-sample problem and the distributional-assumption difficulties of previous research. We examine the performance of domestic open-end mutual funds over the period from 1998 to 2003 using five performance measurement models, and we further test the persistence of mutual fund performance. This study shows that: 1. On average, mutual fund managers do not possess superior stock-selection ability. Most funds experiencing abnormal performance may simply benefit from good luck, since random selection also creates abnormal performance. 2. Mutual fund managers do not possess market-timing ability. When the sample is classified further by investment objective, only the group of small-scale stock funds shows significant market-timing ability. 3. Performance persistence does not exist in either the long term or the short term.
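The luck-versus-skill argument in point 1 is commonly implemented with a residual bootstrap under a zero-alpha null. The sketch below is a hedged simplification using a single-factor (CAPM-style) regression; the thesis's five performance models and its fund data are not reproduced, and the function names are illustrative.

```python
import random
import statistics

def capm_alpha(fund, market):
    """OLS fit of fund_t = alpha + beta * market_t + e_t.
    Returns (alpha, beta, residuals)."""
    mb, mf = statistics.mean(market), statistics.mean(fund)
    beta = (sum((m - mb) * (f - mf) for m, f in zip(market, fund))
            / sum((m - mb) ** 2 for m in market))
    alpha = mf - beta * mb
    resid = [f - alpha - beta * m for f, m in zip(fund, market)]
    return alpha, beta, resid

def bootstrap_alpha_pvalue(fund, market, n_boot=1000, seed=0):
    """Resample residuals to build zero-alpha pseudo-funds; the p-value is
    the share of bootstrap alphas at least as large as the observed one."""
    rng = random.Random(seed)
    alpha, beta, resid = capm_alpha(fund, market)
    count = 0
    for _ in range(n_boot):
        eb = rng.choices(resid, k=len(resid))
        pseudo = [beta * m + e for m, e in zip(market, eb)]  # alpha forced to 0
        a_b, _, _ = capm_alpha(pseudo, market)
        if a_b >= alpha:
            count += 1
    return alpha, count / n_boot
```

A large p-value for an apparently outperforming fund is exactly the "good luck" conclusion: pure chance, with no skill, generates alphas at least that large just as often.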
5

Confidence Interval Estimation for Distribution Systems Power Consumption by Using the Bootstrap Method

Cugnet, Pierre 17 July 1997 (has links)
The objective of this thesis is to estimate, for a distribution network, confidence intervals containing the values of nodal hourly power consumption and nodal maximum power consumption per customer where they are not measured. The values of nodal hourly power consumption are needed in operational as well as in planning stages to carry out load flow studies. As for the values of nodal maximum power consumption per customer, they are used to solve planning problems such as transformer sizing. Confidence interval estimation was preferred to point estimation because it takes into consideration the large variability of the consumption values. A computationally intensive statistical technique, namely the bootstrap method, is utilized to estimate these intervals. It allows us to replace idealized model assumptions for the load distributions by model free analyses. Two studies have been executed. The first one is based on the original nonparametric bootstrap method to calculate a 95% confidence interval for nodal hourly power consumption. This estimation is carried out for a given node and a given hour of the year. The second one makes use of the parametric bootstrap method in order to infer a 95% confidence interval for nodal maximum power consumption per customer. This estimation is realized for a given node and a given month. Simulation results carried out on a real data set are presented and discussed. / Master of Science
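The nonparametric percentile-bootstrap interval used in the first study reduces to a few lines. A generic sketch with illustrative names, assuming the plain percentile method (the thesis's second study uses a parametric variant, not shown here):

```python
import random

def percentile_ci(data, stat, n_boot=2000, level=0.95, seed=0):
    """Nonparametric percentile-bootstrap confidence interval for stat(data):
    resample with replacement, evaluate the statistic, and read off the
    empirical quantiles of the bootstrap replicates."""
    rng = random.Random(seed)
    stats = sorted(stat(rng.choices(data, k=len(data))) for _ in range(n_boot))
    lo = stats[int((1 - level) / 2 * n_boot)]
    hi = stats[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi
```

Applied to hourly consumption readings at a node, `stat` would be, for example, the mean hourly load or the per-customer maximum; no distributional model for the load is needed, which is the "model-free" advantage the abstract describes.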
6

An Empirical Study on Stock Market Timing with Technical Trading Rules

Chao, Yung-Yu 10 July 2002 (has links)
In the last few years, it has been shown that the movements of financial assets are non-linear and exhibit some tendency within a given period. Increasing evidence that technical trading rules can detect non-linearity in financial time series has renewed interest in technical analysis. This study evaluates the market-timing ability of moving average trading rules in twelve equity markets in the developed and emerging markets from January 1990 through March 2002. We use a traditional test, a bootstrap p-value test, Cumby and Modest's market-timing test and simulated stock trading to evaluate market-timing ability. The overall results indicate that the moving average trading rules have predictive ability with respect to market indices in the Asian emerging stock markets. These findings may provide investors with important asset allocation information.
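A moving average crossover rule of the kind evaluated here can be sketched as follows; the 5- and 20-day window lengths are illustrative assumptions, not necessarily the thesis's choices.

```python
def moving_average_signals(prices, short=5, long=20):
    """Return a 0/1 signal per day: long (1) when the short moving average
    is above the long moving average, flat (0) otherwise. Days without
    enough history for the long window get signal 0."""
    signals = []
    for t in range(len(prices)):
        if t + 1 < long:
            signals.append(0)  # not enough history yet
            continue
        ma_s = sum(prices[t + 1 - short:t + 1]) / short
        ma_l = sum(prices[t + 1 - long:t + 1]) / long
        signals.append(1 if ma_s > ma_l else 0)
    return signals
```

The bootstrap p-value test mentioned in the abstract would then compare the rule's returns on the actual price series against its returns on resampled (null-model) series.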
7

Cox Model Analysis with the Dependently Left Truncated Data

Li, Ji 07 August 2010 (has links)
A truncated sample consists of realizations of a pair of random variables (L, T) subject to the constraint that L ≤ T. The major interest in studying a truncated sample is to find the marginal distributions of L and T. Many studies have been done under the assumption that L and T are independent. We introduce a new way to specify a Cox model for a truncated sample, assuming that the truncation time is a predictor of T, which induces dependence between L and T. We develop an algorithm to obtain the adjusted risk sets and use the Kaplan-Meier estimator to estimate the marginal distribution of L. We further extend our method to a more practical situation, in which the Cox model includes other covariates associated with T. Simulation studies have been conducted to investigate the performance of the Cox model and the new estimators.
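The adjusted-risk-set idea can be illustrated in the simplest setting, without covariates or censoring: at each event time t, the risk set contains only subjects already entered (L_i ≤ t) and not yet failed (t ≤ T_i). A minimal product-limit sketch under these assumptions (not the thesis's Cox-model algorithm):

```python
def truncated_km(L, T):
    """Product-limit (Kaplan-Meier type) estimate of P(T > t) from a
    left-truncated sample without censoring. Subject i contributes to the
    risk set at event time t only when L[i] <= t <= T[i]."""
    times = sorted(set(T))
    surv, s = {}, 1.0
    for t in times:
        n_risk = sum(1 for li, ti in zip(L, T) if li <= t <= ti)
        d = sum(1 for ti in T if ti == t)  # events at time t
        s *= 1.0 - d / n_risk
        surv[t] = s
    return surv
```

With all truncation times at zero this reduces to the ordinary Kaplan-Meier estimator, which is a useful sanity check on the risk-set adjustment.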
8

Jackknife Empirical Likelihood Inference For The Pietra Ratio

Su, Yueju 17 December 2014 (has links)
The Pietra ratio (Pietra index), also known as the Robin Hood index, the Schutz coefficient (Ricci-Schutz index) or half the relative mean deviation, is a good measure of statistical heterogeneity for positive-valued data sets. In this thesis, two novel methods, namely "adjusted jackknife empirical likelihood" and "extended jackknife empirical likelihood", are developed from the jackknife empirical likelihood method to obtain interval estimates of the Pietra ratio of a population. The performance of the two novel methods is compared with the jackknife empirical likelihood method, the normal approximation method and two bootstrap methods (the percentile bootstrap method and the bias-corrected and accelerated bootstrap method). Simulation results indicate that under both symmetric and skewed distributions, especially when the sample is small, the extended jackknife empirical likelihood method performs best among the six methods in terms of the coverage probabilities and interval lengths of the confidence interval for the Pietra ratio; when the sample size is over 20, the adjusted jackknife empirical likelihood method performs better than the other methods, except the extended jackknife empirical likelihood method. Furthermore, several real data sets are used to illustrate the proposed methods.
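Jackknife empirical likelihood starts from jackknife pseudo-values of the statistic, which are then treated as approximately i.i.d. observations for the empirical-likelihood step. A minimal sketch of the Pietra ratio and its pseudo-values; the empirical-likelihood optimization itself, and the thesis's adjusted and extended variants, are omitted.

```python
import statistics

def pietra(x):
    """Pietra ratio: half the relative mean absolute deviation,
    sum(|x_i - mean|) / (2 * n * mean), for positive-valued data."""
    m = statistics.mean(x)
    return sum(abs(v - m) for v in x) / (2 * len(x) * m)

def jackknife_pseudo_values(x, stat):
    """Jackknife pseudo-values V_i = n*stat(x) - (n-1)*stat(x without i)."""
    n = len(x)
    full = stat(x)
    return [n * full - (n - 1) * stat(x[:i] + x[i + 1:]) for i in range(n)]
```

The mean of the pseudo-values is the jackknife estimate of the ratio, and their sample variance yields a jackknife standard error, the starting point for the interval methods compared in the thesis.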
9

Svavelhaltsmätning av bränd kalk från Rättvik / Sulphur Content Measurement of Quicklime from Rättvik

Makhmour, Salim, Thunström, Robert January 2016 (has links)
This thesis project was carried out by two students on behalf of SMA Mineral AB, which owns the lime plant in Rättvik, where there was a need to establish a sampling method for the local quicklime product. The aim was to ensure a maximum concentration of impurities in the product, primarily carbon and sulphur. The mean sulphur content of the input material varied over time. Consequently, a suitable statistical method was needed to ensure product quality for the prospective customer, who required that the sulphur content of the proposed product never exceed 500 ppm.

The aim was, on the one hand, to process and compile the sampling results in accordance with a suitable statistical method that enabled reasonable conclusions about product quality and, on the other hand, to answer three key questions that SMA Mineral AB posed:

• whether the product's sulphur content was affected during conveyance through the lime plant;
• whether sampling at various time intervals may have been a factor affecting the product's sulphur content;
• whether there were any particular times of day at which the sulphur content always maintained the correct level.

A number of phases were required to answer these questions. The planning phase was initiated by a visit to Rättvik, with the purpose of gaining an overall picture of how work at the plant was conducted and which guidelines and regulations were in effect. After this visit, a project plan was drawn up to serve as support for further work.

The sampling campaign took place during the period of 13–16 April 2015, and analysis of the collected material was carried out the following week at the company's laboratory in Persberg, Sweden. However, the results from the sampling campaign did not provide a sufficient basis for answering the company's questions, which is why data from SMA Mineral AB's own data collection was used.
Data collected during the sampling campaign proved to follow a normal distribution. Subsequently, the statistical analysis-of-variance method, ANOVA, was applied to investigate whether the sulphur content changed with respect to the time interval and the sampling site. The test results demonstrated p-values above 0.05, which meant that neither the sampling site nor the sampling time intervals had an effect on the product's sulphur content. The company's question, whether there were daily time intervals of acceptable sulphur content in the product, was answered with the assistance of the company's own data collection, which demonstrated that it did not follow a normal distribution. For that reason, the bootstrap method was used to create confidence intervals for the different points in time. The result showed that there were no points in time during which acceptable material was produced. One reason for this is the occurrence of a set of deviating values observed to have a sulphur content exceeding 1,000 ppm. This report presents recommendations for various measures, independently of any opinions SMA Mineral AB may have concerning the source of these values and whether they can possibly be avoided.
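The one-way ANOVA comparison described above reduces to a ratio of between-group to within-group mean squares. A minimal sketch of the F statistic, with the sampling sites or time intervals as groups (the company's data and the subsequent bootstrap step are not reproduced):

```python
import statistics

def anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square, for a list of samples (one per group)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    values = [v for g in groups for v in g]
    grand = statistics.mean(values)
    means = [statistics.mean(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A small F (p above the chosen level) is consistent with the report's conclusion that neither sampling site nor sampling time interval affected the sulphur content.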
10

Zobecněné lineární modely v upisovacím riziku / Generalized Linear Models in Reserving Risk

Zboňáková, Lenka January 2015 (has links)
In the presented thesis we deal with the generalized linear models framework in a claims reserving problem. Claims reserving in non-life insurance is first described and the considered class of models is introduced. This branch of stochastic modelling is then implemented in the reserving setup. To compute the risk associated with claims reserving, we need a predictive distribution of future liabilities in order to evaluate risk measures such as Value at Risk and Conditional Value at Risk. Since datasets in non-life insurance commonly consist of a small number of observations and estimation of predictive distributions can be complicated, we adopt a bootstrap method for this purpose. Model fitting, simulations and the consequent measurement of the reserving risk are performed using real-life data. Based on this, an analysis of the fitted models and their comparison, together with graphical outputs, is included.
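Once a bootstrap sample of future liabilities is available, the two risk measures reduce to empirical tail quantities. A minimal sketch using a simple order-statistic estimator; the simulated losses are assumed to come from the bootstrap predictive distribution described in the abstract.

```python
def var_cvar(losses, level=0.995):
    """Empirical Value at Risk (the level-quantile of the loss sample) and
    Conditional Value at Risk (the mean loss at or beyond the VaR) from a
    sample of simulated losses."""
    s = sorted(losses)
    idx = min(int(level * len(s)), len(s) - 1)
    var = s[idx]
    tail = s[idx:]
    return var, sum(tail) / len(tail)
```

CVaR is always at least as large as VaR at the same level, since it averages over the tail that VaR merely bounds; that is why it is the more conservative of the two measures for reserving.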
