A PARTIAL SIMULATION STUDY OF PHANTOM EFFECTS IN MULTILEVEL ANALYSIS OF SCHOOL EFFECTS: THE CASE OF SCHOOL SOCIOECONOMIC COMPOSITION

Zhou, Hao 01 January 2019 (has links)
Socioeconomic status (SES) affects students’ academic achievement at different levels of an educational system. However, a misspecified hierarchical linear model (HLM) may bias estimates of school SES effects. In this study, a partial simulation was conducted to examine how a misspecified HLM biases estimates of school and student SES effects. The results can be summarized in four points. First, based on the partial simulation procedure, the phantom effects of school SES and student SES are real. Second, the characteristics of phantom effects generalize: the stronger the correlation between the prior and present science achievement measures, the greater the decrease in both student SES effects and school SES effects. Third, the partial simulation procedure offers a new angle relative to fully theoretical (full simulation) studies, which rest entirely on idealized assumptions. Finally, the partial simulation procedure gives researchers a way to create prior student achievement measures when these are not available for data analysis.
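A minimal sketch of the phenomenon the abstract describes, under assumed conditions: pooled OLS stands in for the multilevel model, and every coefficient below is illustrative rather than taken from the thesis. The SES effect shrinks once a correlated prior-achievement measure enters the model.

```python
# Minimal sketch (not the author's simulation design): a "phantom effect" --
# the SES coefficient shrinks when a prior-achievement measure correlated
# with SES is added as a covariate. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_students = 100, 30

school_ses = rng.normal(size=n_schools)                      # level-2 predictor
ses = np.repeat(school_ses, n_students) + rng.normal(size=n_schools * n_students)
prior = 0.8 * ses + rng.normal(size=ses.size)                # prior achievement, correlated with SES
y = 0.5 * ses + 0.7 * prior + rng.normal(size=ses.size)      # present achievement

def ols_coef(predictors, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

print("SES effect, no prior score:  %.3f" % ols_coef([ses], y)[1])
print("SES effect, prior included:  %.3f" % ols_coef([ses, prior], y)[1])
```

Raising the 0.8 correlation between SES and the prior measure widens the gap between the two estimates, matching the abstract's second point.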
12

Estimation and Inference for Quantile Regression of Longitudinal Data : With Applications in Biostatistics

Karlsson, Andreas January 2006 (has links)
This thesis consists of four papers dealing with estimation and inference for quantile regression of longitudinal data, with an emphasis on nonlinear models.

The first paper extends the idea of quantile regression estimation from the case of cross-sectional data with independent errors to the case of linear or nonlinear longitudinal data with dependent errors, using a weighted estimator. The performance of different weights is evaluated, and a comparison is also made with the corresponding mean regression estimator using the same weights.

The second paper examines the use of bootstrapping for bias correction and calculations of confidence intervals for parameters of the quantile regression estimator when longitudinal data are used. Different weights, bootstrap methods, and confidence interval methods are used.

The third paper is devoted to evaluating bootstrap methods for constructing hypothesis tests for parameters of the quantile regression estimator using longitudinal data. The focus is on testing the equality between two groups of one or all of the parameters in a regression model for some quantile using single or joint restrictions. The tests are evaluated regarding both their significance level and their power.

The fourth paper analyzes seven longitudinal data sets from different parts of the biostatistics area by quantile regression methods in order to demonstrate how new insights can emerge on the properties of longitudinal data from using quantile regression methods. The quantile regression estimates are also compared and contrasted with the least squares mean regression estimates for the same data set. In addition to looking at the estimates, confidence intervals and hypothesis testing procedures are examined.
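A hedged sketch of the kind of procedure the second and third papers evaluate, on simulated data: statsmodels' unweighted QuantReg stands in for the thesis's weighted estimator, and whole subjects are resampled so the within-subject dependence is preserved in the bootstrap.

```python
# Minimal sketch (assumed setup, not the thesis's estimator): median
# regression on longitudinal data with a subject-level (cluster) bootstrap
# for confidence intervals.
import numpy as np
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(1)
n_subj, n_obs = 50, 6
subj = np.repeat(np.arange(n_subj), n_obs)
x = rng.uniform(0, 10, size=n_subj * n_obs)
u = np.repeat(rng.normal(0, 1, size=n_subj), n_obs)   # subject effect -> dependent errors
y = 1.0 + 0.5 * x + u + rng.normal(0, 1, size=x.size)

X = np.column_stack([np.ones_like(x), x])
beta_hat = QuantReg(y, X).fit(q=0.5).params

# Resample whole subjects, not individual observations.
boot = []
for _ in range(200):
    take = rng.choice(n_subj, size=n_subj, replace=True)
    idx = np.concatenate([np.where(subj == s)[0] for s in take])
    boot.append(QuantReg(y[idx], X[idx]).fit(q=0.5).params)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("median-regression slope %.3f, 95%% CI (%.3f, %.3f)" % (beta_hat[1], lo[1], hi[1]))
```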

Numerical Simulation Study to Investigate Expected Productivity Improvement Using the "Slot-Drill" Completion

Odunowo, Tioluwanimi Oluwagbemiga May 2012 (has links)
The "slot-drill" completion method, which utilizes a mechanically cut high-conductivity "slot" in the target formation created using a tensioned abrasive cable, has been proposed as an alternative stimulation technique for shale-gas and other low/ultra-low permeability formations. This thesis provides a comprehensive numerical simulation study on the "slot drill" completion technique. Using a Voronoi gridding scheme, I created representative grid systems for the slot-drill completion, as well as for the case of a vertical well with a single fracture, the case of a horizontal well with multiple hydraulic fractures, and various combinations of these completions. I also created a rectangular slot configuration, which is a simplified approximation of the actual "slot-drill" geometry, and investigated the ability of this rectangular approximation to model flow from the more complicated (actual) slot-drill configuration(s). To obtain the maximum possible diagnostic and analytical value, I simulated up to 3,000 years of production, allowing the assessment of production up to the point of depletion (or boundary-dominated flow). These scenarios provided insights into all the various flow regimes, as well as provided a quantitative evaluation of all completion schemes considered in the study. The results of my study illustrated that the "slot-drill" completion technique was not, in general, competitive in terms of reservoir performance and recovery compared to the more traditional completion techniques presently in use. Based on my modeling, it appears that the larger surface area to flow that multistage hydraulic fracturing provides is much more significant than the higher conductivity achieved using the slot-drill technique. This work provides quantitative results and diagnostic interpretations of productivity and flow behavior for low and ultra-low permeability formations completed using the slot-drill method. The results of this study can be used to (a) help evaluate the possible application of the "slot-drill" technique from the perspective of performance and recovery, and (b) to establish aggregated economic factors for comparing the slot-drill technique to more conventional completion and stimulation techniques applied to low and ultra-low permeability reservoirs.

Sample Size in Ordinal Logistic Hierarchical Linear Modeling

Timberlake, Allison M 07 May 2011 (has links)
Most quantitative research is conducted by randomly selecting members of a population on which to conduct a study. When statistics are computed on a sample rather than the entire population of interest, they are subject to a certain amount of error. Many factors can affect the amount of error, or bias, in statistical estimates. One important factor is sample size; larger samples are more likely to minimize bias than smaller samples. Therefore, determining the sample size needed to obtain accurate statistical estimates is a critical component of designing a quantitative study. Much research has been conducted on the impact of sample size on simple statistical techniques such as group mean comparisons and ordinary least squares regression. Less sample size research, however, has been conducted on complex techniques such as hierarchical linear modeling (HLM). HLM, also known as multilevel modeling, is used to explain and predict an outcome based on knowledge of other variables in nested populations. Ordinal logistic HLM (OLHLM) is used when the outcome variable has three or more ordered categories. While there is a growing body of research on sample size for two-level HLM with a continuous outcome, there is no existing research exploring sample size for OLHLM. The purpose of this study was to determine the impact of sample size on statistical estimates for ordinal logistic hierarchical linear modeling. A Monte Carlo simulation study was used to investigate this research question. Four variables were manipulated: level-one sample size, level-two sample size, sample outcome category allocation, and predictor-criterion correlation. The statistical estimates explored include bias in level-one and level-two parameters, power, and prediction accuracy. Results indicate that, holding other conditions constant, bias generally decreases as level-one sample size increases, but increases or remains unchanged as level-two sample size increases. Power to detect the independent variable coefficients increased as both level-one and level-two sample sizes increased, holding other conditions constant. Overall, prediction accuracy was extremely poor: the accuracy rate across conditions was 47.7%, with little variance across conditions and a strong tendency to over-predict the middle outcome category.
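As a hedged illustration of one Monte Carlo replication, the sketch below generates data from a two-level proportional-odds model and then fits a single-level ordinal logit. All design values are assumptions, not the study's conditions, and the clustering is deliberately ignored at the fitting stage, the way a misspecified single-level analysis would.

```python
# Minimal sketch (assumed design, not the study's code): one draw from a
# two-level proportional-odds model -- random school intercepts plus a
# student-level predictor -- fit with statsmodels' single-level ordinal logit.
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n2, n1 = 50, 20                                   # level-2 and level-1 sample sizes
u = np.repeat(rng.normal(0, 1.0, n2), n1)         # random school intercepts
x = rng.normal(size=n2 * n1)                      # predictor (true slope 0.5)
eta = 0.5 * x + u
cuts = np.array([-1.0, 1.0])                      # thresholds -> 3 ordered categories
p = 1 / (1 + np.exp(-(cuts[:, None] - eta)))      # cumulative probabilities P(Y <= k)
y = (rng.uniform(size=eta.size)[None, :] > p).sum(axis=0)

fit = OrderedModel(y, x[:, None], distr="logit").fit(method="bfgs", disp=False)
print(fit.params)   # slope estimate followed by transformed threshold parameters
```

Repeating this across conditions and tallying bias in the slope, rejection rates, and the modal predicted category would reproduce the shape, though not the specifics, of the study's design.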

Measurement invariance of health-related quality of life: a simulation study and numeric example

Sarkar, Joykrishna 23 September 2010 (has links)
Measurement invariance (MI) is a prerequisite for conducting valid comparisons of health-related quality of life (HRQOL) measures across distinct populations. This research investigated the performance of estimation methods for testing MI hypotheses in complex survey data using a simulation study, and demonstrated the application of these methods for an HRQOL measure. Four forms of MI were tested using confirmatory factor analysis. The simulation study showed that, for testing configural invariance, the maximum likelihood method performed best for small sample sizes with low intraclass correlation (ICC), whereas pseudo-maximum likelihood with weights and clustering effects performed better for large sample sizes with high ICC. The two methods performed similarly in testing the other forms of MI. In the numeric example, MI of an HRQOL measure in the Canadian Community Health Survey was investigated and established for Aboriginal and non-Aboriginal populations with chronic conditions, indicating that the two populations had similar conceptualizations of quality of life.
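For reference, the four forms of MI conventionally tested form a hierarchy of increasingly strict cross-group equality constraints on the factor model; the notation below is assumed, and the thesis may define its four forms differently.

```latex
x_{ijg} = \tau_{jg} + \lambda_{jg}\,\xi_{ig} + \varepsilon_{ijg}
\qquad
\begin{aligned}
&\text{configural: same loading pattern in every group } g\\
&\text{metric: } \lambda_{jg} = \lambda_{j}\\
&\text{scalar: } \lambda_{jg} = \lambda_{j},\ \tau_{jg} = \tau_{j}\\
&\text{strict: } \lambda_{jg} = \lambda_{j},\ \tau_{jg} = \tau_{j},\ \operatorname{Var}(\varepsilon_{ijg}) = \theta_{j}
\end{aligned}
```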

Extreme Value Mixture Modelling with Simulation Study and Applications in Finance and Insurance

Hu, Yang January 2013 (has links)
Extreme value theory has been used to develop models for describing the distribution of rare events. Models based on extreme value theory can be used to asymptotically approximate the behaviour of the tail(s) of a distribution function. An important challenge in the application of such extreme value models is the choice of a threshold, beyond which the asymptotically justified extreme value models can provide good extrapolation. One approach to determining the threshold is to fit all of the available data with an extreme value mixture model. This thesis reviews most of the existing extreme value mixture models in the literature and implements them in a package for the statistical programming language R, to make them more readily usable by practitioners, as they are not commonly available in any software. Many different forms of extreme value mixture models appear in the literature (e.g. parametric, semi-parametric and non-parametric), which provide an automated approach for estimating the threshold and accounting for the uncertainty of threshold selection. However, it is not clear how the proportion of data above the threshold, the tail fraction, should be treated, as there is no consistency in the existing model derivations. This thesis develops some new models by adapting existing ones and places them all within a more generalised framework that makes explicit how the tail fraction is defined in the model. Various new models are proposed by extending some of the existing parametric mixture models to have a continuous density at the threshold, which has the advantage of using fewer model parameters and being more physically plausible. The generalised framework within which all the mixture models are placed is used to demonstrate the importance of the specification of the tail fraction. An R package called evmix has been created to enable these mixture models to be more easily applied and further developed. For every mixture model, density, distribution, quantile, random number generation, likelihood and fitting functions are provided (Bayesian inference via MCMC is also implemented for the non-parametric extreme value mixture models). A simulation study investigates the performance of the various extreme value mixture models under different population distributions with a representative variety of lower and upper tail behaviours. The results show that the non-parametric mixture model based on a kernel density estimator provides good tail estimation in general, whilst the parametric and semi-parametric mixture models can give a reasonable fit if the distribution below the threshold is correctly specified. Somewhat surprisingly, including a constraint of continuity at the threshold does not substantially improve the model fit in the upper tail. The hybrid Pareto model performs poorly, as it does not include a tail fraction term. The relevant mixture models are applied to insurance and financial applications, which highlights the practical usefulness of these models.
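A minimal sketch, with an assumed parameterisation, of the simplest such mixture: a normal bulk below the threshold u and a generalised Pareto (GPD) tail above it, showing where the tail fraction phi enters. The thesis's implementations live in the R package evmix; this Python version only illustrates the construction.

```python
# Minimal sketch (assumed parameterisation): density of a normal-bulk /
# GPD-tail extreme value mixture with an explicit tail fraction phi.
import numpy as np
from scipy.stats import norm, genpareto

def normgpd_pdf(x, mu, sigma, u, xi, beta, phi=None):
    """Normal bulk below u, GPD tail above u.

    If phi is None, the 'bulk model' tail fraction phi = 1 - F_norm(u) is
    used, in which case the bulk density is left unscaled; passing a fixed
    phi instead treats the tail fraction as an extra free parameter.
    """
    x = np.asarray(x, dtype=float)
    bulk_mass = norm.cdf(u, mu, sigma)
    if phi is None:
        phi = 1.0 - bulk_mass
    below = (1.0 - phi) * norm.pdf(x, mu, sigma) / bulk_mass
    above = phi * genpareto.pdf(x, xi, loc=u, scale=beta)
    return np.where(x <= u, below, above)

xs = np.linspace(-3, 8, 5)
print(normgpd_pdf(xs, mu=0.0, sigma=1.0, u=1.5, xi=0.2, beta=1.0))
```

The two treatments of phi in this sketch are exactly the inconsistency the abstract refers to: the bulk-model choice ties the tail fraction to the bulk fit, while a parameterised phi decouples them.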

On Pairs Trading : A Comparison between Cointegration and Correlation as Selection Criteria

Hognesius, Erik, Höllerbauer, Jakob January 2014 (has links)
In this paper we show that pairs of stocks that have a true long-run equilibrium (cointegration) yield a higher return than pairs of stocks that rely on a more spurious relationship (correlation) when applying pairs trading over the period 31 December 2009 to 25 June 2014. The cointegration portfolio earns an annual return of 4.15% with a Sharpe ratio of 0.87; the correlation portfolio earns 2.08% and 0.45, respectively. The Sharpe ratio of a buy-and-hold market index over the same period was 1.08.
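A hedged sketch of the two selection criteria on simulated prices (the paper uses real stock data; the data-generating values here are assumptions): rank candidate pairs by the Engle-Granger cointegration p-value versus plain price correlation.

```python
# Minimal sketch (assumed procedure): compare the two pair-selection
# criteria -- Engle-Granger cointegration test vs. price correlation.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(3)
n = 1000
common = np.cumsum(rng.normal(size=n))            # shared stochastic trend
prices = {
    "A": common + rng.normal(0, 1, n),            # cointegrated with B
    "B": 0.9 * common + rng.normal(0, 1, n),
    "C": np.cumsum(rng.normal(size=n)),           # independent random walk
}

names = list(prices)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        a, b = prices[names[i]], prices[names[j]]
        pval = coint(a, b)[1]                     # Engle-Granger p-value
        corr = np.corrcoef(a, b)[0, 1]            # can be spuriously high for random walks
        print(f"{names[i]}-{names[j]}: coint p={pval:.3f}, corr={corr:.2f}")
```

The independent random walk C can show high correlation with A or B in any given draw while failing the cointegration test, which is the spurious-relationship risk the paper's comparison targets.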

An Attempt at Adaptive Testing for the "Relational Judgment and Application Ability" and "Memory" Domains of the Discrimination Aptitude Test A-1001

野口, 裕之, Noguchi, Hiroyuki 26 December 1997 (has links)
Uses content digitized by the National Institute of Informatics.
