1

Multiple Frame Sampling Theory And Applications

Dalcik, Aylin, 01 February 2010
One of the most important practical problems in conducting sample surveys is that the list used for selecting the sample is generally incomplete or out of date, so sample surveys can produce seriously biased estimates of the population parameters. Updating a list, on the other hand, is a difficult and very expensive operation. Multiple-frame sampling refers to surveys in which two or more frames are used and independent samples are drawn from each frame, under the assumption that the union of the frames covers the whole population. There are two major reasons for using multiple-frame sampling. First, using two or more frames can cover most of the target population and therefore reduces bias due to coverage error. Second, a multiple-frame sampling design may result in considerable cost savings over a single-frame design.
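
A minimal sketch of the dual-frame idea described above, using the classical Hartley-type composite estimator with simple random samples from each frame; the population, domain sizes, sample sizes and compositing weight theta below are hypothetical and are not taken from the thesis.

    # Population split into domain a (frame A only), b (frame B only), ab (overlap)
    set.seed(1)
    pop <- data.frame(
      y      = c(rnorm(600, 50, 10), rnorm(300, 60, 10), rnorm(100, 55, 10)),
      domain = rep(c("a", "b", "ab"), c(600, 300, 100))
    )
    frame_A <- pop[pop$domain %in% c("a", "ab"), ]   # frame A misses domain b
    frame_B <- pop[pop$domain %in% c("b", "ab"), ]   # frame B misses domain a

    s_A <- frame_A[sample(nrow(frame_A), 80), ]      # independent SRS from each frame
    s_B <- frame_B[sample(nrow(frame_B), 40), ]
    wt_A <- nrow(frame_A) / 80                       # SRS expansion weights
    wt_B <- nrow(frame_B) / 40
    theta <- 0.5                                     # fixed compositing weight

    # Hartley-type composite estimate of the population total
    Y_hat <- wt_A * sum(s_A$y[s_A$domain == "a"]) +
             wt_B * sum(s_B$y[s_B$domain == "b"]) +
             theta       * wt_A * sum(s_A$y[s_A$domain == "ab"]) +
             (1 - theta) * wt_B * sum(s_B$y[s_B$domain == "ab"])
    c(estimate = Y_hat, true_total = sum(pop$y))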
2

The Effects Of Inspection Error And Rework On Quality Loss For A Nominal-the-best Type Quality Characteristic

Taseli, Aysun, 01 August 2004
Taguchi defines quality loss as the loss imposed on the consumer for each unit of deviation from the target consumer requirements. In this thesis, the effects of inspection error and rework on quality loss are studied for a nominal-the-best type quality characteristic. The distribution of the quality characteristic is investigated in a production environment with inspection error and a separate rework facility, under a 100% inspection policy. After deriving the mean and variance of the resulting distribution of the quality characteristic, the true and simulated quality loss values are calculated for a number of scenarios. Furthermore, the effects of the deviation of the process mean from the target and of the rework variance are studied, together with inspection error and process capability, through a full factorial experimental design. The results are discussed for possible use as quality improvement project selection criteria.
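
A minimal sketch of the nominal-the-best quality loss L(y) = k (y - T)^2 and its behaviour under 100% inspection with measurement error; the loss coefficient, target, specification limits and error variance below are hypothetical, and the rework stage of the thesis model is omitted.

    set.seed(2)
    k <- 5; T0 <- 10                      # loss coefficient and target
    mu <- 10.2; sigma <- 0.4              # process mean and standard deviation
    y  <- rnorm(1e5, mu, sigma)           # true quality characteristic
    m  <- y + rnorm(1e5, 0, 0.1)          # measured value with inspection error

    spec <- c(9, 11)                                # hypothetical specification limits
    shipped <- y[m >= spec[1] & m <= spec[2]]       # 100% inspection acts on the measured value

    mean(k * (shipped - T0)^2)            # simulated quality loss per shipped unit
    k * (sigma^2 + (mu - T0)^2)           # expected loss with no inspection, for comparison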
3

The Turkish Catastrophe Insurance Pool Claims Modeling 2000-2008 Data

Saribekir, Gozde, 01 March 2013
After the 1999 Marmara Earthquake, social, economic and engineering studies on earthquakes became more intensive. The Turkish Catastrophe Insurance Pool (TCIP) was established after the Marmara Earthquake to share the resulting deficit in the government budget. The TCIP has become a data source for researchers, containing variables such as the number of claims, claim amounts and earthquake magnitude. In this thesis, the TCIP earthquake claims collected between 2000 and 2008 are studied. The number of claims and the claim payments (aggregate claim amount) are modeled using Generalized Linear Models (GLM). Sudden jumps observed in the claim data are represented using an exponential kernel function. Model parameters are estimated by Maximum Likelihood Estimation (MLE). The results can be used as a recommendation in computing the expected aggregate claim amounts and the premiums of the TCIP.
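
As a rough illustration of the GLM forms typically used for claim frequency and severity (a Poisson model for counts and a Gamma model for payments, both with a log link), the sketch below uses simulated data with a single hypothetical magnitude covariate; it is not the TCIP data nor the exact model of the thesis.

    set.seed(3)
    magnitude <- runif(200, 4, 7)                          # hypothetical covariate
    n_claims  <- rpois(200, exp(-2 + 0.8 * magnitude))     # claim counts
    amount    <- rgamma(200, shape = 2,
                        rate = 2 / exp(1 + 0.5 * magnitude))  # positive claim payments

    freq_fit <- glm(n_claims ~ magnitude, family = poisson(link = "log"))
    sev_fit  <- glm(amount ~ magnitude, family = Gamma(link = "log"))
    summary(freq_fit)$coefficients
    predict(sev_fit, newdata = data.frame(magnitude = 6),  # expected payment at magnitude 6
            type = "response")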
4

Pairwise Multiple Comparisons Under Short-tailed Symmetric Distribution

Balci, Sibel, 01 May 2007
In this thesis, pairwise multiple comparisons and multiple comparisons with a control are studied when the observations have short-tailed symmetric distributions. The testing procedure under non-normality is given, and the estimators used in this procedure are reviewed: Huber estimators, the trimmed mean with winsorized standard deviation, modified maximum likelihood estimators, and the ordinary sample mean and sample variance. Finally, the robustness properties of these estimators are compared, and it is shown that the test based on the modified maximum likelihood estimators has better robustness properties under the short-tailed symmetric distribution.
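
A minimal sketch, on simulated data, of two of the robust ingredients reviewed above: a 10% trimmed mean and a winsorized standard deviation per group, shown alongside classical pairwise t-tests for comparison; the group structure and shift are hypothetical.

    set.seed(4)
    g <- gl(3, 20, labels = c("A", "B", "C"))
    y <- rnorm(60) + 0.8 * (g == "C")               # group C shifted upward

    trim_mean <- tapply(y, g, mean, trim = 0.1)     # 10% trimmed mean per group
    win_sd <- tapply(y, g, function(x) {            # winsorized standard deviation
      q <- quantile(x, c(0.1, 0.9))
      sd(pmin(pmax(x, q[1]), q[2]))
    })
    rbind(trim_mean, win_sd)
    pairwise.t.test(y, g, p.adjust.method = "bonferroni")  # classical counterpart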
5

Effect Of Estimation In Goodness-of-fit Tests

Eren, Emrah, 01 September 2009
In statistical analysis, distributional assumptions are needed to apply parametric procedures, and these assumptions about the underlying distribution must hold for statistical inferences to be accurate. Goodness-of-fit tests are used to check the validity of distributional assumptions. To apply some goodness-of-fit tests, the unknown population parameters must be estimated; when parameters are replaced by their estimators, the null distributions of the test statistics become complicated or depend on the unknown parameters, which restricts the use of the test. Goodness-of-fit statistics that are invariant to the parameters can be used if the distribution under the null hypothesis is a location-scale distribution; for location- and scale-invariant goodness-of-fit tests there is no need to estimate the unknown population parameters, although approximations are still used in some of those tests. Different estimation and approximation techniques are used in this study to compute goodness-of-fit statistics for complete and censored samples from univariate distributions, as well as for complete samples from the bivariate normal distribution. Simulated power properties of the goodness-of-fit tests against a broad range of skew and symmetric alternative distributions are examined to identify the effect of estimation on these tests. The main aim of this thesis is to modify goodness-of-fit tests by using different estimators or approximation techniques and to assess the effect of estimation on the power of these tests.
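
A minimal sketch of why estimation matters here: plugging the sample mean and standard deviation into a Kolmogorov-Smirnov test invalidates its usual null distribution, so a parametric-bootstrap (Lilliefors-type) null is simulated instead. The data and bootstrap size are hypothetical, and this is not the thesis procedure itself.

    set.seed(5)
    x <- rnorm(50, mean = 3, sd = 2)
    D_obs <- ks.test(x, "pnorm", mean(x), sd(x))$statistic   # plug-in estimates

    D_boot <- replicate(2000, {                              # bootstrap null distribution
      xb <- rnorm(length(x), mean(x), sd(x))
      ks.test(xb, "pnorm", mean(xb), sd(xb))$statistic
    })
    mean(D_boot >= D_obs)        # bootstrap p-value accounting for estimation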
6

Robust Estimation And Hypothesis Testing In Microarray Analysis

Ulgen, Burcin Emre, 01 August 2010
Microarray technology allows the simultaneous measurement of thousands of gene expressions. As a result, many statistical methods have emerged for identifying differentially expressed genes. Kerr et al. (2001) proposed an analysis of variance (ANOVA) procedure for the analysis of gene expression data. Their estimators are based on the assumption of normality; however, as they note, the parameter estimates and residuals from this analysis are notably heavier-tailed than normal. Since non-normality complicates the data analysis and leads to inefficient estimators, it is important to develop statistical procedures that are both efficient and robust. For this reason, in this work we use the Modified Maximum Likelihood (MML) and Adaptive Maximum Likelihood (AMML) estimation methods (Tiku and Suresh, 1992) and show that the MML and AMML estimators are more efficient and robust. We compare the MML and AMML methods with widely used statistical analysis methods via simulations and real microarray data sets.
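
A minimal sketch of the kind of efficiency comparison made in such robustness studies, with a 20% trimmed mean standing in for the MML/AMML estimators (which have their own closed-form weights in Tiku and Suresh, 1992); the heavy-tailed t3 errors and the sample size are illustrative only.

    set.seed(6)
    sim <- replicate(5000, {
      e <- rt(20, df = 3)                       # heavier-tailed than normal errors
      c(ls = mean(e), robust = mean(e, trim = 0.2))
    })
    apply(sim, 1, var)                          # smaller variance = more efficient estimator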
7

The Application Of Disaggregation Methods To The Unemployment Rate Of Turkey

Tuker, Utku Goksel, 01 September 2010
Modeling and forecasting the unemployment rate of a country is very important for taking precautions in governmental policies. The available unemployment rate data for Turkey provided by the Turkish Statistical Institute (TURKSTAT) are not in a suitable format for building a time series model. The unemployment rate data between 1988 and 2009 pose a problem for building a reliable time series model due to the insufficient number and irregular form of the observations. Applying disaggregation methods to parts of the unemployment rate data enables us to fit an appropriate time series model and to produce forecasts from the suggested model.
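
A naive stand-in, not the disaggregation methods applied in the thesis: the sketch below simply interpolates a short hypothetical annual rate series to a quarterly frequency, which is the kind of higher-frequency series on which a richer time series model could then be built.

    annual <- ts(c(8.4, 7.7, 8.9, 10.3, 11.0),          # hypothetical annual rates
                 start = 1988, frequency = 1)
    quarterly <- ts(approx(time(annual), as.numeric(annual),
                           n = 4 * length(annual))$y,   # linear interpolation to quarters
                    start = 1988, frequency = 4)
    quarterly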
8

The Effect Of Temporal Aggregation On Univariate Time Series Analysis

Sariaslan, Nazli, 01 September 2010
Most time series are constructed by some kind of aggregation, and temporal aggregation can be defined as aggregation over consecutive time periods. Temporal aggregation plays an important role in time series analysis since the choice of time unit clearly influences the type of model and the forecast results; a totally different time series model can be fitted to the same variable over different time periods. In this thesis, the effect of temporal aggregation on univariate time series models is studied in terms of the modeling and forecasting procedure, via a simulation study and an application based on a southern oscillation data set. The simulation study shows how the model, the mean square forecast error and the estimated parameters change when temporally aggregated data are used, for different orders of aggregation and sample sizes. The effect of temporal aggregation is also demonstrated on the southern oscillation data set for different orders of aggregation. It is observed that the effect of temporal aggregation should be taken into account in data analysis, since temporal aggregation can give rise to misleading results and inferences.
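
A minimal sketch of the kind of experiment described above: simulate a monthly AR(1) series, aggregate it to quarterly values (order-3 aggregation), and compare the fitted AR coefficients; the AR parameter and series length are hypothetical.

    set.seed(8)
    monthly   <- ts(arima.sim(list(ar = 0.7), n = 360), frequency = 12)
    quarterly <- aggregate(monthly, nfrequency = 4, FUN = sum)   # order-3 aggregation
    arima(monthly,   order = c(1, 0, 0))$coef
    arima(quarterly, order = c(1, 0, 0))$coef    # the AR coefficient changes after aggregation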
9

Which Method Gives The Best Forecast For Longitudinal Binary Response Data?: A Simulation Study

Aslan, Yasemin, 01 October 2010
Panel data, also known as longitudinal data, consist of repeated measurements taken from the same subject at different time points. Although forecasting is generally associated with time series applications, it can also be applied to panel data because of their time dimension; however, there is only a limited number of studies in this area in the literature. In this thesis, forecasting is studied for panel data with a binary response, given its increasing importance and fundamental role. A simulation study is carried out to compare the efficiency of different methods and to find the one that gives the optimal forecast values. In this simulation, 21 different methods, ranging from naïve to complex ones, are used with the help of the R software. It is concluded that transition models and random effects models with no lag of the response can be chosen to obtain the most accurate forecasts, especially for the first two years of forecasting.
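
A minimal sketch, on simulated data, of one of the candidate approaches named above: a first-order transition model, i.e. a logistic regression with the lagged binary response as a covariate, used to forecast the next time point; the covariate and coefficients are hypothetical.

    set.seed(9)
    n <- 200; t_len <- 5
    y <- matrix(0, n, t_len)                 # binary panel: n subjects, t_len periods
    x <- rnorm(n)                            # time-constant covariate
    for (t in 2:t_len)
      y[, t] <- rbinom(n, 1, plogis(-0.5 + 1.2 * y[, t - 1] + 0.8 * x))

    dat <- data.frame(y    = as.vector(y[, 2:t_len]),
                      ylag = as.vector(y[, 1:(t_len - 1)]),
                      x    = rep(x, t_len - 1))
    fit <- glm(y ~ ylag + x, family = binomial, data = dat)
    predict(fit, newdata = data.frame(ylag = y[, t_len], x = x),
            type = "response")[1:5]          # forecast probabilities for the next period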
10

On Multivariate Longitudinal Binary Data Models And Their Applications In Forecasting

Asar, Ozgur, 01 July 2012
Longitudinal data arise when subjects are followed over time. This type of data is typically dependent because it includes repeated observations, and this dependence is termed within-subject dependence. Often the scientific interest is in multiple longitudinal measurements, which introduce two additional types of association: between-response and cross-response temporal dependence. Only statistical methods that take these association structures into account can yield reliable and valid statistical inferences. Although methods for univariate longitudinal data have been studied extensively, multivariate longitudinal data still need more work. In this thesis, although we mainly focus on multivariate longitudinal binary data models, we also consider other types of response families when necessary. We extend work on multivariate marginal models, namely multivariate marginal models with response-specific parameters (MMM1), and propose multivariate marginal models with shared regression parameters (MMM2). Both models are based on generalized estimating equations (GEE) and are valid for several response families such as Binomial, Gaussian, Poisson, and Gamma. Two R packages, mmm and mmm2, are proposed to fit them, respectively. We further develop a marginalized multilevel model, namely the probit normal marginalized transition random effects model (PNMTREM), for multivariate longitudinal binary responses. In this model, the implicit function theorem is used, for the first time, to explicitly link the levels of marginalized multilevel models with transition structures. An R package, pnmtrem, is proposed to fit the model. PNMTREM is applied to data collected through the Iowa Youth and Families Project (IYFP). Five different models, including univariate and multivariate ones, are considered for forecasting multivariate longitudinal binary data. A comparative simulation study, which includes a model-independent data simulation process, is conducted for this purpose, and forecasting of the independent variables is taken into account as well. To assess the forecasts, several accuracy measures are considered, such as the expected proportion of correct predictions (ePCP), the area under the receiver operating characteristic (AUROC) curve, and the mean absolute scaled error (MASE). The Mother's Stress and Children's Morbidity (MSCM) data are used to illustrate this comparison in a real-life setting. The results show that marginalized models yield better forecasting results than marginal models, and the simulation results are in agreement with these findings.
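
A minimal sketch of a marginal (GEE-based) logistic model for a repeatedly measured binary response, fitted here with the generic geepack package rather than the mmm, mmm2 or pnmtrem packages proposed in the thesis; the data, covariate and working correlation structure are hypothetical.

    # install.packages("geepack")
    library(geepack)
    set.seed(10)
    n <- 100; t_len <- 4
    dat <- data.frame(id   = rep(1:n, each = t_len),        # subject identifier
                      time = rep(1:t_len, n),
                      x    = rnorm(n * t_len))
    dat$y <- rbinom(n * t_len, 1, plogis(-0.3 + 0.7 * dat$x))

    fit <- geeglm(y ~ x + time, id = id, data = dat,        # marginal logistic model
                  family = binomial, corstr = "exchangeable")
    summary(fit)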
