1

Leveraged Plans for Measurement System Assessment

Browne, Ryan, January 2009
In manufacturing, measurement systems are used to control processes and inspect parts with the goal of producing high quality product for the customer. Modern Quality Systems require the periodic assessment of key measurement systems to ensure that they are functioning as expected. Estimating the proportion of the process variation due to the measurement system is an important part of these assessments. The measurement system may be simple, for example, with one gauge automatically measuring a single characteristic on every part, or complex, with multiple characteristics, gauges, operators, etc. Traditional assessment plans involve selecting a random sample of parts and then repeatedly measuring each part under a variety of conditions that depend on the complexity of the measurement system.

In this thesis, we propose new plans for assessing the measurement system variation based on the concept of leveraging. In a leveraged plan, we select parts (non-randomly) with extreme initial values to measure repeatedly. Depending on the context, parts with initial measurements may be available from regular production or from a specially conducted baseline study. We use the term leveraging because of the re-use of parts with extreme values. The term leverage has been used by the proponents of the problem-solving system initially proposed by Dorian Shainin, in which parts with relatively large and small values of the response are compared to identify the major causes of the variation. There is no discussion in the literature of the theory of leveraging or of its application to measurement system assessment. In this thesis, we provide motivation for why leveraging is valuable and apply it to measurement system assessments.

We consider three common contexts in the thesis: simple measurement systems with one gauge, no operator effects and no external information about the process performance; measurement systems, as stated above, where we have external information, as would be the case, for example, if the measurement system is used for 100% inspection; and measurement systems with multiple operators. For each of these contexts, we develop new leveraged assessment plans and show that these plans are substantially more efficient than traditional plans in estimating the proportion of the process variation due to the measurement system. In each case, we also provide methodology for planning the leveraged study and for analysing the data generated.

We then develop another new application of leveraging in the assessment of a measurement system used for 100% inspection. A common practice is to re-measure all parts with a first measurement outside of the inspection limits. We propose using these repeated measurements to assess the variation in the measurement system. Here the system itself does the leveraging, since we have repeated measurements only on relatively large or small parts. We recommend using maximum likelihood estimation, but we show that the ANOVA estimator, although biased, is comparable to the MLE when the measurement system is reliable. We also provide guidelines on how to schedule such assessments.

To outline the thesis: in the first two chapters, we review the contexts described above. For each context, we discuss how to characterize the measurement system performance, the common assessment plans and their analysis. In Chapter 3, we introduce the concept of leveraging and provide motivation for why it is effective. Chapters 4 to 7 contain the bulk of the new results in the thesis.
In Chapters 4, 5 and 6, which correspond to the three contexts described above, we provide new leveraged plans, show their superiority to the standard plans and provide a methodology to help design leveraged plans. In Chapter 7, we show how to assess an inspection system using repeated measurements on initially rejected parts. In the final chapter, we discuss other potential applications of leveraging to other measurement system assessment problems and to a problem in genetics.
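The abstract gives no formulas, but the quantity being estimated is the proportion of total variation attributable to the measurement system, gamma = sigma_m^2 / (sigma_p^2 + sigma_m^2). The sketch below is not the thesis's actual plan or analysis: it assumes the simplest context (one gauge, no operator effects), simulates a baseline sample, re-measures only the parts with extreme initial values, and forms rough moment-style estimates, whereas the thesis develops maximum likelihood analyses for such leveraged plans.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustration only: assumed true part-to-part and measurement variances.
sigma_p2, sigma_m2 = 1.0, 0.1   # gamma = sigma_m2 / (sigma_p2 + sigma_m2) ~= 0.09

n, k, n_extreme = 200, 5, 10    # baseline parts, repeats per chosen part, parts re-measured
true_parts = rng.normal(0.0, np.sqrt(sigma_p2), size=n)
initial = true_parts + rng.normal(0.0, np.sqrt(sigma_m2), size=n)  # one initial measurement each

# Leveraging: re-measure only the parts with the most extreme initial values.
order = np.argsort(initial)
chosen = np.concatenate([order[:n_extreme // 2], order[-(n_extreme // 2):]])
repeats = true_parts[chosen, None] + rng.normal(0.0, np.sqrt(sigma_m2), size=(len(chosen), k))

# Rough moment-style estimates (the thesis uses MLE-based analyses instead):
# the within-part spread of the repeats estimates the measurement variance,
# the baseline spread estimates the total variance of a single measurement.
sigma_m2_hat = repeats.var(axis=1, ddof=1).mean()
total_hat = initial.var(ddof=1)
print(f"estimated proportion due to measurement system: {sigma_m2_hat / total_hat:.3f}")
```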
2

Essays on random effects models and GARCH

Skoglund, Jimmy, January 1900
Diss. Stockholm : Handelshögsk., 2001.
3

Robustness of normal theory inference when random effects are not normally distributed

Devamitta Perera, Muditha Virangika, January 1900
Master of Science / Department of Statistics / Paul I. Nelson / The variance of a response in a one-way random effects model can be expressed as the sum of the variability among and within treatment levels. Conventional methods of statistical analysis for these models are based on the assumption of normality of both sources of variation. Since this assumption is not always satisfied and can be difficult to check, it is important to explore the performance of normal-based inference when normality does not hold. This report uses simulation to explore and assess the robustness of the F-test for the presence of an among-treatment variance component, and of the normal theory confidence interval for the intra-class correlation coefficient, under several non-normal distributions. It was found that the power function of the F-test is robust for moderately heavy-tailed random error distributions. However, for very heavy-tailed random error distributions, power is relatively low, even for a large number of treatments. Coverage rates of the confidence interval for the intra-class correlation coefficient are far from nominal for very heavy-tailed, non-normal random effect distributions.
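As a rough illustration of the kind of study described (not the report's actual simulation design; the distributions and settings below are assumptions), the sketch estimates the Monte Carlo rejection rate of the one-way ANOVA F-test for a treatment variance component when the errors are normal versus scaled t with heavy tails. In this model the intra-class correlation coefficient is sigma_a^2 / (sigma_a^2 + sigma_e^2).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def reject_rate(n_groups=20, n_per=5, sigma_a=0.5, error_df=None, n_sim=2000, alpha=0.05):
    """Monte Carlo rejection rate of the one-way ANOVA F-test for an
    among-treatment variance component; error_df=None gives normal errors,
    otherwise scaled t errors (heavier tails). Illustrative sketch only."""
    crit = stats.f.ppf(1 - alpha, n_groups - 1, n_groups * (n_per - 1))
    rejections = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, sigma_a, size=n_groups)          # normal random effects
        if error_df is None:
            e = rng.normal(0.0, 1.0, size=(n_groups, n_per))
        else:  # scale t errors to unit variance for a fair comparison
            e = rng.standard_t(error_df, size=(n_groups, n_per))
            e *= np.sqrt((error_df - 2) / error_df)
        y = a[:, None] + e
        msa = n_per * y.mean(axis=1).var(ddof=1)              # among-group mean square
        mse = y.var(axis=1, ddof=1).mean()                    # within-group mean square
        rejections += (msa / mse) > crit
    return rejections / n_sim

print("normal errors:", reject_rate())
print("t(5) errors  :", reject_rate(error_df=5))
print("t(3) errors  :", reject_rate(error_df=3))
```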
4

Multilevel modelling of child mortality : Gibbs sampling versus other approaches

Prevost, Andrew Toby, January 1996
No description available.
5

Models and estimation for repeated ordinal responses, with application to telecommunications experiments

Wolfe, Rory St John, January 1996
No description available.
6

A Meta-Analysis of School-Based Problem-Solving Consultation Outcomes: A Review from 1986 to 2009

Davis, Cole, August 2012
School-based problem-solving consultation is an indirect problem-solving process in which the consultant works directly with the teacher to solve a current work problem of the teacher. The immediate focus of school-based problem-solving consultation is to remediate a current difficulty; in the process, however, the teacher develops coping skills that improve his or her ability to handle future problems. Although school-based problem-solving consultation has been the subject of several previous syntheses of the literature attesting to its promise, the current state of its effectiveness was not known. This study sought to update the school-based problem-solving consultation effectiveness literature by conducting a meta-analysis spanning the years 1986 to 2009. A secondary goal was to identify variables that functioned as moderators. Following procedures advocated by Lipsey and Wilson in 2001, 19 studies were identified, producing 205 effect sizes. Because these effect sizes were not independent, the effect sizes from each study were averaged to form a mean effect size per study, and the study means were then averaged to form the omnibus mean effect size. The omnibus mean effect size from the 19 studies was g = 0.42, with a range of -0.01 to 1.52, indicating a medium-sized effect. This effect size was more modest in magnitude than those reported in previous school-based problem-solving consultation meta-analyses; however, the results indicated that school-based problem-solving consultation positively impacted client-level outcomes. With the exception of grade level, moderator analyses produced little information in terms of statistical differences between and among categories for teacher type of class, consultant type, school type, referral source, referral reason, consultation model, comparison group, intervention type, design quality, outcome measured, and data type. For grade level, students in the “Other/Not Specified” category benefited most from school-based problem-solving consultation when compared to the “Elementary (K-6)” category. In addition to examining the omnibus mean effect size and potential moderators, limitations and implications for practice and future research are discussed.
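A tiny numerical illustration of the two-step averaging described above, with made-up effect sizes rather than data from the meta-analysis:

```python
import numpy as np

# Hypothetical per-study effect sizes (Hedges' g); each inner list is one study
# that contributed several non-independent effect sizes.
study_effects = [
    [0.55, 0.40, 0.62],   # study 1
    [0.10, 0.25],         # study 2
    [0.90],               # study 3
    [-0.05, 0.30, 0.15],  # study 4
]

# Step 1: average within each study so every study contributes one effect size.
study_means = np.array([np.mean(es) for es in study_effects])

# Step 2: average the study means to form the omnibus mean effect size.
omnibus = study_means.mean()
print(study_means.round(2), "-> omnibus g =", round(omnibus, 2))
```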
7

Causal Inference Using Propensity Score Matching in Clustered Data

Oelrich, Oscar, January 2014
Propensity score matching is commonly used to estimate causal effects of treatments. However, when using data with a hierarchical structure, we need to take the multilevel nature of the data into account. In this thesis the estimation of propensity scores with multilevel models is presented to extend propensity score matching for use with multilevel data. A Monte Carlo simulation study is performed to evaluate several different estimators. It is shown that propensity score estimators ignoring the multilevel structure of the data are biased, while fixed effects models produce unbiased results. An empirical study of the causal effect of truancy on mathematical ability for Swedish 9th graders is also performed, where it is shown that truancy has a negative effect on mathematical ability.
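As a rough sketch of the comparison described (the data, model, and matching routine below are simplified assumptions, not the thesis's simulation design), propensity scores can be estimated with and without cluster fixed effects and then used for 1:1 nearest-neighbour matching:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Hypothetical multilevel data: pupils nested in schools (clusters).
n_clusters, n_per = 30, 50
school = np.repeat(np.arange(n_clusters), n_per)
u = rng.normal(0.0, 1.0, n_clusters)[school]   # school effect on both treatment and outcome
x = rng.normal(size=school.size)               # pupil-level covariate
treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 * x + u))))
y = 1.0 * treat + 0.8 * x + u + rng.normal(size=school.size)   # true treatment effect = 1

def matched_effect(design):
    """Propensity scores from a logistic model, then 1:1 nearest-neighbour matching."""
    ps = LogisticRegression(C=1e6, max_iter=2000).fit(design, treat).predict_proba(design)[:, 1]
    t_idx, c_idx = np.where(treat == 1)[0], np.where(treat == 0)[0]
    matches = c_idx[np.abs(ps[t_idx, None] - ps[None, c_idx]).argmin(axis=1)]
    return (y[t_idx] - y[matches]).mean()

x_only = x[:, None]                                              # ignores the clustering
x_fe = np.column_stack([x, pd.get_dummies(school).to_numpy()])   # school fixed effects (dummies)
print("ignoring clusters     :", round(matched_effect(x_only), 2))
print("cluster fixed effects :", round(matched_effect(x_fe), 2))
```

Including the school dummies absorbs the school-level confounding into the propensity model, which is the intuition behind the fixed effects estimators the thesis finds to be unbiased.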
8

Spatio-temporal modelling of climate-sensitive disease risk : towards an early warning system for dengue in Brazil

Lowe, Rachel, January 2011
The transmission of many infectious diseases is affected by climate variations, particularly for diseases spread by arthropod vectors such as malaria and dengue. Previous epidemiological studies have demonstrated statistically significant associations between infectious disease incidence and climate variations. Such research has highlighted the potential for developing climate-based epidemic early warning systems. To establish how much variation in disease risk can be attributed to climatic conditions, non-climatic confounding factors should also be considered in the model parameterisation to avoid reporting misleading climate-disease associations. This issue is sometimes overlooked in climate related disease studies. Due to the lack of spatial resolution and/or the capability to predict future disease risk (e.g. several months ahead), some previous models are of limited value for public health decision making. This thesis proposes a framework to model spatio-temporal variation in disease risk using both climate and non-climate information. The framework is developed in the context of dengue fever in Brazil. Dengue is currently one of the most important emerging tropical diseases and dengue epidemics impact heavily on Brazilian public health services. A negative binomial generalised linear mixed model (GLMM) is adopted which makes allowances for unobserved confounding factors by including spatially structured and unstructured random effects. The model successfully accounts for the large amount of overdispersion found in disease counts. The parameters in this spatio-temporal Bayesian hierarchical model are estimated using Markov Chain Monte Carlo (MCMC). This allows posterior predictive distributions for disease risk to be derived for each spatial location and time period (month/season). Given decision and epidemic thresholds, probabilistic forecasts can be issued, which are useful for developing epidemic early warning systems. The potential to provide useful early warnings of future increased and geographically specific dengue risk is investigated. The predictive validity of the model is evaluated by fitting the GLMM to data from 2001-2007 and comparing probabilistic predictions to the most recent out-of-sample data in 2008-2009. For a probability decision threshold of 30% and the pre-defined epidemic threshold of 300 cases per 100,000 inhabitants, successful epidemic alerts would have been issued for 94% of the 54 microregions that experienced high dengue incidence rates in South East Brazil, during February - April 2008.
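To make the alerting rule concrete, here is a minimal sketch, with invented posterior predictive samples rather than output from the thesis's model, of how the 30% decision threshold and the 300 cases per 100,000 epidemic threshold translate posterior predictive draws into an alert:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical posterior predictive samples of dengue incidence (cases per
# 100,000) for three microregions in one season, e.g. as produced by MCMC
# for a negative binomial GLMM.
posterior_pred = {
    "region_A": rng.negative_binomial(n=5, p=5 / (5 + 150), size=4000),
    "region_B": rng.negative_binomial(n=5, p=5 / (5 + 400), size=4000),
    "region_C": rng.negative_binomial(n=5, p=5 / (5 + 280), size=4000),
}

EPIDEMIC_THRESHOLD = 300   # cases per 100,000 (pre-defined epidemic threshold)
DECISION_THRESHOLD = 0.30  # issue an alert if P(incidence > 300) exceeds 30%

for region, samples in posterior_pred.items():
    p_epidemic = (samples > EPIDEMIC_THRESHOLD).mean()
    alert = "ALERT" if p_epidemic > DECISION_THRESHOLD else "no alert"
    print(f"{region}: P(incidence > {EPIDEMIC_THRESHOLD}) = {p_epidemic:.2f} -> {alert}")
```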
9

Meta-Analytic Estimation Techniques for Non-Convergent Repeated-Measure Clustered Data

Wang, Aobo, 01 January 2016
Clustered data often feature nested structures and repeated measures. When coupled with binary outcomes and large samples (>10,000), this complexity can lead to non-convergence problems for the desired model, especially if random effects are used to account for the clustering. One way to bypass the convergence problem is to split the dataset into sub-samples small enough for the desired model to converge, and then recombine the results from those sub-samples through meta-analysis. We consider two ways to generate sub-samples: the K independent samples approach, where the data are split into k mutually exclusive sub-samples, and the cluster-based approach, where naturally existing clusters serve as sub-samples. Estimates or test statistics from either of these sub-sampling approaches can then be recombined using a univariate or multivariate meta-analytic approach. We also provide an innovative approach for simulating clustered and dependent binary data by simulating parameter templates that yield the desired cluster behavior. This approach is used to conduct simulation studies comparing the performance of the K independent samples and cluster-based approaches to generating sub-samples, with the results combined using either univariate or multivariate meta-analytic techniques. These studies show that using natural clusters led to less biased test statistics when the number of clusters and the treatment effect were large, compared to the K independent samples approach, for both the univariate and multivariate meta-analytic approaches; the K independent samples approach was preferred when the number of clusters and the treatment effect were small. We also apply these methods to data on cancer screening behaviors obtained from electronic health records of n=15,652 individuals and show that the estimated results support the conclusions from the simulation studies.
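A minimal sketch of the K independent samples split followed by a univariate fixed-effect (inverse-variance) recombination; for simplicity it fits a plain logistic regression per sub-sample rather than the random-effects model that motivates the splitting, and all data and settings are invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)

# Hypothetical large binary dataset for which a single mixed model might not
# converge; here we illustrate only the split-and-recombine step.
n, k = 20000, 10
x = rng.normal(size=n)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.3 * x - 0.5))))

# K independent sub-samples: shuffle, then split into k mutually exclusive parts.
idx = rng.permutation(n)
estimates, variances = [], []
for part in np.array_split(idx, k):
    X = sm.add_constant(x[part])
    fit = sm.Logit(y[part], X).fit(disp=0)   # the "desired model" per sub-sample
    estimates.append(fit.params[1])          # slope for x
    variances.append(fit.bse[1] ** 2)

# Univariate fixed-effect meta-analysis: inverse-variance weighted combination.
w = 1.0 / np.array(variances)
beta = np.sum(w * np.array(estimates)) / w.sum()
se = np.sqrt(1.0 / w.sum())
print(f"combined slope = {beta:.3f} (se {se:.3f})")
```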
