  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
581

Effect of an Interactive Component on Students' Conceptual Understanding of Hypothesis Testing

Inkpen, Sarah Anne 01 January 2016 (has links)
The Premier Technical College of Qatar (PTC-Q) has seen high failure rates among students taking a college statistics course. The students are English as a foreign language (EFL) learners in business studies and health sciences. Course delivery has involved conventional content/curriculum-centered instruction with minimal to no interactive components. The purpose of this quasi-experimental study was to assess the effectiveness of an interactive approach to teaching and learning statistics, used in North America and the United Kingdom, when applied to EFL students in the Middle East. Guided by von Glasersfeld's constructivist framework, this study compared conceptual understanding between a convenience sample of 42 students whose learning experience included a hands-on, interactive component and 38 students whose learning experience did not. ANCOVA was used to analyze posttest scores on the Comprehensive Assessment of Outcomes in Statistics (CAOS) as the dependent variable, with course placement (hands-on versus no hands-on component) as the independent variable and the pretest score on CAOS as the covariate. Students who were exposed to hands-on learning demonstrated greater conceptual understanding than students who were not. Based on these results, a 3-day workshop was designed to create a learning community enabling statistics instructors to address the problem of high failure rates, to introduce delivery methods that involve place-based examples, and to devise hands-on activities that reflect authentic research. This study has implications for positive social change in Qatar, in that application of the findings may help produce trained graduates capable of filling the shortage of qualified researchers, thereby supporting the nation's goal of becoming a leader in research as stated in the Qatar National Vision 2030.
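The ANCOVA design described above (posttest as outcome, group as treatment, pretest as covariate) can be sketched in a few lines. The scores below are simulated stand-ins, not the study's CAOS data, and the `ols` helper is a generic least-squares fit written here for illustration:

```python
import random

def ols(X, y):
    # Solve the normal equations (X'X) b = X'y by Gauss-Jordan elimination.
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         + [sum(X[i][p] * y[i] for i in range(n))] for p in range(k)]
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(k):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [u - f * v for u, v in zip(A[r], A[c])]
    return [A[i][k] / A[i][i] for i in range(k)]

random.seed(1)
# Hypothetical pretest scores for 42 hands-on and 38 conventional students,
# with a simulated treatment effect of 5 points on the posttest.
pre = [random.gauss(50, 10) for _ in range(80)]
grp = [1] * 42 + [0] * 38           # 1 = hands-on section
post = [p + 5 * g + random.gauss(0, 5) for p, g in zip(pre, grp)]

X = [[1.0, g, p] for g, p in zip(grp, pre)]
b0, b_grp, b_pre = ols(X, post)
print(round(b_grp, 2))  # covariate-adjusted group effect
```

The adjusted group coefficient `b_grp` is the ANCOVA treatment estimate: the posttest difference between groups after removing the part explained by the pretest covariate.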
582

Application of Inter-Die Rank Statistics in Defect Detection

Bakshi, Vivek 01 March 2012 (has links)
This thesis presents a statistical method for identifying test escapes. Testing often acquires parametric measurements as a function of the logical state of a chip, and the usual method of classifying chips as pass or fail is to compare each state measurement to a test limit. In deep sub-micron technologies, however, process variations allow subtle manufacturing defects to escape the test limits, mixing healthy and faulty parametric test measurements. This thesis identifies chips with subtle defects by using the rank order of the parametric measurements. A hypothesis is developed that a defect is likely to disturb the defect-free ranking, whereas a shift caused by process variation will not affect the rank. The hypothesis does not depend on a priori knowledge of a defect-free ranking of parametric measurements. The thesis introduces a modified Expectation-Maximization (EM) algorithm to separate the healthy and faulty components of the tau statistics calculated from parametric responses of die pairs on a wafer. The modified EM uses generalized beta distributions to model the two components of the tau mixture distribution and estimates the probability that each die on a wafer is faulty. The sensitivity of the modified EM is evaluated using Monte Carlo simulations, and the method is applied to production Product A, where an average 30% reduction in DPPM (defective parts per million) is observed across all lots.
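The core intuition, that a uniform process shift preserves the rank order of a die's measurements while a localized defect disturbs it, can be illustrated with Kendall's tau on a pair of toy measurement vectors (the values below are invented, not production data):

```python
def kendall_tau(x, y):
    # Kendall rank correlation between two equal-length measurement vectors.
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

healthy = [1.0, 1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.4]   # one die's state measurements
shifted = [v + 0.3 for v in healthy]    # process shift: ranking fully preserved
defective = healthy[:]
defective[3] = 0.5                      # one state badly off: ranking disturbed

print(kendall_tau(healthy, shifted))    # 1.0
print(kendall_tau(healthy, defective))
```

A tau near 1 between die pairs is consistent with process variation only; a depressed tau flags a candidate defect, which is what the thesis's mixture model separates.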
583

A Comparison of Techniques for Handling Missing Data in Longitudinal Studies

Bogdan, Alexander R 07 November 2016 (has links)
Missing data are a common problem in virtually all epidemiological research, especially in longitudinal studies. In these settings, clinicians may collect biological samples to analyze changes in biomarkers, which often do not conform to parametric distributions and may be censored due to limits of detection. Using complete data from the BioCycle Study (2005-2007), which followed 259 premenopausal women over two menstrual cycles, we compared four techniques for handling missing biomarker data with non-Normal distributions. We imposed increasing degrees of missingness on two non-Normally distributed biomarkers under conditions of missing completely at random, missing at random, and missing not at random. Generalized estimating equations were used to obtain estimates from complete case analysis, multiple imputation using joint modeling, multiple imputation using chained equations, and multiple imputation using chained equations with predictive mean matching on Day 2, Day 13, and Day 14 of a standardized 28-day menstrual cycle. Estimates were compared against those obtained from analysis of the completely observed biomarker data. All techniques performed comparably when applied to a Normally distributed biomarker. Multiple imputation using joint modeling and multiple imputation using chained equations produced similar estimates across all types and degrees of missingness for each biomarker. Multiple imputation using chained equations with predictive mean matching consistently deviated from both the complete data estimates and the other missing data techniques when applied to a biomarker with a bimodal distribution. When addressing missing biomarker data in longitudinal studies, special attention should be given to the underlying distribution of the missing variable. As biomarker distributions approach Normality, the amount of missing data that can be tolerated while still obtaining accurate estimates may also increase when data are missing at random. Future studies are necessary to assess these techniques under more elaborate missingness mechanisms and to explore interactions between biomarkers for improved imputation models.
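A toy single-biomarker sketch (simulated data, not the BioCycle analysis) contrasts three of the ideas above under MCAR: complete-case analysis, regression imputation, and predictive mean matching, where each missing value is filled with the observed value whose prediction is closest:

```python
import random
import statistics

random.seed(7)
n = 500
x = [random.gauss(0, 1) for _ in range(n)]            # fully observed covariate
y = [2 * xi + random.gauss(0, 1) for xi in x]          # "biomarker", true mean 0

# Impose ~40% missingness completely at random (MCAR) on the biomarker.
miss = [random.random() < 0.4 for _ in range(n)]
obs = [(a, b) for a, b, m in zip(x, y, miss) if not m]

# Fit y ~ x on the observed rows.
xs, ys = [a for a, _ in obs], [b for _, b in obs]
mx, my = statistics.mean(xs), statistics.mean(ys)
slope = sum((a - mx) * (b - my) for a, b in obs) / sum((a - mx) ** 2 for a in xs)
intercept = my - slope * mx

cc_mean = my                                           # 1. complete-case mean

reg_fill = [b if not m else slope * a + intercept      # 2. regression imputation
            for a, b, m in zip(x, y, miss)]
reg_mean = statistics.mean(reg_fill)

def pmm_donor(a):
    # 3. predictive mean matching: donate the observed y whose own
    #    prediction is closest to this row's prediction.
    pred = slope * a + intercept
    return min(obs, key=lambda o: abs(slope * o[0] + intercept - pred))[1]

pmm_fill = [b if not m else pmm_donor(a) for a, b, m in zip(x, y, miss)]
pmm_mean = statistics.mean(pmm_fill)

print(round(cc_mean, 3), round(reg_mean, 3), round(pmm_mean, 3))
```

Under MCAR all three estimates land near the true mean; the abstract's point is that the methods separate under harder mechanisms and non-Normal (e.g. bimodal) biomarkers, which this sketch does not attempt to reproduce.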
584

The Interquartile Range: Theory and Estimation.

Whaley, Dewey Lonzo 16 August 2005 (has links) (PDF)
The interquartile range (IQR) is used to describe the spread of a distribution. In an introductory statistics course, the IQR might be introduced as simply the “range within which the middle half of the data points lie.” In other words, it is the distance between the two quartiles, IQR = Q3 - Q1. We will compute the population IQR, the expected value, and the variance of the sample IQR for various continuous distributions. In addition, a bootstrap confidence interval for the population IQR will be evaluated.
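The two computations named in the abstract, the sample IQR and a bootstrap confidence interval for the population IQR, fit in a short stdlib sketch (simulated standard-normal data; the inclusive quartile method is one of several conventions):

```python
import random
import statistics

def iqr(data):
    # Sample IQR via the inclusive quartile method: Q3 - Q1.
    q = statistics.quantiles(data, n=4, method="inclusive")
    return q[2] - q[0]

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(200)]

# Percentile bootstrap: resample with replacement, recompute the IQR,
# and read off the 2.5th and 97.5th percentiles of the bootstrap draws.
boots = sorted(iqr(random.choices(sample, k=len(sample))) for _ in range(2000))
lo, hi = boots[49], boots[1949]
print(round(iqr(sample), 2), round(lo, 2), round(hi, 2))
```

For a standard normal population the true IQR is 2(0.6745) ≈ 1.35, which the interval should bracket for most samples of this size.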
585

Comparison of Time Series and Functional Data Analysis for the Study of Seasonality.

Allen, Jake 17 August 2011 (has links) (PDF)
Classical time series analysis has well-known methods for the study of seasonality. A more recent method, functional data analysis, has proposed phase-plane plots for the representation of each year of a time series. However, the study of seasonality within functional data analysis has not been explored extensively. Time series analysis is introduced first, followed by phase-plane plot analysis; the two are then compared by examining the insight each offers, particularly with respect to the seasonal behavior of a variable. The possible combination of both approaches is also explored, specifically through the analysis of phase-plane plots. The methods are applied to monthly observations of water flow, in cubic feet per second, collected from the French Broad River at Newport, TN. Simulated data corresponding to typical time series cases are then used for comparison and further exploration.
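A phase-plane plot graphs a smoothed function's acceleration against its velocity over the seasonal cycle. A minimal numeric sketch, with a synthetic sinusoidal "flow" series standing in for the Newport data, computes the two axes by finite differences:

```python
import math

# Synthetic monthly flow with a pure annual cycle (a stand-in for the real series).
flow = [100 + 30 * math.sin(2 * math.pi * t / 12) for t in range(48)]

# Central-difference velocity and second-difference acceleration:
# the two phase-plane axes used to display each year's seasonal cycle.
vel = [(flow[t + 1] - flow[t - 1]) / 2 for t in range(1, 47)]
acc = [flow[t + 1] - 2 * flow[t] + flow[t - 1] for t in range(1, 47)]

# For a pure sinusoid the (velocity, acceleration) orbit is an ellipse,
# so the normalized squared radius is constant all the way around the cycle.
a = max(abs(v) for v in vel)
b = max(abs(c) for c in acc)
radii = [(v / a) ** 2 + (c / b) ** 2 for v, c in zip(vel, acc)]
print(round(min(radii), 6), round(max(radii), 6))
```

Departures of a real series' orbit from this clean ellipse (dents, drift, loops of varying size) are exactly the kind of year-to-year seasonal behavior the phase-plane representation is meant to expose.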
586

Early Stopping of a Neural Network via the Receiver Operating Curve.

Yu, Daoping 13 August 2010 (has links) (PDF)
This thesis presents the area under the ROC (Receiver Operating Characteristic) curve, abbreviated AUC, as an alternative measure for evaluating the predictive performance of ANN (Artificial Neural Network) classifiers. Conventionally, neural networks are trained until the total error converges toward zero, which may give rise to over-fitting. To ensure that they do not over-fit the training data and then fail to generalize to new data, it appears effective to stop training as early as possible once the AUC is sufficiently large, by integrating ROC/AUC analysis into the training process. To reduce the learning costs associated with an imbalanced data set having an uneven class distribution, random sampling and k-means clustering are implemented to draw a smaller subset of representatives from the original training data set. Finally, a confidence interval for the AUC is estimated using a non-parametric approach.
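The stopping rule can be sketched without a real network: compute the AUC in its Mann-Whitney form each epoch and halt once it clears a threshold. Here the per-epoch validation scores are simulated with a growing class separation, standing in for a network that improves as training proceeds (the threshold 0.95 is an arbitrary illustration, not the thesis's value):

```python
import random

def auc(pos, neg):
    # Mann-Whitney form of the AUC: P(score_pos > score_neg), ties counted half.
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

random.seed(3)
target_auc, stopped_at = 0.95, None
for epoch in range(1, 51):
    # Stand-in for per-epoch validation scores from a slowly improving model:
    # the separation between the two classes grows as training proceeds.
    sep = 0.1 * epoch
    pos = [random.gauss(sep, 1) for _ in range(200)]
    neg = [random.gauss(0.0, 1) for _ in range(200)]
    if auc(pos, neg) >= target_auc:
        stopped_at = epoch   # stop training early once validation AUC is large enough
        break
print(stopped_at)
```

Because the AUC is rank-based, it is insensitive to the score scale and to class imbalance in a way raw error is not, which is what motivates using it as the early-stopping criterion.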
587

Estimating the Difference of Percentiles from Two Independent Populations.

Tchouta, Romual Eloge 12 August 2008 (has links) (PDF)
We first consider confidence intervals for a normal percentile, an exponential percentile, and a uniform percentile. We then develop confidence intervals for the difference of percentiles from two independent normal populations, two independent exponential populations, and two independent uniform populations. In our study, we mainly rely on maximum likelihood estimation to develop the confidence intervals. The efficiency of this method is examined via coverage rates obtained in a simulation study conducted with the statistical software R.
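For the normal case, the percentile MLE is mu-hat + z_p·sigma-hat, and an asymptotic interval for a difference of percentiles follows from the delta method. The sketch below (simulated data; the variance formula is the standard asymptotic one, not necessarily the thesis's exact construction) illustrates the idea in Python rather than R:

```python
import math
import random
import statistics

Z = statistics.NormalDist()

def percentile_mle(data, p):
    # MLE of a normal percentile: mu_hat + z_p * sigma_hat (sigma_hat uses divisor n).
    mu = statistics.mean(data)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in data) / len(data))
    return mu + Z.inv_cdf(p) * sigma, sigma

def diff_percentile_ci(a, b, p, conf=0.95):
    # Asymptotic (delta-method) variance of the percentile MLE:
    # (sigma^2 / n) * (1 + z_p^2 / 2), summed over the two samples.
    (qa, sa), (qb, sb) = percentile_mle(a, p), percentile_mle(b, p)
    zp, zc = Z.inv_cdf(p), Z.inv_cdf(1 - (1 - conf) / 2)
    var = (sa ** 2 / len(a)) * (1 + zp ** 2 / 2) + (sb ** 2 / len(b)) * (1 + zp ** 2 / 2)
    d = qa - qb
    half = zc * math.sqrt(var)
    return d - half, d + half

random.seed(11)
x = [random.gauss(10, 2) for _ in range(300)]   # hypothetical populations:
y = [random.gauss(8, 2) for _ in range(300)]    # true 90th-percentile gap is 2
lo, hi = diff_percentile_ci(x, y, 0.90)
print(round(lo, 2), round(hi, 2))
```

Repeating this over many simulated samples and counting how often the interval covers the true difference is exactly the coverage-rate check the abstract describes.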
588

Bayesian Reference Inference on the Ratio of Poisson Rates.

Guo, Changbin 06 May 2006 (has links) (PDF)
Bayesian reference analysis is a method of determining the prior under the Bayesian paradigm so that the prior incorporates as little information as possible beyond the data. Estimation of the ratio of two independent Poisson rates is a common practical problem. In this thesis, reference analysis is applied to derive the posterior distribution of the ratio of two independent Poisson rates and to construct point and interval estimates based on the reference posterior. In addition, the frequentist coverage property of HPD intervals is verified through simulation.
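A Monte Carlo sketch of posterior inference on a rate ratio: with independent Jeffreys priors, each Poisson rate has a Gamma(count + 1/2, exposure) posterior, and ratio draws give credible intervals directly. This uses Jeffreys posteriors as a simple stand-in; the thesis's reference prior for the ratio parameter itself is a more careful construction, and the counts below are invented:

```python
import random

random.seed(5)
x1, t1 = 30, 10.0   # hypothetical event counts and exposure times
x2, t2 = 15, 10.0   # for the two independent Poisson processes

# Monte Carlo draws from the posterior of the rate ratio lambda1 / lambda2,
# using independent Gamma(count + 1/2, exposure) posteriors for the two rates.
draws = sorted(
    random.gammavariate(x1 + 0.5, 1 / t1) / random.gammavariate(x2 + 0.5, 1 / t2)
    for _ in range(20000)
)
median = draws[10000]
lo, hi = draws[500], draws[19500]   # central 95% credible interval
print(round(median, 2), round(lo, 2), round(hi, 2))
```

The central interval shown is the equal-tailed one; an HPD interval would instead take the shortest interval containing 95% of the draws, which differs when the ratio posterior is skewed.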
589

Amended Estimators of Several Ratios for Categorical Data.

Chen, Dandan 15 August 2006 (has links) (PDF)
Point estimation of several association parameters in categorical data is presented. Typically, a constant is added to the frequency counts before the association measure is computed. We study the accuracy of these adjusted point estimators based on frequentist and Bayesian methods. In particular, amended estimators for the ratio of independent Poisson rates, the relative risk, the odds ratio, and the ratio of marginal binomial proportions are examined in terms of bias and mean squared error.
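The add-a-constant amendment is easiest to see for the odds ratio: adding 1/2 to each cell of a 2x2 table (the classical Haldane-Anscombe correction, one instance of the family of estimators studied here) keeps the estimate finite even when a cell count is zero. The table below is invented for illustration:

```python
import math

def odds_ratio(a, b, c, d, amend=0.5):
    # Amended odds ratio: add a constant to each cell before forming the
    # cross-product ratio, so zero cells no longer make the estimate infinite.
    return ((a + amend) * (d + amend)) / ((b + amend) * (c + amend))

# 2x2 table with a zero cell: the unamended odds ratio would be infinite.
print(round(odds_ratio(12, 5, 0, 9), 2))
print(round(math.log(odds_ratio(12, 5, 0, 9)), 2))  # log-odds ratio stays finite
```

The bias/MSE question in the abstract is precisely about what such a constant does on average: it removes the infinities and reduces variance, at the cost of shrinking large ratios toward 1.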
590

Temporally Correlated Dirichlet Processes in Pollution Receptor Modeling

Heaton, Matthew J. 31 May 2007 (has links) (PDF)
Understanding the effect of human-induced pollution on the environment is an important precursor to promoting public health and environmental stability. One aspect of understanding pollution is understanding pollution sources. Various methods have been used and developed to understand pollution sources and the amount of pollution those sources emit. Multivariate receptor modeling seeks to estimate pollution source profiles and pollution emissions from concentrations of pollutants such as particulate matter (PM) in the air. Previous approaches to multivariate receptor modeling make the following two key assumptions: (1) PM measurements are independent and (2) source profiles are constant through time. Notwithstanding these assumptions, the existence of temporal correlation among PM measurements and time-varying source profiles is commonly accepted. In this thesis an approach to multivariate receptor modeling is developed in which the temporal structure of PM measurements is accounted for by modeling source profiles as a time-dependent Dirichlet process. The Dirichlet process (DP) pollution model developed herein is evaluated using several simulated data sets. In the presence of time-varying source profiles, the DP model more accurately estimates source profiles and source contributions than other multivariate receptor model approaches. Additionally, when source profiles are constant through time, the DP model outperforms other pollution receptor models by more accurately estimating source profiles and source contributions.
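The building block of the model above is the Dirichlet process itself, whose random weights can be generated by stick-breaking. This sketch shows only that generic construction (truncated, with an arbitrary concentration parameter), not the thesis's time-dependent extension for source profiles:

```python
import random

def stick_breaking(alpha, n_atoms, rng):
    # Truncated stick-breaking construction of Dirichlet process weights:
    # v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k} (1 - v_j).
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1, alpha)
        weights.append(v * remaining)
        remaining *= 1 - v
    return weights

w = stick_breaking(alpha=1.0, n_atoms=50, rng=random.Random(2))
print(round(sum(w), 3))   # truncated weights nearly exhaust the unit stick
```

Pairing each weight with an atom drawn from a base distribution over source profiles gives a draw from the DP; making those draws depend on time is what lets the model capture temporally varying profiles.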
