  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Reduction/elimination of errors in cost estimates using calibration: an algorithmic approach

Gandhi, Raju. January 2005 (has links)
Thesis (M.S.)--Ohio University, November, 2005. / Title from PDF t.p. Includes bibliographical references (p. 71-74)
52

Using the partitioning principle to control generalized familywise error rate

Xu, Haiyan. January 2005 (has links)
Thesis (Ph. D.)--Ohio State University, 2005. / Title from first page of PDF file. Document formatted into pages; contains xiii, 104 p.; also includes graphics (some col.). Includes bibliographical references (p. 101-104). Available online via OhioLINK's ETD Center
53

A no free lunch result for optimization and its implications

Smith, Marisa B. January 2009 (has links)
Thesis (M.S.)--Duquesne University, 2009. / Title from document title page. Abstract included in electronic submission form. Includes bibliographical references (p. 42) and index.
54

Numerical error analysis in foundation phase (Grade 3) mathematics

Ndamase-Nzuzo, Pumla Patricia January 2014 (has links)
The focus of the research was on numerical errors committed in foundation phase mathematics. It therefore explored: (1) the numerical errors learners encounter in foundation phase mathematics; (2) the relationships underlying numerical errors; and (3) the implementable strategies suitable for understanding numerical error analysis in foundation phase (Grade 3) mathematics. From the population of 375 learners in the 16 primary schools studied, the researcher selected 80 learners by means of a simple random sampling technique, constituting a response rate of 10% of the population. On the basis of the research questions, and informed by a positivist paradigm, a quantitative approach was used, with tables, graphs and percentages employed to address the research questions. A four-category Likert scale was used, with responses ranging over Agree (A), Strongly Agree (SA), Disagree (D) and Strongly Disagree (SD). The results revealed that: (1) the underlying numerical errors that learners encounter include the inability to count backwards and forwards, number sequencing, mathematical signs, problem solving and word sums; (2) there was a relationship between committing errors and (a) copying numbers, (b) confusing mathematical or operational signs, and (c) reading numbers containing more than one digit; and (3) teachers needed frequent professional training for development, topics needed to change, and government needed to involve teachers at grassroots level prior to policy changes on how to implement strategies with regard to numerical errors in the foundation phase. It is recommended that attention be paid to the use of language and word sums in order to improve cognition processes in foundation phase mathematics. It is further recommended that learners be assisted regularly when reading or copying their work, so that they make fewer errors in foundation phase mathematics, and that teachers be trained in how to implement strategies of numerical error analysis in foundation phase mathematics. Furthermore, teachers can use tests to identify learners who may be at risk of developing mathematical difficulties in the foundation phase.
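
As a small companion to the methodology described in this abstract, the sketch below illustrates, with made-up response data, the two quantitative steps it names: drawing a simple random sample of 80 learners from a population of 375, and tabulating four-category Likert responses into percentages. All data here are invented placeholders, not the study's data.

```python
# A minimal sketch of the sampling and tabulation steps described in the
# abstract; the response data are randomly generated placeholders.
import numpy as np

rng = np.random.default_rng(0)

population = np.arange(375)                              # learner IDs
sample = rng.choice(population, size=80, replace=False)  # simple random sample

# Placeholder Likert responses for the sampled learners.
categories = ["SA", "A", "D", "SD"]
responses = rng.choice(categories, size=sample.size)

percentages = {c: 100.0 * np.mean(responses == c) for c in categories}
print(percentages)
```
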
55

Error analysis and tractability for multivariate integration and approximation

Huang, Fang-Lun 01 January 2004 (has links)
No description available.
56

The estimation and presentation of standard errors in a survey report

Swanepoel, Rene 26 May 2006 (has links)
The vast number of different study variables or population characteristics and the different domains of interest in a survey make it impractical and almost impossible to calculate and publish standard errors for each estimated value of a population variable or characteristic and each domain individually. Since estimated values are subject to statistical variation (resulting from the probability sampling), standard errors may not be omitted from the survey report. Estimated values can be evaluated only if their precision is known. The purpose of this research project is to study the feasibility of mathematical modeling to estimate the standard errors of estimated values of population parameters or characteristics in survey data sets, and to investigate effective and user-friendly presentation methods of these models in reports. The following data sets were used in the investigation:
• October Household Survey (OHS) 1995 - Workers and Household data set
• OHS 1996 - Workers and Household data set
• OHS 1997 - Workers and Household data set
• Victims of Crime Survey (VOC) 1998
The basic methodology consists of the estimation of standard errors of the statistics considered in the survey for a variety of domains (such as the whole country, provinces, urban/rural areas, population groups, gender and age groups, as well as combinations of these). This is done by means of a computer program that takes into consideration the complexity of the different sample designs. The directly calculated standard errors were obtained in this way. Different models are then fitted to the data by means of regression modeling in the search for a suitable standard error model. A function of the directly calculated standard error value served as the dependent variable and a function of the size of the statistic served as the independent variable. A linear model, equating the natural logarithm of the coefficient of relative variation of a statistic to a linear function of the natural logarithm of the size of the statistic, gave an adequate fit in most of the cases. Well-known tests for the occurrence of outliers were applied in the model fitting procedure. For each observation indicated as an outlier, it was established whether the observation could be deleted legitimately (e.g. when the domain sample size was too small, or the estimate biased). Afterwards the fitting procedure was repeated. The Australian Bureau of Statistics also uses the above model in similar surveys, having derived it especially for variables that count people in a specific category. It was found that this model performs equally well when the variable of interest counts households or incidents, as in the case of the VOC. The set of domains considered in the fitting procedure included segregated classes, mixed classes and cross-classes; thus, the model can be used irrespective of the type of subclass domain. This result makes it possible to approximate standard errors for any type of domain with the same model. The fitted model, as a mathematical formula, is not a user-friendly presentation method of the precision of estimates. Consequently, user-friendly and effective presentation methods of standard errors are summarized in this report. The suitability of a specific presentation method, however, depends on the extent of the survey and the number of study variables involved. / Dissertation (MSc (Mathematical Statistics))--University of Pretoria, 2007. / Mathematics and Applied Mathematics / unrestricted
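
The log-linear standard error model this abstract describes can be illustrated compactly. Below is a minimal sketch, using invented domain estimates and directly calculated standard errors, of fitting ln(CV) as a linear function of ln(size) and then using the fitted model to approximate the standard error of a new estimate; the numbers and the simple least-squares fit are assumptions for illustration, not the thesis's actual procedure.

```python
# A minimal sketch of the fitted model ln(CV) = a + b * ln(x), where
# CV = standard error / estimate; all numbers are invented for illustration.
import numpy as np

estimates = np.array([12_000, 55_000, 130_000, 480_000, 1_200_000], dtype=float)
std_errors = np.array([2_100, 6_300, 11_000, 28_000, 52_000], dtype=float)

cv = std_errors / estimates                          # coefficient of relative variation
slope, intercept = np.polyfit(np.log(estimates), np.log(cv), deg=1)

def modelled_std_error(x):
    """Approximate the standard error of an estimate x from the fitted model."""
    return x * np.exp(intercept + slope * np.log(x)) # SE = x * CV(x)

print(modelled_std_error(250_000.0))
```
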
57

An investigation of the market model when prices are observed with error

Gendron, Michel January 1984 (has links)
The market model, which relates securities returns to their systematic risk (β), plays a major role in finance. The estimation of β, in particular, is fundamental to many empirical studies and investment decisions. This dissertation develops a model which explains the observed serial correlations in returns and the intervaling effects which are inconsistent with the market model assumptions. The model accounts for thin trading and different frictions in the trading process, and has as special cases other models of thin trading and frictions presented in the finance literature. The main assumption of the model is that the prices observed in the market and used to compute returns differ by an error from the true prices generated by a Geometric Brownian Motion model, hence its name: the error-in-prices (EIP) model. Three estimation methods for β are examined for the EIP model: the Maximum Likelihood (ML) method, the Least Squares (LS) method, and a method of moments. It is suggested that the EIP model be viewed as a missing-information model and that the EM algorithm be used to find the ML estimates of its parameters. The approximate small-sample and asymptotic properties of the LS estimate of β are derived. It is shown that replacing the true covariances by their sample moment estimates leads to a convenient and familiar form for a consistent estimate of β. Finally, some illustrations of six different estimation methods for β are presented, using simulated and real securities returns. / Business, Sauder School of / Graduate
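
One feature the abstract attributes to the EIP model, that errors in observed prices account for serial correlation in returns, is easy to reproduce in a small simulation. The sketch below assumes invented parameters: true log prices driven by market returns (a discrete analogue of the Geometric Brownian Motion assumption) plus i.i.d. observation error in log prices, which induces negative first-order autocorrelation in the observed returns.

```python
# A minimal sketch, under invented parameters, of how an error in observed
# prices induces serial correlation in returns: true log prices follow a
# market-driven random walk (a discrete GBM analogue), and observed log
# prices add i.i.d. measurement error.
import numpy as np

rng = np.random.default_rng(0)
n, beta_true, sigma_err = 2_000, 1.2, 0.01

market = rng.normal(0.0005, 0.01, n)                 # market log returns
true_returns = beta_true * market + rng.normal(0, 0.005, n)
true_log_price = np.cumsum(true_returns)

# Observed prices differ from the true prices by an error (here, in logs).
observed_log_price = true_log_price + rng.normal(0, sigma_err, n)
observed_returns = np.diff(observed_log_price)

def lag1_autocorr(r):
    """First-order autocorrelation of a return series."""
    return np.corrcoef(r[:-1], r[1:])[0, 1]

print(lag1_autocorr(true_returns))      # close to zero
print(lag1_autocorr(observed_returns))  # negative, induced by the price error
```
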
58

The accuracy of parameter estimates and coverage probability of population values in regression models upon different treatments of systematically missing data

Othuon, Lucas Onyango A. 11 1900 (has links)
Several methods are available for the treatment of missing data. Most of the methods are based on the assumption that data are missing completely at random (MCAR). However, data sets that are MCAR are rare in psycho-educational research. This gives rise to the need for investigating the performance of missing data treatments (MDTs) with non-randomly or systematically missing data, an area that has not received much attention from researchers in the past. In the current simulation study, the performance of four MDTs, namely, mean substitution (MS), pairwise deletion (PW), expectation-maximization method (EM), and regression imputation (RS), was investigated in a linear multiple regression context. Four investigations were conducted, involving four predictors under low and high multiple R², and nine predictors under low and high multiple R². In addition, each investigation was conducted under three different sample size conditions (94, 153, and 265). The design factors were missing pattern (2 levels), percent missing (3 levels) and non-normality (4 levels). This design gave rise to 72 treatment conditions. The sampling was replicated one thousand times in each condition. MDTs were evaluated based on the accuracy of parameter estimates. In addition, the bias in parameter estimates and the coverage probability of regression coefficients were computed. The effect of missing pattern, percent missing, and non-normality on the absolute error of the R² estimate was of practical significance. In the estimation of R², EM was the most accurate under the low R² condition, and PW was the most accurate under the high R² condition. No MDT was consistently least biased under the low R² condition. However, with nine predictors under the high R² condition, PW was generally the least biased, with a tendency to overestimate population R². The mean absolute error (MAE) tended to increase with increasing non-normality and increasing percent missing. Also, the MAE in the R² estimate tended to be smaller under the monotonic pattern than under the non-monotonic pattern. MDTs were most differentiated at the highest level of percent missing (20%), and under the non-monotonic missing pattern. In the estimation of regression coefficients, RS generally outperformed the other MDTs with respect to accuracy as measured by MAE. However, EM was competitive under the four-predictor, low R² condition. MDTs were most differentiated only in the estimation of β₁, the coefficient of the variable with no missing values. MDTs were undifferentiated in their performance in the estimation of β₂, ..., βₚ (p = 4 or 9), although the MAE remained much the same across all the regression coefficients. The MAE increased with increasing non-normality and percent missing, but decreased with increasing sample size. The MAE was generally greater under the non-monotonic pattern than under the monotonic pattern. With four predictors, the least bias was under RS regardless of the magnitude of population R². Under nine predictors, the least bias was under PW regardless of population R². The results for coverage probabilities were generally similar to those for the estimation of regression coefficients, with coverage probabilities closest to nominal alpha under RS. As expected, coverage probabilities decreased with increasing non-normality for each MDT, with values closest to the nominal value for normal data. MDTs were more differentiated with respect to coverage probabilities under the non-monotonic pattern than under the monotonic pattern.
Important implications of the results for researchers are numerous. First, the choice of MDT was found to depend on the magnitude of population R², the number of predictors, and the parameter estimate of interest. With the estimation of R² as the goal of analysis, use of EM is recommended if the anticipated R² is low (about .2). However, if the anticipated R² is high (about .6), use of PW is recommended. With the estimation of regression coefficients as the goal of analysis, the choice of MDT was found to be most crucial for the variable with no missing data. The RS method is most recommended with respect to estimation accuracy of regression coefficients, although greater bias was recorded under RS than under PW or MS when the number of predictors was large (i.e., nine predictors). Second, the choice of MDT seems to be of little concern if the proportion of missing data is 10 percent, and also if the missing pattern is monotonic rather than non-monotonic. Third, the proportion of missing data seems to have less impact on the accuracy of parameter estimates under a monotonic missing pattern than under a non-monotonic missing pattern. Fourth, it is recommended that, to control Type I error rates under the low R² condition, researchers use the EM method, as it produced coverage probabilities of regression coefficients closest to the nominal value at the .05 level; to control Type I error rates under the high R² condition, the RS method is recommended. Considering that simulated data were used in the present study, it is suggested that future research attempt to validate the findings using real field data. A future investigator could also modify the number of predictors, as well as the confidence interval used in the calculation of coverage probabilities, to extend the generalization of the results. / Education, Faculty of / Educational and Counselling Psychology, and Special Education (ECPS), Department of / Graduate
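
The sketch below simulates, with an invented data-generating process, two of the four missing-data treatments named in this abstract: mean substitution (MS) and regression imputation (RS), applied to a predictor that is systematically missing. It only illustrates the treatments themselves; it does not reproduce the study's design or results.

```python
# A minimal sketch of mean substitution (MS) and regression imputation (RS)
# under systematically missing data; the data-generating process and all
# parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(scale=0.9, size=n)
y = 1.0 + 0.6 * x1 + 0.4 * x2 + rng.normal(size=n)

# Systematically missing: x2 is more likely to be missing when x1 is positive.
missing = rng.random(n) < np.where(x1 > 0, 0.4, 0.0)
x2_obs = np.where(missing, np.nan, x2)

def r_squared(y, X):
    """R-squared from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# MS: replace missing values with the mean of the observed values.
x2_ms = np.where(missing, np.nanmean(x2_obs), x2_obs)

# RS: predict the missing x2 values from x1, using complete cases only.
slope, intercept = np.polyfit(x1[~missing], x2[~missing], deg=1)
x2_rs = np.where(missing, intercept + slope * x1, x2_obs)

for label, x2_used in [("complete", x2), ("MS", x2_ms), ("RS", x2_rs)]:
    print(label, r_squared(y, np.column_stack([x1, x2_used])))
```
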
59

The performance of three fitting criteria for multidimensional scaling

McGlynn, Marion January 1990 (has links)
No description available.
60

Estimating measurement error in blood pressure, using structural equations modelling

Kepe, Lulama Patrick January 2004 (has links)
Thesis (MSc)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: Any branch of science experiences measurement error to some extent. This may be due to the conditions under which measurements are taken, which may include the subject, the observer, the measurement instrument, and the data collection method. The inexactness (error) can be reduced to some extent through the study design, but at some level further reduction becomes difficult or impractical. It then becomes important to determine or evaluate the magnitude of measurement error, and perhaps to evaluate its effect on the investigated relationships. All this is particularly true for blood pressure measurement. The gold standard for measuring blood pressure (BP) is a 24-hour ambulatory measurement. However, this technology is not available in Primary Care Clinics in South Africa, and a set of three mercury-based BP measurements is the norm for a clinic visit. The quality of the standard combination of the repeated measurements can be improved by modelling the measurement error of each of the diastolic and systolic measurements and determining optimal weights for the combination of measurements, which will give a better estimate of the patient's true BP. The optimal weights can be determined through the method of structural equations modelling (SEM), which allows a richer model than the standard repeated-measures ANOVA; SEM models are less restrictive and give more detail than the traditional approaches. Structural equations modelling, a special case of covariance structure modelling, has proven useful in the social sciences over the years. Its appeal stems from the fact that it includes multiple regression and factor analysis as special cases. Multi-type multi-time (MTMT) models are a specific type of structural equations model suited to the modelling of BP measurements. These designs constitute a variant of repeated measurement designs and are based on Campbell and Fiske's (1959) suggestion that the quality of methods (time, in our case) can be determined by comparing them with other methods in order to reveal both the systematic and random errors. MTMT models also showed superiority over other data analysis methods because they accommodate the theory of BP. In particular, they proved to be a strong alternative for the analysis of BP measurements whenever repeated measures are available, even when such measures do not constitute equivalent replicates. This thesis focuses on SEM and its application to BP studies conducted in a community survey of Mamre and the Mitchells Plain hypertensive clinic population. / AFRIKAANSE OPSOMMING: Every branch of science is, to a greater or lesser extent, subject to measurement error. This is the result of the circumstances under which measurements are made, such as the unit being measured, the observer, the measuring instrument and the data collection method. The measurement error can be reduced through the study design, but at a certain point further improvement in precision becomes difficult and impractical. It is then important to determine the extent of the measurement error and to investigate its effect on relationships. These aspects are especially true for the measurement of blood pressure in humans. The gold standard for measuring blood pressure is a continuous 24-hour measurement. This technology is, however, not available in primary health clinics in South Africa, and a set of three mercury-based blood pressure measurements is the norm at a clinic visit.
The quality of the standard combination of the repeated measurements can be improved by modelling the measurement error of diastolic and systolic blood pressure measurements. Determining optimal weights for the linear combination of the measurements leads to a better estimate of the patient's true blood pressure. The weights can be calculated with the method of structural equations modelling (SEM), which offers a richer class of models than the standard repeated-measures analysis-of-variance models. This model has fewer restrictions and therefore gives more information than the traditional approaches. Structural equations modelling, which is a special case of covariance structure modelling, has been usefully applied in the social sciences over the years. Its popularity stems from the fact that multiple linear regression and factor analysis are also special cases of the method. Multi-type multi-time (MTMT) models are a specific structural equations model suited to the modelling of blood pressure. This type of model is a variant of the repeated-measures design and is based on Campbell and Fiske's (1959) suggestion that the quality of different methods can be determined by comparing them with other methods, so as to distinguish systematic and stochastic errors. The MTMT model also fits well with the underlying physiological aspects of blood pressure and its measurement. It is thus a good alternative for studies in which the repeated measurements are not equivalent replicates. This thesis focuses on the structural equations model and its application in hypertension studies conducted in the Mamre community and a hypertensive clinic population in Mitchells Plain.
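
The weighting idea in this abstract can be shown in miniature. The sketch below assumes a simple true-score model with known error variances for the three clinic readings (the thesis instead estimates the error structure via SEM); under that assumption, the best linear unbiased combination weights each reading by the inverse of its error variance. All numbers are illustrative.

```python
# A minimal sketch of inverse-variance weighting for three clinic readings,
# assuming a simple true-score model x_i = tau + e_i with known error
# variances; the thesis estimates the error structure via SEM instead.
import numpy as np

readings = np.array([142.0, 138.0, 136.0])  # systolic BP in mmHg (illustrative)
error_var = np.array([64.0, 36.0, 25.0])    # assumed error variance per reading

weights = (1.0 / error_var) / (1.0 / error_var).sum()  # weights sum to 1
bp_estimate = weights @ readings            # best linear unbiased combination
print(weights, bp_estimate)
```
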
