  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Periods, partial words, and an extension of a result of Guibas and Odlyzko

Shirey, Brian. January 1900 (has links) (PDF)
Thesis (M.S.)--University of North Carolina at Greensboro, 2007. / Title from PDF title page screen. Advisor: Francine Blanchet-Sadri; submitted to the Dept. of Computer Science. Includes bibliographical references (p. 70-73).
72

Analytical study of the spectral-analysis-of-surface-waves method at complex geotechnical sites

Bertel, Jeffrey D. January 2006 (has links)
Thesis (M.S.)--University of Missouri-Columbia, 2006. / The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on August 21, 2007). Includes bibliographical references.
73

A no free lunch result for optimization and its implications

Smith, Marisa B. January 2009 (has links)
Thesis (M.S.)--Duquesne University, 2009. / Title from document title page. Abstract included in electronic submission form. Includes bibliographical references (p. 42) and index.
74

Numerical error analysis in foundation phase (Grade 3) mathematics

Ndamase-Nzuzo, Pumla Patricia January 2014 (has links)
The focus of the research was on numerical errors committed in foundation phase mathematics. It therefore explored: (1) the numerical errors learners encounter in foundation phase mathematics; (2) the relationships underlying numerical errors; and (3) implementable strategies suitable for understanding numerical error analysis in foundation phase mathematics (Grade 3). From a population of 375 learners in the 16 primary schools studied, the researcher selected a sample of 80 learners by means of a simple random sampling technique, which constituted a 10% response rate of the population. On the basis of the research questions, and informed by a positivist paradigm, a quantitative approach was used, with tables, graphs and percentages employed to address the research questions. A four-point Likert scale was used, with response categories Strongly Agree (SA), Agree (A), Disagree (D) and Strongly Disagree (SD). The results revealed that: (1) the underlying numerical errors learners encounter include the inability to count backwards and forwards, number sequencing, mathematical signs, problem solving and word sums; (2) there was a relationship between committing errors and (a) copying numbers, (b) confusing mathematical or operational signs, and (c) reading numbers containing more than one digit; and (3) teachers needed frequent professional development training, topics needed to change, and government needed to involve teachers at grassroots level prior to policy changes on how to implement strategies with regard to numerical errors in the foundation phase. It is recommended that attention be paid to the use of language and word sums in order to improve cognitive processes in foundation phase mathematics. It is further recommended that learners be assisted regularly when reading or copying their work, so that they commit fewer errors in foundation phase mathematics.
Additionally, it is recommended that teachers be trained in how to implement strategies of numerical error analysis in foundation phase mathematics. Furthermore, teachers can use tests to identify learners who may be at risk of developing mathematical difficulties in the foundation phase.
75

Error analysis and tractability for multivariate integration and approximation

Huang, Fang-Lun 01 January 2004 (has links)
No description available.
76

The estimation and presentation of standard errors in a survey report

Swanepoel, Rene 26 May 2006 (has links)
The vast number of different study variables or population characteristics and the different domains of interest in a survey make it impractical and almost impossible to calculate and publish standard errors for each estimated value of a population variable or characteristic and each domain individually. Since estimated values are subject to statistical variation (resulting from the probability sampling), standard errors may not be omitted from the survey report. Estimated values can be evaluated only if their precision is known. The purpose of this research project is to study the feasibility of mathematical modeling to estimate the standard errors of estimated values of population parameters or characteristics in survey data sets and to investigate effective and user-friendly presentation methods of these models in reports. The following data sets were used in the investigation:
• October Household Survey (OHS) 1995 - Workers and Household data set
• OHS 1996 - Workers and Household data set
• OHS 1997 - Workers and Household data set
• Victims of Crime Survey (VOC) 1998
The basic methodology consists of the estimation of standard errors of the statistics considered in the survey for a variety of domains (such as the whole country, provinces, urban/rural areas, population groups, gender and age groups, as well as combinations of these). This is done by means of a computer program that takes into consideration the complexity of the different sample designs. The direct calculated standard errors were obtained in this way. Different models are then fitted to the data by means of regression modeling in the search for a suitable standard error model. A function of the direct calculated standard error value served as the dependent variable and a function of the size of the statistic served as the independent variable.
A linear model, equating the natural logarithm of the coefficient of relative variation of a statistic to a linear function of the natural logarithm of the size of the statistic, gave an adequate fit in most of the cases. Well-known tests for the occurrence of outliers were applied in the model fitting procedure. For each observation indicated as an outlier, it was established whether the observation could be deleted legitimately (e.g. when the domain sample size was too small, or the estimate biased). Afterwards the fitting procedure was repeated. The Australian Bureau of Statistics also uses the above model in similar surveys. They derived this model especially for variables that count people in a specific category. It was found that this model performs equally well when the variable of interest counts households or incidents as in the case of the VOC. The set of domains considered in the fitting procedure included segregated classes, mixed classes and cross-classes. Thus, the model can be used irrespective of the type of subclass domain. This result makes it possible to approximate standard errors for any type of domain with the same model. The fitted model, as a mathematical formula, is not a user-friendly presentation method of the precision of estimates. Consequently, user-friendly and effective presentation methods of standard errors are summarized in this report. The suitability of a specific presentation method, however, depends on the extent of the survey and the number of study variables involved. / Dissertation (MSc (Mathematical Statistics))--University of Pretoria, 2007. / Mathematics and Applied Mathematics / unrestricted
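The fitted model this abstract describes — the natural logarithm of a statistic's coefficient of relative variation regressed on the natural logarithm of the statistic's size — can be sketched as an ordinary least-squares fit. This is an illustrative reconstruction, not the author's code; the data values below are invented:

```python
import numpy as np

# Hypothetical domain-level results: estimated counts (size of the statistic)
# and their directly calculated standard errors.
estimate = np.array([1200.0, 5400.0, 23000.0, 88000.0, 310000.0])
std_error = np.array([310.0, 820.0, 2100.0, 4900.0, 11000.0])

# Coefficient of relative variation: standard error relative to the estimate.
cv = std_error / estimate

# Fit ln(cv) = a + b * ln(estimate) by least squares; polyfit returns
# the slope first, then the intercept.
b, a = np.polyfit(np.log(estimate), np.log(cv), 1)

def modelled_se(x):
    """Model-based standard error for an estimate of size x."""
    return x * np.exp(a + b * np.log(x))
```

In survey data the relative variation typically shrinks as the statistic grows, so the fitted slope `b` comes out negative; a reader of the report then needs only the two fitted constants to approximate the standard error of any published estimate.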
77

An investigation of the market model when prices are observed with error

Gendron, Michel January 1984 (has links)
The market model, which relates securities returns to their systematic risk (β), plays a major role in finance. The estimation of β, in particular, is fundamental to many empirical studies and investment decisions. This dissertation develops a model which explains the observed serial correlations in returns and the intervaling effects which are inconsistent with the market model assumptions. The model accounts for thin trading and different frictions in the trading process, and has as special cases other models of thin trading and frictions presented in the finance literature. The main assumption of the model is that the prices observed in the market and used to compute returns differ by an error from the true prices generated by a Geometric Brownian Motion model; hence its name, the error-in-prices (EIP) model. Three estimation methods for β are examined for the EIP model: the Maximum Likelihood (ML) method, the Least Squares (LS) method and a method of moments. It is suggested to view the EIP model as a missing-information model and to use the EM algorithm to find the ML estimates of the parameters of the model. The approximate small-sample and asymptotic properties of the LS estimate of β are derived. It is shown that replacing the true covariances by their sample-moment estimates leads to a convenient and familiar form for a consistent estimate of β. Finally, some illustrations of six different estimation methods for β are presented using simulated and real securities returns. / Business, Sauder School of / Graduate
78

The accuracy of parameter estimates and coverage probability of population values in regression models upon different treatments of systematically missing data

Othuon, Lucas Onyango A. 11 1900 (has links)
Several methods are available for the treatment of missing data. Most of the methods are based on the assumption that data are missing completely at random (MCAR). However, data sets that are MCAR are rare in psycho-educational research. This gives rise to the need for investigating the performance of missing data treatments (MDTs) with non-randomly or systematically missing data, an area that has not received much attention from researchers in the past. In the current simulation study, the performance of four MDTs, namely mean substitution (MS), pairwise deletion (PW), the expectation-maximization method (EM), and regression imputation (RS), was investigated in a linear multiple regression context. Four investigations were conducted, involving four predictors under low and high multiple R², and nine predictors under low and high multiple R². In addition, each investigation was conducted under three different sample size conditions (94, 153, and 265). The design factors were missing pattern (2 levels), percent missing (3 levels) and non-normality (4 levels). This design gave rise to 72 treatment conditions. The sampling was replicated one thousand times in each condition. MDTs were evaluated based on accuracy of parameter estimates. In addition, the bias in parameter estimates, and the coverage probability of regression coefficients, were computed. The effect of missing pattern, percent missing, and non-normality on absolute error for the R² estimate was of practical significance. In the estimation of R², EM was the most accurate under the low R² condition, and PW was the most accurate under the high R² condition. No MDT was consistently least biased under the low R² condition. However, with nine predictors under the high R² condition, PW was generally the least biased, with a tendency to overestimate the population R². The mean absolute error (MAE) tended to increase with increasing non-normality and increasing percent missing.
Also, the MAE in the R² estimate tended to be smaller under the monotonic pattern than under the non-monotonic pattern. MDTs were most differentiated at the highest level of percent missing (20%), and under the non-monotonic missing pattern. In the estimation of regression coefficients, RS generally outperformed the other MDTs with respect to accuracy of regression coefficients as measured by MAE. However, EM was competitive under the four predictors, low R² condition. MDTs were most differentiated only in the estimation of β₁, the coefficient of the variable with no missing values. MDTs were undifferentiated in their performance in the estimation of β₂, ..., βₚ, p = 4 or 9, although the MAE remained fairly constant across all the regression coefficients. The MAE increased with increasing non-normality and percent missing, but decreased with increasing sample size. The MAE was generally greater under the non-monotonic pattern than under the monotonic pattern. With four predictors, the least bias was under RS regardless of the magnitude of population R². Under nine predictors, the least bias was under PW regardless of population R². The results for coverage probabilities were generally similar to those for the estimation of regression coefficients, with coverage probabilities closest to nominal alpha under RS. As expected, coverage probabilities decreased with increasing non-normality for each MDT, with values being closest to the nominal value for normal data. MDTs were more differentiated with respect to coverage probabilities under the non-monotonic pattern than under the monotonic pattern. Important implications of the results for researchers are numerous. First, the choice of MDT was found to depend on the magnitude of population R², the number of predictors, as well as on the parameter estimate of interest. With the estimation of R² as the goal of analysis, use of EM is recommended if the anticipated R² is low (about .2). However, if the anticipated R² is high (about .6), use of PW is recommended.
With the estimation of regression coefficients as the goal of analysis, the choice of MDT was found to be most crucial for the variable with no missing data. The RS method is recommended with respect to estimation accuracy of regression coefficients, although greater bias was recorded under RS than under PW or MS when the number of predictors was large (i.e., nine predictors). Second, the choice of MDT seems to be of little concern if the proportion of missing data is 10 percent, and also if the missing pattern is monotonic rather than non-monotonic. Third, the proportion of missing data seems to have less impact on the accuracy of parameter estimates under a monotonic missing pattern than under a non-monotonic missing pattern. Fourth, for the control of Type I error rates under the low R² condition, the EM method is recommended, as it produced coverage probabilities of regression coefficients closest to the nominal value at the .05 level. However, for the control of Type I error rates under the high R² condition, the RS method is recommended. Considering that simulated data were used in the present study, it is suggested that future research attempt to validate the findings using real field data. Also, a future investigator could modify the number of predictors, as well as the confidence interval used in the calculation of coverage probabilities, to extend the generalization of the results. / Education, Faculty of / Educational and Counselling Psychology, and Special Education (ECPS), Department of / Graduate
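Two of the missing data treatments this abstract compares — mean substitution (MS) and regression imputation (RS) — are simple enough to sketch directly. The simulation below is an invented illustration of systematic (non-random) missingness, not the study's design: one predictor's values go missing precisely when a correlated predictor is large.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Two correlated predictors; x2 goes missing systematically when x1 is large,
# so the missingness is non-random, as in the study's focus.
x1 = rng.normal(0.0, 1.0, n)
x2 = 0.8 * x1 + rng.normal(0.0, 0.6, n)
missing = x1 > 0.8
x2_obs = x2.copy()
x2_obs[missing] = np.nan

# Mean substitution (MS): replace every missing value with the observed mean.
x2_ms = np.where(np.isnan(x2_obs), np.nanmean(x2_obs), x2_obs)

# Regression imputation (RS): predict missing x2 from x1, fitted on the
# complete cases only.
ok = ~np.isnan(x2_obs)
slope, intercept = np.polyfit(x1[ok], x2_obs[ok], 1)
x2_rs = np.where(np.isnan(x2_obs), intercept + slope * x1, x2_obs)

# Compare imputations against the true (simulated) values for the missing cases.
mae_ms = np.mean(np.abs(x2_ms[missing] - x2[missing]))
mae_rs = np.mean(np.abs(x2_rs[missing] - x2[missing]))
```

Because the missing cases all sit in the upper tail of x1, the observed mean badly underestimates them, while the regression prediction exploits the x1–x2 relationship; this is the kind of contrast between MDTs that the study quantifies with MAE across its 72 conditions.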
79

The performance of three fitting criteria for multidimensional scaling

McGlynn, Marion January 1990 (has links)
No description available.
80

Interval finite element analysis for load pattern and load combination

Saxena, Vishal 01 December 2003 (has links)
No description available.
