41 
Symbolic Error Analysis and Robot Planning. Brooks, Rodney A. 01 September 1982 (has links)
A program to control a robot manipulator for industrial assembly operations must take into account possible errors in parts placement and tolerances of the parts themselves. Previous approaches to this problem have been to (1) engineer the situation so that the errors are small or (2) let the programmer analyze the errors and take explicit account of them. This paper gives the mathematical underpinnings for building programs (plan checkers) to carry out approach (2) automatically. The plan checker uses a geometric CAD-type database to infer the effects of actions and the propagation of errors. It does this symbolically rather than numerically, so that computations can be reversed and desired resultant tolerances can be used to infer required initial tolerances or the necessity for sensing. The checker modifies plans to include sensing and adds constraints to the plan which ensure that it will succeed. An implemented system is described and results of its execution are presented. The plan checker could be used as part of an automatic planning system or as an aid to a human robot programmer.
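The reversible, symbolic style of tolerance computation described above can be sketched in miniature. The fragment below is a hypothetical illustration, not Brooks's system: it keeps worst-case error bounds as linear expressions in named error sources, so the same expression can be evaluated forward or inverted to find a required initial tolerance. All source names and numbers are invented for the example.

```python
# Minimal sketch of symbolic tolerance propagation (hypothetical, not the
# paper's system): bounds are linear expressions in named error sources.
class Tol:
    """Worst-case error bound of the form const + sum_i c_i * source_i."""
    def __init__(self, coeffs=None, const=0.0):
        self.coeffs = dict(coeffs or {})
        self.const = const

    def __add__(self, other):
        # Worst-case errors add linearly in this simplified model.
        out = dict(self.coeffs)
        for k, v in other.coeffs.items():
            out[k] = out.get(k, 0.0) + v
        return Tol(out, self.const + other.const)

    def bound(self, sources):
        # Forward direction: numeric bound given initial tolerances.
        return self.const + sum(c * sources[k] for k, c in self.coeffs.items())

    def required(self, target, fixed, free):
        # Reverse direction: largest value of the `free` source that still
        # keeps the resulting bound at or below `target`.
        rest = self.const + sum(c * fixed[k]
                                for k, c in self.coeffs.items() if k != free)
        return (target - rest) / self.coeffs[free]

place = Tol({"part": 1.0})     # invented source: part placement error (mm)
grasp = Tol({"gripper": 1.0})  # invented source: gripper approach error (mm)
total = place + grasp

print(total.bound({"part": 0.5, "gripper": 0.3}))    # worst-case total, ~0.8
print(total.required(0.6, {"gripper": 0.3}, "part")) # required tolerance, ~0.3
```

Because the bound stays symbolic until numbers are substituted, the same object answers both "how bad can the final error be?" and "how tight must the initial placement be?", which is the reversibility the abstract describes.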

42 
Implementation of Pipeline Floating-Point CORDIC Processor and its Error Analysis and Applications. Yang, Chihyu 19 August 2007 (has links)
In this thesis, the traditional fixed-point CORDIC algorithm is extended to a floating-point version in order to calculate transcendental functions (such as sine/cosine, logarithm, powering function, etc.) with high accuracy over a large range. Based on different algorithm derivations, two floating-point high-throughput pipelined CORDIC architectures are proposed. The first architecture adopts barrel shifters to implement the shift operations in each pipelined stage; the second uses a pure hardwired method for the shifting operations. Another key contribution of this thesis is an analysis of the execution errors in the floating-point CORDIC architectures, compared against results from pure software implementations. Finally, the thesis applies the floating-point CORDIC to realize the rotation-related operations required in 3D graphics applications.
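For readers unfamiliar with the underlying algorithm, a minimal rotation-mode CORDIC in plain Python looks like this. It uses floating-point arithmetic throughout, so it shows only the iteration itself, not the thesis's fixed-point scaling or pipelined hardware:

```python
import math

N_ITERS = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N_ITERS)]

# Pre-scale by the inverse CORDIC gain so no post-scaling is needed.
INV_GAIN = 1.0
for a in ANGLES:
    INV_GAIN *= math.cos(a)  # converges to ~0.607253

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC; valid for theta in [-pi/2, pi/2]."""
    x, y, z = INV_GAIN, 0.0, theta
    for i, a in enumerate(ANGLES):
        d = 1.0 if z >= 0.0 else -1.0
        # Each micro-rotation needs only shifts (the 2**-i factor) and
        # adds in hardware, which is what makes CORDIC pipeline-friendly.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y, x  # (sin(theta), cos(theta))
```

Each extra iteration contributes roughly one more bit of accuracy, which is why the pipelined architectures above devote one stage per iteration.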

43 
Study on the syntax and the pedagogy of Chinese idioms. Jheng, Pei siou 01 July 2005 (has links)
Previous work on Chinese idioms has made significant progress on both the meaning-derivation process and internal combination patterns. As for teaching idioms at junior high schools, recent textbooks face three problems: first, there are no appropriate idiom lists; second, teachers rarely mention the idioms' syntactic functions and collocational relations; third, students often use idioms incorrectly. This study investigates the syntactic functions of idioms by examining learners' errors.
Chapter 1 clarifies the definition and characteristics of idioms. Chapter 2 conducts an error analysis in which three types of error are identified: semantic errors, grammatical errors, and semantic-restriction errors, the last being the most frequent type. With regard to the influence of familiarity and transparency, idioms of low familiarity and low transparency are the most difficult to learn. Building on these learning difficulties, Chapter 4 studies the syntactic functions and internal construction of idioms. The main construction type is subject-predicate, and the most common syntactic function is predicate. The function of an idiom cannot be predicted from its construction, nor is it affected by the idiom's argument structure; the core elements of an idiom, however, do correlate with its function. Chapter 5 explains learners' difficulties and designs teaching strategies, such as teaching collocations. Moreover, this paper provides two idiom lists, one for junior high school students and the other for advanced learners.

44 
Error Analysis in Optical Flows of Machine Vision with Multiple Cameras. Chang, Chaojen 27 July 2006 (has links)
Abstract
In research on image tracking to recover an object's position or velocity in space, increasing the number of cameras is expected to reduce the error, and this is indeed observed in practical applications. So far, however, the physical theory behind this effect has not been fully established. Motivated by this gap, this thesis lays a statistical foundation for this machine-vision problem. Extensive error analysis and computer simulations are conducted for the translational motion vector solved by the least-squares technique, with Gaussian noise incorporated into the optical-flow components. The result is intended to provide an effective theoretical model for further developments.
Keywords: image tracking, least-squares method, Gaussian distribution, error analysis.
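The core claim, that the least-squares estimate improves as independent flow measurements accumulate, can be checked with a short simulation. The observation model below (each camera contributing an identity measurement of the 2-D translation, corrupted by i.i.d. Gaussian noise) is a deliberately simplified stand-in for the thesis's optical-flow setup, and all numbers are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
v_true = np.array([1.0, -0.5])  # true 2-D translation (arbitrary units/frame)

def estimate_error(n_obs, sigma=0.1, trials=2000):
    """RMS error of the least-squares velocity estimate from n_obs
    noisy flow measurements (identity observation model assumed)."""
    errs = []
    for _ in range(trials):
        A = np.tile(np.eye(2), (n_obs, 1))  # stacked observation matrix
        b = A @ v_true + rng.normal(0.0, sigma, 2 * n_obs)
        v_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
        errs.append(np.linalg.norm(v_hat - v_true))
    return float(np.sqrt(np.mean(np.square(errs))))

# The RMS error falls roughly as 1/sqrt(n_obs), matching the statistical
# intuition that more cameras average out the measurement noise.
for n in (1, 4, 16):
    print(n, estimate_error(n))
```

This is exactly the Gaussian-noise-plus-least-squares mechanism the abstract analyzes; the simulation makes the 1/sqrt(n) error decay visible empirically.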

45 
Logistic regression with misclassified covariates using auxiliary data. Dong, Nathan Nguyen. January 2009 (has links)
Thesis (Ph. D.)--University of Texas at Arlington, 2009.

46 
Production log analysis and statistical error minimization. Li, Huitang. January 2000 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2000. / Vita. Includes bibliographical references (leaves 182-185). Available also in a digital version from Dissertation Abstracts.

47 
Variance reduction and variable selection methods for Alho's logistic capture-recapture model with applications to census data. Caples, Jerry Joseph. January 2000 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2000. / Vita. Includes bibliographical references (leaves 224-226). Available also in a digital version from Dissertation Abstracts.

48 
Error analysis for radiation transport. Tencer, John Thomas 18 February 2014 (has links)
All relevant sources of error in the numerical solution of the radiative transport equation are considered. Common spatial discretization methods are discussed for completeness; their application to the radiative transport equation is not substantially different from their application to any other partial differential equation. Several of the most prevalent angular approximations within the heat transfer community are implemented and compared. Three model problems, representing a range of application spaces, are proposed, and the relative accuracy of each angular approximation is assessed over a range of optical thickness and scattering albedo. The quantified comparison of these approximations on the basis of accuracy over such a wide parameter space is one of the contributions of this work.
The major original contribution of this work involves the treatment of errors associated with the energy dependence of intensity. The full-spectrum correlated-k distribution (FSK) method has received recent attention as a good compromise between computational expense and accuracy. Two approaches are taken towards quantifying the error associated with the FSK method. The Multi-Source Full-Spectrum k-Distribution (MSFSK) method makes use of the convenient property that the FSK method is exact for homogeneous media. It involves a line-by-line solution on a coarse grid and a number of k-distribution solutions on subdomains to effectively increase the grid resolution. This yields highly accurate solutions on fine grids and a known rate of convergence as the number of subdomains increases.
The stochastic full-spectrum k-distribution (SFSK) method is a more general approach to estimating the error in k-distribution solutions. The FSK method relies on a spectral reordering and scaling which greatly simplify the spectral dependence of the absorption coefficient. This reordering is not necessarily consistent across the entire domain, which results in errors. The SFSK method treats the absorption-line blackbody distribution function not as deterministic but as a stochastic process; the mean, covariance, and correlation structure are all fit empirically to data from a high-resolution spectral database. The standard deviation of the heat flux prediction is found to be a good error estimator for the k-distribution method.

49 
Error analysis of boundary conditions in the Wigner transport equation. Philip, Timothy 21 September 2015 (has links)
This work presents a method to quantitatively calculate the error induced by applying approximate boundary conditions in quantum charge transport simulations based on the Wigner transport equation (WTE). Except in the special case of a homogeneous material, no methodology exists for calculating exact boundary conditions. Consequently, boundary conditions are customarily approximated by equilibrium or near-equilibrium distributions known to be correct in the classical limit. This practice can, however, degrade the accuracy of numerical calculations and can even lead to unphysical results.
The Yoder group has recently developed a series expansion for exact boundary conditions which, when truncated, yields boundary conditions of successively greater accuracy as higher-order terms are retained, though at a computational cost that should not be underestimated.
This thesis focuses on the calculation and analysis of the second-order term of the series expansion. A method is demonstrated to calculate this term for any device structure in one spatial dimension. In addition, a numerical analysis directly compares the first- and second-order terms. Finally, a method to incorporate the first-order term into simulation is formulated.

50 
The accuracy of parameter estimates and coverage probability of population values in regression models upon different treatments of systematically missing data. Othuon, Lucas Onyango A. 11 1900 (has links)
Several methods are available for the treatment of missing data. Most are based on the assumption that data are missing completely at random (MCAR). However, data sets that are MCAR are rare in psycho-educational research. This gives rise to the need to investigate the performance of missing data treatments (MDTs) with non-randomly or systematically missing data, an area that has received little attention from researchers in the past.

In the current simulation study, the performance of four MDTs, namely mean substitution (MS), pairwise deletion (PW), the expectation-maximization method (EM), and regression imputation (RS), was investigated in a linear multiple regression context. Four investigations were conducted, involving four predictors under low and high multiple R² and nine predictors under low and high multiple R². In addition, each investigation was conducted under three sample size conditions (94, 153, and 265). The design factors were missing pattern (2 levels), percent missing (3 levels), and non-normality (4 levels). This design gave rise to 72 treatment conditions, and the sampling was replicated one thousand times in each condition. MDTs were evaluated based on the accuracy of parameter estimates; in addition, the bias in parameter estimates and the coverage probability of regression coefficients were computed.

The effect of missing pattern, percent missing, and non-normality on the absolute error of the R² estimate was of practical significance. In the estimation of R², EM was the most accurate under the low R² condition, and PW was the most accurate under the high R² condition. No MDT was consistently least biased under the low R² condition. However, with nine predictors under the high R² condition, PW was generally the least biased, with a tendency to overestimate the population R². The mean absolute error (MAE) tended to increase with increasing non-normality and increasing percent missing. Also, the MAE in the R² estimate tended to be smaller under a monotonic missing pattern than under a non-monotonic pattern. MDTs were most differentiated at the highest level of percent missing (20%) and under a non-monotonic missing pattern.

In the estimation of regression coefficients, RS generally outperformed the other MDTs with respect to accuracy as measured by MAE. However, EM was competitive under the four-predictor, low-R² condition. MDTs were most differentiated only in the estimation of β₁, the coefficient of the variable with no missing values. MDTs were undifferentiated in their performance in the estimation of β₂, ..., βₚ, p = 4 or 9, although the MAE remained fairly constant across all the regression coefficients. The MAE increased with increasing non-normality and percent missing, but decreased with increasing sample size. The MAE was generally greater under a non-monotonic pattern than under a monotonic pattern. With four predictors, the least bias was under RS regardless of the magnitude of the population R²; with nine predictors, the least bias was under PW regardless of the population R².

The results for coverage probabilities were generally similar to those for the estimation of regression coefficients, with coverage probabilities closest to the nominal level under RS. As expected, coverage probabilities decreased with increasing non-normality for each MDT, with values closest to the nominal level for normal data. MDTs were more differentiated with respect to coverage probabilities under a non-monotonic pattern than under a monotonic pattern.

The results have several important implications for researchers. First, the choice of MDT was found to depend on the magnitude of the population R², the number of predictors, and the parameter estimate of interest. With the estimation of R² as the goal of analysis, use of EM is recommended if the anticipated R² is low (about .2); if the anticipated R² is high (about .6), use of PW is recommended. With the estimation of regression coefficients as the goal, the choice of MDT was found to be most crucial for the variable with no missing data. The RS method is most recommended with respect to the estimation accuracy of regression coefficients, although greater bias was recorded under RS than under PW or MS when the number of predictors was large (i.e., nine predictors). Second, the choice of MDT seems to be of little concern if the proportion of missing data is 10 percent, and also if the missing pattern is monotonic rather than non-monotonic. Third, the proportion of missing data seems to have less impact on the accuracy of parameter estimates under a monotonic missing pattern than under a non-monotonic one. Fourth, for the control of Type I error rates under the low R² condition, the EM method is recommended, as it produced coverage probabilities of regression coefficients closest to the nominal value at the .05 level; under the high R² condition, the RS method is recommended. Considering that simulated data were used in the present study, future research should attempt to validate these findings using real field data. A future investigator could also vary the number of predictors and the confidence interval used in calculating coverage probabilities to extend the generalizability of the results.
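The flavor of such a simulation study can be conveyed in a few lines. The sketch below is a toy setup with invented coefficients and MCAR rather than systematic missingness, so it does not reproduce the study's design; it simply compares the bias of a slope estimate under two of the MDTs discussed above, mean substitution and regression imputation:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(trials=500, n=150, miss_rate=0.3):
    """Mean bias of the slope estimate for y = 0.5*x1 + 0.5*x2 + e when x2
    has MCAR missing values, under two MDTs (toy setup, invented numbers)."""
    bias = {"mean_sub": [], "reg_imp": []}
    for _ in range(trials):
        x1 = rng.normal(size=n)
        x2 = 0.6 * x1 + rng.normal(scale=0.8, size=n)
        y = 0.5 * x1 + 0.5 * x2 + rng.normal(scale=1.0, size=n)
        miss = rng.random(n) < miss_rate

        for name in bias:
            x2_f = x2.copy()
            if name == "mean_sub":
                # Mean substitution: fill with the complete-case mean.
                x2_f[miss] = x2[~miss].mean()
            else:
                # Regression imputation: predict x2 from x1 on complete cases.
                b = np.polyfit(x1[~miss], x2[~miss], 1)
                x2_f[miss] = np.polyval(b, x1[miss])
            X = np.column_stack([np.ones(n), x1, x2_f])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            bias[name].append(beta[2] - 0.5)
    return {k: float(np.mean(v)) for k, v in bias.items()}

# Mean substitution attenuates the slope toward zero, while regression
# imputation stays close to unbiased in this MCAR toy example.
print(simulate())
```

Even this small example reproduces a qualitative pattern consistent with the abstract's findings: the choice of treatment visibly changes the bias of the coefficient estimates, which is why the full study crosses MDTs with missing pattern, percent missing, and non-normality.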
