41 |
Implementation of Pipeline Floating-Point CORDIC Processor and its Error Analysis and Applications
Yang, Chih-yu, 19 August 2007 (has links)
In this thesis, the traditional fixed-point CORDIC algorithm is extended to a floating-point version in order to compute transcendental functions (such as sine/cosine, logarithm, and powering functions) with high accuracy over a large range. Based on different algorithm derivations, two floating-point, high-throughput, pipelined CORDIC architectures are proposed. The first architecture uses barrel shifters to implement the shift operations in each pipeline stage; the second uses a purely hardwired method for the shift operations. Another key contribution of this thesis is an analysis of the execution errors in the floating-point CORDIC architectures and a comparison with results obtained from pure software implementations. Finally, the thesis applies the floating-point CORDIC to realize the rotation-related operations required in 3D graphics applications.
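For readers unfamiliar with the underlying recurrence, the sketch below shows rotation-mode CORDIC computing sine and cosine in ordinary Python floating point. It is only a minimal illustration of the algorithm the thesis builds on, not the proposed pipelined hardware; the hardware shift-add steps appear here as multiplications by 2^-i, and the iteration count is an arbitrary choice.

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Rotation-mode CORDIC: returns (sin(theta), cos(theta)) for |theta| <~ 1.74 rad."""
    # Elementary rotation angles atan(2^-i) and the aggregate scale factor K
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0                    # steer the residual angle toward zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * K, x * K

print(cordic_sin_cos(0.5))   # approx (0.4794, 0.8776)
```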
|
42 |
Study on the syntax and the pedagogy of Chinese idioms
Jheng, Pei-siou, 01 July 2005 (has links)
Previous work on Chinese idioms has made significant progress on both the meaning-derivation process and the internal combination patterns of idioms. As for teaching idioms at junior high schools, current textbooks face three problems: first, there is no appropriate idiom list; second, teachers rarely mention the syntactic functions and collocational relations of idioms; third, students often use idioms incorrectly. This study investigates the syntactic functions of idioms by examining learners' errors.
Chapter 1 clarifies the definition and characteristics of idioms. Chapter 2 carries out the error analysis. Three types of error are identified: semantic errors, grammatical errors, and semantic-restriction errors, the last being the most frequent. With regard to the influence of familiarity and transparency, idioms that are unfamiliar and opaque are the most difficult to learn. Having identified these learning difficulties, Chapter 4 examines the syntactic functions and internal construction of idioms. The most common construction type is subject-predicate, and the most common syntactic function is predicate. The function of an idiom cannot be predicted from its construction, and the arguments of an idiom do not affect its function either. However, the core elements of an idiom do correlate with its function. Chapter 5 explains learners' difficulties and designs teaching strategies, such as teaching collocations. Moreover, the thesis provides two idiom lists, one for junior high school students and the other for advanced learners.
|
43 |
Error Analysis in Optical Flows of Machine Vision with Multiple Cameras
Chang, Chao-jen, 27 July 2006 (has links)
Abstract
In image-tracking research aimed at recovering an object's position or velocity in space, it is expected that increasing the number of cameras reduces the estimation error, and this is indeed observed in practical applications. The theory behind this effect, however, has not been fully established. Motivated by this gap, the thesis seeks to lay a statistical foundation for this machine-vision problem. Extensive error analysis and computer simulations are conducted for the translational motion vector solved by the least squares technique, with Gaussian noise incorporated into the optical flow components. The results are intended to provide an effective theoretical model for further development.
Keywords: image tracking, least squares method, Gaussian distribution, error analysis.
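The scaling effect the abstract describes can be reproduced with a toy Monte Carlo experiment. The sketch below is not the thesis's measurement model; it simply assumes each camera delivers an independent, Gaussian-noised observation of the same 2-D translation vector and solves the stacked system by least squares, so the mean error shrinks roughly as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_error(t_true, n_cameras, sigma=0.1, n_trials=2000):
    """Average least-squares error of the translation estimate for a given camera count."""
    errs = np.empty(n_trials)
    for k in range(n_trials):
        # One noisy 2-D flow observation per camera, stacked into A t = b
        A = np.tile(np.eye(2), (n_cameras, 1))
        b = np.tile(t_true, n_cameras) + sigma * rng.standard_normal(2 * n_cameras)
        t_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
        errs[k] = np.linalg.norm(t_hat - t_true)
    return errs.mean()

t_true = np.array([1.0, -0.5])
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cameras: mean error {mean_error(t_true, n):.4f}")
```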
|
44 |
Logistic regression with misclassified covariates using auxiliary data
Dong, Nathan Nguyen, January 2009 (has links)
Thesis (Ph. D.)--University of Texas at Arlington, 2009.
|
45 |
Production log analysis and statistical error minimization
Li, Huitang, January 2000 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2000. / Vita. Includes bibliographical references (leaves 182-185). Available also in a digital version from Dissertation Abstracts.
|
46 |
Variance reduction and variable selection methods for Alho's logistic capture-recapture model with applications to census data
Caples, Jerry Joseph, January 2000 (has links)
Thesis (Ph. D.)--University of Texas at Austin, 2000. / Vita. Includes bibliographical references (leaves 224-226). Available also in a digital version from Dissertation Abstracts.
|
47 |
Error analysis for radiation transport
Tencer, John Thomas, 18 February 2014 (has links)
All relevant sources of error in the numerical solution of the radiative transport equation are considered. Common spatial discretization methods are discussed for completeness; their application to the radiative transport equation is not substantially different from their application to any other partial differential equation. Several of the most prevalent angular approximations within the heat transfer community are implemented and compared. Three model problems are proposed, representing a range of application spaces, and the relative accuracy of each angular approximation is assessed over a range of optical thickness and scattering albedo. The quantified comparison of these approximations on the basis of accuracy over such a wide parameter space is one of the contributions of this work.
The major original contribution of this work involves the treatment of errors associated with the energy dependence of intensity. The full spectrum correlated-k distribution (FSK) method has received recent attention as a good compromise between computational expense and accuracy. Two approaches are taken towards quantifying the error associated with the FSK method. The Multi-Source Full Spectrum k-Distribution (MSFSK) method makes use of the convenient property that the FSK method is exact for homogeneous media. It involves a line-by-line solution on a coarse grid and a number of k-distribution solutions on subdomains to effectively increase the grid resolution. This yields highly accurate solutions on fine grids and a known rate of convergence as the number of subdomains increases.
The stochastic full spectrum k-distribution (SFSK) method is a more general approach to estimating the error in k-distribution solutions. The FSK method relies on a spectral reordering and scaling that greatly simplify the spectral dependence of the absorption coefficient. This reordering is not necessarily consistent across the entire domain, which results in errors. The SFSK method treats the absorption-line blackbody distribution function not as deterministic but as a stochastic process. The mean, covariance, and correlation structure are all fit empirically to data from a high-resolution spectral database. The standard deviation of the heat flux prediction is found to be a good error estimator for the k-distribution method.
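To make the "spectral reordering" idea concrete, the sketch below builds a full-spectrum k-distribution k(g) from a hypothetical line-by-line absorption spectrum by sorting the spectrum and accumulating a Planck-weighted cumulative distribution. It illustrates only the basic FSK reordering step referred to above, not the MSFSK or SFSK machinery developed in the thesis; the spectrum and weight used here are synthetic.

```python
import numpy as np

def full_spectrum_k_distribution(kappa, planck_weight, n_g=32):
    """Reorder an absorption spectrum kappa(nu) into a smooth k-distribution k(g).

    g is the Planck-weighted fraction of the spectrum with absorption
    coefficient below k, so k(g) is monotone even when kappa(nu) is erratic.
    """
    w = planck_weight / planck_weight.sum()
    order = np.argsort(kappa)              # the spectral reordering step
    k_sorted = kappa[order]
    g_cum = np.cumsum(w[order])            # cumulative Planck-weighted distribution
    g = np.linspace(0.0, 1.0, n_g)
    return g, np.interp(g, g_cum, k_sorted)

# Synthetic example: an erratic "spectrum" collapses to a smooth monotone k(g)
nu = np.linspace(1.0, 10.0, 5000)                  # hypothetical spectral grid
kappa = 0.1 + np.abs(np.sin(7.0 * nu)) ** 5        # hypothetical absorption coefficient
weight = np.exp(-0.5 * (nu - 5.0) ** 2)            # hypothetical Planck-like weight
g, k_of_g = full_spectrum_k_distribution(kappa, weight)
print(k_of_g[:5], k_of_g[-5:])                     # increases monotonically from ~0.1
```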
|
48 |
Error analysis of boundary conditions in the Wigner transport equation
Philip, Timothy, 21 September 2015 (has links)
This work presents a method to quantitatively calculate the error induced by applying approximate boundary conditions in quantum charge transport simulations based on the Wigner transport equation (WTE). Except for the special case of homogeneous material, no methodology exists for calculating exact boundary conditions. Consequently, boundary conditions are customarily approximated by equilibrium or near-equilibrium distributions known to be correct in the classical limit. This practice can, however, have a deleterious impact on the accuracy of numerical calculations and can even lead to unphysical results.
The Yoder group has recently developed a series expansion for exact boundary conditions which, when truncated, can be used to calculate boundary conditions of successively greater accuracy by retaining successively higher-order terms, though at a computational cost that should not be underestimated.
This thesis focuses on the calculation and analysis of the second-order term of the series expansion. A method is demonstrated to calculate the term for any general device structure in one spatial dimension. In addition, a numerical analysis is undertaken to directly compare the first- and second-order terms. Finally, a method to incorporate the first-order term into simulation is formulated.
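For orientation, the sketch below shows the kind of approximate boundary condition the abstract criticizes: the Wigner function entering a 1-D device is simply replaced by a classical-limit equilibrium (Maxwell-Boltzmann) distribution at each contact. It is a generic illustration only, not the series-expansion boundary conditions studied in the thesis; the effective mass, temperature, and k-grid are hypothetical choices.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
KB   = 1.380649e-23      # J/K
M_E  = 9.1093837015e-31  # kg

def equilibrium_inflow(k, temperature=300.0, m_eff=0.067 * M_E):
    """Classical-limit equilibrium distribution over the wavevector grid k."""
    e_kin = (HBAR * k) ** 2 / (2.0 * m_eff)
    f = np.exp(-e_kin / (KB * temperature))
    return f / f.max()                      # arbitrary normalization for the sketch

# Hypothetical 1-D k-grid for a discretized WTE solver
k = np.linspace(-1.0e9, 1.0e9, 128)         # wavevectors in 1/m
f_eq = equilibrium_inflow(k)

# Approximate inflow boundary conditions: fix only the states entering the device
f_left  = np.where(k > 0, f_eq, 0.0)        # injected from the left contact
f_right = np.where(k < 0, f_eq, 0.0)        # injected from the right contact
```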
|
49 |
The accuracy of parameter estimates and coverage probability of population values in regression models upon different treatments of systematically missing data
Othuon, Lucas Onyango A., 11 1998 (has links)
Several methods are available for the treatment of missing data. Most of them are based on the assumption that data are missing completely at random (MCAR), yet data sets that are MCAR are rare in psycho-educational research. This gives rise to the need to investigate the performance of missing data treatments (MDTs) with non-randomly, or systematically, missing data, an area that has received little attention from researchers in the past.
In the current simulation study, the performance of four MDTs, namely mean substitution (MS), pairwise deletion (PW), the expectation-maximization method (EM), and regression imputation (RS), was investigated in a linear multiple regression context. Four investigations were conducted, involving four predictors under low and high multiple R² and nine predictors under low and high multiple R². In addition, each investigation was conducted under three sample size conditions (94, 153, and 265). The design factors were missing pattern (2 levels), percent missing (3 levels), and non-normality (4 levels), giving rise to 72 treatment conditions. The sampling was replicated one thousand times in each condition.
MDTs were evaluated on the accuracy of parameter estimates. In addition, the bias in parameter estimates and the coverage probability of regression coefficients were computed.
The effect of missing pattern, percent missing, and non-normality on the absolute error of the R² estimate was of practical significance. In the estimation of R², EM was the most accurate under the low R² condition, and PW was the most accurate under the high R² condition. No MDT was consistently least biased under the low R² condition; however, with nine predictors under the high R² condition, PW was generally the least biased, with a tendency to overestimate the population R². The mean absolute error (MAE) tended to increase with increasing non-normality and increasing percent missing. The MAE of the R² estimate also tended to be smaller under the monotonic missing pattern than under the non-monotonic pattern. MDTs were most differentiated at the highest level of percent missing (20%) and under the non-monotonic missing pattern.
In the estimation of regression coefficients, RS generally outperformed the other MDTs with respect to accuracy as measured by MAE, although EM was competitive under the four-predictor, low R² condition. MDTs were most differentiated only in the estimation of β₁, the coefficient of the variable with no missing values, and were undifferentiated in the estimation of β₂, ..., βp (p = 4 or 9), although the MAE remained fairly constant across all the regression coefficients. The MAE increased with increasing non-normality and percent missing but decreased with increasing sample size, and it was generally greater under the non-monotonic pattern than under the monotonic pattern. With four predictors, the least bias was under RS regardless of the magnitude of the population R²; with nine predictors, the least bias was under PW regardless of the population R².
The results for coverage probabilities were generally similar to those for the regression coefficients, with coverage probabilities closest to the nominal level under RS. As expected, coverage probabilities decreased with increasing non-normality for each MDT, with values closest to the nominal value for normal data. MDTs were more differentiated with respect to coverage probabilities under the non-monotonic pattern than under the monotonic pattern.
The results have several important implications for researchers. First, the choice of MDT was found to depend on the magnitude of the population R², the number of predictors, and the parameter estimate of interest. When estimation of R² is the goal of the analysis, EM is recommended if the anticipated R² is low (about .2), whereas PW is recommended if the anticipated R² is high (about .6). When estimation of regression coefficients is the goal, the choice of MDT is most crucial for the variable with no missing data; RS is the most recommended method with respect to estimation accuracy of the regression coefficients, although greater bias was recorded under RS than under PW or MS when the number of predictors was large (i.e., nine predictors). Second, the choice of MDT seems to be of little concern if the proportion of missing data is 10 percent, and also if the missing pattern is monotonic rather than non-monotonic. Third, the proportion of missing data seems to have less impact on the accuracy of parameter estimates under the monotonic missing pattern than under the non-monotonic pattern. Fourth, for the control of Type I error rates under the low R² condition, the EM method is recommended, as it produced coverage probabilities of regression coefficients closest to the nominal value at the .05 level; for the control of Type I error rates under the high R² condition, the RS method is recommended. Considering that simulated data were used in the present study, future research should attempt to validate these findings using real field data. A future investigator could also vary the number of predictors as well as the confidence interval used in the calculation of coverage probabilities to extend the generalization of the results.
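As a concrete, much-reduced illustration of the kind of comparison the study performs, the sketch below simulates two correlated predictors, deletes one of them systematically (for the cases with the largest values of the other, so the data are not MCAR), and compares mean substitution with regression imputation on the absolute error of the R² estimate. The design here (two predictors, one missingness mechanism, one sample size) is a toy stand-in for the study's 72-condition design, and the function names and parameter values are illustrative assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(n=153, rho=0.3, beta=(0.4, 0.4), sigma=1.0):
    """Two correlated predictors and a linear outcome (a toy stand-in for the study's design)."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    X = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    y = X @ np.array(beta) + sigma * rng.standard_normal(n)
    return X, y

def impose_systematic_missing(X, frac=0.2):
    """Delete X2 for the cases with the largest X1 values (systematic, not MCAR)."""
    X = X.copy()
    cutoff = np.quantile(X[:, 0], 1.0 - frac)
    X[X[:, 0] > cutoff, 1] = np.nan
    return X

def mean_substitution(X):
    X = X.copy()
    miss = np.isnan(X[:, 1])
    X[miss, 1] = np.nanmean(X[:, 1])
    return X

def regression_imputation(X):
    """Impute X2 from X1 using the complete cases (the RS idea, without a residual term)."""
    X = X.copy()
    miss = np.isnan(X[:, 1])
    slope, intercept = np.polyfit(X[~miss, 0], X[~miss, 1], 1)
    X[miss, 1] = intercept + slope * X[miss, 0]
    return X

def r_squared(X, y):
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

errors = {"MS": [], "RS": []}
for _ in range(1000):
    X, y = simulate()
    r2_benchmark = r_squared(X, y)              # complete-data benchmark for this sample
    X_miss = impose_systematic_missing(X)
    errors["MS"].append(abs(r_squared(mean_substitution(X_miss), y) - r2_benchmark))
    errors["RS"].append(abs(r_squared(regression_imputation(X_miss), y) - r2_benchmark))

for name, errs in errors.items():
    print(f"{name}: MAE of R^2 estimate = {np.mean(errs):.4f}")
```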
|
50 |
The performance of three fitting criteria for multidimensional scaling
McGlynn, Marion, January 1990 (has links)
A Monte Carlo study was performed to investigate the ability of MSCAL to recover, by Euclidean-metric multidimensional scaling (MDS), the true structure of dissimilarity data with different underlying error distributions. Error models for three typical error distributions (normal, lognormal, and squared normal) are implemented in MSCAL through data transformations incorporated into the criterion function. Recovery of the true configuration and true distances for (i) single-replication data with low error levels and (ii) matrix-conditional data with high error levels was studied as a function of the type of error distribution, the fitting criterion, and the dimensionality. Results indicated that when the data conform to the assumed error distribution, the corresponding fitting criterion provides improved recovery, but only for data with low error levels and when the true dimensionality is known.
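The idea of folding an error model into the fitting criterion via a data transformation can be sketched as follows. This is a generic least-squares MDS criterion written from scratch, not MSCAL's actual code; the identity, log, and square-root transforms stand in for the normal, lognormal, and squared-normal error hypotheses, and the toy data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def pairwise_distances(X):
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def make_criterion(delta, transform):
    """Least-squares MDS criterion on transformed dissimilarities and distances.

    transform = lambda x: x  -> normal-error model
    transform = np.log       -> lognormal-error model
    transform = np.sqrt      -> squared-normal-error model
    """
    iu = np.triu_indices_from(delta, k=1)
    t_delta = transform(delta[iu])

    def criterion(x_flat, n_points, n_dims):
        X = x_flat.reshape(n_points, n_dims)
        d = pairwise_distances(X)[iu]
        return np.sum((transform(d + 1e-12) - t_delta) ** 2)

    return criterion

# Toy recovery experiment: true 2-D configuration with lognormal error on the distances
rng = np.random.default_rng(1)
n, p = 12, 2
X_true = rng.standard_normal((n, p))
delta = pairwise_distances(X_true) * np.exp(0.1 * rng.standard_normal((n, n)))
delta = (delta + delta.T) / 2.0
np.fill_diagonal(delta, 0.0)

crit = make_criterion(delta, np.log)        # fit with the lognormal-error criterion
res = minimize(crit, rng.standard_normal(n * p), args=(n, p), method="L-BFGS-B")
X_hat = res.x.reshape(n, p)                 # recovered configuration (up to rotation/translation)
```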
|