91 |
Native Swedish Speakers’ Problems with English Prepositions / Jansson, Hanna, January 2007
This essay investigates native Swedish speakers’ problems in the area of prepositions. A total of 19 compositions, containing 678 prepositions, written by native Swedish senior high school students were analysed. Every preposition in the material was classified as basic, systematic or idiomatic, and all errors of substitution, addition and omission were counted and corrected. As hypothesised, the fewest errors were found in the category of basic prepositions and the most in the category of idiomatic prepositions. However, the small difference between the systematic and idiomatic categories suggests that the learners have greater problems with systematic prepositions than was first thought. Basic prepositions cause few or no problems. Systematic prepositions, i.e. those that are rule-governed or whose usage is somehow generalisable, seem to be quite problematic for native Swedish speakers. Idiomatic prepositions seem to be learnt as ‘chunks’: the learners either know the whole construction or do not use it at all, and these too cause some problems for Swedish speakers. Since prepositions are often perceived as rather arbitrary, with no rules that describe them sufficiently, these conclusions may not surprise teachers, students and language learners. The greatest cause of error was interference from Swedish, with a few errors explicable as intralingual. It seems that the learners’ knowledge of their mother tongue strongly influences their acquisition of English prepositions.
|
92 |
Toward the estimation of errors in cloud cover derived by threshold methods / Chang, Fu-Lung, 01 July 1991
The accurate determination of cloud cover amount is important for characterizing
the role of cloud feedbacks in the climate system. Clouds have a large influence on
the climate system through their effect on the earth's radiation budget. As indicated
by the NASA Earth Radiation Budget Experiment (ERBE), the change in the earth's
radiation budget brought about by clouds is ~-15 Wm⁻² on a global scale, which
is several times the ~4 Wm⁻² gain in energy to the troposphere-surface system that
would arise from a doubling of CO₂ in the atmosphere. Consequently, even a small
change in global cloud amount may lead to a major change in the climate system.
Threshold methods are commonly used to derive cloud properties from satellite
imagery data. Here, in order to quantify errors due to thresholds, cloud cover is
obtained using three different values of thresholds. The three thresholds are applied to
the 11 μm, (4 km)² NOAA-9 AVHRR GAC satellite imagery data over four oceanic
regions. Regional cloud-cover fractions are obtained for two different scales, (60 km)²
and (250 km)². The spatial coherence method for obtaining cloud cover from imagery
data is applied to coincident data. The differences between cloud cover derived by the
spatial coherence method and by the threshold methods depend on the setting of the
threshold. Because the spatial coherence method is believed to provide good estimates
of cloud cover for opaque, single-layered cloud systems, this study is limited to such
systems, and the differences in derived cloud cover are interpreted as errors due to the
application of thresholds. The threshold errors are caused by pixels that are partially
covered by clouds, and the errors depend on the regional-scale cloud cover.
The errors can be derived from the distribution of pixel-scale cloud cover.
Two simple models which assume idealized distributions for pixel-scale cloud
cover are constructed and used to estimate the threshold errors. The results show
that these models, though simple, perform rather well in estimating the differences
between cloud cover derived by the spatial coherence method and that obtained by
threshold methods. / Graduation date: 1992
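To make the threshold idea concrete, the following is a minimal sketch, not code from the thesis: the cloud fraction over a region is simply the fraction of pixels whose 11 μm brightness temperature falls below a chosen threshold, and varying the threshold shifts the derived cover. All array names and numeric values here are illustrative assumptions.

```python
import numpy as np

def threshold_cloud_fraction(bt_11um, threshold_k):
    """Fraction of pixels flagged cloudy: those whose 11-micron brightness
    temperature (K) falls below the threshold (clouds are colder than the sea)."""
    return np.mean(bt_11um < threshold_k)

# Hypothetical (60 km)^2 region of (4 km)^2 GAC pixels -> a 15 x 15 grid.
rng = np.random.default_rng(0)
bt = rng.normal(loc=285.0, scale=8.0, size=(15, 15))  # made-up temperatures, K

# Three different thresholds, as in the study; the derived cover depends on the choice.
for t in (272.0, 277.0, 282.0):
    print(f"threshold {t} K -> cloud cover {threshold_cloud_fraction(bt, t):.2f}")
```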
|
93 |
Semiparametric maximum likelihood for regression with measurement error / Suh, Eun-Young, 03 May 2001
Semiparametric maximum likelihood analysis allows inference in errors-in-variables
models with small loss of efficiency relative to full likelihood analysis but
with significantly weakened assumptions. In addition, since no distributional
assumptions are made for the nuisance parameters, the analysis more nearly
parallels that for usual regression. These highly desirable features and the high
degree of modelling flexibility permitted warrant the development of the approach
for routine use. This thesis does so for the special cases of linear and nonlinear
regression with measurement errors in one explanatory variable. A transparent and
flexible computational approach is developed, the analysis is exhibited on some
examples, and finite sample properties of estimates, approximate standard errors,
and likelihood ratio inference are clarified with simulation. / Graduation date: 2001
|
94 |
Using p-adic valuations to decrease computational error / Limmer, Douglas J., 08 June 1993
The standard way of representing numbers on computers gives rise to errors
which increase as computations progress. Using p-adic valuations can reduce
error accumulation. Valuation theory tells us that p-adic and standard valuations
cannot be directly compared. The p-adic valuation can, however, be used in
an indirect way. This gives a method of doing arithmetic on a subset of the
rational numbers without any error. This exactness is highly desirable, and can
be used to solve certain kinds of problems which the standard valuation cannot
conveniently handle. Programming a computer to use these p-adic numbers is
not difficult, and in fact uses computer resources similar to the standard floating-point
representation for real numbers. This thesis develops the theory of p-adic
valuations, discusses their implementation, and gives some examples where p-adic
numbers achieve better results than conventional floating-point computation. / Graduation date: 1994
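As a small illustration of the two ideas in this abstract, here is a hedged sketch in Python of the p-adic valuation itself and of the error-free rational arithmetic the thesis aims at; the thesis's own implementation (fixed-length p-adic digit representations) is not reproduced here. The valuation of a nonzero rational satisfies v_p(a/b) = v_p(a) - v_p(b).

```python
from fractions import Fraction

def padic_valuation(x: Fraction, p: int) -> int:
    """v_p(x) for a nonzero rational x = a/b: v_p(a) - v_p(b), where v_p(n)
    is the exponent of p in the factorisation of the integer n."""
    if x == 0:
        raise ValueError("v_p(0) is +infinity by convention")
    def vp(n: int) -> int:
        n, v = abs(n), 0
        while n % p == 0:
            n, v = n // p, v + 1
        return v
    return vp(x.numerator) - vp(x.denominator)

# v_5(75/8) = 2: 75 = 3 * 5^2, and 8 carries no factor of 5.
print(padic_valuation(Fraction(75, 8), 5))  # 2
# Exact arithmetic on a subset of the rationals, with no rounding error:
print(Fraction(1, 3) + Fraction(1, 6))      # exactly 1/2
```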
|
95 |
Limitations of Geometric Hashing in the Presence of Gaussian Noise / Sarachik, Karen B., 01 October 1992
This paper presents a detailed error analysis of geometric hashing for 2D object recognition. We analytically derive the probability of false positives and negatives as a function of the number of model and image features and of occlusion, using a 2D Gaussian noise model. The results are presented in the form of ROC (receiver operating characteristic) curves, which demonstrate that the 2D Gaussian error model always performs better than the bounded uniform model. They also directly indicate the optimal performance that can be achieved for a given clutter and occlusion rate, and how to choose the thresholds to achieve these rates.
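To see what the noise perturbs, here is a minimal sketch of the geometric-hashing invariant for 2D point sets, an illustration under assumed values rather than the paper's code: a point's coordinates in the frame of an ordered basis pair are invariant under translation, rotation, and scale, so they index a hash table, and Gaussian noise on the image features displaces the hashed coordinates.

```python
import cmath

def hash_coords(p1: complex, p2: complex, x: complex) -> complex:
    """Coordinates of x in the frame of basis pair (p1, p2), encoded as one
    complex number; invariant under any similarity transform z -> a*z + b."""
    return (x - p1) / (p2 - p1)

# A model triple and its image under a similarity transform plus per-point noise.
model = [0 + 0j, 2 + 0j, 1 + 1j]
a, b = cmath.rect(1.5, 0.7), 3 + 4j                   # scale-rotation and shift
noise = [0.02 + 0.01j, -0.01 + 0.02j, 0.015 - 0.02j]  # stand-in for Gaussian noise
image = [a * z + b + n for z, n in zip(model, noise)]

print(hash_coords(*model))  # (0.5+0.5j): the stored hash-table entry
print(hash_coords(*image))  # nearby, but displaced by the sensing noise
```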
|
96 |
Estimation of the standard error and confidence interval of the indirect effect in multiple mediator models / Briggs, Nancy Elizabeth, January 2006
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 135-139).
|
97 |
Analysis of epidemiological data with covariate errors / Delongchamp, Robert, 18 February 1993
In regression analysis, random errors in an explanatory variable cause the
usual estimates of its regression coefficient to be biased. Although this problem has
been studied for many years, routine methods have not emerged. This thesis
investigates some aspects of this problem in the setting of analysis of epidemiological
data.
A major premise is that methods to cope with this problem must account for
the shape of the frequency distribution of the true covariable, e.g., exposure. This is
not widely recognized, and many existing methods focus only on the variability of the
true covariable, rather than on the shape of its distribution. Confusion about this
issue is exacerbated by the existence of two classical models, one in which the
covariable is a sample from a distribution and the other in which it is a collection of
fixed values. A unified approach is taken here, in which for the latter of these models
more attention than usual is given to the frequency distribution of the fixed values.
In epidemiology the distribution of exposures is often very skewed, making
these issues particularly important. In addition, the data sets can be very large, and
another premise is that differences in the performance of methods are much greater
when the samples are very large.
Traditionally, methods have largely been evaluated by their ability to remove
bias from the regression estimates. A third premise is that in large samples there may
be various methods that will adequately remove the bias, but they may differ widely in
how nearly they approximate the estimates that would be obtained using the
unobserved true values.
A collection of old and new methods is considered, representing a variety of
basic rationales and approaches. Some comparisons among them are made on
theoretical grounds provided by the unified model. Simulation results are given which
tend to confirm the major premises of this thesis. In particular, it is shown that the
performance of one of the most standard approaches, the "correction for attenuation"
method, is poor relative to other methods when the sample size is large and the
distribution of covariables is skewed. / Graduation date: 1993
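For readers unfamiliar with the "correction for attenuation" baseline the thesis criticises, here is a hedged simulation sketch, with all numeric values assumed rather than taken from the thesis: random error in the covariable shrinks the naive regression slope by the reliability ratio lambda = var(X) / (var(X) + var(U)), and the classical correction simply divides the naive slope by lambda. With a skewed exposure distribution this removes the bias on average, but, as the thesis argues, it need not track the estimates one would obtain from the unobserved true values.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma_u = 100_000, 2.0, 0.7

x = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # skewed "true exposure"
w = x + rng.normal(0.0, sigma_u, size=n)        # observed, error-contaminated covariable
y = beta * x + rng.normal(0.0, 1.0, size=n)

# Naive slope from regressing y on w is attenuated by the reliability ratio.
naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)
lam = np.var(x, ddof=1) / (np.var(x, ddof=1) + sigma_u**2)  # in practice estimated

print(f"naive {naive:.3f}, corrected {naive / lam:.3f}, true {beta}")
```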
|
98 |
Acquisition of Hebrew Noun Plurals in Early Immersion and Bilingual Education / Yunger, Robyn Rebecca, 01 January 2011
This study examined the acquisition of Hebrew noun plurals in early immersion and bilingual education by focusing on performance, as well as on morpho-syntactic and semantic errors in inflecting nouns. A total of 196 students from Senior Kindergarten (n = 86) and grades 1 (n = 58) and 2 (n = 53) were administered measures of inflectional morphology in Hebrew. Results indicated that children applied high-frequency, salient, simple-to-apply inflectional patterns involving male-female nouns, as well as the basic way of marking plurality. Two major obstacles in the pluralisation of Hebrew nouns were suffix regularity and stem transparency. Error analysis revealed three categories of responses: rule-based, analogy-based and non-strategic errors. The principal conclusion was that, errors notwithstanding, young children learning Hebrew as a foreign language are moving toward an understanding of plural formation. Morpho-syntactic structures develop gradually over time and with exposure to Hebrew instruction.
|
100 |
On Generating Complex Numbers for FFT and NCO Using the CORDIC Algorithm / Andersson, Anton, January 2008
This report documents the thesis work carried out by Anton Andersson for Coresonic AB. The task was to develop an accelerator that can generate complex numbers suitable for fast Fourier transforms (FFT) and for tuning the phase of complex signals (NCO). Of the many ways to achieve this, the CORDIC algorithm was chosen: it is very well suited because its basic implementation rotates 2D vectors using only shift and add operations. Error bounds and a proof of convergence are derived carefully. The accelerator was implemented in VHDL in such a way that all critical parameters are easy to change. Performance measures were extracted by simulating realistic test cases and comparing the output with reference data precomputed at high precision. Hardware costs were estimated by synthesizing a set of different configurations; plotting performance against cost then makes it possible to choose an optimal configuration. The maximum errors extracted from the simulations seemed rather large for some configurations, but histograms of the maximum-error distribution revealed that the typical error is often much smaller than the largest one. Even after troubleshooting, the errors still seem somewhat larger than those achieved by other implementations of CORDIC; the precision was nevertheless concluded to be sufficient for the targeted applications.
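As an illustration of the algorithm at the heart of the thesis, here is a floating-point sketch of CORDIC in rotation mode, a behavioural model under stated assumptions rather than the VHDL accelerator itself: each iteration rotates by ±arctan(2⁻ⁱ) using only operations that reduce to shifts and adds in hardware, and the fixed gain of the chained micro-rotations is compensated up front.

```python
import math

def cordic_cos_sin(angle, iterations=16):
    """CORDIC rotation mode: drive the residual angle z to zero with
    micro-rotations of +/- arctan(2^-i); converges for |angle| <= ~1.743 rad."""
    atans = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = math.prod(math.sqrt(1.0 + 4.0 ** -i) for i in range(iterations))
    x, y, z = 1.0 / gain, 0.0, angle   # pre-scale so the output is unit length
    for i, a in enumerate(atans):
        d = 1.0 if z >= 0.0 else -1.0  # rotate toward the remaining angle
        # The multiplications by 2^-i become bit shifts in hardware.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x, y  # (cos(angle), sin(angle)), e.g. one FFT twiddle factor

print(cordic_cos_sin(math.pi / 5))                   # ~ (0.80902, 0.58779)
print(math.cos(math.pi / 5), math.sin(math.pi / 5))  # reference values
```

More iterations shrink both the angle-approximation error and the gain-compensation error, which is the performance-versus-cost trade-off the thesis explores across configurations.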
|