241 |
An Analysis of English Essays Written by Swedish Students
Grant, Sofia January 2016 (has links)
The aim of this study is to analyse essays written in English by Swedish pupils and to map the most common errors made in written communication. The grammatical features selected for the analysis are prepositions, articles, verb forms, subject-verb agreement and word order. The errors are then grouped and ranked using Obligatory Occasion Analysis, not only to assess the pupils’ development but also to help teachers with their lesson planning.
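In Obligatory Occasion Analysis, a feature is scored as the proportion of obligatory occasions in which it is supplied correctly, and features can then be ranked by that score. A minimal sketch of the grouping-and-ranking step (the feature names follow the abstract, but all counts are hypothetical):

```python
# Obligatory Occasion Analysis: a feature's accuracy is the number of
# correct suppliances divided by the number of obligatory occasions.
# All counts below are hypothetical, for illustration only.
def ooa_score(correct, obligatory):
    """Return the accuracy (0-1) for one grammatical feature."""
    return correct / obligatory if obligatory else 0.0

errors = {
    "prepositions": (38, 50),   # (correct, obligatory occasions)
    "articles": (45, 50),
    "verb forms": (30, 50),
}

# Rank features from most to least problematic (lowest accuracy first).
ranked = sorted(errors, key=lambda f: ooa_score(*errors[f]))
for feature in ranked:
    print(f"{feature}: {ooa_score(*errors[feature]):.0%}")
```

Such a ranking directly supports the abstract's stated goal of prioritising features for lesson planning.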
|
242 |
Evaluation of the user interface of the BLAST annotation tool
Kondapalli, Vamshi Prakash January 2012 (has links)
In general, annotations are notes made on a text while reading, for example by highlighting or underlining. In machine translation, such markings of the text serve as error annotations, which carry information about the classification of translation errors. The main focus of this thesis was to evaluate the graphical user interface of an annotation tool called BLAST, which can be used to perform human error analysis for any language and any machine translation system. The primary intended use of BLAST is the annotation of translation errors. The evaluation of BLAST focuses on identifying usability issues, assessing understandability, and proposing a redesign to overcome the usability issues found. By allowing the subjects to explore BLAST, the usage and performance of the tool were observed and later explained. Five participants were involved in this usability study and were asked to perform user tasks designed to evaluate the usability of the tool, and the required data were collected from these tasks. The data collection methodology included interviews, observation and a questionnaire, and the collected data were analysed using both quantitative and qualitative approaches. The participants’ technical knowledge and their interest in experimenting with a new interface influenced the evaluation of the tool. The problems individuals faced while evaluating the tool were identified, and solutions to overcome them were derived. Finally, a redesign proposal for BLAST was developed as an approach to overcoming these problems: I proposed a few designs addressing the issues found in the interface, which can be adapted into the existing system or implemented anew. A follow-up evaluation study of the proposed interface designs is a possibility for future work.
|
243 |
Honest Mistakes : A study of grammatical mistakes in Swedish pupils’ production of oral English, with a focus on grammar teaching.
Rosén, Anna January 2007 (has links)
When speaking a language, whether it is our first or second language, grammatical mistakes will be made. The aim of this essay is to look into what kinds of mistakes some Swedish learners of English make when speaking English and to analyze why these mistakes are made. The essay also aims at looking into what grammar teaching can look like in Sweden and how some teachers look upon their students’ oral proficiency. The method used for this study was a qualitative one, namely interviews. Twelve students, eight in grade seven and four in grade nine, and two teachers were interviewed. During the interviews with the students a dictaphone was used. When interviewing the teachers, notes were taken, and these have been the foundation of the analysis. The results showed that many of the mistakes made by the students seemed to originate in transfer from their first language. Preposition mistakes, for instance, were made in 20% of the cases and mainly originated in interference from the first language. Verbs turned out to be the area where most mistakes were made, followed by prepositions and pronouns. 50% of the mistakes made by students in grade nine were verb mistakes, whereas the students in grade seven made verb mistakes in 33% of the cases. This study further shows that the teachers had a good grasp of what their students know and do not know, but there were some mistakes the learners made which the teachers did not mention. Finally, the study showed that spoken language is in focus within the classroom. Students are allowed to make mistakes, even though the interviewed teachers find grammar important.
|
244 |
Native Swedish Speakers’ Problems with English Prepositions
Jansson, Hanna January 2007 (has links)
This essay investigates native Swedish speakers’ problems in the area of prepositions. A total of 19 compositions, containing 678 prepositions, written by native Swedish senior high school students were analysed. Each preposition in the material was classified as basic, systematic or idiomatic, and all errors of substitution, addition and omission were counted and corrected. As hypothesised, the fewest errors were found in the category of basic prepositions and the most in the category of idiomatic prepositions. However, the small difference between the systematic and idiomatic categories suggests that the learners have greater problems with systematic prepositions than was first thought. Basic prepositions cause few or no problems. Systematic prepositions, i.e. those that are rule-governed or whose usage is somehow generalisable, seem to be quite problematic for native Swedish speakers. Idiomatic prepositions seem to be learnt as ‘chunks’: the learners either know the whole construction or do not use it at all, and these prepositions also cause some problems for Swedish speakers. Since prepositions are often perceived as rather arbitrary, with no rules that describe them sufficiently, these conclusions may not surprise teachers, students and language learners. The greatest cause of error was found to be interference from Swedish, while a few errors could be explained as intralingual. It seems that the learners’ knowledge of their mother tongue strongly influences their acquisition of English prepositions.
|
245 |
A novel fuzzy digital image correlation algorithm for non-contact measurement of the strain during tensile tests / Développement et validation d'un algorithme de corrélation d'images numériques utilisant la logique floue pour mesurer la déformation pendant les tests de traction
Zhang, Juan January 2016 (has links)
Cette thèse a pour objet la mesure de déformations sans contact lors d'un essai de traction à l'aide de la méthode de corrélation d'images numériques DIC (Digital Image Correlation). Cette technologie utilise le repérage d'un motif aléatoire de tachetures pour mesurer avec précision les déplacements sur une surface donnée d'un objet subissant une déformation. Plus précisément, un algorithme DIC plus efficace a été formulé, appliqué et validé. La présente thèse comporte cinq parties consacrées au développement et à la validation du nouvel algorithme DIC: (a) la formulation mathématique et la programmation, (b) la vérification numérique, (c) la validation expérimentale, par essai de traction, en comparant les mesures DIC à celles obtenues par des jauges de déformation, (d) l'étude d'un procédé d'atomisation novateur pour générer de façon reproductible le motif de tachetures pour un repérage plus exact, et (e) l'analyse des sources d'erreur dans les mesures DIC. Plus précisément, l'algorithme DIC a servi à analyser, à titre d'exemple d'application, les propriétés mécaniques du polyméthyl métacrylate utilisé pour la reconstruction du squelette. Avec l'algorithme DIC, les images d'un objet sont acquises pendant la déformation de celui-ci. On applique ensuite des techniques d'optimisation non linéaire pour suivre le motif de tachetures à la surface des objets subissant une déformation en traction avant et après le déplacement. Ce procédé d'optimisation demande un choix de valeurs de déplacement initiales. Plus l'estimation de ces valeurs de déplacement initiales est juste, plus il y a de chances que la convergence du processus d'optimisation soit efficace. Ainsi, cette thèse présente une technique de traitement novatrice reposant sur une logique floue incluant aussi l'approximation des valeurs initiales du déplacement pour démarrer un processus itératif d'optimisation, ayant pour résultat une reproduction plus exacte et efficace des déplacements et des déformations. 
La formulation mathématique du nouvel algorithme a été développée et ensuite mise en œuvre avec succès dans le langage de programmation MATLAB. La vérification de l'algorithme a été faite à l'aide d'images de synthèse simulant des déplacements de corps rigides et des déformations de traction uniformes. Plus particulièrement, les images de déplacement simulaient (1) des déplacements de 0,1 à 1 pixel en translation, (2) des angles de rotation de 0,5 à 5°, et (3) de grandes déformations en traction de l'ordre de 5000 à 300 000 µɛ, respectivement. Les processus de vérification ont démontré que le taux d'exactitude du nouvel algorithme DIC est supérieur à 99% en ce qui concerne les mesures des différents types et niveaux de déplacements simulés. Une validation expérimentale a été menée afin d'examiner l'efficacité de la nouvelle technique dans des conditions réalistes. Des échantillons de PMMA normalisés, respectant la norme ASTM F3087, ont été produits, inspectés et soumis à une charge de traction jusqu'à la rupture. La déformation de la surface des échantillons a été mesurée au moyen (a) du nouvel algorithme DIC, et (b) des techniques utilisant des jauges de déformation de type rosette. La force maximale moyenne et la limite de résistance mécanique des quatre échantillons étaient de 880 ± 110 N et 49 ± 7 MPa, respectivement. La limite moyenne de déformation mesurée par la jauge de déformation et provenant de l'algorithme DIC étaient de 15750 ± 2570 et 19890 ± 3790 µɛ, respectivement. Des déformations d'un tel ordre sont courantes pour les matériaux polymériques, et jusqu'à maintenant, la technique DIC n'était pas développée pour faire des mesures de déformations aussi importantes. On a constaté que l'erreur relative de la mesure DIC, par rapport à la technique de la jauge de déformation, s'élevait à 26 ± 8%. 
Par ailleurs, le module de Young moyen et le coefficient de Poisson moyen mesurés en utilisant des jauges de déformations étaient de 3,78 ± 0,07 GPa et 0,37 ± 0,02, alors qu'ils étaient de 3,16 ± 0,61 GPa et 0,37 ± 0,08, respectivement lorsque mesurés avec l'algorithme DIC. L'écart croissant entre les mesures de déformation DIC et celles obtenues au moyen de jauges de déformation est probablement lié à la distorsion graduelle du motif de tachetures à la surface des échantillons de traction. Par la suite, on a introduit un facteur de correction de 1,27 afin de corriger l'erreur systématique dans les mesures de déformation provenant de l'algorithme DIC. La limite de déformation des mesures DIC a été rajustée à 15712 ± 357 µɛ avec un taux d'erreur moyen relatif de -0,5 ± 7,1 %, comparé aux déformations mesurées par la jauge de déformation. Le module de Young moyen et le coefficient moyen de Poisson de l'algorithme DIC et des mesures obtenues par la jauge de déformation ont par ailleurs été rajustés à 3,8 ± 0,4 GPa et 0,368 ± 0,025, respectivement. Au moyen d'un procédé d'atomisation, des taches de peinture ont été générées de façon reproductible sur la surface d'un objet. Une approche expérimentale de planification factorielle a été utilisée pour étudier le motif de tachetures (répartition et gradient de l'échelle des tons de gris) pour mesurer l'exactitude de l'algorithme DIC. Plus particulièrement, neuf motifs de tachetures différents ont été générés au moyen du procédé d'atomisation et testés pour la translation et la rotation de corps rigides. Les résultats ont révélé que l'erreur moyenne relative parmi les neuf motifs de tachetures variait de 1,1 ± 0,3 % à -6,5 ± 3,6 %. Le motif de tachetures préféré, lequel se démarquait par une large gamme de taches claires et de valeurs de tons de gris, a produit une erreur relative de 1,1 ± 0,3 %. 
Une analyse des erreurs et des sources d'erreurs relatives de la mesure de l'algorithme DIC a été menée. Trois catégories de sources d'erreurs, incluant l'algorithme lui-même, les paramètres du processus (taille des sous-ensembles, nombre de pixels calculés) et l'environnement physique (uniformité des échantillons, motifs de tachetures, effet thermique de la caméra CCD et distorsion de la lentille, erreur de non-linéarité dans le circuit de la jauge de déformation) ont fait l'objet d'une étude et de discussions. Enfin, des solutions ont été amenées afin d'aider à réduire les erreurs systématiques et aléatoires en lien avec les trois catégories de sources d'erreurs susmentionnées. Pour terminer, un nouvel algorithme DIC permettant une approximation plus juste de l'estimation initiale, entraînant par conséquent une convergence efficace et précise de l'optimisation, a été développé, programmé, mis en œuvre et vérifié avec succès pour ce qui est des déformations importantes. La validation expérimentale a fait ressortir une erreur systématique inattendue des mesures DIC lorsque comparées aux mesures obtenues au moyen de la technique des jauges de déformation. Plus l'échantillon se déformait, plus l'erreur augmentait proportionnellement. Par conséquent, la distorsion graduelle des tachetures sur la surface de l'objet était probablement la cause de l'erreur. L'erreur étant systématique, elle a été corrigée. Le procédé d'atomisation a permis de générer des tachetures de façon reproductible sur la surface d'un objet. Grâce aux mesures DIC, le comportement mécanique des polymères soumis à des déformations importantes, comme le polyméthyl métacrylate servant à la reconstruction du squelette, peut être étudié et, une fois maîtrisé, servir à l'élaboration de matériaux plus efficaces. 
/ Abstract : The present thesis is focused on the non-contact and efficient strain measurement using the Digital Image Correlation (DIC) method, which employs the tracking of random speckle pattern for accurate measurement of displacements on a surface of an object undergoing deformation. Specifically, a more efficient DIC algorithm was successfully developed, implemented, and validated. This thesis consists of five parts related to the novel DIC algorithm: (a) the development and implementation, (b) the numerical verification, (c) the experimental validation, for tensile loading, by comparing to the deformation measurements using the strain gauge technique, (d) the investigation of a novel atomization process to reproducibly generate the speckle pattern for accurate tracking, and (e) the analysis of the error sources in the DIC measurements. Specifically, the DIC algorithm was used to exemplarily examine the mechanical properties of polymethyl methacrylate (PMMA) used in skeletal reconstruction.
In the DIC algorithm, images of an object are captured as it deforms. Nonlinear optimization techniques are then used to correlate the speckle pattern on the surface of the object before and after the displacement. This optimization process requires a choice of suitable initial displacement values: the more accurate the estimation of these initial values is, the more likely and the more efficient the convergence of the optimization process becomes. The thesis introduces a novel fuzzy-logic-based processing technique that approximates the initial values of the displacement to initialize the iterative optimization, which renders the displacements and deformations more accurately and efficiently. The mathematical formulation of the novel algorithm was developed and then successfully implemented in the MATLAB programming language. The algorithmic verification was performed using computer-generated images simulating rigid body displacements and uniform tensile deformations. Specifically, the images simulated (1) displacements of 0.1-1 pixel for rigid body translation, (2) rotation angles of 0.5-5° for rigid body rotation, and (3) large tensile deformations of 5000-300000 µɛ, respectively. The verification showed that the accuracy of the novel DIC algorithm was above 99% for the simulated displacement types and levels.
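The initial-guess stage that the fuzzy-logic technique improves can be illustrated with a plain brute-force search: recover a rigid integer-pixel translation of a synthetic speckle image by maximizing the zero-normalized cross-correlation (ZNCC) of a subset. This is a generic sketch, not the thesis's algorithm; the image, subset window, and search range are all made up.

```python
import numpy as np

# Toy DIC initial-guess stage: recover a rigid integer-pixel translation
# by maximizing the zero-normalized cross-correlation (ZNCC) of a subset.
rng = np.random.default_rng(0)
image = rng.random((64, 64))            # synthetic speckle pattern
true_shift = (3, 5)                     # simulated rigid translation
deformed = np.roll(image, true_shift, axis=(0, 1))

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

subset = image[16:48, 16:48]            # reference subset
best = max(((dy, dx) for dy in range(-8, 9) for dx in range(-8, 9)),
           key=lambda s: zncc(subset,
                              deformed[16 + s[0]:48 + s[0],
                                       16 + s[1]:48 + s[1]]))
print(best)  # → (3, 5); this guess would seed the nonlinear optimization
```

In a full DIC pipeline, a subpixel nonlinear optimization (the part the thesis accelerates via fuzzy logic) would refine this integer estimate.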
The experimental validation was conducted to examine the effectiveness of the novel technique under realistic testing conditions. Normalized PMMA specimens, in accordance with ASTM F3087, were produced, inspected and subjected to tensile loading until failure. The deformation of the specimen surface was measured using (a) the novel DIC technique and (b) strain gauge rosettes. The mean maximum force and ultimate strength of the four specimens were 882.2±108.3 N and 49.3±6.2 MPa, respectively. The mean ultimate deformations of the gauge and DIC groups were 15746±2567 µɛ and 19887±3790 µɛ, respectively. Such large deformations are common in polymeric materials, yet the DIC technique had thus far not been investigated for large deformations. The relative mean error of the DIC measurement, in reference to the strain gauge technique, was found to be up to 26.0±7.1%. Accordingly, the mean Young's modulus and Poisson's ratio were 3.78±0.07 GPa and 0.374±0.02 for the strain gauge measurements, and 3.16±0.61 GPa and 0.373±0.08 for the DIC measurements, respectively. The increasing difference of the DIC strain measurements relative to those of the strain gauge technique is likely related to the gradual distortion of the speckle pattern on the surface of the tensile specimen. Subsequently, a Correction Factor (CF) of 1.27 was introduced to correct for the systematic error in the deformation measurements of the DIC group. The corrected ultimate deformation of the DIC measurements became 15712±357 µɛ, with a relative mean error of -0.5±7.1% compared to the strain gauge measurements. Correspondingly, the mean Young's modulus and Poisson's ratio of the corrected DIC measurements became 3.8±0.4 GPa and 0.368±0.025, respectively.
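The reported group means let a reader check the correction factor directly; the numbers below are those quoted in the abstract, and the computation is a plain sanity check rather than the thesis's procedure:

```python
# Recovering the correction factor from the reported mean ultimate
# deformations (values taken from the abstract above).
gauge_mean = 15746   # µɛ, strain gauge group
dic_mean = 19887     # µɛ, DIC group

cf = dic_mean / gauge_mean
print(round(cf, 2))       # ≈ 1.26, consistent with the reported CF of 1.27

# Applying the reported CF brings a DIC reading back toward the gauge scale.
corrected = dic_mean / 1.27
print(round(corrected))   # ≈ 15659 µɛ, near the reported 15712 µɛ
```

The small residual difference is expected, since the thesis applies the correction per specimen rather than to the group means.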
Using an atomization process, paint speckles were reproducibly generated on the surface of an object. A factorial design of experiments was used to investigate the effect of the speckle pattern (grey-value distribution and gradient) on the DIC measurement accuracy. Specifically, nine different speckle patterns were generated using the atomization process and tested for rigid body translation and rotation. The results showed that the relative mean errors among the nine speckle patterns varied from 1.1±0.3% to -6.5±3.6%. The preferred speckle pattern, characterized by a wide range of sharp speckles and grey values, produced a mean error of 1.1±0.3%.
An analysis of the errors in the DIC measurement and their sources was conducted. Three categories of sources, including algorithmic sources, processing parameters (subset size, number of pixels computed) and the physical environment (specimen uniformity, speckle pattern, self-heating effect of the CCD camera and lens distortion, non-linearity error in the strain gauge circuit), were investigated and discussed. Finally, solutions were provided to help reduce the systematic and random errors relating to these three categories of error sources.
In conclusion, a novel DIC algorithm providing a more accurate approximation of the initial guess, and accordingly an efficient and accurate convergence of the optimization, was successfully formulated, developed, implemented and verified for relatively large deformations. The experimental validation surprisingly revealed a systematic error in the DIC measurements compared to those of the strain gauge technique: the larger the deformation applied to the specimen, the larger the error became. The gradual distortion of the speckles on the surface of the object was therefore the likely underlying cause of the error; being systematic, the error could be corrected. The atomization process allowed reproducible speckles to be generated on the surface of an object. Using DIC measurements, the mechanical behavior of polymers undergoing large deformations, such as the polymethyl methacrylate used in skeletal reconstruction, can be investigated and, once understood, the knowledge gained can help develop more effective materials.
|
246 |
Contribution à l'étude du raisonnement et des stratégies dans le diagnostic de dépannage des systèmes réels et simulés
Luc, Françoise January 1991 (has links)
Doctorat en sciences psychologiques / info:eu-repo/semantics/nonPublished
|
247 |
Lag length selection for vector error correction models
Sharp, Gary David January 2010 (has links)
This thesis investigates the problem of model identification in a vector autoregressive framework. The study reviews the existing research and conducts an extensive simulation-based analysis of thirteen information-theoretic criteria (IC), one of which is a novel derivation. The simulation exercise evaluates seven alternative error-restricted vector autoregressive models with four different lag lengths. Alternative sample sizes and parameterisations are also evaluated and compared with results in the existing literature. The results of the comparative analysis provide strong support for the efficiency-based criterion of Akaike; in particular, the selection capability of the novel criterion, referred to as a modified corrected Akaike information criterion, demonstrates useful finite-sample properties.
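The thesis's modified corrected criterion is not reproduced here, but the mechanics of IC-based lag selection can be sketched with the standard AIC and one common small-sample corrected variant (AICc) on simulated VAR data; the data-generating process and all settings below are illustrative assumptions:

```python
import numpy as np

# Illustrative lag selection for a bivariate VAR via AIC and a corrected
# AICc (the thesis's modified criterion is not reproduced here).
rng = np.random.default_rng(1)
K, T = 2, 200
y = np.zeros((T, K))
A = np.array([[0.5, 0.1], [0.0, 0.4]])      # true VAR(1) coefficients
for t in range(1, T):
    y[t] = y[t - 1] @ A.T + rng.normal(size=K)

def var_ic(y, p):
    """Fit VAR(p) by OLS and return (AIC, AICc)."""
    T, K = y.shape
    n = T - p                                # usable observations
    # Regressor matrix: p lags of y plus an intercept column.
    X = np.hstack([y[p - i - 1:T - i - 1] for i in range(p)]
                  + [np.ones((n, 1))])
    B, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    resid = y[p:] - X @ B
    sigma = resid.T @ resid / n              # residual covariance
    k = K * (K * p + 1)                      # number of free coefficients
    aic = np.log(np.linalg.det(sigma)) + 2 * k / n
    aicc = aic + 2 * k * (k + 1) / (n * (n - k - 1))
    return aic, aicc

for p in range(1, 5):
    aic, aicc = var_ic(y, p)
    print(f"p={p}: AIC={aic:.3f}  AICc={aicc:.3f}")
```

The lag minimizing the chosen criterion is selected; corrected criteria such as AICc penalize extra lags more heavily in small samples, which is the finite-sample issue the thesis's novel criterion targets.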
|
248 |
Nonconforming Immersed Finite Element Methods for Interface Problems
Zhang, Xu 04 May 2013 (has links)
In science and engineering, many simulations are carried out over domains consisting of multiple materials separated by curves/surfaces. If partial differential equations (PDEs) are used to model these simulations, it usually leads to the so-called interface problems of PDEs whose coefficients are discontinuous. In this dissertation, we consider nonconforming immersed finite element (IFE) methods and error analysis for interface problems.
We "first consider the second order elliptic interface problem with a discontinuous diffusion coefficient. We propose new IFE spaces based on the nonconforming rotated Q1 "finite elements on Cartesian meshes. The degrees of freedom of these IFE spaces are determined by midpoint values or average integral values on edges. We investigate fundamental properties of these IFE spaces, such as unisolvency and partition of unity, and extend well-known trace inequalities and inverse inequalities to these IFE functions. Through interpolation error analysis, we prove that these IFE spaces have optimal approximation capabilities.
We use these IFE spaces to develop partially penalized Galerkin (PPG) IFE schemes whose bilinear forms contain penalty terms over interface edges. Error estimation is carried out for these IFE schemes. We prove that the PPG schemes with IFE spaces based on integral-value degrees of freedom have the optimal convergence in an energy norm. Following a similar approach, we prove that the interior penalty discontinuous Galerkin schemes based on these IFE functions also have the optimal convergence. However, for the PPG schemes based on midpoint-value degrees of freedom, we prove that they have at least a sub-optimal convergence. Numerical experiments are provided to demonstrate features of these IFE methods and compare them with other related numerical schemes.
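Optimal and sub-optimal convergence claims of this kind are typically confirmed in numerical experiments by computing the observed order from errors on successively halved meshes, via order ≈ log(e_coarse/e_fine)/log 2. A generic sketch, with made-up error values chosen to illustrate a first-order rate in the energy norm:

```python
import math

# Observed convergence order from errors on successively halved meshes:
# order ≈ log(e_coarse / e_fine) / log(2).  The error values are made up
# to illustrate first-order convergence in the energy norm.
h = [1 / 8, 1 / 16, 1 / 32, 1 / 64]
errors = [2.1e-1, 1.06e-1, 5.3e-2, 2.65e-2]

for coarse, fine in zip(errors, errors[1:]):
    order = math.log(coarse / fine) / math.log(2)
    print(f"observed order: {order:.2f}")
```

An observed order near the theoretical rate (here, near 1) is the standard numerical evidence for the optimal-convergence statements above; a consistently lower order would indicate sub-optimal convergence.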
We extend nonconforming IFE schemes to the planar elasticity interface problem with discontinuous Lamé parameters. Vector-valued nonconforming rotated Q1 IFE functions with integral-value degrees of freedom are unisolvent with appropriate interface jump conditions. More importantly, the Galerkin IFE scheme using these vector-valued nonconforming rotated Q1 IFE functions is "locking-free" for nearly incompressible elastic materials.
In the last part of this dissertation, we consider potential applications of IFE methods to time dependent PDEs with moving interfaces. Using IFE functions in the discretization in space enables the applicability of the method of lines. Crank-Nicolson type fully discrete schemes are also developed as alternative approaches for solving moving interface problems. / Ph. D.
|
249 |
Návrh přesné napěťové reference v ACMOS procesu / Design of precise bandgap reference in ACMOS process
Kacafírek, Jiří January 2010 (has links)
In this thesis the principle of voltage references, and bandgap references in particular, is described. Two circuits of this type, designed in an ACMOS process, are then presented. A hand evaluation of the error analysis is carried out to identify the main error contributors, alongside a Monte Carlo simulation; a statistical analysis of the circuit is also performed. The results of all methods are compared, and the error of the reference voltage is compared for both circuits. The circuit with the larger error is optimized to achieve better precision. The obtained results show good agreement between all methods, which demonstrates the value of hand error evaluation.
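The comparison of a hand error budget against a Monte Carlo simulation can be sketched as follows; the error contributors and their magnitudes are hypothetical, not taken from the thesis, and the hand estimate is a simple root-sum-square of independent contributors:

```python
import math
import random

# Hand (root-sum-square) error budget vs. Monte Carlo for a voltage
# reference; the three contributors and their sigmas are hypothetical.
contributors = {            # 1-sigma error of each contributor, in mV
    "resistor mismatch": 1.2,
    "opamp offset": 2.0,
    "BJT Vbe spread": 1.5,
}

# Hand evaluation: independent Gaussian contributors add in quadrature.
rss = math.sqrt(sum(s ** 2 for s in contributors.values()))

# Monte Carlo: sample each contributor and measure the combined spread.
random.seed(0)
N = 100_000
samples = [sum(random.gauss(0, s) for s in contributors.values())
           for _ in range(N)]
mc_sigma = math.sqrt(sum(x * x for x in samples) / N)

print(f"hand RSS sigma: {rss:.2f} mV")          # 2.77 mV
print(f"Monte Carlo sigma: {mc_sigma:.2f} mV")  # should agree closely
```

Close agreement between the two numbers mirrors the thesis's conclusion that a hand error budget is a reliable complement to Monte Carlo runs, and it also shows at a glance which contributor dominates.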
|
250 |
THE ERROR ESTIMATION IN FINITE ELEMENT METHODS FOR ELLIPTIC EQUATIONS WITH LOW REGULARITY
Jing Yang (8800844) 05 May 2020 (has links)
<div>
<div>
<div>
<p>This dissertation contains two parts: one part is about the error estimate for the
finite element approximation to elliptic PDEs with discontinuous Dirichlet boundary
data, the other is about the error estimate of the DG method for elliptic equations
with low regularity.
</p>
<p>Elliptic problems with low regularity arise in many applications. Error estimates for sufficiently smooth solutions have been thoroughly studied, but few results have been obtained for elliptic problems with low regularity. Part I provides an error estimate for the finite element approximation to elliptic partial differential equations (PDEs) with discontinuous Dirichlet boundary data. Solutions of problems of this type are not in H1 and, hence, the standard variational formulation is not valid. To circumvent this difficulty, an error estimate for a finite element approximation in the W1,r(Ω) (1 < r < 2) norm is obtained through a regularization that constructs a continuous approximation of the Dirichlet boundary data. Using the W1,r (1 < r < 2) regularity and this continuous approximation of the boundary data, we present error estimates for general elliptic equations.
</p>
<p>Part II presents a class of DG methods and proves their stability when the solution belongs to H1+ε, where ε < 1/2 may be very small. We derive a non-standard variational formulation for advection-diffusion-reaction problems. The formulation is defined in an appropriate function space that permits discontinuity across element interfaces and does not require piecewise Hs(Ω), s ≥ 3/2, smoothness. Hence, both continuous and discontinuous (including Crouzeix-Raviart) finite element spaces may be used and are conforming with respect to this variational formulation. A priori error estimates for these methods are then established when the underlying problem is not piecewise H3/2 regular. The constant in the estimate is independent of the parameters of the underlying problem. The error analysis presented here is new: it makes use of the discrete coercivity of the bilinear form, an error equation, and an efficiency bound of the continuous finite element approximation obtained in the a posteriori error estimation. Finally, a new DG method is introduced to overcome the difficulty in the convergence analysis of standard DG methods, and its stability is also proved.</p>
</div>
</div>
</div>
|