361

Subject-Verb Agreement Errors in Swedish 9th and 11th Grade Students’ English Written Production

Tsukanaka, Maiko January 2023 (has links)
This study aims to investigate possible factors contributing to subject-verb agreement errors in Swedish junior and senior high school students' English written production. The sample data are collected from the Swedish Learner English Corpus (SLEC), which comprises student texts produced in a classroom setting. The texts are randomly chosen but evenly distributed in terms of binary gender, school year, and type of high school program; all texts included in the study are written by students who attend a Swedish-speaking school and have Swedish as their first language. Errors are classified as overgeneralization or transfer errors and are further categorized by subject type, verb type, and the distance between the subject and the verb. All correct instances of subject-verb agreement are also classified in order to investigate possible explanations for the errors. A total of 41 agreement errors were found in 24 texts written by students in the 9th and 11th grades. The results show that overgeneralization errors are more frequent than transfer errors. Overgeneralization suggests that the students are aware of the third-person singular form but do not always apply it correctly, while transfer errors point to a lack of awareness of, or attention to, the form. In both cases, the errors indicate that these students have not yet automatized the principle. Errors most often involve a pronoun or a noun phrase as subject and the verb be, which is the most frequently used verb. Most of the errors occur when the subject and the verb are in immediate contact, and more than half of them involve a relative pronoun as subject, which indicates that the learners have misinterpreted the grammatical principle or have not fully acquired it. Overuse of the third-person singular form can also be an effect of teaching and explicit learning, which leads learners to apply the form whenever it seems possible and relevant.
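The cross-classification described above (error type by subject type, verb, and subject-verb distance) lends itself to a simple tabulation. The following is a minimal, hypothetical Python sketch of such a tally; the category labels and example records are illustrative and are not the study's actual coding scheme or data.

# Hypothetical tally of agreement-error annotations; labels and records are
# illustrative only, not the coding scheme or data of the study.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class AgreementError:
    error_type: str      # "overgeneralization" or "transfer"
    subject_type: str    # e.g. "pronoun", "noun phrase", "relative pronoun"
    verb: str            # lemma of the finite verb, e.g. "be"
    distance: int        # tokens between subject head and finite verb

errors = [
    AgreementError("overgeneralization", "relative pronoun", "be", 0),
    AgreementError("transfer", "pronoun", "have", 2),
    AgreementError("overgeneralization", "noun phrase", "be", 0),
]

by_type = Counter(e.error_type for e in errors)
by_subject = Counter(e.subject_type for e in errors)
adjacent = sum(e.distance == 0 for e in errors)
print(by_type, by_subject, f"{adjacent}/{len(errors)} with subject and verb adjacent")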
362

Míra přesnosti vyjadřování časových údajů v textech nerodilých mluvčích češtiny na základě analýzy žákovského korpusu Merlin / The Degree of Accuracy of Time Expressions in Texts by Non-native Speakers of Czech, Based on an Analysis of the Merlin Learner Corpus

Marková, Iva January 2022 (has links)
The diploma thesis focuses on a general description of how temporal quantification is expressed by non-native speakers of Czech in written production. Drawing on the scholarly literature, the author first defines time indicators. She then analyzes the most frequently used textbook series of Czech for foreigners, Czech Step by Step by L. Holá, examining how temporal information is presented both grammatically and lexically: the range of indicators offered at each stage of language acquisition as well as the way the indicators are presented. In the next part, the occurrence of time indicators in the Merlin learner corpus is analyzed. The use of time indicators is examined with regard to proficiency level according to the CEFR (SERR), the degree of accuracy of temporal quantification, and an error typology of temporal expressions.
363

English Word Formation Processes: The use of affixations and implications for second language learning: A Case Study of Swedish Secondary Schools, Grades 7-9

Håkansson, Jeannette January 2021 (has links)
This work explains the types of affixation errors second language learners make when learning English word formation processes, especially derivational and inflectional affixation. The data for the study were collected as primary sources from two secondary schools in Sweden and analyzed using Error Analysis as described by Corder (1967) and the error analysis framework adapted by Ellis et al. (2005, p. 57). The method was to identify, classify, describe, and evaluate derivational and inflectional affixation errors. In total, 2,812 answers were retrieved. Several findings emerged: some of the derivational and inflectional affixation errors were intralingual and others interlingual, and the errors were of a transfer, omission, addition, or substitution nature. Errors were also due to overgeneralization, including substitution and addition errors. Previous research found that students make grammatical errors involving letter insertion, letter omission, or substitution; this study found the same patterns, with errors of letter insertion, letter omission, and substitution as well as errors due to overgeneralization. Some of the most difficult derivational and inflectional affixation errors were observed across all grades.
364

Secondary and Postsecondary Calculus Instructors' Expectations of Student Knowledge of Functions: A Multiple-Case Study

Avila, Cheryl 01 January 2013 (has links)
This multiple-case study examines the explicit and implicit assumptions of six veteran calculus instructors from three types of educational institutions, comparing and contrasting their views on the iteration of conceptual understanding and procedural fluency in pre-calculus topics. The research data were recorded in three components: a written survey; a "think-aloud" activity in which the instructors analyzed the results of a function diagnostic instrument administered to a calculus class; and the instructors' responses to two quotations. From these data, themes emerged between and among instructors at the three types of educational institutions regarding their expectations of incoming students' prior knowledge of pre-calculus topics related to functions. Differences between instructors at the three types of institutions fell into two identifiable areas: (1) their expectations of incoming students and (2) their methods for planning instruction. In spite of these differences, the veteran instructors agreed with other studies' findings that an iterative approach to conceptual understanding and procedural fluency is necessary for student understanding of pre-calculus concepts.
365

Addressing Misconceptions in Geometry through Written Error Analyses

Kembitzky, Kimberle Ann January 2009 (has links)
No description available.
366

Rigorous defect control and the numerical solution of ordinary differential equations

Ernsthausen, John 10 1900 (has links)
Modern numerical ordinary differential equation initial-value problem (ODE-IVP) solvers compute a piecewise polynomial approximate solution to the mathematical problem. Evaluating the mathematical problem at this approximate solution defines the defect. Corless and Corliss proposed rigorous defect control of numerical ODE-IVP solutions. This thesis automates rigorous defect control for explicit, first-order, nonlinear ODE-IVP. Defect control is residual-based backward error analysis for ODEs, a special case of Wilkinson's backward error analysis. The thesis describes a complete software implementation of the Corless and Corliss algorithm and extensive numerical studies. Basic time-stepping software is adapted to defect control and implemented. Advances in software developed for validated computing and in programming languages supporting operator overloading enable the computation of a tight rigorous enclosure of the defect, evaluated at the approximate solution with Taylor models. By rigorously bounding a norm of the defect, the Corless and Corliss algorithm guarantees, with mathematical certainty, that the norm of the defect is less than a user-specified tolerance over the integration interval. The validated computing software used in this thesis computes a rigorous supremum norm. The defect of an approximate solution to the mathematical problem is associated with a new problem, the perturbed reference problem. The approximate solution is often the product of a numerical procedure; nonetheless, it solves the new problem exactly, including all errors. Defect control accepts the approximate solution whenever the sup-norm of the defect is less than a user-specified tolerance. The user must be satisfied that the new problem is an acceptable model. / Thesis / Master of Science (MSc) / Many processes in our daily lives evolve in time, even the weather. Scientists want to predict the future behaviour of such processes. To do so they build models of physical reality and design algorithms to solve these models; the algorithm implemented in this project was designed over 25 years ago, and recent advances in mathematics and software have made its implementation possible. Scientific software implements mathematical algorithms, and sometimes there is more than one software solution to apply to a model. The software tools developed in this project enable scientists to compare solution techniques objectively. There are two forces at play: models and software solutions. This project builds software to automate the construction of the exact solution of a nearby model. That's cool.
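As a rough, non-rigorous illustration of the defect idea, the residual of a numerical solution can be sampled in floating point: build a continuous extension of the solver output and measure how well it satisfies the ODE. The Python sketch below assumes SciPy and a simple scalar test problem; it is not the Taylor-model enclosure or the Corless and Corliss algorithm described in the thesis, only the underlying notion of a defect.

# Sample the defect delta(t) = u'(t) - f(t, u(t)) of a continuous extension u
# of a numerical IVP solution (floating-point illustration only, not a
# rigorous enclosure).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicHermiteSpline

def f(t, y):                      # right-hand side of the test IVP
    return np.cos(t) * y

sol = solve_ivp(f, (0.0, 10.0), [1.0], rtol=1e-8, atol=1e-10)
t_nodes, y_nodes = sol.t, sol.y[0]

# Continuous extension: piecewise cubic Hermite interpolant through the
# accepted steps with slopes f(t_i, y_i).
u = CubicHermiteSpline(t_nodes, y_nodes, f(t_nodes, y_nodes))
du = u.derivative()

tt = np.linspace(t_nodes[0], t_nodes[-1], 5000)
defect = du(tt) - f(tt, u(tt))
print("sampled sup-norm of the defect:", np.max(np.abs(defect)))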
367

Error correction in the adult communicative ESL classroom : teachers’ perceptions and realities

Brown, Nancy January 1990 (has links)
No description available.
368

Design, Analysis, and Application of Immersed Finite Element Methods

Guo, Ruchi 19 June 2019 (has links)
This dissertation consists of three studies of immersed finite element (IFE) methods for interface problems related to partial differential equations (PDEs) with discontinuous coefficients. Together, the three topics continue the research on IFE methods, including the extension to elasticity systems, new breakthroughs in higher-degree IFE methods, and applications to inverse problems. First, we extend the current construction and analysis approach of IFE methods in the literature for scalar elliptic equations to elasticity systems in vector format. In particular, we construct a group of low-degree IFE functions formed by linear, bilinear, and rotated Q1 polynomials to weakly satisfy the jump conditions of elasticity interface problems. We then analyze the trace inequalities of these IFE functions and the approximation capabilities of the resulting IFE spaces. Based on these preparations, we develop a partially penalized IFE (PPIFE) scheme and prove its optimal convergence rates. Second, we discuss the limitations of the current IFE approaches when extended to higher degrees, and develop a new framework to construct and analyze IFE methods of arbitrary degree p. In this framework, each IFE function is the extension of a p-th degree polynomial from one subelement to the whole interface element, obtained by solving a local Cauchy problem on interface elements in which the jump conditions across the interface serve as the boundary conditions. All components of the analysis, including the existence of IFE functions, the optimal approximation capabilities, and the trace inequalities, are reduced to key properties of the related discrete extension operator. We employ these results to show the optimal convergence of a discontinuous Galerkin IFE (DGIFE) method. In the last part, we apply the linear IFE methods in the literature, together with the shape optimization technique, to solve a group of interface inverse problems. In this algorithm, both the governing PDEs and the objective functional of the interface inverse problems are discretized optimally by the IFE method regardless of the location of the interface in a chosen mesh. We derive formulas for the gradients of the objective function in the optimization problem, which can be implemented efficiently in the IFE framework through a discrete adjoint method. We demonstrate the properties of the proposed algorithm by applying it to three representative applications. / Doctor of Philosophy / Interface problems arise from many science and engineering applications modeling the transmission of physical quantities between multiple materials. Mathematically, such multi-material configurations are in general modeled by partial differential equations (PDEs) with discontinuous parameters, which poses challenges for developing efficient and reliable numerical methods and the related theoretical error analysis. The main contribution of this dissertation is the development of a special finite element method, the so-called immersed finite element (IFE) method, which solves interface problems on a mesh independent of the interface geometry; this is especially advantageous when the interface is moving. Specifically, this dissertation consists of three IFE projects: elasticity interface problems, higher-degree IFE methods, and interface inverse problems, covering their design, analysis, and application.
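For orientation, a generic second-order elliptic interface problem of the kind IFE methods target can be written as the following sketch; the notation ($\Omega^\pm$, $\Gamma$, $\beta^\pm$) is generic and not necessarily that of the dissertation.

\begin{align*}
  -\nabla\cdot(\beta\,\nabla u) &= f && \text{in } \Omega^-\cup\Omega^+,\\
  [\![u]\!] &= 0 && \text{on } \Gamma,\\
  [\![\beta\,\nabla u\cdot\mathbf{n}]\!] &= 0 && \text{on } \Gamma,\\
  u &= g && \text{on } \partial\Omega,
\end{align*}

where $\beta$ takes the value $\beta^-$ in $\Omega^-$ and $\beta^+$ in $\Omega^+$ and is discontinuous across the interface $\Gamma$, and $[\![\cdot]\!]$ denotes the jump across $\Gamma$. IFE shape functions are piecewise polynomials on interface elements constructed to satisfy such jump conditions (exactly or weakly, depending on the construction), so the mesh does not need to be aligned with $\Gamma$.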
369

Numerical treatment of the Black-Scholes variational inequality in computational finance

Mautner, Karin 16 February 2007 (has links)
Among the central concerns in mathematical finance is the evaluation of American options. An American option gives the holder the right but not the obligation to buy or sell a certain financial asset within a certain time frame, for a certain strike price. The valuation of American options is formulated as an optimal stopping problem. If the stock price is modelled by a geometric Brownian motion, the value of an American option is given by a deterministic parabolic free boundary value problem (FBVP) or, equivalently, a non-symmetric variational inequality (VI) in weighted Sobolev spaces on R. To apply standard numerical methods, the unbounded domain R is truncated to a bounded one. Applying the Fourier transform to the FBVP yields an integral representation of the solution that contains the free boundary explicitly. This integral representation allows explicit truncation error bounds to be proved. Since the VI is formulated in the framework of weighted Sobolev spaces, we establish a weighted Poincaré inequality with explicitly determined constants. The truncation error estimate and the weighted Poincaré inequality enable a reliable a posteriori error estimate between the exact solution of the VI and the semi-discrete solution of the penalised problem on R. A sufficiently regular solution guarantees the convergence of the solution of the penalised problem to the solution of the VI. An a priori error estimate for the error between the exact solution of the VI and the semi-discrete solution of the penalised problem concludes the numerical analysis.
The established a posteriori error estimates motivate an algorithm for adaptive mesh refinement. Numerical experiments show the improved convergence of the adaptive algorithm compared to uniform mesh refinement. The reliable a posteriori error estimate, which includes explicit truncation errors, allows the truncation point to be chosen such that the total error (discretisation plus truncation error) is below a given tolerance.
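To make the penalised formulation concrete, the following minimal finite-difference sketch prices an American put by adding a penalty term to the Black-Scholes operator on a truncated domain (implicit Euler in time, central differences in S, a standard penalty iteration). The grid sizes, the truncation point S_max, and the penalty parameter rho are illustrative choices; this is not the weighted-Sobolev finite element discretisation or the adaptive algorithm analysed in the thesis.

# Penalised Black-Scholes pricing of an American put on a truncated domain
# [0, S_max]; illustrative parameters, not the scheme analysed in the thesis.
import numpy as np
from scipy.linalg import solve_banded

K, r, sigma, T = 100.0, 0.05, 0.2, 1.0
S_max, M, N, rho = 300.0, 300, 100, 1e6       # truncation point, grid sizes, penalty

S = np.linspace(0.0, S_max, M + 1)
dt = T / N
payoff = np.maximum(K - S, 0.0)
psi = payoff[1:M]                              # payoff at interior nodes

i = np.arange(1, M)
lo = 0.5 * dt * (sigma**2 * i**2 - r * i)      # sub-diagonal of dt*L
di = -dt * (sigma**2 * i**2 + r)               # diagonal of dt*L
up = 0.5 * dt * (sigma**2 * i**2 + r * i)      # super-diagonal of dt*L

V = payoff.copy()
for n in range(N):                             # march in time-to-maturity
    rhs0 = V[1:M].copy()
    rhs0[0] += lo[0] * K                       # left boundary V(0, tau) = K
    Vk = V[1:M].copy()
    for _ in range(100):                       # penalty iteration
        active = (Vk < psi).astype(float)      # where the constraint V >= payoff binds
        ab = np.zeros((3, M - 1))              # banded form of I - dt*L + dt*rho*diag(active)
        ab[0, 1:] = -up[:-1]
        ab[1, :] = 1.0 - di + dt * rho * active
        ab[2, :-1] = -lo[1:]
        Vnew = solve_banded((1, 1), ab, rhs0 + dt * rho * active * psi)
        converged = np.max(np.abs(Vnew - Vk)) < 1e-8 * max(1.0, np.max(np.abs(Vnew)))
        Vk = Vnew
        if converged:
            break
    V[1:M] = Vk
    V[0], V[M] = K, 0.0

print("American put value at S = K:", np.interp(K, S, V))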
370

Critical slowing down and error analysis of lattice QCD simulations

Virotta, Francesco 07 May 2012 (has links)
In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation, where we find that our estimate of the exponential autocorrelation time scales as $\tau_{\mathrm{exp}}(a) \sim a^{-5}$, where $a$ is the lattice spacing. In unquenched simulations with O(a)-improved Wilson fermions we do not obtain a scaling law but find results compatible with the behavior found in the pure gauge theory. The discussion is supported by a large set of ensembles, both in pure gauge theory and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes on the error analysis of the expectation values of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes, we propose and test a method to obtain reliable estimates of statistical errors. The method is intended to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of order $O(10)\,\tau_{\mathrm{exp}}$. This is the typical case when simulating close to the continuum limit, where the computational cost of producing two independent data points can be extremely large. We finally discuss the scale setting in $N_f = 2$ simulations using the kaon decay constant $f_K$ as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.
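To illustrate the kind of statistical-error estimation at stake, the following generic Python sketch estimates the integrated autocorrelation time of a Monte Carlo time series with an automatic (Madras-Sokal style) window; it is not the Gamma-method code referenced in the thesis, and the AR(1) test series is purely illustrative.

# Generic estimator of the integrated autocorrelation time with an automatic
# window; illustrative only, not the publicly available code referenced above.
import numpy as np

def integrated_autocorr_time(x, c=6.0, max_lag=1000):
    """Estimate tau_int of a 1-d series x, using an automatic window W >= c * tau_int."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    var = xc.var()
    # normalized autocorrelation rho(t) for t = 0 .. max_lag
    rho = np.array([xc[: n - t] @ xc[t:] / ((n - t) * var) for t in range(max_lag + 1)])
    tau = 0.5
    for w in range(1, max_lag + 1):
        tau = 0.5 + rho[1 : w + 1].sum()
        if w >= c * tau:               # self-consistent window
            return tau, w
    return tau, max_lag                # no window found within max_lag

# toy example: an AR(1) chain with exact tau_int = (1 + a) / (2 * (1 - a)) = 9.5
rng = np.random.default_rng(1)
a, n = 0.9, 200_000
series = np.empty(n)
series[0] = rng.normal()
for t in range(1, n):
    series[t] = a * series[t - 1] + rng.normal()

tau_int, window = integrated_autocorr_time(series)
print(f"estimated tau_int = {tau_int:.1f}, window = {window}")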
