311

Target Element Sizes For Finite Element Tidal Models From A Domain-wide, Localized Truncation Error Analysis Incorporating Bottom Stress

Parrish, Denwood 01 January 2007 (has links)
A new methodology for the determination of target element sizes for the construction of finite element meshes applicable to the simulation of tidal flow in coastal and oceanic domains is developed and tested. The methodology is consistent with the discrete physics of tidal flow, and includes the effects of bottom stress. The method enables the estimation of the localized truncation error of the nonconservative momentum equations throughout a triangulated data set of water surface elevation and flow velocity. The method's domain-wide applicability is due in part to the formulation of a new localized truncation error estimator in terms of complex derivatives. More conventional criteria that are often used to determine target element sizes are limited to certain bathymetric conditions. The methodology developed herein is applicable over a broad range of bathymetric conditions, and can be implemented efficiently. Since the methodology permits the determination of target element size at points up to and including the coastal boundary, it is amenable to coastal domain applications including estuaries, embayments, and riverine systems. These applications require consideration of spatially varying bottom stress and advective terms, addressed herein. The new method, called LTEA-CD (localized truncation error analysis with complex derivatives), is applied to model solutions over the Western North Atlantic Tidal (WNAT) model domain (the bodies of water lying west of the 60° W meridian). The convergence properties of LTEA-CD are also analyzed. It is found that LTEA-CD may be used to build a series of meshes that produce converging solutions of the shallow water equations. An enhanced version of the new methodology, LTEA+CD (which accounts for locally variable bottom stress and Coriolis terms), is used to generate a mesh of the WNAT model domain having 25% fewer nodes and elements than an existing mesh upon which it is based; performance of the two meshes, in an average sense, is indistinguishable when considering elevation tidal signals. Finally, LTEA+CD is applied to the development of a mesh for the Loxahatchee River estuary; it is found that application of LTEA+CD provides a target element size distribution that, when implemented, outperforms a high-resolution semi-uniform mesh as well as a manually constructed, existing, documented mesh.
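As general background to how a local truncation error estimate translates into a target element size (a standard mesh-adaptation argument, not Parrish's specific complex-derivative estimator), suppose the discretization has formal order $p$, so the local truncation error at a point behaves like $\tau(x) \approx C(x)\,\Delta x(x)^p$. Requiring the error to meet a tolerance $\varepsilon$ then gives a target size

$$\Delta x_{\mathrm{target}}(x) \;=\; \Delta x_{\mathrm{current}}(x)\left(\frac{\varepsilon}{|\tau(x)|}\right)^{1/p},$$

so elements shrink where the estimated truncation error is large and grow where it is small.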
312

Immersed Finite Elements for a Second Order Elliptic Operator and Their Applications

Zhuang, Qiao 17 June 2020 (has links)
This dissertation studies immersed finite elements (IFE) for a second order elliptic operator and their applications to interface problems of related partial differential equations. We start with the immersed finite element methods for the second order elliptic operator with a discontinuous coefficient associated with the elliptic interface problems. We introduce an energy norm stronger than the one used in [111]. Then we derive an estimate for the IFE interpolation error with this energy norm using patches of interface elements. We prove both the continuity and coercivity of the bilinear form in a partially penalized IFE (PPIFE) method. These properties allow us to derive an error bound for the PPIFE solution in the energy norm under the standard piecewise $H^2$ regularity assumption instead of the more stringent $H^3$ regularity used in [111]. As an important consequence, this new estimate further enables us to show the optimal convergence in the $L^2$ norm, which could not be done by the analysis presented in [111]. Then we consider applications of IFEs developed for the second order elliptic operator to wave propagation and diffusion interface problems. The first application is for the time-harmonic wave interface problem that involves the Helmholtz equation with a discontinuous coefficient. We design PPIFE and DGIFE schemes, including the higher degree IFEs, for Helmholtz interface problems. We present an error analysis for the symmetric linear/bilinear PPIFE methods. Under the standard piecewise $H^2$ regularity assumption for the exact solution, following Schatz's arguments, we derive optimal error bounds for the PPIFE solutions in both an energy norm and the usual $L^2$ norm provided that the mesh size is sufficiently small. In the second group of applications, we focus on the error analysis for IFE methods developed for solving typical time-dependent interface problems associated with the second order elliptic operator with a discontinuous coefficient. For hyperbolic interface problems, which are typical wave propagation interface problems, we reanalyze the fully discrete PPIFE method in [143]. We derive the optimal error bounds for this PPIFE method for both an energy norm and the $L^2$ norm under the standard piecewise $H^2$ regularity assumption in the space variable of the exact solution. Simulations for standing and traveling waves are presented to corroborate the results of the error analysis. For parabolic interface problems, which are typical diffusion interface problems, we reanalyze the PPIFE methods in [113]. We prove that these PPIFE methods achieve optimal convergence not only in an energy norm but also in the usual $L^2$ norm under the standard piecewise $H^2$ regularity. / Doctor of Philosophy / This dissertation studies immersed finite elements (IFE) for a second order elliptic operator and their applications to a few types of interface problems. We start with the immersed finite element methods for the second order elliptic operator with a discontinuous coefficient associated with the elliptic interface problem. We can show that the IFE methods for the elliptic interface problems converge optimally when the exact solution has lower regularity than that in the previous publications. Then we consider applications of IFEs developed for the second order elliptic operator to wave propagation and diffusion interface problems.
For interface problems of the Helmholtz equation, which models time-harmonic wave propagation, we design IFE schemes, including higher degree schemes, and derive error estimates for a lower degree scheme. For interface problems of the second order hyperbolic equation, which models time-dependent wave propagation, we derive improved error estimates for the IFE methods and provide numerical simulations for both standing and traveling waves. For interface problems of the parabolic equation, which models time-dependent diffusion, we also derive improved error estimates for the IFE methods.
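For orientation, the "optimal convergence" claimed above has the form familiar from standard finite element theory. For a degree-one scheme under piecewise $H^2$ regularity, the bounds typically read (a generic statement for illustration; the precise norms and constants are those of the dissertation)

$$\|u - u_h\|_{E} \;\le\; C\,h\,\|u\|_{\tilde H^2(\Omega)}, \qquad \|u - u_h\|_{L^2(\Omega)} \;\le\; C\,h^2\,\|u\|_{\tilde H^2(\Omega)},$$

where $h$ is the mesh size and $\tilde H^2(\Omega)$ denotes the space of functions that are piecewise $H^2$ on the two sides of the interface.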
313

REAL-TIME RECONCILIATION OF COAL COMBUSTOR DATA

Montgomery, Roger Lee January 1982 (has links)
No description available.
314

Fehleranalyse : Untersuchungen an afrikaanssprachigen Studienanfängern im Fach Deutsch / Error analysis : a study of Afrikaans-speaking first-year university students of German

Annas, Rolf 03 1900 (has links)
Thesis (MA) -- Stellenbosch University, 1980. / No Abstract Available
315

Incorporating measurement error and density gradients in distance sampling surveys

Marques, Tiago Andre Lamas Oliveira January 2007 (has links)
Distance sampling is one of the most commonly used methods for estimating density and abundance. Conventional methods are based on the distances of detected animals from the center of point transects or the center line of line transects. These distances are used to model a detection function: the probability of detecting an animal, given its distance from the line or point. The probability of detecting an animal in the covered area is given by the mean value of the detection function with respect to the distribution of available distances. Given this probability, a Horvitz-Thompson-like estimator of abundance for the covered area follows, hence using a model-based framework. Inferences for the wider survey region are justified using the survey design. Conventional distance sampling methods are based on a set of assumptions. In this thesis I present results that extend distance sampling on two fronts. Firstly, estimators are derived for situations in which there is measurement error in the distances. These estimators use information about the measurement error in two ways: (1) a biased estimator based on the contaminated distances is multiplied by an appropriate correction factor, which is a function of the errors (PDF approach), and (2) the error model is cast into a likelihood framework that allows parameter estimation in the presence of measurement error (likelihood approach). Secondly, methods are developed that relax the conventional assumption that the distribution of animals is independent of distance from the lines or points (usually guaranteed by appropriate survey design). In particular, the new methods deal with the case where animal density gradients are caused by the use of non-random sampler allocation, for example transects placed along linear features such as roads or streams. This is dealt with separately for line and point transects, and at a later stage an approach for combining the two is presented. A considerable number of simulations and example analyses illustrate the performance of the proposed methods.
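As a concrete illustration of the conventional estimator described above, the following sketch fits a half-normal detection function to line-transect distances by maximum likelihood and forms the Horvitz-Thompson-like abundance estimate. The distances, truncation width w, and transect length L are invented placeholders, and the half-normal model is only one common choice; none of this is taken from the thesis itself.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Perpendicular distances (km) of detected animals from the transect line.
distances = np.array([0.02, 0.05, 0.08, 0.11, 0.03, 0.19, 0.07, 0.24, 0.01, 0.15])
w = 0.3   # truncation half-width of the surveyed strip (km)
L = 40.0  # total transect length (km)

def neg_log_lik(sigma):
    """Negative log-likelihood of the distances under a half-normal
    detection function g(x) = exp(-x^2 / (2 sigma^2)), truncated at w."""
    g = np.exp(-distances**2 / (2 * sigma**2))
    # Normalizing constant: integral of g over [0, w], by fine quadrature.
    x = np.linspace(0.0, w, 2001)
    mu = np.trapz(np.exp(-x**2 / (2 * sigma**2)), x)
    return -np.sum(np.log(g / mu))

sigma_hat = minimize_scalar(neg_log_lik, bounds=(1e-3, 1.0), method="bounded").x

# Mean detection probability over the covered strip.
x = np.linspace(0.0, w, 2001)
p_hat = np.trapz(np.exp(-x**2 / (2 * sigma_hat**2)), x) / w

# Horvitz-Thompson-like estimates for the covered area (2 w L).
n_hat = len(distances) / p_hat   # abundance in the covered area
d_hat = n_hat / (2 * w * L)      # density (animals per km^2)
print(f"sigma = {sigma_hat:.3f}, p = {p_hat:.3f}, D = {d_hat:.1f} per km^2")
```

The PDF approach sketched in the abstract would replace p_hat with a version corrected for the measurement-error distribution; the likelihood approach would instead build the error model directly into neg_log_lik.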
316

Automatic validation and optimisation of biological models

Cooper, Jonathan Paul January 2009 (has links)
Simulating the human heart is a challenging problem, with simulations being very time consuming, to the extent that some can take days to compute even on high-performance computing resources. There is considerable interest in computational optimisation techniques, with a view to making whole-heart simulations tractable. Reliability of heart model simulations is also of great concern, particularly considering clinical applications. Simulation software should be easily testable and maintainable, which is often not the case with extensively hand-optimised software. It is thus crucial to automate and verify any optimisations. CellML is an XML language designed for describing biological cell models from a mathematical modeller’s perspective, and is being developed at the University of Auckland. It gives us an abstract format for such models, and from a computer science perspective looks like a domain-specific programming language. We are investigating the gains available from exploiting this viewpoint. We describe various static checks for CellML models, notably checking the dimensional consistency of mathematics, and investigate the possibilities of provably correct optimisations. In particular, we demonstrate that partial evaluation is a promising technique for this purpose, and that it combines well with a lookup table technique, commonly used in cardiac modelling, which we have automated. We have developed a formal operational semantics for CellML, which enables us to mathematically prove the partial evaluation of CellML correct, in that optimisation of models will not change the results of simulations. The use of lookup tables involves an approximation, and thus introduces some error; we have analysed this using a posteriori techniques and shown how it may be managed. While the techniques could be applied more widely to biological models in general, this work focuses on cardiac models as an application area. We present experimental results demonstrating the effectiveness of our optimisations on a representative sample of cardiac cell models, in a variety of settings.
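As a schematic of the lookup-table technique mentioned above, the sketch below tabulates a voltage-dependent rate on a uniform grid once and replaces run-time exponentials with linear interpolation, with an a posteriori check of the error introduced. The rate function (the Hodgkin-Huxley beta_m) and the table parameters are illustrative choices, not details of the thesis or of CellML.

```python
import numpy as np

# A voltage-dependent gating rate (Hodgkin-Huxley beta_m); in whole-cell
# models, exponentials like this dominate the cost of the right-hand side.
def beta_m(v):
    return 4.0 * np.exp(-(v + 65.0) / 18.0)

# Build the table once, before time stepping begins.
V_MIN, V_MAX, N = -100.0, 50.0, 3001
v_grid = np.linspace(V_MIN, V_MAX, N)
table = beta_m(v_grid)
step = (V_MAX - V_MIN) / (N - 1)

def beta_m_lut(v):
    """Linear interpolation into the precomputed table. Valid only for
    v in [V_MIN, V_MAX]; simulators must guard against out-of-range v."""
    i = (np.asarray(v) - V_MIN) / step
    i0 = np.clip(np.floor(i).astype(int), 0, N - 2)
    frac = i - i0
    return table[i0] * (1.0 - frac) + table[i0 + 1] * frac

# A posteriori check of the interpolation error introduced by the table.
v = np.linspace(V_MIN, V_MAX, 100001)
print("max abs error:", np.max(np.abs(beta_m_lut(v) - beta_m(v))))
```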
317

Apprentissage du français comme langue étrangère (L3+) par des étudiants indiens / Learning French as a foreign language (L3+) by Indian students

Thomas-Anugraham, Alice January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
318

Une analyse des erreurs des étudiants persanophones de français dans l'emploi des connecteurs à l'écrit / An error analysis of Iranian students of French in the use of connectors in writing

Esmail Jamaat, Narges 25 June 2012 (has links)
Dans cette recherche consacrée à l'emploi des connecteurs à l'écrit par des apprenants persanophones de français langue étrangère, nous avons étudié leur contribution à l'organisation des textes, ainsi que le rôle de ces connecteurs dans la construction de la cohésion textuelle et dans l'évolution de l'information. Après avoir observé les différences structurelles entre la langue française et la langue persane, surtout en ce qui concerne les connecteurs, nous avons formé l'hypothèse que la langue maternelle de nos apprenants pourrait contribuer à l'emploi erroné des connecteurs. Dans cette optique, nous nous sommes inspirés des théories relatives à l'analyse d'erreurs et à l'analyse contrastive. Celles-ci nous ont permis dans un premier temps de formuler une typologie des erreurs relevées, et dans un second temps de proposer une explication des erreurs constatées. L'analyse de l'emploi de différents types de connecteurs dans les productions écrites d'un groupe d'étudiants iraniens a révélé des points intéressants à la fois sur les compétences des apprenants mais également à propos de leurs besoins langagiers. Cette recherche a confirmé notre hypothèse initiale tout en la nuançant, car elle démontre que le recours à la langue maternelle ne conduit pas inexorablement à un emploi erroné des connecteurs en français L2. / In this research, focusing on the use of connectors in written texts by Persian-speaking learners of French as a foreign language, we investigated the contribution of these connectors to text organization, as well as their role in the construction of textual cohesion and in the flow of information. After observing the structural differences between French and Persian, especially regarding connectors, we hypothesized that the students' mother tongue might contribute to erroneous connector use. To this end, we drew on theories of error analysis and contrastive analysis, which allowed us first to formulate a typology of the errors found and then to propose an explanation for them. The analysis of the use of different types of connectors in the written productions of a group of Iranian students revealed interesting points about both the students' competence and their language needs. This research confirmed our initial hypothesis while qualifying it, showing that recourse to the mother tongue does not necessarily lead to erroneous use of connectors in French L2.
319

Mathematics for history's sake : a new approach to Ptolemy's Geography

Mintz, Daniel V. January 2011 (has links)
Almost two thousand years ago, Claudius Ptolemy created a guide to drawing maps of the world, identifying the names and coordinates of over 8,000 settlements and geographical features. Using the coordinates of those cities and landmarks which have been identified with modern locations, a series of best-fit transformations has been applied to several of Ptolemy’s regional maps, those of Britain, Spain, and Italy. The transformations relate Ptolemy’s coordinates to their modern equivalents by rotation and skewed scaling. These reflect the types of error that appear in Ptolemy’s data, namely those of distance and orientation. The mathematical techniques involved in this process are all modern. However, these techniques have been altered in order to deal with the historical difficulties of Ptolemy’s maps. To think of Ptolemy’s data as similar to that collected from a modern random sampling of a population and to apply unbiased statistical methods to it would be erroneous. Ptolemy’s data is biased, and the nature of that bias is going to be informed by the history of the data. Using such methods as cluster analysis, Procrustes analysis, and multidimensional scaling, we aimed to assess numerically the accuracy of Ptolemy’s maps. We also investigated the nature of the errors in the data and whether or not these could be linked to historical developments in the areas mapped.
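As an indication of how such a best-fit comparison can be set up, the sketch below aligns a handful of coordinate pairs with an orthogonal Procrustes fit (rotation plus a single uniform scale). Note that the thesis uses skewed scaling, a more general transformation, and real analyses would use the hundreds of identified sites; the coordinate values here are invented placeholders.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Placeholder (lon, lat) pairs for the same sites: Ptolemy's coordinates
# and their modern equivalents. These values are invented for illustration.
ptolemy = np.array([[20.0, 54.5], [21.3, 53.2], [19.1, 55.9], [22.4, 52.8]])
modern  = np.array([[-1.5, 52.4], [-0.1, 51.5], [-2.9, 53.8], [ 1.2, 51.1]])

# Centre both configurations, then find the rotation and uniform scale that
# best map Ptolemy's points onto the modern ones in the least-squares sense.
p0 = ptolemy - ptolemy.mean(axis=0)
m0 = modern - modern.mean(axis=0)
R, trace = orthogonal_procrustes(p0, m0)
k = trace / np.square(p0).sum()   # optimal uniform scale factor
fitted = k * (p0 @ R) + modern.mean(axis=0)

# Residual distances score how far each ancient point lies from its
# best-fit position -- a simple numerical accuracy measure for the map.
residuals = np.linalg.norm(fitted - modern, axis=1)
print("rms residual:", np.sqrt(np.mean(residuals**2)))
```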
320

Tense and aspect errors in junior high school students’ writing : A study of risk taking

Essving, Linn January 2019 (has links)
English is taught in Swedish schools as a foreign language. The students are at different levels, and most of them try to achieve a higher proficiency level. The extent to which students succeed at learning a language depends on many different factors; previous studies have shown that students who are open to taking risks in their production are at an advantage. The present study investigated 80 texts written by students in the seventh and the ninth grade. The main aim was to investigate to what extent errors and complexity levels can be explained in relation to risk taking. In more detail, the study examined differences between the grades in terms of degree of syntactic complexity and what kinds of aspect and tense errors were made. To investigate the errors, an approach called Error Analysis was used. The results showed that for both grades, substitution errors were the most common error type, and there was a significant difference between the grades (p<0.001); however, the other error types showed no significant differences. Regarding the complexity levels, there was a highly significant difference (p<0.001) for the least complex sentences, but there were no significant differences between the grades for the highest and second highest levels of complexity. The results furthermore suggest a correlation between risk taking and a higher likelihood of making errors, as a large proportion of the erroneous sentences written by ninth-grade students were found in syntactically complex sentences. Most of the errors made by seventh-grade students, however, were found in less syntactically complex sentences.
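The grade comparisons above rest on significance tests over error counts. The study's exact figures and choice of test are not reproduced here, but a comparison of this kind can be sketched as a chi-square test of independence on a 2x2 table of invented counts:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of substitution errors vs. other tense/aspect errors
# in each grade (rows: grade 7, grade 9); the real counts are not given here.
table = [[58, 21],   # grade 7: substitution, other
         [23, 18]]   # grade 9: substitution, other

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A p-value below 0.001 would match the 'highly significant' difference
# reported for substitution errors between the grades.
```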
