201

Adaptive multiscale modeling of polymeric materials using goal-oriented error estimation, Arlequin coupling, and goals algorithms

Bauman, Paul Thomas, 1980- 29 August 2008 (has links)
Scientific theories that explain how physical systems behave are described by mathematical models, which provide the basis for computer simulations of events that occur in the physical universe. These models, being only mathematical characterizations of actual phenomena, are subject to error because of the inherent limitations of all mathematical abstractions. In this work, new theory and methodologies are developed to quantify such modeling error in a way that resolves a fundamental and longstanding issue: multiscale modeling, the development of models of events that transcend many spatial and temporal scales. Specifically, we devise the machinery for a posteriori estimates of the relative modeling error between a model of fine scale and another of coarser scale, and we use this methodology as a general approach to multiscale problems. The target application is one of critical importance to nanomanufacturing: imprint lithography of semiconductor devices.

The development of numerical methods for multiscale modeling has become one of the most important areas of computational science. Technological developments in the manufacturing of semiconductors hinge upon the ability to understand physical phenomena from the nanoscale to the microscale and beyond. Predictive simulation tools are critical to the advancement of the nanomanufacturing of semiconductor devices. In principle, they can displace expensive experiments and testing and optimize the design of the manufacturing process. The development of such tools rests at the edge of contemporary methods and high-performance computing capabilities and is a major open problem in computational science.

In this dissertation, a molecular model is used to simulate the deformation of polymeric materials used in the fabrication of semiconductor devices. Algorithms are described that lead to a complex molecular model of polymer materials designed to produce an etch barrier, a critical component in imprint lithography approaches to semiconductor manufacturing. Each application of this so-called polymerization process leads to one realization of a lattice-type model of the polymer, a molecular statics model of enormous size and complexity. This is referred to as the base model for analyzing the deformation of the etch barrier, a critical feature of the manufacturing process. To reduce the size and complexity of this model, a sequence of coarser surrogate models is generated. These surrogates are the multiscale models critical to the successful computer simulation of the entire manufacturing process. The surrogate involves a combination of particle models, the molecular model of the polymer, and a coarse-scale model of the polymer as a nonlinear hyperelastic material. Coefficients for the nonlinear elastic continuum model are determined using numerical experiments on representative volume elements of the polymer model. Furthermore, a simple model of initial strain is incorporated into the continuum equations to model the inherent shrinking of the polymer. A coupled particle and continuum model is then constructed using a special algorithm designed to provide constraints on a region of overlap between the continuum and particle models. This coupled model is based on the so-called Arlequin method, which was introduced in the context of coupling two continuum models with differing levels of discretization.
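As a hedged aside (notation ours, not the dissertation's), the Arlequin coupling idea can be sketched as follows: the energies of the two overlapping models are blended by partition-of-unity weight functions, and compatibility is enforced weakly on the overlap region through a Lagrange multiplier.

```latex
% Sketch of Arlequin energy blending over an overlap region \Omega_o;
% e_1, e_2 are energy densities of the two models (for the particle model,
% the integral becomes a weighted sum over bonds). Notation is illustrative.
\begin{aligned}
  E(u_1, u_2) &= \int_{\Omega} \alpha_1(x)\, e_1(u_1)\, dx
               + \int_{\Omega} \alpha_2(x)\, e_2(u_2)\, dx,
  \qquad \alpha_1 + \alpha_2 = 1 \ \text{on } \Omega_o, \\
  b(\lambda,\, u_1 - u_2) &= 0 \quad \text{on } \Omega_o,
\end{aligned}
```

where the weights satisfy a partition of unity on the overlap, the multiplier \(\lambda\) ties the two displacement fields together, and \(b(\cdot,\cdot)\) is a coupling bilinear form.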
It is shown that the Arlequin problem for the particle-to-continuum model is well posed in a one-dimensional setting involving linear harmonic springs coupled with a linearly elastic continuum. Several numerical examples are presented. Numerical experiments in three dimensions are also discussed in which the polymer model is coupled to a nonlinear elastic continuum. Error estimates in local quantities of interest are constructed in order to estimate the modeling error due to the approximation of the particle model by the coupled multiscale surrogate model. The estimates of the error are computed by solving an auxiliary adjoint, or dual, problem that incorporates as data the quantity of interest or its derivatives. The solution of the adjoint problem indicates how the error in the approximation of the polymer model influences the error in the quantity of interest. The error in the quantity of interest represents the relative error between the value of the quantity evaluated for the base model, a quantity typically unavailable or intractable, and the value of the quantity of interest provided by the multiscale surrogate model. To estimate the error in the quantity of interest, a theorem is employed that establishes that the error coincides with the value of the residual functional acting on the adjoint solution plus a higher-order remainder. For each surrogate in a sequence of surrogates generated, the residual functional acting on various approximations of the adjoint is computed. These error estimates are used to construct an adaptive algorithm whereby the model is adapted by supplying additional fine-scale data in certain subdomains in order to reduce the error in the quantity of interest. The adaptation algorithm involves partitioning the domain and selecting which subdomains are to use the particle model, which the continuum model, and where the two overlap. When the algorithm identifies a region that contributes a relatively large amount to the error in the quantity of interest, that region is scheduled for refinement by switching its model to the particle model. Numerical experiments on several configurations representative of nano-features in semiconductor device fabrication demonstrate the effectiveness of the error estimate in controlling the modeling error, as well as the ability of the adaptive algorithm to reduce the error in the quantity of interest. There are two major conclusions of this study: (1) an effective and well-posed multiscale model that couples particle and continuum models can be constructed as a surrogate to molecular statics models of polymer networks, and (2) the modeling error for such systems can be estimated with sufficient accuracy to provide the basis for very effective multiscale modeling procedures. The methodology developed in this study provides a general approach to multiscale modeling. The computational procedures, computer codes, and results could provide a powerful tool for understanding, designing, and optimizing an important class of semiconductor manufacturing processes. The study in this dissertation involves all three components of the CAM graduate program requirements: Area A, Applicable Mathematics; Area B, Numerical Analysis and Scientific Computation; and Area C, Mathematical Modeling and Applications.
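The error-representation theorem invoked above is standard in goal-oriented error estimation. Schematically (notation ours, not the dissertation's), with base solution \(u\), surrogate solution \(u_0\), quantity of interest \(Q\), adjoint solution \(p\), and residual functional \(\mathcal{R}\):

```latex
% Schematic goal-oriented error representation; \Delta collects the
% higher-order remainder terms mentioned in the abstract.
Q(u) - Q(u_0) \;=\; \mathcal{R}(u_0;\, p) \;+\; \Delta
```

Evaluating the surrogate's residual on (an approximation of) the adjoint solution thus yields a computable estimate of the error in the quantity of interest, which is what drives the adaptive algorithm.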
The multiscale modeling approach developed here is based on the construction of continuum surrogates and coupling them to molecular statics models of polymer as well as a posteriori estimates of error and their adaptive control. A detailed mathematical analysis is provided for the Arlequin method in the context of coupling particle and continuum models for a class of one-dimensional model problems. Algorithms are described and implemented that solve the adaptive, nonlinear problem proposed in the multiscale surrogate problem. Large scale, parallel computations for the base model are also shown. Finally, detailed studies of models relevant to applications to semiconductor manufacturing are presented. / text
202

Assessment of lower secondary school students' performance in the language course: a research approach to Greek as a second or foreign language

Τσιλομελέκη, Κωνσταντίνα 28 May 2015 (has links)
The assessment of the performance of both Greek and foreign students needs to be more objective and criterion-based so that the school promotes equality. The aim of the present study is the assessment of Greek and foreign students' performance in the language course, as well as the investigation of their progress through the grades of lower secondary school.
203

Development and analysis of optical pH imaging techniques

Lin, Yuxiang January 2010 (has links)
The pH of tumors and surrounding tissues is a key biophysical property of the tumor microenvironment that affects how a tumor survives and how it invades the surrounding space of normal tissue. Research into tumorigenesis and tumor treatment depends greatly on accurate, precise, and reproducible measurements. Optical imaging is generally regarded as the best choice for non-invasive, high-spatial-resolution measurements. Ratiometric fluorescence imaging and fluorescence lifetime imaging microscopy (FLIM) are the two primary ways of measuring tumor pH.

pH measurement in a window chamber animal model using a ratiometric fluorescence imaging technique is demonstrated in this dissertation. The experimental setup, imaging protocols, and results are presented. A significantly varying bias was consistently observed in the measured pH. A comprehensive analysis of the possible error sources accounting for this bias is carried out. The analysis reveals that the accuracy of the ratiometric method is most likely limited by biological and physiological factors.

FLIM is a promising alternative because the fluorescence lifetime is insensitive to these biological and physiological factors. Photon noise is the predominant error source in FLIM. The Fisher information matrix and the Cramér-Rao lower bound are used to calculate the lowest possible variance of the estimated lifetime for time-domain (TD) FLIM. A statistical analysis of frequency-domain (FD) FLIM using homodyne lock-in detection is also performed, and the probability density function of the estimated lifetime is derived. The results allow the derivation of the optimum experimental parameters, which yield the lowest variance of the estimated lifetime in a given period of imaging time. The analyses of both TD- and FD-FLIM agree with the results of corresponding Monte Carlo simulations.
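For reference, a classical benchmark in this setting (a textbook result, not a claim about this dissertation's specific derivations) is the Cramér-Rao lower bound for estimating a mono-exponential lifetime \(\tau\) from \(N\) detected photons under idealized time-domain conditions (no background, unbounded observation window):

```latex
% Idealized CRLB for mono-exponential lifetime estimation from N photons.
\operatorname{Var}(\hat{\tau}) \;\ge\; \frac{\tau^{2}}{N},
\qquad \text{equivalently} \qquad
F \;=\; \frac{\sigma_{\hat{\tau}}}{\tau}\sqrt{N} \;\ge\; 1,
```

where the photon-economy figure of merit \(F\) measures how far a practical estimator or instrument falls from this ideal.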
204

Physics Aware Programming Paradigm and Runtime Manager

Zhang, Yeliang January 2007 (has links)
The overarching goal of this dissertation research is to realize a virtual collaboratory for the investigation of large-scale scientific computing applications, which generally pass through different execution phases at runtime; each phase has different computational, communication, and storage requirements as well as different physical characteristics. Consequently, an optimal solution or numerical scheme for one execution phase might not be appropriate for the next phase of the application execution. Choosing the ideal numerical algorithms and solutions for all application runtime phases remains an active research area. In this dissertation, we present the Physics Aware Programming (PAP) paradigm, which enables programmers to identify the appropriate solution methods to exploit the heterogeneity and the dynamism of the application execution states. We implement a Physics Aware Runtime Manager (PARM) to exploit the PAP paradigm. PARM periodically monitors and analyzes the runtime characteristics of the application to identify its current execution phase (state). For each change in the application execution phase, PARM adaptively exploits the spatial and temporal attributes of the application in the current state to identify the ideal numerical algorithms/solvers that optimize its performance. We have evaluated our approach using a real-world application commonly used in subsurface modeling (Variable Saturated Aquifer Flow and Transport, VSAFT2D), a diffusion problem kernel, and a seismic problem kernel. We evaluated the performance gain of the PAP paradigm with up to 2,000,000 nodes in the computational domain, implemented on 32 processors. Our experimental results show that by exploiting the application's physics characteristics at runtime and applying the appropriate numerical scheme with adapted spatial and temporal attributes, a significant speedup can be achieved (around 80%), while the overhead introduced by PAP is negligible (less than 2%). We also show that the results using PAP are as accurate as the numerical solutions that use a fine grid resolution.
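As a hedged illustration of the monitor-detect-adapt loop described above, consider the following minimal sketch; all names, phases, and thresholds (State, detect_phase, SCHEMES, and so on) are assumptions for illustration, not the actual PARM interfaces.

```python
# Hypothetical sketch of a PARM-style monitor/adapt loop; names, phases,
# and thresholds are illustrative assumptions, not the actual PARM API.
from dataclasses import dataclass

@dataclass
class State:
    max_gradient: float   # runtime physics metric, e.g. front steepness
    time: float

def detect_phase(state: State) -> str:
    """Classify the current execution phase from runtime physics metrics."""
    return "steep-front" if state.max_gradient > 1.0 else "smooth"

# Map each detected phase to a (solver, spatial step, time step) choice.
SCHEMES = {
    "smooth":      ("explicit", 4.0, 0.50),   # coarse grid, large dt
    "steep-front": ("implicit", 1.0, 0.05),   # refined grid, small dt
}

def run(simulate_step, state: State, t_end: float) -> State:
    scheme = None
    while state.time < t_end:
        phase = detect_phase(state)            # periodic runtime monitoring
        if SCHEMES[phase] != scheme:           # phase change -> adapt scheme
            scheme = SCHEMES[phase]
            print(f"t={state.time:.2f}: switching to {scheme}")
        state = simulate_step(state, *scheme)  # advance with current scheme
    return state
```

The design point this sketch captures is that the expensive fine-grained scheme is paid for only while the physics of the current phase demands it.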
205

Error Analysis of the National Test in English courses A and B

Alagic, Aida January 2010 (has links)
This paper sets out to examine the most common errors in the national test and whether students make the same errors in English course B as in English course A at upper secondary school in Sweden. The method used for this study is quantitative: errors are counted across nine grammatical features. Twenty national tests were used to carry out the study, ten from English course A and ten from English course B. Results for all features in English course A are compared with those for selected features in English course B. The results show that the most common errors in the national test concern subject-verb agreement and tense. Both of these features also worsened in English course B, and genitive errors doubled in English course B. The greatest improvement was in the use of capital letters. Other features either stayed the same or improved slightly. One remedy for grammatical errors could be for teachers and students to pay more attention to them, and for teachers to include more grammar in their lessons so that students have an opportunity to improve.
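As a hedged sketch of the kind of frequency count such a study involves (the feature tags and sample data below are illustrative assumptions, not the paper's data):

```python
# Minimal sketch of tallying annotated errors by grammatical feature;
# the tags and sample data are illustrative assumptions only.
from collections import Counter

# Each annotated error is tagged with one of the studied features, e.g.:
errors_course_a = ["subject-verb agreement", "tense", "tense",
                   "genitive", "capital letter"]
errors_course_b = ["subject-verb agreement", "tense", "genitive", "genitive"]

freq_a, freq_b = Counter(errors_course_a), Counter(errors_course_b)
for feature in sorted(set(freq_a) | set(freq_b)):
    print(f"{feature:25s}  A: {freq_a[feature]:2d}   B: {freq_b[feature]:2d}")
```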
206

The syntactic development of grade 12 ESL learners / K. Hattingh

Hattingh, Karien January 2005 (has links)
The primary aim of this study is to determine the level of syntactic development in English of South African matriculants. The ESL standard in South Africa has been criticised, but no objective data are available. This study provides relevant data, based on an index, and indicates shortcomings in learners' syntactic competence. Poor school-leaving standards in English are a cause of great concern in South Africa, and complaints about school leavers' standard of English have increased over the last few years. Yet, apart from generally being labelled as "poor", little is known about the actual level of development reached by ESL learners. Comments are often based on subjective impressions. This study focuses on syntactic development in writing and aims to determine the level of syntactic development of Grade 12 ESL learners in an objective way. Interlanguage, the concept of 'stages of development', and fossilization are discussed. The need for an index that can measure language development objectively is considered. General means of measuring syntactic development are evaluated and an index formula is established by means of statistical analyses. This formula is based on the T-unit and assigns numerical values to levels of development. The index formula is used to determine the level of syntactic development of a group of Grade 12 ESL learners. The compositions that were analysed were obtained from six provinces in South Africa. Index values are calculated for Higher and Standard Grade, for the group as a whole, and for each of the six provinces. An Error Analysis is conducted and error frequencies are reported. Problem areas in syntax are identified. The implications of the findings are considered and recommendations are briefly made for the teaching and learning of grammar. / Thesis (M.A. (English))--North-West University, Potchefstroom Campus, 2005.
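The abstract does not reproduce the index formula itself; as a hedged illustration only, indices in this literature are commonly built from T-unit measures such as mean length of T-unit and clauses per T-unit:

```python
# Illustrative T-unit metrics (mean length of T-unit, clauses per T-unit);
# the thesis's actual index formula is not given in the abstract, so this
# is only a sketch of the kind of measures such an index builds on.
def t_unit_metrics(t_units: list[str]) -> dict[str, float]:
    words = [len(t.split()) for t in t_units]
    # Crude clause proxy: one main clause plus common subordinators found.
    clause_markers = ("because", "when", "that", "which", "if", "although")
    clauses = [1 + sum(t.lower().count(m) for m in clause_markers)
               for t in t_units]
    return {
        "mean_length_of_t_unit": sum(words) / len(t_units),
        "clauses_per_t_unit": sum(clauses) / len(t_units),
    }

sample = ["The learners wrote essays because they had to",
          "Errors were counted"]
print(t_unit_metrics(sample))
```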
207

Spatio-temporal analysis of wind power prediction errors

Vlasova, Julija 16 August 2007 (has links)
Nowadays there is no need to convince anyone of the necessity of renewable energy. One of the most promising ways to obtain it is wind power. Countries like Denmark, Germany, and Spain have proved that, when professionally managed, it can cover a substantial part of the overall energy demand. One of the main problems specific to wind power management is the development of accurate power prediction models. State-of-the-art systems provide predictions for a single wind turbine, a wind farm, or a group of them; however, the spatio-temporal propagation of the errors is not adequately considered. In this paper the potential for improving the modern wind power prediction tool WPPT, based on the spatio-temporal propagation of the errors, is examined. Several statistical models (linear, threshold, varying-coefficient, and conditional parametric) capturing the cross-dependency of the errors obtained in different parts of the country are presented. The analysis is based on the weather forecast information and wind power prediction errors obtained for the territory of Denmark in the year 2004. / One of the most promising and most actively developed renewable energy sources is wind. European Union countries such as Denmark, Germany, and Spain have shown from experience that a properly managed and developed wind sector can cover a substantial share of a country's energy demand. Under EU Directive 2001/77/EC, Lithuania has committed that by 2010 electricity produced from renewable energy sources will make up 7% of the electricity consumed. To fulfil these commitments, a resolution of the Lithuanian government has established a scheme for promoting the use of renewable energy sources, under which the use of wind energy in the country is to be expanded gradually. It is planned that by 2010 wind power plants with a total capacity of 200 MW will have been built, producing about 2.2% of all electricity consumed [Marčiukaitis, 2007]. As the share of wind energy in the power system grows, Lithuania will in future face system balancing problems caused by continual fluctuations in wind turbine output. As the experience of other countries shows, forecasting wind power output is an effective means of solving these problems. This work presents several statistical models and methods for improving wind power forecasts. The analysis and modeling were carried out on data and meteorological forecasts from the Danish WPPT (Wind Power Prediction Tool). The main aim of the work is to modify WPPT, taking into account the influence of wind direction and speed on the energy... [see the full text for the rest]
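As a hedged sketch (synthetic data and a single-lag linear form assumed for illustration; the thesis also considers threshold, varying-coefficient, and conditional parametric variants), the cross-dependency of errors between sites can be captured by regressing the error at a downstream site on the lagged error at an upstream site:

```python
# Minimal sketch of a spatio-temporal error model: regress the prediction
# error at a downstream site on the lagged error at an upstream site.
# Synthetic data and the single-lag linear form are assumptions only.
import numpy as np

rng = np.random.default_rng(0)
T, lag = 1000, 3                       # samples; assumed travel-time lag
e_up = rng.normal(size=T)              # upstream prediction errors
e_down = np.empty(T)
e_down[lag:] = 0.6 * e_up[:-lag] + 0.3 * rng.normal(size=T - lag)
e_down[:lag] = rng.normal(size=lag)

# Fit e_down[t] ~ a + b * e_up[t - lag] by least squares.
X = np.column_stack([np.ones(T - lag), e_up[:-lag]])
coef, *_ = np.linalg.lstsq(X, e_down[lag:], rcond=None)
print(f"intercept={coef[0]:.3f}, lagged-upstream coefficient={coef[1]:.3f}")
```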
209

Controlled Lagrangian particle tracking: analyzing the predictability of trajectories of autonomous agents in ocean flows

Szwaykowska, Klementyna 13 January 2014 (has links)
Use of model-based path planning and navigation is a common strategy in mobile robotics. However, navigation performance may degrade in complex, time-varying environments under model uncertainty because of loss of prediction ability for the robot state over time. Exploration and monitoring of ocean regions using autonomous marine robots is a prime example of an application where use of environmental models can have great benefits in navigation capability. Yet, in spite of recent improvements in ocean modeling, errors in model-based flow forecasts can still significantly affect the accuracy of predictions of robot positions over time, leading to impaired path-following performance. In developing new autonomous navigation strategies, it is important to have a quantitative understanding of error in predicted robot position under different flow conditions and control strategies. The main contributions of this thesis include development of an analytical model for the growth of error in predicted robot position over time and theoretical derivation of bounds on the error growth, where error can be attributed to drift caused by unmodeled components of ocean flow. Unlike most previous works, this work explicitly includes spatial structure of unmodeled flow components in the proposed error growth model. It is shown that, for a robot operating under flow-canceling control in a static flow field with stochastic errors in flow values returned at ocean model gridpoints, the error growth is initially rapid, but slows when it reaches a value of approximately twice the ocean model gridsize. Theoretical values for mean and variance of error over time under a station-keeping feedback control strategy and time-varying flow fields are computed. Growth of error in predicted vehicle position is modeled for ocean models whose flow forecasts include errors with large spatial scales. Results are verified using data from several extended field deployments of Slocum autonomous underwater gliders, in Monterey Bay, CA in 2006, and in Long Bay, SC in 2012 and 2013.
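As a toy Monte Carlo sketch of the qualitative behavior described above (our construction, not the thesis's model): under flow-canceling control with a static, spatially correlated unmodeled flow, the position-prediction error grows quickly at first and then saturates on the order of the error field's spatial scale.

```python
# Toy Monte Carlo sketch of prediction-error growth under a static,
# spatially correlated unmodeled flow (our construction for illustration,
# not the thesis's model). The robot is predicted to hold station at 0;
# its true position drifts with the unmodeled flow at its location.
import numpy as np

rng = np.random.default_rng(1)
L, sigma = 1.0, 0.1          # correlation length and magnitude of flow error
n_trials, n_modes, n_steps, dt = 500, 50, 2000, 0.05

# Each trial gets its own frozen random velocity field v(x) built from
# random Fourier modes with wavenumbers ~ 1/L.
k = rng.normal(scale=1.0 / L, size=(n_trials, n_modes))
phase = rng.uniform(0, 2 * np.pi, size=(n_trials, n_modes))
amp = sigma * np.sqrt(2.0 / n_modes)

x = np.zeros(n_trials)       # true position; predicted position stays at 0
rms = np.empty(n_steps)
for step in range(n_steps):
    v = amp * np.cos(x[:, None] * k + phase).sum(axis=1)
    x = x + v * dt           # drift from the unmodeled flow
    rms[step] = np.sqrt(np.mean(x ** 2))

print(f"RMS error early: {rms[49]:.3f}, late: {rms[-1]:.3f} "
      f"(field correlation length {L})")
```

In this toy model the drifting vehicle settles near a zero of the frozen velocity field, typically within about one correlation length of its start, loosely mirroring the saturation at roughly twice the ocean model gridsize reported above.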
210

Multiple-path stack algorithms for decoding convolutional codes

Haccoun, David January 1974 (has links)
No description available.
