151

Coupled Space-Angle Adaptivity and Goal-Oriented Error Control for Radiation Transport Calculations

Park, HyeongKae 15 November 2006 (has links)
This research is concerned with the self-adaptive numerical solution of the neutral-particle radiation transport problem. Radiation transport is an extremely challenging computational problem because the governing equation is seven-dimensional (3 in space, 2 in direction, 1 in energy, and 1 in time) with a high degree of coupling between these variables. Without care, discretizing this large number of independent variables can lead to sets of linear equations of intractable size. Though parallel computing has allowed the solution of very large problems, available computational resources will always be finite, because industry continues to demand ever more sophisticated multiphysics models. There is thus a pressing need to optimize the discretizations so as to minimize the effort and maximize the accuracy. One way to achieve this goal is through adaptive phase-space refinement. Unfortunately, the quality of a discretization (and of its solution) is, in general, not known a priori; accurate error estimates can only be attained via a posteriori error analysis. In particular, in the context of the finite element method, a posteriori error analysis provides a rigorous error bound. The main difficulty in applying well-established a posteriori error analysis, and subsequent adaptive refinement, to radiation transport is the strong coupling between the spatial and angular variables. This research addresses this issue within the context of the second-order, even-parity form of the transport equation discretized with the finite-element spherical harmonics method. The objective of this thesis is to develop a posteriori error analysis in a coupled space-angle framework and an efficient adaptive algorithm. Moreover, a mesh refinement strategy tuned to minimize the error in a target engineering output has been developed by employing the dual (adjoint) problem.
This numerical framework has been implemented in the general-purpose neutral particle code EVENT for assessment.
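The goal-oriented, dual-weighted refinement strategy this abstract describes can be sketched on a far simpler model problem. The following is a minimal illustration under assumed simplifications (1D Poisson, P1 elements, goal functional J(u) = ∫u dx with its known dual solution, and a heuristic indicator h²|f||z|), not the thesis's space-angle transport framework:

```python
import numpy as np

def solve_poisson(nodes, f):
    """P1 finite elements for -u'' = f on (0,1), u(0) = u(1) = 0."""
    n = len(nodes)
    h = np.diff(nodes)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for k in range(n - 1):
        A[k, k] += 1/h[k]; A[k+1, k+1] += 1/h[k]
        A[k, k+1] -= 1/h[k]; A[k+1, k] -= 1/h[k]
        fm = f(0.5 * (nodes[k] + nodes[k+1]))   # midpoint quadrature
        b[k] += fm * h[k] / 2; b[k+1] += fm * h[k] / 2
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(A[1:-1, 1:-1], b[1:-1])  # Dirichlet BCs
    return u

def refine(nodes, f, z, frac=0.3):
    """Bisect elements with large dual-weighted indicators h^2 |f| |z|."""
    h = np.diff(nodes)
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    eta = h**2 * np.abs(f(mids)) * np.abs(z(mids))
    return np.sort(np.concatenate([nodes, mids[eta >= frac * eta.max()]]))

f = lambda x: np.ones_like(x)        # load
z = lambda x: 0.5 * x * (1 - x)      # exact dual solution for J(u) = ∫u dx
nodes = np.linspace(0, 1, 5)
for _ in range(6):
    u = solve_poisson(nodes, f)
    J = np.sum(0.5 * (u[:-1] + u[1:]) * np.diff(nodes))  # J(u_h) = ∫u_h dx
    nodes = refine(nodes, f, z)
print(abs(J - 1/12))                 # goal error; exact J(u) = 1/12
```

The dual weight z concentrates refinement where the local residual actually influences the target output, which is the essence of goal-oriented adaptivity.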
152

On evaluating errors produced by some L2 speakers of English

Wong, Yuk-ling, Denise., 黃玉玲. January 1985 (has links)
published_or_final_version / Language Studies / Master / Master of Arts
153

Analysis of the quasicontinuum method and its application

Wang, Hao January 2013 (has links)
The present thesis concerns the error estimates of different energy-based quasicontinuum (QC) methods, a class of computational methods for coupling atomistic and continuum models of micro- or nano-scale materials. The thesis consists of two parts. The first part considers the a priori error estimates of three energy-based QC methods. The second part deals with the a posteriori error estimates of a specific energy-based QC method which was recently developed. In the first part, we develop a unified framework for the a priori error estimates and present a new and simpler proof based on negative-norm estimates, which essentially extends previous results. In the second part, we establish a posteriori error estimates for the newly developed energy-based QC method in an energy norm and for the total energy. The analysis is based on a posteriori residual and stability estimates. Adaptive mesh refinement algorithms based on these error estimators are formulated. In both parts, numerical experiments are presented to illustrate the results of our analysis and indicate the optimal convergence rates. The thesis is accompanied by a thorough introduction to the development of the QC methods and their numerical analysis, as well as an outlook on future work in the conclusion.
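The residual-based a posteriori estimates mentioned above rest on a basic identity: for a Galerkin-constrained linear model, the energy-norm error of the constrained solution equals the dual norm of its residual. A toy sketch on an assumed nearest-neighbour harmonic chain (the thesis treats far more general atomistic interactions):

```python
import numpy as np

# Harmonic chain of N+1 atoms on [0,1] with fixed ends; "QC" here means
# constraining displacements to linear interpolation between a few
# representative atoms, then solving the reduced Galerkin system.
N = 64
K = N * (2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1))
x = np.arange(1, N) / N
f = np.sin(np.pi * x)                      # smooth load on interior atoms
u = np.linalg.solve(K, f)                  # fully atomistic solution

reps = np.arange(0, N + 1, 8)              # representative atoms every 8 sites
P = np.column_stack([np.interp(x, reps / N, np.eye(len(reps))[j])
                     for j in range(len(reps))])[:, 1:-1]  # hat functions
u_qc = P @ np.linalg.solve(P.T @ K @ P, P.T @ f)           # Galerkin QC solve

r = f - K @ u_qc                           # a posteriori residual
e = u - u_qc
err = np.sqrt(e @ K @ e)                   # true energy-norm error
est = np.sqrt(r @ np.linalg.solve(K, r))   # dual norm of the residual
print(err, est)                            # agree to round-off: K e = r exactly
```

Practical a posteriori estimators replace the uncomputable dual norm by computable local residual terms times a stability constant, which is precisely the structure of the residual and stability estimates described in the abstract.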
154

A posteriori error indicators in the approximation of functionals of solutions of elliptic problems in the context of the hp-adaptive discontinuous Galerkin method

Gonçalves, João Luis, 1982- 19 August 2018 (has links)
Advisors: Sônia Maria Gomes, Philippe Remy Bernard Devloo, Igor Mozolevski / Doctoral thesis - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica / Abstract: In this work we study goal-oriented a posteriori error indicators for approximations by the discontinuous Galerkin method of the biharmonic and Poisson equations. The methodology used for the indicators is based on the dual problem associated with the functional, which is known to generate the most effective indicators. The two main error indicators based on the dual problem, previously obtained for second-order problems, are extended here to fourth-order problems. We also propose a third indicator for second- and fourth-order problems. The characteristics of the different indicators are studied with respect to localizing the elements with the greatest error contributions and characterizing the regularity of the solutions, as well as the consequences for the indicators' efficiency. We propose an hp-adaptive strategy specific to goal-oriented error indicators. The numerical experiments show that the hp-adaptive strategy works properly and that hp-adapted approximation spaces are efficient at reducing the error in functionals with fewer degrees of freedom. Moreover, in the examples studied, the quality of the results varies among the indicators, depending on the type of singularity and the equation treated, which shows the importance of having a wider range of indicators at hand / Doctorate / Applied Mathematics / Doctor of Applied Mathematics
155

Diesel engine heat release analysis by using newly defined dimensionless parameters

Abbaszadehmosayebi, Gholamreza January 2014 (has links)
Diesel engine combustion has been studied over recent decades with the aim of improving engine performance. To improve the analysis of diesel engine combustion, dimensionless parameters were used in this study; the dimensionless parameters introduced here facilitate understanding of the diesel combustion process. A new method has been proposed to determine the values of the form factor (m) and efficiency factor (a) of the Wiebe equation. This is achieved by developing a modified form of the Wiebe equation with only one constant. The modified Wiebe equation allows the constants to be determined accurately, which enhances the accuracy of the evaluated burn fraction. The error induced on the burn fraction f by the values of the constants a and m obtained through different methods is discussed and compared; the form factor affects the burn fraction significantly more than the efficiency factor does. A new non-dimensional parameter, the combustion burn factor (Ci), has been identified in the modified Wiebe equation. The burn fraction f was found to be a function of Ci only, and the benefits of expressing the heat release rate with respect to Ci are presented. The errors associated with determining the apparent heat release rate (AHRR) and the cumulative heat release (Cum.Hrr) from the measured cylinder pressure data and the assumed specific heat ratio (γ) were determined and compared. γ affected the calculated AHRR more than the cylinder pressure did. Overestimation of γ resulted in an underestimation of the peak value of the AHRR, and vice versa; this occurred without any shift in combustion phasing. A new methodology has been proposed to determine the instantaneous and mean values of γ for a given combustion. A two-litre, four-cylinder, 16-valve Ford Puma Zetec diesel engine was employed to carry out this investigation.
This new methodology has been applied to determine γ for a wide range of injection pressures (800 bar to 1200 bar), injection timings (9 deg BTDC to -2 deg BTDC), and engine loads of 2.7 and 5 BMEP. Standard ultra-low-sulphur diesel fuel and two biodiesels (Rapeseed Methyl Ester and Jatropha Methyl Ester) were studied in this investigation. Ignition delay is one of the most important parameters characterising the combustion and performance of diesel engines, and its relation to combustion performance, in terms of efficiency and emissions, has been established by previous researchers. Ignition delay period measurements in diesel engine combustion, along with the most widely used correlation for calculating ignition delay, are discussed in this work. The effect of the correlation's constants on its accuracy is discussed, and the error induced on calculated ignition delay periods by those constants is calculated and compared. New techniques are proposed to calculate the constant values directly from the experimental data; the ignition delays calculated using these techniques matched the experimental data well, so the techniques can improve the accuracy of the ignition delay correlation. A new correlation without any constants is also introduced, which predicts ignition delay directly from engine parameters only and provides better results than the Arrhenius-type correlation presented by Wolfer. This new correlation can be used for feedback control of the engine combustion process.
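The classic two-constant Wiebe burn-fraction model that this work modifies can be sketched as follows. The thesis's single-constant form and its combustion burn factor Ci are not reproduced here; the fit below uses the standard log-linearisation of x_b = 1 − exp(−a·τ^(m+1)) as an illustrative stand-in:

```python
import numpy as np

def wiebe(theta, theta0, dtheta, a=6.908, m=2.0):
    """Classic Wiebe burn fraction; a = 6.908 corresponds to 99.9% burn
    at theta0 + dtheta. (Standard form, not the thesis's modified one.)"""
    tau = np.clip((theta - theta0) / dtheta, 0.0, None)
    return 1.0 - np.exp(-a * tau**(m + 1))

def fit_a_m(theta, x, theta0, dtheta):
    """Recover a and m by linearising ln(-ln(1-x)) = ln a + (m+1) ln tau."""
    tau = (theta - theta0) / dtheta
    mask = (x > 0.01) & (x < 0.99) & (tau > 0)   # avoid the log singularities
    y = np.log(-np.log(1.0 - x[mask]))
    slope, intercept = np.polyfit(np.log(tau[mask]), y, 1)
    return np.exp(intercept), slope - 1.0

theta = np.linspace(-10, 60, 200)                # crank angle, deg
x = wiebe(theta, theta0=-5.0, dtheta=50.0, m=1.8)
a_fit, m_fit = fit_a_m(theta, x, -5.0, 50.0)
print(a_fit, m_fit)
```

With noise-free data the linearised fit recovers a and m exactly, which is why the abstract's point matters: with real pressure-derived burn fractions, errors in the fitted constants, especially m, propagate directly into the burn-fraction curve.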
156

Applying the cognitive reliability and error analysis method to reduce catheter associated urinary tract infections

Griebel, MaryLynn January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Malgorzata Rys / Catheter-associated urinary tract infections (CAUTIs) are a source of concern in the healthcare industry because they occur more frequently than other healthcare-associated infections and CAUTI rates have not improved in recent years. The use of urinary catheters is common among patients: between 15 and 25 percent of all hospital patients will use a urinary catheter at some point during their hospitalization (CDC, 2016). The prevalence of urinary catheters in hospitalized patients and high CAUTI occurrence rates led to the application of human factors engineering to develop a tool to help hospitals reduce CAUTI rates. Human reliability analysis techniques are methods used by human factors engineers to quantify the probability of human error in a system. A human error during catheter insertion can introduce bacteria into the patient's system and cause a CAUTI; therefore, human reliability analysis techniques can be applied to catheter insertions to determine the likelihood of a human error. A comparison of three human reliability analysis techniques led to the selection of the Cognitive Reliability and Error Analysis Method (CREAM). To predict a patient's probability of developing a CAUTI, the human error probability found from CREAM is combined with several health factors that affect the patient's risk of developing CAUTI. These health factors include gender, catheterization duration, diabetes, and antibiotic use, and they were combined with the probability of human error using fuzzy logic. Membership functions were developed for each health factor and for the probability of human error, and the centroid defuzzification method is used to find a crisp value for the probability of a patient developing CAUTI.
Hospitals that implement this tool can choose risk levels for CAUTI that place the patient into one of three zones (green, yellow, or red), depending on the probability of developing a CAUTI. The tool also provides specific best-practice interventions for each zone.
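The fuzzy-logic combination step described in this abstract can be sketched minimally: triangular membership functions, Mamdani-style rules, and centroid defuzzification. The membership shapes, the two rules, and the use of a single duration factor here are all illustrative assumptions, not the thesis's calibrated functions:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising on [a,b], falling on [b,c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def cauti_risk(p_error, duration_days):
    """Combine a human-error probability and catheter duration (days)
    into a crisp risk score in [0, 1] via centroid defuzzification."""
    low_err = tri(p_error, -0.1, 0.0, 0.5)
    high_err = tri(p_error, 0.0, 0.5, 1.1)
    short = tri(duration_days, -1.0, 0.0, 7.0)
    long_ = tri(duration_days, 0.0, 7.0, 14.0)

    risk = np.linspace(0.0, 1.0, 501)          # output universe
    low_out = tri(risk, -0.1, 0.0, 0.6)
    high_out = tri(risk, 0.4, 1.0, 1.1)

    # rule 1: low error AND short duration -> low risk (min for AND)
    # rule 2: high error OR long duration  -> high risk (max for OR)
    agg = np.maximum(np.minimum(min(low_err, short), low_out),
                     np.minimum(max(high_err, long_), high_out))
    return np.sum(risk * agg) / np.sum(agg)    # centroid defuzzification

print(cauti_risk(0.1, 2), cauti_risk(0.8, 10))
```

The crisp score could then be thresholded into the green/yellow/red zones the tool uses; the threshold values would be the hospital's chosen risk levels.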
157

Automatic source camera identification by lens aberration and JPEG compression statistics

Choi, Kai-san., 蔡啟新. January 2006 (has links)
published_or_final_version / abstract / Electrical and Electronic Engineering / Master / Master of Philosophy
158

Do Students Who Continue Their English Studies Outperform Students Who Do Not? : A Study of Subject-verb Concord in Written Compositions in English by Swedish University Students

Preber, Louise January 2006 (has links)
This essay deals with subject-verb concord in written compositions by Swedish students at Uppsala University. The essay investigates the possibility that students who continue studying English beyond the A level at the university make fewer errors than students who do not continue.

In order to minimize the influence of the students' gender and first language, only essays written by female students were included in the study; in addition, all students included had Swedish as their first language, and so did their parents. 25 essays by students who continued their studies and 25 essays by students who may not have done so were chosen. All 50 essays were examined for both correct and incorrect instances concerning concord between subjects and verbs in the present tense. The primary verbs to be, to do and to have were analysed, as well as regular and irregular verbs.

The results show that the 25 students who continued beyond the A level made fewer errors than the 25 students who may not have continued. The results also indicate that subject-verb concord is not a serious problem for Swedish learners.
159

The Last Stages of Second Language Acquisition: Linguistic Evidence from Academic Writing by Advanced Non-Native English Speakers

Ene, Simona Estela January 2006 (has links)
Second Language Acquisition (SLA) researchers have yet to map the developmental stages language learners go through as they approach the target language. In studies of ESL writing, the term "advanced learner" has been applied indiscriminately to learners ranging from freshman ESL composition to graduate students (Bardovi-Harlig and Bofman, 1989; Chaudron and Parker, 1990; Connor and Mayberry, 1996; Hinkel, 1997, 2003). There is a need to examine the advanced stages of SLA in order to refine SLA theories and pedagogical approaches. A corpus of texts written by eleven graduate students in applied linguistics who are non-native English speakers from several linguistic backgrounds was analyzed to determine the texts' lexical, morphological, and syntactic fluency, accuracy, and complexity. A sub-corpus of papers by seven native-English-speaking peers was used for comparison. The texts were sit-down and take-home examinations written in a doctoral program at the end of the first semester and three years later. Surveys and interviews were conducted to supplement the corpus with ethnographic data. This dissertation defines data-based criteria that distinguish four quantitatively and qualitatively distinct developmental stages: the advanced, highly advanced, near-native, and native-like stages. Advanced learners make more frequent and varied errors (with articles, prepositions, plural and possessive markers, agreement and anaphors), which can be explained by linguistic transfer. Native-like writers make few errors, which can be explained by overgeneralization of conventions from informal English and working memory limitations (just like native speakers' errors).
Throughout the four stages, errors (i.e., incorrect forms that reflect lack of linguistic knowledge (Corder, 1967)) became less frequent, and more of the incorrect usages appeared to be mistakes (occasional slips). This dissertation supports Herschensohn's (1999) proposal that SLA is a process of transfer followed by relearning of morpho-syntactic specifications. Syntax was used with the greatest accuracy (Bardovi-Harlig and Bofman, 1989), while lexicon (especially function words) was the weakest. In addition, length of stay in an English-speaking country and amount of interaction with native speakers were proportional to accuracy. An important pedagogical recommendation is that (corpus-assisted) language teaching should continue until the target language is reached.
160

Analysis of the quasicontinuum method

Ortner, Christoph January 2006 (has links)
The aim of this work is to provide a mathematical and numerical analysis of the static quasicontinuum (QC) method. The QC method is, in essence, a finite element method for atomistic material models. By restricting the set of admissible deformations to linear splines with respect to a finite element mesh, the computational complexity of atomistic material models is reduced considerably. We begin with a general review of atomistic material models and the QC method and, most importantly, a thorough discussion of the correct concept of static equilibrium. For example, it is shown that, in contrast to global energy minimization, a ‘dynamic’ selection procedure based on gradient flows models the physically correct behaviour. Next, an atomistic model with long-range Lennard–Jones type interactions is analyzed in one dimension. A rigorous demonstration is given for the existence and stability of elastic as well as fractured steady states, and it is shown that they can be approximated by a QC method if the mesh is sufficiently well adapted to the exact solution; this can be measured by the interpolation error. While the a priori error analysis is an important theoretical step for understanding the approximation properties of the QC method, it is in general unclear how to compute the QC deformation whose existence is guaranteed by the a priori analysis. An a posteriori analysis is therefore performed as well. It is shown that, if a computed QC deformation is stable and has a sufficiently small residual, then there exists a nearby exact solution and the error is estimated. This a posteriori existence idea is also analyzed in an abstract setting. Finally, extensions of the ideas to higher dimensions are investigated in detail.
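The a posteriori existence argument sketched in this abstract (a computed state with small residual and a stable linearisation implies a nearby exact solution) can be illustrated on a scalar equation. Here F is merely a stand-in for the QC equilibrium equations, and the first-order bound is a heuristic illustration, not the thesis's rigorous estimate:

```python
import math

F = lambda x: x - math.cos(x)          # toy "equilibrium equations"
DF = lambda x: 1 + math.sin(x)         # linearisation; DF > 0 means stability

x_approx = 0.74                        # a computed approximate equilibrium
residual = abs(F(x_approx))            # a posteriori residual
stability = 1.0 / DF(x_approx)         # norm of the inverse linearisation
error_bound = stability * residual     # first-order a posteriori error bound

x_exact = 0.5                          # reference root via fixed-point iteration
for _ in range(100):
    x_exact = math.cos(x_exact)
true_error = abs(x_approx - x_exact)
print(residual, error_bound, true_error)
```

In the rigorous version of the argument (inverse-function-theorem or Newton-Kantorovich style), the same two computable quantities, residual size and stability, certify that an exact solution exists within a slightly inflated version of this bound.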
