  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Error analysis for randomized uniaxial stretch test on high strain materials and tissues

Jhun, Choon-Sik 16 August 2006 (has links)
Many types of hyperelastic models for high-strain materials and biotissues have been suggested since the 1940s without being validated. There is no agreement on these models, and no model can be judged better than another, because of ambiguity in the data. This ambiguity persists because the necessary error analysis has not yet been done (Criscione, 2003). Error analysis is motivated by the fact that no physical quantity can be measured without some degree of uncertainty. Inelastic behavior is inevitable in high-strain materials and biotissues, and the validity of a model should be judged with an understanding of the uncertainty it introduces. We applied fundamental statistical theory to data obtained from randomized uniaxial stretch-controlled tests; a goodness-of-fit measure (R²) and a test of significance (t-test) were also employed. We initially presumed that the factors giving rise to inelastic deviation are time spent testing, stretch rate, and stretch history, and we found that these factors characterize the inelastic deviation in a systematic way. A large inelastic deviation was found at a stretch ratio of 1.1 for both specimens. The significance of this finding is that the inelastic uncertainties in the low-stretch range of rubber-like materials and biotissues are primarily entropic. This is why the strain energy can hardly be determined by experimentation at low strains, and why there has been a deficiency in understanding the exclusive nature of the strain energy function at low strains of rubber-like materials and biotissues (Criscione, 2003). We also answered questions about the significance, effectiveness, and differences of the presumed factors above. Lastly, we checked predictive capability by comparing unused deviation data to the predicted deviation.
To check whether any predictive variables had been missed, we defined the prediction deviation as the difference between the observed deviation and the point-forecast deviation. We found that the prediction deviation varies randomly, which indicates that no factors needed to predict the degree of inelastic deviation were missing from our fitting.
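The fit and significance measures named in the abstract can be sketched in a few lines. The deviation data below are invented for illustration, and the formulas are the standard R² and one-sample t-statistic, not the thesis's actual procedure.

```python
# Sketch of the abstract's fit/significance checks (invented data).
import numpy as np

def r_squared(observed, predicted):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def t_statistic(sample, popmean=0.0):
    """One-sample t statistic for H0: mean(sample) == popmean."""
    sample = np.asarray(sample, dtype=float)
    se = sample.std(ddof=1) / np.sqrt(sample.size)
    return (sample.mean() - popmean) / se

# Invented inelastic-deviation data and a model's point forecasts.
observed = np.array([0.12, 0.15, 0.11, 0.18, 0.16, 0.14])
predicted = np.array([0.13, 0.14, 0.12, 0.17, 0.15, 0.15])

r2 = r_squared(observed, predicted)      # about 0.82 for this data
t = t_statistic(observed - predicted)    # near 0: residuals look random
```

A t-statistic near zero on the prediction deviation is exactly the abstract's argument that no predictive factor was missed: what remains is indistinguishable from random scatter.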
182

A one-group parametric sensitivity analysis for the graphite isotope ratio method and other related techniques using ORIGEN 2.2

Chesson, Kristin Elaine 02 June 2009 (has links)
Several methods have been developed previously for estimating cumulative energy production and plutonium production from graphite-moderated reactors. The Graphite Isotope Ratio Method (GIRM) is one well-known technique. This method is based on the measurement of trace isotopes in the reactor’s graphite matrix to determine the change in their isotopic ratios due to burnup. These measurements are then coupled with reactor calculations to determine the total plutonium and energy production of the reactor. To facilitate sensitivity analysis of these methods, a one-group cross section and fission product yield library for the fuel and graphite activation products has been developed for MAGNOX-style reactors. This library is intended for use in the ORIGEN computer code, which calculates the buildup, decay, and processing of radioactive materials. The library was developed using a fuel cell model in Monteburns. This model consisted of a single fuel rod including natural uranium metal fuel, magnesium cladding, carbon dioxide coolant, and Grade A United Kingdom (UK) graphite. Using this library a complete sensitivity analysis can be performed for GIRM and other techniques. The sensitivity analysis conducted in this study assessed various input parameters including 235U and 238U cross section values, aluminum alloy concentration in the fuel, and initial concentrations of trace elements in the graphite moderator. The results of the analysis yield insight into the GIRM method and the isotopic ratios the method uses as well as the level of uncertainty that may be found in the system results.
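The core idea behind an isotope-ratio burnup indicator can be sketched with a one-group depletion model: a trace isotope with a large absorption cross section burns out with neutron fluence, so its ratio to a nearly inert companion tracks cumulative exposure. The cross sections, initial ratio, and fluence below are illustrative assumptions, not evaluated nuclear data or the thesis's library values.

```python
# Hedged one-group sketch of an isotope-ratio exposure indicator.
import math

BARN = 1.0e-24  # cm^2

def ratio_after_fluence(ratio_0, sigma_burn_b, sigma_ref_b, fluence):
    """Isotope ratio after a neutron fluence (n/cm^2), one-group model.

    N_i = N_i(0) * exp(-sigma_i * fluence) for each isotope, so the
    ratio decays with the difference of the one-group cross sections.
    """
    return ratio_0 * math.exp(-(sigma_burn_b - sigma_ref_b) * BARN * fluence)

def fluence_from_ratio(ratio_0, ratio_now, sigma_burn_b, sigma_ref_b):
    """Invert the relation: infer cumulative fluence from a measured ratio."""
    return math.log(ratio_0 / ratio_now) / ((sigma_burn_b - sigma_ref_b) * BARN)

# Illustrative numbers: a strong absorber (3000 b) vs. a weak one (5 b).
phi_t = 1.0e21                                   # assumed fluence, n/cm^2
r = ratio_after_fluence(0.25, 3000.0, 5.0, phi_t)
recovered = fluence_from_ratio(0.25, r, 3000.0, 5.0)
```

The sensitivity analysis in the abstract asks, in effect, how strongly `recovered` shifts when the assumed cross sections or initial trace concentrations are perturbed.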
183

The Writing Process : Are there any differences between boys' and girls' writing in English?

Dahl, Rebecca January 2012 (has links)
This essay studies the written performance of 43 Swedish junior high school students. Relative clauses, prepositional usage and subject-verb agreement are studied and analysed in order to see what and how many errors the students make and then finally to see if there is any difference in the performance of boys and girls. Previous research in the area has shown an advantage in favour of girls and this study confirmed this. Even though the differences were not marked, the girls performed better than the boys in the majority of the cases studied. The data further indicated that there is great variation within the gender groups as well as between them.
184

"My ideal boyfriend have to love me no matter what." : A comparative study of errors in English subject-verb agreement in Swedish students' writing in Spain and in Sweden

Staaf, Kerstin January 2011 (has links)
The main purpose of this study is to increase the understanding of a third language's possible effect on learners' second language acquisition. There is research on how a first language affects the acquisition of a second language, and it has shown that a first language does affect the learning of an additional language in various ways. Yet even though languages are known to influence each other during learning, there is very little previous research on whether and how a third language can affect, or be affected by, a learner's second language. The first research question is therefore what kinds of errors the students make; the most common errors occur when subject and verb are noncontiguous. The second research question is whether Swedish students who know Spanish make different errors in English subject-verb agreement than Swedish students who do not. This study finds slight differences: the students who know Spanish make fewer errors with noncontiguous subject-verb agreement, especially in relative clauses and with coordinated verb phrases. That these students make fewer such errors may indicate that they have a greater understanding of this grammatical feature. / Local ID: 2011vt4810
185

Design of a Table-Driven Function Evaluation Generator Using Bit-Level Truncation Methods

Tseng, Yu-ling 30 August 2011 (has links)
Function evaluation is one of the key arithmetic operations in many applications, including 3D graphics and stereo processing. Among the various designs of hardware function evaluators, piecewise polynomial approximation methods are the most popular: they interpolate the function curve over each sub-interval using a polynomial whose coefficients are stored in an entry of a ROM. Conventional piecewise methods usually determine the bit-widths of each ROM entry and of the multipliers and adders by analyzing the various error sources separately, including the polynomial approximation error, coefficient quantization errors, truncation errors of the arithmetic operations, and the final rounding error. In this thesis, we present a new piecewise function evaluation design that considers all the error sources together. By combining all the error sources during approximation, quantization, truncation, and rounding, we can efficiently reduce the area cost of the ROM and the corresponding arithmetic units. The proposed method is applied to piecewise function evaluators with both uniform and non-uniform segmentation.
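A software sketch of the table-driven scheme described above, assuming a degree-1 approximation of sin on [0, 1) with 64 uniform segments (illustrative choices, not the thesis's design): the top bits of the input index a coefficient table, and evaluation is one multiply-add.

```python
# Table-based piecewise degree-1 function evaluation (illustrative sketch).
import math

K = 6                       # 2^K uniform segments on [0, 1)
SEGMENTS = 1 << K

def build_rom(f, df):
    """Tabulate first-order Taylor coefficients (c0, c1) per segment."""
    rom = []
    for i in range(SEGMENTS):
        x0 = (i + 0.5) / SEGMENTS          # expand about the segment midpoint
        rom.append((f(x0), df(x0)))
    return rom

def evaluate(rom, x):
    """Approximate f(x) for x in [0, 1) via one lookup and one multiply-add."""
    i = int(x * SEGMENTS)                  # top K bits of x select the entry
    x0 = (i + 0.5) / SEGMENTS
    c0, c1 = rom[i]
    return c0 + c1 * (x - x0)

rom = build_rom(math.sin, math.cos)
err = max(abs(evaluate(rom, n / 4096) - math.sin(n / 4096))
          for n in range(4096))           # approximation error only
```

In floating point this isolates the polynomial approximation error; the hardware design must additionally budget for coefficient quantization, operand truncation, and final rounding, which is what the bit-width analysis in the abstract is about.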
186

Design of a CORDIC Function Generator Using Table-Driven Function Evaluation with Bit-Level Truncation

Hsu, Wei-Cheng 10 September 2012 (has links)
Function evaluation is one of the key arithmetic operations in many applications, including 3D graphics and stereo processing. Among the various hardware-based function evaluation methods, piecewise polynomial approximation is the most popular approach: it interpolates the function curve over each sub-interval using a polynomial whose coefficients are stored in an entry of a lookup-table ROM. Conventional piecewise methods usually determine the bit-widths of each ROM entry and of the multipliers and adders by analyzing the various error sources separately, including the polynomial approximation error, coefficient quantization errors, truncation errors of the arithmetic operations, and the final rounding error. In this thesis, we present a new piecewise function evaluation design that considers all the error sources together. By combining all the error sources during approximation, quantization, truncation, and rounding, we can efficiently reduce the area cost of the ROM and the corresponding arithmetic units in the design of CORDIC processors.
187

Improved Bit-Level Truncation with Joint Error Analysis for Table-Based Function Evaluation

Lin, Shin-hung 12 September 2012 (has links)
Function evaluation is often needed in science and engineering applications. To reduce computation time, various hardware implementations have been proposed to accelerate function evaluation. Table-based piecewise polynomial approximation is one of the major methods in hardware function evaluation designs; it requires only simple hardware components to achieve the desired precision. The piecewise polynomial method approximates the original function values in each partitioned subinterval using low-degree polynomials with coefficients stored in look-up tables. Errors are introduced in the hardware implementation, and conventional error analysis for piecewise polynomial methods considers four types of error sources: the polynomial approximation error, coefficient quantization error, arithmetic truncation error, and final rounding error. The typical design approach is to pre-allocate a maximum allowable error budget to each individual hardware component so that the total error induced by these individual errors satisfies the required bit accuracy. In this thesis, we present a new design approach that jointly considers the error sources in designing all the hardware components, including the look-up tables and arithmetic units, so that the total area cost is reduced compared to previously published designs.
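The joint-error idea can be illustrated in software by quantizing the stored coefficients and the final result to fixed point and then measuring the combined error directly, instead of summing separate worst-case budgets. The function, segment count, and bit-widths below are assumptions for illustration, not the thesis's hardware parameters.

```python
# Measure the combined (approximation + quantization + rounding) error of a
# degree-1 table evaluator for sin on [0, 1), for a given fixed-point width.
import math

def quantize(v, frac_bits):
    """Round v to a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(v * scale) / scale

def max_error(frac_bits, segments=64, samples=4096):
    """Max |error| when stored coefficients and the result use frac_bits."""
    rom = []
    for i in range(segments):
        x0 = (i + 0.5) / segments
        rom.append((quantize(math.sin(x0), frac_bits),
                    quantize(math.cos(x0), frac_bits)))
    worst = 0.0
    for n in range(samples):
        x = n / samples
        i = int(x * segments)
        x0 = (i + 0.5) / segments
        c0, c1 = rom[i]
        y = quantize(c0 + c1 * (x - x0), frac_bits)   # final rounding step
        worst = max(worst, abs(y - math.sin(x)))
    return worst

wide = max_error(16)    # generous bit-width: error dominated by approximation
narrow = max_error(8)   # aggressive truncation: quantization dominates
```

Sweeping `frac_bits` against a target accuracy with a measurement like this is the software analogue of the joint sizing in the abstract: bits (and hence area) can be trimmed until the measured combined error, not a sum of per-component budgets, hits the limit.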
189

A Study of the Ability Development and Error Analysis in Learning Two-Variable Linear Equation for Middle School Students

Lin, Liwen 29 July 2001 (has links)
This study used multiple methods: classroom observation, interviews with teachers and students, and paper-and-pencil tests, to investigate the ability development of seventh-grade students in learning systems of linear equations in two variables and to analyze the corresponding errors. Hopefully, the results of this study can serve as a reference for middle school math teachers in planning suitable teaching strategies for this topic. At the beginning, the researcher entered two seventh-grade classrooms of a middle school in Kaohsiung to make preliminary observations and to let the students (and teachers) get used to the researcher's presence in the classroom during the period when one-variable linear equations were taught. Subsequently, formal observations were carried out over the 40 class periods in which systems of linear equations in two variables were taught. All observations of how the teachers taught and how the students learned were recorded and content-analyzed. Two paper-and-pencil tests were administered during the preliminary observations, and three more were given during the formal observations; all test results were collected and analyzed in numerous ways. Based on a literature survey and interviews with six middle school math teachers, the relevant abilities for mastering the topic were classified into three categories: Character Symbols (10 sub-abilities), Operational Principles (five sub-abilities), and Other Abilities (16 sub-abilities). Based on the content analyses of the classroom observations and the error analyses of the five paper-and-pencil tests for each sub-ability, it was observed that while developing the ability to solve systems of linear equations in two variables, most students showed signs of obstacles and puzzlement.
Even by the end of the course, most students still had not mastered the subject well. Based on these results, it is proposed that the teaching period be lengthened and that more efficient learning strategies be introduced to the students when systems of linear equations in two variables are taught.
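For reference, the procedure the students are learning, solving a system of two linear equations in two variables, reduces to a short elimination routine; the example coefficients are illustrative.

```python
# Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination.
def solve_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        # Dependent or inconsistent equations: no unique solution exists,
        # the case students must learn to recognize rather than compute.
        raise ValueError("no unique solution")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Example: 2x + 3y = 12 and x - y = 1 gives x = 3, y = 2.
x, y = solve_2x2(2, 3, 12, 1, -1, 1)
```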
