161

STATIC ERROR MODELING IN TURNING OPERATION AND ITS EFFECT ON FORM ERRORS

ANAND, RAJ B. 18 April 2008 (has links)
No description available.
162

Studies on error control of 3-D zerotree wavelet video streaming

Zhao, Yi 24 August 2005 (has links)
No description available.
163

Error estimates for the normal approximation to normal sums of random variables of a Markov chain

Kunes, Laurence Edward January 1969 (has links)
No description available.
164

The Accuracy of Dual Photon Absorptiometry Measurements of Soft Tissue Composition

Gordon, Christopher L. 04 1900 (has links)
During routine measurements of body composition using a 153Gd-based dual photon densitometer, it was observed that negative values were being obtained for the body fat fraction in some adults, in children and in small animals. In these three groups, there appears to be a body-size-dependent error whereby the measured fat fraction becomes increasingly negative as subject size becomes smaller. The fat fraction is derived by relating the measured mass attenuation coefficient of soft tissue to an internal calibration based on the use of water and lard as substitutes for muscle and fat. To investigate whether this procedure for instrument calibration is the cause of the fat fraction errors, soft tissue phantoms which contained known amounts of fat, water and protein were prepared. Over the range of fat fractions used, accurate results were obtained. By using prepared soft tissue and water phantoms it was established that the measured fat fraction incorrectly decreased as object thickness decreased and increased as object thickness increased. However, accurate measurements were obtained if the equivalent tissue thickness was greater than 9 cm and less than 16 cm of water. Equally reproducible measurements were obtained at all thicknesses investigated. When dual photon measurements of body composition in 13 adolescent females were compared with measurements obtained from skinfold thicknesses or bioimpedance, there was good agreement between techniques, but dual photon results demonstrated a broader range of variation with body size. Comparison of dual photon absorptiometry derived body composition measurements of 52 male athletes with results obtained from underwater weighing allowed for the derivation of a simple correction factor for the accuracy errors due to body size. / Thesis / Master of Science (MSc)
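The calibration just described relates a measured attenuation quantity to fat and lean reference materials. As a rough illustration of the underlying idea (a sketch assuming a simple linear two-component mixing model; the thesis's exact calibration equations are not reproduced here), the fat fraction can be read off from where the soft-tissue measurement falls between the lard and water standards:

```latex
% R denotes the ratio of the low- to high-energy mass attenuation coefficients.
% R_{fat} and R_{lean} come from the lard and water calibration standards,
% R_{st} from the subject's soft tissue; f is the fat fraction.
R_{st} = f\,R_{fat} + (1 - f)\,R_{lean}
\quad\Longrightarrow\quad
f = \frac{R_{lean} - R_{st}}{R_{lean} - R_{fat}}
```

Under such a model, any thickness-dependent bias in the measured attenuation ratio propagates directly into the fat fraction, which would be consistent with the negative values reported for small subjects.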
165

Double checking medicines: defence against error or contributory factor?

Armitage, Gerry R. 08 1900 (has links)
RATIONALE AND AIMS: The double checking of medicines in health care is a contestable procedure. It occupies an obvious position in health care practice and is understood to be an effective defence against medication error, but the process is variable and its outcomes have not been exposed to testing. This paper presents an appraisal of the process using data from part of a larger study on the contributory factors in medication errors and their reporting. METHODS: Previous research studies are reviewed; data are analysed from a review of 991 drug error reports and a subsequent series of 40 in-depth interviews with health professionals in an acute hospital in northern England. RESULTS: The incident reports showed that errors occurred despite double checking but that the action taken did not appear to investigate the checking process. Most interview participants (34) talked extensively about double checking but believed the process to be inconsistent. Four key categories were apparent: deference to authority, reduction of responsibility, automatic processing and lack of time. Solutions to the problems were also offered, and these are discussed together with several recommendations. CONCLUSIONS: Double checking medicines should be a selective and systematic procedure informed by key principles and encompassing certain behaviours. Psychological research may be instructive in reducing checking errors, but the aviation industry may also have a part to play in increasing error wisdom and reducing risk.
166

Formation Fidelity of Simulated Unmanned Autonomous Vehicles through Periodic Communication

Twigg, Jeffrey Newman 07 December 2009 (has links)
Controlling a formation of unmanned autonomous vehicles is a daunting prospect even when the formation operates under ideal conditions. When communication between vehicles is limited, maintaining a formation becomes difficult, and in some cases the formation may become unstable. While a control law may stabilize a formation of vehicles with good communication, it may not be able to do so with poor communication. The resulting lack of formation stability affects the level of fidelity the formation has to the original control law. Formation fidelity is the degree to which the vehicles in a formation follow the trajectories prescribed by a control law. Many formation control laws assume certain conditions, and perfect formation fidelity is not guaranteed when the vehicles in a formation no longer operate under those conditions. We seek to mitigate the detrimental effects of poor communication and other real-world phenomena on formation fidelity. Through simulation we test the effectiveness of a new way to implement an existing formation control law. Real-world conditions such as rigid-body motion, swarm dynamics, poor communication, and other phenomena are assessed and discussed. Testing in simulation shows that it is possible to control a formation of boats by directing each boat with a unique set of waypoints. While these waypoints do not lead to perfect formation behavior, implementing the control law in this way allows the formation to be more robust to reduced communication. / Master of Science
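As a rough illustration of the waypoint-based implementation mentioned above, the sketch below simulates point-mass vehicles that each follow their own waypoint list and are re-synchronised only at a fixed communication period. All dynamics, names, and parameters are hypothetical; the thesis's control law and vehicle models are not reproduced here.

```python
import math

def step_toward(pos, target, speed, dt):
    """Move a point-mass vehicle toward a target at constant speed (hypothetical dynamics)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-9:
        return pos
    scale = min(speed * dt, dist) / dist
    return (pos[0] + dx * scale, pos[1] + dy * scale)

def simulate(waypoint_lists, comm_period, speed=1.0, dt=0.1, steps=500, tol=0.05):
    """Each vehicle follows only its own waypoint list; the formation is
    re-synchronised (here, by holding everyone at the slowest vehicle's
    waypoint index) only every `comm_period` steps, mimicking periodic
    rather than continuous communication."""
    positions = [wps[0] for wps in waypoint_lists]
    indices = [0] * len(waypoint_lists)
    for t in range(steps):
        for i, wps in enumerate(waypoint_lists):
            target = wps[min(indices[i], len(wps) - 1)]
            positions[i] = step_toward(positions[i], target, speed, dt)
            if math.hypot(target[0] - positions[i][0], target[1] - positions[i][1]) < tol:
                indices[i] += 1
        if t % comm_period == 0:
            indices = [min(indices)] * len(indices)
    return positions

# e.g. two boats tracing parallel tracks, synchronised every 20 steps:
# simulate([[(0, 0), (0, 5), (0, 10)], [(1, 0), (1, 5), (1, 10)]], comm_period=20)
```

Lengthening the communication period in a sketch like this lets one observe how far each vehicle drifts from its prescribed trajectory, which is the notion of formation fidelity the abstract describes.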
167

625 MBIT/SEC BIT ERROR LOCATION ANALYSIS FOR INSTRUMENTATION RECORDING APPLICATIONS

Waschura, Thomas E. 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California / This paper describes techniques for error location analysis used in the design and testing of high-speed instrumentation data recording and communications applications. It focuses on the differences between common bit error rate testing and new error location analysis. Examples of techniques presented include separating bit and burst error components, studying probability of burst occurrences, looking at error free interval occurrence rates as well as auto-correlating error position. Each technique contributes to a better understanding of the underlying error phenomenon and enables higher-quality digital recording and communication. Specific applications in error correction coding emulation, magnetic media error mapping and systematic error interference are discussed.
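To make the distinction from plain bit-error-rate counting concrete, the hypothetical sketch below derives two of the quantities named above, error-free interval lengths and burst groupings, from a list of error bit positions. The function names, the burst gap threshold, and the toy data are illustrative, not taken from the paper.

```python
def error_free_intervals(error_positions, total_bits):
    """Lengths of the runs of correct bits between consecutive errors."""
    intervals = []
    prev = -1
    for pos in sorted(error_positions):
        intervals.append(pos - prev - 1)
        prev = pos
    intervals.append(total_bits - prev - 1)  # tail after the last error
    return intervals

def group_bursts(error_positions, max_gap=16):
    """Group errors into bursts: consecutive errors no more than max_gap bits apart."""
    bursts = []
    for pos in sorted(error_positions):
        if bursts and pos - bursts[-1][-1] <= max_gap:
            bursts[-1].append(pos)
        else:
            bursts.append([pos])
    return bursts

# Example: three closely spaced errors form one burst, two others are isolated.
errs = [10, 11, 12, 500, 2000]
print(error_free_intervals(errs, total_bits=4096))  # [10, 0, 0, 487, 1499, 2095]
print(len(group_bursts(errs, max_gap=16)))          # 3 bursts: {10,11,12}, {500}, {2000}
```

A plain bit error rate would report 5/4096 for this example and hide the fact that three of the errors belong to a single burst, which is exactly the kind of structure location analysis is meant to expose.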
168

What did you really earn last year?: explaining measurement error in survey income data

Angel, Stefan, Disslbacher, Franziska, Humer, Stefan, Schnetzer, Matthias January 2019 (has links) (PDF)
The paper analyses the sources of income measurement error in surveys with a unique data set. We use the Austrian 2008-2011 waves of the European Union "Statistics on income and living conditions" survey, which provide individual information on wages, pensions and unemployment benefits from both survey interviews and officially linked administrative records. Thus, we do not have to fall back on complex two-sample matching procedures like related studies. We empirically investigate four sources of measurement error, namely social desirability, sociodemographic characteristics of the respondent, the survey design and the presence of learning effects. We find strong evidence for a social desirability bias in income reporting, whereas the evidence for learning effects is mixed and depends on the type of income under consideration. An Owen value decomposition reveals that social desirability is a major explanation of misreporting in wages and pensions, whereas sociodemographic characteristics are most relevant for mismatches in unemployment benefits.
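One plausible way to formalize the setting described above (an illustrative sketch, not the paper's actual specification) is to define the reporting error as the gap between the survey report and the linked administrative record and relate it to the four candidate sources:

```latex
% m_i: measurement error for respondent i; y_i^{S}: survey report; y_i^{A}: administrative record.
m_i = y_i^{S} - y_i^{A}
% Illustrative regression linking the error to the four sources discussed in the abstract:
% social desirability proxies (SD_i), sociodemographics (X_i), survey design (D_i),
% and learning effects across waves (L_i).
m_i = \beta_0 + \beta_1\,SD_i + X_i'\gamma + D_i'\delta + \beta_2\,L_i + \varepsilon_i
```

In a specification of this kind, the Owen value decomposition mentioned in the abstract would attribute shares of the explained variation in m_i to these groups of regressors.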
169

An Unsupervised Approach to Detecting and Correcting Errors in Text

Islam, Md Aminul 01 June 2011 (has links)
In practice, most approaches for text error detection and correction are based on a conventional domain-dependent background dictionary that represents a fixed and static collection of correct words of a given language; as a result, satisfactory correction can only be achieved if the dictionary covers most tokens of the underlying correct text. Moreover, most approaches to text correction address only one, or at best a very few, types of errors. The purpose of this thesis is to propose an unsupervised approach to detecting and correcting text errors that can compete with supervised approaches, and to answer the following questions: Can an unsupervised approach efficiently detect and correct a text containing multiple errors of both a syntactic and a semantic nature? What is the magnitude of error coverage, in terms of the number of errors that can be corrected? We conclude that (1) it is indeed possible for an unsupervised approach to efficiently detect and correct a text containing multiple errors of both a syntactic and a semantic nature. Error types include real-word spelling errors, typographical errors, lexical choice errors, unwanted words, missing words, prepositional errors, article errors, punctuation errors, and many grammatical errors (e.g., errors in agreement and verb formation). (2) The magnitude of error coverage, in terms of the number of errors that can be corrected, is almost double the number of correct words of the text. Although this is not the upper limit, it is what is practically feasible. We use engineering approaches to answer the first question and theoretical approaches to answer and support the second. We show that finding the inherent properties of a correct text using a corpus in the form of an n-gram data set is more appropriate and practical than other approaches to detecting and correcting errors. Instead of using rule-based approaches and dictionaries, we argue that a corpus can effectively be used to infer the properties of these types of errors, and to detect and correct them. We test the robustness of the proposed approach separately for some individual error types, and then for all types of errors together. The approach is language-independent and can be applied to other languages as long as n-grams are available. The results of this thesis thus suggest that unsupervised approaches, which are often dismissed in favor of supervised ones in the context of many Natural Language Processing (NLP) related tasks, may present an interesting array of NLP-related problem-solving strengths.
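As a concrete, deliberately simplified illustration of corpus-driven detection and correction, the sketch below replaces a token when a spelling-similar candidate fits its surrounding trigram context better than the token itself. The function, the trigram table, and the thresholds are hypothetical; the thesis's actual algorithm is more general.

```python
from difflib import get_close_matches

def correct_token(tokens, i, trigram_counts, vocabulary):
    """Return a replacement for tokens[i] if a spelling-similar word fits the
    surrounding trigram context clearly better; otherwise keep the original."""
    def context_score(word):
        if 0 < i < len(tokens) - 1:
            return trigram_counts.get((tokens[i - 1], word, tokens[i + 1]), 0)
        return 0  # no full trigram context at the sentence edges

    original_score = context_score(tokens[i])
    candidates = get_close_matches(tokens[i], vocabulary, n=5, cutoff=0.8)
    best = max(candidates, key=context_score, default=tokens[i])
    return best if context_score(best) > original_score else tokens[i]

# Toy example with a made-up trigram frequency table: a real-word spelling error
# ("peace" for "piece") is caught because no dictionary lookup would flag it.
counts = {("a", "piece", "of"): 120}
vocab = ["piece", "peace", "place"]
sentence = ["a", "peace", "of", "cake"]
print(correct_token(sentence, 1, counts, vocab))  # -> "piece"
```

The same context-frequency test generalizes to the other error types listed in the abstract whenever the n-gram data set covers the relevant contexts.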
