381

A microeconometric analysis of the take-up of income support in Britain

Crenian, Robert A. January 1998 (has links)
This thesis deals with the take-up of social security benefits in Britain. It is well documented that not everyone who is entitled to benefits actually claims them, and non-take-up has been found to be a problem especially for means-tested benefits. Throughout this thesis we therefore concentrate on Income Support (IS), the main means-tested benefit in Britain. The latest official estimates of the extent of non-take-up (for 1993/94) suggest that up to 1.4 million persons are not receiving close to £1.7 billion of IS despite being entitled to it. The main question this thesis addresses is: what factors determine whether an individual will or will not take up their benefit entitlement? We consider the problem from an economic perspective by constructing suitable models set in both static and dynamic environments. These models provide some interesting insights into the nature of non-take-up and, in turn, form the basis for a series of econometric models. Previous empirical evidence has shown that the entitlement level itself is one of the key determinants of whether or not an individual will take up. In addition, it has long been recognized that, due to the complex nature of the benefit system, determining individual entitlements is in many cases error-prone, so that calculated benefit entitlements are subject to measurement error. Hence, unlike other studies to date, we account for the presence of measurement error in the benefit entitlement when modelling the likelihood of take-up. Finally, we shed new light on the dynamics of take-up by using the information contained in our panel data set. In particular, we consider the effect that claiming in the past has on the current decision to take up, and how future changes, whether expected or known with certainty, influence the decision whether or not to take up.
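As a rough, purely illustrative sketch of why measurement error in the entitlement matters (synthetic data, an assumed take-up rule and an arbitrary noise scale; not the models developed in the thesis), a simple logit of take-up on a mismeasured entitlement shows the familiar attenuation of the estimated coefficient:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
entitlement = rng.gamma(shape=2.0, scale=20.0, size=n)      # hypothetical weekly entitlement
u = rng.logistic(size=n)
take_up = (0.05 * entitlement - 2.0 + u > 0).astype(int)    # assumed "true" take-up rule

# Entitlement as calculated by the researcher, with classical measurement error.
measured = entitlement + rng.normal(scale=20.0, size=n)

for name, x in [("true entitlement", entitlement), ("measured entitlement", measured)]:
    X = sm.add_constant(x)
    res = sm.Logit(take_up, X).fit(disp=0)
    # The coefficient on the mismeasured regressor is biased towards zero.
    print(name, "coefficient:", res.params[1])
```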
382

Improving the Reliability of NAND Flash, Phase-change RAM and Spin-torque Transfer RAM

January 2014 (has links)
Non-volatile memories (NVM) are widely used in modern electronic devices due to their non-volatility, low static power consumption and high storage density. While Flash memories are the dominant NVM technology, resistive memories such as phase-change random access memory (PRAM) and spin-torque transfer magnetic RAM (STT-MRAM) are gaining ground. All these technologies suffer from reliability degradation due to process variations, structural limits and material property shift. To address the reliability concerns of these NVM technologies, low-cost, multi-level solutions are proposed for each of them. My approach consists of first building a comprehensive error model; the error characteristics are then exploited to develop low-cost, multi-level strategies to compensate for the errors. For NAND Flash memory, I first characterize errors due to threshold voltage variations as a function of the number of program/erase cycles. Next, a flexible product code is designed to migrate to a stronger ECC scheme as the number of program/erase cycles increases. An adaptive data refresh scheme is also proposed to improve memory reliability at low energy cost for applications with different data update frequencies. For PRAM, soft and hard error models are built based on shifts in the resistance distributions. I then develop a multi-level error control approach involving bit interleaving and subblock flipping at the architecture level, threshold resistance tuning at the circuit level, and programming current profile tuning at the device level. This approach reduces the error rate significantly, so that a low-cost ECC scheme is sufficient to satisfy the memory reliability constraint. I also study the reliability of a PRAM+DRAM hybrid memory system and analyze the trade-offs between memory performance, programming energy and lifetime. For STT-MRAM, I first develop an error model based on process variations, and then a multi-level approach to reduce the error rates that consists of increasing the W/L ratio of the access transistor, increasing the voltage difference across the memory cell, and adjusting the current profile during the write operation. This approach enables use of a low-cost BCH-based ECC scheme to achieve very low block failure rates. / Dissertation/Thesis / Ph.D. Electrical Engineering 2014
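The flexible product code in the thesis adapts its correction strength over program/erase cycles; as a much simpler, hedged illustration of the basic product-code idea only (not the scheme developed in the thesis), a single-parity product code locates one flipped bit at the intersection of a failing row parity and a failing column parity:

```python
import numpy as np

def encode_product(data_bits):
    """Single-parity product code: append a parity bit to every row and column."""
    rows, cols = data_bits.shape
    coded = np.zeros((rows + 1, cols + 1), dtype=int)
    coded[:rows, :cols] = data_bits
    coded[:rows, cols] = data_bits.sum(axis=1) % 2      # row parities
    coded[rows, :cols] = data_bits.sum(axis=0) % 2      # column parities
    coded[rows, cols] = data_bits.sum() % 2             # overall parity
    return coded

def correct_single_error(coded):
    row_syn = coded.sum(axis=1) % 2      # 1 marks a row whose parity check fails
    col_syn = coded.sum(axis=0) % 2
    if row_syn.any() and col_syn.any():
        r, c = np.argmax(row_syn), np.argmax(col_syn)
        coded[r, c] ^= 1                 # flip the single erroneous bit
    return coded

data = np.random.randint(0, 2, size=(4, 4))
block = encode_product(data)
block[2, 1] ^= 1                         # inject a single bit error
assert (correct_single_error(block)[:4, :4] == data).all()
```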
383

Coding techniques for insertion/deletion error correction

Cheng, Ling 04 June 2012 (has links)
D. Ing. / In Information Theory, synchronization errors can be modelled as the insertion and deletion of symbols. Error correcting codes are proposed in this research as a method of recovering from a single insertion or deletion error; adjacent multiple deletion errors; or multiple insertion, deletion and substitution errors. A moment balancing template is a single insertion or deletion correcting construction based on number theoretic codes. The implementation of this previously published technique is extended to spectral shaping codes, (d, k) constrained codes and run-length limited sequences. Three new templates are developed. The first one is an adaptation to DC-free codes, and the second one is an adaptation to spectral null codes. The third one is a generalized moment balancing template for both (d, k) constrained codes and run-length limited sequences. Following this, two new coding methods are investigated to protect a binary sequence against adjacent deletion errors. The first class of codes is a binary code derived from the Tenengolts non-binary single insertion or deletion correcting code, with additional selection rules. The second class of codes is designed by using interleaving techniques. The asymptotic cardinality bounds of these new codes are also derived. Compared to the previously published codes, the new codes are more flexible, since they can protect against any given fixed, known length of adjacent deletion errors. Based on these two methods, a nested construction is further proposed to guarantee correction of adjacent deletion errors, up to a certain fixed number.
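The classical number-theoretic construction for a single deletion is the binary Varshamov-Tenengolts (VT) code, the same family the moment balancing templates build on. A minimal sketch of single-deletion correction with a VT code (brute-force re-insertion for clarity; not the templates or the Tenengolts-based construction developed in the thesis):

```python
def vt_syndrome(bits):
    """Varshamov-Tenengolts moment: sum of i*x_i over positions i = 1..n, mod (n+1)."""
    n = len(bits)
    return sum(i * b for i, b in enumerate(bits, start=1)) % (n + 1)

def correct_single_deletion(received, n, a):
    """Recover the codeword of VT_a(n) from which one bit was deleted.
    Any re-inserted candidate with the correct moment must equal the original."""
    assert len(received) == n - 1
    candidates = set()
    for pos in range(n):
        for bit in (0, 1):
            trial = received[:pos] + [bit] + received[pos:]
            if vt_syndrome(trial) == a:
                candidates.add(tuple(trial))
    return candidates

codeword = [1, 0, 1, 1, 0, 0, 1]          # any word can serve once its moment is recorded
a = vt_syndrome(codeword)                  # the code VT_a(7) contains this word
received = codeword[:3] + codeword[4:]     # the channel deletes the bit at position 4
print(correct_single_deletion(received, n=7, a=a))   # {(1, 0, 1, 1, 0, 0, 1)}
```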
384

Low complexity bit-level soft-decision decoding for Reed-Solomon codes

Oh, Min-seok January 1999 (has links)
Reed-Solomon codes (RS codes) are an important method for achieving error correction in communication and storage systems. However, it has proved difficult to find a soft-decision decoding method which has low complexity. Moreover, some previous soft-decision decoding approaches could not fully employ bit-level soft-decision information. Even though RS codes have powerful error correction capability, this is a critical shortcoming. This thesis presents bit-level soft-decision decoding schemes for RS codes. The aim is to design a low complexity sequential decoding method, based on bit-level soft-decision information, that approaches maximum likelihood performance. Firstly, a trellis decoding scheme which allows easy implementation is introduced, since the soft-decision information can be used directly. In order to allow bit-level soft decision, a binary equivalent code is introduced and Wolf's method is used to construct the binary trellis from a systematic parity check matrix. Secondly, the Fano sequential decoding method is chosen, which is sub-optimal and adaptable to channel conditions. This method does not need a large amount of storage to perform an efficient trellis search. The Fano algorithm is then modified to improve the error correcting performance. Finally, further methods of complexity reduction are presented without loss of decoding performance, based on reliability-first search decoding using permutation groups for RS codes. Compared with the decoder without permutation, these schemes give a large complexity reduction and a performance improvement, approaching near-maximum-likelihood performance. In this thesis, three types of permutation, cyclic, squaring and hybrid permutation, are presented and the decoding methods using them are implemented.
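A minimal sketch of the Wolf syndrome-trellis idea the thesis builds on: trellis states at depth i are the partial syndromes of the parity-check matrix, and a Viterbi search over that trellis with bit-level log-likelihood ratios gives maximum-likelihood soft-decision decoding. The (7,4) Hamming code below is only a small stand-in for the binary equivalent of an RS code, which is far larger in practice:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (columns are the binary numbers 1..7).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def wolf_trellis_viterbi(llr, H):
    """ML soft-decision decoding on the Wolf syndrome trellis.
    States at depth i are partial syndromes; llr[i] > 0 favours bit 0."""
    m, n = H.shape
    survivors = {(0,) * m: (0.0, [])}            # syndrome -> (metric, surviving path)
    for i in range(n):
        nxt = {}
        for syn, (metric, path) in survivors.items():
            for bit in (0, 1):
                bm = metric + (llr[i] if bit == 0 else -llr[i])   # correlation metric
                nsyn = tuple((np.array(syn) + bit * H[:, i]) % 2)
                if nsyn not in nxt or bm > nxt[nsyn][0]:
                    nxt[nsyn] = (bm, path + [bit])
        survivors = nxt
    return survivors[(0,) * m][1]                # only the zero syndrome ends a codeword

# Soft channel values for the all-zero codeword with one unreliable, flipped bit:
llr = np.array([2.1, 1.8, 2.5, -0.3, 1.9, 2.2, 1.7])
print(wolf_trellis_viterbi(llr, H))              # [0, 0, 0, 0, 0, 0, 0]
```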
385

Improving the safety of radiotherapy treatment delivery

Gilbert, L. January 2015 (has links)
Errors during radiotherapy treatment can cause severe, and potentially fatal, patient harm. The final check immediately prior to treatment delivery, whereby two radiographers ensure that the dose about to be delivered corresponds with the prescription, is the last defence against error. The aim of this research was to increase understanding of this final treatment check and the factors affecting error detection, in order to improve the safety of radiotherapy treatment delivery. The research adopted a mixed methods approach, combining qualitative and experimental studies to investigate the interaction of factors affecting accuracy during the final treatment checks. The qualitative interviews and task analysis pointed to difficulties in maintaining attention and to variation in how these checks are conducted. The interface used to conduct the final treatment check was also recognised to have usability issues. The results of the laboratory-based experimental studies indicated that a structured form of double checking, called challenge-response, is more effective at error detection than single or unstructured double checking. Furthermore, it was found that alternating the roles of challenger and responder, and the order in which parameters are checked, significantly increases accuracy during repeated treatment checks. The original contribution of this research was a detailed investigation of a previously understudied aspect of radiotherapy treatment. The results informed the design of an original, evidence- and theory-based two-person checking protocol for use during the final treatment check. Qualitative evaluation indicates that it would be well received as a standardised method of treatment checking. Furthermore, an alternative interface design has been proposed, specifically for use during the final treatment check. This was comparatively tested against the most frequently used software package within the UK and found to have a significant positive impact upon users' accuracy. An additional output is a series of practice-based recommendations to improve accuracy during repeated treatment checking. This research concluded that implementation of the practice recommendations, checking protocol and interface design should help maintain radiographers' attention during repeated final treatment checks, thereby preventing errors from passing undetected. Future research into the radiotherapy interface design and into implementation of the standardised final treatment check protocol has been identified.
386

Understanding and increasing Right First Time (RFT) Performance in a production environment: a case study

Gregoire, Carrie January 1900 (has links)
Master of Agribusiness / Department of Agricultural Economics / Vincent R. Amanor-Boadu / It is estimated that the animal health biologics sector will increase by over 27% between 2015 and 2020. This projection, and the increasing competition among the sector's players, suggests a need to find ways to enhance manufacturing efficiencies in order to sustain relative competitiveness. One approach to enhancing efficiencies is to ensure that all work is done only once, i.e., that everything is done right the first time. This research focused on human error as a major source of inefficiency in manufacturing and hypothesized that addressing issues that reduce human error would contribute to reducing inefficiencies. The research used the Kaizen process to assess the before and after counts of human error in a biologics manufacturing unit of Z Animal Health Company (ZAHC). The study found that human error accounted for about 51% of all sources of error in the pre-Kaizen period and only about 34% of all errors in the post-Kaizen period, a reduction in excess of 33.3%. Given that humans are directly or indirectly responsible for all activities in the manufacturing process, the Kaizen process also contributed to a reduction in most other error sources; for example, errors in raw materials and components were reduced by about 50%. Using a logit model, we tested the hypothesis that undertaking the Kaizen was statistically effective in reducing human error compared to all other errors. Our results confirmed this hypothesis, showing that the odds of human error relative to non-human error in the post-Kaizen period were about 50% of the corresponding pre-Kaizen odds. The research suggests that in a highly technical manufacturing environment, such as animal health biologics, human error can be a major problem that erodes competitiveness quickly. Focusing employees on the root causes of errors and helping them address these through structured quality-enhancing initiatives such as Kaizen produces superior results. It is therefore suggested that when organizations discover human error to be a major source of inefficiency, it is prudent to help employees understand what they do and how it contributes to the overall performance of the organization. This appreciation of how their actions fit into the big picture could provide a foundation upon which significant improvements can be achieved.
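To make the logit/odds-ratio result concrete, the sketch below rebuilds a hypothetical 2x2 table from the reported shares (51% of errors human pre-Kaizen, 34% post, treated as counts per 100 errors; not the actual ZAHC data) and recovers the roughly 0.5 odds ratio:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical counts per 100 errors, in the spirit of the case study:
# (human errors, non-human errors) per period.
counts = {"pre_kaizen": (51, 49), "post_kaizen": (34, 66)}

# Expand the 2x2 table into one record per error event.
y, post = [], []
for period, (human, other) in counts.items():
    y += [1] * human + [0] * other                     # 1 = human error
    post += [int(period == "post_kaizen")] * (human + other)

X = sm.add_constant(np.array(post, dtype=float))
res = sm.Logit(np.array(y), X).fit(disp=0)
odds_ratio = np.exp(res.params[1])
print(f"odds ratio of human error, post vs pre Kaizen: {odds_ratio:.2f}")

# Cross-check straight from the table: (34/66) / (51/49) ~= 0.49,
# matching the abstract's "about 50% of the odds".
```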
387

Error analysis and tractability for multivariate integration and approximation

Huang, Fang-Lun 01 January 2004 (has links)
No description available.
388

The estimation and presentation of standard errors in a survey report

Swanepoel, Rene 26 May 2006 (has links)
The vast number of different study variables or population characteristics and the different domains of interest in a survey make it impractical, and almost impossible, to calculate and publish standard errors for each estimated value of a population variable or characteristic and each domain individually. Since estimated values are subject to statistical variation (resulting from the probability sampling), standard errors may not be omitted from the survey report: estimated values can be evaluated only if their precision is known. The purpose of this research project is to study the feasibility of mathematical modeling to estimate the standard errors of estimated values of population parameters or characteristics in survey data sets, and to investigate effective and user-friendly methods of presenting these models in reports. The following data sets were used in the investigation:
• October Household Survey (OHS) 1995 - Workers and Household data set
• OHS 1996 - Workers and Household data set
• OHS 1997 - Workers and Household data set
• Victims of Crime Survey (VOC) 1998
The basic methodology consists of the estimation of standard errors of the statistics considered in the survey for a variety of domains (such as the whole country, provinces, urban/rural areas, population groups, gender and age groups, as well as combinations of these). This is done by means of a computer program that takes into consideration the complexity of the different sample designs; the directly calculated standard errors were obtained in this way. Different models are then fitted to the data by means of regression modeling in the search for a suitable standard error model. A function of the directly calculated standard error served as the dependent variable and a function of the size of the statistic served as the independent variable. A linear model, equating the natural logarithm of the coefficient of relative variation of a statistic to a linear function of the natural logarithm of the size of the statistic, gave an adequate fit in most of the cases. Well-known tests for the occurrence of outliers were applied in the model fitting procedure. For each observation indicated as an outlier, it was established whether the observation could legitimately be deleted (e.g. when the domain sample size was too small, or the estimate biased), after which the fitting procedure was repeated. The Australian Bureau of Statistics also uses the above model in similar surveys, having derived it especially for variables that count people in a specific category. It was found that this model performs equally well when the variable of interest counts households or incidents, as in the case of the VOC. The set of domains considered in the fitting procedure included segregated classes, mixed classes and cross-classes, so the model can be used irrespective of the type of subclass domain. This result makes it possible to approximate standard errors for any type of domain with the same model. The fitted model, as a mathematical formula, is not a user-friendly way of presenting the precision of estimates. Consequently, user-friendly and effective presentation methods for standard errors are summarized in this report. The suitability of a specific presentation method, however, depends on the extent of the survey and the number of study variables involved. / Dissertation (MSc (Mathematical Statistics))--University of Pretoria, 2007. / Mathematics and Applied Mathematics / unrestricted
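A minimal numerical sketch of the fitted relationship (with made-up estimate sizes and directly calculated standard errors standing in for the OHS/VOC values) shows how the model ln(CV) = a + b ln(X) is fitted and then used to approximate a standard error for an estimate of any size:

```python
import numpy as np

# Hypothetical pairs of (estimate size, directly calculated standard error) for a set
# of domains -- placeholders for the values produced by the complex-design program.
estimate = np.array([5e3, 2e4, 8e4, 3e5, 1.2e6, 4e6])
direct_se = np.array([1.1e3, 3.0e3, 7.5e3, 1.8e4, 5.0e4, 1.2e5])

# Fit ln(CV) = a + b*ln(X), where CV = SE / X is the coefficient of relative variation.
cv = direct_se / estimate
b, a = np.polyfit(np.log(estimate), np.log(cv), 1)

def modelled_se(x):
    """Approximate standard error for an estimate of size x from the fitted model."""
    return x * np.exp(a + b * np.log(x))

print(f"ln(CV) = {a:.3f} + {b:.3f} * ln(X)")
print("modelled SE for an estimate of 100 000:", round(modelled_se(1e5)))
```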
389

Automated Error Assessment in Spherical Near-Field Antenna Measurements

Pelland, Patrick January 2011 (has links)
This thesis will focus on spherical near-field (SNF) antenna measurements and on the methods developed or modified in this work to estimate the uncertainty in a particular far-field radiation pattern. We will discuss the need for error assessment in SNF antenna measurements. A procedure will be proposed that, in an automated fashion, can be used to determine the overall uncertainty in the measured far-field radiation pattern of a particular antenna. This overall uncertainty will be the result of a combination of several known sources of error common to SNF measurements. The procedure will consist of several standard SNF measurements, some newly developed tests, and several stages of post-processing of the measured data. The automated procedure will be tested on four antennas of various operating frequencies and directivities to verify its functionality. Finally, total uncertainty data will be presented to the reader in several formats.
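A common way such SNF error terms are combined (assumed here for illustration; the thesis may weight or combine its terms differently) is a root-sum-square of the individual error-to-signal ratios, converted back to dB bounds on the pattern level:

```python
import numpy as np

def combined_pattern_uncertainty(error_levels_db, signal_level_db=0.0):
    """RSS-combine individual error levels (dB relative to the pattern peak) into
    upper/lower uncertainty bounds, in dB, on a pattern feature at signal_level_db."""
    e = 10.0 ** ((np.asarray(error_levels_db) - signal_level_db) / 20.0)  # error/signal ratios
    e_total = np.sqrt(np.sum(e ** 2))                                     # root-sum-square
    upper = 20.0 * np.log10(1.0 + e_total)
    lower = 20.0 * np.log10(1.0 - e_total) if e_total < 1.0 else float("-inf")
    return upper, lower

# Hypothetical equivalent error levels for a few SNF terms (e.g. probe, position, drift, noise):
terms_db = [-55.0, -60.0, -58.0, -65.0]
print(combined_pattern_uncertainty(terms_db, signal_level_db=-30.0))  # ~(+0.65 dB, -0.70 dB)
```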
390

A Study in the Frequency Warping of Time-Domain Methods

Gao, Kai January 2015 (has links)
This thesis develops a study of the frequency warping introduced by time-domain methods. The work focuses first on the time-domain methods used in the classical SPICE engine, namely the Backward Euler, Trapezoidal Rule and Gear methods. Next, the thesis considers the newly developed high-order method based on the Obreshkov formula. This latter method has been proved to have the A-stability and L-stability properties, and is therefore robust in circuit simulation. However, to the best of the author's knowledge, a mathematical study of the frequency warping introduced by this method has not yet been developed. The thesis therefore develops the mathematical derivation for the frequency warping of the Obreshkov-based method. The derivations reveal that the Obreshkov-based methods introduce much smaller warping errors than the traditional methods used by SPICE. In order to take advantage of the small warping error, the thesis develops a shooting method framework based on the Obreshkov-based method to compute the steady-state response of nonlinear circuits excited by periodic sources. The new method demonstrates that the steady-state response can be constructed with a much smaller number of time points than is typically required by the classical methods.
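As a quick numerical check on the classical methods named above (a sketch only; the Obreshkov-based derivations from the thesis are not reproduced here), the Trapezoidal Rule integrator behaves like an ideal integrator at the warped frequency (2/h)*tan(omega*h/2), while Backward Euler also distorts the magnitude:

```python
import numpy as np

def integrator_response(omega, h):
    """Frequency response of discrete integrators versus the ideal integrator 1/(j*omega)."""
    z = np.exp(1j * omega * h)
    ideal = 1.0 / (1j * omega)
    be = h * z / (z - 1.0)                    # Backward Euler: y_n = y_{n-1} + h*x_n
    tr = (h / 2.0) * (z + 1.0) / (z - 1.0)    # Trapezoidal Rule
    return ideal, be, tr

h = 1e-3                                          # assumed time step
omega = 2 * np.pi * np.array([10.0, 50.0, 100.0]) # test frequencies in rad/s

ideal, be, tr = integrator_response(omega, h)

# The Trapezoidal Rule acts like 1/(j*omega_hat) with omega_hat = (2/h)*tan(omega*h/2).
omega_hat = (2.0 / h) * np.tan(omega * h / 2.0)
print("relative TR frequency warping:", (omega_hat - omega) / omega)  # ~ (omega*h)**2 / 12
print("BE magnitude error:", np.abs(be) / np.abs(ideal) - 1)
```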
