31 | Performance improvement in adaptive signal processing algorithms. Pazaitis, Dimitrios I. January 1996
No description available.
32 | Finite mixtures of generalized Pareto distributions with applications. Baeshu, Abdurrazagh M. January 1997
No description available.
33 | Analyses of communication failures in rail engineering works. Murphy, Philippa. January 2003
No description available.
34 | An adaptable high-speed error-control algebraic decoder. Katsaros, A. January 1985
No description available.
35 | Adaptive finite element methods for the damped wave equation. Wilkins, Catherine. January 1998
No description available.
36 | Bit Error Problems with DES. Loebner, Christopher E. October 1993
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The Data Encryption Standard (DES) was developed in 1977 by IBM for the National Bureau of Standards (NBS) as a standard way to encrypt unclassified data for security protection. When DES decrypts an encrypted data block, it assumes the block contains no bit errors. The object of this project is to determine the Hamming distance between the original data block and the decrypted data block when a single bit error occurs anywhere in the encrypted 64-bit block. The project shows that a single bit error anywhere in the 64-bit encrypted data block produces a mean Hamming distance of 32, with a standard deviation of 4, between the original bit block and the decrypted bit block. It therefore strongly recommends a forward error correction scheme such as BCH (127, 64) or Reed-Solomon (127, 64), so that the probability of such a bit error reaching the decryptor is decreased.
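A minimal sketch of the experiment described above, assuming the pycryptodome package (Crypto.Cipher.DES) is available: encrypt one 64-bit block, flip a single ciphertext bit, decrypt, and measure the Hamming distance to the original plaintext. Over many trials this lands near 32 bits, illustrating the avalanche effect the paper reports.

```python
# Sketch only: demonstrates the single-bit-error avalanche in DES decryption.
# Assumes pycryptodome is installed (pip install pycryptodome).
import os
from Crypto.Cipher import DES

def hamming(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

key = os.urandom(8)                            # 64-bit DES key
plaintext = os.urandom(8)                      # one 64-bit data block

ct = bytearray(DES.new(key, DES.MODE_ECB).encrypt(plaintext))
ct[3] ^= 0x10                                  # flip a single ciphertext bit
corrupted = DES.new(key, DES.MODE_ECB).decrypt(bytes(ct))

print(hamming(plaintext, corrupted))           # typically close to 32 of 64 bits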
37 | Error e ignorancia en el Derecho Civil (Error and Ignorance in Civil Law). Rubio Correa, Marcial. 21 June 2016
This thesis is intended as a dogmatic study of error and ignorance in civil law. It is built on two perspectives: the contemporary conceptual one, and the historical one, which reaches back to the Justinian compilation. The aim is to clarify the conceptualization, not to propose an alternative legislative project, although Chapter VII addresses what we consider the essential issues in this area.

Chapter I is a brief presentation of the conceptual problems of error as they appear today. Chapter II is long, and describes the evolution of error from Rome to the end of the nineteenth century, when the German Civil Code appeared. Chapter III is an attempt to evaluate the current conceptualization of error, not in the abstract but through the examples given by Justinian, Savigny, Mazeaud, and Messineo; it is a kind of laboratory (continued later in Chapter VI) for testing the coherence of the concepts. Chapter IV studies error and ignorance in our Civil Code, not only in the part on the Juridical Act (Acto Jurídico) but in every article in which these figures expressly appear. Chapter V is an attempt to analyze the "mistake" of Anglo-Saxon law in its own sources, and then to compare it with Romano-Germanic law. Chapter VI is a theoretical digression that seeks to show what seems evident to us: that in matters of error, Anglo-Saxon law evolves much faster than Romano-Germanic law. This is due to several factors, one of which is that among us we still think about society through examples that were, in essence, designed in Rome. This contradicts one of the widespread ideas in law: that the Anglo-Saxon system is much more conservative than the Romano-Germanic one.
38 | Convergent Validity of Variables Residualized by a Single Covariate: The Role of Correlated Error in Populations and Samples. Nimon, Kim
This study examined the bias and precision of four residualized variable validity estimates (C0, C1, C2, C3) across a number of study conditions. Estimates that modeled measurement error, correlations among error scores, and correlations between error scores and true scores (C3) performed best, yielding no estimates that were practically significantly different from their respective population parameters across study conditions. Estimates that modeled measurement error and correlations among error scores (C2) also did well, yielding unbiased, valid, and precise results; only in a small number of study conditions could C2 estimates not be computed, or did they produce results with enough variance to affect the interpretation of results. Estimates based on observed scores (C0) fared well in producing valid, precise, and unbiased results. Estimates based on observed scores corrected only for measurement error (C1) performed the worst: they failed to produce estimates reliably even when the level of modeled correlated error was low, they produced values above the theoretical limit of 1.0 across a number of study conditions, and they produced the greatest number of conditions that were practically significantly different from their population parameters.
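A minimal sketch of the residualization step the study builds on: regress a predictor on the covariate, keep the residuals, and correlate them with a criterion. This illustrates only the observed-score estimate (C0 in the study's notation); the corrected estimates C1-C3 additionally model measurement error and correlated error, which this sketch does not. All variable names and effect sizes below are illustrative.

```python
# Sketch only: observed-score (C0-style) validity of a residualized variable.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
z = rng.normal(size=n)                       # covariate
x = 0.6 * z + rng.normal(size=n)             # predictor, partly driven by z
y = 0.5 * x + 0.3 * z + rng.normal(size=n)   # criterion

# Residualize x by z with ordinary least squares: x_res = x - (a + b*z).
b, a = np.polyfit(z, x, 1)
x_res = x - (a + b * z)

# Observed-score validity of the residualized variable.
validity = np.corrcoef(x_res, y)[0, 1]
print(f"validity of residualized x: {validity:.3f}")
```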
39 | The study of medication errors at a teaching hospital using failure mode and effects analysis. McNally, Karen M. January 1998
The prevalence of medication errors in a major teaching hospital was investigated using several methodologies. The existing ward stock drug distribution system was assessed and a new system was designed, based on a novel use of failure mode and effects analysis. The existing system was compared with the new unit-supply individual patient dispensing system on two wards in terms of medication errors, nursing time, pharmacy time, drug costs, drug security, and nurses' opinion. A one-year sample of reports submitted under the existing incident reporting scheme was also reviewed, with errors categorised by drug group, error type, reason cited for the error, and probability ranking (probability of occurrence, detection, and harm). In addition, a "no-blame" medication error reporting scheme was implemented and assessed.

The newly designed individual patient dispensing system reduced nursing time associated with medication activities by approximately 29%, increased pharmacy staff time by 64%, reduced drug costs, and increased drug security. Using the disguised observer methodology, medication errors fell by 23.5% (including timing errors) and 7.3% (excluding timing errors) on Ward A, and by 21.1% (including timing errors) and 9.8% (excluding timing errors) on Ward B. Nursing staff expressed significant support for the individual patient dispensing system. Of the errors self-reported under the existing incident/accident reporting scheme, the most common error type was omission (32.2%), the most common drug group was cardiovascular drugs (19.8%), and the most commonly cited cause was a faulty check (42.3%). The probability ranking showed that 75% of reported errors scored between 12 and 17 points out of a possible 30. Under the no-blame error reporting scheme, an error rate of 2.1% was detected in the existing system and 1.7% in the phase designed with failure mode and effects analysis.
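A minimal sketch of the FMEA-style scoring the abstract describes: each reported error is rated on probability of occurrence, probability of detection, and probability of harm, and the ratings are combined into a ranking out of a possible 30 points. The 1-10 scale per criterion, summed, is an assumption consistent with that 30-point maximum; the thesis's exact rubric may differ, and the example reports are invented.

```python
# Sketch only: a summed three-criterion FMEA-style risk ranking (max 30).
from dataclasses import dataclass

@dataclass
class ErrorReport:
    description: str
    occurrence: int  # assumed 1 (rare) .. 10 (frequent)
    detection: int   # assumed 1 (easily caught) .. 10 (unlikely to be caught)
    harm: int        # assumed 1 (negligible) .. 10 (severe)

    def risk_score(self) -> int:
        return self.occurrence + self.detection + self.harm

reports = [
    ErrorReport("dose omission", occurrence=7, detection=5, harm=3),
    ErrorReport("wrong cardiovascular drug", occurrence=3, detection=6, harm=8),
]
# Rank reports so the highest-risk failure modes surface first.
for r in sorted(reports, key=lambda r: r.risk_score(), reverse=True):
    print(f"{r.risk_score():2d}/30  {r.description}")
```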
40 | Design of structured nonbinary quasi-cyclic low-density parity-check codes. Liu, Yue. Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW. January 2009
Since their rediscovery, LDPC codes have attracted a large amount of research effort. Nonbinary LDPC codes were first investigated in 1998, and the results showed that they outperform their binary counterparts. More recently, industry has called for applied nonbinary LDPC code designs. In this dissertation, we first propose a novel class of quasi-cyclic (QC) LDPC codes. This class of QC-LDPC codes combines linear encoding complexity with excellent flexibility across degree distributions and nonbinary expansions. Simulation results show that the proposed QC-LDPC codes perform as well as comparable existing codes while being more flexible to design, a feature that becomes useful when the code length and rate are changed adaptively. Furthermore, we present two algorithms, both based on the progressive edge growth (PEG) algorithm and designed specifically for the quasi-cyclic structure, that generate codes with fewer short cycles and a better girth distribution; simulation results show the improvement they achieve. The thesis also investigates belief propagation based iterative algorithms for decoding nonbinary LDPC codes, including the sum-product (SP) algorithm, the SP algorithm using the fast Fourier transform, the min-sum (MS) algorithm, and the complexity-reduced extended min-sum (EMS) algorithm. In particular, we propose a modified min-sum algorithm with threshold filtering that further reduces computational complexity.
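A minimal sketch of the quasi-cyclic structure the thesis builds on: a small base matrix of circulant shift values is expanded into a binary parity-check matrix H, where each entry s >= 0 becomes a Z x Z identity matrix cyclically shifted by s columns and each -1 becomes a Z x Z zero block. The base matrix and lifting size below are illustrative, not one of the thesis's designed codes.

```python
# Sketch only: expand a base matrix of shift values into a binary QC-LDPC
# parity-check matrix built from circulant permutation blocks.
import numpy as np

def expand_qc(base: np.ndarray, Z: int) -> np.ndarray:
    """Each shift s >= 0 -> Z x Z identity rolled by s columns; -1 -> zero block."""
    I = np.eye(Z, dtype=np.uint8)
    blocks = [[np.zeros((Z, Z), dtype=np.uint8) if s < 0 else np.roll(I, s, axis=1)
               for s in row] for row in base]
    return np.block(blocks)

base = np.array([[0,  1,  2, -1],
                 [2, -1,  0,  1]])
H = expand_qc(base, Z=4)
print(H.shape)        # (8, 16): a 2 x 4 base matrix lifted by Z = 4
print(H.sum(axis=0))  # column weights follow the -1 (zero-block) pattern
```

The quasi-cyclic block structure is what yields the linear encoding complexity mentioned above: encoding and storage can work on the small base matrix plus shift values rather than the full H.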