241 |
Failure-Free Pharmacies? : An Exploration of Dispensing Errors and Safety Culture in Swedish Community Pharmacies Nordén-Hägg, Annika January 2010 (has links)
Quality in pharmacies includes aspects such as error management and safety issues. The objective of this thesis was to explore these aspects of quality in Swedish community pharmacies. The specific aims were to compare a paper-based and a web-based reporting system for dispensing errors with regard to reporting behaviour and data quality. The impact of an intervention, a technical barrier for preventing dispensing errors, was evaluated. A survey tool, the Safety Attitudes Questionnaire (SAQ), was adapted to Swedish pharmacies and used to describe the safety culture in these pharmacies. The potential relationship between safety culture and dispensing errors was also explored. Data were retrieved from the paper- and web-based reporting systems, from semi-structured interviews, and from a survey using the SAQ. The change in reporting system for dispensing errors increased the reporting of errors and enhanced the completeness of reported data. The web-based system facilitated follow-up and identification of preventive measures, but was associated with implementation problems. The intervention was associated with a significant decrease in the overall number of dispensing errors and, specifically, in reports of errors involving the wrong strength and errors caused by registration failure in the pharmacy computers. The Swedish version of the survey tool, SAQ, demonstrated satisfactory psychometric properties. No correlation between the SAQ Safety Climate dimension and dispensing errors was seen, while a positive relationship between the SAQ Stress Recognition dimension and dispensing errors was established. A number of other pharmacy characteristics, such as the number of dispensed prescription items and the number of employees, displayed positive relationships with dispensing errors. Staff age demonstrated a negative relationship with dispensing errors, while other demographic variables such as national education background showed a positive relationship.
|
242 |
Power Properties of the Sargan Test in the Presence of Measurement Errors in Dynamic Panels Dahlberg, Matz, Mörk, Eva, Tovmo, Per January 2008 (has links)
This paper investigates the power properties of the Sargan test in the presence of measurement errors in dynamic panel data models. The conclusion from Monte Carlo simulations, and from an application to the data used by Arellano and Bond (1991), is that in the very likely case of measurement errors in either the dependent variable or any of the independent variables, relying on the Sargan test means we will quite likely accept a mis-specified model and end up with biased results.
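For context, the generic overidentification statistic underlying the Sargan test can be written as follows (a standard GMM textbook form, given only as background and not as this paper's derivation):

```latex
\[
  S \;=\; N\,\bar{g}(\hat{\theta})^{\top}\,\hat{W}\,\bar{g}(\hat{\theta})
  \;\xrightarrow{d}\; \chi^{2}_{\,m-k},
\]
```

where $\bar{g}(\hat{\theta})$ is the sample average of the $m$ moment conditions evaluated at the estimate $\hat{\theta}$ of the $k$ parameters and $\hat{W}$ is the weighting matrix. Measurement error generally invalidates some of the moment conditions, so the simulations effectively ask how often the statistic exceeds its chi-squared critical value in that situation.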
|
243 |
Homomorphic Encryption Weir, Brandon January 2013 (has links)
In this thesis, we provide a summary of fully homomorphic encryption and, in particular, look at the BGV encryption scheme by Brakerski, Gentry, and Vaikuntanathan, as well as the DGHV encryption scheme by van Dijk, Gentry, Halevi, and Vaikuntanathan. We explain the mechanisms developed by Gentry in his breakthrough work and show examples of how they are used.
While looking at the BGV encryption scheme, we make improvements to the underlying lemmas dealing with modulus switching and noise management, and show that the lemmas as currently stated are false. We then examine a lower bound on the hardness of the Learning With Errors lattice problem, and use this to develop specific parameters for the BGV encryption scheme at a variety of security levels.
We then study the DGHV encryption scheme, and show how the somewhat homomorphic encryption scheme can be implemented as both a fully homomorphic encryption scheme with bootstrapping, as well as a leveled fully homomorphic encryption scheme using the techniques from the BGV encryption scheme. We then extend the parameters from the optimized version of this scheme to higher security levels, and describe a more straightforward way of arriving at these parameters.
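As background for the DGHV scheme summarized above, the following toy sketch (illustrative only, not the thesis's implementation) shows the basic symmetric somewhat homomorphic construction over the integers, where a bit m is encrypted as c = pq + 2r + m for a secret odd integer p; the parameter sizes are far too small to be secure.

```python
import random

def keygen(p_bits=64):
    # Secret key: a random odd integer p.
    return random.getrandbits(p_bits) | 1

def encrypt(p, m, q_bits=128, r_bits=16):
    # c = p*q + 2*r + m, with q large and the noise r small relative to p.
    q = random.getrandbits(q_bits)
    r = random.getrandbits(r_bits)
    return p * q + 2 * r + m

def decrypt(p, c):
    # Reduce mod p (centered), then mod 2, to recover the encrypted bit.
    reduced = c % p
    if reduced > p // 2:
        reduced -= p
    return reduced % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
assert decrypt(p, c0 + c1) == 1   # homomorphic XOR: 0 XOR 1
assert decrypt(p, c0 * c1) == 0   # homomorphic AND: 0 AND 1
```

Homomorphic XOR and AND are plain integer addition and multiplication, and each operation grows the 2r noise term; keeping that noise below p/2 is precisely what bootstrapping or modulus switching has to manage.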
|
244 |
The Effectiveness of Checklists versus Bar-codes towards Detecting Medication Planning and Execution Errors Rose, Emily 26 November 2012 (has links)
The primary objective of this research was to evaluate the effectiveness of a checklist, compared to a smart pump and bar-code verification system, at detecting different categories of errors in intravenous medication administration. To address this objective, a medication administration safety checklist was first developed in an iterative user-centered design process. The resulting checklist design was then used in a high-fidelity simulation experiment comparing the effectiveness of the interventions against two classifications of error: execution and planning errors. Results showed the checklist provided no additional benefit for error detection over the control condition of current nursing practice. Relative to the checklist group, the smart pump and bar-coding intervention demonstrated increased effectiveness at detecting planning errors. Results of this work will help guide the selection, implementation, and design of appropriate interventions for error mitigation in medication administration.
|
246 |
Nous aspectes metodològics en l'exploració elèctrica i electromagnètica [New methodological aspects in electrical and electromagnetic exploration] Gabàs i Gasa, Anna 24 October 2003 (has links)
Building models that adequately describe the subsurface is one of the most important tasks in geophysical prospecting. Obtaining such models from experimental responses is the result of the process known as the inverse problem. Electrical and electromagnetic methods record certain components of the electromagnetic field and characterize subsurface structures through the physical property of the medium called electrical conductivity. Within the wide range of exploration methods, this thesis focuses on the magnetotelluric and direct-current electrical methods. Integrating these two techniques can significantly improve the models that describe the electrical conductivity of the ground. This work develops several methodological aspects of the magnetotelluric and direct-current electrical methods, with the aims of improving the treatment of experimental data and optimizing the inversion process. The thesis is therefore divided into two main parts, one for each method, each with its own specific objectives.
For the magnetotelluric method, two distinct and novel aspects are presented. The first aims to optimize the information introduced into the inversion, for example to prevent only certain types of responses from controlling the result. To this end, the influence of each type of magnetotelluric response on this process is studied. The result is a new theoretical expression relating the errors of two magnetotelluric responses. The study is completed with an alternative method that makes it possible to estimate relationships between magnitudes when they cannot be deduced analytically. The second aspect aims to develop a new methodology to identify and correct the effect of galvanic distortion in experimental data. This effect can lead to an erroneous interpretation of the subsurface, so its correction must be carried out before the data are interpreted. The methodology follows two steps: first, analytical expressions are obtained from the fundamental equations of the magnetotelluric method (Maxwell's equations); second, these expressions are evaluated with synthetic responses and experimental data. The result is a new method for correcting galvanic distortion, based on a linear relationship, that uses the measured magnetotelluric responses themselves.
The work on the direct-current electrical prospecting method aims to optimize the solution of the two-dimensional forward problem and to develop an inversion program that produces two-dimensional models. To control and optimize the modelling and inversion stages, several features were implemented that improve the solutions of the programs. In the modelling, a strategy of superimposing two meshes was designed to discretize the model, a new procedure was implemented to improve the calculation of the electric potential, and the effects of topography were incorporated. In the inversion, besides the design and implementation of the inversion algorithm, special attention was paid to the main difficulty of the inverse problem, the calculation of the sensitivity matrix. The two programs were applied to vertical electrical sounding profiles and to electrical resistivity tomography profiles, specifically adapted to the Dipole-Dipole and Wenner-Schlumberger arrays.
Thus, this thesis provides the tools needed to integrate geoelectrical data, a current line of research in geophysical prospecting, and therefore opens new perspectives in this field.
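For orientation, the textbook relations behind these methods can be stated as follows (standard forms, not the new expressions derived in the thesis):

```latex
\[
  \mathbf{E}(\omega) \;=\; \mathbf{Z}(\omega)\,\mathbf{H}(\omega),
  \qquad
  \rho_{a,ij}(\omega) \;=\; \frac{1}{\mu_{0}\,\omega}\,\bigl|Z_{ij}(\omega)\bigr|^{2},
  \qquad
  \mathbf{Z}_{\mathrm{measured}}(\omega) \;=\; \mathbf{C}\,\mathbf{Z}_{\mathrm{regional}}(\omega),
\]
```

where $\mathbf{Z}$ is the magnetotelluric impedance tensor, $\rho_{a}$ the apparent resistivity, and $\mathbf{C}$ a real, frequency-independent matrix describing galvanic distortion; a correction scheme must recover $\mathbf{Z}_{\mathrm{regional}}$ from the measured responses.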
|
247 |
Modeling and Mitigation of Soft Errors in Nanoscale SRAMs Jahinuzzaman, Shah M. January 2008 (has links)
Energetic particle (alpha particle, cosmic neutron, etc.) induced single-event data upset, or soft error, has emerged as a key reliability concern for SRAMs in sub-100-nanometre technologies. Low operating voltage, small node capacitance, high packing density, and a lack of error masking mechanisms are primarily responsible for the soft error susceptibility of SRAMs. In addition, since SRAM occupies the majority of die area in systems-on-chip (SoCs) and microprocessors, different leakage reduction techniques, such as supply voltage reduction and gated grounding, are applied to SRAMs in order to limit the overall chip leakage. These leakage reduction techniques exponentially increase the soft error rate in SRAMs. The soft error rate is further accentuated by process variations, which are prominent in scaled-down technologies. In this research, we address these concerns and propose techniques to characterize and mitigate soft errors in nanoscale SRAMs.
We develop a comprehensive analytical model of the critical charge, which is a key to assessing the soft error susceptibility of SRAMs. The model is based on the dynamic behaviour of the cell and a simple decoupling technique for the non-linearly coupled storage nodes. The model describes the critical charge in terms of NMOS and PMOS transistor parameters, cell supply voltage, and noise current parameters. Consequently, it enables characterizing the spread of critical charge due to process-induced variations in these parameters and due to manufacturing defects such as resistive contacts or vias. In addition, the model can estimate the improvement in critical charge when MIM capacitors are added to the cell in order to improve the soft error robustness. The model is validated by SPICE simulations (90nm CMOS) and a radiation test. The critical charge calculated by the model is in good agreement with SPICE simulations, with a maximum discrepancy of less than 5%. The soft error rate estimated by the model for low voltage (sub 0.8 V) operation is within 10% of the soft error rate measured in the radiation test. Therefore, the model can serve as a reliable alternative to time-consuming SPICE simulations for optimizing the critical charge, and hence the soft error rate, at the design stage.
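As a point of reference for how a critical-charge estimate feeds a soft error rate figure, a commonly used empirical form (the Hazucha–Svensson model, cited here as background rather than as the thesis's own model) is:

```latex
\[
  \mathrm{SER} \;\propto\; F \cdot A \cdot \exp\!\left(-\,\frac{Q_{\mathrm{crit}}}{Q_{s}}\right),
\]
```

where $F$ is the particle flux, $A$ the sensitive (drain) area, $Q_{\mathrm{crit}}$ the critical charge, and $Q_{s}$ the charge-collection efficiency of the node; the exponential dependence is why even modest reductions in critical charge from leakage-reduction techniques can sharply raise the soft error rate.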
In order to limit the soft error rate further, we propose an area-efficient multiword-based error correction code (MECC) scheme. The MECC scheme combines four 32-bit data words to form a composite 128-bit ECC word and uses an optimized 4-input transmission-gate XOR logic. Thus, MECC significantly reduces the area overhead for check-bit storage and the delay penalty for error correction. In addition, MECC interleaves two composite words in a row to limit cosmic-neutron-induced multi-bit errors. The ground potentials of the composite words are controlled to minimize leakage power without compromising read data stability. However, the use of composite words involves a unique write operation in which one data word is written while the other three data words are read to update the check-bits. A power-efficient word line signaling technique is developed to facilitate this write operation. A 64 kb SRAM macro with MECC is designed and fabricated in a commercial 90nm CMOS technology. Measurement results show that the SRAM consumes 534 μW at 100 MHz with a data latency of 3.3 ns for a single-bit error correction. This translates into an 82% per-bit energy saving and an 8x speed improvement over recently reported multiword ECC schemes. An accelerated neutron radiation test carried out at TRIUMF in Vancouver confirms that the proposed MECC scheme can correct up to 85% of soft errors.
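A back-of-the-envelope calculation (an illustration based on standard SEC-DED sizing, not a figure taken from the thesis) shows where the check-bit saving of a composite ECC word comes from: a SEC-DED code over k data bits needs r + 1 check bits with 2^r ≥ k + r + 1, so protecting four 32-bit words separately costs roughly three times the check-bit storage of protecting one 128-bit composite word.

```python
def secded_check_bits(data_bits: int) -> int:
    # Smallest r with 2**r >= data_bits + r + 1 (single error correction),
    # plus one extra overall parity bit for double error detection.
    r = 1
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r + 1

per_32 = secded_check_bits(32)     # 7 check bits per 32-bit word
per_128 = secded_check_bits(128)   # 9 check bits per 128-bit composite word
print(4 * per_32, "check bits for four separate 32-bit words")   # 28
print(per_128, "check bits for one 128-bit composite word")      # 9
```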
|
248 |
Statistical Power in Ergonomic Intervention Studies Hurley, Kevin 12 April 2010 (has links)
As awareness of the costs of workplace injury and illness continues to grow, there has been an increased demand for effective ergonomic interventions to reduce the prevalence of musculoskeletal disorders (MSDs). The goal of ergonomic interventions is to reduce exposures (mechanical and psychosocial); however, there is conflicting evidence about the impact of these interventions, as many studies produce inconclusive or conflicting results. In order to provide a clearer picture of the effectiveness of these interventions, we must find out whether methodological issues, particularly statistical power, are limiting this research. The purpose of this study was to review and examine factors influencing statistical power in ergonomic intervention papers from five peer-reviewed journals in 2008. A standardized review was performed by two reviewers. Twenty-eight ergonomic intervention papers met the inclusion criteria and were fully reviewed. Data and trends from the reviewed papers were summarized, looking specifically at the research designs used, the outcome measures used, whether statistical power was mentioned, whether a rationale for sample size was reported, whether standardized and un-standardized effect sizes were reported, whether confidence intervals were reported, the alpha levels used, whether pair-wise correlation values were provided, whether mean values and standard deviations were provided for all measures, and the location of the studies. The studies were also rated, based on the outcomes of their intervention, into one of three categories (shown to be effective, inconclusive, and not shown to be effective). Between these three groupings, comparisons of post hoc power, standardized effect sizes, un-standardized effect sizes, and coefficients of variation were made. The results indicate that, in general, a lack of statistical power is indeed a concern and may be due to the sample sizes used, the effect sizes produced, extremely high variability in some of the measures, the lack of attention paid to statistical power during research design, and the lack of appropriate statistical reporting guidelines in journals where ergonomic intervention research may be published. A total of 69.6% of the studies reviewed had a majority of measures with less than .50 power, and 71.4% of all measures used had CVs of > .20.
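For illustration, a post hoc power calculation of the kind compared across these groupings might look like the following sketch, assuming a two-sided, two-sample t-test with equal group sizes (an assumption made for the example; the reviewed studies cover a variety of designs):

```python
import numpy as np
from scipy import stats

def posthoc_power(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    # Power of a two-sided, two-sample t-test via the noncentral t distribution.
    df = 2 * n_per_group - 2
    ncp = effect_size * np.sqrt(n_per_group / 2.0)   # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2.0, df)
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

# Small samples and modest effects leave power well below the conventional 0.80.
print(round(posthoc_power(effect_size=0.5, n_per_group=15), 2))
```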
|
249 |
Real-Time Systems with Radiation-Hardened Processors : A GPU-based Framework to Explore Tradeoffs Alhowaidi, Mohammad January 2012 (has links)
Radiation-hardened processors are designed to be resilient against soft errors, but such processors are slower than Commercial Off-The-Shelf (COTS) processors as well as significantly costlier. In order to mitigate the high costs, software techniques such as task re-executions must be deployed together with adequately hardened processors to provide reliability. This leads to a huge design space comprising the hardening level of the processors and the number of re-executions of each task in the system. Each configuration in this design space represents a tradeoff between processor load, reliability, and costs. Reliability comes at the price of higher costs due to higher levels of hardening, and of performance degradation due to hardening or re-executions. Thus, the tradeoffs between performance, reliability, and costs must be carefully studied. Pertinent questions that arise in such a design scenario are: (i) how many times must a task be re-executed, and (ii) what should the hardening level be, such that the system reliability requirement is satisfied? In order to evaluate such tradeoffs efficiently, in this thesis we propose a novel framework that harnesses the computational power of Graphics Processing Units (GPUs). Our framework is based on a system failure probability analysis that connects the probability of failure of tasks to the overall system reliability. Based on characteristics of this probabilistic analysis as well as real-time deadlines, we derive bounds on the design space to prune infeasible solutions. Finally, we illustrate the benefits of our proposed framework with several experiments.
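A minimal sketch of the failure-probability bookkeeping described above, under the simplifying assumptions that task failures are independent and that a task succeeds if any one of its executions succeeds (these assumptions are for the illustration only; the thesis's analysis is more detailed):

```python
from math import prod

def system_reliability(task_fail_probs, re_executions):
    # A task fails only if its original run and all of its re-executions fail;
    # the system succeeds only if every task succeeds.
    task_success = [1.0 - p ** (k + 1)
                    for p, k in zip(task_fail_probs, re_executions)]
    return prod(task_success)

# Hypothetical per-execution failure probabilities at some hardening level.
p_fail = [1e-3, 5e-4, 2e-3]
print(system_reliability(p_fail, re_executions=[0, 0, 0]))   # no re-execution
print(system_reliability(p_fail, re_executions=[1, 1, 1]))   # one re-execution per task
```

Each additional re-execution raises reliability but adds processor load, which is the tradeoff the GPU-based exploration is meant to evaluate efficiently.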
|
250 |
Extended Information Matrices for Optimal Designs when the Observations are Correlated Pazman, Andrej, Müller, Werner January 1996 (has links) (PDF)
Regression models with correlated errors lead to nonadditivity of the information matrix. This makes the usual approach to design optimization (approximation with a continuous design, application of an equivalence theorem, numerical calculation by a gradient algorithm) impossible. A method is presented that allows the construction of a gradient algorithm by altering the information matrices through the addition of supplementary noise. A heuristic is formulated to circumvent the nonconvexity problem, and the method is applied to typical examples from the literature. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
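A minimal illustration of the nonadditivity referred to above, using standard linear-model expressions rather than the authors' extended matrices:

```latex
\[
  M(\xi) \;=\; \sum_{i=1}^{n} w_i\, f(x_i)\, f(x_i)^{\top}
  \quad \text{(uncorrelated errors)},
  \qquad
  M \;=\; F^{\top} C^{-1} F, \quad F = \bigl(f(x_1),\dots,f(x_n)\bigr)^{\top}
  \quad \text{(correlated errors)}.
\]
```

Because the covariance matrix $C^{-1}$ couples the design points, the correlated-error information matrix no longer decomposes into additive single-point contributions, which is what breaks the usual continuous-design machinery and motivates the modified gradient algorithm.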
|