1 |
Quality Improvement on Patient Safety at a Hemodialysis Center Using Root Cause Analysis. Chu, Fen-Yao, 16 December 2005 (has links)
The U.S. Institute of Medicine estimates that 98,000 people die each year from medical errors, and roughly 20,000 deaths per year in Taiwan are attributed to medical adverse events. These reports indicate that medical errors have a great impact on patient safety. The hemodialysis population in Taiwan keeps growing, which means more attention should be paid to patient safety in this setting. In 2005, the Taiwan Joint Commission on Hospital Accreditation set six patient safety goals as general guidelines for healthcare facilities, but the related regulations mostly concern standard devices. This study tries to identify further possible root causes affecting patient safety at a hemodialysis center.
Root cause analysis (RCA) is widely used in patient safety work because it can uncover latent factors. In this study, RCA was simulated at a hemodialysis center. First, a series of formal questions developed by the U.S. Department of Veterans Affairs, organized into six dimensions, was used to examine the current situation. A cause-and-effect diagram was then used to locate latent causes, and four dimensions were ultimately identified. The results are mainly summarized as human resource management issues, including two root causes: inadequate professional training and overwork. Adjustments to job assignment and job content are also suggested in this study.
2 |
Optimal coordinate sensor placements for estimating mean and variance components of variation sources. Liu, Qinyan, 29 August 2005 (has links)
An in-process Optical Coordinate Measuring Machine (OCMM) offers the potential of diagnosing, in a timely manner, the variation sources responsible for product quality defects. Such a sensor system can help manufacturers improve product quality and reduce process downtime. Effective use of sensory data in diagnosing variation sources depends on the optimal design of the sensor system, a problem often known as sensor placement. This thesis addresses coordinate sensor placement for diagnosing dimensional variation sources in assembly processes. Sensitivity indices for detecting process mean and variance components are defined as the design criteria and are derived from process layout and sensor deployment information. Exchange algorithms, originally developed in optimal experimental design research, are adopted and revised to maximize the detection sensitivity. A sort-and-cut procedure is used, which markedly improves the efficiency of the exchange routine. The resulting optimal sensor layouts and their implications are illustrated in the specific context of a panel assembly process.
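To make the exchange-algorithm idea concrete, the Python sketch below shows one rough way such a routine could look: a sensor placement is improved by repeatedly swapping a selected location for an unselected one whenever the swap raises a sensitivity criterion, with a sort-and-cut step that limits how many candidate swaps are tried in each pass. The determinant-based criterion, the layout matrix, and all names here are illustrative assumptions, not the thesis's actual formulation.

    import itertools
    import numpy as np

    def sensitivity_index(design, layout):
        # Illustrative stand-in criterion: determinant of the information
        # matrix formed by the layout rows observed by the chosen sensors
        # (a D-optimality-style proxy for detection sensitivity).
        A = layout[design, :]
        return np.linalg.det(A.T @ A)

    def exchange_placement(layout, n_sensors, n_iter=50, cut=5):
        # Greedy exchange: start from an arbitrary placement and keep
        # swapping one selected location for an unselected one while the
        # sensitivity criterion improves.
        m = layout.shape[0]
        design = list(range(n_sensors))
        for _ in range(n_iter):
            # sort-and-cut: rank the unused locations by a cheap score and
            # only try the best `cut` of them in this pass
            outside = sorted((j for j in range(m) if j not in design),
                             key=lambda j: -np.linalg.norm(layout[j]))[:cut]
            best, best_val = None, sensitivity_index(design, layout)
            for i, j in itertools.product(range(n_sensors), outside):
                trial = design.copy()
                trial[i] = j
                val = sensitivity_index(trial, layout)
                if val > best_val:
                    best, best_val = trial, val
            if best is None:
                break
            design = best
        return design

    # Example: 20 candidate locations, 4 variation sources, place 6 sensors
    rng = np.random.default_rng(0)
    print(exchange_placement(rng.normal(size=(20, 4)), n_sensors=6))

The sort-and-cut step is what keeps each pass cheap: only the most promising unused locations are tried as replacements rather than every candidate.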
3 |
Log Event Filtering Using Clustering Techniques. Wasfy, Ahmed, January 2009 (has links)
Large software systems are composed of various run-time components, partner applications, and processes. When such systems operate, they are monitored so that audits can be performed once a failure occurs or when maintenance operations are carried out. However, log files are usually sizeable and require filtering and reduction to be processed efficiently. Furthermore, there is no apparent correspondence between logged events and the particular use cases the system may be performing. In this thesis, we have developed a framework based on heuristic clustering algorithms to achieve log filtering, log reduction, and log interpretation. More specifically, we define the concept of the Event Dependency Graph and present event filtering and use case identification techniques based on event clustering. The clustering process groups together all events that relate to a collection of initial significant events associated with a use case. We refer to these significant events as beacon events. Beacon events can be identified automatically or semi-automatically by examining log event types or event names against those in the corresponding specification of the use case being considered (e.g., events in sequence diagrams). Furthermore, the user can select other or additional initial clustering conditions based on his or her domain knowledge of the system. The clustering technique can be used in two ways. The first is to reduce or slice large logs with respect to a particular use case, so that operators can better focus their attention on the events that relate to specific operations. The second is to determine active use cases: operators select particular seed events of interest and then examine the resulting reduced logs against events or event types stemming from different alternative known use cases, in order to identify the best match and consequently gain insight into which of these alternative use cases may be running at any given time. The approach has shown very promising results in identifying executing use cases among various alternatives in several runs of the Session Initiation Protocol.
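As a rough illustration of the beacon-event idea, the Python sketch below builds a toy event dependency graph from co-occurrence counts and slices a log around a chosen beacon event type. The co-occurrence window, the support threshold, and the SIP-like event names are illustrative assumptions; they stand in for the heuristics and the Event Dependency Graph construction developed in the thesis.

    from collections import defaultdict

    def build_dependency_graph(log, window=5):
        # Count how often two event types appear within `window` positions
        # of each other: an illustrative stand-in for an Event Dependency Graph.
        graph = defaultdict(int)
        for i, a in enumerate(log):
            for b in log[i + 1:i + 1 + window]:
                if a != b:
                    graph[(a, b)] += 1
                    graph[(b, a)] += 1
        return graph

    def slice_log(log, beacons, window=5, min_support=2):
        # Keep events whose type is linked strongly enough to a beacon event
        # type in the dependency graph (a simple one-step clustering).
        graph = build_dependency_graph(log, window)
        cluster = set(beacons)
        for (a, b), count in graph.items():
            if a in beacons and count >= min_support:
                cluster.add(b)
        return [e for e in log if e in cluster]

    # Hypothetical SIP-like trace: events around the INVITE use case are kept,
    # unrelated registration traffic is filtered out.
    trace = ["REGISTER", "200_OK", "INVITE", "100_TRYING", "180_RINGING",
             "200_OK", "ACK", "REGISTER", "INVITE", "100_TRYING",
             "180_RINGING", "200_OK", "ACK", "BYE", "200_OK"]
    print(slice_log(trace, beacons={"INVITE"}))

Running the sketch keeps the INVITE-related events and drops the REGISTER and BYE traffic, which is the kind of slicing an operator would use to focus on one use case.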
4 |
The development and validation of a questionnaire on Root Cause Analysis. Wepener, Clare, 02 March 2021 (has links)
Background: Root Cause Analysis (RCA) is a method of investigating adverse events (AEs). The purpose of RCA is to improve quality of care and patient safety through a retrospective, structured investigation of an incident, resulting in recommendations to prevent the recurrence of medical errors. Aim: The aim of the study was to develop and validate a prototype questionnaire to establish whether the RCA model and processes employed at the research setting were perceived by the users to be acceptable, thorough and credible in terms of internationally established criteria. Methods: This is a validation study comprising four phases to meet the study objectives: 1) the development of a prototype questionnaire guided by a literature review; 2) assessment of the questionnaire's content validity and a numerical evaluation of its face validity; 3) qualitative assessment of face validity through cognitive interviews; and 4) assessment of reliability by test-retest. Results: Content validity assessment in Phase 2 resulted in the removal of 1/36 (2.77%) question items and the amendment of 7/36 (19.44%), leaving 35 items in the revised questionnaire. Analysis of the cognitive interview data resulted in the amendment of 20/35 (57.14%) question items but no removals. Reliability of the final questionnaire achieved the predetermined ≥0.7 level of agreement. Conclusion: The questionnaire achieved a high content validity index, and its face validity was enhanced by the qualitative data provided by the cognitive interviews. The inter-rater coefficient indicated a high level of reliability. The tool was designed for a local private healthcare setting, which may limit its wider use.
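To illustrate the kind of quantitative checks mentioned above, the Python sketch below computes an item-level content validity index and a simple percent-agreement measure between two administrations of a questionnaire. The 1-4 relevance scale, the example data, and the use of plain percent agreement are common conventions assumed here, not necessarily the exact procedure of the study.

    def item_cvi(ratings, relevant=(3, 4)):
        # Item-level content validity index: share of expert raters who score
        # the item 3 or 4 on a 1-4 relevance scale.
        return sum(r in relevant for r in ratings) / len(ratings)

    def test_retest_agreement(first, second):
        # Simple percent agreement between two administrations of the
        # questionnaire for one respondent (a proxy for test-retest reliability).
        matches = sum(a == b for a, b in zip(first, second))
        return matches / len(first)

    # Hypothetical data: six experts rate one item; one respondent answers
    # ten items twice, some time apart.
    print(item_cvi([4, 3, 4, 4, 2, 4]))                      # -> 0.83
    print(test_retest_agreement([1, 2, 2, 3, 1, 4, 2, 2, 3, 1],
                                [1, 2, 2, 3, 1, 4, 1, 2, 3, 1]))  # -> 0.9, above the 0.7 threshold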
5 |
Enhancing Organizational Performance in Banks: A Systematic Approach. Yavas, Ugur; Yasin, Mahmoud M., 01 November 2001 (has links)
To enhance their organizational performance, banks can benefit from the experiences of manufacturing firms and gainfully employ quality and process improvement philosophies with proven track records in manufacturing industries. This article presents a framework that integrates root cause analysis with benchmarking, process reengineering and continuous improvement. A case study is employed to illustrate the application of the framework and to demonstrate how it can benefit a bank in lowering costs, enhancing productivity, responding to customer demands, reducing complaints and improving customer satisfaction.
6 |
A data-driven solution for root cause analysis in cloud computing environments / Uma solução guiada por dados de análise de causa raiz em ambiente de computação em nuvem. Pereira, Rosangela de Fátima, 05 December 2016 (has links)
Failure analysis and resolution in cloud-computing environments is a highly important issue, its primary motivation being the mitigation of the impact of such failures on the applications hosted in these environments. Although there have been advances in the immediate detection of failures, there is a lack of research on root cause analysis of failures in cloud computing. In this process, failures are tracked in order to analyze their causal factors. This practice allows cloud operators to prevent failures more effectively, reducing the number of recurring failures. Although the analysis is commonly performed through human intervention, based on the expertise of professionals, the complexity of cloud-computing environments, coupled with the large volume of log data generated in these environments and the wide interdependence between system components, has made manual analysis impractical. Therefore, scalable solutions are needed to automate the root cause analysis process in cloud-computing environments, allowing the analysis of large data sets with satisfactory performance. Based on these requirements, this thesis presents a data-driven solution for root cause analysis in cloud-computing environments. The proposed solution includes the functionality required for the collection, processing and analysis of data, as well as a method based on Bayesian Networks for the automatic identification of root causes. The proposal is validated through a proof of concept using OpenStack, a framework for cloud-computing infrastructure, and Hadoop, a framework for distributed processing of large data volumes. The tests showed satisfactory performance, and the developed model correctly classified the root causes with a low rate of false positives.
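As a rough illustration of scoring candidate root causes from log-derived features, the Python sketch below trains and queries a naive Bayes structure (the simplest form of Bayesian network) over hand-labelled incidents. The feature names, cause labels and incident data are made up for the example; the actual model and data pipeline of the thesis are not shown here.

    from collections import Counter, defaultdict

    def train(records):
        # records: list of (features_dict, root_cause) pairs extracted from
        # labelled incident logs. Learns P(cause) and P(feature=value | cause).
        prior = Counter()
        cond = defaultdict(Counter)
        for features, cause in records:
            prior[cause] += 1
            for name, value in features.items():
                cond[cause][(name, value)] += 1
        return prior, cond

    def most_likely_cause(prior, cond, features, alpha=1.0):
        # Score each candidate cause by its prior times the product of
        # (smoothed) conditional feature likelihoods; return the best one.
        best, best_score = None, float("-inf")
        total = sum(prior.values())
        for cause, count in prior.items():
            score = count / total
            for name, value in features.items():
                score *= (cond[cause][(name, value)] + alpha) / (count + 2 * alpha)
            if score > best_score:
                best, best_score = cause, score
        return best

    # Hypothetical labelled incidents mined from cloud (e.g. OpenStack) logs
    incidents = [
        ({"api_errors": "high", "disk_latency": "normal"}, "api service overload"),
        ({"api_errors": "high", "disk_latency": "normal"}, "api service overload"),
        ({"api_errors": "low", "disk_latency": "high"}, "storage backend failure"),
    ]
    prior, cond = train(incidents)
    print(most_likely_cause(prior, cond, {"api_errors": "high", "disk_latency": "normal"}))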
7 |
Provenance-based computing. Carata, Lucian, January 2019 (has links)
Relying on computing systems that become increasingly complex is difficult: with many factors potentially affecting the result of a computation or its properties, understanding where problems appear and fixing them is a challenging proposition. Typically, the process of finding solutions is driven by trial and error or by experience-based insights. In this dissertation, I examine the idea of using provenance metadata (the set of elements that have contributed to the existence of a piece of data, together with their relationships) instead. I show that considering provenance a primitive of computation enables the exploration of system behaviour, targeting both retrospective analysis (root cause analysis, performance tuning) and hypothetical scenarios (what-if questions). In this context, provenance can be used as part of feedback loops, with a double purpose: building software that is able to adapt for meeting certain quality and performance targets (semi-automated tuning) and enabling human operators to exert high-level runtime control with limited previous knowledge of a system's internal architecture. My contributions towards this goal are threefold: providing low-level mechanisms for meaningful provenance collection considering OS-level resource multiplexing, proving that such provenance data can be used in inferences about application behaviour and generalising this to a set of primitives necessary for fine-grained provenance disclosure in a wider context. To derive such primitives in a bottom-up manner, I first present Resourceful, a framework that enables capturing OS-level measurements in the context of application activities. It is the contextualisation that allows tying the measurements to provenance in a meaningful way, and I look at a number of use-cases in understanding application performance. This also provides a good setup for evaluating the impact and overheads of fine-grained provenance collection. I then show that the collected data enables new ways of understanding performance variation by attributing it to specific components within a system. The resulting set of tools, Soroban, gives developers and operation engineers a principled way of examining the impact of various configuration, OS and virtualization parameters on application behaviour. Finally, I consider how this supports the idea that provenance should be disclosed at application level and discuss why such disclosure is necessary for enabling the use of collected metadata efficiently and at a granularity which is meaningful in relation to application semantics.
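As a small illustration of treating provenance as the set of elements that contributed to a piece of data, the Python sketch below records which inputs and processes produced each artefact and walks the graph backwards to answer a retrospective "what contributed to this result?" question. The class, the artefact names and the granularity are illustrative assumptions and are much coarser than the OS-level collection described in the dissertation.

    from collections import defaultdict

    class ProvenanceGraph:
        # Toy provenance store: each artefact records the inputs and the
        # process that produced it, so any result can be traced backwards.
        def __init__(self):
            self.derived_from = defaultdict(set)

        def record(self, output, process, inputs):
            self.derived_from[output].update(inputs)
            self.derived_from[output].add(process)

        def lineage(self, artefact):
            # All elements that contributed (transitively) to `artefact`.
            seen, stack = set(), [artefact]
            while stack:
                node = stack.pop()
                for parent in self.derived_from[node]:
                    if parent not in seen:
                        seen.add(parent)
                        stack.append(parent)
            return seen

    g = ProvenanceGraph()
    g.record("report.pdf", "render_job", inputs={"summary.csv", "template.tex"})
    g.record("summary.csv", "aggregate_job", inputs={"raw_measurements.db"})
    print(g.lineage("report.pdf"))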
8 |
Orsaksanalys och lösningsförslag vid fel vid kommunikation av växelläge / Analysis of cause and suggestion of countermeasure for position communication failure of selected gear. Thorsell, Tobias, January 2012 (has links)
This bachelor thesis was carried out in cooperation with Kongsberg Automotive in Mullsjö, which develops and manufactures components for the automotive industry. The company has received complaints about a gear lever unit it produces, a product installed in customers' trucks and buses. The failures have occurred relatively infrequently, but with a large enough margin to be classified as serious. They are connected to the magnet arm in the product that communicates with the vehicle's transmission, and their consequence is a vehicle that becomes unusable and must be towed away. The purpose of this thesis is to tackle the problems with the magnet arm in an engineering manner so that their root causes can be ascertained. The goal is to find the root causes of why the magnet arm breaks or dislocates, and to generate a design that prevents the problems from reappearing. To structure the work, the author used the Six Sigma DMAIC problem-solving method, around which the whole project, and therefore this report, is organized. Through extensive analysis, the author found that the root causes of the magnet-arm problems lay in the design of the parts related to the product's gear knob: their construction allowed the driver to incorrectly activate two buttons simultaneously, affecting the product in an unintended way. The thesis resulted in a concept which, together with the company's own solution, removes the underlying root causes and prevents the problems from reappearing.