261. Reliability engineering of a hospital oxygen supply system. Nel, Coenrad Marais (11 September 2012)
M.Ing. / This dissertation covers a literature study of reliability engineering, which is then applied to a hospital oxygen supply system in order to determine the system's reliability. The oxygen supply system must comply with international and local legislation, which demands very high reliability because the system supports life in the hospital. Since, to the author's knowledge, no previous studies had been conducted on the oxygen supply system, this work opens a new field for the application of reliability engineering concepts. The research found that the company kept no records of failures occurring in the oxygen supply system, which made it difficult to calculate the actual reliability of the supply system. A reliability prediction was therefore made using failure rate data from a database. The predicted reliability of the system was very low, and possibly not an accurate reflection of the system's actual reliability. The author therefore created a reliability calculation program, which calculates the reliability of the system and also keeps an accurate failure data record for each component of the system. The main conclusion of this dissertation is that failure data feedback and accurate records are essential to reliability engineering. Without them, a company cannot make informed design changes to its systems, because it does not know where failures occur or what monetary cost is linked to them.
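As a rough illustration of the kind of database-driven reliability prediction described above, the sketch below combines constant failure rates for a series system under the exponential-lifetime assumption. The component names and rates are hypothetical, not taken from the dissertation.

```python
import math

# Hypothetical component failure rates (failures per hour) for an
# oxygen supply chain; illustrative values, not from the study.
FAILURE_RATES = {
    "bulk_tank": 2e-6,
    "vaporizer": 5e-6,
    "pressure_regulator": 1e-5,
    "pipeline": 3e-6,
}

def component_reliability(lam: float, hours: float) -> float:
    """Exponential-lifetime reliability R(t) = exp(-lambda * t)."""
    return math.exp(-lam * hours)

def series_reliability(rates: dict, hours: float) -> float:
    """A series system works only if every component works."""
    r = 1.0
    for lam in rates.values():
        r *= component_reliability(lam, hours)
    return r

t = 8760.0  # one year of continuous operation
print(f"Predicted one-year system reliability: {series_reliability(FAILURE_RATES, t):.4f}")
```

In a series configuration the predicted reliability falls quickly as components are added, which is consistent with the low prediction the dissertation reports.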
262. Improving network quality-of-service with unreserved backup paths. Chen, Ing-Wher (11 1900)
To be effective, applications such as streaming multimedia require a more stable and more reliable service than the default best-effort service of the underlying computer network. To guarantee steady data transmission despite the unpredictability of the network, a single reserved path is typically used for each traffic flow. However, a single dedicated path is vulnerable to single link failures. To allow continuous service inexpensively, this thesis uses unreserved backup paths. While unreserved backup paths waste no resources, recovery from a failure may not be perfect. Thus, a goal for this approach is to design algorithms that compute backup paths to mask the failure for all traffic and, failing that, to maximize the number of flows unaffected by the failure. Although the algorithms are carefully designed with the goal of perfect recovery, when only unreserved backup paths are used, re-routing all affected flows at the same service quality as before the failure may not be possible under some conditions, particularly when the network was already fully loaded prior to the failure. Alternate strategies that trade off service quality for continuous traffic flow should therefore be considered to minimize the effects of a failure on traffic. In addition, the backup path calculation itself can be problematic: finding backup paths that provide good service often requires so much information about the traffic present in the network that the overhead can be prohibitive. Algorithms are therefore developed that trade off performance against communication overhead. In this thesis, a family of algorithms is designed that, as a whole, delivers inexpensive, scalable, and effective performance after a failure. Simulations study the trade-offs between performance and scalability and between soft and hard service guarantees. The results show that some of the algorithms yield competitive or better performance even at lower overhead. The more reliable service provided by unreserved backup paths allows current applications to perform better inexpensively, and provides the groundwork for expanding the computer network to future services and applications. / Faculty of Applied Science / Department of Electrical and Computer Engineering / Graduate
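A minimal sketch of one way to compute an unreserved backup path is shown below: the backup is found on a copy of the graph with the primary path's links removed, so any single link failure on the primary can be masked. The topology, weights, and use of NetworkX are illustrative assumptions, not the thesis's algorithms.

```python
import networkx as nx

# Illustrative topology; weights stand in for link cost or latency.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "D", 1), ("A", "C", 2),
    ("C", "D", 2), ("B", "C", 1),
])

def primary_and_backup(g: nx.Graph, src, dst):
    """Shortest primary path plus a link-disjoint backup, found on a
    copy of the graph with the primary's links removed. Nothing is
    reserved on the backup; it is consulted only after a failure."""
    primary = nx.shortest_path(g, src, dst, weight="weight")
    h = g.copy()
    h.remove_edges_from(zip(primary, primary[1:]))
    try:
        backup = nx.shortest_path(h, src, dst, weight="weight")
    except nx.NetworkXNoPath:
        backup = None  # a heavily loaded or sparse network may offer none
    return primary, backup

print(primary_and_backup(G, "A", "D"))
```

Because no capacity is reserved on the backup, the approach costs nothing in the failure-free case, at the price of possibly degraded service quality after a failure, which is exactly the trade-off the thesis studies.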
263. On Optimal Maintenance Management for Wind Power Systems. Besnard, Francois (January 2009)
Sound maintenance strategies and planning are of crucial importance for wind power systems, especially at offshore locations. In recent decades, awareness of the impact of human activity on the environment has grown worldwide. The importance of developing renewable energy is now widely recognized, and energy policies have been adopted to support this development. Wind energy has been the fastest-growing renewable energy source over the last decade. Wind power is now moving offshore, where sites are available and benefit from strong, steady winds. However, the initial investments are larger than onshore, and operation and maintenance costs can be substantially higher owing to transportation costs and weather-constrained accessibility. Operational costs can be significantly reduced by optimizing maintenance strategies and maintenance planning. This is especially important for offshore wind power systems, to reduce the high economic risks associated with uncertainty about the accessibility and reliability of wind turbines. This thesis proposes decision models for cost-efficient maintenance planning and maintenance strategies for wind power systems. One model addresses the planning of service maintenance activities. Two models investigate the benefits of condition-based maintenance strategies for the drive train and the blades of wind turbines, respectively, and a further model optimizes the inspection interval for the blades. Maintenance strategies for small components are also presented, with simple models for component redundancy and age replacement. The models are tested in case studies, and sensitivity analyses are performed for parameters of interest. The results show that maintenance costs can be significantly reduced by optimizing the maintenance strategies and the maintenance planning.
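The age-replacement strategy mentioned above can be illustrated with a short sketch: under an assumed Weibull lifetime, the long-run cost rate of replacing a component at age T (or at failure, whichever comes first) is minimized numerically. All parameter values are hypothetical, not the thesis's case-study data.

```python
import numpy as np

# Hypothetical age-replacement model for a wind turbine component with
# a Weibull lifetime; all parameter values are illustrative.
beta, eta = 2.5, 8.0      # Weibull shape and scale (years)
c_p, c_f = 10.0, 60.0     # preventive vs. corrective replacement cost

def survival(t):
    """Weibull survival function R(t)."""
    return np.exp(-(t / eta) ** beta)

def cost_rate(T, n=2000):
    """Long-run cost per year when replacing at age T or at failure:
    (c_p * R(T) + c_f * (1 - R(T))) / E[min(lifetime, T)]."""
    t = np.linspace(0.0, T, n)
    s = survival(t)
    expected_cycle = np.sum((s[:-1] + s[1:]) / 2.0 * np.diff(t))  # trapezoid rule
    return (c_p * survival(T) + c_f * (1.0 - survival(T))) / expected_cycle

ages = np.linspace(0.5, 12.0, 200)
best = min(ages, key=cost_rate)
print(f"Cost-minimizing replacement age: {best:.2f} years")
```

The cost asymmetry (corrective replacement far more expensive than preventive, especially offshore) is what makes early replacement pay off.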
264. One-Year Test-Retest Reliability of the Online Version of ImPACT in High School Athletes. Elbin, R. J., Schatz, Philip, Covassin, Tracey (01 November 2011)
Background: The ImPACT (Immediate Post-Concussion Assessment and Cognitive Testing) neurocognitive testing battery is a popular assessment tool used for concussion management. The stability of the baseline neurocognitive assessment is important for accurate comparisons between postconcussion and baseline neurocognitive performance. Psychometric properties of the recently released online version of ImPACT have yet to be established; therefore, research evaluating the reliability of this measure is warranted. Purpose: The authors investigated the 1-year test-retest reliability of the online version of ImPACT in a sample of high school athletes. Study Design: Case series; Level of evidence, 4. Methods: A total of 369 varsity high school athletes completed 2 mandatory preseason baseline cognitive assessments approximately 1 year apart, as required by their respective athletics programs. No diagnosed concussion occurred between assessments. Results: Intraclass correlation coefficients (ICCs) for online ImPACT indicated that motor processing speed (.85) was the most stable composite score, followed by reaction time (.76), visual memory (.70), and verbal memory (.62). Unbiased estimates of reliability were consistent with the ICCs: motor processing speed (.85), reaction time (.76), visual memory (.71), and verbal memory (.62). Conclusion: The online ImPACT baseline is a stable measure of neurocognitive performance across a 1-year period for high school athletes. These reliability data for online ImPACT are higher than the 2-year ICCs previously reported for the desktop version. Clinical Relevance: It is recommended that the ImPACT baseline assessment (both desktop and online) continue to be updated every 2 years. The online version of ImPACT appears to be a stable measure of neurocognitive performance over a 1-year period, and systematic evaluation of its stability over a 2-year period is warranted.
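For reference, a test-retest ICC of the kind reported above can be computed from a two-way ANOVA decomposition. The sketch below implements the two-way mixed-effects, consistency, single-measure form ICC(3,1) on hypothetical scores; the study's data and exact ICC variant are not reproduced here.

```python
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    """ICC(3,1): two-way mixed effects, consistency, single measure,
    for an (n subjects x k sessions) matrix of scores."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical composite scores for 6 athletes tested twice, a year apart.
rng = np.random.default_rng(0)
trait = rng.normal(40.0, 5.0, size=(6, 1))            # stable athlete-level score
scores = np.hstack([trait + rng.normal(0.0, 2.0, (6, 1)) for _ in range(2)])
print(f"Test-retest ICC(3,1) = {icc_3_1(scores):.2f}")
```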
265. Comparison of Depressive Symptom Severity Scores in Low-Income Women. Kneipp, Shawn M., Kairalla, John A., Stacciarini, Jeanne M., Pereira, Deidre, Miller, M. D. (01 November 2010)
BACKGROUND: The Beck Depression Inventory, Second Edition (BDI-II) and the Patient Health Questionnaire-9 (PHQ-9) are considered reliable and valid instruments for measuring depressive symptom severity and screening for a depressive disorder. Few studies have examined the convergent or divergent validity of these two measures, and none has been conducted among low-income women, although rates of depression in this group are extremely high. Moreover, variation in within-subject scores suggests that these measures may be less comparable in select subgroups. OBJECTIVE: We sought to compare the two measures in terms of construct validity and to examine whether within-subject differences in depressive symptom severity scores could be accounted for by select characteristics of low-income women. METHODS: In a sample of 308 low-income women, construct validity was assessed using a multitrait-monomethod matrix approach, between-instrument differences in continuous symptom severity scores were regressed on select characteristics using backward stepwise selection, and differences in depressive symptom classification were assessed using the Mantel-Haenszel test. RESULTS: Convergent validity was high (rs = .80, p < .001). Among predictors that included age, race, education, number of chronic health conditions, history of depression, perceived stress, anxiety, and number of generalized symptoms, none explained within-subject differences in depressive symptom scores between the BDI-II and the PHQ-9 (p > .05, R² < .04). Similarly, depressive symptom classification was consistent (χ² = 172 and 172.6, p < .0001). DISCUSSION: These findings demonstrate that the BDI-II and the PHQ-9 perform similarly among low-income women in measuring depressive symptom severity and classifying levels of depressive symptoms, and that their agreement does not vary across subgroups defined by select demographics.
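The convergent-validity figure reported above (rs = .80) is a Spearman rank correlation between the two instruments' total scores. A minimal sketch on simulated, hypothetical data follows; the study's data are not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr

# Simulated, hypothetical paired totals: both instruments track one
# latent severity level with instrument-specific scaling and noise.
rng = np.random.default_rng(1)
severity = rng.gamma(2.0, 5.0, size=50)
bdi_ii = 1.8 * severity + rng.normal(0, 4, 50)   # BDI-II is scored 0-63
phq_9 = 0.8 * severity + rng.normal(0, 2, 50)    # PHQ-9 is scored 0-27

rho, p = spearmanr(bdi_ii, phq_9)
print(f"Convergent validity (Spearman rho) = {rho:.2f}, p = {p:.3g}")
```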
266. TAAF Stopping Rules for Maximizing the Utility of One-Shot Systems. Maillart, Lisa M. (25 April 1997)
Test-analyze-and-fix (TAAF) is the most commonly recognized method of improving system reliability. The work presented here addresses the question of when to stop testing during TAAF programs involving one-shot systems, when the number of systems to be produced is predetermined and the probabilities of identifying and successfully correcting each failure mode are less than one. The goal is to determine when to cease testing so as to maximize utility, where utility is defined as the number of systems expected to perform successfully in the field after deployment of the lot.
Two TAAF stopping rules are presented. Simulation is used to model TAAF execution under different reliability growth conditions. Four discrete reliability growth models (DRGMs) are used to generate "real-world" reliability growth and to estimate reliability growth from hypothetical observed success/failure data. Ranges for the following parameters are considered: starting reliability, growth rate, maximum achievable reliability, number of systems to be produced, probability of incorrectly identifying a failure mode, and probability of an unsuccessful design modification.
Conclusions are drawn regarding stopping rule performance in terms of stopping rule signal location, utility loss, achieved reliability, and fraction tested. Both rules perform well and are implementable from a practical standpoint. Specific recommendations for stopping rule implementation are given based on the controllable factors, namely estimation methodology and lot size. / Master of Science
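A toy version of the trade-off behind these stopping rules can be simulated in a few lines: each test consumes one unit from the fixed lot, a failure is corrected only with some probability, and utility is the expected number of deployed systems that work. The sketch below finds the best stopping point after the fact (an oracle, not a real-time rule like those in the thesis), and the growth model and all parameters are hypothetical, not the DRGMs used in the work.

```python
import random

def simulate_taaf(lot=1000, r0=0.60, r_max=0.95, growth=0.30,
                  p_id=0.9, p_fix=0.8, max_tests=200, seed=0):
    """Simulate one TAAF program and track the expected utility
    (lot - n) * r of stopping after n tests; each test consumes one
    system, and a failure is corrected only if the failure mode is
    both identified (p_id) and successfully fixed (p_fix)."""
    rng = random.Random(seed)
    r = r0
    best_utility, best_stop = -1.0, 0
    for n in range(max_tests + 1):
        utility = (lot - n) * r          # expected field successes if we stop now
        if utility > best_utility:
            best_utility, best_stop = utility, n
        if rng.random() > r and rng.random() < p_id * p_fix:
            r += growth * (r_max - r)    # each fix moves r toward the ceiling
    return best_stop, best_utility

stop, util = simulate_taaf()
print(f"Stop after {stop} tests; expected field successes ~ {util:.0f}")
```

Testing longer raises reliability but shrinks the deployable lot, which is why an interior stopping point maximizes utility.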
267. Cumulative sum quality control charts: design and applications. Kesupile, Galeboe (January 2006)
Includes bibliographical references (pages 165-169). / Classical statistical process control charts are essential in statistical process control exercises and have therefore constantly received attention for quality improvement. However, establishing control charts requires large samples of data (say, no fewer than 1 000 data points). On the other hand, the small-sample-based Grey System Theory approach is well established and applied in many areas: social, economic, industrial, military, and scientific research fields. In this research, the short-term trend curve given by the GM(1,1) model is merged into the Shewhart and two-sided CUSUM control charts to establish a Grey Predictive Shewhart control chart and a Grey Predictive CUSUM control chart. The GM(2,1) model is also briefly checked for how accurate it can be in control charts compared with the GM(1,1) model. Industrial process data collected from the TBF Packaging Machine Company in Taiwan are analyzed with these new developments as an illustrative example of grey quality control charts.
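For context, the two-sided (tabular) CUSUM statistic that the grey-predictive charts build on accumulates deviations beyond a reference value k·σ and signals when either sum exceeds a decision interval h·σ. A minimal sketch with illustrative data and the conventional k = 0.5, h = 5 settings:

```python
import numpy as np

def two_sided_cusum(x, target, sigma=1.0, k=0.5, h=5.0):
    """Tabular two-sided CUSUM: accumulate deviations beyond the
    reference value k*sigma and flag points where either statistic
    exceeds the decision interval h*sigma."""
    c_plus = np.zeros(len(x))
    c_minus = np.zeros(len(x))
    for i, xi in enumerate(x):
        prev_p = c_plus[i - 1] if i else 0.0
        prev_m = c_minus[i - 1] if i else 0.0
        c_plus[i] = max(0.0, xi - (target + k * sigma) + prev_p)
        c_minus[i] = max(0.0, (target - k * sigma) - xi + prev_m)
    signals = np.where((c_plus > h * sigma) | (c_minus > h * sigma))[0]
    return c_plus, c_minus, signals

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(10, 1, 30), rng.normal(11, 1, 20)])  # shift at t=30
_, _, alarms = two_sided_cusum(data, target=10.0)
print("First out-of-control signal at observation:",
      int(alarms[0]) if alarms.size else None)
```

Because CUSUM accumulates evidence over time, it detects small sustained shifts like this 1σ change much faster than a Shewhart chart would.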
268. Interval-Valued Kriging Models with Applications in Design Ground Snow Load Prediction. Bean, Brennan L. (01 August 2019)
One critical consideration in the design of buildings constructed in the western United States is the weight of settled snow on the roof of the structure. Engineers are tasked with selecting a design snow load that ensures that the building is safe and reliable, without making the construction overly expensive. Western states use historical snow records at weather stations scattered throughout the region to estimate appropriate design snow loads. Various mapping techniques are then used to predict design snow loads between the weather stations. Each state uses different mapping techniques to create their snow load requirements, yet these different techniques have never been compared. In addition, none of the current mapping techniques can account for the uncertainty in the design snow load estimates. We address both issues by formally comparing the existing mapping techniques, as well as creating a new mapping technique that allows the estimated design snow loads to be represented as an interval of values, rather than a single value. In the process, we have improved upon existing methods for creating design snow load requirements and have produced a new tool capable of handling uncertain climate data.
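As background for the kriging-based mapping discussed above, the sketch below implements plain ordinary kriging with an exponential covariance. It is a simplified point predictor, not the interval-valued extension developed in the thesis, and all coordinates, loads, and covariance parameters are hypothetical.

```python
import numpy as np

def ordinary_kriging(coords, values, query, sill=1.0, corr_range=2.0, nugget=0.1):
    """Ordinary kriging point prediction with an exponential covariance.
    The Lagrange-augmented linear system forces the weights to sum to one."""
    def cov(d):
        return nugget * (d == 0) + (sill - nugget) * np.exp(-d / corr_range)
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(coords - query, axis=1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ values)

# Hypothetical station coordinates and design snow loads (kPa).
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
loads = np.array([1.2, 1.5, 1.1, 1.6])
print(f"Predicted load at (0.5, 0.5): "
      f"{ordinary_kriging(coords, loads, np.array([0.5, 0.5])):.2f} kPa")
```

The interval-valued variant replaces the single observed load at each station with an interval, so the prediction itself carries the uncertainty rather than collapsing it to one number.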
269. Heritability estimation of reliable connectome features. Xie, Linhui (January 2018)
Indiana University-Purdue University Indianapolis (IUPUI) / Brain imaging genetics is an emerging research field that studies the genetic architecture underlying brain structure and function by utilizing different imaging modalities. However, not all changes in the brain are a direct result of genetic effects, and the imaging phenotypes that are promising for genetic analyses are usually unknown. This thesis focuses on identifying highly heritable measures of structural brain networks derived from diffusion-weighted magnetic resonance imaging data. Using twin data made available by the Human Connectome Project (HCP), the reliability of edge-level measures of the structural connectome, namely fractional anisotropy, fiber length, and fiber number, as well as seven network-level measures, specifically assortativity coefficient, local efficiency, modularity, transitivity, clustering coefficient, global efficiency, and characteristic path length, was evaluated using intraclass correlation coefficients. Heritability estimates were then obtained for the reliable measures. Across all 64,620 network edges between the 360 brain regions of the Glasser parcellation, approximately 5% showed significantly high heritability in fractional anisotropy, fiber length, or fiber number. Moreover, all tested network-level measures, which capture network integrity, segregation, or resilience, were found to be highly heritable, with 59% to 77% of their variance attributable to an additive genetic effect.
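Twin-based heritability of the kind estimated above can be approximated with Falconer's formula, h² = 2(r_MZ − r_DZ). The sketch below applies it to simulated twin pairs; it is a classical approximation, not the exact variance-components model fitted to the HCP data, and all values are hypothetical.

```python
import numpy as np

def falconer_h2(mz: np.ndarray, dz: np.ndarray) -> float:
    """Falconer's approximation h^2 = 2 * (r_MZ - r_DZ) from the
    phenotype correlations of monozygotic and dizygotic twin pairs."""
    r_mz = np.corrcoef(mz[:, 0], mz[:, 1])[0, 1]
    r_dz = np.corrcoef(dz[:, 0], dz[:, 1])[0, 1]
    return 2.0 * (r_mz - r_dz)

# Simulated fractional-anisotropy-like values for 40 MZ and 40 DZ pairs;
# MZ twins share all additive genetic effects, DZ twins about half.
rng = np.random.default_rng(3)
g = rng.normal(0, 1, 40)
mz = np.column_stack([g + rng.normal(0, 0.4, 40), g + rng.normal(0, 0.4, 40)])
g2 = 0.5 * g + np.sqrt(0.75) * rng.normal(0, 1, 40)
dz = np.column_stack([g + rng.normal(0, 0.4, 40), g2 + rng.normal(0, 0.4, 40)])
print(f"Estimated h^2 = {falconer_h2(mz, dz):.2f}")
```

Screening phenotypes for reliability first, as the thesis does, matters because measurement noise attenuates both twin correlations and therefore the heritability estimate.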
270. The Reliability Paradox: When High Reliability Does Not Signal Reliable Detection of Experimental Effects. Wang, Shuo (24 October 2019)
No description available.