161 |
Optimering av rotordrift för värmeväxlare (Optimisation of rotor operation for heat exchangers). Holländer, Anton, January 2023.
No description available.
|
162 |
Processförbättring vid tillverkning av konstruktionskeramer (Process improvement in the manufacturing of structural ceramics). Garvare, Rickard, January 1998.
This thesis is about implementing Design of Experiments in enterprises manufacturing high-performance ceramics. The manufacturing of ceramics is a complex process that involves problems with variation in product properties and in process performance. Every system in operation generates information that can be used to improve it, but to improve, measurements must be made and the recorded data must be transformed into information. Design of Experiments is about performing tests using a minimum of resources to obtain a maximum of information about a process or a system. Today, most process and product development is supported by expensive, and often misleading, one-factor-at-a-time experiments. To examine how the implementation of Design of Experiments might be facilitated, case studies of two Swedish manufacturers of high-performance ceramics were carried out. Based on theory and the case studies, a model for implementing Design of Experiments is presented. The proposed model consists of three major phases: (1) planning and education; (2) a pilot project with new ways of working; (3) assessment, maintenance and improvement. Design of Experiments appears to be a well-suited technique for structuring the development of high-performance ceramics manufacturing, and its implementation could be facilitated by long-term planning for process improvement. To make assessment and evaluation possible, process performance should be documented not only after but also before an implementation takes place. Both knowledge of statistics and knowledge of the studied processes should be present in the teams carrying out experiments.
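As a concrete illustration of getting maximum information from few runs, here is a minimal sketch of a two-level full factorial experiment with main-effect estimation. The three factors and the response values are hypothetical stand-ins for a ceramics process, not data from the thesis.

```python
# A minimal two-level (2^3) factorial design with main-effect estimation.
# Factors and responses are hypothetical, chosen only for illustration.
import itertools
import numpy as np

# Full 2^3 design matrix in coded units (-1 = low level, +1 = high level).
design = np.array(list(itertools.product([-1, 1], repeat=3)))
factors = ["temperature", "pressure", "binder"]

# Hypothetical measured response (e.g., flexural strength in MPa)
# for the eight runs, in standard order.
response = np.array([402.0, 410.0, 395.0, 430.0, 398.0, 415.0, 401.0, 442.0])

# Main effect of each factor: mean response at the high level
# minus mean response at the low level.
for j, name in enumerate(factors):
    effect = (response[design[:, j] == 1].mean()
              - response[design[:, j] == -1].mean())
    print(f"{name:12s} main effect: {effect:+.1f}")
```

Eight runs estimate all three main effects simultaneously, where a one-factor-at-a-time plan would need separate runs per factor and still miss interactions.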
|
163 |
Squat Detection in Railway Switches & Crossings Using Point Machine Vibration. Zuo, Yang, January 2022.
Railway switches and crossings (S&Cs) are among the most important high-value components in a railway network, and a single failure of such an asset can result in severe network disturbance, huge economic loss, and even severe accidents. Potential defects therefore need to be detected at an early stage, and the status of the S&C must be monitored to prevent such consequences. One type of defect that can occur is called a squat: a local defect like a dent or an open pit in the rail surface. In this thesis, a testbed including a full-scale S&C and a bogie wagon was studied. Vibrations were measured for different squat sizes by an accelerometer mounted at the point machine while the bogie travelled along the S&C. A method of processing the vibration data and the speed data is proposed to investigate the feasibility of detecting and quantifying the severity of a squat. A group of features was extracted and an isolation forest was applied to generate anomaly scores that estimate the health status of the S&C. One key technology applied is wavelet denoising. The study shows that it is possible to monitor the development of the squat size introduced in the testbed by measuring point machine vibrations. The relationships between the normalised peak-to-peak amplitude of the vibration signal and the squat depth were estimated. The results also show that the proposed method is effective and can produce anomaly scores that indicate the general health status of an S&C regarding squat defects.
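A minimal sketch of the processing chain described: wavelet denoising, extraction of a small feature set including the peak-to-peak amplitude, and isolation-forest anomaly scoring. The synthetic signals, the 'db4' wavelet, and the feature choices are assumptions for illustration, not the thesis's exact configuration.

```python
# Sketch: wavelet denoising -> feature extraction -> isolation forest.
import numpy as np
import pywt
from sklearn.ensemble import IsolationForest

def denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising using the universal threshold."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def features(signal):
    """Per-passage feature vector; peak-to-peak amplitude is the key one."""
    return [np.ptp(signal), np.std(signal), np.mean(np.abs(signal))]

rng = np.random.default_rng(0)
# Fifty synthetic "healthy" passages: noise only.
healthy = [features(denoise(rng.normal(0, 1, 2048))) for _ in range(50)]
# One synthetic "squat" passage: noise plus a decaying impact transient.
defect = features(denoise(rng.normal(0, 1, 2048)
                          + 5 * np.exp(-np.arange(2048) / 50)))

model = IsolationForest(random_state=0).fit(healthy)
print("anomaly score (lower = more anomalous):",
      model.score_samples([defect])[0])
```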
|
164 |
An accurate reliability modeling technique for hybrid modular redundant digital systems using modular characteristic error parameters. Nelson, Victor Peter, January 1978.
No description available.
|
165 |
Measures of agreement for qualitative data. Wolfson, Christina, 1955-, January 1978.
No description available.
|
166 |
Dynamic Fault Tree Analysis: State-of-the-Art in Modeling, Analysis, and Tools. Aslansefat, K., Kabir, Sohag, Gheraibia, Y., Papadopoulos, Y., 04 August 2020.
Safety and reliability are two important aspects of dependability that need to be rigorously evaluated throughout the development life-cycle of a system. Over the years, several methodologies have been developed for analysing the failure behaviour of systems. Fault tree analysis (FTA) is one of the well-established and widely used methods for the safety and reliability engineering of systems. The fault tree, in its classical static form, is inadequate for modeling dynamic interactions between components and is unable to include temporal and statistical dependencies in the model. Several attempts have been made to alleviate these limitations of static fault trees (SFTs). Dynamic fault trees (DFTs) were introduced to enhance the modeling power of their static counterpart: the expressiveness of the fault tree was improved by introducing new dynamic gates. While the dynamic gates help to overcome many limitations of SFTs and allow a wide range of complex systems to be analysed, they bring some overhead with them. One such overhead is that the existing combinatorial approaches used for qualitative and quantitative analysis of SFTs are no longer applicable to DFTs. This has led to several successful attempts to develop new approaches for DFT analysis. The methodologies used so far for DFT analysis include, but are not limited to, algebraic solutions, Markov models, Petri nets, Bayesian networks, and Monte Carlo simulation. To illustrate the usefulness of the modeling capability of DFTs, many benchmark studies have been performed in different industries, and software tools have been developed to aid the DFT analysis process. This chapter first provides a brief description of the DFT methodology. Second, it reviews a number of prominent DFT analysis techniques, such as Markov chains, Petri nets, Bayesian networks, and the algebraic approach, and provides insight into their working mechanisms, applicability, strengths, and challenges; the reviewed techniques cover both qualitative and quantitative analysis of DFTs. Third, it discusses the emerging trend of machine-learning-based approaches to DFT analysis. Fourth, it reviews the research performed on sensitivity analysis in DFTs. Finally, it suggests some potential future research directions for DFT-based safety and reliability analysis.
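To make the dynamic-gate idea concrete, here is a minimal Monte Carlo sketch of one such gate, the priority-AND (PAND), which fails only if input A fails before input B; this ordering dependence is exactly what static combinatorial methods cannot capture. The exponential failure model, rates, and mission time are illustrative assumptions, not values from the chapter.

```python
# Monte Carlo estimate of PAND-gate unreliability, checked against the
# closed form available for independent exponential inputs.
import numpy as np

def pand_unreliability(rate_a, rate_b, mission_time, n_samples=200_000, seed=1):
    rng = np.random.default_rng(seed)
    t_a = rng.exponential(1 / rate_a, n_samples)  # failure times of input A
    t_b = rng.exponential(1 / rate_b, n_samples)  # failure times of input B
    # PAND fires when both inputs fail within the mission AND A failed first.
    failed = (t_a < t_b) & (t_b <= mission_time)
    return failed.mean()

la, lb, T = 1e-3, 2e-3, 1000.0  # failure rates (1/h) and mission time (h)
# Closed form of P(t_a < t_b <= T) for independent exponentials.
exact = (1 - np.exp(-lb * T)) - lb / (la + lb) * (1 - np.exp(-(la + lb) * T))
print(f"simulated: {pand_unreliability(la, lb, T):.4f}  closed form: {exact:.4f}")
```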
|
167 |
An Investigation of Software Metrics Affect on Cobol Program Reliability. Day, Henry Jesse II, 20 June 1996.
The purpose of this research was to predict a COBOL program's reliability from software characteristics found in the program's source code. The first step was to select factors, based on the human information processing model, that are associated with changes in computer program reliability. These factors (software metrics) were then studied quantitatively to determine which of them affect COBOL program reliability, and a statistical model was developed that predicts COBOL program reliability. Reliability was selected because the reliability of computer programs can be used by systems professionals and auditors to make decisions. Using the human information processing model to study the act of creating a computer program, several hypotheses were derived about program characteristics and reliability; these hypotheses were categorized as size, structure, and temporal hypotheses. The characteristics were then used to test several prediction models for the reliability of COBOL programs. Program characteristics were measured by a program called METRICS, written by the author in the Pascal programming language, which accepts COBOL programs as input and produces seventeen measures of complexity as output. Actual programs and related data were gathered from a large insurance company over the course of one year. The data were used to test the hypotheses and to find a model for predicting the reliability of COBOL programs. The operational definition of reliability was the probability of a program executing without abending. The size of a program, its cyclomatic complexity, and the number of times a program has been executed were used to predict reliability. A regression model was developed that predicts the reliability of a COBOL program from the program's characteristics. The model had a prediction error of 9.3%, an R² of 15%, and an adjusted R² of 13%. The most important thing learned from the research is that increasing the size of a program's modules, not the total size of a program, is associated with decreased reliability. / Ph.D.
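As a rough illustration of the kind of model described, the sketch below fits a linear regression of reliability on module size, cyclomatic complexity, and execution count. The data are synthetic stand-ins generated to echo the stated finding (larger modules, lower reliability); the coefficients and fit do not reproduce the thesis results.

```python
# Synthetic illustration of a reliability-prediction regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 200
loc_per_module = rng.uniform(20, 400, n)   # average module size (LOC)
cyclomatic = rng.uniform(1, 50, n)         # McCabe cyclomatic complexity
executions = rng.integers(1, 5000, n)      # times the program has run

# Synthetic "true" relationship echoing the finding that larger modules
# are associated with lower reliability, plus noise.
reliability = np.clip(
    0.99 - 2e-4 * loc_per_module - 1e-3 * cyclomatic
    + 1e-3 * np.log1p(executions) + rng.normal(0, 0.02, n),
    0, 1,
)

X = np.column_stack([loc_per_module, cyclomatic, np.log1p(executions)])
model = LinearRegression().fit(X, reliability)
print("R^2:", round(model.score(X, reliability), 3))
print("coefficients (module LOC, complexity, log executions):", model.coef_)
```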
|
168 |
Reliability Transform Method. Young, Robert Benjamin, 22 July 2003.
Since the end of the Cold War, the United States has been the single dominant naval power in the world. The emphasis of the last decade has been on reducing cost while maintaining this status. As the Navy's infrastructure decreases, so too does its ability to be an active participant in all aspects of ship operations and design. One way the Navy has achieved large savings is by using the Military Sealift Command to manage the day-to-day operations of the Navy's auxiliary and underway replenishment ships. While these ships are an active part of the Navy's fighting force, they are infrequently put into harm's way. The natural progression in the design of these ships is to have them fully classified under current American Bureau of Shipping (ABS) rules, as they closely resemble commercial ships. The first new design to be fully classed under ABS is the T-AKE. The Navy and ABS consider the T-AKE program a trial to determine whether a partnership between the two organizations can extend to the classification of all new naval ships. A major difficulty in this venture is how to translate the knowledge base that led to the development of current military specifications into rules that ABS can use for future ships.

The specific task required by the Navy in this project is to predict the inherent availability of the new T-AKE class ship. To accomplish this task, the reliability of T-AKE equipment and machinery must be known. Under normal conditions, reliability data would be obtained from past ships with a similar mission, equipment and machinery. Due to the unique nature of the T-AKE acquisition, this is not possible. Because of the use of commercial off-the-shelf (COTS) equipment and machinery, military equipment and machinery reliability data cannot be used directly to predict T-AKE availability. This problem is compounded by the fact that existing COTS equipment and machinery reliability data developed in commercial applications may not be applicable to a military application. A method for deriving reliability data for commercial equipment and machinery adapted or used in military applications is required.

A Reliability Transform Method is developed that allows the interpolation of reliability data between commercial equipment and machinery operating in a commercial environment, commercial equipment and machinery operating in a military environment, and military equipment and machinery operating in a military environment. The reliability data for T-AKE are created using this Reliability Transform Method and the commercial reliability data. These reliability data are then used to calculate the inherent availability of T-AKE. / Master of Science
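A minimal sketch of the inherent-availability calculation the task requires, A_i = MTBF / (MTBF + MTTR), with a hypothetical environment factor standing in for the Reliability Transform Method; the actual transform derived in the thesis is not reproduced here.

```python
# Inherent availability from transformed reliability data (illustrative).
def inherent_availability(mtbf_hours: float, mttr_hours: float) -> float:
    """A_i = MTBF / (MTBF + MTTR), ignoring logistics and admin delays."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Commercial field data for a COTS pump (made-up numbers).
mtbf_commercial = 12_000.0   # hours
mttr = 8.0                   # hours

# Hypothetical derating for the harsher military environment; the thesis's
# transform interpolates this adjustment rather than assuming a constant.
environment_factor = 0.7
mtbf_military = environment_factor * mtbf_commercial

print(f"A_i commercial: {inherent_availability(mtbf_commercial, mttr):.5f}")
print(f"A_i military:   {inherent_availability(mtbf_military, mttr):.5f}")
```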
|
169 |
Quantifying validity and reliability of GPS derived distances during simulated tennis movements. Tessaro, Edoardo, 09 February 2017.
Tennis is a competitive sport attracting millions of players and fans worldwide. During a competition, the physical component crucially affects the final result of a match. In field sports such as soccer, physical demand data are collected using the global positioning system (GPS), but there are questions regarding the validity and reliability of GPS technology for court sports such as tennis. The purpose of this study is to determine the validity and reliability of GPS-derived distances covered during simulated tennis movements. This was done by comparing GPS-recorded distances to distances determined with a calibrated trundle wheel. Two SPI HPU units were attached to the wheel. Four different trials were performed to assess accuracy and reliability: a distance trial (DIST), a shuttle run trial (SHUT), a change of direction trial (COD) and a random movement trial (RAND). The latter three trials were performed on a tennis court and designed to mimic movements during a tennis match. Bland-Altman analysis showed that in all trials there were small differences between the trundle wheel and GPS-derived distances. Bias for the DIST, SHUT, COD and RAND trials was -0.02±0.10, -0.51±0.15, -0.24±0.19 and 0.28±0.20%, respectively. Root mean squared (RMS) errors for the four trials were 0.41±0.10, 1.28±0.10, 1.70±0.10 and 1.55±0.13%. Analysis of paired units showed good reliability, with mean bias and RMS errors <2%. These results suggest that SPI HPU units are both accurate and reliable for simulated tennis movements and can be confidently used to determine the physical demands of court sports like tennis. / Master of Science /

Wearable technology, including global positioning system (GPS) devices, is becoming increasingly popular among athletes and sports teams. These devices offer a quick and simple method of determining distances covered, movement speeds and accelerations during training and competition. Such data are then used to determine match or training load and help the athlete maximize training benefits while minimizing injury risk. However, for these devices to be used successfully, their validity and reliability must be determined. GPS has been shown to be valid and reliable for field-based sports, like soccer, but not for court-based sports, such as tennis. In this study, the GPSports SPI HPU unit was found to be both valid and reliable for simulated tennis movements. The error between the unit distances and the distances determined by the trundle wheel was less than 6% and typically within 2%. The results of this study indicate that the SPI HPU devices can be used to determine workload during tennis. Coaches and trainers can be confident that the data generated by these units are both reliable and valid.
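For reference, a minimal sketch of the agreement statistics used above: per-trial percentage differences, Bland-Altman bias with its standard deviation, and RMS error. The distances are made-up examples, not the study's data.

```python
# Bland-Altman-style bias and RMS error between GPS and reference distances.
import numpy as np

wheel = np.array([400.0, 402.1, 398.7, 401.5, 399.9])  # trundle wheel (m)
gps = np.array([399.1, 401.8, 397.2, 402.3, 398.8])    # GPS-derived (m)

pct_diff = 100 * (gps - wheel) / wheel      # per-trial percent difference
bias = pct_diff.mean()                      # systematic error (bias)
sd = pct_diff.std(ddof=1)                   # spread of the differences
rms = np.sqrt(np.mean(pct_diff**2))         # overall error magnitude

print(f"bias: {bias:+.2f}% ± {sd:.2f}%, RMS error: {rms:.2f}%")
```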
|
170 |
Enhancing Autonomous Underwater Vehicle Mission Dependability Through Adaptive Dynamic Redundancy. Barhaido, Matteus, January 2024.
This master's thesis presents and discusses a novel approach to enhancing the mission dependability of Autonomous Underwater Vehicles (AUVs) through Adaptive Dynamic Redundancy (ADR). AUVs demand improved dependability not only because they operate in harsh and unpredictable waters, but also because mission failures occur, leading to mission aborts and the potential loss of valuable data and equipment. ADR was implemented for AUV thrusters using carefully curated methods. ADR stands out because it maintains high dependability at lower power consumption, basing the level of redundancy on necessity, environmental data and conditions, thruster health, and mission criticality. By switching between Dual Modular Redundancy (DMR) and 1-out-of-3 redundancy, ADR aims to minimize the risk of failure while optimizing power consumption and reducing wear and tear on the thrusters, both in present operation and over their remaining life. A comparative analysis demonstrates that ADR can enhance dependability by improving the reliability, safety, and operational efficiency of AUVs compared with other standardized redundancy concepts. The findings suggest that ADR not only prevents failures more effectively than Triple Modular Redundancy (TMR) and DMR, but also significantly extends the mission's lifespan and increases overall mission success rates.
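A toy sketch of the redundancy trade-off behind ADR: closed-form mission reliability for DMR (treated here as 1-out-of-2 for survival), TMR (2-out-of-3 voting) and 1-out-of-3 configurations, plus a hypothetical switching rule. The threshold, health scores, and rule structure are illustrative assumptions, not the thesis's implementation.

```python
# Per-phase mission reliability of the configurations named above, and a
# hypothetical ADR mode-switching rule. Illustrative only.

def r_dmr(r: float) -> float:
    # DMR treated as 1-out-of-2 for survival (assumption): the mission
    # continues if at least one of two thrusters works.
    return 1 - (1 - r) ** 2

def r_tmr(r: float) -> float:
    # Classic 2-out-of-3 majority voting, for comparison.
    return 3 * r**2 - 2 * r**3

def r_1oo3(r: float) -> float:
    # 1-out-of-3: survives if any of three thrusters works.
    return 1 - (1 - r) ** 3

def adr_mode(thruster_health: list[float], mission_criticality: str) -> str:
    """Hypothetical switching rule: escalate from power-lean DMR to
    1-out-of-3 when a thruster degrades or the mission phase is critical."""
    if min(thruster_health) < 0.9 or mission_criticality == "high":
        return "1oo3"
    return "DMR"

r = 0.95  # illustrative single-thruster reliability for one mission phase
print(f"R_DMR={r_dmr(r):.4f}  R_TMR={r_tmr(r):.4f}  R_1oo3={r_1oo3(r):.4f}")
print("selected mode:", adr_mode([0.97, 0.99, 0.99], "low"))   # -> DMR
print("selected mode:", adr_mode([0.97, 0.88, 0.99], "low"))   # -> 1oo3
```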
|