  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Design of a reliability methodology : modelling the influence of temperature on gate oxide reliability

Owens, Gethin Lloyd January 2007 (has links)
An Integrated Reliability Methodology (IRM) is presented that accommodates the changes brought by technology growth and includes several new device degradation models. Each model is based on a physics-of-failure approach and includes the effects of temperature. At all stages the models are verified experimentally on modern deep sub-micron devices. The research provides the foundations of a tool which gives the user the opportunity to make appropriate trade-offs between performance and reliability, and that can be implemented in the early stages of product development.
2

A systems engineering approach to servitisation system modelling and reliability assessment

Astley, Kenneth Richard January 2011 (has links)
Companies are changing their business model in order to improve their long term competitiveness. Once where they provided only products, they are now providing a service with that product resulting in a reduced cost of ownership. Such a business case benefits both customer and service supplier only if the availability of the product, and hence the service, is optimised. For highly integrated product and service offerings this means it is necessary to assess the reliability monitoring service which underpins service availability. Reliability monitoring service assessment requires examination of not only product monitoring capability but also the effectiveness of the maintenance response prompted by the detection of fault conditions. In order to address these seemingly dissimilar aspects of the reliability monitoring service, a methodology is proposed which defines core aspects of both the product and service organisation. These core aspects provide a basis from which models of both the product and service organisation can be produced. The models themselves though not functionally representative, portray the primary components of each type of system, the ownership of these system components and how they are interfaced. These system attributes are then examined to establish system risk to reliability by inspection, evaluation of the model or by reference to model source documentation. The result is a methodology that can be applied to such large scale, highly integrated systems at either an early stage of development or in latter development stages. The methodology will identify weaknesses in each system type, indicating areas which should be considered for system redesign and will also help inform the analyst of whether or not the reliability monitoring service as a whole meets the requirements of the proposed business case.
3

System failure modelling using binary decision diagrams

Remenyte-Prescott, Rasa January 2007 (has links)
The aim of this thesis is to develop the Binary Decision Diagram (BDD) method for the analysis of coherent and non-coherent fault trees. At present the well-known ite technique for converting fault trees to BDDs is used. Difficulties appear when the ordering scheme for basic events needs to be chosen, because it can have a crucial effect on the size of a BDD. An alternative method for constructing BDDs from fault trees which addresses these difficulties has been proposed. The Binary Decision Diagram method provides an accurate and efficient tool for analysing coherent and non-coherent fault trees. The method is used for the qualitative and quantitative analyses and it is much faster and more efficient than the conventional techniques of Fault Tree Analysis. Simplification techniques for fault trees prior to the BDD conversion have been applied, and a method for the qualitative analysis of BDDs for coherent and non-coherent fault trees has been developed. A new method for the qualitative analysis of non-coherent fault trees has been proposed. An analysis of the efficiency has been carried out, comparing the proposed method with the other existing methods for calculating prime implicant sets. The main advantages and disadvantages of the methods have been identified. The combined method of fault tree simplification and the BDD approach has been applied to phased missions. This application contains coherent and non-coherent fault trees. Methods to perform their simplification, conversion to BDDs, minimal cut sets/prime implicant sets calculation, and the mission unreliability evaluation have been produced.
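The fault-tree-to-BDD conversion the abstract refers to can be illustrated with a short sketch. The representation below (plain tuples under a fixed basic-event ordering, with reduction but no hash-consing or complement edges) is a simplification for illustration, not the thesis's algorithm; the gate structure and variable names are invented for the example.

```python
ZERO, ONE = "0", "1"                      # terminal nodes

def mk(var, high, low):
    """Reduced node: skip the test when both branches agree."""
    return high if high == low else (var, high, low)

def var_node(name):
    return mk(name, ONE, ZERO)

def top_var(node):
    return None if node in (ZERO, ONE) else node[0]

def apply_op(op, f, g, order):
    """Combine two BDDs with 'and'/'or' under the fixed event ordering."""
    if f in (ZERO, ONE) and g in (ZERO, ONE):
        a, b = f == ONE, g == ONE
        return ONE if ((a and b) if op == "and" else (a or b)) else ZERO
    cands = [v for v in (top_var(f), top_var(g)) if v is not None]
    v = min(cands, key=order.index)       # earliest variable decides first
    f1, f0 = (f[1], f[2]) if top_var(f) == v else (f, f)
    g1, g0 = (g[1], g[2]) if top_var(g) == v else (g, g)
    return mk(v, apply_op(op, f1, g1, order),
                 apply_op(op, f0, g0, order))

def evaluate(node, assignment):
    """Walk the BDD for a 0/1 assignment of basic events."""
    while node not in (ZERO, ONE):
        var, hi, lo = node
        node = hi if assignment[var] else lo
    return node == ONE

# Illustrative fault tree TOP = (A AND B) OR C, ordering A < B < C
order = ["A", "B", "C"]
A, B, C = (var_node(x) for x in order)
top = apply_op("or", apply_op("and", A, B, order), C, order)
```

As the abstract notes, the chosen ordering (here simply A < B < C) can change the size of the resulting diagram dramatically.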
4

Three competing risk problems in the study of mechanical systems reliability

Burnham, Michael Richard January 2010 (has links)
This thesis considers three problems within the field of competing risks modelling in reliability. The first problem concerns the question of identifiability within certain subclasses of Doyen and Gaudoin's recently proposed generalised competing risks framework. Bedford and Lindqvist have shown identifiability for one such subclass - a two component series system in which, every time a component fails, it is restored to a state "as good as new", while the other component is restored to a state "as bad as old". In this thesis two different subclasses are shown to be identifiable. The first is a generalisation of the Bedford and Lindqvist example for series systems with n components. The second is an n component series system in which each time a component fails it is restored to a state "as good as new". At the same time the remaining components are restored to a state "as good as new" with probability p (which may depend on both the component being restored and the component that failed), or to a state "as bad as old" with probability (1 - p). The second problem concerns the use of competing risks models to study opportunistic maintenance. Bedford and Alkali proposed the following model - the system exhibits a sequence of warning signals, the inter-arrival times of which are assumed to be independently distributed (but non-identical) exponential random variables. The hazard rate of the time to system failure is modelled as a piecewise exponential distribution, in which the hazard rate is constant between signals. A sequence of maintenance opportunities occurs according to a homogeneous Poisson process and the first opportunity after the kth signal is used to preventatively maintain the system. In this thesis closed form expressions for the above model are calculated (subject to some minor technical restrictions) for the marginal distributions of the time to both preventative and corrective maintenance. Also, the sub-distribution of the time to corrective maintenance is calculated. The third problem concerns the estimation of the marginal distribution for one of two independent competing risks, when the risk that caused the system shut-down is unknown for some of the observations in the dataset. In this thesis a new estimator based on the Kaplan-Meier product limit estimator is developed for the above set-up. A re-distribution to the right algorithm is also developed and this is shown to be equivalent to the new estimator. The new estimator is also shown to be consistent.
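The Kaplan-Meier product limit estimator that the third problem builds on can be sketched in a few lines; this is the standard estimator for right-censored data, not the thesis's masked-cause extension, and the data below are invented for illustration.

```python
# Standard Kaplan-Meier product-limit estimator. Input: (time, observed)
# pairs, where observed=False means the observation was censored (e.g.
# the system was shut down by the competing risk).
def kaplan_meier(data):
    """Return [(t, S(t))] at each observed failure time."""
    data = sorted(data)                   # order by event time
    n_at_risk = len(data)
    surv, out = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for (u, obs) in data if u == t and obs)
        ties = sum(1 for (u, _) in data if u == t)
        if deaths:
            surv *= 1.0 - deaths / n_at_risk   # product-limit step
            out.append((t, surv))
        n_at_risk -= ties                 # failures and censorings leave risk set
        i += ties
    return out

# Failures at t=1, 3, 4; censoring at t=2
km = kaplan_meier([(1, True), (2, False), (3, True), (4, True)])
```

With these four observations the estimate drops to 0.75 at t=1, to 0.375 at t=3 (the censored unit has left the risk set), and to 0 at t=4.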
5

Multivariate statistical modelling for fault analysis and quality prediction in batch processes

Hong, Jeong Jin January 2011 (has links)
Multivariate statistical process control (MSPC) has emerged as an effective technique for monitoring processes with a large number of correlated process variables. MSPC techniques use principal component analysis (PCA) and partial least squares (PLS) to project the high dimensional correlated process variables onto a low dimensional principal component or latent variable space and process monitoring is carried out in this low dimensional space. This study is focused on developing enhanced MSPC techniques for fault diagnosis and quality prediction in batch processes. A progressive modelling method is developed in this study to facilitate fault analysis and fault localisation. A PCA model is developed from normal process operation data and is used for on-line process monitoring. Once a fault is detected by the PCA model, process variables that are related to the fault are identified using contribution analysis. The time information on when abnormalities occurred in these variables is identified using time series plot of the squared prediction errors (SPE) on these variables. These variables are then removed and another PCA model is developed using the remaining variables. If the faulty batch cannot be detected by the new PCA model, then the remaining variables are not related to the fault. If the faulty batch can still be detected by the new PCA model, then further variables associated with the fault are identified from SPE contribution analysis. The procedure is repeated until the faulty batch can no longer be detected using the remaining variables. Multi-block methods are then applied with the progressive modelling scheme to enhance fault analysis and localisation efficiency. The methods are tested on a benchmark simulated penicillin production process and real industrial data. An enhanced multi-block PLS predictive modelling method is developed in this study. 
It is based on the hypothesis that meaningful variable selection can lead to better prediction performance. A data partitioning method for enhanced predictive process modelling is proposed; it enables data to be separated into blocks by measurement time. Model parameters can be used to express contributions
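The SPE-based monitoring step described above can be sketched as follows, assuming the PCA loading matrix P (orthonormal columns fitted on normal-operation data) is already available; the loadings and sample here are illustrative, not from the thesis.

```python
# SPE (squared prediction error) and per-variable contributions for one
# sample x, given PCA loadings P with k retained components.
def spe_contributions(x, P):
    """Residual e = x - P P^T x; SPE = sum(e_i^2); contribution_i = e_i^2."""
    k = len(P[0])                         # number of retained components
    # scores t = P^T x
    t = [sum(P[i][j] * x[i] for i in range(len(x))) for j in range(k)]
    # reconstruction x_hat = P t
    x_hat = [sum(P[i][j] * t[j] for j in range(k)) for i in range(len(x))]
    e = [xi - xh for xi, xh in zip(x, x_hat)]
    contrib = [ei * ei for ei in e]       # per-variable SPE contribution
    return sum(contrib), contrib

# Two variables, one retained component along (1, 1)/sqrt(2)
P = [[2 ** -0.5], [2 ** -0.5]]
spe, contrib = spe_contributions([1.0, 3.0], P)
```

In the progressive scheme above, variables with large contributions would be flagged, removed, and the model refitted on the remainder.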
6

Design for reliability in mechanical systems

Stephenson, John Antony January 1996 (has links)
No description available.
7

The exploratory analysis of reliability data

Walls, L. A. January 1987 (has links)
The thesis outlines the usual parametric analysis of field failure time data for repairable equipments. Due to shortcomings of this black-box approach, exploratory reliability analysis has been adopted to exploit the available data and so learn more about the physical failure process. Elements of exploratory analysis have appeared in recent statistical applications of point process, time series and multivariate methods in the area. These approaches are reviewed and investigated. Exploratory analysis of extensive field time-between-failure data and limited repair time data for hardware equipments has been undertaken. Despite arising from different physical mechanisms, software failure interval data has the same underlying statistical point process as such hardware data and has been similarly investigated. Simple graphs, often with simulation bounds, inference procedures for nonhomogeneous Poisson processes and Box-Jenkins analysis have been used to search for and model aspects of structure expected in reliability data. The appropriateness of the methods is discussed. As well as revealing that (constant) failure rates are often unsuitable summaries, exploratory analysis has highlighted features previously unknown or ignored. The identified time structures, data irregularities and other complexities are described. Exploratory analysis indicated potential dependent failures. A simulation-based graphical tool for highlighting these important events is described. Applications to real data have shown this is a promising approach. Principal coordinates and cluster analyses have been used to explore multivariate field data for automatic fire detection systems in an attempt to identify circumstances leading to false alarms. Data problems limited this analysis. Exploratory analysis has revealed it is common in reliability to assume a too simplistic model formulation compared with the true complex data structures. The implications of this for reliability data collection, storage and analysis are discussed. While an exploratory approach is generally successful, some specialisation of standard statistical methods for reliability is desirable.
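The simulation bounds mentioned above rest on the ability to simulate a nonhomogeneous Poisson process; a minimal sketch using Lewis-Shedler thinning follows. The power-law intensity and all parameter values are illustrative assumptions, not taken from the thesis.

```python
import random

def nhpp_thinning(rate, rate_max, horizon, rng):
    """Thinning: propose at constant rate_max, keep with prob rate(t)/rate_max."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)    # next candidate arrival
        if t > horizon:
            return times
        if rng.random() < rate(t) / rate_max:
            times.append(t)

# Illustrative power-law (Crow/AMSAA-style) intensity, bounded by
# rate_max on [0, 100]: the failure rate worsens over time.
rng = random.Random(0)
events = nhpp_thinning(lambda t: 0.5 * t ** 0.5, rate_max=5.0,
                       horizon=100.0, rng=rng)
```

Repeating such a simulation many times gives the envelope against which an observed cumulative-failure plot can be compared.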
8

The automated inspection of knitted fabric

Lefley, M. January 1988 (has links)
No description available.
9

An investigation into the development and potential of a computer based robot selection aid

Ioannou, A. January 1983 (has links)
No description available.
10

Probabilistic failure prediction of rocket motor components

Cooper, N. R. January 1988 (has links)
No description available.
