171

Monitoring and Implementing Early and Cost-Effective Software Fault Detection / Övervakning och implementation av tidig och kostnadseffektiv feldetektering

Damm, Lars-Ola January 2005 (has links)
Avoidable rework constitutes a large part of development projects, i.e. 20-80 percent depending on the maturity of the organization and the complexity of the products. Large amounts of avoidable rework commonly occur when many faults remain to be corrected in the late stages of a project. In fact, research studies indicate that the cost of rework could be decreased by up to 30-50 percent by finding more faults earlier. However, since larger software systems have an almost infinite number of usage scenarios, trying to find most faults early through, for example, formal specifications and extensive inspections is very time-consuming. Such an approach is therefore not cost-effective in products that do not have extremely high quality requirements. For example, in market-driven development, time-to-market is at least as important as quality. Further, some areas, such as hardware-dependent aspects of a product, might not be verifiable early through, for example, code reviews or unit tests. In such environments, rework reduction is therefore primarily about finding faults earlier to the extent that it is cost-effective, i.e. finding the right faults in the right phase. Through a set of case studies at a department at Ericsson AB, this thesis investigates how to achieve early and cost-effective fault detection through improvements in the test process. The case studies include investigations of how to identify which improvements are most beneficial to implement, possible solutions to the identified improvement areas, and approaches for following up on implemented improvements. The contributions of the thesis include a framework for component-level test automation and test-driven development. Additionally, the thesis provides methods for using fault statistics to identify and monitor test process improvements.
In particular, we present results from applying methods that quantify unnecessary fault costs and pinpoint which phases and activities to focus improvements on in order to achieve earlier and more cost-effective fault detection. The goal of the methods is to make organizations strive towards finding the right fault in the right test phase, which commonly is an early test phase. The developed methods were also used for evaluating the results of implementing the above-mentioned test framework at Ericsson AB. Finally, the thesis demonstrates how the implementation of such improvements can be continuously monitored to obtain rapid feedback on the status of defined goals. This was achieved through enhancements of previously applied fault analysis methods. / The thesis concerns how a software development organization can find faults earlier in the development process. The focus is on finding the right fault in the right phase, i.e. when it is most cost-effective. The thesis presents a collection of case studies on this subject carried out at Ericsson AB. Keywords: process improvement, fault analysis, early fault detection
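The idea of quantifying unnecessary fault cost can be illustrated with a minimal sketch. The phase names, per-phase cost figures, and the `avoidable_cost` helper below are invented for illustration, not taken from the thesis; the principle is simply that a fault found later than the phase it belongs to carries the cost difference between the two phases.

```python
# Illustrative sketch (not the thesis' exact method): quantify avoidable
# fault cost from assumed per-phase average correction costs and the phase
# in which each fault should have been found.
PHASES = ["unit", "function", "system", "operation"]          # assumed order
AVG_COST = {"unit": 1, "function": 4, "system": 10, "operation": 30}  # assumed hours/fault

def avoidable_cost(faults):
    """faults: list of (phase_found, phase_belonging) tuples."""
    total = 0
    for found, belonging in faults:
        # A fault found later than it belongs carries unnecessary cost.
        if PHASES.index(found) > PHASES.index(belonging):
            total += AVG_COST[found] - AVG_COST[belonging]
    return total

faults = [("system", "unit"), ("operation", "function"), ("unit", "unit")]
print(avoidable_cost(faults))  # 9 + 26 + 0 = 35
```

Summing such differences over a project's fault log indicates which phases' fault leakage is most expensive, and hence where test improvements pay off most.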
172

Increasing the availability of a service through Hot Passive Replication / Öka tillgängligheten för en tjänst genom hot passive replication

Bengtson, John, Jigin, Ola January 2015 (has links)
This bachelor thesis examines how redundancy is used to tolerate a process crash fault on a server in a system developed for emergency situations. The goal is to increase the availability of the service the system delivers. The redundant solution uses hot passive replication with one primary replica manager and one backup replica manager. With this approach, code for updating the backup, for establishing a new primary, and for fault detection to detect a process crash has been written. After implementation, the redundant solution was evaluated. The first part of the evaluation showed that the redundant solution can continue to deliver a service in case of a process crash on the primary replica manager. The second part showed that the average response time for an upload request and a download request had increased by 31% compared to the non-redundant solution. The standard deviation was calculated for the response times, and it showed that the response time of an upload request could be considerably higher than the average. This large deviation was investigated, and the conclusion was that database insertion was the cause.
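The replication scheme can be sketched as a small in-process simulation. This is an illustration of the general hot passive pattern only; the class and method names are invented, and the thesis' actual implementation (separate processes, network fault detection) is far more involved.

```python
# Minimal in-process sketch of hot passive replication: every update is
# pushed to the backup immediately ("hot"), and a crash of the primary
# triggers promotion of the backup. Illustrative only.
class ReplicaManager:
    def __init__(self):
        self.state = {}
        self.alive = True

class PassiveGroup:
    def __init__(self):
        self.primary = ReplicaManager()
        self.backup = ReplicaManager()

    def handle(self, key, value):
        if not self.primary.alive:            # fault detection: crash observed
            self.primary = self.backup        # establish the backup as primary
            self.backup = ReplicaManager()    # recruit a fresh backup
        self.primary.state[key] = value       # execute request on the primary
        self.backup.state[key] = value        # hot update: backup kept current

group = PassiveGroup()
group.handle("file1", b"data")
group.primary.alive = False                   # simulate a process crash
group.handle("file2", b"more")
print(group.primary.state)                    # backup took over with file1 intact
```

Because the backup is updated on every request rather than on failover, the promoted replica already holds the full state, which is what buys availability at the cost of the per-request overhead the evaluation measured.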
173

Fault detection on an experimental aircraft fuel rig using a Kalman filter based FDI screen

Bennett, Paul J. January 2010 (has links)
Reliability is an important issue across industry, driven by factors such as the high safety levels required in industries like aviation, the need for mission success with military equipment, and the avoidance of monetary losses (due to unplanned outage) in the process and many other industries. Fault detection and identification helps to reveal the presence of faults, improving mission success or increasing the up-time of plant equipment. Such systems can take the form of pattern recognition, statistical and geometric classifiers, soft computing methods, or complex model-based methods. This study deals with the latter and focuses on a specific type of model, the Kalman filter. The Kalman filter is an observer which estimates the states of a system, i.e. its physical variables, based upon the current state estimate and knowledge of the system inputs. This relies upon a mathematical model of the system in order to predict its outputs at any given time; feedback from the plant corrects minor deviations between the system and the Kalman filter model. Comparison between the predicted and the real outputs indicates the presence of a fault. On systems with several inputs and outputs, banks of these filters can be used to detect and isolate the various faults that occur in the process and its sensors and actuators. The thesis examines the application of these diagnostic techniques to a laboratory-scale aircraft fuel system test-rig. The first stage of the research project required the development of a mathematical model of the fuel rig. Test data acquired by experiment were used to validate the system model against the fuel rig. This nonlinear model was then simplified to create several linear state-space models of the fuel rig.
These linear models are then used to develop the Kalman filter Fault Detection and Identification (FDI) system through appropriate tuning of the Kalman filter gains, careful choice of residual thresholds to determine fault condition boundaries, and logic to identify the location of the fault. Additional performance gains are achieved by statistical evaluation of the residual signal and by automatic threshold calculation. The results demonstrate the positive capture of a fault condition and identification of its location in an aircraft fuel system test-rig. The faults captured include hard faults, such as sensor malfunction and actuator failure, which produce large deviations in the residual signals, and softer faults, such as performance degradation and fluid leaks in the tanks and pipes. Faults of a smaller magnitude are captured very well, albeit over a longer time range. The performance of the fault detection and identification was further improved by statistically evaluating the residual signal and by automatic threshold determination. The location of the fault is identified by mapping the possible fault permutations against the Kalman filter behaviour, providing full discrimination between any faults present. Overall, the Kalman filter based FDI provided positive results in capturing and identifying system faults on the test-rig.
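The predict-correct-compare loop at the heart of this FDI scheme can be sketched for a scalar system. All model parameters, noise levels, and the threshold below are invented for illustration; the thesis uses banks of filters on multi-state models of the fuel rig with carefully tuned gains and thresholds.

```python
# Sketch of residual-based fault detection with a scalar Kalman filter:
# a fault is flagged when the innovation (measured minus predicted output)
# exceeds a fixed threshold. Parameters are illustrative assumptions.
def kalman_fdi(measurements, a=1.0, c=1.0, q=0.01, r=0.1, threshold=1.0):
    x, p = 0.0, 1.0                          # state estimate and covariance
    alarms = []
    for k, z in enumerate(measurements):
        x, p = a * x, a * p * a + q          # predict next state
        residual = z - c * x                 # innovation: measured - predicted
        s = c * p * c + r                    # innovation covariance
        gain = p * c / s
        x, p = x + gain * residual, (1 - gain * c) * p  # correct estimate
        if abs(residual) > threshold:        # residual test flags a fault
            alarms.append(k)
    return alarms

healthy = [0.1, -0.05, 0.02, 0.0]
faulty = healthy + [2.5, 2.6]                # sensor bias appears at sample 4
print(kalman_fdi(faulty))                    # alarms at the biased samples
```

Running a bank of such filters, each tuned to a different fault hypothesis, and comparing which residuals fire is what allows the faults to be isolated as well as detected.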
174

Nonlinear fault detection and diagnosis using kernel based techniques applied to a pilot distillation column

Phillpotts, David Nicholas Charles 15 January 2008 (has links)
Fault detection and diagnosis is an important problem in process engineering. In this dissertation, the use of multivariate techniques for fault detection and diagnosis is explored in the context of statistical process control. Principal component analysis (PCA) and its extension, kernel principal component analysis, are proposed to extract features from process data. Kernel based methods can model nonlinear processes by forming higher-dimensional representations of the data. Discriminant methods can extend feature extraction methods by increasing the isolation between different faults, which is shown to aid fault diagnosis. Linear and kernel discriminant analysis are proposed as fault diagnosis methods. Data from a pilot-scale distillation column were used to explore the performance of the techniques. The models were trained with normal and faulty operating data and tested with unseen and/or novel fault data. All the techniques demonstrated at least some fault detection and diagnosis ability. Linear PCA was particularly successful, mainly due to the ease of training and the ability to relate the scores back to the input data. The attributes of these multivariate statistical techniques were compared to the goals of statistical process control and the desirable attributes of fault detection and diagnosis systems. / Dissertation (MEng (Control Engineering))--University of Pretoria, 2008. / Chemical Engineering / MEng / Unrestricted
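The linear PCA monitoring idea can be sketched with synthetic data: fit principal components on normal operation, then flag samples whose squared prediction error (SPE, also called the Q statistic) exceeds a control limit. The data, the single retained component, and the empirical 99% limit are illustrative assumptions, not the dissertation's actual column models.

```python
import numpy as np

# Sketch of PCA-based fault detection via the squared prediction error
# (SPE/Q statistic). Synthetic data stand in for the distillation column.
rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 1)) @ np.array([[1.0, 0.8, 0.5]])  # correlated vars
normal += 0.05 * rng.normal(size=normal.shape)                    # sensor noise

mean = normal.mean(axis=0)
X = normal - mean
_, _, vt = np.linalg.svd(X, full_matrices=False)
P = vt[:1].T                      # retain one principal component

def spe(sample):
    x = sample - mean
    residual = x - P @ (P.T @ x)  # part of x the PCA model cannot explain
    return float(residual @ residual)

limit = np.quantile([spe(row) for row in normal], 0.99)  # empirical control limit
fault = np.array([1.0, 0.8, -2.0])   # breaks the normal correlation structure
print(spe(fault) > limit)            # flagged as a fault
```

A sample can sit within the normal range of every individual variable yet still violate the correlation structure, which is exactly what the SPE detects and what univariate alarms miss.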
175

Modelling and multivariate data analysis of agricultural systems

Lawal, Najib January 2015 (has links)
The broader research area investigated during this programme was conceived from a goal to contribute towards solving the challenge of food security in the 21st century through the reduction of crop loss and the minimisation of fungicide use, to be achieved through the introduction of an empirical approach to agricultural disease monitoring. In line with this, the SYIELD project, initiated by a consortium involving the University of Manchester and Syngenta, among others, proposed a novel biosensor design that can electrochemically detect viable airborne pathogens by exploiting the biology of plant-pathogen interaction. This approach offers an improvement on the inefficient and largely experimental methods currently used. Within this context, this PhD focused on multidisciplinary methods addressing three key objectives central to the success of the SYIELD project: local spore ingress near canopies, the evaluation of a suitable model that can describe spore transport, and multivariate analysis of the potential monitoring network built from these biosensors. The local transport of spores was first investigated through a field trial experiment at Rothamsted Research, UK, in order to investigate spore ingress in OSR canopies, generate reliable data for testing the prototype biosensor, and evaluate a trajectory model. During the experiment, spores were air-sampled and quantified using established manual detection methods. Results showed that manual methods such as colourimetric detection are more sensitive than the proposed biosensor, suggesting that the proxy measurement mechanism used by the biosensor may not be reliable in live deployments, where spores are likely to be contaminated by impurities and other inhibitors of oxalic acid production. Spores quantified using the more reliable quantitative Polymerase Chain Reaction proved informative and provided novel data of high experimental value.
These dispersal data were found to fit a power decay law, a finding consistent with experiments in other crops. In the second area investigated, a 3D backward Lagrangian Stochastic (bLS) model was parameterised and evaluated with the field trial data. The bLS model, parameterised with Monin-Obukhov Similarity Theory (MOST) variables, showed good agreement with the experimental data and compared favourably, in terms of performance statistics, with a recent application of an LS model in a maize canopy. Results obtained from the model were more accurate above the canopy than below it, which was attributed to a higher error during initialisation of release velocities below the canopy. Overall, the bLS model performed well and demonstrated its suitability for estimating above-canopy spore concentration profiles, which can in turn be used for designing efficient deployment strategies. The final area of focus was the monitoring of a potential biosensor network. A novel framework based on Multivariate Statistical Process Control (MSPC) concepts was proposed and applied to data from a pollution-monitoring network. The main limitation of traditional MSPC in spatial data applications was identified as a lack of spatial awareness in the PCA model when considering correlation breakdowns caused by an incoming erroneous observation, which resulted in the misclassification of healthy measurements as erroneous. The proposed kriging-augmented MSPC approach was able to incorporate this capability and significantly reduce the number of false alarms.
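Fitting a power decay law C(x) = a·x^(-b) reduces to linear regression in log-log space. The distances and concentrations below are synthetic, noise-free values chosen only to show the mechanics; the thesis fits measured spore concentrations against distance from the source.

```python
import numpy as np

# Sketch of fitting a power decay law C(x) = a * x**(-b) by linear
# regression on log-transformed data. Values are illustrative, not measured.
distance = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # metres from source (assumed)
concentration = 50.0 * distance ** -1.6            # synthetic concentrations

# log C = log a - b * log x, so a straight-line fit recovers a and b.
slope, log_a = np.polyfit(np.log(distance), np.log(concentration), 1)
a, b = np.exp(log_a), -slope
print(round(a, 1), round(b, 1))                    # recovers 50.0 and 1.6
```

With real, noisy counts the same fit yields the decay exponent whose consistency across crops is noted above.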
176

Forecasting Components Failure Using Ant Colony Optimization For Predictive Maintenance

Shahi, Durlabh, Gupta, Ankit January 2020 (has links)
Failures are an inevitable aspect of any machine, and this holds for vehicles, which are among the most sophisticated machines of our time. Early detection of faults and prioritized maintenance are a necessity for vehicle manufacturers, enabling them to reduce maintenance cost and increase customer satisfaction. In our research, we have proposed a method for processing Logged Vehicle Data (LVD) that uses the Ant-Miner algorithm, an Ant Colony Optimization (ACO) based algorithm, together with feature engineering and data preprocessing. We explored the effectiveness of ACO for solving a classification problem in the form of fault detection and prediction of failures, to be used for predictive maintenance by manufacturers. From the seasonal and yearly models that we created, we used ACO to successfully predict the time of failure, i.e. the month with the highest likelihood of failure in a vehicle's components, and we validated the obtained results. LVD suffers from a data imbalance problem, and we implemented balancing techniques to mitigate this issue; however, more effective balancing techniques, along with feature engineering, are required to increase prediction accuracy.
177

Nekontaktní indikátory poruchových stavů na VN vedení / Contactless Fault Indicator for MV Lines

Pernica, Drahomír January 2011 (has links)
This thesis develops theoretical findings on earth-fault indication methods into a form applicable to the design of a contactless indicator of failure states on MV lines. The design comprises electromagnetic field sensors, an evaluation device, and software support. Using these indicators is expected to make fault clearing more effective and to improve the protection of health and assets.
178

On lights-out process control in the minerals processing industry

Olivier, Laurentz Eugene January 2017 (has links)
The concept of lights-out process control is explored in this work (specifically pertaining to the minerals processing industry). The term is derived from lights-out manufacturing, which is used in discrete component manufacturing to describe a fully automated production line, i.e. with no human intervention. Lights-out process control is therefore defined as the fully autonomous operation of a processing plant (as achieved through automatic process control), without operator interaction. / Thesis (PhD)--University of Pretoria, 2017. / National Research Foundation (NRF) / Electrical, Electronic and Computer Engineering / PhD / Unrestricted
179

Condition Monitoring Systems for Axial Piston Pumps: Mobile Applications

Nathan J Keller (8770307) 02 May 2020 (has links)
Condition monitoring of hydraulic systems has become more available and inexpensive to implement. However, much of the research on this topic has been done on stationary hydraulic systems, without the jump to mobile machines. This work addresses that lack of research on condition monitoring of hydraulic systems on mobile equipment. The objective is to develop a novel process for implementing an affordable condition monitoring system for axial piston pumps on a mobile machine, here a mini excavator. The intent was to find the minimum number of sensors required to accurately predict a faulty pump. First, the components of an axial piston pump and how they interact with one another were discussed. The valve plate was selected as a case study for condition monitoring because valve plates are a critical component known for a high percentage of failures in axial piston pumps. Several valve plates with various degrees of natural wear and artificially generated damage were obtained, and an optical profilometer was used to quantify the level of wear and damage. A stationary test-rig was developed to determine whether the faulty pumps could be detected under a controlled environment, to test several different machine learning algorithms, and to perform a sensor reduction finding the minimum number of sensors necessary to detect the faulty pumps. The results from this investigation showed that the pump outlet pressure, drain pressure, speed, and displacement alone are sufficient to detect the faulty pump conditions, and the K-Nearest Neighbor (KNN) machine learning algorithms proved to be the least computationally expensive and most accurate of the algorithms investigated. Fault detectability accuracies of 100% were achievable.
Next, a mini excavator was instrumented to begin the next phase of the research: implementing a process similar to that used on the stationary test-rig, but on a mobile machine. Three duty cycles were developed for the excavator: controlled, digging, and different operator. The controlled duty cycle eliminated the need for an operator and the variability inherent in mobile machines. The digging cycle was a realistic cycle in which an operator dug into a loose pile of soil. The different operator cycle is the same as the digging cycle but with another operator. The sensors found to be most useful were the same as those determined on the stationary test-rig, and the best algorithm was the Fine KNN for both the controlled and digging cycles. The controlled cycle achieved fault detectability accuracies of 100%, while the digging cycle only reached 93.6%. Finally, cross-compatibility was examined by feeding data from one cycle into a model trained under another. This study showed that a model trained under the controlled duty cycle does not give reliable and accurate fault detectability for data from a digging cycle, with accuracies below 60%. This work concluded by recommending a diagnostic function for mobile machines that performs a preprogrammed operation to reliably and accurately detect pump faults.
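Classification over the four retained signals can be sketched with a tiny nearest-neighbour classifier. The feature values and fault label below are invented for illustration; the work itself trained KNN variants on measured pump data, not on these numbers.

```python
# Sketch of KNN fault classification on the four retained pump signals
# (outlet pressure, drain pressure, speed, displacement). Data are invented.
def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); query: feature vector."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), label)
        for x, label in train
    )
    votes = [label for _, label in dists[:k]]   # k nearest neighbours vote
    return max(set(votes), key=votes.count)

# (outlet_bar, drain_bar, rpm, displacement_cc) -> assumed pump condition
train = [
    ((200.0, 1.0, 1800, 45.0), "healthy"),
    ((201.0, 1.1, 1805, 44.8), "healthy"),
    ((198.0, 1.0, 1795, 45.1), "healthy"),
    ((185.0, 3.5, 1790, 41.0), "worn_valve_plate"),
    ((183.0, 3.8, 1788, 40.5), "worn_valve_plate"),
    ((186.0, 3.4, 1792, 41.2), "worn_valve_plate"),
]
print(knn_predict(train, (184.0, 3.6, 1789, 40.8)))   # worn_valve_plate
```

The cross-compatibility result above follows naturally from this picture: a model whose training points come from one duty cycle occupies a different region of feature space than data from another cycle, so its neighbourhoods mislead.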
180

Optical Time Domain Reflectometer based Wavelength Division Multiplexing Passive Optical Network Monitoring

GETANEH WORKALEMAHU, AGEREKIBRE January 2012 (has links)
This project focuses on the supervision of a wavelength division multiplexing passive optical network (WDM-PON) using an optical time domain reflectometer (OTDR) for the detection and localization of any fault occurring in the optical distribution network. The objective is to investigate the impact of the OTDR monitoring signal on data transmission in a WDM-PON based on a wavelength re-use system, where the same wavelength is assigned to each end user for both upstream and downstream. Experimental validation has been carried out for three different schemes, i.e. back-to-back, and WDM-PON with and without the OTDR connection, using 1xN and NxN arrayed waveguide gratings. Furthermore, a comprehensive comparison has been made to assess the effect of the monitoring signal, which is transmitted together with the data through the implemented setup. Finally, the results confirmed that the OTDR supervision signal does not affect the data transmission. The experiment was carried out at Ericsson AB, Kista.
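How an OTDR localizes a fault can be sketched from first principles: the round-trip time of a reflected pulse gives the distance along the fibre via the group index. The event time and index below are illustrative assumptions, not measurements from this project.

```python
# Sketch of OTDR fault localization: distance = c * t / (2 * n), where t is
# the round-trip time of the reflection and n the fibre group index.
C = 299_792_458.0        # speed of light in vacuum, m/s
GROUP_INDEX = 1.468      # assumed value, typical of standard single-mode fibre

def fault_distance(round_trip_seconds):
    return C * round_trip_seconds / (2 * GROUP_INDEX)

# A reflection event observed 49 microseconds after the launch pulse:
print(round(fault_distance(49e-6)))   # roughly 5 km down the fibre
```

The factor of two accounts for the pulse travelling to the fault and back, which is why OTDR traces are plotted against one-way distance rather than elapsed time.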
