141

Automatização de processos de detecção de faltas em linhas de distribuição utilizando sistemas especialistas híbridos / Fault detection process automation in distribution lines using hybrid expert systems

Danilo Hernane Spatti 15 June 2011 (has links)
Efficient fault identification and location in power distribution lines is an important step toward power quality improvement, since it directly affects inspection time. The duration of the inspection largely determines how long consumers are left without power when a non-programmed interruption occurs. The objective of this work is to provide an automated fault detection system that allows distribution company engineers to track and monitor online the possible occurrence of faults and electromagnetic transients observed in the primary distribution network. The detection approach uses a hybrid system that combines intelligent and conventional tools to identify and locate faults in primary networks. Validation results show great application potential in distribution systems.
142

Static Code Features for a Machine Learning based Inspection: An approach for C

Tribus, Hannes January 2010 (has links)
Delivering fault-free code is the clear goal of every developer, yet the best method for achieving this aim is still an open question. Although several approaches have been proposed in the literature, no single one is best overall. One solution proposed recently is to combine static source code analysis with the discipline of machine learning. An approach in this direction has been defined within this work, implemented as a prototype and subsequently validated. It shows a possible translation of a piece of source code into a machine learning algorithm's input and, furthermore, its suitability for the task of fault detection. In the context of the present work, two prototypes have been developed to show the feasibility of the presented idea. The output they generated on open source projects has been collected and used to train and rank various machine learning classifiers in terms of accuracy, false positive and false negative rates. The best among them have subsequently been validated again on an open source project. In the first study, at least six classifiers, including "MultiLayerPerceptron", "IBk" and "AdaBoost" on a "BFTree", proved convincing. All except the latter, which failed completely, could be validated in the second study. Although it is only a prototype, it shows the suitability of some machine learning algorithms for static source code analysis.
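To make the classifier-ranking step concrete, the sketch below trains and compares a few learners on static code metrics using scikit-learn stand-ins for the Weka classifiers named above (an MLP for MultiLayerPerceptron, k-NN for IBk, AdaBoost for the boosted tree). The metrics and labels are invented for illustration and are not taken from the thesis prototypes.

```python
# A minimal sketch (not the thesis prototype): train and compare a few
# classifiers on static code metrics, then report accuracy and FP/FN counts.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier        # ~ MultiLayerPerceptron
from sklearn.neighbors import KNeighborsClassifier      # ~ IBk
from sklearn.ensemble import AdaBoostClassifier         # ~ AdaBoost on a tree

rng = np.random.default_rng(0)
# Hypothetical per-function metrics: [lines of code, cyclomatic complexity, nesting depth]
X = rng.integers(1, 200, size=(500, 3)).astype(float)
y = (X[:, 1] + rng.normal(0, 5, 500) > 60).astype(int)  # 1 = "fault-prone" (synthetic label)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    fp = np.sum((pred == 1) & (y_te == 0))
    fn = np.sum((pred == 0) & (y_te == 1))
    acc = np.mean(pred == y_te)
    print(f"{name}: accuracy={acc:.2f}, false positives={fp}, false negatives={fn}")
```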
143

Monitoring and Implementing Early and Cost-Effective Software Fault Detection / Övervakning och implementation av tidig och kostnadseffektiv feldetektering

Damm, Lars-Ola January 2005 (has links)
Avoidable rework constitutes a large part of development projects, i.e. 20-80 percent depending on the maturity of the organization and the complexity of the products. High amounts of avoidable rework commonly occur when many faults are left to correct in late stages of a project. In fact, research studies indicate that the cost of rework could be decreased by up to 30-50 percent by finding more faults earlier. However, since larger software systems have an almost infinite number of usage scenarios, trying to find most faults early through, for example, formal specifications and extensive inspections is very time-consuming. Such an approach is therefore not cost-effective in products that do not have extremely high quality requirements. For example, in market-driven development, time-to-market is at least as important as quality. Further, some areas, such as hardware-dependent aspects of a product, might not be possible to verify early through, for example, code reviews or unit tests. In such environments, rework reduction is therefore primarily about finding faults earlier to the extent that it is cost-effective, i.e. finding the right faults in the right phase. Through a set of case studies at a department at Ericsson AB, this thesis investigates how to achieve early and cost-effective fault detection through improvements in the test process. The case studies include investigations of how to identify which improvements are most beneficial to implement, possible solutions to the identified improvement areas, and approaches for following up implemented improvements. The contributions of the thesis include a framework for component-level test automation and test-driven development. Additionally, the thesis provides methods for using fault statistics to identify and monitor test process improvements. In particular, we present results from applying methods that quantify unnecessary fault costs and pinpoint which phases and activities to focus improvements on in order to achieve earlier and more cost-effective fault detection. The goal of the methods is to make organizations strive towards finding the right fault in the right test phase, which is commonly an early test phase. The developed methods were also used for evaluating the results of implementing the above-mentioned test framework at Ericsson AB. Finally, the thesis demonstrates how the implementation of such improvements can be continuously monitored to obtain rapid feedback on the status of defined goals. This was achieved through enhancements of previously applied fault analysis methods. / The thesis concerns how a software development organization can find faults earlier in the development process. The focus is on finding the right fault in the right phase, i.e. when it is most cost-effective. The thesis presents a collection of case studies conducted in this area at Ericsson AB. Keywords: process improvement, fault analysis, early fault detection
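As a rough illustration of the "right fault in the right phase" bookkeeping described above, the toy sketch below estimates avoidable rework cost from fault statistics. The phase names and per-phase cost figures are assumptions for illustration only, not Ericsson data or the thesis methods.

```python
# Toy sketch: avoidable rework = cost of correcting faults where they were
# actually found, minus the cost had they been found in their "right" phase.
AVG_COST_PER_FAULT = {          # hypothetical correction cost (hours) per phase
    "unit test": 2,
    "function test": 8,
    "system test": 16,
    "operation": 40,
}

# Each record: (phase where the fault was found, earliest phase where it
# could reasonably have been found, i.e. its "right" phase).
faults = [
    ("system test", "unit test"),
    ("operation", "function test"),
    ("function test", "function test"),   # already found in the right phase
    ("system test", "unit test"),
]

actual = sum(AVG_COST_PER_FAULT[found] for found, _ in faults)
ideal = sum(AVG_COST_PER_FAULT[right] for _, right in faults)
avoidable = actual - ideal
print(f"actual rework: {actual} h, ideal: {ideal} h, "
      f"avoidable: {avoidable} h ({100 * avoidable / actual:.0f}%)")
```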
144

Increasing the availability of a service through Hot Passive Replication / Öka tillgängligheten för en tjänst genom hot passive replication

Bengtson, John, Jigin, Ola January 2015 (has links)
This bachelor thesis examines how redundancy is used to tolerate a process crash fault on a server in a system developed for emergency situations. The goal is to increase the availability of the service the system delivers. The redundant solution uses hot passive replication with one primary replica manager and one backup replica manager. With this approach, code for updating the backup, code for establishing a new primary, and code implementing fault detection to detect a process crash have been written. After implementing the redundancy, the redundant solution was evaluated. The first part of the evaluation showed that the redundant solution can deliver a service in case of a process crash on the primary replica manager. The second part showed that the average response time for an upload request and a download request had increased by 31% compared to the non-redundant solution. The standard deviation was calculated for the response times and showed that the response time of an upload request could be considerably higher than the average. This large deviation was investigated, and the conclusion was that the database insertion was the cause.
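A minimal, in-process sketch of the hot passive replication scheme described above follows: the primary applies each request, pushes the state update to the backup, and the backup uses a heartbeat timeout as its crash detector. The class names, timeout value and simulated requests are illustrative assumptions, not the thesis implementation.

```python
# Simplified hot passive replication: primary executes the request, then
# propagates the update to the backup; a missed heartbeat signals a crash.
import time

class ReplicaManager:
    def __init__(self, name):
        self.name = name
        self.state = {}
        self.last_heartbeat = time.monotonic()

    def apply(self, key, value):
        self.state[key] = value

class Primary(ReplicaManager):
    def __init__(self, name, backup):
        super().__init__(name)
        self.backup = backup

    def handle_request(self, key, value):
        self.apply(key, value)             # 1. execute the request
        self.backup.apply(key, value)      # 2. propagate the update (hot backup)
        self.backup.last_heartbeat = time.monotonic()   # piggybacked heartbeat

HEARTBEAT_TIMEOUT = 1.0   # seconds; assumed value

def backup_should_take_over(backup):
    """Fault detection: no heartbeat within the timeout => assume primary crashed."""
    return time.monotonic() - backup.last_heartbeat > HEARTBEAT_TIMEOUT

backup = ReplicaManager("backup")
primary = Primary("primary", backup)
primary.handle_request("upload:42", "stored")
time.sleep(1.2)                            # simulate a silent (crashed) primary
if backup_should_take_over(backup):
    print("backup promotes itself to primary; state:", backup.state)
```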
145

Fault detection on an experimental aircraft fuel rig using a Kalman filter based FDI screen

Bennett, Paul J. January 2010 (has links)
Reliability is an important issue across industry, driven by factors such as the high safety levels required in sectors like aviation, the need for mission success with military equipment, and the avoidance of monetary losses (due to unplanned outages) in the process and many other industries. Fault detection and identification helps to reveal the presence of faults, improving mission success or increasing the up-time of plant equipment. Such systems can take the form of pattern recognition, statistical and geometric classifiers, soft computing methods or more complex model-based methods. This study deals with the latter and focuses on a specific type of model, the Kalman filter. The Kalman filter is an observer which estimates the states of a system, i.e. its physical variables, based upon the current state estimate and knowledge of the inputs. This relies upon a mathematical model of the system to predict its outputs at any given time; feedback from the plant corrects minor deviations between the system and the Kalman filter model. Comparison between the predicted and real outputs provides an indication of the presence of a fault. On systems with several inputs and outputs, banks of these filters can be used to detect and isolate the various faults that occur in the process and its sensors and actuators. The thesis examines the application of these diagnostic techniques to a laboratory-scale aircraft fuel system test-rig. The first stage of the research project required the development of a mathematical model of the fuel rig, validated against experimentally acquired test data. This nonlinear model is then simplified to create several linear state-space models of the fuel rig, which are used to develop the Kalman filter Fault Detection and Identification (FDI) system through appropriate tuning of the Kalman filter gains, careful choice of residual thresholds to determine fault condition boundaries, and logic to identify the location of the fault. Additional performance enhancements are achieved by statistical evaluation of the residual signals and by automatic threshold calculation. The results demonstrate the positive capture of a fault condition and the identification of its location in the test-rig. The faults captured include hard faults, such as sensor malfunction and actuator failure, which produce large deviations of the residual signals, and softer faults such as performance degradation and fluid leaks in the tanks and pipes. Faults of smaller magnitude are also captured well, albeit over a longer time range. Fault location is identified by mapping the possible fault permutations against the Kalman filter behaviour, providing full discrimination between any faults present. Overall, the Kalman filter based FDI developed provided positive results in capturing and identifying system faults on the test-rig.
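The residual-based detection idea can be sketched with a scalar Kalman filter, as below: the filter predicts the next measurement, and a residual that exceeds a threshold is flagged as a fault. The model, noise levels, threshold and injected sensor bias are illustrative values, not the fuel-rig model from the thesis.

```python
# Scalar Kalman filter with residual (innovation) monitoring.
import numpy as np

A, C = 1.0, 1.0            # x_{k+1} = A x_k + w,  y_k = C x_k + v
Q, R = 1e-4, 1e-2          # process / measurement noise covariances (assumed)
x_hat, P = 0.0, 1.0        # initial state estimate and covariance
THRESHOLD = 0.5            # residual threshold, chosen from fault-free data

rng = np.random.default_rng(1)
true_x = 0.0
for k in range(100):
    true_x = A * true_x + rng.normal(0, np.sqrt(Q))
    y = C * true_x + rng.normal(0, np.sqrt(R))
    if k >= 60:
        y += 1.0           # injected sensor bias fault

    # Kalman filter: predict, then update with the measurement
    x_pred = A * x_hat
    P_pred = A * P * A + Q
    residual = y - C * x_pred                  # innovation
    K = P_pred * C / (C * P_pred * C + R)      # Kalman gain
    x_hat = x_pred + K * residual
    P = (1 - K * C) * P_pred

    if abs(residual) > THRESHOLD:
        print(f"k={k}: |residual|={abs(residual):.2f} exceeds threshold -> fault flagged")
```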
146

Nonlinear fault detection and diagnosis using Kernel based techniques applied to a pilot distillation column

Phillpotts, David Nicholas Charles 15 January 2008 (has links)
Fault detection and diagnosis is an important problem in process engineering. In this dissertation, the use of multivariate techniques for fault detection and diagnosis is explored in the context of statistical process control. Principal component analysis and its extension, kernel principal component analysis, are proposed to extract features from process data. Kernel-based methods can model nonlinear processes by forming higher-dimensional representations of the data. Discriminant methods can extend feature extraction methods by increasing the isolation between different faults, which is shown to aid fault diagnosis. Linear and kernel discriminant analysis are proposed as fault diagnosis methods. Data from a pilot-scale distillation column were used to explore the performance of the techniques. The models were trained with normal and faulty operating data and tested with unseen and/or novel fault data. All the techniques demonstrated at least some fault detection and diagnosis ability. Linear PCA was particularly successful, mainly due to the ease of training and the ability to relate the scores back to the input data. The attributes of these multivariate statistical techniques were compared with the goals of statistical process control and the desirable attributes of fault detection and diagnosis systems. / Dissertation (MEng (Control Engineering))--University of Pretoria, 2008. / Chemical Engineering / MEng / Unrestricted
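A brief sketch of the linear-PCA monitoring idea follows: a PCA model is fitted on normal operating data, and new samples are flagged when their squared prediction error (Q statistic) exceeds an empirical limit. The data, number of components and control limit are assumptions for illustration, not the dissertation's models.

```python
# PCA-based statistical process monitoring with a Q-statistic (SPE) limit.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical "normal operation" data: 4 correlated process variables
latent = rng.normal(size=(300, 2))
normal = latent @ rng.normal(size=(2, 4)) + rng.normal(0, 0.05, size=(300, 4))

pca = PCA(n_components=2).fit(normal)

def q_statistic(X):
    """Squared prediction error of each sample against the PCA model."""
    recon = pca.inverse_transform(pca.transform(X))
    return np.sum((X - recon) ** 2, axis=1)

limit = np.percentile(q_statistic(normal), 99)   # simple empirical 99% limit

faulty = normal[:5] + np.array([0, 0, 1.0, 0])   # inject a bias on variable 3
for i, q in enumerate(q_statistic(faulty)):
    status = "FAULT" if q > limit else "ok"
    print(f"sample {i}: Q = {q:.3f} (limit {limit:.3f}) -> {status}")
```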
147

Modelling and multivariate data analysis of agricultural systems

Lawal, Najib January 2015 (has links)
The broader research area investigated during this programme was conceived from a goal to contribute towards solving the challenge of food security in the 21st century through the reduction of crop loss and the minimisation of fungicide use. This is to be achieved through the introduction of an empirical approach to agricultural disease monitoring. In line with this, the SYIELD project, initiated by a consortium involving the University of Manchester and Syngenta, among others, proposed a novel biosensor design that can electrochemically detect viable airborne pathogens by exploiting the biology of plant-pathogen interaction. This approach offers an improvement over the inefficient and largely experimental methods currently used. Within this context, this PhD focused on the adoption of multidisciplinary methods to address three key objectives that are central to the success of the SYIELD project: local spore ingress near canopies, the evaluation of a suitable model that can describe spore transport, and multivariate analysis of the potential monitoring network built from these biosensors. The local transport of spores was first investigated by carrying out a field trial at Rothamsted Research, UK, to investigate spore ingress in OSR canopies, generate reliable data for testing the prototype biosensor, and evaluate a trajectory model. During the experiment, spores were air-sampled and quantified using established manual detection methods. Results showed that the manual methods, such as colourimetric detection, are more sensitive than the proposed biosensor, suggesting the proxy measurement mechanism used by the biosensor may not be reliable in live deployments where spores are likely to be contaminated by impurities and other inhibitors of oxalic acid production. Spores quantified using the more reliable quantitative Polymerase Chain Reaction proved informative and provided novel data of high experimental value. The dispersal data were found to fit a power decay law, consistent with experiments in other crops. In the second area investigated, a 3D backward Lagrangian Stochastic (bLS) model was parameterised and evaluated with the field trial data. The bLS model, parameterised with Monin-Obukhov Similarity Theory (MOST) variables, showed good agreement with experimental data and compared favourably in terms of performance statistics with a recent application of an LS model in a maize canopy. Results obtained from the model were more accurate above the canopy than below it, which was attributed to a higher error during initialisation of release velocities below the canopy. Overall, the bLS model performed well and demonstrated its suitability for estimating above-canopy spore concentration profiles, which can in turn be used for designing efficient deployment strategies. The final area of focus was the monitoring of a potential biosensor network. A novel framework based on Multivariate Statistical Process Control (MSPC) concepts was proposed and applied to data from a pollution-monitoring network. The main limitation of traditional MSPC in spatial data applications was identified as a lack of spatial awareness by the PCA model when considering correlation breakdowns caused by an incoming erroneous observation, which resulted in the misclassification of healthy measurements as erroneous. The proposed Kriging-augmented MSPC approach was able to incorporate this capability and significantly reduce the number of false alarms.
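Since the abstract reports that the measured spore dispersal followed a power decay law, the short sketch below fits C(x) = a·x^(−b) by linear regression in log-log space. The distances and concentrations are invented for illustration, not the Rothamsted field-trial data.

```python
# Fit a power decay law C(x) = a * x**(-b) to concentration-vs-distance data.
import numpy as np

distance = np.array([1.0, 2.0, 4.0, 8.0, 16.0])              # m from the source (assumed)
concentration = np.array([950.0, 410.0, 180.0, 75.0, 33.0])  # hypothetical spores/m^3

# Linear regression in log-log space: log C = log a - b * log x
slope, intercept = np.polyfit(np.log(distance), np.log(concentration), 1)
a, b = np.exp(intercept), -slope
print(f"fitted power law: C(x) ~= {a:.0f} * x^(-{b:.2f})")
```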
148

Forecasting Components Failure Using Ant Colony Optimization For Predictive Maintenance

Shahi, Durlabh, Gupta, Ankit January 2020 (has links)
Failures are an inevitable aspect of any machine, and this holds for the vehicle, one of the most sophisticated machines of our time. Early detection of faults and prioritized maintenance is a necessity for vehicle manufacturers, as it enables them to reduce maintenance cost and increase customer satisfaction. In our research, we propose a method for processing Logged Vehicle Data (LVD) that uses the Ant-Miner algorithm, an Ant Colony Optimization (ACO) based algorithm, together with feature engineering and data preprocessing. We explore the effectiveness of ACO for solving a classification problem in the form of fault detection and failure prediction, to be used for predictive maintenance by manufacturers. From the seasonal and yearly models that we created, we used ACO to predict the time of failure, i.e. the month with the highest likelihood of failure in a vehicle's components, and we validated the obtained results. LVD suffers from a data imbalance problem, and we implemented balancing techniques to address this issue; however, more effective balancing techniques, along with feature engineering, are required to increase prediction accuracy.
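A highly simplified sketch of the Ant-Miner idea follows: pheromone-guided selection of a single rule term, with rule quality measured as sensitivity × specificity and pheromone reinforced for good terms. The toy records, attributes and parameters are invented, and the authors' actual pipeline is considerably richer than this single-term version.

```python
# Toy single-term Ant-Miner-style rule induction on invented LVD-like records.
import random
random.seed(0)

# Each record: (term1, term2, failed within next month?)
data = [(("month", "Jan"), ("mileage", "high"), 1),
        (("month", "Jan"), ("mileage", "low"),  1),
        (("month", "Jun"), ("mileage", "high"), 0),
        (("month", "Jun"), ("mileage", "low"),  0),
        (("month", "Jan"), ("mileage", "high"), 1),
        (("month", "Jun"), ("mileage", "high"), 1)]

terms = [("month", "Jan"), ("month", "Jun"), ("mileage", "high"), ("mileage", "low")]
pheromone = {t: 1.0 for t in terms}

def quality(term):
    """Sensitivity x specificity of the rule 'IF term THEN failure'."""
    covered = [r for r in data if term in r[:2]]
    tp = sum(r[2] for r in covered)
    fn = sum(r[2] for r in data) - tp
    fp = len(covered) - tp
    tn = len(data) - tp - fn - fp
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens * spec

for iteration in range(20):                      # each iteration = one "ant"
    total = sum(pheromone.values())
    weights = [pheromone[t] / total for t in terms]
    term = random.choices(terms, weights=weights)[0]
    pheromone[term] += pheromone[term] * quality(term)   # reinforce by rule quality
    for t in terms:                              # mild evaporation everywhere
        pheromone[t] *= 0.95

best = max(terms, key=lambda t: pheromone[t])
print("strongest rule term:", best, "quality:", round(quality(best), 2))
```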
149

Nekontaktní indikátory poruchových stavů na VN vedení / Contactless Fault Indicator for MV Lines

Pernica, Drahomír January 2011 (has links)
This thesis elaborates the theoretical findings on earth-fault indication methods into a form applicable to the design of a contactless indicator of failure states on MV lines. The design comprises electromagnetic field sensors, an evaluation device and software support. Using these indicators is expected to make fault clearing more effective and to improve the protection of people and assets.
150

On lights-out process control in the minerals processing industry

Olivier, Laurentz Eugene January 2017 (has links)
The concept of lights-out process control is explored in this work (specifically pertaining to the minerals processing industry). The term is derived from lights-out manufacturing, which is used in discrete component manufacturing to describe a fully automated production line, i.e. with no human intervention. Lights-out process control is therefore defined as the fully autonomous operation of a processing plant (as achieved through automatic process control), without operator interaction. / Thesis (PhD)--University of Pretoria, 2017. / National Research Foundation (NRF) / Electrical, Electronic and Computer Engineering / PhD / Unrestricted
