In this work, data quality control and mitigation tools have been developed to improve the accuracy of photovoltaic (PV) system performance assessment. These tools make it possible to demonstrate the impact of ignoring erroneous or lost data on performance evaluation and fault detection. The work focuses mainly on residential PV systems, where monitoring is limited to recording total generation and the lack of meteorological data makes quality control particularly challenging. The main quality issues addressed in this work concern incorrect system descriptions and missing electrical and/or meteorological data in monitoring. An automatic method for detecting incorrect input information, such as system nominal capacity and azimuth, is developed based on statistical distributions of annual figures of PV system performance ratio (PR) and final yield. This approach is especially useful in PV fleet analyses where only monthly or annual energy outputs are available. The evaluation is carried out using synthetic weather data obtained by interpolating from a network of about 80 meteorological monitoring stations operated by the UK Meteorological Office. The procedures are applied to a large domestic PV dataset, obtained from a social housing organisation, in which a significant number of cases with incorrect input information are found. Data interruption is identified as another challenge in PV monitoring data, although its effect remains particularly under-researched in the PV field. Disregarding missing energy generation data leads to falsely estimated performance figures, which may in turn trigger false performance alarms and/or undermine the requirements for the financial revenue of a domestic system through the feed-in-tariff scheme. In this work, the effect of missing data is mitigated by applying novel data inference methods based on empirical and artificial neural network approaches, training algorithms and remotely inferred weather data.
Various cases of data loss are considered, and case studies from the CREST monitoring system and the domestic dataset are used as test cases. When using back-filled energy output, monthly PR estimation yields more accurate results than when prolonged data gaps are included in the analysis. Finally, to further distinguish more obscure data issues from system faults when higher temporal resolution data is available, a remote modelling and failure detection framework is developed based on a physical electrical model, remote input weather data and a system description extracted from PV module and inverter manufacturer datasheets. The failure detection is based on the analysis of daily profiles and long-term PR comparison of neighbouring PV systems. Employing this tool on various case studies shows that undetected wrong data may severely obscure fault detection, affecting a PV system's lifetime. Based on the results and conclusions of this work on the employed residential dataset, essential data requirements for domestic PV monitoring are introduced as a potential contribution to existing lessons learnt in PV monitoring.
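The performance ratio arithmetic underpinning this kind of assessment can be sketched as follows. This is a minimal illustration, not the thesis's actual tooling: the function name, the kWh units, and the policy of excluding paired data gaps (rather than back-filling them, as the thesis does) are assumptions.

```python
import numpy as np

STC_IRRADIANCE = 1.0  # kW/m^2, irradiance at standard test conditions

def performance_ratio(energy_kwh, irradiation_kwh_m2, p_nominal_kw):
    """Performance ratio PR = final yield / reference yield.

    energy_kwh and irradiation_kwh_m2 may contain NaN for lost readings;
    intervals with a gap in either series are excluded from both sums so
    the ratio stays consistent (a simple alternative to back-filling).
    """
    e = np.asarray(energy_kwh, dtype=float)
    h = np.asarray(irradiation_kwh_m2, dtype=float)
    valid = ~(np.isnan(e) | np.isnan(h))
    final_yield = e[valid].sum() / p_nominal_kw        # kWh per kWp
    reference_yield = h[valid].sum() / STC_IRRADIANCE  # equivalent sun-hours
    return final_yield / reference_yield
```

Simply summing the recorded energy over a month with gaps would bias PR low, which is the false-alarm mechanism the abstract describes.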
07 July 2017
Wireless sensor networks (WSNs) can provide new methods of information gathering for a variety of applications. In order to ensure the network quality of service, the quality of the measurements has to be guaranteed. Distributed fault detection and isolation schemes are preferred to centralized solutions for diagnosing faulty sensors in WSNs. Indeed, the distributed approach avoids the need for a central node that collects information from every sensor node, and hence it limits complexity and energy cost while improving reliability. In the case of state estimation over distributed architectures, sensor faults can be propagated in the network during the information exchange process. To build a reliable state estimate, one has to make sure that the measurements issued by the different sensors are fault free. That is one of the motivations for building a distributed fault detection and isolation (FDI) system that generates an alarm as soon as a measurement is subject to a fault (drift, for example). In order to diagnose faults of small magnitude in wireless sensor networks, a systematic methodology to design and implement a distributed FDI system is proposed. It resorts to distinguishability measures to indicate the performance of the FDI system and to select the most suitable node(s) for information exchange in the network with a view to FDI. It allows one to determine the minimum amount of data to be exchanged between the different nodes for a given FDI performance. In this way, the specifications for FDI can be achieved while the communication and computation costs are kept as small as possible. The distributed FDI systems are designed in both deterministic and stochastic frameworks. They are based on the parity space approach, which exploits spatial redundancy as well as temporal redundancy in the context of distributed schemes.
The decision systems with the deterministic method and the stochastic method are designed not only to detect a fault but also to distinguish which fault is occurring in the network. A case study with a WSN is conducted to verify the proposed method. The network is used to monitor the temperature and humidity in a computer room. The distributed FDI system is validated both with simulated data and recorded data. / Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
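The parity space idea can be illustrated on a toy network of three nodes that all measure the same scalar quantity. This is a sketch under simplifying assumptions, not the thesis's design: the network, the detection threshold, and the direction-matching isolation rule are all illustrative.

```python
import numpy as np

# Toy network: three sensor nodes each measure the same scalar, y = C*x + fault.
C = np.ones((3, 1))

# Parity matrix V spans the left null space of C (V @ C == 0), so the
# residual r = V @ y is insensitive to the true state x and reacts only to faults.
V = np.array([[1.0, -1.0, 0.0],
              [1.0, 0.0, -1.0]])

def isolate(y, threshold=0.5):
    """Return the index of the faulty sensor, or None if no fault is detected."""
    r = V @ np.asarray(y, dtype=float)
    if np.linalg.norm(r) < threshold:
        return None
    # A fault on sensor i pushes r along column i of V: match directions.
    scores = [abs(r @ V[:, i]) / np.linalg.norm(V[:, i]) for i in range(V.shape[1])]
    return int(np.argmax(scores))
```

Spatial redundancy (several sensors observing the same quantity) is what makes the left null space of C non-trivial; the distributed versions in the thesis additionally exploit temporal redundancy.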
12 January 2007
The aim of this study was to propose a nonlinear multiscale principal component analysis (NLMSPCA) methodology for process monitoring and fault detection based upon multilevel wavelet decomposition and nonlinear principal component analysis via an input-training neural network. Prior to assessing the capabilities of the monitoring scheme on a nonlinear industrial process, the data is first pre-processed to remove heavy noise and significant spikes through wavelet thresholding. The thresholded wavelet coefficients are used to reconstruct the thresholded details and approximations. The significant details and approximations are used as the inputs for the linear and nonlinear PCA algorithms in order to construct detail and approximation conformance models. At the same time non-thresholded details and approximations are reconstructed and combined which are used in a similar way as that of the thresholded details and approximations to construct a combined conformance model to take account of noise and outliers. Performance monitoring charts with non-parametric control limits are then applied to identify the occurrence of non-conforming operation prior to interrogating differential contribution plots to help identify the potential source of the fault. A novel summary display is used to present the information contained in bivariate graphs in order to facilitate global visualization. Positive results were achieved. / Dissertation (M Eng (Control Engineering))--University of Pretoria, 2007. / Chemical Engineering / unrestricted
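The conformance-model step can be sketched with a plain linear PCA monitor and a squared prediction error (SPE) statistic with a non-parametric control limit. This is a deliberately simplified stand-in for the nonlinear, wavelet-based scheme described above; the function names and the 99th-percentile limit are assumptions.

```python
import numpy as np

def fit_pca_monitor(X, n_components):
    """Fit a linear PCA conformance model on normal operating data."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    Z = (X - mu) / sigma
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                       # retained loading vectors
    spe_train = ((Z - Z @ P @ P.T) ** 2).sum(axis=1)
    limit = np.percentile(spe_train, 99)          # non-parametric control limit
    return mu, sigma, P, limit

def spe(x, mu, sigma, P):
    """Squared prediction error of a new sample against the conformance model."""
    z = (np.asarray(x, dtype=float) - mu) / sigma
    return float(((z - z @ P @ P.T) ** 2).sum())
```

A sample whose SPE exceeds the limit is non-conforming; contribution plots would then apportion the SPE across variables to suggest the fault source.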
Condition monitoring of hydraulic systems is an area that has grown substantially in the last few decades. This thesis presents a scheme that automatically generates fault symptoms by on-line processing of raw sensor data from a real test rig. The main purposes of implementing condition monitoring in hydraulic systems are to increase productivity, decrease maintenance costs and increase safety. Since such systems are widely used in industry and are becoming more complex in function, the reliability of the systems must be supported by an efficient monitoring and maintenance scheme. This work proposes an accurate state space model together with a novel model-based fault diagnosis methodology. The test rig has been fabricated in the Process Automation and Robotics Laboratory at UBC. First, a state space model of the system is derived. The parameters of the model are obtained through either experiments or direct measurements and manufacturer specifications. To validate the model, the simulated and measured states are compared. The results show that under normal operating conditions the simulation program and the real system produce similar state trajectories. For the validated model, a condition monitoring scheme based on the Unscented Kalman Filter (UKF) is developed. In simulations, both measurement and process noises are considered. The results show that the algorithm estimates the system states with acceptable residual errors. Therefore, the structure is verified as suitable for use as the fault diagnosis scheme. Five types of faults are investigated in this thesis: loss of load, dynamic friction load, internal leakage between the two hydraulic cylinder chambers, and external leakage at either side of the actuator. Also, for each leakage scenario, three levels of leakage are investigated in the tests. The developed UKF-based fault monitoring scheme is tested on the practical system while different fault scenarios are singly introduced to the system.
A sinusoidal reference signal is used for the actuator displacement. To diagnose faults in real time, three criteria, namely the residual moving average of the errors, the chamber pressures, and the actuator characteristics, are considered. Based on the presented experimental results and discussions, the proposed scheme can accurately diagnose the faults that occur. / Applied Science, Faculty of / Mechanical Engineering, Department of / Graduate
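The residual moving average criterion can be sketched as follows. This is an illustrative reconstruction, not the thesis's code; the window length and alarm threshold are assumptions.

```python
import numpy as np

def residual_moving_average(measured, estimated, window=50):
    """Smooth the absolute estimation residual over a sliding window.

    A sustained offset between measured and estimated states survives the
    averaging, while zero-mean noise is suppressed.
    """
    r = np.abs(np.asarray(measured, dtype=float) - np.asarray(estimated, dtype=float))
    kernel = np.ones(window) / window
    return np.convolve(r, kernel, mode='valid')

def fault_flag(measured, estimated, threshold, window=50):
    """Raise a fault flag if the smoothed residual ever exceeds the threshold."""
    return bool(residual_moving_average(measured, estimated, window).max() > threshold)
```

In the UKF setting, `estimated` would be the filter's predicted state trajectory and `measured` the corresponding sensor readings.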
Labuschagne, Petrus Jacobus
06 July 2009
Fault detection and diagnosis present a big challenge within the petrochemical industry. The annual economic impact of unexpected shutdowns is estimated to be $20 billion. Assistive technologies will help with the effective detection and classification of the faults causing these shutdowns. Clustering analysis is a form of unsupervised learning which identifies data with similar properties. Various algorithms were used, including hard-partitioning algorithms (K-means and K-medoid) and fuzzy algorithms (Fuzzy C-means, Gustafson-Kessel and Gath-Geva). A novel approach to the clustering problem of time-series data is proposed. It exploits the time dependency of variables (time delays) within a process engineering environment. Before clustering, process lags are identified via signal cross-correlations. From this, a least-squares optimal signal time shift is calculated. Dimensional reduction techniques are used to visualise the data. Various nonlinear dimensional reduction techniques have been proposed in recent years. These techniques have been shown to outperform their linear counterparts on various artificial data sets, including the Swiss roll and helix data sets, but have not been widely implemented in a process engineering environment. The algorithms that were used included linear PCA and standard Sammon and fuzzy Sammon mappings. Time shifting resulted in better clustering accuracy on a synthetic data set than traditional clustering techniques, based on quantitative criteria (including Partition Coefficient, Classification Entropy, Partition Index, Separation Index, Dunn's Index and the Alternative Dunn Index). However, the time-shifted clustering results on the Tennessee Eastman process were not as good as those for the non-shifted data. Copyright / Dissertation (MEng)--University of Pretoria, 2009. / Chemical Engineering / unrestricted
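The lag-identification step before clustering can be sketched with a brute-force cross-correlation search. This is illustrative only; the thesis's least-squares optimal shift calculation may differ in detail.

```python
import numpy as np

def estimate_lag(x, y, max_lag):
    """Return the lag k maximising the correlation between x[t] and y[t+k].

    A positive k means y is delayed by k samples relative to x
    (y[t] ~ x[t-k]), as when one process variable responds to another
    after a transport or control delay.
    """
    n = len(x)
    best_k, best_c = 0, -np.inf
    for k in range(-max_lag, max_lag + 1):
        xs = x[max(0, -k): n - max(0, k)]
        ys = y[max(0, k): n - max(0, -k)]
        c = np.corrcoef(xs, ys)[0, 1]
        if c > best_c:
            best_k, best_c = k, c
    return best_k
```

Once the lag `k` is known, aligning `x[:-k]` with `y[k:]` (for positive `k`) removes the process delay before the samples are handed to the clustering algorithm.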
DETECTION AND EXCLUSION OF FAULTY GNSS MEASUREMENTS: A PARAMETERIZED QUADRATIC PROGRAMMING APPROACH AND ITS INTEGRITY. Teng-yao Yang (8742285), 23 April 2020 (has links)
This research investigates the detection and exclusion of faulty global navigation satellite system (GNSS) measurements using a parameterized quadratic programming (PQP) formulation approach. Furthermore, the PQP approach is integrated with the integrity risk and continuity risk bounds of the Chi-squared advanced receiver autonomous integrity monitoring (ARAIM). The integration allows for performance evaluation of the PQP approach in terms of accuracy, integrity, continuity, and availability, which is necessary for the PQP approach to be applied to vertical navigation in performance-based navigation (PBN). In the case of detection, the PQP approach can also be integrated with the vertical protection level and the associated lower and upper bounds derived for the solution separation ARAIM. While there are other fault detection and exclusion methods, both more and less computationally efficient, for detecting and excluding faulty GNSS measurements, the strength of the PQP approach can be summarized from two perspectives. Firstly, the PQP approach belongs to the group of computationally efficient methods, which makes it more favorable when it comes to detecting and excluding multiple simultaneous faulty GNSS measurements. Secondly, because of its integration with the integrity risk and continuity risk bounds of the Chi-squared ARAIM, the PQP approach is among the first computationally efficient fault detection and exclusion methods to incorporate the concept of integrity, which lies at the foundation of PBN. Although the PQP approach is not yet a practical integrity monitoring method in its current form, because of the combinatorial nature of the integrity risk bound calculation and its rather conservative integrity performance, further research can be pursued to improve it.
Any improvement in the integrity risk bound calculation for the Chi-squared ARAIM can readily be applied to the integrity risk bound calculation for the PQP approach. Also, the connection between the PQP approach and support vector machines, and the application of extreme value theory to obtain a conservative tail probability, may shed light on the parameter tuning of the PQP approach, which in turn will yield a tighter integrity risk bound.
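For context, the classical least-squares residual test that underlies Chi-squared ARAIM-style detection can be sketched as follows. This is the standard snapshot residual test, not the PQP formulation itself; the geometry matrix, the unit-variance noise model, and the alarm threshold (9.21, the 99th percentile of a chi-square distribution with two degrees of freedom, matching m - n = 2 in the usage below) are assumptions.

```python
import numpy as np

# Assumed alarm limit: chi-square 99th percentile for 2 degrees of freedom.
CHI2_THRESHOLD = 9.21

def raim_test(G, rho, sigma=1.0):
    """Least-squares residual test on linearized pseudorange measurements.

    G is the m-by-n geometry matrix, rho the m pseudorange observations.
    With m > n the residual after the least-squares fix is chi-square
    distributed (m - n degrees of freedom) in the fault-free case, so a
    large test statistic flags at least one faulty measurement.
    """
    x_hat, *_ = np.linalg.lstsq(G, rho, rcond=None)
    r = rho - G @ x_hat                 # measurement residual vector
    t = float(r @ r) / sigma ** 2       # normalized test statistic
    return t, t > CHI2_THRESHOLD
```

Exclusion then repeats the test on measurement subsets, which is where combinatorial cost appears and where a computationally efficient formulation such as PQP becomes attractive.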
Identifying symptoms of fault in District Heating Substations : An investigation in how a predictive heat load software can help with fault detection. Bergentz, Tobias, January 2020 (has links)
District heating delivers more than 70% of the energy used for heating and domestic hot water in Swedish buildings. To stay competitive, district heating needs to reduce its losses and increase its capability to utilise low-grade heat. Finding faulty substations is one way to allow reductions in supply temperatures in district heating networks, which in turn can help reduce the losses. In this work, three suggested symptoms of faults, abnormal quantization, drifting and anomalous values, are investigated with the help of hourly meter data (heat load, volume flow, supply and return temperatures) from district heating substations. To identify abnormal quantization, a method is proposed based on Shannon's entropy, where lower entropy suggests a higher risk of abnormal quantization. The majority of the substations identified as having abnormal quantization with the proposed method have a meter resolution lower than that of the majority of the substations in the investigated district heating network. This lower resolution is likely responsible for identifying these substations, suggesting the method is limited by the meter resolution of the available data. To improve the results of the method, higher resolution and sampling frequency are likely needed.
For identifying drift and anomalous values, two methods are proposed, one for each symptom. Both methods utilise software for predicting hourly heat load, volume flow, and supply and return temperatures in individual district heating substations. The method suggested for identifying drift uses the mean value of each predicted and measured quantity during the investigated period. The mean of the prediction is compared to the mean of the measured values, and a large difference would suggest a risk of drift. However, this method has not been evaluated due to difficulties in finding a suitable validation method.
The proposed method for detecting anomalous values is based on finding anomalous residuals when comparing the prediction from the prediction software to the measured values. To find the anomalous residuals, the method uses an anomaly detection algorithm called IsolationForest. The method produces rankable lists in which substations at risk of anomalies are ranked higher. Four different lists were evaluated by an expert group. For the two best performing lists, approximately half of the top 15 substations were classified as containing anomalies by the expert group. The proposed method for detecting anomalous values shows promising results, especially considering how easily the method could be added to a district heating network. Future work will focus on reducing the number of false positives. Suggestions for lowering the false positive rate include alterations or checks on the prediction models used.
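The Shannon-entropy indicator for abnormal quantization can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the rounding precision and the use of base-2 logarithms are assumptions.

```python
import numpy as np

def meter_entropy(readings, decimals=3):
    """Shannon entropy of the empirical distribution of meter readings.

    A coarsely quantized meter reuses only a few distinct values, so its
    reading distribution has low entropy; a fine-resolution meter spreads
    probability mass over many values and scores high.
    """
    vals, counts = np.unique(np.round(readings, decimals), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

Ranking substations by this score from low to high gives a shortlist of candidates for abnormal quantization, subject to the meter-resolution caveat discussed above.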
In this thesis, a methodology for the detection of anomalies in the cardiovascular system is presented. The cardiovascular system is one of the most fascinating and complex physiological systems. Nowadays, cardiovascular diseases constitute one of the most important causes of mortality in the world: for instance, an estimated 17.3 million people died in 2008 from cardiovascular diseases. Therefore, many studies have been devoted to modeling the cardiovascular system in order to better understand its behavior and find new reliable diagnosis techniques. The lumped parameter model of the cardiovascular system proposed in  is restructured using a hybrid systems approach in order to include a discrete input vector that represents the influence of the mitral and aortic valves in the different phases of the cardiac cycle. Starting from this model, a Taylor expansion around the nominal values of a vector of parameters is conducted. This expansion serves as the foundation for a component fault detection process to detect changes in the physiological parameters of the cardiovascular system which could be associated with cardiovascular anomalies such as atherosclerosis, aneurysm, high blood pressure, etc. An Extended Kalman Filter is used in order to achieve a joint estimation of the state vector and the changes in the considered parameters. Finally, a bank of filters is, as in , used in order to detect the appearance of heart valve diseases, particularly stenosis and regurgitation. The first numerical results obtained are presented.
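The joint state and parameter estimation idea can be illustrated on a scalar toy system x[k+1] = a*x[k] with unknown parameter a, using a state vector augmented with the parameter. This is a sketch, not the cardiovascular model; the noise levels, initial guesses, and the toy dynamics are assumptions.

```python
import numpy as np

def ekf_joint(ys, q=1e-4, r=0.01):
    """EKF for x[k+1] = a*x[k] with unknown a, using augmented state s = [x, a]."""
    s = np.array([ys[0], 0.5])             # initial guesses for x and a
    P = np.eye(2)                          # state covariance
    Q = np.diag([q, q])                    # process noise (keeps a adaptable)
    for y in ys[1:]:
        # Predict: f(x, a) = (a*x, a), Jacobian F = [[a, x], [0, 1]]
        F = np.array([[s[1], s[0]], [0.0, 1.0]])
        s = np.array([s[1] * s[0], s[1]])
        P = F @ P @ F.T + Q
        # Update with scalar measurement y = x + v
        H = np.array([[1.0, 0.0]])
        K = P @ H.T / (H @ P @ H.T + r)    # Kalman gain, shape (2, 1)
        s = s + (K * (y - s[0])).ravel()
        P = (np.eye(2) - K @ H) @ P
    return s                               # [x_hat, a_hat]
```

A drift of the estimated parameter away from its nominal value plays the role of the physiological-parameter change that the fault detection process monitors.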
Harat, Robert Oliver
Effective models of vibratory screens which can capture the true response characteristics are crucial to understanding the faults and failures which occur in vibratory screens. However, the currently available models are usually simplified and have limited validation against a physical screen. Much research has been conducted to optimise the screening efficiency of screens. The optimisation includes screen geometry, material processing of the screen and the dynamic response of the screen. These investigations have not been extended to the effects of different faults on the dynamic response of a vibratory screen. To replicate the dynamics of a physical vibratory screen, it is important to create a model with enough complexity to capture the dynamics of the screen. The model of the screen was validated using both modal analysis and the transient response of the screen. The modal analysis was used to ensure that the physical characteristics of the model are consistent with those of the physical screen. Once this was completed, the second validation aimed to investigate whether the model of the screen could capture transient faults which are measured experimentally. It was found that it was not possible to conclusively determine whether the finite element method (FEM) model could capture these faults. Finally, an intelligent method was used to distinguish between different faults and classify them accordingly. The intelligent method was also trained using the FEM data and then used to classify the physical screen data. / Dissertation (MEng)--University of Pretoria, 2020. / Mechanical and Aeronautical Engineering / MEng / Unrestricted
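The classify-from-simulation idea, training a classifier on FEM-derived features and applying it to measured features, can be sketched with a nearest-centroid classifier. This is a deliberately simple stand-in for the thesis's intelligent method; the fault labels and feature values are invented for illustration.

```python
import numpy as np

def fit_centroids(features, labels):
    """Compute one centroid per fault class from (simulated) training features."""
    classes = sorted(set(labels))
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in classes}

def classify(centroids, x):
    """Assign a (measured) feature vector to the nearest class centroid."""
    x = np.asarray(x, dtype=float)
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

The sim-to-real step relies on the FEM features occupying roughly the same region of feature space as the measured ones, which is exactly what the model validation above is meant to establish.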
A Neural Network Approach to Fault Detection in Spacecraft Attitude Determination and Control Systems. Schreiner, John N., 01 May 2015 (has links)
This thesis proposes a method of performing fault detection and isolation in spacecraft attitude determination and control systems (ADCS). The proposed method works by deploying a trained neural network to analyze a set of residuals that are defined such that they encompass the attitude control, guidance, and attitude determination subsystems. Eight neural networks were trained using either the resilient backpropagation, Levenberg-Marquardt, or Levenberg-Marquardt with Bayesian regularization training algorithms. The results of each of the neural networks were analyzed to determine the accuracy of the networks with respect to isolating the faulty component or faulty subsystem within the ADCS. The performance of the proposed neural network-based fault detection and isolation method was compared and contrasted with other ADCS FDI methods. The results obtained via simulation showed that the best neural networks employing this method successfully detected the presence of a fault 79% of the time. The faulty subsystem was successfully isolated 75% of the time, and the faulty components within the faulty subsystem were isolated 37% of the time.