1

Bayesian Methods for On-Line Gross Error Detection and Compensation

Gonzalez, Ruben
Data reconciliation and gross error detection are traditional methods for detecting mass balance inconsistencies in process instrument data. These methods use a static approach to statistical evaluation. This thesis is concerned with using an alternative statistical approach (Bayesian statistics) to detect mass balance inconsistency in real time. The proposed dynamic Bayesian solution makes use of a state space process model which incorporates mass balance relationships, so that a governing set of mass balance variables can be estimated using a Kalman filter. Due to the incorporation of mass balances, many model parameters are defined by first principles. However, some parameters, namely the observation and state covariance matrices, need to be estimated from process data before the dynamic Bayesian methods can be applied. This thesis makes use of Bayesian machine learning techniques to estimate these parameters, separating process disturbances from instrument measurement noise. / Process Control
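For illustration only (not code from the thesis), a minimal sketch of the kind of Kalman filter described above, where the mass balance f3 = f1 + f2 is built into the observation model. The random-walk transition model and the covariances Q and R below are assumptions; the thesis estimates the covariances from process data with Bayesian machine learning.

```python
import numpy as np

# State: the two independent flows (f1, f2); the third instrument measures the
# combined stream, so the mass balance f3 = f1 + f2 sits inside H.
F = np.eye(2)                          # random-walk state transition (assumed)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
Q = 0.05**2 * np.eye(2)                # assumed process-disturbance covariance
R = 0.2**2 * np.eye(3)                 # assumed instrument-noise covariance

def kalman_step(x, P, y):
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with the three flow measurements
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
x_true = np.array([10.0, 4.0])
x, P = np.zeros(2), 10.0 * np.eye(2)
for _ in range(50):
    y = H @ x_true + rng.normal(0.0, 0.2, size=3)   # balance-consistent flows + noise
    x, P = kalman_step(x, P, y)
print("estimated mass balance variables:", np.round(x, 2))
```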
2

Bayesian Methods for On-Line Gross Error Detection and Compensation

Gonzalez, Ruben Unknown Date
No description available.
3

Ein automatisches Verfahren für geodätische Berechnungen / An Automatic Method for Geodetic Computations

Lehmann, Rüdiger 17 October 2016
This contribution describes an automatic method that can be applied to classical geodetic computation problems. Starting from given input quantities (e.g. coordinates of known points, measurements), computation opportunities for all other relevant quantities are found. For redundant input quantities there usually exists a multitude of different computation opportunities, all of which are found automatically and their results computed. If the computation is non-unique but only a finite number of solutions exist, all solutions are found and computed. By comparing the different computation results it is possible to detect gross errors in the input quantities and to produce a robust final result. The method does not work stochastically, so no stochastic model of the observations is required. The description of the algorithm is illustrated with examples. The method was implemented as a webserver script and is freely available on the internet.
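As a hedged illustration of the principle (not the author's algorithm or webserver script), a toy leveling example: one unknown height has several redundant computation opportunities, the candidate results are compared to flag a gross error, and a robust final value is formed. All numbers and the tolerance are made up.

```python
import numpy as np

# Hypothetical leveling example: the height of a new point can be computed from
# each of three benchmarks independently (three "computation opportunities").
benchmark_heights = np.array([100.000, 102.500, 98.750])   # m, known points (made up)
measured_dh = np.array([5.234, 2.731, 6.980])               # m, measured height differences

# Every redundant route yields its own result for the unknown height.
candidates = benchmark_heights + measured_dh
print("candidate heights:", candidates)

# Compare the results: routes deviating strongly from the median are flagged as
# carrying gross errors; the mean of the remaining routes is a robust final result.
median = np.median(candidates)
tol = 0.010                                    # 10 mm tolerance (assumed)
ok = np.abs(candidates - median) <= tol
print("routes suspected of gross errors:", np.where(~ok)[0])
print("robust final height: %.3f m" % candidates[ok].mean())
```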
4

Vyrovnání provozních dat v energetických procesech / Data reconciliation of energy processes

Nováček, Adam January 2015
This thesis addresses the reconciliation of measured data. The objective was to reconcile measured values from an electric drum dryer so that they exactly satisfy a mathematical model of the drying process. Nonlinear data reconciliation, formulated as a constrained nonlinear optimization problem, was used. The entire calculation is implemented in MATLAB, and the outputs are graphs of the reconciled dryer measurements, such as inlet and outlet temperature and humidity, differential pressure of the exhaust moist air, laundry weight, atmospheric pressure and electric power input. The quality of the solution can be characterized by the amount of evaporated water: the wet and dry laundry weigh 27.7 kg and 17.7 kg, the amount of evaporated water calculated from the raw measurements was almost 18.8 kg, while with the reconciled measurements it was 9.7 kg. The goal of the thesis, obtaining more realistic values, was thus achieved.
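The thesis itself uses MATLAB; as a minimal sketch of the same idea, the three quantities quoted above can be reconciled by constrained weighted least squares (here with SciPy's SLSQP solver). The measurement standard deviations are assumed for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Raw measurements: wet laundry, dry laundry, evaporated water [kg]
x_meas = np.array([27.7, 17.7, 18.8])
sigma = np.array([0.3, 0.3, 1.5])        # hypothetical measurement standard deviations

def objective(x):
    # weighted least-squares adjustment of the measurements
    return np.sum(((x - x_meas) / sigma) ** 2)

constraints = [{
    "type": "eq",
    # mass balance: evaporated water = wet laundry - dry laundry
    "fun": lambda x: x[2] - (x[0] - x[1]),
}]

res = minimize(objective, x_meas, method="SLSQP", constraints=constraints)
m_wet, m_dry, m_evap = res.x
print(f"reconciled: wet={m_wet:.2f} kg, dry={m_dry:.2f} kg, evaporated={m_evap:.2f} kg")
```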
5

The use of classification methods for gross error detection in process data

Gerber, Egardt
Thesis (MScEng) -- Stellenbosch University, 2013.
All process measurements contain some element of error. Typically, a distinction is made between random errors, with zero expected value, and gross errors with non-zero magnitude. Data Reconciliation (DR) and Gross Error Detection (GED) comprise a collection of techniques designed to attenuate measurement errors in process data in order to reduce the effect of the errors on subsequent use of the data. DR proceeds by finding the optimum adjustments so that the reconciled measurement data satisfy imposed process constraints, such as material and energy balances. The DR solution is optimal under the assumed statistical random error model, typically Gaussian with zero mean and known covariance. The presence of outliers and gross errors in the measurements or imposed process constraints invalidates the assumptions underlying DR, so that the DR solution may become biased. GED is required to detect, identify and remove or otherwise compensate for the gross errors. Typically GED relies on formal hypothesis testing of constraint residuals or measurement-adjustment-based statistics derived from the assumed random error statistical model. Classification methodologies assign observations to one of several possible groups. For the GED problem, artificial neural networks (ANNs) have historically been applied to classify a data set as either containing or not containing a gross error. The hypothesis investigated in this thesis is that classification methodologies, specifically classification trees (CT) and linear or quadratic classification functions (LCF, QCF), may provide an alternative to the classical GED techniques. This hypothesis is tested via the modelling of a simple steady-state process unit with associated simulated process measurements. DR is performed on the simulated process measurements in order to satisfy one linear and two nonlinear material conservation constraints. Selected features from the DR procedure and process constraints are incorporated into two separate input vectors for classifier construction. The performance of the classification methodologies developed on each input vector is compared with the classical measurement test in order to address the posed hypothesis. General trends in the results are as follows:
- The power to detect and/or identify a gross error is a strong function of the gross error magnitude as well as its location, for all the classification methodologies as well as for the measurement test.
- For some locations there are large differences between the power to detect a gross error and the power to identify it correctly. This is consistent over all the classifiers and their associated measurement tests, and indicates significant smearing of gross errors.
- In general, the classification methodologies have higher power for equivalent type I error than the measurement test.
- The measurement test is superior for small-magnitude gross errors, and for specific locations, depending on which classification methodology it is compared with.
There is significant scope to extend the work to more complex processes and constraints, including dynamic processes with multiple gross errors in the system. Further investigation into the optimal selection of input vector elements for the classification methodologies is also required.
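A hedged sketch (not the thesis code) of the comparison described above, on a hypothetical single linear balance node: data reconciliation adjustments are standardized, the classical measurement test thresholds them, and a classification tree (scikit-learn, assumed available) is trained on the same features.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Single linear mass-balance node x1 - x2 - x3 = 0 (illustrative, not the thesis flowsheet).
# With only one constraint the standardized adjustments mainly support detection, not identification.
A = np.array([[1.0, -1.0, -1.0]])
x_true = np.array([10.0, 6.0, 4.0])
sigma = np.array([0.1, 0.1, 0.1])
S = np.diag(sigma**2)

# Covariance of the reconciliation adjustments a = y - y_hat for the constraint A x = 0
V = S @ A.T @ np.linalg.inv(A @ S @ A.T) @ A @ S

def standardized_adjustments(y):
    a = S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ y)   # DR adjustment
    return np.abs(a) / np.sqrt(np.diag(V))

# Simulate measurements, half of them with a gross error on a random stream
n = 4000
X, labels = [], []
for _ in range(n):
    y = x_true + rng.normal(0.0, sigma)
    has_gross = rng.random() < 0.5
    if has_gross:
        y[rng.integers(0, 3)] += rng.choice([-1, 1]) * rng.uniform(0.3, 1.0)
    X.append(standardized_adjustments(y))
    labels.append(int(has_gross))
X, labels = np.array(X), np.array(labels)

X_tr, X_te, l_tr, l_te = train_test_split(X, labels, test_size=0.5, random_state=0)

# Classical measurement test: flag a gross error if any standardized adjustment exceeds a critical value
crit = 1.96
mt_pred = (X_te.max(axis=1) > crit).astype(int)

# Classification tree trained on the same standardized adjustments
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, l_tr)
ct_pred = tree.predict(X_te)

print("measurement test accuracy:   ", (mt_pred == l_te).mean())
print("classification tree accuracy:", (ct_pred == l_te).mean())
```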
6

Desenvolvimento de um software para reconciliação de dados de processos quimicos e petroquimicos / Development of software for data reconciliation of chemical and petrochemical processes

Barbosa, Agremis Guinho 11 June 2003
Advisor: Rubens Maciel Filho / Master's dissertation (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Química / Abstract: The purpose of this work is the development of computational routines for conditioning chemical process data so that the process behavior is represented as reliably as possible. A reliable process description is the foundation of any control or optimization system, since such systems act in response to the process measurements (its description). Treating and correcting measurement errors, and more generally estimating parameters, are therefore steps that should not be neglected in process control and optimization. The data conditioning considered in this work is data reconciliation, whose main characteristic is the use of a constraint model to condition the information. The constraint models are usually mass and energy balances and the summations of mass and molar fractions, but other models can also be used. Mathematically, data reconciliation is an optimization problem subject to constraints; here it is formulated as weighted least squares subject to constraints and solved with a QR factorization approach. A further objective is to gather the developed routines into a single, easy-to-use computational tool for describing, solving and analysing data reconciliation problems, with a database communication mechanism that provides real-time interactivity with process data acquisition systems. / Master's degree in Chemical Engineering (Chemical Process Development)
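As a minimal sketch of weighted least squares subject to linear constraints (the dissertation's own software and flowsheet are not reproduced here), the small linear system of the analytic solution is solved via a QR factorization with NumPy; the flowsheet, measurements and variances are hypothetical.

```python
import numpy as np

# Hypothetical linear balance constraints A x = 0 for a two-node flowsheet:
#   node 1: x1 - x2 - x3 = 0
#   node 2: x3 - x4 = 0
A = np.array([[1.0, -1.0, -1.0, 0.0],
              [0.0,  0.0,  1.0, -1.0]])
y = np.array([100.0, 61.0, 41.5, 38.7])          # raw measurements
Sigma = np.diag([1.0, 0.8, 0.5, 0.5]) ** 2       # assumed measurement variances

# Weighted least squares subject to A x = 0 has the closed form
#   x_hat = y - Sigma A^T (A Sigma A^T)^{-1} A y
M = A @ Sigma @ A.T
r = A @ y                                        # constraint residuals of the raw data

# Solve M z = r via QR factorization instead of forming an explicit inverse
Q, R = np.linalg.qr(M)
z = np.linalg.solve(R, Q.T @ r)                  # R is upper triangular

x_hat = y - Sigma @ A.T @ z
print("reconciled flows:", np.round(x_hat, 3))
print("constraint residuals after reconciliation:", np.round(A @ x_hat, 10))
```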
7

Výpočtový systém pro vyhodnocení výrobních ukazatelů spaloven komunálních odpadů / Computational tool for processing of production data from waste-to-energy systems

Machát, Ondřej January 2013
This thesis evaluates crucial operational indicators of a waste-to-energy plant, above all the lower heating value of municipal solid waste and the boiler efficiency. An approach for improving this evaluation by means of mathematical methods is proposed. The approach is implemented in a computational tool developed in Microsoft Excel, tested, and subsequently applied to operational data from a real waste-to-energy plant.
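Purely as an illustration of the kind of calculation involved (not taken from the thesis or its Excel tool), a back-of-the-envelope estimate of the waste lower heating value from a steam-side energy balance, with made-up plant figures and an assumed boiler efficiency.

```python
# Hypothetical steam-side energy balance for back-calculating the lower heating value
# of municipal solid waste (illustration only; all numbers are made up, not plant data).

m_waste = 12.0            # t/h of waste fed to the boiler
m_steam = 36.0            # t/h of steam produced
h_steam = 3230.0          # kJ/kg, superheated steam enthalpy (assumed ~400 degC / 4 MPa)
h_feedwater = 440.0       # kJ/kg, feedwater enthalpy (assumed ~105 degC)
eta_boiler = 0.82         # assumed boiler efficiency

q_useful = m_steam * 1000.0 * (h_steam - h_feedwater)       # kJ/h transferred to the steam
lhv = q_useful / (eta_boiler * m_waste * 1000.0)            # kJ/kg of waste

print(f"estimated LHV of the waste: {lhv / 1000.0:.2f} MJ/kg")
```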
8

Ein automatisches Verfahren für geodätische Berechnungen

Lehmann, Rüdiger January 2015
This contribution describes an automatic method that can be applied to classical geodetic computation problems. Starting from given input quantities (e.g. coordinates of known points, measurements), computation opportunities for all other relevant quantities are found. For redundant input quantities there usually exists a multitude of different computation opportunities, all of which are found automatically and their results computed. If the computation is non-unique but only a finite number of solutions exist, all solutions are found and computed. By comparing the different computation results it is possible to detect gross errors in the input quantities and to produce a robust final result. The method does not work stochastically, so no stochastic model of the observations is required. The description of the algorithm is illustrated with examples. The method was implemented as a webserver script and is freely available on the internet.
9

Aspects of analysis of small-sample right censored data using generalized Wilcoxon rank tests

Öhman, Marie-Louise January 1994
The estimated bias and variance of commonly applied and jackknife variance estimators and observed significance level and power of standardised generalized Wilcoxon linear rank sum test statistics and tests, respectively, of Gehan and Prentice are compared in a Monte Carlo simulation study. The variance estimators are the permutational-, the conditional permutational- and the jackknife variance estimators of the test statistic of Gehan, and the asymptotic- and the jackknife variance estimators of the test statistic of Prentice. In unbalanced small sample size problems with right censoring, the commonly applied variance estimators for the generalized Wilcoxon rank test statistics of Gehan and Prentice may be biased. In the simulation study it appears that variance properties and observed level and power may be improved by using the jackknife variance estimator. To establish the sensitivity to gross errors and misclassifications for standardised generalized Wilcoxon linear rank sum statistics in small samples with right censoring, the sensitivity curves of Tukey are used. For a certain combined sample, which might contain gross errors, a relatively simple method is needed to establish the applicability of the inference drawn from the selected rank test. One way is to use the change of decision point, which in this thesis is defined as the smallest proportion of altered positions resulting in an opposite decision. When little is known about the shape of a distribution function, non-parametric estimates for the location parameter are found by making use of censored one-sample- and two-sample rank statistics. Methods for constructing censored small sample confidence intervals and asymptotic confidence intervals for a location parameter are also considered. Generalisations of the solutions from uncensored one-sample and two-sample rank tests are utilised. A Monte Carlo simulation study indicates that rank estimators may have smaller absolute estimated bias and smaller estimated mean squared error than a location estimator derived from the Product-Limit estimator of the survival distribution function. The ideas described and discussed are illustrated with data from a clinical trial of Head and Neck cancer.
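A simplified sketch (not from the thesis) of Gehan's pairwise scoring and a generic delete-one jackknife variance estimate applied directly to the statistic U; the thesis studies jackknife estimators for the standardized statistics in much more detail, and the toy data below are made up.

```python
import numpy as np

def gehan_score(t1, d1, t2, d2):
    """Gehan pairwise score: +1 if the group-1 observation is known to be larger,
    -1 if known to be smaller, 0 if censoring makes the ordering ambiguous."""
    if (t1 > t2 and d2 == 1) or (t1 == t2 and d1 == 0 and d2 == 1):
        return 1
    if (t1 < t2 and d1 == 1) or (t1 == t2 and d1 == 1 and d2 == 0):
        return -1
    return 0

def gehan_statistic(times1, events1, times2, events2):
    """Gehan's generalized Wilcoxon statistic U = sum of pairwise scores."""
    return sum(gehan_score(t1, d1, t2, d2)
               for t1, d1 in zip(times1, events1)
               for t2, d2 in zip(times2, events2))

def jackknife_variance(times1, events1, times2, events2):
    """Generic leave-one-out jackknife variance estimate of U (simplified sketch)."""
    n1, n2 = len(times1), len(times2)
    n = n1 + n2
    loo = []
    for i in range(n1):
        loo.append(gehan_statistic(np.delete(times1, i), np.delete(events1, i),
                                   times2, events2))
    for j in range(n2):
        loo.append(gehan_statistic(times1, events1,
                                   np.delete(times2, j), np.delete(events2, j)))
    loo = np.array(loo, dtype=float)
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

# Small right-censored example (event indicator 1 = observed, 0 = censored); made-up data
t1 = np.array([3.0, 5.0, 7.0, 9.0]);  d1 = np.array([1, 0, 1, 1])
t2 = np.array([2.0, 4.0, 4.0, 8.0]);  d2 = np.array([1, 1, 0, 0])

U = gehan_statistic(t1, d1, t2, d2)
var_jk = jackknife_variance(t1, d1, t2, d2)
print(f"Gehan U = {U}, standardized = {U / np.sqrt(var_jk):.3f}")
```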
10

An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation.

Lin, TsungPo 26 June 2008
Performance engineers face major challenges in modeling and simulating after-market power systems due to system degradation and measurement errors. Currently, most of the power generation industry utilizes deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and increases the risk of providing performance guarantees. In this research work, maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC, one can reduce the calibration uncertainty and mitigate error propagation into the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation using the principle of error propagation. System degradation is then quantified by comparing the performance of the calibrated model with its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance; the GED techniques used in this stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated in the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve better efficiency in the combined scheme of least-squares based data reconciliation and hypothesis-testing based GED, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using response surface equations (RSE) and system/process decomposition are incorporated into the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees that arise from uncertainties in performance simulation.
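As a hedged sketch of the screening stage described above (not the thesis implementation), a PCA model of normal operation flags samples with a large squared prediction error (Q statistic) as candidates for serious gross errors; the correlated "plant" variables and the control limit are hypothetical, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Hypothetical correlated plant measurements (load, fuel flow, exhaust temperature);
# illustration only -- not the combined-cycle model from the thesis.
n_train = 500
load = rng.uniform(0.6, 1.0, n_train)
X_train = np.column_stack([
    load,
    2.0 * load + rng.normal(0, 0.01, n_train),      # fuel flow roughly proportional to load
    550 + 80 * load + rng.normal(0, 2.0, n_train),   # exhaust temperature rises with load
])

# Screening stage: PCA model of normal operation; samples with a large squared
# prediction error (Q statistic) are flagged as candidates for serious gross errors.
mean, std = X_train.mean(axis=0), X_train.std(axis=0)
Z = (X_train - mean) / std
pca = PCA(n_components=2).fit(Z)

def q_statistic(x):
    z = (x - mean) / std
    z_hat = pca.inverse_transform(pca.transform(z.reshape(1, -1)))[0]
    return float(np.sum((z - z_hat) ** 2))

q_train = np.array([q_statistic(x) for x in X_train])
q_limit = np.percentile(q_train, 99)               # simple empirical control limit

# A new sample with a gross error on the exhaust-temperature sensor
x_new = np.array([0.8, 1.6, 614.0 + 25.0])
print(f"Q = {q_statistic(x_new):.3f}, limit = {q_limit:.3f}, "
      f"gross error suspected: {q_statistic(x_new) > q_limit}")
```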
