
Studies in Data Reconciliation Using Principal Component Analysis

Measurements such as flow rates from a chemical process are inherently inaccurate. They are contaminated by random errors and possibly by gross errors arising from process disturbances, leaks, departures from steady state, and biased instrumentation. Such measurements violate conservation laws and other process constraints. The goal of data reconciliation is to resolve the contradictions between the measurements and their constraints, and to process contaminated data into consistent information: estimating the true values of measured variables, detecting gross errors, and solving for unmeasured variables.

This thesis presents a modification of a bilinear data reconciliation model that can handle any measurement covariance structure, followed by the construction of principal component tests that are sharper in detecting gross errors, and substantially more powerful in correctly identifying them, than the statistical tests currently used in data reconciliation. Sequential analysis is combined with principal component analysis to provide a procedure for detecting persistent gross errors.

The concept of zero accumulation is used to determine the applicability of the established linear/bilinear data reconciliation model and algorithms. A two-stage algorithm is presented to detect zero accumulation in the presence of gross errors.

An interesting finding is that the univariate and maximum power tests can be quite poor at detecting gross errors and can lead to confounding in their identification.

Doctor of Philosophy (PhD)
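To make the setting concrete, the following is a minimal sketch (not taken from the thesis) of linear data reconciliation with a principal component test on the constraint residuals. It assumes a steady-state flow network with constraint matrix A, measurement vector y, and measurement covariance Sigma; the flows, node structure, and 5% significance level are illustrative.

import numpy as np

# Hypothetical network: 4 measured flows around 2 nodes, with mass balances A @ x_true = 0.
A = np.array([[1.0, -1.0, -1.0,  0.0],
              [0.0,  1.0,  0.0, -1.0]])
y = np.array([10.2, 6.9, 3.1, 7.4])           # measured flows (illustrative values)
Sigma = np.diag([0.10, 0.08, 0.05, 0.09])     # measurement covariance (need not be diagonal)

# Constraint residuals and their covariance under the no-gross-error hypothesis.
r = A @ y
V = A @ Sigma @ A.T

# Principal component test: eigen-decompose V and standardize each score;
# under the null hypothesis each score is approximately N(0, 1).
lam, U = np.linalg.eigh(V)
pc_scores = (U.T @ r) / np.sqrt(lam)
suspect = np.abs(pc_scores) > 1.96            # flag components at the 5% level

# Weighted least-squares reconciliation: project y onto the constraint space.
x_hat = y - Sigma @ A.T @ np.linalg.solve(V, r)
print(pc_scores, suspect, x_hat, A @ x_hat)   # A @ x_hat is approximately 0

The reconciled estimate x_hat satisfies the balances exactly, while the standardized principal component scores provide the test statistics whose detection and identification power the thesis compares against the conventional univariate and maximum power tests.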

Identifier: oai:union.ndltd.org:mcmaster.ca/oai:macsphere.mcmaster.ca:11375/7088
Date: 08 1900
Creators: Tong, Hongwei
Contributors: Crowe, Cameron M., Chemical Engineering
Source Sets: McMaster University
Detected Language: English
Type: thesis
