  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

A Statistical Methodology for Classifying Time Series in the Context of Climatic Data

Ramírez Buelvas, Sandra Milena, 24 February 2022
According to different European standards and several studies, it is necessary to monitor and analyze the microclimatic conditions in museums and similar buildings, with the goal of preserving artworks. With the aim of offering tools to monitor the climatic conditions, this dissertation proposes a new statistical methodology for classifying time series of different climatic parameters, such as relative humidity and temperature. The methodology consists of applying a classification method using variables that are computed from the time series. The first two classification methods are versions of known sparse PLS methods that had not previously been applied to time-dependent data. The third method is a new proposal that combines two known algorithms. These classification methods are based on different versions of sparse partial least squares discriminant analysis (sPLS-DA, SPLSDA, and sPLS) and linear discriminant analysis (LDA). The variables computed from the time series correspond to parameter estimates from functions, methods, or models commonly found in time series analysis, e.g., the seasonal ARIMA model, the seasonal ARIMA-TGARCH model, the seasonal Holt-Winters method, the spectral density function, the autocorrelation function (ACF), the partial autocorrelation function (PACF), and the moving range (MR), among other functions. Some variables employed in the field of astronomy (for classifying stars) were also used. The proposed methodology consists of two parts.
Firstly, different variables are computed by applying the methods, models, or functions mentioned above to the time series. Next, once the variables are calculated, they are used as input for a classification method such as sPLS-DA, SPLSDA, or sPLS with LDA (the new proposal). When there was no prior information about the clusters of the different time series, the first two components from principal component analysis (PCA) were used as input to the k-means algorithm to identify possible clusters of time series. In addition, results from the random forest algorithm were compared with results from sPLS-DA. This study analyzed three sets of time series of relative humidity or temperature, recorded in different buildings (Valencia's Cathedral, the archaeological site of L'Almoina, and the baroque church of Saint Thomas and Saint Philip Neri) in Valencia, Spain. The clusters of the time series were analyzed according to the different zones or different sensor heights used for monitoring the climatic conditions in these buildings. The random forest algorithm and the different versions of sparse PLS helped identify the main variables for classifying the time series. The results from sPLS-DA and random forest were very similar when the input variables came from the seasonal Holt-Winters method or from functions applied to the time series, and the results from sPLS-DA were easier to interpret. When the different versions of sparse PLS used variables from the seasonal Holt-Winters method as input, the clusters of the time series were identified effectively. The variables from seasonal Holt-Winters yielded the best, or second best, results according to the classification error rate.
Among the different versions of sparse PLS proposed, sPLS with LDA classified the time series with the lowest classification error rate while using the fewest variables. We propose using a sparse version of PLS (sPLS-DA, or sPLS with LDA) with variables computed from time series in order to classify the series. For the different data sets studied, the methodology produced parsimonious models with few variables and achieved a satisfactory, easily interpreted discrimination of the different clusters of time series. This methodology can be useful for characterizing and monitoring microclimatic conditions in museums, or similar buildings, to prevent problems with artwork. / I gratefully acknowledge the financial support of Pontificia Universidad Javeriana Cali – PUJ and Instituto Colombiano de Crédito Educativo y Estudios Técnicos en el Exterior – ICETEX, which awarded me the scholarships 'Convenio de Capacitación para Docentes O. J. 086/17' and 'Programa Crédito Pasaporte a la Ciencia ID 3595089 foco-reto salud' respectively. These scholarships were essential for obtaining the Ph.D. I also gratefully acknowledge the financial support of the European Union's Horizon 2020 research and innovation programme under grant agreement No. 814624. / Ramírez Buelvas, SM. (2022). A Statistical Methodology for Classifying Time Series in the Context of Climatic Data [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/181123
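The first step of the methodology — summarising each series by features such as ACF values and then projecting with PCA before clustering or classification — can be sketched in NumPy. This is an illustrative sketch on simulated AR(1) series, not the thesis code: only the ACF is used as a feature here, and the sPLS-DA / k-means steps that follow in the thesis are omitted.

```python
import numpy as np

def acf(x, nlags=5):
    """Sample autocorrelation function at lags 1..nlags (one feature family
    used in the thesis; PACF, spectral density, Holt-Winters parameters, etc.
    would be added the same way)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(1, nlags + 1)])

rng = np.random.default_rng(0)

def ar1(phi, n=300):
    """Simulate an AR(1) series as a stand-in for a climatic time series."""
    e = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    return x

# Two simulated groups with different temporal structure
X = np.array([acf(ar1(0.9)) for _ in range(10)] +
              [acf(ar1(-0.5)) for _ in range(10)])

# PCA via SVD on the centred feature matrix; in the thesis the first two
# component scores feed k-means when no a priori clusters are known.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T   # (20 series, 2 principal component scores)
```

With features this different between groups, the two clusters already separate clearly along the first principal component.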
32

Development of Robust Correlation Algorithms for Image Velocimetry using Advanced Filtering

Eckstein, Adric, 18 January 2008
Digital Particle Image Velocimetry (DPIV) is a planar measurement technique that measures the velocity within a fluid by correlating the motion of flow tracers over a sequence of images recorded with a camera-laser system. Sophisticated digital processing algorithms are required to provide high enough accuracy for quantitative DPIV results. This study explores the potential of a variety of cross-correlation filters to improve the accuracy and robustness of the DPIV estimation. These techniques incorporate the Phase Transform (PHAT) Generalized Cross Correlation (GCC) filter applied to the image cross-correlation. The use of spatial windowing is subsequently examined and shown to be ideally suited to phase correlation estimators, due to their invariance to loss-of-correlation effects. The Robust Phase Correlation (RPC) estimator is introduced, coupling the phase correlation with spatial windowing. The RPC estimator additionally incorporates a spectral filter designed from an analytical decomposition of the DPIV signal-to-noise ratio (SNR). This estimator is validated in a variety of artificial image simulations, the JPIV standard image project, and experimental images, which indicate reductions in error on the order of 50% when correlating low-SNR images. Two variations of the RPC estimator are also introduced: the Gaussian Transformed Phase Correlation (GTPC), designed to optimize the subpixel interpolation, and the Spectral Phase Correlation (SPC), which estimates the image shift directly from the phase content of the correlation. While these estimators are designed for DPIV, the methodology described here provides a universal framework for digital signal correlation analysis, which could be extended to a variety of other systems. / Master of Science
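The PHAT-filtered GCC at the heart of these phase-correlation estimators can be illustrated with a minimal NumPy sketch: the cross-spectrum is normalised to unit magnitude so only phase information survives, and the correlation peak gives the integer-pixel displacement. This is a bare sketch for a pure circular shift, without the spatial windowing, RPC spectral filter, or subpixel interpolation discussed above.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer pixel shift of a relative to b via phase
    correlation (PHAT-style GCC: keep the phase of the cross-spectrum,
    discard the magnitude)."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12           # phase transform (PHAT) filter
    corr = np.real(np.fft.ifft2(R))  # ideally a delta at the displacement
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the circular peak position to a signed shift
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

# Synthetic "particle image" and a circularly shifted copy
rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
est = phase_correlation_shift(shifted, img)   # recovers (5, -3)
```

For a pure circular shift the normalised cross-spectrum is a pure phase ramp, so the inverse FFT is (up to numerical noise) an exact delta at the displacement, which is why the peak detection is trivially robust here.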
33

Fraktionierung des Chemischen Sauerstoffbedarfs mithilfe von Extinktionsmessungen im UV/Vis-Spektralbereich (Fractionation of the chemical oxygen demand using extinction measurements in the UV/Vis spectral range)

Weber, Steffen, 21 April 2023
Optical spectrophotometry is evaluated for its suitability as a method for the continuous measurement of wastewater quality. The chemical oxygen demand (COD), the quantity to be determined here, is used as a central parameter for the material pollution of wastewater and for its detection in surface waters. The information content about the summary organic carbon load is increased by an additional fractionation. In a laboratory measurement campaign based on respiration experiments, data are generated from extinction values of the UV/Vis spectrum and reference values (standard analysis parameters and substance concentrations simulated with the Activated Sludge Model No. 1). Building on this, calibration models for the COD and individual fractions are developed using the partial least squares regression approach, and their practical suitability is checked in the context of an application example. As a result of this work, calibration models are available for use in municipal wastewater under dry weather conditions. The prediction quality decreases with increasing differentiation. Further use of the calculated equivalent concentrations for the COD fractions (SS, XS, SI, and XI), e.g., as a calibration variable for mass transfer models or as a control variable, is however advised against. The cause of the high measurement uncertainties was identified as insufficient adaptation to the changes in wastewater composition over the course of a dry weather day. A higher model quality is anticipated with an extended data basis, using standard analysis parameters (COD, CODmf, and BOD) determined in wastewater samples before and after respirative pretreatment in order to rule out certain substance groups. In addition, a rethink from static towards dynamic calibration functions for UV/Vis sensors is proposed. Generalizing the developed calibration models to other weather conditions, measurement locations, or sensors is not recommended.
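The partial least squares regression used for the calibration models above can be sketched as a minimal single-response PLS1 (NIPALS-style deflation) in NumPy. The toy "spectra" below are simulated stand-ins, not UV/Vis extinction data, and this is a generic PLS1 sketch, not the vendor (s::can) calibration or the thesis code.

```python
import numpy as np

def pls1(X, y, n_comp):
    """Minimal PLS1 regression (single response) with NIPALS-style deflation.
    Returns a prediction function built from the PLS regression vector."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xd, yd = X - Xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xd.T @ yd                 # weight vector from the covariance
        w /= np.linalg.norm(w)
        t = Xd @ w                    # scores
        tt = t @ t
        p = Xd.T @ t / tt             # X loadings
        q = (yd @ t) / tt             # y loading
        Xd = Xd - np.outer(t, p)      # deflate X and y
        yd = yd - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)   # regression coefficients
    return lambda Xnew: (Xnew - Xm) @ B + ym

# Toy "spectra": 50 samples x 20 wavelengths; the response depends on a
# few "bands" only, mimicking a COD calibration against extinction values.
rng = np.random.default_rng(2)
X = rng.normal(size=(50, 20))
y = 3 * X[:, 2] - 2 * X[:, 7] + 0.1 * rng.normal(size=50)

predict = pls1(X, y, n_comp=5)
r2 = 1 - np.sum((y - predict(X))**2) / np.sum((y - y.mean())**2)
```

In practice the number of latent components would be chosen by cross-validation, which is also where the decreasing prediction quality of the more differentiated COD fractions would show up.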
34

Statistical modelling of return on capital employed of individual units

Burombo, Emmanuel Chamunorwa 10 1900 (has links)
Return on Capital Employed (ROCE) is a popular financial instrument and communication tool for the appraisal of companies. Often, companies' management and other practitioners use untested rules and behavioural approaches when investigating the key determinants of ROCE, instead of the scientific statistical paradigm. The aim of this dissertation was to identify and quantify key determinants of ROCE of individual companies listed on the Johannesburg Stock Exchange (JSE), by comparing classical multiple linear regression, principal components regression, generalized least squares regression, and robust maximum likelihood regression approaches, in order to improve companies' decision making. The performance indicators used to arrive at the best approach were the coefficient of determination (R²), adjusted R², and Mean Square Residual (MSE). Since the ROCE variable had positive and negative values, two separate analyses were done. The classical multiple linear regression models were constructed using stepwise directed search for the dependent variable log ROCE for the two data sets. Assumptions were satisfied and the problem of multicollinearity was addressed. For the positive ROCE data set, the classical multiple linear regression model had an R² of 0.928, an adjusted R² of 0.927, an MSE of 0.013, and the lead key determinant was Return on Equity (ROE), with positive elasticity, followed by Debt to Equity (D/E) and Capital Employed (CE), both with negative elasticities. The model showed good validation performance. For the negative ROCE data set, the classical multiple linear regression model had an R² of 0.666, an adjusted R² of 0.652, an MSE of 0.149, and the lead key determinant was Assets per Capital Employed (APCE) with positive effect, followed by Return on Assets (ROA) and Market Capitalization (MC), both with negative effects. The model showed poor validation performance. The results indicated both more and less precision than those found by previous studies. 
This suggested that the key determinants are also important sources of variability in ROCE of individual companies that management needs to work with. To handle the problem of multicollinearity in the data, principal components were selected using the Kaiser-Guttman criterion. The principal components regression model was constructed using the dependent variable log ROCE for the two data sets. Assumptions were satisfied. For the positive ROCE data set, the principal components regression model had an R² of 0.929, an adjusted R² of 0.929, an MSE of 0.069, and the lead key determinant was PC4 (log ROA, log ROE, log Operating Profit Margin (OPM)), followed by PC2 (log Earnings Yield (EY), log Price to Earnings (P/E)), both with positive effects. The model resulted in a satisfactory validation performance. For the negative ROCE data set, the principal components regression model had an R² of 0.544, an adjusted R² of 0.532, an MSE of 0.167, and the lead key determinant was PC3 (ROA, EY, APCE), followed by PC1 (MC, CE), both with negative effects. The model indicated an accurate validation performance. The results showed that the use of principal components as independent variables did not improve classical multiple linear regression model prediction in our data. This implied that the key determinants are less important sources of variability in ROCE of individual companies that management needs to work with. Generalized least squares regression was used to assess heteroscedasticity and dependences in the data. It was constructed using stepwise directed search for the dependent variable ROCE for the two data sets. For the positive ROCE data set, the weighted generalized least squares regression model had an R² of 0.920, an adjusted R² of 0.919, an MSE of 0.044, and the lead key determinant was ROE with positive effect, followed by D/E with negative effect, Dividend Yield (DY) with positive effect, and lastly CE with negative effect. The model indicated an accurate validation performance. 
For the negative ROCE data set, the weighted generalized least squares regression model had an R² of 0.559, an adjusted R² of 0.548, an MSE of 57.125, and the lead key determinant was APCE, followed by ROA, both with positive effects. The model showed a weak validation performance. The results suggested that the key determinants are less important sources of variability in ROCE of individual companies that management needs to work with. Robust maximum likelihood regression was employed to handle the problem of contamination in the data. It was constructed using stepwise directed search for the dependent variable ROCE for the two data sets. For the positive ROCE data set, the robust maximum likelihood regression model had an R² of 0.998, an adjusted R² of 0.997, an MSE of 6.739, and the lead key determinant was ROE with positive effect, followed by DY and lastly D/E, both with negative effects. The model showed a strong validation performance. For the negative ROCE data set, the robust maximum likelihood regression model had an R² of 0.990, an adjusted R² of 0.984, an MSE of 98.883, and the lead key determinant was APCE with positive effect, followed by ROA with negative effect. The model also showed a strong validation performance. The results reflected that the key determinants are major sources of variability in ROCE of individual companies that management needs to work with. Overall, the findings showed that the use of robust maximum likelihood regression provided more precise results than the three competing approaches, because it is more consistent, sufficient and efficient, has a higher breakdown point, and imposes fewer conditions. Companies' management can establish and control proper marketing strategies using the key determinants, and the results of these strategies can lead to an improvement in ROCE. / Mathematical Sciences / M. Sc. (Statistics)
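The principal components regression step described in this abstract (standardise the predictors, retain the components the Kaiser-Guttman criterion keeps, i.e. those with eigenvalue greater than 1, then regress the response on the component scores) can be sketched as follows. This is a minimal illustration on synthetic, deliberately collinear data; the variable names (roe, de, ce, mc) are stand-ins and not the dissertation's actual data or code.

```python
# Hedged sketch of principal components regression with the Kaiser-Guttman
# criterion; all data below are synthetic assumptions for demonstration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
# Two collinear pairs of "determinants" (stand-ins for ROE, D/E, CE, MC)
roe = rng.normal(size=n)
de = 0.8 * roe + 0.2 * rng.normal(size=n)   # strongly correlated with roe
ce = rng.normal(size=n)
mc = 0.8 * ce + 0.2 * rng.normal(size=n)    # strongly correlated with ce
X = np.column_stack([roe, de, ce, mc])
y = 1.5 * roe - 0.7 * de - 0.3 * ce + 0.1 * rng.normal(size=n)  # log-ROCE proxy

# Standardise, then keep components with eigenvalue > 1 (Kaiser-Guttman)
Z = StandardScaler().fit_transform(X)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
keep = eigvals > 1.0
scores = Z @ eigvecs[:, keep]               # retained component scores

pcr = LinearRegression().fit(scores, y)     # regress response on the scores
print("components kept:", int(keep.sum()), "R^2:", round(pcr.score(scores, y), 3))
```

With the two collinear pairs above, exactly the two block components survive the eigenvalue > 1 cut, which is the point of the criterion: the regression sees a small, well-conditioned set of scores instead of four collinear columns.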
36

Numerical methods for backward stochastic differential equations of quadratic and locally Lipschitz type

Turkedjiev, Plamen 17 July 2013 (has links)
Der Fokus dieser Dissertation liegt darauf, effiziente numerische Methoden für ungekoppelte lokal Lipschitz-stetige und quadratische stochastische Vorwärts-Rückwärtsdifferenzialgleichungen (BSDE) mit Endbedingungen von schwacher Regularität zu entwickeln. Obwohl BSDE viele Anwendungen in der Theorie der Finanzmathematik, der stochastischen Kontrolle und der partiellen Differenzialgleichungen haben, gibt es bisher nur wenige numerische Methoden. Drei neue auf Monte-Carlo-Simulationen basierende Algorithmen werden entwickelt. Die in der zeitdiskreten Approximation zu lösenden bedingten Erwartungen werden mittels der Methode der kleinsten Quadrate näherungsweise berechnet. Ein Vorteil dieser Algorithmen ist, dass sie als Eingabe nur Simulationen eines Vorwärtsprozesses X und der Brownschen Bewegung benötigen. Da sie auf modellfreien Abschätzungen aufbauen, benötigen die hier vorgestellten Verfahren nur sehr schwache Bedingungen an den Prozess X. Daher können sie auf sehr allgemeinen Wahrscheinlichkeitsräumen angewendet werden. Für die drei numerischen Algorithmen werden explizite maximale Fehlerabschätzungen berechnet. Die Algorithmen werden dann auf Basis dieser maximalen Fehler kalibriert und die Komplexität der Algorithmen wird berechnet. Mithilfe einer zeitlich lokalen Abschneidung des Treibers der BSDE werden quadratische BSDE auf lokal Lipschitz-stetige BSDE zurückgeführt. Es wird gezeigt, dass die Komplexität der Algorithmen im lokal Lipschitz-stetigen Fall vergleichbar zu ihrer Komplexität im global Lipschitz-stetigen Fall ist. Es wird auch gezeigt, dass der Vergleich mit bereits für Lipschitz-stetige BSDE existierenden Methoden für die hier vorgestellten Algorithmen positiv ausfällt. / The focus of the thesis is to develop efficient numerical schemes for quadratic and locally Lipschitz decoupled forward-backward stochastic differential equations (BSDEs). The terminal conditions satisfy weak regularity conditions. 
Although BSDEs have valuable applications in the theory of financial mathematics, stochastic control and partial differential equations, few efficient numerical schemes are available. Three algorithms based on Monte Carlo simulation are developed. Starting from a discrete time scheme, least-squares regression is used to approximate the conditional expectation. One benefit of these schemes is that they require as input only the simulations of an explanatory process X and a Brownian motion W. Due to the use of distribution-free tools, only very weak conditions on the explanatory process X are required, meaning that these methods can be applied to very general probability spaces. Explicit upper bounds for the error are obtained. The algorithms are then calibrated systematically based on the upper bounds of the error, and the complexity is computed. Using a time-local truncation of the BSDE driver, the quadratic BSDE is reduced to a locally Lipschitz BSDE, and it is shown that the complexity of the algorithms for the locally Lipschitz BSDE is the same as that of the algorithm for a uniformly Lipschitz BSDE. It is also shown that these algorithms are competitive compared to other available algorithms for uniformly Lipschitz BSDEs.
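The least-squares Monte Carlo idea summarised above (backward time stepping where the conditional expectation at each step is replaced by a regression on simulations of the forward process X) can be sketched as follows. This is only an illustration, not the thesis's algorithms: the toy driver f(y) = -r*y and terminal condition g(x) = x^2 are assumptions chosen so the result can be checked against the closed form exp(-r*T)*T; the actual schemes handle locally Lipschitz and quadratic drivers with calibrated bases and truncation.

```python
# Hedged sketch of least-squares Monte Carlo for a simple linear-driver BSDE.
import numpy as np

rng = np.random.default_rng(1)
T, N, M, r = 1.0, 50, 20_000, 0.05
dt = T / N

# Forward process: Brownian motion paths X_{t_0}, ..., X_{t_N}
dW = rng.normal(scale=np.sqrt(dt), size=(M, N))
X = np.concatenate([np.zeros((M, 1)), np.cumsum(dW, axis=1)], axis=1)

Y = X[:, -1] ** 2                       # terminal condition g(X_T) = X_T^2
for i in range(N - 1, 0, -1):
    # Regress Y_{i+1} on a polynomial basis of X_{t_i} to estimate
    # the conditional expectation E[Y_{i+1} | X_{t_i}]
    basis = np.vander(X[:, i], 4)       # columns: x^3, x^2, x, 1
    coef, *_ = np.linalg.lstsq(basis, Y, rcond=None)
    cond_exp = basis @ coef
    Y = cond_exp + (-r * cond_exp) * dt # explicit Euler step, driver f(y) = -r*y
# At t_0 the forward process is constant, so the regression is a plain average
Y0 = np.mean(Y) * (1 - r * dt)
print("LSMC estimate:", round(Y0, 3), "closed form:", round(np.exp(-r * T) * T, 3))
```

The regression step is where the distribution-free character shows up: nothing about the law of X is used beyond the simulated paths themselves.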
37

Projeto e desenvolvimento de um sistema de análises químicas por injeção em fluxo para determinações espectrofotométricas simultâneas de cobre e de níquel explorando cinética diferencial e calibração multivariada / Project and development of a flow-injection system for simultaneous spectrophotometric determination of copper and nickel exploiting differential kinetics and multivariate calibration

Sasaki, Milton Katsumi 09 June 2011 (has links)
Análise cinética diferencial explora diferenças em taxas reacionais entre os analitos e um sistema reacional comum; etapas de separação prévia dos analitos podem então ser prescindidas. Sistemas de análise por injeção em fluxo (FIA) se afiguram como uma ferramenta importante para métodos envolvendo essa estratégia, pois permitem um controle preciso da dispersão de reagentes / amostras e da temporização. O objetivo deste trabalho foi então explorar estes dois aspectos favoráveis visando a determinação simultânea de cobre e de níquel, a partir de suas reações com o reagente cromogênico 5-Br-PADAP. Três alíquotas de amostra eram simultaneamente inseridas, por meio de um injetor proporcional, no fluxo transportador reagente (5-Br-PADAP 75 mg L-1 + sistema tampão 0,5 mol L-1 em ácido acético / acetato, pH 4,7) de um sistema FIA em linha única. Durante o transporte em direção ao detector, as zonas estabelecidas se coalesciam, originando uma zona complexa que era monitorada a 562 nm. Os valores locais máximos e mínimos da função concentração / tempo obtida eram considerados para calibração multivariada utilizando a ferramenta quimiométrica PLS-2 (partial least squares - 2). A concentração do reagente, a capacidade tampão, a temperatura, a vazão, os comprimentos do percurso analítico e das alças de amostragem, bem como a distância inicial entre as zonas de amostra estabelecidas foram avaliados para construção dos modelos matemáticos. Estes foram criados a partir de 24 soluções-padrão mistas de Cu2+ e Ni2+ (0,00-1,60 mg L-1 em HNO3 a 0,1% v/v). Duas variáveis latentes foram suficientes para capturar > 98 % das variâncias inerentes ao conjunto de dados e erros médios das previsões (RMSEP) foram estimados em 0,025 e 0,071 mg L-1 para Cu e Ni, salientando a boa precisão do modelo de calibração. 
O sistema proposto apresenta boas figuras de mérito: fisicamente estável, quando mantido em operação por quatro horas ininterruptas, consumo de 314 µg de 5-Br-PADAP por amostra, frequência analítica de 33 amostras por hora (165 dados, 66 determinações) e erros nas leituras em sinais de absorbância tipicamente < 5%. Entretanto, verificou-se a inexatidão das previsões efetuadas pelo modelo proposto, quando comparadas aos resultados obtidos por ICP OES. A partir deste fato, tornam-se necessários maiores estudos referentes a este tipo de matriz, bem como de técnicas de mascaramento dos possíveis interferentes presentes. / Differential kinetic analysis exploits the differences in reaction rates between the analytes and a common reactant system; prior steps of analyte separation can then be waived. Flow-injection systems (FIA) are an important tool for methods involving such a strategy because they allow precise control of sample / reagent dispersion and timing. The aim of this work was then to exploit these two favorable aspects for the simultaneous determination of copper and nickel using the 5-Br-PADAP chromogenic reagent. Three sample aliquots were simultaneously inserted by means of a proportional injector into the reagent carrier stream (75 mg L-1 5-Br-PADAP + 0.5 mol L-1 acetic acid / acetate buffer, pH 4.7) of a single-line FIA system. During transport towards detection, the established zones coalesced, resulting in a complex zone that was monitored at 562 nm. The local maximum and minimum values of the obtained concentration / time function were considered for multivariate calibration using the PLS-2 (partial least squares - 2) chemometric tool. The reagent concentration, buffering capacity, temperature, flow rate, the lengths of the analytical path and sampling loops, and the initial distance between plugs were evaluated for the construction of the mathematical models. 
To this end, 24 mixed standard solutions of Cu2+ and Ni2+ (0.00 - 1.60 mg L-1, in 0.1% v/v HNO3) were used. Two latent variables were enough to capture > 98% of the variance inherent in the data set, and the average prediction errors (RMSEP) were estimated as 0.025 and 0.071 mg L-1 for Cu and Ni, emphasizing the good precision of the calibration model. The proposed system presents good figures of merit: physical stability when kept in operation for four uninterrupted hours, consumption of 314 µg of 5-Br-PADAP per sample, a sample throughput of 33 h-1 (165 data, 66 determinations), and errors in absorbance readings typically < 5%. However, inaccuracy of the predictions made by the proposed model was noted when compared to results obtained by ICP OES. Thus, further studies involving this type of matrix, as well as masking techniques for the potential interferences present, are recommended.
39

Regularization in reinforcement learning

Farahmand, Amir-massoud Unknown Date
No description available.
40

Multivariate data analysis using spectroscopic data of fluorocarbon alcohol mixtures / Nothnagel, C.

Nothnagel, Carien January 2012 (has links)
Pelchem, a commercial subsidiary of Necsa (South African Nuclear Energy Corporation), produces a range of commercial fluorocarbon products while driving research and development initiatives to support the fluorine product portfolio. One such initiative is to develop improved analytical techniques to analyse product composition during development and to quality-assure products. Generally, the C–F type products produced by Necsa are in a solution of anhydrous HF and cannot be directly analysed with traditional techniques without derivatisation. A technique such as vibrational spectroscopy, which can analyse these products directly without further preparation, would have a distinct advantage. However, spectra of mixtures of similar compounds are complex and not suitable for traditional quantitative regression analysis. Multivariate data analysis (MVA) can be used in such instances to exploit the complex nature of the spectra and extract quantitative information on the composition of mixtures. A selection of fluorocarbon alcohols was made to act as representatives for fluorocarbon compounds. Experimental design theory was used to create a calibration range of mixtures of these compounds. Raman and infrared (NIR and ATR–IR) spectroscopy were used to generate spectral data of the mixtures, and this data was analysed with MVA techniques through the construction of regression and prediction models. Selected samples from the mixture range were chosen to test the predictive ability of the models. The regression models (PCR, PLS2 and PLS1) gave good model fits (R² values larger than 0.9). Raman spectroscopy was the most efficient technique and gave a high prediction accuracy (at 10% accepted standard deviation), provided the minimum mass of a component exceeded 16% of the total sample. The infrared techniques also performed well in terms of fit and prediction. The NIR spectra were subject to signal saturation as a result of the long path length sample cells used. 
This was shown to be the main reason for the loss in efficiency of this technique compared to Raman and ATR–IR spectroscopy. It was shown that multivariate data analysis of spectroscopic data of the selected fluorocarbon compounds could be used to quantitatively analyse mixtures, with the possibility of further optimisation of the method. The study was a representative one, indicating that the combination of MVA and spectroscopy can be used successfully in the quantitative analysis of other fluorocarbon compound mixtures. / Thesis (M.Sc. (Chemistry))--North-West University, Potchefstroom Campus, 2012.
