901 |
Fraktionierung des Chemischen Sauerstoffbedarfs mithilfe von Extinktionsmessungen im UV/Vis-Spektralbereich
Weber, Steffen 21 April 2023 (has links)
Das Messverfahren der optischen Spektrophotometrie wird auf seine Einsatztauglichkeit zur kontinuierlichen Messung der Abwasserqualität überprüft. Der chemische Sauerstoffbedarf (CSB), der hier zu bestimmen war, wird als zentraler Kennwert für die stoffliche Verschmutzung von Abwasser und für dessen Nachweis in Oberflächengewässern eingesetzt. Dabei wird der Informationsgehalt über eine organische, summarische Kohlenstoffbelastung mittels einer zusätzlichen Fraktionierung erhöht. In einer Labormesskampagne werden auf der Grundlage von Respirationsversuchen Daten aus Extinktionswerten des UV/Vis-Spektrums und Referenzwerten (Standardanalyseparameter und simulierte Stoffkonzentrationen mithilfe des Activated Sludge Modell No. 1) generiert. Darauf aufbauend werden Kalibrationsmodelle für den CSB und einzelne Fraktionen entwickelt. Die Modelle werden mithilfe des Regressionsansatzes der Partial-Least-Squares entwickelt und im Rahmen eines Anwendungsbeispiels auf ihre Praxistauglichkeit überprüft. Als Ergebnis dieser Arbeit stehen Kalibrationsmodelle für den Einsatz im kommunalen Abwasser unter Trockenwetterbedingungen zur Verfügung. Die Vorhersagequalität nimmt mit zunehmender Differenzierung ab. Von einer Weiterverwendung der berechneten Äquivalentkonzentrationen für die CSB-Fraktionen (SS, XS, SI und XI), z. B. als Kalibriergröße für Stofftransportmodelle oder als Steuer- und Regelgröße, wird allerdings abgeraten. Als Ursache für die hohen Messungenauigkeiten wurde eine unzureichende Anpassung an die Veränderungen in der Abwasserzusammensetzung während eines Trockenwettertagesganges identifiziert. Mit einer erweiterten Datengrundlage, unter der Verwendung von Standardanalyseparametern (CSB, CSBmf und BSB) in einer Abwasserprobe, welche für den Ausschluss von Stoffverbindungen vor und nach einer respirativen Vorbehandlung bestimmt werden, wird eine höhere Modellgüte in Aussicht gestellt.
Darüber hinaus wird ein Umdenken von statischen hin zu dynamischen Kalibrationsfunktionen für UV/Vis-Sensoren vorgeschlagen. Eine Generalisierung der entwickelten Kalibrationsmodelle auf weitere Wetterbedingungen, Messstandorte oder Sensoren wird nicht empfohlen.
Abbildungen VI
Tabellen XIII
Abkürzungen XV
1 Einleitung 1
1.1 Motivation 1
1.2 Zielstellung 2
2 Stand der Forschung 5
2.1 Kohlenstoffe 6
2.1.1 Zusammensetzung und Herkunft im häuslichen Abwasser 7
2.1.1.1 Fette 8
2.1.1.2 Proteine 8
2.1.1.3 Tenside 9
2.1.1.4 Phenole 10
2.1.1.5 Kohlenwasserstoffe 10
2.1.2 Fraktionierung von Kohlenstoffverbindungen 11
2.1.2.1 Chemischer Sauerstoffbedarf 12
2.1.2.2 Ansätze zur CSB-Fraktionierung 12
2.1.2.3 Stoffzusammensetzung einzelner CSB-Fraktionen 15
2.1.2.4 Messmethoden zur Bestimmung des CSB 18
2.2 Optische Spektroskopie 20
2.2.1 Grundlagen 20
2.2.1.1 Elektromagnetische Strahlung 20
2.2.1.2 Einordnung der optischen Spektroskopie 21
2.2.1.3 Lichtabsorption 21
2.2.1.4 Chemisch-physikalische Grundlagen 22
2.2.1.5 Mathematische Grundlagen 24
2.2.1.6 Extinktionsmessung 25
2.2.2 Online-Messtechnik 26
2.2.2.1 Sensoren/-hersteller 26
2.2.2.2 Kalibrierung 26
2.2.2.2.1 Kalibrierung der S::CAN MESSTECHNIK GmbH 27
2.2.2.2.2 Unabhängige Analyseverfahren zur Auswertung spektrophotometrischer Messreihen 28
2.2.2.3 Messung 29
2.2.2.3.1 Einstellungen und Voraussetzungen 29
2.2.2.3.2 Qualitative Einflussnahme von Störgrößen auf die spektroskopische Datenerfassung 30
2.2.3 Einsatz in der Siedlungswasserwirtschaft und Hydrologie 31
3 Versuchsdurchführung und Analytik 33
3.1 Messkampagnen 33
3.1.1 Labormessversuche 33
3.1.1.1 Respirationsversuch 34
3.1.1.1.1 Versuchsaufbau zum Respirationsversuch 35
3.1.1.1.2 Betriebshinweise Respirationsversuch 38
3.1.1.2 Verdünnungsversuch 41
3.1.1.2.1 Versuchsaufbau zum Verdünnungsversuch 42
3.1.1.2.2 Betriebshinweise Verdünnungsversuch 43
3.1.2 Feldmessversuch 43
3.1.2.1 Versuchsaufbau zum Feldmessversuch 44
3.1.2.2 Betriebshinweise Feldmessversuch 46
3.2 Abwasserproben: Aufbewahrung und Analytik 47
3.2.1 Konservierung und Probenvorbehandlung 48
3.2.2 Standardisierte Laboranalyseverfahren 49
3.2.2.1 CSB 49
3.2.2.2 Biologischer Sauerstoffbedarf BSBn 50
3.3 Mess- und Regelinstrumente 51
3.3.1 Optischer Multiparameter-Sensor 51
3.3.2 Luminescent Dissolved Oxygen-Sensor (LDO) 53
3.3.3 Peristaltik-Pumpe 54
3.3.4 Dispergierer 54
4 Untersuchungen zur Entwicklung und Anwendung von UV/Vis-Kalibrierungen 55
4.1 Statistische Verfahren zur Kalibrierung 55
4.1.1 Datengrundlage und Methoden 56
4.1.1.1 Datengrundlage 56
4.1.1.2 Multivariate Datenanalyse 57
4.1.1.2.1 Regressionsanalyse 58
4.1.1.2.1.1 Schätzung der Regressionsfunktion 59
4.1.1.2.2 Qualitätsprüfung 61
4.1.1.2.3 Prüfung der Modellprämissen 63
4.1.1.2.4 Multivariate Regressionsanalyse 66
4.1.1.3 Vergleich der Kalibrierverfahren 70
4.1.2 Ergebnisse 70
4.1.2.1 Regressionsansätze für UV/Vis-Kalibrierung 70
4.1.2.1.1 Partial-Least-Squares Regression (PLS-R) 70
4.1.2.1.2 Lasso-Regression 73
4.1.2.1.3 Herstellerkalibrierung (SCAN GmbH) 73
4.1.2.1.3.1 Anwendung der globalen Herstellerkalibrierung 73
4.1.2.1.3.2 Lokal angepasste Herstellerkalibrierung 74
4.1.3 Auswertungen 75
4.1.3.1 Tauglichkeit angewandter Regressionsansätze zur Entwicklung von UV/Vis-Kalibrierfunktionen 75
4.1.3.1.1 Vergleich der Vorhersagequalitäten zwischen Regressionsansätzen und Herstellerkalibrierung 75
4.1.3.1.2 Aussagekraft angewandter Regressionsmodelle 77
4.1.3.1.2.1 Regressionsfunktion und -koeffizienten 77
4.1.3.1.2.2 Modellprämissen 78
4.1.3.2 Identifizierung signifikanter Wellenlängen oder -bereiche 80
4.2 Fraktionierung von CSB-Verbindungen 81
4.2.1 Datengrundlage und Methoden 82
4.2.1.1 Laborwertmethode 83
4.2.1.2 Modellwertmethode 85
4.2.1.2.1 Respirometrische Messung 86
4.2.1.2.2 Sauerstoffverbrauchsrate 87
4.2.1.2.3 Modellberechnung 89
4.2.1.2.4 Simulationsmethode mit modifiziertem Activated Sludge Modell No. 1 92
4.2.1.2.5 Modellkalibrierung 95
4.2.1.2.6 Datenauswahl 96
4.2.1.3 Lichtabsorptionsmethode 96
4.2.2 Ergebnisse 97
4.2.2.1 Modellwertmethode mit ASM No. 1 97
4.2.2.2 Auswahl von Modelldaten 100
4.2.2.3 UV/Vis-Kalibrierfunktionen 101
4.2.2.3.1 CSB-Fraktionen 101
4.2.2.3.2 Vergleich MW- und LW-Modell 103
4.2.3 Auswertungen 104
4.2.3.1 Tauglichkeit von Simulationsergebnissen aus Modellwertmethode zur Entwicklung von Kalibrierfunktionen 104
4.2.3.2 Abweichende Vorhersagequalitäten zwischen den UV/Vis-Kalibrierfunktionen 106
4.2.3.3 Messunsicherheiten und Modellqualität 107
4.2.3.4 Signifikante Wellenlängen oder -bereiche für einzelne CSB-Fraktionen 109
4.3 Anwendungsbeispiel: Kohlenstoffumsatz entlang einer Fließstrecke 111
4.3.1 Datengrundlage und Methoden 112
4.3.1.1 Einsatz von UV/Vis-Messtechnik 115
4.3.1.1.1 Vergleichbarkeit bei Parallelbetrieb baugleicher Sensoren 115
4.3.1.1.1.1 Versuchsdurchführung 116
4.3.1.1.1.2 Berechnungsansätze 116
4.3.1.1.2 Lokale Kalibrierung 117
4.3.1.1.2.1 Univariat 118
4.3.1.1.2.2 Multivariat 118
4.3.1.2 Kohlenstoffumwandlung und -umsatz innerhalb des Durchflussreaktors 118
4.3.1.2.1 Vorverarbeitung von UV/Vis-Daten 120
4.3.1.2.2 Zeitsynchronisation mithilfe der Fließzeit 120
4.3.1.2.3 Bestimmung von stofflichen Veränderungen in einem Wasserpaket 121
4.3.2 Ergebnisse 122
4.3.2.1 Praxiseinsatz von UV/Vis-Messtechnik 122
4.3.2.1.1 Stabilität und Vergleichbarkeit von Messsignalen bei unterschiedlichen Sensoren 122
4.3.2.1.1.1 Messgüte 122
4.3.2.1.1.2 Sensoranpassung 124
4.3.2.1.2 UV/Vis-Kalibrationsfunktionen 125
4.3.2.1.2.1 Validierung LK-PLS-R 126
4.3.2.1.2.2 Lokale Nachkalibrierung LK-PLS-R 128
4.3.2.1.3 Anwendung entwickelter Kalibrationsmodelle auf Zeitreihen 130
4.3.2.2 Kohlenstoffumsatz 131
4.3.3 Auswertungen 135
4.3.3.1 Tauglichkeit von UV/Vis-Messtechnik für den Einsatz in der Kanalisation 135
4.3.3.1.1 Vorhersagegenauigkeit von Kalibrationsfunktionen 135
4.3.3.1.2 Abweichende Messergebnisse der Extinktion von einzelnen Sensoren 135
4.3.3.2 Veränderungen in den Konzentrationen einzelner Kohlenstofffraktionen entlang der Fließstrecke 136
5 Diskussion 139
6 Ausblick 151
7 Zusammenfassung 153
8 Literaturverzeichnis 157
A Anhang 171
A.1 Respirationsversuche CSB-Fraktionen 171
A.1.1 Quellcode - CSB-Fraktionierung 171
A.1.2 Respirationsversuche CSB-Fraktionen 175
A.1.3 Quellcode - PLS-Regression 178
A.1.4 UV/Vis-Kalibrierung - CSB-Fraktionen 180
A.1.5 Modellgüte 183
A.1.6 Modellprämissen 184
A.2 Feldmesskampagne 188
A.2.1 Sensorkompensation 188
A.2.2 Korrelationsplots 189
A.2.2.1 Validierung der Kalibrationsmodelle 189
A.2.2.2 Nachkalibrierung der Kalibrationsmodelle 192
A.2.2.2.1 univariat 192
A.2.3 Stoffliche Veränderungen in Wasserpaketen 198
A.2.4 Laboranalysen Stoffliche Veränderungen in Wasserpaketen 201 / Optical spectrophotometry is evaluated as a measuring method for continuous monitoring of wastewater quality. The chemical oxygen demand (COD) is used as a central parameter for the material assessment of wastewater and for its detection in surface waters. The information value about an organic load is increased using an additional fractionation. In a laboratory measurement campaign, data are generated from extinction values of the UV/Vis spectrum and reference values (standard analysis parameters and concentrations simulated using the Activated Sludge Model No. 1). Based on this, calibration models for the COD and individual fractions are developed using the partial least squares regression approach, and their practical suitability is checked in the context of an application example. As a result of this work, calibration models for use in municipal wastewater under dry weather conditions are available. The prediction quality decreases with increasing differentiation. We advise against further use of the calculated equivalent concentrations for the COD fractions (SS, XS, SI and XI), e.g. as a calibration variable for mass transfer models or as a control and regulation variable. The reason for the high measurement uncertainties is identified as insufficient adaptation to the changing wastewater composition during a dry weather day. With an extended data basis, a higher model quality is expected: standard analysis parameters (COD, CODmf and BOD) are determined in wastewater samples before and after respiratory pretreatment in order to be able to rule out substance compounds. In addition, a rethinking of static calibration functions for UV/Vis sensors towards dynamic methods is proposed. A generalization of the calibration models to other weather conditions, measurement locations or sensors is not recommended.
|
902 |
Efficient Monte Carlo Simulation for Counterparty Credit Risk Modeling / Effektiv Monte Carlo-simulering för modellering av motpartskreditrisk
Johansson, Sam January 2019 (has links)
In this paper, Monte Carlo simulation for CCR (Counterparty Credit Risk) modeling is investigated. A jump-diffusion model, Bates' model, is used to describe the price process of an asset, and the counterparty default probability is described by a stochastic intensity model with constant intensity. In combination with Monte Carlo simulation, the variance reduction technique importance sampling is used in an attempt to make the simulations more efficient. Importance sampling is used for simulation of both the asset price and, for CVA (Credit Valuation Adjustment) estimation, the default time. CVA is simulated for both European and Bermudan options. It is shown that a significant variance reduction can be achieved by utilizing importance sampling for asset price simulations. It is also shown that a significant variance reduction for CVA simulation can be achieved for counterparties with small default probabilities by employing importance sampling for the default times. This holds for both European and Bermudan options. Furthermore, the regression based method least squares Monte Carlo is used to estimate the price of a Bermudan option, resulting in CVA estimates that lie within an interval of feasible values. Finally, some topics of further research are suggested. / I denna rapport undersöks Monte Carlo-simuleringar för motpartskreditrisk. En jump-diffusion-modell, Bates modell, används för att beskriva prisprocessen hos en tillgång, och sannolikheten att motparten drabbas av insolvens beskrivs av en stokastisk intensitetsmodell med konstant intensitet. Tillsammans med Monte Carlo-simuleringar används variansreduktionstekniken importance sampling i ett försök att effektivisera simuleringarna. Importance sampling används för simulering av både tillgångens pris och, för estimering av CVA (Credit Valuation Adjustment), tidpunkten för insolvens. CVA simuleras för både europeiska optioner och Bermuda-optioner.
Det visas att en signifikant variansreduktion kan uppnås genom att använda importance sampling för simuleringen av tillgångens pris. Det visas även att en signifikant variansreduktion för CVA-simulering kan uppnås för motparter med små sannolikheter att drabbas av insolvens genom att använda importance sampling för simulering av tidpunkter för insolvens. Detta gäller både europeiska optioner och Bermuda-optioner. Vidare används regressionsmetoden least squares Monte Carlo för att estimera priset av en Bermuda-option, vilket resulterar i CVA-estimat som ligger inom ett intervall av rimliga värden. Slutligen föreslås några ämnen för ytterligare forskning.
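The importance-sampling idea for rare defaults can be illustrated in a small sketch: with a constant default intensity the default time is exponential, so the default probability before a horizon T is known in closed form and both the naive and the importance-sampling estimator can be checked against it. Sampling under a larger intensity and reweighting by the likelihood ratio concentrates samples on the rare event. All numbers are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T = 0.01, 1.0                  # small default intensity, one-year horizon
true_p = 1 - np.exp(-lam * T)       # closed-form default probability

n = 100_000
# Naive estimator: sample exponential default times directly
tau = rng.exponential(1 / lam, n)
naive = (tau <= T).astype(float)

# Importance sampling: draw under a larger intensity lam_q, then reweight
# each sample by the likelihood ratio of the two exponential densities
lam_q = 2.0
tau_q = rng.exponential(1 / lam_q, n)
w = (lam / lam_q) * np.exp((lam_q - lam) * tau_q)
is_est = (tau_q <= T) * w

print("naive:", naive.mean(), "+/-", naive.std(ddof=1) / np.sqrt(n))
print("IS   :", is_est.mean(), "+/-", is_est.std(ddof=1) / np.sqrt(n))
```

The per-sample variance of the reweighted estimator is far below that of the naive indicator, which is the mechanism behind the variance reduction reported for small default probabilities.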
|
903 |
Statistical modelling of return on capital employed of individual units
Burombo, Emmanuel Chamunorwa 10 1900 (has links)
Return on Capital Employed (ROCE) is a popular financial instrument and communication tool for the appraisal of companies. Often, companies' management and other practitioners use untested rules and behavioural approaches when investigating the key determinants of ROCE, instead of the scientific statistical paradigm. The aim of this dissertation was to identify and quantify key determinants of ROCE of individual companies listed on the Johannesburg Stock Exchange (JSE), by comparing classical multiple linear regression, principal components regression, generalized least squares regression, and robust maximum likelihood regression approaches in order to improve companies' decision making. Performance indicators used to arrive at the best approach were the coefficient of determination (R²), the adjusted R², and the Mean Square Residual (MSE). Since the ROCE variable had positive and negative values, two separate analyses were done.
The classical multiple linear regression models were constructed using a stepwise directed search for the dependent variable log ROCE for the two data sets. Assumptions were satisfied and the problem of multicollinearity was addressed. For the positive ROCE data set, the classical multiple linear regression model had an R² of 0.928, an adjusted R² of 0.927 and an MSE of 0.013, and the lead key determinant was Return on Equity (ROE), with positive elasticity, followed by Debt to Equity (D/E) and Capital Employed (CE), both with negative elasticities. The model showed good validation performance. For the negative ROCE data set, the classical multiple linear regression model had an R² of 0.666, an adjusted R² of 0.652 and an MSE of 0.149, and the lead key determinant was Assets per Capital Employed (APCE), with positive effect, followed by Return on Assets (ROA) and Market Capitalization (MC), both with negative effects. The model showed poor validation performance. The results indicated both more and less precision than those found by previous studies. This suggested that the key determinants are also important sources of variability in the ROCE of individual companies that management need to work with.
To handle the problem of multicollinearity in the data, principal components were selected using the Kaiser-Guttman criterion. The principal components regression model was constructed using the dependent variable log ROCE for the two data sets. Assumptions were satisfied. For the positive ROCE data set, the principal components regression model had an R² of 0.929, an adjusted R² of 0.929 and an MSE of 0.069, and the lead key determinant was PC4 (log ROA, log ROE, log Operating Profit Margin (OPM)), followed by PC2 (log Earnings Yield (EY), log Price to Earnings (P/E)), both with positive effects. The model resulted in a satisfactory validation performance. For the negative ROCE data set, the principal components regression model had an R² of 0.544, an adjusted R² of 0.532 and an MSE of 0.167, and the lead key determinant was PC3 (ROA, EY, APCE), followed by PC1 (MC, CE), both with negative effects. The model indicated an accurate validation performance. The results showed that the use of principal components as independent variables did not improve classical multiple linear regression model prediction in our data. This implied that the key determinants are less important sources of variability in the ROCE of individual companies that management need to work with.
Generalized least squares regression was used to assess heteroscedasticity and dependences in the data. It was constructed using a stepwise directed search for the dependent variable ROCE for the two data sets. For the positive ROCE data set, the weighted generalized least squares regression model had an R² of 0.920, an adjusted R² of 0.919 and an MSE of 0.044, and the lead key determinant was ROE with positive effect, followed by D/E with negative effect, Dividend Yield (DY) with positive effect and lastly CE with negative effect. The model indicated an accurate validation performance. For the negative ROCE data set, the weighted generalized least squares regression model had an R² of 0.559, an adjusted R² of 0.548 and an MSE of 57.125, and the lead key determinant was APCE, followed by ROA, both with positive effects. The model showed a weak validation performance. The results suggested that the key determinants are less important sources of variability in the ROCE of individual companies that management need to work with. Robust maximum likelihood regression was employed to handle the problem of contamination in the data. It was constructed using a stepwise directed search for the dependent variable ROCE for the two data sets. For the positive ROCE data set, the robust maximum likelihood regression model had an R² of 0.998, an adjusted R² of 0.997 and an MSE of 6.739, and the lead key determinant was ROE with positive effect, followed by DY and lastly D/E, both with negative effects. The model showed a strong validation performance. For the negative ROCE data set, the robust maximum likelihood regression model had an R² of 0.990, an adjusted R² of 0.984 and an MSE of 98.883, and the lead key determinant was APCE with positive effect, followed by ROA with negative effect. The model also showed a strong validation performance. The results reflected that the key determinants are major sources of variability in the ROCE of individual companies that management need to work with.
Overall, the findings showed that the use of robust maximum likelihood regression provided more precise results than the three competing approaches, because it is more consistent, sufficient and efficient, has a higher breakdown point and imposes no distributional conditions. Companies' management can establish and control proper marketing strategies using the key determinants, and the results of these strategies can yield an improvement in ROCE. / Mathematical Sciences / M. Sc. (Statistics)
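The principal-components step described above, keeping components by the Kaiser-Guttman criterion (eigenvalues of the correlation matrix greater than one) and regressing the response on the retained scores, can be sketched on synthetic collinear data. The variables are invented stand-ins for correlated financial ratios, not the JSE data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for collinear predictors (e.g. ROE/ROA-like pairs)
n = 300
base = rng.standard_normal((n, 3))
X = np.column_stack([
    base[:, 0], base[:, 0] + 0.05 * rng.standard_normal(n),   # near-duplicate pair
    base[:, 1], base[:, 1] + 0.05 * rng.standard_normal(n),   # near-duplicate pair
    base[:, 2],
])
y = 2 * base[:, 0] - base[:, 1] + 0.1 * rng.standard_normal(n)

# Standardize and eigendecompose the correlation matrix
Z = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

keep = eigval > 1.0                 # Kaiser-Guttman criterion
scores = Z @ eigvec[:, keep]        # retained principal-component scores

# OLS of y on the retained scores (with intercept)
A = np.column_stack([np.ones(n), scores])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r2 = 1 - resid.var() / y.var()
print("components kept:", int(keep.sum()), "R^2:", round(r2, 3))
```

Because the two near-duplicate pairs collapse into one strong component each, the retained scores carry almost all of the predictive information while removing the multicollinearity.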
|
904 |
On specification and inference in the econometrics of public procurement
Sundström, David January 2016 (has links)
In Paper [I] we use data on Swedish public procurement auctions for internal regular cleaning service contracts to provide novel empirical evidence regarding green public procurement (GPP) and its effect on the potential suppliers' decision to submit a bid and their probability of being qualified for supplier selection. We find only a weak effect on supplier behavior, which suggests that GPP does not live up to its political expectations. However, several environmental criteria appear to be associated with increased complexity, as indicated by the reduced probability of a bid being qualified in the post-qualification process. As such, GPP appears to have limited or no potential to function as an environmental policy instrument. In Paper [II] the observation is made that empirical evaluations of the effect of policies transmitted through public procurements on bid sizes are made using linear regressions or by more involved non-linear structural models. The aspiration is typically to determine a marginal effect. Here, I compare marginal effects generated under both types of specifications. I study how a political initiative to make firms less environmentally damaging, implemented through public procurement, influences Swedish firms' behavior. The collected evidence brings about a statistically as well as economically significant effect on firms' bids and costs. Paper [III] embarks by noting that auction theory suggests that as the number of bidders (competition) increases, the sizes of the participants' bids decrease. An issue in the empirical literature on auctions is which measurement(s) of competition to use. Utilizing a dataset on public procurements containing measurements on both the actual and potential number of bidders, I find that a workhorse model of public procurements is best fitted to data using only actual bidders as the measurement for competition.
Acknowledging that all measurements of competition may be erroneous, I propose an instrumental variable estimator that (given my data) brings about a competition effect bounded by those generated by specifications using the actual and potential number of bidders, respectively. Also, some asymptotic results are provided for non-linear least squares estimators obtained from a dependent variable transformation model. Paper [IV] introduces a novel method to measure bidders' costs (valuations) in descending (ascending) auctions. Based on two bounded rationality constraints, bidders' costs (valuations) are given an imperfect measurements interpretation robust to behavioral deviations from traditional rationality assumptions. Theory provides no guidance as to the shape of the cost (valuation) distributions, while empirical evidence suggests them to be positively skewed. Consequently, a flexible distribution is employed in an imperfect measurements framework. An illustration of the proposed method on Swedish public procurement data is provided along with a comparison to a traditional Bayesian Nash Equilibrium approach.
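The instrumental-variable idea in Paper [III], instrumenting a possibly mismeasured competition variable with an alternative measurement of the same latent quantity, can be sketched as textbook two-stage least squares on simulated data. The data-generating process, variable names and coefficient values are invented for illustration, not taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Invented DGP: bids fall in the latent "true" level of competition, which is
# only observed through two independently noisy measurements
true_comp = rng.poisson(5, n) + 1
actual = true_comp + rng.integers(-1, 2, n)      # actual number of bidders
potential = true_comp + rng.integers(0, 3, n)    # potential number of bidders
log_bid = 10 - 0.08 * true_comp + 0.1 * rng.standard_normal(n)

def two_sls(y, x, z):
    """Two-stage least squares with intercept: instrument regressor x with z."""
    Z = np.column_stack([np.ones(len(y)), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]     # first stage
    X_hat = np.column_stack([np.ones(len(y)), x_hat])
    return np.linalg.lstsq(X_hat, y, rcond=None)[0][1]   # second-stage slope

ols_slope = two_sls(log_bid, actual, actual)     # plain OLS: attenuated by noise
iv_slope = two_sls(log_bid, actual, potential)   # IV: corrects the attenuation
print(round(ols_slope, 3), round(iv_slope, 3))
```

Because the two measurement errors are independent, the second measurement is a valid instrument: OLS on the noisy regressor is biased toward zero, while the IV slope recovers the true effect.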
|
905 |
Real-time Structural Health Monitoring of Nonlinear Hysteretic Structures
Nayyerloo, Mostafa January 2011 (has links)
The great social and economic impact of earthquakes has made necessary the development of novel structural health monitoring (SHM) solutions for increasing the level of structural safety and assessment. SHM is the process of comparing the current state of a structure's condition with a healthy baseline state to detect the existence, location, and degree of likely damage during or after a damaging input, such as an earthquake. Many SHM algorithms have been proposed in the literature. However, a large majority of these algorithms cannot be implemented in real time. Therefore, their results would not be available during or immediately after a major event for urgent post-event response and decision making. Further, these off-line techniques are not capable of providing the input information required for structural control systems for damage mitigation. The small number of real-time SHM (RT-SHM) methods proposed in the past resolve these issues. However, these approaches have significant computational complexity and typically do not manage nonlinear cases directly associated with relevant damage metrics. Finally, many available SHM methods require full structural response measurement, including velocities and displacements, which are typically difficult to measure. All these issues make implementation of many existing SHM algorithms very difficult if not impossible.
This thesis proposes simpler, more suitable algorithms utilising a nonlinear Bouc-Wen hysteretic baseline model for RT-SHM of a large class of nonlinear hysteretic structures. The RT-SHM algorithms are devised so that they can accommodate different levels of the availability of design data or measured structural responses, and therefore, are applicable to both existing and new structures. The second focus of the thesis is on developing a high-speed, high-resolution, seismic structural displacement measurement sensor to enable these methods and many other SHM approaches by using line-scan cameras as a low-cost and powerful means of measuring structural displacements at high sampling rates and high resolution. Overall, the results presented are thus significant steps towards developing smart, damage-free structures and providing more reliable information for post-event decision making.
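The Bouc-Wen baseline model referred to above describes hysteresis through an internal variable z driven by the displacement rate. A minimal explicit-Euler simulation of the hysteretic restoring force under an imposed sinusoidal displacement can be sketched as follows; all parameter values are illustrative only, not identified from any structure.

```python
import numpy as np

# Bouc-Wen hysteresis parameters (illustrative values)
A, beta, gamma, n_exp = 1.0, 0.5, 0.5, 1.0
alpha, k = 0.1, 100.0       # post-yield stiffness ratio, elastic stiffness

dt = 1e-3
t = np.arange(0.0, 4.0, dt)
x = 0.05 * np.sin(2 * np.pi * t)      # imposed sinusoidal displacement
xdot = np.gradient(x, dt)

# z-dot = A*xdot - beta*|xdot|*|z|^(n-1)*z - gamma*xdot*|z|^n
z = np.zeros_like(t)
for i in range(len(t) - 1):
    zdot = (A * xdot[i]
            - beta * abs(xdot[i]) * abs(z[i]) ** (n_exp - 1) * z[i]
            - gamma * xdot[i] * abs(z[i]) ** n_exp)
    z[i + 1] = z[i] + dt * zdot       # explicit Euler step

# Restoring force: elastic part plus hysteretic part
force = alpha * k * x + (1 - alpha) * k * z
print("max |z|:", z.max(), "max |force|:", np.abs(force).max())
```

In an SHM setting the inverse problem is solved: the Bouc-Wen parameters are identified in real time from measured responses, and their drift from the healthy baseline is interpreted as damage.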
|
906 |
Multiple Outlier Detection: Hypothesis Tests versus Model Selection by Information Criteria
Lehmann, Rüdiger, Lösler, Michael 14 June 2017 (has links) (PDF)
The detection of multiple outliers can be interpreted as a model selection problem. Models that can be selected are the null model, which indicates an outlier-free set of observations, or a class of alternative models, which contain a set of additional bias parameters. A common way to select the right model is by using a statistical hypothesis test. In geodesy, data snooping is most popular. Another approach arises from information theory. Here, the Akaike information criterion (AIC) is used to select an appropriate model for a given set of observations. The AIC is based on the Kullback-Leibler divergence, which describes the discrepancy between the model candidates. Both approaches are discussed and applied to test problems: the fitting of a straight line and a geodetic network. Some relationships between data snooping and information criteria are discussed. When compared, it turns out that the information criteria approach is simpler and more elegant. Along with AIC there are many alternative information criteria for selecting different outliers, and it is not clear which one is optimal.
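The straight-line test problem can be sketched as follows: the null model is compared, via AIC, with alternative models that each add one bias parameter for a suspected outlier, which for least squares is equivalent to excluding that observation from the fit. The data and the outlier magnitude are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
x = np.linspace(0.0, 1.0, n)
y = 2.0 + 3.0 * x + 0.05 * rng.standard_normal(n)
y[12] += 1.0                                  # inject one gross error

def line_rss(xs, ys):
    """Residual sum of squares of a least-squares straight-line fit."""
    A = np.column_stack([np.ones(len(xs)), xs])
    beta, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return float(np.sum((ys - A @ beta) ** 2))

# Null model: 2 parameters (intercept, slope); AIC = n*ln(RSS/n) + 2k
aic = {None: n * np.log(line_rss(x, y) / n) + 2 * 2}

# Alternative model for observation i: one extra bias parameter, which
# zeroes residual i, i.e. the line is fitted without observation i
for i in range(n):
    mask = np.arange(n) != i
    aic[i] = n * np.log(line_rss(x[mask], y[mask]) / n) + 2 * 3

best = min(aic, key=aic.get)
print("selected model:", best)    # index of flagged outlier, or None
```

The hypothesis-test alternative (data snooping) would instead compare the largest standardized residual against a critical value; the AIC route needs no significance level, which is part of its appeal noted in the abstract.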
|
907 |
Gestion des données : contrôle de qualité des modèles numériques des bases de données géographiques / Data management : Quality Control of the Digital Models of Geographical Databases - Zelasco, José Francisco 13 December 2010 (has links)
Digital terrain models, a particular case of digital surface models, do not have the same root-mean-square error in planimetry as in altimetry. Different solutions have been considered to determine the altimetric and planimetric errors separately, given, of course, a more precise digital model as a reference. The approach adopted consists in determining the parameters of error ellipsoids centred on the reference surface. In a first step, the study was restricted to reference profiles with the corresponding error ellipse. The parameters of this ellipse are determined from the distances separating the tangents to the ellipse from its centre. Note that this distance is the root mean square of the distances separating the reference profile from the points of the digital model under evaluation, that is, the square root of the marginal variance in the direction normal to the tangent. We then generalise to the ellipsoid of revolution, the case in which the planimetric error is the same in all directions of the horizontal plane (which does not hold for DEMs obtained, for example, by radar interferometry). In this case we show that the simulation problem reduces to the generating ellipse and to the slope of the profile corresponding to the line of steepest slope of the plane belonging to the reference surface. Finally, to estimate the three parameters of a general ellipsoid, the case in which the errors along the three axes differ (DEMs obtained by SAR interferometry), the number of points required for the simulation must be large and the surface very rugged; otherwise it is difficult to estimate the errors in x and y. Nevertheless, we observed, whether for the ellipsoid of revolution or not, that the estimation of the error in z (altimetry) gives entirely satisfactory results in all cases.
/ A Digital Surface Model (DSM) is a numerical surface model formed by a set of points, arranged as a grid, used to study some physical surface: Digital Elevation Models (DEMs) are the case of particular interest, but other applications, such as a face or an anatomical organ, are possible. The study of the precision of these models has been the object of several studies in recent decades. Measuring the precision of a DSM relative to another model of the same physical surface consists in estimating the expectation of the squared differences between pairs of homologous points, one in each model, corresponding to the same feature of the physical surface. But these pairs are not easily discernible: the grids may not be coincident, and the differences between homologous points corresponding to benchmarks on the physical surface might be subject to special conditions, such as more careful measurements than at ordinary points, which imply a different precision. The procedure generally used to avoid these inconveniences has been to use the squared vertical distances between the models, which addresses only the vertical component of the error and thus gives a biased estimate when the surface is not horizontal. The Perpendicular Distance Evaluation Method (PDEM) avoids this bias, provides estimates for both the vertical and horizontal components of the error, and is thus a useful tool for detecting discrepancies in DSMs such as DEMs. The solution includes a special reference to the simplification which arises when the error does not vary across horizontal directions. The PDEM is also assessed with DEMs obtained by means of the SAR interferometry technique.
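The bias of vertical distances on sloped terrain can be demonstrated numerically. In the toy sketch below, points scatter around a tilted reference profile with purely perpendicular errors; the vertical RMS then inflates by the factor 1/cos(theta). Slope, error level, and sample size are assumed for illustration; this is not the PDEM itself.

```python
import numpy as np

def vertical_vs_perpendicular_rms(slope, sigma_perp=0.1, n=10000, seed=1):
    """Illustrate the bias of vertical distances on a tilted reference profile.

    Points are scattered around the line z = slope * x with errors of
    standard deviation sigma_perp measured perpendicular to the line.
    The vertical RMS inflates by 1/cos(theta), theta = arctan(slope).
    """
    rng = np.random.default_rng(seed)
    theta = np.arctan(slope)
    x = rng.uniform(0, 100, n)
    d = sigma_perp * rng.standard_normal(n)   # perpendicular errors
    # displace each point along the normal to the profile
    px = x - d * np.sin(theta)
    pz = slope * x + d * np.cos(theta)
    dv = pz - slope * px                      # vertical distances to the line
    rms_v = np.sqrt(np.mean(dv ** 2))
    rms_p = np.sqrt(np.mean(d ** 2))
    return rms_v, rms_p
```

For a 45-degree slope the vertical RMS is sqrt(2) times the true perpendicular RMS, which is exactly the kind of overestimate the perpendicular-distance approach removes.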
|
908 |
Localization algorithms for passive sensor networks - Ismailova, Darya 23 January 2017 (has links)
Locating a radiating source based on range or range-difference measurements obtained from a network of passive sensors has been a subject of research over the past two decades, owing to the problem’s importance in wireless communications, surveillance, navigation, geosciences, and several other fields. In this thesis, we develop new solution methods for the problem of localizing a single radiating source based on range and range-difference measurements. Iterative re-weighting algorithms are developed for both range-based and range-difference-based least squares localization. We then propose a penalty convex-concave procedure for finding an approximate solution to the nonlinear least squares problems associated with the range measurements. Finally, sequential convex relaxation procedures are proposed to obtain the nonlinear least squares estimate of the source coordinates. Localization in wireless sensor networks, where RF signals are used to derive the ranging measurements, is the primary application area of this work. However, the solution methods proposed are general and could be applied to range and range-difference measurements derived from other types of signals. / Graduate / 0544 / ismailds@uvic.ca
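The nonlinear least squares problem behind range-based localization can be sketched as follows. This is a generic Gauss-Newton iteration for minimising the sum of squared range residuals, shown here only to fix ideas; it is not the re-weighting or convex-relaxation algorithms of the thesis, and the sensor layout and starting point in the usage example are assumptions.

```python
import numpy as np

def range_localization(sensors, ranges, x0, iters=50):
    """Gauss-Newton solver for range-based source localization:
    minimise sum_i (||x - s_i|| - r_i)^2 over the source position x.

    sensors: (m, d) array of sensor positions; ranges: (m,) measured ranges.
    Generic sketch; assumes the iterate never coincides with a sensor.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - sensors                    # (m, d) vectors sensor -> source
        dist = np.linalg.norm(diff, axis=1)
        J = diff / dist[:, None]              # Jacobian of ||x - s_i|| w.r.t. x
        r = dist - ranges                     # range residuals
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-10:
            break
    return x
```

With noiseless ranges from four corner sensors, the iteration recovers the source position to machine precision; with noisy data, re-weighting of the residuals (as developed in the thesis) improves robustness.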
|
909 |
A laser based straightness monitor for a prototype automated linear collider tunnel surveying system - Moss, Gregory Richard January 2013 (has links)
For precise measurement of new TeV-scale physics and precision studies of the Higgs boson, a new lepton collider is required. To enable meaningful analysis, a centre-of-mass energy of 500 GeV and a luminosity of 10<sup>34</sup> cm<sup>-2</sup>s<sup>-1</sup> are needed. The planned 31 km long International Linear Collider is capable of meeting these targets, requiring a final emittance of 10 micro-radians horizontally and 35 nm·rad vertically. To achieve these demanding emittance values, the accelerator components in the main linacs must be aligned against an accurately mapped network of reference markers along the entire tunnel. An automated system could map this tunnel network quickly, accurately, safely and repeatedly; the Linear Collider Alignment and Survey (LiCAS) Rapid Tunnel Reference Surveyor (RTRS) is a working prototype of such a system. The LiCAS RTRS is a train of measurement units that accurately locate regularly spaced retro-reflector markers using Frequency Scanning Interferometry (FSI). The unit locations with respect to each other are precisely reconstructed using a Laser Straightness Monitor (LSM) and tilt sensor system, along with a system of internal FSI lines. The design, commissioning, practical usage, calibration, and reconstruction performance of the LSM are addressed in this work. The commissioned RTRS is described and the properties of the LSM components are investigated in detail. A method of finding the position of laser beam spots on the LSM cameras is developed, along with a process of combining individual spot positions into a more robust measurement compatible with the data from other sub-systems. Laser beam propagation along the LSM is modelled and a robust method of reconstructing CCD beam-spot position measurements into positions and orientations of the LSM units is described.
A method of calibrating LSM units using an external witness system is presented, along with a way of exploiting the overdetermined nature of the LSM to improve the calibration-constant errors by including data taken from unwitnessed runs. The reconstruction uncertainty of the LSM system, inclusive of both statistical and systematic effects, is found to be 5.8 microns × 5.3 microns in lateral translations and 27.6 microradians × 34.1 microradians in rotations perpendicular to the beam, with an uncertainty of 51.1 microradians in rotations around the beam coming from the tilt-sensor arrangement.
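Spot-position finding of the kind described above can be illustrated with an intensity-weighted centroid, a common baseline for locating a laser spot on a camera image. The background threshold and the centroiding scheme here are assumptions for illustration; the spot-finding method actually developed in the thesis may differ.

```python
import numpy as np

def spot_centroid(image, threshold=0.1):
    """Estimate a laser-spot position on a camera image by an
    intensity-weighted centroid of pixels above a background threshold.

    Returns (x, y) in pixel coordinates. Generic sketch with an assumed
    relative threshold; assumes exactly one spot is present.
    """
    img = np.asarray(image, dtype=float)
    img = np.where(img > threshold * img.max(), img, 0.0)  # suppress background
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total
```

On a synthetic Gaussian spot the centroid recovers the sub-pixel centre to a few hundredths of a pixel, which is the kind of precision needed before combining spots into the multi-camera LSM measurement.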
|
910 |
Firm performance, sources and drivers of innovation and sectoral technological trajectories : an empirical study using recent french CIS / Performance économique, sources et leviers de l'innovation et filières technologiques : une étude économétrique à partir de données CIS françaises - Haned, Naciba 10 June 2011 (has links)
This thesis comprises three chapters that draw on an evolutionary analytical framework and empirically study the innovation-performance relationship using CIS data. We aim to show that the sources of innovation and the methods of appropriation vary with the sector of activity and with firms' innovation strategies. First, we describe innovation trends across four waves of CIS surveys (1994-2006) and analyse innovation persistence on a sample of 431 firms using a binary logistic regression. We show that innovation persistence is higher for product innovators, because these firms must invest continuously in innovation projects to remain competitive. Process innovators are less persistent, because their strategy is oriented more towards adjustments of product quality and improvements of production processes. The last two essays use the two-stage least squares method to explore the link between innovation and economic performance on a sample of 7,742 firms over the period 2002-2005. We show, on the one hand, that the main source of innovation for "science-based" firms is R&D and, on the other hand, that these firms appropriate innovation rents through complementary assets (such as the combined use of intellectual property titles and trade secrets). By contrast, firms in the other categories (notably those with strong economies of scale) base their technological advances on external sources of innovation such as competitors, suppliers, and advanced users.
Moreover, these firms make greater use of commercial appropriation methods such as trademarks and marketing strategies: their products are admittedly less exposed to the risk of imitation, but they are also sensitive to cost changes. / This thesis is structured in three essays grounded in evolutionary theory and provides empirical evidence from CIS data. It aims to show that the sources of innovation and the appropriation of innovation rents vary with firms' activities and innovation strategies. In essay 1, we describe four waves of CIS, covering the period 1994-2006, and we study persistent innovation behavior with a discrete-choice model on a data set of 431 firms. We find that innovation persistence is more important for product innovators, because they need novel products to remain competitive and therefore enrich their knowledge base continuously. By contrast, process innovators are less persistent, because their innovation strategy is less market-oriented and aims at quality or production adjustments. The last two essays explore, with the two-stage least squares method, how firms benefit economically from their innovations, on a sample of 7,742 firms over the period 2002-2005. We show that science-based firms rely more on R&D investments to develop their products and maintain their leads by acquiring complementary assets, i.e. they use mixed methods to appropriate the rents of innovation (the combined use of IPRs and strategic methods, for instance secrecy). By contrast, firms in other categories (for instance firms using cost-cutting strategies) draw more on external sources of knowledge, coming either from suppliers or from advanced users. Additionally, these firms make more extensive use of trademarks and non-technological methods of appropriation (such as marketing devices), because they are less exposed to potential imitation and because they are price-sensitive.
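The two-stage least squares estimator used in the last two essays can be sketched in a few lines. This minimal version (no constant term, no exogenous controls, no robust standard errors) only illustrates the mechanics of the estimator; the variable names and the simulated design in the usage example are hypothetical, not the thesis' specification.

```python
import numpy as np

def two_stage_least_squares(y, X_endog, Z):
    """Plain two-stage least squares.

    Stage 1 regresses the endogenous regressors X_endog on the
    instruments Z; stage 2 regresses y on the fitted values.
    Minimal sketch; add a constant and exogenous controls in practice.
    """
    # Stage 1: project endogenous regressors onto the instrument space
    B1, *_ = np.linalg.lstsq(Z, X_endog, rcond=None)
    X_hat = Z @ B1
    # Stage 2: OLS of y on the fitted regressors
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta
```

In a simulation where the regressor is correlated with the error term, ordinary least squares is biased while this estimator recovers the true coefficient, which is precisely why 2SLS is used when innovation variables are endogenous to performance.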
|