132

Evaluation and variability of power grid hosting capacity for electric vehicles : Case studies of residential areas in Sweden

Sandström, Maria January 2024
Electric vehicles (EVs) are increasing in popularity and play an important role in decarbonizing the transport sector. However, a growing EV fleet can cause problems for power grids, as the grids were not originally designed for EV charging. The potential of a power grid to accommodate EV loads can be assessed through hosting capacity (HC) analysis. The HC is grid-specific and varies over time, so the analysis must reflect local conditions and cover uncertainties and correlations over time. This thesis aims to investigate the HC for EVs in existing residential power grids and to gain a better understanding of how it varies with how the EVs are implemented and charged. The work was carried out in collaboration with a distribution system operator (DSO) and is based on two case studies using real-life data reflecting conditions in Swedish grids. Combinations of different HC assessment methods were used, and the HC was evaluated based on cable loading, transformer loading, and voltage deviation. Additionally, the study investigated three distinct charging strategies: charging on arrival, charging spread evenly over the whole connection period, and charging at the lowest spot price. The results show that the choice of acceptable voltage deviation limit, as well as the charging strategy used, can have a large influence on the HC. A charging strategy based on energy prices resulted in the lowest HC, as numerous EVs charging simultaneously caused high power peaks during low spot-price periods. Charging on arrival was the second-worst strategy, as its peak power coincided with household demand. The best strategy was to spread the charging out evenly, resulting in fewer violations at 100% EV implementation than the other two strategies produced at 25% EV implementation. The findings underscore the need for coordinated charging control of EV fleets, or diversified power tariffs, to balance power on a large scale and use the grids efficiently.
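To make the hosting-capacity check concrete, here is a minimal Monte Carlo sketch in the spirit of the abstract: it estimates the mean daily peak load of a small feeder under a charge-on-arrival strategy and tests it against transformer-loading and voltage-deviation limits. Every number (feeder size, transformer rating, charger power, the voltage-sensitivity factor) is an illustrative assumption, not a value from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

N_HOUSES = 50            # assumed feeder size
TRAFO_KVA = 500          # assumed transformer rating (kVA ~ kW at unity pf)
V_DEV_LIMIT = 0.05       # assumed acceptable voltage deviation (5%)
SENSITIVITY = 0.0004     # assumed p.u. voltage drop per kW at the feeder end

def mean_peak_load(ev_share, n_trials=1000):
    """Monte Carlo over household demand and EV arrivals: mean daily peak (kW)."""
    peaks = []
    for _ in range(n_trials):
        household = rng.normal(1.5, 0.5, size=(N_HOUSES, 24)).clip(min=0.2)
        load = household.sum(axis=0)                 # aggregate hourly profile
        n_ev = int(ev_share * N_HOUSES)
        arrivals = rng.integers(16, 21, size=n_ev)   # arrive 16:00-20:00
        for t in arrivals:                           # 'charge on arrival'
            load[t:t + 3] += 11.0                    # 11 kW charger, 3 h session
        peaks.append(load.max())
    return float(np.mean(peaks))

for share in (0.25, 0.5, 0.75, 1.0):
    peak = mean_peak_load(share)
    print(f"{share:4.0%} EVs: peak {peak:6.1f} kW, "
          f"transformer ok={peak <= TRAFO_KVA}, "
          f"voltage ok={peak * SENSITIVITY <= V_DEV_LIMIT}")
```

With these assumed numbers the voltage-deviation limit binds well before the transformer rating, which is consistent with the abstract's point that the chosen voltage limit strongly shapes the HC.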
133

Estimating Uncertainty in HSPF based Water Quality Model: Application of Monte-Carlo Based Techniques

Mishra, Anurag 15 September 2011
To propose a methodology for uncertainty estimation in water quality modeling as related to TMDL development, four Monte Carlo (MC) based techniques were applied to a Hydrological Simulation Program–FORTRAN (HSPF) model developed for the Mossy Creek bacterial TMDL in Virginia: single-phase MC, two-phase MC, Generalized Likelihood Uncertainty Estimation (GLUE), and Markov Chain Monte Carlo (MCMC). Predictive uncertainty in percent violations of the instantaneous fecal coliform concentration criteria over the prediction period was estimated under two TMDL pollutant allocation scenarios. The average percent violations of the applicable water quality criteria were less than 2% for all the evaluated techniques. Single-phase MC reported greater uncertainty in percent violations than two-phase MC for one of the allocation scenarios. With two-phase MC it is computationally expensive to sample the complete parameter space, and with more simulations the estimates of single- and two-phase MC may converge. Two-phase MC attributed a significantly greater share of the uncertainty estimates to knowledge uncertainty than to stochastic variability. Single- and two-phase MC require manual model calibration, as opposed to GLUE and MCMC, which provide a framework for obtaining posterior (calibrated) parameter distributions from prior parameter distributions and a comparison between observed and simulated data. Uncertainty estimates from GLUE and MCMC were similar when GLUE was applied to log-transformed observed and simulated FC concentrations. GLUE provides flexibility in selecting any goodness-of-fit criterion for the likelihood function and makes no assumption about the distribution of residuals, though this flexibility is also a controversial aspect of GLUE. MCMC has a robust formulation built on a statistical likelihood function, but it requires normally distributed model errors and is computationally more expensive than GLUE in a watershed modeling application. Overall, GLUE is the preferred approach among the evaluated uncertainty estimation techniques for watershed modeling related to bacterial TMDL development. However, applying GLUE in watershed-scale water quality modeling requires further research to evaluate the effect of different likelihood functions and different parameter-set acceptance/rejection criteria. / Ph. D.
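As a concrete illustration of the GLUE procedure discussed above, this sketch samples parameter sets from uniform priors, scores each against observations with a Nash-Sutcliffe likelihood measure on log-transformed concentrations, and keeps the behavioral sets. The toy power-law simulator, the priors, and the 0.5 threshold are stand-ins for the HSPF model and the thesis's actual choices.

```python
import numpy as np

rng = np.random.default_rng(0)

forcing = rng.gamma(2.0, 1.0, 365)            # synthetic daily forcing

def simulate_fc(params):
    """Toy stand-in for the watershed model: power-law response to forcing."""
    a, b = params
    return a * forcing ** b                   # 'fecal coliform concentration'

observed = simulate_fc((3.0, 0.8)) * rng.lognormal(0.0, 0.3, 365)

# 1. Sample parameter sets from uniform priors.
samples = np.column_stack([rng.uniform(1.0, 5.0, 5000),
                           rng.uniform(0.5, 1.2, 5000)])

# 2. Score each set with a likelihood measure -- here Nash-Sutcliffe
#    efficiency on log-transformed data, one common GLUE choice.
def nse_log(obs, sim):
    obs, sim = np.log(obs), np.log(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

scores = np.array([nse_log(observed, simulate_fc(p)) for p in samples])

# 3. Retain 'behavioral' sets above a threshold; their normalised scores
#    weight the predictive distribution (e.g., of criteria violations).
behavioral = scores > 0.5
weights = scores[behavioral] / scores[behavioral].sum()
predictions = np.array([simulate_fc(p) for p in samples[behavioral]])
print(f"{behavioral.sum()} behavioral sets; weighted mean concentration "
      f"{np.average(predictions.mean(axis=1), weights=weights):.2f}")
```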
134

HYSTAR: Hydrology and Sediment Transport Simulation using Time-Area Method

Her, Young Gu 04 May 2011
A distributed approach can improve the functionality of H/WQ (Hydrology and Water Quality) modeling by providing a way to explicitly incorporate the spatial characteristics of a watershed into the model. The time-area approach, with its intuitive and inherently distributed concept, provides a simple method to simulate runoff mechanisms. This study developed a distributed model based on the time-area approach with the goal of improved utility and efficiency in H/WQ modeling. Uncertainty is always introduced into watershed modeling because of imperfect knowledge and scale-dependent spatial heterogeneity and temporal variability. Uncertainty analysis can provide modelers, policy makers, and stakeholders with reliability information, better understanding, and better communication about the modeling results. This study quantified the uncertainty of the model parameters and output through uncertainty analysis in order to assess risk in watershed management. The main goal of this study was to develop a hydrology and sediment transport model capable of routing overland flow using a time-area concept and of reporting the reliability of the modeling results in a probabilistic manner through uncertainty analysis. The HYSTAR (HYdrology and Sediment transport simulation using Time-ARea method) model incorporates a modified Curve Number (CN) method and a newly devised time-area routing method to estimate runoff. HYSTAR is capable of simulating direct runoff, base flow, soil moisture, and sediment load in a distributed manner at an hourly time step. In the model, the modified CN method and a continuity equation are used to calculate infiltration of the routed runoff as well as of rainfall on every overland cell. The effective direct runoff volume is distributed over downstream areas using the newly developed routing method. A direct runoff hydrograph is constructed directly through the discrete convolution of the time-area histogram and the effective direct runoff volume map, without employing a unit hydrograph. In addition, sediment transport is simulated using the routing method and the sediment transport capacity approach, without using a delivery ratio. The sensitivity analysis found that the CN and root zone depth were the most critical parameters for runoff simulation with HYSTAR. The model performed acceptably in predicting runoff and sediment load for a subwatershed of the Owl Run Watershed (ORD), with Nash-Sutcliffe efficiency coefficients and coefficients of determination greater than 0.5. However, it failed to reproduce runoff for a subwatershed of Polecat Creek Watershed (PCA), where the data show that runoff is not immediately responsive to rainfall. Uncertainty analysis revealed that the confidence intervals of the simulated monthly runoff and sediment load corresponded to 9.7% and 10.2% of their averages, respectively, at a significance level of 0.05. In addition, the average ranges of variation introduced by Digital Elevation Model (DEM) and National Land Cover Data (NLCD) errors in the simulated monthly runoff and sediment load were equivalent to 7.5% and 15.9% of the averages of their calibrated values, respectively. Based on the uncertainty analysis results, the Margin of Safety (MOS) of the Total Maximum Daily Load (TMDL) was explicitly quantified as corresponding to 7.0% and 21.3% of the average simulated runoff and sediment load for ORD at a significance level of 0.05.
In conclusion, the HYSTAR model provides a new way to explicitly simulate the runoff and sediment load of a watershed in a distributed manner. The approach developed here retains the simplicity of a unit hydrograph approach without employing numerical methods. Uncertainty analysis found that parameter uncertainty had a greater impact on the model output than did the expected Geographic Information System (GIS) data errors, and that the impact of the topographic data error exceeded that of the land cover data error. Finally, this study provided evidence that the 5 to 10% MOS adopted in many TMDL studies underestimates modeling uncertainty. / Ph. D.
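The core of the routing scheme, constructing a direct runoff hydrograph as the discrete convolution of a time-area histogram with effective runoff from the Curve Number method, can be sketched in a few lines. The histogram shape, CN value, and storm are illustrative, and HYSTAR applies the idea cell by cell on a grid rather than in this lumped form:

```python
import numpy as np

# Time-area histogram: fraction of the watershed contributing at each
# hourly lag (illustrative shape; HYSTAR derives this from flow-path maps).
time_area = np.array([0.10, 0.30, 0.35, 0.15, 0.07, 0.03])

# Effective (runoff-producing) rainfall depth per hour, in mm, via the
# SCS Curve Number method with an assumed CN of 80.
cn = 80.0
s = 25400.0 / cn - 254.0          # potential retention, mm
rain = np.array([0.0, 5.0, 20.0, 12.0, 3.0, 0.0])
p_cum = np.cumsum(rain)
q_cum = np.where(p_cum > 0.2 * s,
                 (p_cum - 0.2 * s) ** 2 / (p_cum + 0.8 * s), 0.0)
effective = np.diff(q_cum, prepend=0.0)

# Direct runoff hydrograph = discrete convolution of the two series,
# avoiding an explicit unit hydrograph.
hydrograph = np.convolve(effective, time_area)
print(np.round(hydrograph, 2))
```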
135

Application of Bayesian Inference Techniques for Calibrating Eutrophication Models

Zhang, Weitao 26 February 2009
This research aims to integrate mathematical water quality models with Bayesian inference techniques to obtain effective model calibration and a rigorous assessment of the uncertainty underlying model predictions. The first part of my work combines a Bayesian calibration framework with a complex biogeochemical model to reproduce oligo-, meso-, and eutrophic lake conditions. The model accurately describes the observed patterns and also provides realistic estimates of the predictive uncertainty for water quality variables. The Bayesian estimates are also used for appraising the exceedance frequency and the confidence of compliance with different water quality criteria. The second part introduces a Bayesian hierarchical framework (BHF) for calibrating eutrophication models across multiple systems (or sites of the same system). The models calibrated under the BHF provided accurate system representations for all the scenarios examined. The BHF overcomes the problem of insufficient local data by “borrowing strength” from well-studied sites. Both frameworks can facilitate environmental management decisions.
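To illustrate the machinery involved, this sketch calibrates a deliberately simple two-parameter response curve (saturating chlorophyll response to phosphorus loading) with a random-walk Metropolis sampler. The toy model, flat priors, and fixed error variance are illustrative assumptions; the thesis couples the same Bayesian logic to a full biogeochemical model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 'eutrophication model': steady-state chlorophyll as a saturating
# function of phosphorus loading, with two parameters to calibrate.
def model(theta, load):
    growth, half_sat = theta
    return growth * load / (half_sat + load)

loads = np.linspace(5, 100, 20)
obs = model((12.0, 30.0), loads) + rng.normal(0, 0.8, 20)

def log_post(theta):
    if theta[0] <= 0 or theta[1] <= 0:           # flat priors on (0, inf)
        return -np.inf
    resid = obs - model(theta, loads)
    return -0.5 * np.sum(resid ** 2) / 0.8 ** 2  # Gaussian likelihood

# Random-walk Metropolis sampler.
theta = np.array([8.0, 20.0])
lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.3, 1.0])     # proposal step
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[5000:]                   # discard burn-in
print("posterior means:", chain.mean(axis=0))
```

The retained chain is exactly what the abstract's "realistic estimates of predictive uncertainty" come from: pushing posterior samples through the model yields a predictive distribution rather than a single best fit.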
136

Bayesian model averaging using different precipitation series applied to rainfall-runoff simulation in the Ribeirão da Onça Basin

Meira Neto, Antônio Alves 11 July 2013
This study proposed an approach to the hydrological modeling of the Ribeirão da Onça Basin (B.R.O.) based on automatic calibration and uncertainty analysis methods, together with model averaging. The Soil and Water Assessment Tool (SWAT) was used because of its distributed nature and physical description of hydrologic processes. An ensemble composed of five different precipitation series, based on different sources and spatial interpolation schemes, was used as input to the SWAT model. The Sequential Uncertainty Fitting ver. 2 (SUFI-2) procedure was used for automatic calibration and uncertainty analysis of the SWAT model parameters, and for generating streamflow simulations with uncertainty intervals for each precipitation series. Finally, Bayesian Model Averaging (BMA) was used to merge the different responses into a single probabilistic forecast. The results of the uncertainty analysis for the SWAT parameters show that the Soil Conservation Service (SCS) model for surface runoff prediction may not be suitable for the B.R.O., and that further investigation of the soil physical properties of the basin is recommended. An analysis of the precision and accuracy of the simulations produced by the precipitation ensemble members, compared against the BMA-combined simulation, supports the latter as the most suitable framework for rainfall-runoff simulation at the B.R.O.
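A minimal sketch of the averaging step: given streamflow simulations from several precipitation-driven runs, weight each ensemble member by its likelihood against observations and form the weighted forecast. The synthetic data and the one-shot Gaussian weighting (in place of the EM iteration that full BMA typically uses) are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

obs = rng.gamma(3.0, 2.0, 200)                 # observed streamflow (stand-in)
# Simulated flows from five ensemble members (e.g., five rain inputs).
members = np.stack([obs * rng.normal(1.0, s, 200)
                    for s in (0.10, 0.15, 0.20, 0.30, 0.40)])

# Likelihood-based weights (a simplification; full BMA estimates weights
# and member variances jointly via EM).
resid = obs - members
sigma = resid.std(axis=1, keepdims=True)       # per-member error scale
loglik = (-0.5 * np.sum((resid / sigma) ** 2, axis=1)
          - 200 * np.log(sigma.ravel()))
w = np.exp(loglik - loglik.max())
w /= w.sum()

bma_mean = w @ members                         # weighted ensemble forecast
print("weights:", np.round(w, 3))
```

Because the weights scale exponentially with likelihood, the best-performing precipitation series dominates the combination, which is the sense in which the BMA response outperformed the individual members.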
137

Experimental Analysis of Disc Thickness Variation Development in Motor Vehicle Brakes

Rodriguez, Alexander John, alex73@bigpond.net.au January 2006
Over the past decade, vehicle judder caused by Disc Thickness Variation (DTV) has become a major concern for automobile manufacturers worldwide. Judder is usually perceived by the driver as minor to severe vibrations transferred through the chassis during braking [1-9]. In this research, DTV is investigated via the use of a Smart Brake Pad (SBP). The SBP is a tool that will enable engineers to better understand the processes occurring in the harsh and confined environment between the brake pad and disc during braking. It is also a tool for better understanding the causes of DTV and stick-slip, the initiators of low- and high-frequency vibration in motor vehicle brakes. Furthermore, the technology can equally be applied to many other unresolved problems in automotive, aerospace, and rail engineering, or anywhere two surfaces come into contact. The SBP consists of sensors embedded in an automotive brake pad, enabling it to measure the pressure between the brake pad and disc while braking. The two sensor technologies investigated were Thick Film (TF) and Fibre Optic (FO) technologies. Each type was tested individually using a Material Testing System (MTS) at room and elevated temperatures. The chosen SBP was then successfully tested under simulated driving conditions. A preliminary mathematical model was developed and tested for the TF sensor, and a novel Finite Element Analysis (FEA) model for the FO sensor. A new method, the Total Expected Error (TEE) method, was also developed to simplify the sensor specification process and ensure consistent comparisons between sensors. Most importantly, this work should lead to improved comfort levels for motorists.
138

Back-calculating emission rates for ammonia and particulate matter from area sources using dispersion modeling

Price, Jacqueline Elaine 15 November 2004
Engineering directly impacts current and future regulatory policy decisions. The foundation of air pollution control and air pollution dispersion modeling lies in the mathematics, chemistry, and physics of the environment. Regulatory decision making must therefore rely on sound science and engineering as the core of appropriate policy making (objective analysis in lieu of subjective opinion). This research evaluated particulate matter and ammonia concentration data as well as two modeling methods: a backward Lagrangian stochastic model and a Gaussian plume dispersion model. The analysis assessed the uncertainty surrounding each sampling procedure in order to better understand the uncertainty in the final emission rate calculation (a basis for federal regulation), and it assessed the differences between emission rates generated using the two dispersion models. First, this research evaluated the uncertainty in the gravimetric sampling of particulate matter and in the passive ammonia sampling technique at an animal feeding operation. Future research will further characterize the wind velocity profile and the vertical temperature gradient during the modeling period; this information will help quantify the uncertainty in the meteorological inputs to the dispersion model and thus the propagated uncertainty in the dispersion modeling outputs. Next, an evaluation of the emission rates generated by both the Industrial Source Complex (Gaussian) model and the WindTrax (backward Lagrangian stochastic) model revealed that the concentrations each model calculates from its own average back-calculated emission rate are extremely close in value; however, the average emission rates calculated by the two models differ by a factor of 10, which is extremely troubling. In conclusion, current and future sources are regulated based on emission rate data from previous time periods: emission factors are published for the regulation of various sources, and these factors are derived from back-calculated model emission rates and site management practices. This factor-of-10 discrepancy could therefore prove troubling for regulation if the model from which an emission rate is back-calculated is not the model used to predict future downwind pollutant concentrations.
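The back-calculation itself exploits the linearity of dispersion models in the emission rate: run the model once with a unit rate, then scale by the measured concentration. Below is a sketch with a standard Gaussian plume, including the ground-reflection term; the dispersion-coefficient curves and all site numbers are illustrative assumptions rather than a regulatory parameterisation.

```python
import numpy as np

def plume_concentration(q, x, y, z, u, h=2.0):
    """Gaussian plume concentration (g/m^3) at receptor (x, y, z).

    q: emission rate (g/s), u: wind speed (m/s), h: release height (m).
    Dispersion coefficients follow a rough power-law fit for neutral
    stability (illustrative values only).
    """
    sig_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)
    sig_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    lateral = np.exp(-y ** 2 / (2 * sig_y ** 2))
    vertical = (np.exp(-(z - h) ** 2 / (2 * sig_z ** 2))
                + np.exp(-(z + h) ** 2 / (2 * sig_z ** 2)))  # ground reflection
    return q / (2 * np.pi * u * sig_y * sig_z) * lateral * vertical

# Back-calculation: run the model with a unit emission rate, then scale
# by the measured downwind concentration.
measured = 45e-6                       # assumed measurement, g/m^3
c_unit = plume_concentration(1.0, x=100.0, y=0.0, z=2.0, u=3.0)
q_estimated = measured / c_unit
print(f"back-calculated emission rate: {q_estimated:.2f} g/s")
```

The factor-of-10 issue the abstract raises lives entirely in `c_unit`: two dispersion models can produce similar concentration fields from their own back-calculated rates yet disagree sharply on the rate itself.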
139

On intrinsic uncertainties in earth system modelling

Knopf, Brigitte January 2006
Uncertainties are pervasive in Earth System modelling. This is not just due to a lack of knowledge about physical processes; it also has its seeds in intrinsic, i.e. inevitable and irreducible, uncertainties concerning the process of modelling itself. It is therefore indispensable to quantify uncertainty in order to determine which results are robust under this inherent uncertainty. The central goal of this thesis is to explore how uncertainties map onto the properties of interest, such as the phase space topology and qualitative dynamics of the system. Several types of uncertainty are addressed, and methods of dynamical systems theory are applied to a trendsetting field of climate research: the Indian monsoon.

For the systematic analysis of the different facets of uncertainty, a box model of the Indian monsoon is investigated. It exhibits a saddle-node bifurcation against those parameters that influence the heat budget of the system, accompanied by a regime shift from a wet to a dry summer monsoon. As some of these parameters are crucially influenced by anthropogenic perturbations, the questions are, first, whether the occurrence of this bifurcation is robust against uncertainties in the parameters and in the number of considered processes and, second, whether the bifurcation can be reached under climate change. The results indicate, for example, that the occurrence of the bifurcation is robust against all considered parameter uncertainties, while reaching the critical point under climate change appears rather improbable.

A novel method is applied to analyse the occurrence and the position of the bifurcation point in the monsoon model under parameter uncertainty. This method combines two standard approaches: a bifurcation analysis and multi-parameter ensemble simulations. As a model-independent and therefore universal procedure, it allows the uncertainty associated with a bifurcation to be investigated in a high-dimensional parameter space in many other models.

With the monsoon model, the uncertainty about the external influence of the El Niño / Southern Oscillation (ENSO) is determined. There is evidence that ENSO influences the variability of the Indian monsoon, but the underlying physical mechanism is controversial. As a contribution to the debate, three different hypotheses of how ENSO and the Indian summer monsoon are linked are tested. In this thesis, coupling through the trade winds is identified as the key link between these two major climate constituents. On the basis of this physical mechanism, the observed monsoon rainfall data can be reproduced to a great extent. Moreover, the mechanism can be identified in two general circulation models (GCMs), both for the present-day situation and for future projections under climate change.

Furthermore, uncertainties in the process of coupling models are investigated, focusing on a comparison of forced dynamics with fully coupled dynamics. The former describes a particular type of coupling in which the dynamics of one sub-module is substituted by data. Intrinsic uncertainties and constraints are identified that prevent the consistency of a forced model with its fully coupled counterpart. Qualitative discrepancies between the two modelling approaches are highlighted, which lead to an overestimation of predictability and produce artificial predictability in the forced system. The results suggest that bistability and intermittent predictability, when found in a forced model set-up, should always be cross-validated with alternative coupling designs before being taken for granted.

All in all, this thesis contributes to the fundamental issue of dealing with the uncertainties that the climate modelling community is confronted with. While some uncertainties can be included in the interpretation of model results, intrinsic uncertainties could be identified that are inevitable within a certain modelling paradigm and are provoked by the specific modelling approach.
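The combination of bifurcation analysis with multi-parameter ensembles can be illustrated on a one-dimensional toy system that stands in for the monsoon box model: for each ensemble member, sweep the control parameter and record whether, and where, the equilibria disappear at a saddle-node.

```python
import numpy as np

rng = np.random.default_rng(3)

def n_equilibria(mu, a):
    """Equilibria of the toy system dx/dt = mu - a*x - x**2.

    The saddle-node (fold) sits at mu = -a**2/4, so its position, but not
    its existence, depends on the uncertain parameter a.
    """
    return 2 if a ** 2 + 4 * mu >= 0 else 0

def fold_position(a, mus=np.linspace(-2.0, 1.0, 3001)):
    """Smallest mu in the sweep at which equilibria exist."""
    for mu in mus:
        if n_equilibria(mu, a) > 0:
            return mu
    return np.nan                      # no fold found in the sweep

# Multi-parameter ensemble: sample the uncertain parameter and collect
# the fold position for each member.
folds = np.array([fold_position(a) for a in rng.uniform(0.5, 2.0, 500)])
print(f"fold occurs in {np.isfinite(folds).mean():.0%} of members; "
      f"position {folds.mean():.2f} +/- {folds.std():.2f}")
```

This mirrors the qualitative finding above: the fold's occurrence is robust across the ensemble while its position shifts with the uncertain parameter.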
140

Detection of long-range dependence : applications in climatology and hydrology

Rust, Henning January 2007
It is desirable to reduce the potential threats that result from the variability of nature, such as droughts or heat waves that lead to food shortages or, at the other extreme, floods that cause severe damage. To prevent such catastrophic events, it is necessary to understand, and to be capable of characterising, nature's variability. Typically, one aims to describe the underlying dynamics of geophysical records with differential equations. There are, however, situations where this does not serve the objectives or is not feasible, e.g., when little is known about the system or it is too complex for the model parameters to be identified. In such situations it is beneficial to regard certain influences as random and describe them with stochastic processes. In this thesis I focus on such a description with linear stochastic processes of the FARIMA type and concentrate on the detection of long-range dependence. Long-range dependent processes show an algebraic (i.e. slow) decay of the autocorrelation function, and detecting this is important with respect to, e.g., trend tests and uncertainty analysis.

Aiming to provide a reliable and powerful strategy for the detection of long-range dependence, I suggest an approach that differs somewhat from the standard ones. Commonly used methods are based either on investigating the asymptotic behaviour (e.g., log-periodogram regression) or on finding a suitable, potentially long-range dependent model (e.g., FARIMA[p,d,q]) and testing the fractional difference parameter d for compatibility with zero. Here, I suggest rephrasing the problem as a model selection task, i.e., comparing the most suitable long-range dependent and the most suitable short-range dependent model. Approaching the task this way requires (a) a suitable class of long-range and short-range dependent models, along with suitable means of parameter estimation, and (b) a reliable model selection strategy, capable of discriminating also between non-nested models. The flexible FARIMA model class together with the Whittle estimator fulfils the first requirement. Standard model selection strategies, such as the likelihood-ratio test, are frequently not powerful enough for comparing non-nested models. I therefore suggest extending this strategy with a simulation-based model selection approach suitable for such a direct comparison. The approach follows the procedure of a statistical test, with the likelihood ratio as the test statistic; its distribution is obtained via simulations using the two models under consideration. For two simple models and different parameter values, I investigate the reliability of the p-value and power estimates obtained from the simulated distributions. The result turns out to depend on the model parameters; in many cases, however, the estimates allow an adequate model selection to be established. An important feature of this approach is that it immediately reveals the ability or inability to discriminate between the two models under consideration.

Two applications, a trend detection problem in temperature records and an uncertainty analysis for flood return level estimation, accentuate the importance of having reliable methods for the detection of long-range dependence. In the case of trend detection, falsely concluding long-range dependence implies an underestimation of a trend and possibly delays the measures needed to counteract it. Ignoring long-range dependence, although present, leads to an underestimation of confidence intervals and thus to an unjustified belief in safety, as is the case for the return level uncertainty analysis. A reliable detection of long-range dependence is thus highly relevant in practical applications. Examples related to extreme value analysis are not limited to hydrological applications: the increased uncertainty of return level estimates is a potential problem for all records from autocorrelated processes. An interesting example in this respect is the assessment of the maximum strength of wind gusts, which is important for designing wind turbines. The detection of long-range dependence is also a relevant problem in the exploration of financial market volatility. By rephrasing the detection problem as a model selection task and suggesting refined methods for model comparison, this thesis contributes to the discussion on, and development of, methods for the detection of long-range dependence.
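The simulation-based model selection step can be sketched generically. Here AR(1) and MA(1) stand in for the short-range and long-range dependent candidates (a genuinely non-nested pair; fitting FARIMA by Whittle estimation is beyond a few lines), and statsmodels does the fitting:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

rng = np.random.default_rng(4)

def lr_statistic(y):
    """Log-likelihood ratio between the two non-nested candidate models."""
    ll_m1 = ARIMA(y, order=(1, 0, 0)).fit().llf    # candidate 1: AR(1)
    ll_m0 = ARIMA(y, order=(0, 0, 1)).fit().llf    # candidate 0: MA(1)
    return ll_m1 - ll_m0

# 'Observed' record -- here generated from an AR(1), so the test should
# reject the MA(1) null in favour of the AR(1).
y_obs = arma_generate_sample(ar=[1, -0.6], ma=[1], nsample=400,
                             distrvs=rng.standard_normal)
t_obs = lr_statistic(y_obs)

# Obtain the statistic's null distribution by simulation: fit the null
# model to the data, generate surrogate series from it, recompute the LR.
null_fit = ARIMA(y_obs, order=(0, 0, 1)).fit()
theta, sigma2 = null_fit.params[1], null_fit.params[-1]
null_stats = [lr_statistic(arma_generate_sample(
                  ar=[1], ma=[1, theta], nsample=400, scale=np.sqrt(sigma2),
                  distrvs=rng.standard_normal))
              for _ in range(200)]

p_value = np.mean(np.array(null_stats) >= t_obs)
print(f"LR = {t_obs:.1f}, simulated p-value = {p_value:.3f}")
```

Running the same machinery with the roles of the two models swapped immediately shows how well (or poorly) they can be discriminated, which is the feature the abstract highlights.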
