61

Testing and improving the spatial and temporal resolution of satellite, radar and station data for hydrological applications

Görner, Christina 26 July 2021 (has links)
This doctoral thesis is based on three publications (two peer-reviewed, one submitted). Its objective was to test existing methods and to develop innovative methods for generating highly resolved climate data, with a focus on the spatio-temporal distribution of precipitation, since both the spatial and temporal resolution and the length of such data sets are limited. For this purpose, satellite- and radar-based remote sensing data, ground-based station data, and modelling methods were applied and combined. The Free State of Saxony (Germany) served as the investigation area, as its mountainous regions are prone to heavy precipitation events and related (flash) floods, e.g., in 2002, 2010, and 2013. Two approaches were developed to generate hourly data when no station data, or only daily data, are available. The first approach applies four different algorithms to estimate area-wide rain rates from the satellite data of Meteosat Second Generation (MSG-1) and compares them to the gauge-adjusted radar data product RADOLAN RW. The analyses of five spatial and six temporal integration steps by means of four fit scores and statistical relations show a stepwise improvement: integration leads to an increasing probability of detection (POD) and critical success index (CSI), a decreasing false alarm ratio (FAR) and bias, and improved statistical relations, especially for heavy rain rates. The best results are achieved for the lowest resolution of 120 km × 120 km and 24 h. However, this resolution is too low for applications in (flash) flood risk management for small and medium-sized catchments. Such satellite-based rain-rate estimates may serve as a data source for unobserved regions or as an indicator for large catchments or longer periods. The second approach comprises the newly developed Euclidean distance model (EDM), which generates hourly climate data by means of a temporal disaggregation procedure. The delivered data are point data for the climate variables temperature, precipitation, sunshine duration, relative humidity, and wind speed. They show high correlations and conserve (i) the statistics of the observed hourly data and (ii) the consistency across all disaggregated climate elements. The results reveal that the EDM performs best for climate elements with a continuous diurnal cycle, such as temperature, for the winter half-year, and when the basic climate stations are characterised by similar climate conditions. The EDM proves to be a very robust, flexible, and fast model. Hence, the work presented here succeeded in developing two innovative, location-independent approaches that are applicable to the climate data of any region or station without complex model parametrisation. The methods can likewise be applied to future daily climate data, allowing the generation of the hourly data needed for climate impact models.
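For orientation, the four fit scores named in the abstract are the standard categorical verification measures computed from a 2×2 contingency table of estimated versus reference rain occurrence, with H hits, M misses, and F false alarms:

```latex
% Categorical verification scores from a 2x2 contingency table:
% H = hits, M = misses, F = false alarms.
\mathrm{POD} = \frac{H}{H+M}, \qquad
\mathrm{FAR} = \frac{F}{H+F}, \qquad
\mathrm{CSI} = \frac{H}{H+M+F}, \qquad
\mathrm{Bias} = \frac{H+F}{H+M}
```

POD and CSI improve toward 1, FAR improves toward 0, and a frequency bias of 1 means rain is estimated as often as it is observed, which is why spatio-temporal integration that raises POD and CSI while lowering FAR and bias counts as a stepwise improvement.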
62

An Exploration of and Case Studies in Demand Forecast Accuracy: Replenishment, Point of Sale, and Bounding Conditions

Smyth, Kevin Barry January 2017 (has links)
No description available.
63

[en] DISAGGREGATION OF ELECTRICAL ENERGY BY HOME APPLIANCES FOR RESIDENTIAL CONSUMERS / [pt] DESAGREGAÇÃO DA ENERGIA ELÉTRICA POR ELETRODOMÉSTICOS PARA CONSUMIDORES RESIDENCIAIS

ESTIVEN OROZCO ZULUAGA 24 January 2019 (has links)
[en] In recent years, the cost of electricity has increased significantly for consumers in Brazil. Large consumers, such as industrial and commercial customers, currently have alternatives for mitigating these costs, such as optimizing their demand contracts, correcting a low power factor, using their own renewable or non-renewable generation, and the possibility of migrating to the free electricity market, with its various contract modalities, prices, and terms. Residential consumers, on the other hand, given their lower energy bills and the technical limitations of their meters, have so far had few mechanisms for reducing their costs. In recent years, however, distributed generation has become increasingly common among these consumers, mainly through the use of photovoltaic panels. In addition, with the falling cost of smart electricity meters, these consumers can also monitor their consumption in real time, taking energy-efficiency measures to reduce costs. More recently, tariff flags were created, which signal systemic conditions using the colours green, yellow, and red. The yellow and red flags signal increased electricity production costs, which are passed on to the consumer in the form of tariff increases, promoting demand response. Consumers thus have an additional reason to monitor their consumption. Furthermore, in 2018 a new tariff modality aimed at this consumer class was adopted, called the white tariff. Under this modality, the consumer faces different tariff values at different periods of the day, so a consumer who opts for it can reduce the cost of the bill by shifting consumption from high-tariff hours to low-tariff hours. This dissertation analyses in detail the viability of a residential consumer migrating to the so-called white tariff. To this end, a mixed-integer linear optimization model is proposed that disaggregates the consumer's electricity consumption, measured non-intrusively, among the different household appliances. The consumer can then decide on the contractual change by evaluating the loss of comfort entailed in changing consumption habits. The application of the proposed model is of interest not only because it provides a more detailed diagnosis of electricity consumption, but also because it identifies the operation of appliances such as the refrigerator, air conditioner, and minibar, which have different operating states that would hardly be captured by a simple inspection of these appliances. To illustrate the proposed model, data from a real consumer were used, and the accuracy of the model was verified against direct measurements of some appliances. The consumer thus has at their disposal an important decision-support tool for monitoring the operation of household appliances and deciding whether to migrate to the new tariff modality.
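As a rough sketch of the kind of mixed-integer linear formulation described above (the dissertation's exact model is not reproduced here; the appliance set, single power levels, and absolute-deviation objective below are illustrative assumptions), the disaggregation can be posed as choosing binary on/off states whose summed power best matches the metered signal:

```python
# Minimal sketch of non-intrusive load disaggregation as a MILP (PuLP).
# Assumptions: each appliance a has one known power level p[a]; binary
# x[a][t] says whether a is on at time t; we minimise the total absolute
# deviation between the metered signal y[t] and the summed appliance
# powers, linearised via auxiliary variables e[t].
import pulp

y = [120.0, 1620.0, 1710.0, 210.0]                   # metered totals (W), toy data
p = {"fridge": 120.0, "shower": 1500.0, "ac": 90.0}  # assumed appliance levels
T = range(len(y))

prob = pulp.LpProblem("disaggregation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("on", (p.keys(), T), cat="Binary")
e = pulp.LpVariable.dicts("dev", T, lowBound=0)

for t in T:
    load = pulp.lpSum(p[a] * x[a][t] for a in p)
    prob += e[t] >= y[t] - load                      # linearise |y[t] - load|
    prob += e[t] >= load - y[t]
prob += pulp.lpSum(e[t] for t in T)                  # objective: total deviation

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for a in p:
    print(a, [int(pulp.value(x[a][t])) for t in T])
```

A real formulation would add, for example, minimum run times and state-transition constraints for multi-state appliances such as refrigerators and air conditioners.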
64

Space disaggregation in models of route and mode choice : method and application to the Paris area / Désagrégation de l'espace dans les modèles de choix d'itinéraire et de mode : méthode et application à la région Ile-de-France

Samadzad, Mahdi 18 January 2013 (has links)
Spatial representation of the modeling area in travel demand models has changed little over the course of the last several decades. In this regard, the state of the art still widely relies on the same centroid-connector system used in classic models, in which the continuous two-dimensional space is lumped onto centroids. It is an aggregate approach that ignores the physical variability linked to the scattering of disaggregate residence and activity places over the local space. Consequently, modeling performance in explaining route and mode choice behavior degrades at local scales: in route choice, the disaggregate location influences the choice between a highway whose interchange is distant and an alternative non-highway route; in mode choice, feeder service to public transportation influences the auto vs. transit modal share. We propose a disaggregate approach to spatial representation. Based on a zoning system, a stochastic disaggregate representation is used to characterize the space within a traffic analysis zone. For each zone, anchor points are defined as the network nodes used for accessing the network from within the local space. An itinerary between a pair of zones is then considered as a chain of legs composed of two terminal legs, corresponding to the intrazonal route sections, and one main leg between two anchor points. The route choice problem is thus transformed into a joint choice of a pair of anchor points. The vector of random terminal travel times is Multivariate Normal, resulting in a Multinomial Probit model of the choice of a pair of anchor points. To extend to the multimodal context, a composite transit mode is defined as a chain of access, main, and egress modal legs, and transit platforms are considered as anchor points connecting the feeder legs to the main line-haul leg. A Multinomial Logit mode choice model is estimated on the 2001 Paris Household Travel Survey for the auto mode and the composite transit modes, and is joined with the two Multinomial Probit models corresponding to the choice of anchor points. The result is a joint model of mode and station choice with a disaggregate representation of the local space.
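For reference, the Multinomial Logit share used in the mode choice stage has the standard closed form below, where V_i is the systematic utility of alternative i in choice set C (the utilities estimated from the survey are not reproduced here); no such closed form exists for the Multinomial Probit anchor-point models, whose Multivariate Normal terminal times require numerical integration or simulation:

```latex
% Multinomial Logit probability of choosing alternative i from set C:
P(i) = \frac{e^{V_i}}{\sum_{j \in C} e^{V_j}}
```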
65

[en] DECOMPOSITION AND RELAXATION ALGORITHMS FOR NONCONVEX MIXED INTEGER QUADRATICALLY CONSTRAINED QUADRATIC PROGRAMMING PROBLEMS / [pt] ALGORITMOS BASEADOS EM DECOMPOSIÇÃO E RELAXAÇÃO PARA PROBLEMAS DE PROGRAMAÇÃO INTEIRA MISTA QUADRÁTICA COM RESTRIÇÕES QUADRÁTICAS NÃO CONVEXA

TIAGO COUTINHO CARNEIRO DE ANDRADE 29 April 2019 (has links)
[en] This thesis investigates and develops algorithms based on Lagrangian relaxation and the normalized multiparametric disaggregation technique to solve nonconvex mixed-integer quadratically constrained quadratic programming problems. First, relaxations for quadratic programming and related problem classes are reviewed. Then, the normalized multiparametric disaggregation technique is improved into a reformulated version in which the size of the generated subproblems is reduced, in particular in the number of binary variables. Furthermore, issues related to the use of Lagrangian relaxation for nonconvex problems are addressed by replacing the dual subproblems with convex relaxations. The method is compared to commercial and open-source off-the-shelf global solvers using randomly generated instances. The proposed method converged in 35 of the 36 instances tested, while Baron, the benchmark solver that obtained the best results, converged in only 4 of 36. Additionally, even for the one instance our method did not solve, it achieved a relative gap below 1 percent, while Baron reached relative gaps between 10 percent and 30 percent for most of the instances in which it did not converge.
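For context, the core idea of multiparametric disaggregation is to replace one factor of each bilinear term by a base-10 digit expansion, turning the product into mixed-integer linear constraints; a sketch for a normalized variable λ ∈ [0, 1] discretized to p decimal places (the notation here is ours, not the thesis's):

```latex
% Digit expansion with binaries z_{k,l} selecting digit k at decimal
% position l; the residual is bounded by the discretization step:
\lambda = \sum_{\ell=1}^{p} \sum_{k=0}^{9} 10^{-\ell}\, k\, z_{k\ell}
          + \Delta\lambda,
\qquad \sum_{k=0}^{9} z_{k\ell} = 1 \;\; \forall \ell,
\qquad 0 \le \Delta\lambda \le 10^{-p}
```

Each product of a bounded continuous variable with a binary z_{kℓ} can then be linearized exactly, and increasing p tightens the relaxation at the cost of more binary variables, which is precisely what the reformulated version described above seeks to reduce.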
66

Impact Assessment Of Climate Change On Hydrometeorology Of River Basin For IPCC SRES Scenarios

Anandhi, Aavudai 12 1900 (has links)
There is ample growth in scientific evidence about climate change. Since hydrometeorological processes are sensitive to climate variability and change, ascertaining the linkages and feedbacks between the climate and the hydrometeorological processes becomes critical for environmental quality, economic development, and social well-being. As the river basin integrates important systems such as ecological and socio-economic systems, knowledge of the plausible implications of climate change for the hydrometeorology of a river basin will not only increase awareness of how hydrological systems may change over the coming century, but also prepare us for adapting to the impacts of climate change on water resources for sustainable management and development. In general, quantitative climate impact studies are based on several meteorological variables and possible future climate scenarios. Among the meteorological variables, six “cardinal” variables are identified as the most commonly used in impact studies (IPCC, 2001): maximum and minimum temperature, precipitation, solar radiation, relative humidity, and wind speed. The climate scenarios refer to plausible future climates, constructed explicitly for investigating the potential consequences of anthropogenic climate alteration in addition to natural climate variability. Among the climate scenarios adopted in impact assessments, General Circulation Model (GCM) projections based on the marker scenarios given in the Intergovernmental Panel on Climate Change's (IPCC's) Special Report on Emissions Scenarios (SRES) have become the standard. The GCMs are run at coarse resolutions, and therefore their output climate variables for the various scenarios cannot be used directly for impact assessment at a local (river basin) scale. Hence, several methodologies such as downscaling and disaggregation have been developed to transfer information on atmospheric variables from the GCM scale to surface meteorological variables at the local scale. The most commonly used downscaling approaches are based on transfer functions that represent the statistical relationships between the large-scale atmospheric variables (predictors) and the local surface variables (predictands). Recently, the Support Vector Machine (SVM) has been proposed and theoretically shown to have advantages over other techniques in use, such as transfer functions. The SVM implements the structural risk minimization principle, which guarantees the global optimum solution; further, its learning algorithm automatically decides the model architecture. These advantages make the SVM a plausible choice for downscaling hydrometeorological variables. The literature review on the use of transfer functions for downscaling revealed that, though a diverse range of transfer functions has been adopted, only a few studies have evaluated the sensitivity of such downscaling models. Further, no studies had so far been carried out in India on downscaling hydrometeorological variables to the river basin scale, nor was there any prior work aimed at downscaling CGCM3 simulations to these variables at the river basin scale for the various IPCC SRES emission scenarios. The research presented in this thesis is therefore aimed at assessing the impact of climate change on streamflow at the river basin scale for the various IPCC SRES scenarios (A1B, A2, B1, and COMMIT) by integrating the implications of climate change for all six cardinal variables.
The catchment of the Malaprabha river (upstream of the Malaprabha reservoir) in India is chosen as the study area to demonstrate the effectiveness of the developed models, as it is considered a climatically sensitive region: though the river originates in a high-rainfall region, it feeds arid and semi-arid regions downstream. The data of the National Centers for Environmental Prediction (NCEP), the third-generation Canadian Global Climate Model (CGCM3) of the Canadian Centre for Climate Modelling and Analysis (CCCma), observed hydrometeorological variables, a Digital Elevation Model (DEM), a land use/land cover map, and a soil map prepared from merged PAN and LISS III satellite images are considered for use in the developed models. The thesis is broadly divided into four parts. The first part comprises the general introduction and the data, techniques, and tools used. The second part describes the assessment of the implications of climate change for monthly values of each of the six cardinal variables in the study region using SVM downscaling models and the k-nearest neighbour (k-NN) disaggregation technique. Further, the sensitivity of the SVM downscaling models to the choice of predictors, predictand, calibration period, season, and location is evaluated. The third part describes the impact assessment of climate change on streamflow in the study region using the SWAT hydrologic model and SVM downscaling models. The fourth part presents a summary of the work, the conclusions drawn, and the scope for future research. The development of an SVM downscaling model begins with the selection of probable predictors (large-scale atmospheric variables). For this purpose, cross-correlations are computed between the probable predictor variables in the NCEP and GCM data sets, and between the probable predictor variables in the NCEP data set and the predictand. A pool of potential predictors is then stratified (optionally, depending on the variable) by season and/or location by specifying threshold values for the computed cross-correlations. The data on potential predictors are first standardized for a baseline period to reduce systematic bias (if any) in the mean and variance of predictors in the GCM data relative to those in the NCEP reanalysis data. The standardized NCEP predictor variables are then processed using principal component analysis (PCA) to extract principal components (PCs), which are orthogonal and which preserve more than 98% of the variance originally present. A feature vector is formed for each month using the PCs; the feature vector forms the input to the SVM model, and the contemporaneous value of the predictand is its output. Finally, the downscaling model is calibrated to capture the relationship between the NCEP data on potential predictors (i.e., the feature vectors) and the predictand. A grid search procedure is used to find the optimum range for each of the parameters; the optimum parameter values are subsequently obtained from the selected ranges using the stochastic search technique of genetic algorithms. The SVM model is then validated and used to obtain projections of the predictand for simulations of CGCM3. Results show that precipitation, maximum and minimum temperature, relative humidity, and cloud cover are projected to increase in the future for the A1B, A2, and B1 scenarios, whereas no trend is discerned with COMMIT. The projected increase in the predictands is highest for the A2 scenario and least for the B1 scenario.
Wind speed is not projected to change in the future for the study region under any of the aforementioned scenarios. Solar radiation is projected to decrease in the future for the A1B, A2, and B1 scenarios, whereas no trend is discerned with COMMIT. To assess the monthly streamflow responses to climate change, two methodologies are considered in this study, namely (i) downscaling and disaggregating the meteorological variables for use as inputs to SWAT, and (ii) directly downscaling streamflow using SVM. SWAT is a physically based, distributed, continuous-time hydrological model that operates on a daily time scale. The hydrometeorological variables obtained using the SVM downscaling models are disaggregated to the daily scale using the k-nearest neighbour method developed in this study. The other inputs to SWAT are the DEM, land use/land cover map, and soil map, which are considered to be the same for the present and future scenarios. The SWAT model projects an increase in future streamflows for the A1B, A2, and B1 scenarios, whereas no trend is discerned with COMMIT. Monthly projections of streamflow at the river basin scale are also obtained using two SVM-based downscaling models. The first SVM model (the one-stage SVM model) considers feature vectors prepared from monthly values of large-scale atmospheric variables as inputs, whereas the second SVM model (the two-stage SVM model) considers feature vectors prepared from the monthly projections of the cardinal variables as inputs. The trend in streamflows projected using the two-stage SVM model is found to be similar to that projected by SWAT for each of the scenarios considered; streamflow is not projected to change under any of the scenarios with the one-stage SVM downscaling model. The relative performance of SWAT and the two SVM downscaling models in simulating observed streamflows is evaluated. In general, all three models are able to simulate the streamflows well; nevertheless, the performance of the SWAT model is better. Further, between the two SVM models, the performance of the one-stage streamflow downscaling model is marginally better than that of the two-stage model.
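A compact sketch of the downscaling pipeline as described, standardizing predictors, retaining principal components that explain at least 98% of the variance, and fitting an SVM regression tuned by grid search (the arrays, kernel, and parameter grid below are placeholders, and the thesis further refines parameters with a genetic algorithm, which is omitted here):

```python
# Illustrative SVM downscaling pipeline: large-scale predictors -> predictand.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X_ncep = rng.normal(size=(360, 20))   # placeholder NCEP reanalysis predictors
y = rng.normal(size=360)              # placeholder predictand (e.g. monthly precip)

pipe = Pipeline([
    ("scale", StandardScaler()),      # standardize mean/variance
    ("pca", PCA(n_components=0.98)),  # keep PCs explaining >= 98% of variance
    ("svr", SVR(kernel="rbf")),       # SVM regression
])
search = GridSearchCV(pipe, {"svr__C": [1, 10, 100],
                             "svr__gamma": ["scale", 0.01, 0.1]}, cv=5)
search.fit(X_ncep, y)

X_gcm = rng.normal(size=(120, 20))    # placeholder CGCM3 predictors
projection = search.predict(X_gcm)    # downscaled projection of the predictand
```

In the thesis, the fitted model is then driven with CGCM3 predictors standardized against the NCEP baseline to produce the scenario projections.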
67

Engineering seismological studies and seismic design criteria for the Buller Region, South Island, New Zealand

Stafford, Peter James January 2006 (has links)
This thesis addresses two fundamental topics in engineering seismology: the application of Probabilistic Seismic Hazard Analysis (PSHA) methodology, and the estimation of measures of strong ground motion. These two topics, while related, are presented as separate sections. In the first section, state-of-the-art PSHA methodologies are applied to various sites in the Buller Region, South Island, New Zealand. These sites are deemed critical to the maintenance of economic stability in the region. A fault-source-based seismicity model is developed for the region that is consistent with the governing tectonic loading and seismic moment release of the region. In attempting to ensure this consistency, the apparent anomaly between the rates of activity dictated by deformation throughout the Quaternary and the rates of activity dictated by observed seismicity is addressed. Individual fault-source activity is determined following the application of a Bayesian inference procedure in which observed earthquake events are attributed to causative faults in the study region. The activity of fault sources, in general, is assumed to be governed by bounded power-law behaviour; an exception is made for the Alpine Fault, which is modelled as a purely characteristic source. The calculation of rates of exceedance of various ground-motion indices is made using a combination of Poissonian and time-dependent earthquake occurrence models. The ground-motion indices for which rates of exceedance are determined include peak ground acceleration, ordinates of 5%-damped spectral acceleration, and Arias intensity. The total hazard determined for each of these ground-motion measures is decomposed using a four-dimensional disaggregation procedure, from which design earthquake scenarios are specified for the sites considered. The second part of the thesis is concerned with the estimation of ground-motion measures that are more informative than the existing scalar measures available for use in New Zealand. Models are developed for the prediction of Fourier Amplitude Spectra (FAS) as well as Arias intensity for use in the New Zealand environment. The FAS model can be used to generate ground-motion time histories for use in structural and geotechnical analyses. Arias intensity has been shown to be an important strong-motion measure due to its positive correlation with damage in short-period structures as well as its utility in predicting the onset of liquefaction and landslides. The models are based upon the analysis of a dataset of New Zealand strong-motion records as well as supplementary near-field records from major overseas events. While the two measures of ground-motion intensity are strongly related, different methods have been adopted to develop the models. As part of the methodology used for the FAS model, Monte Carlo simulation coupled with a simple ray-tracing procedure is employed to estimate source spectra from various New Zealand earthquakes and, consequently, a relationship between magnitude and corner frequency is obtained. In general, the parameters of the predictive equations are determined using state-of-the-art mixed-effects regression procedures.
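For reference, the rates of exceedance computed in a PSHA of this kind follow the standard hazard integral, summing over fault sources i with mean activity rates ν_i; the four-dimensional disaggregation then re-weights this integrand to identify the magnitude and distance combinations that dominate the hazard at a given level x:

```latex
% Mean annual rate at which ground-motion measure IM exceeds level x:
\lambda(IM > x) = \sum_i \nu_i \iint
P\!\left(IM > x \mid m, r\right)\,
f_{M_i}(m)\, f_{R_i \mid M_i}(r \mid m)\, \mathrm{d}r\, \mathrm{d}m
```

Here P(IM > x | m, r) comes from the ground-motion prediction model, and f_M and f_R are the magnitude and distance distributions of each fault source.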
68

Modeling hydrometeorological extremes in Alpine catchments / Modellering av hydrometeorologiska extremvärden i alpina avrinningsområden

Voulgaridis, Theo January 2017 (has links)
Uncertainties in a modeling framework consisting of a weather generator, two precipitation disaggregation models, and the hydrological HBV model were assessed with respect to hydrometeorological extremes in Tyrol, Austria. Extreme precipitation events are expected to increase in intensity and frequency in the Alps in a warmer climate. The Alpine regions may be particularly vulnerable to such changes in climate; many floods in Europe in recent years have caused major damage and loss of life there. Weather generators typically provide time series at daily resolution. Different disaggregation methods have therefore been proposed and successfully tested to increase the temporal resolution of precipitation. This is essential, since flood peaks may be sustained for as little as a few minutes. Here, the non-parametric method of fragments was tested and compared with the multiplicative microcanonical cascade model with uniform splitting on the reproduction of precipitation extremes. It is also demonstrated that the method-of-fragments model can be adapted to disaggregate temperature with slight changes in the model structure. Preliminary test results show that the simulation of discharge peaks can be improved by disaggregating temperature, in comparison with using daily averages as input to the HBV model. Test results show that precipitation extremes were simulated within confidence bounds for Kelchsauer and Gurglbach when using historical observations as input. These two catchments had longer records of data available than Ruetz, where the majority of simulated precipitation extremes fell outside the confidence ranges; this indicates that the model is data-driven. Synthetic data series were constructed with the weather generator from historical data and disaggregated with the two disaggregation models. The differences between the models were larger for Ruetz, where less observed data was available. The method of fragments simulates extremes with the closest resemblance to the observed extremes; this also holds for the reproduction of wet spells and the simulated variance. To account for parameter uncertainty in the HBV model, it is well motivated to simulate discharge with different but suitable parameter sets to account for equifinality. However, the large amount of data produced when disaggregating the weather-generated time series exceeded the data capacity of the HBV model and made it crash. Other uncertainties related to the framework are the use of theoretical probability distributions in the weather generator and the dependence on high-resolution data for the disaggregation model. Despite these uncertainties, the framework is closer to a physical understanding of the causes of floods than the uncertain frequency-analysis method, and it is also applicable to land-use and climate-change studies.
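A minimal sketch of the non-parametric method of fragments mentioned above, under the simplifying assumption that the donor day is chosen by nearest daily total alone (real applications also condition on season, wetness state, and similar criteria): the daily value to be disaggregated borrows the within-day fractions of the most similar observed day:

```python
# Method-of-fragments disaggregation (illustrative nearest-neighbour version).
import numpy as np

def disaggregate(daily_total, obs_days):
    """obs_days: (n_days, n_steps) observed high-resolution precipitation."""
    totals = obs_days.sum(axis=1)
    wet = totals > 0
    nearest = np.argmin(np.abs(totals[wet] - daily_total))    # closest wet day
    fragments = obs_days[wet][nearest] / totals[wet][nearest]  # fractions sum to 1
    return daily_total * fragments                             # mass is conserved

obs = np.array([[0, 2, 6, 2], [1, 1, 1, 1], [0, 0, 4, 0]], float)
print(disaggregate(8.0, obs))   # borrows the shape of the 10 mm day: [0, 1.6, 4.8, 1.6]
```

Because the fragments are fractions of the donor day, the disaggregated values always sum back to the daily total, which is what makes the method attractive for preserving extremes.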
69

High frequency rainfall data disaggregation with a random cascade model : Identifying regional differences in hyetographs in Sweden

Rulewski Stenberg, Louis January 2021 (has links)
The field of urban hydrology is in need of data series with high temporal resolution in order to effectively model and analyse existing and future trends in extreme precipitation. When high-resolution data sets are, for any number of reasons, not available for a given location, disaggregation using a random cascade model can be applied. Previous studies have demonstrated the relevance of random cascades for rainfall data disaggregation, usually down to temporal resolutions of 1 hour. In this study, an attempt at disaggregation to a resolution of 1 minute was made. Using the newly disaggregated rainfall data for different regions in Sweden, the possibility of clustering rain events into separate regional hyetographs was investigated. The random cascade model was calibrated using existing municipal rainfall data with a temporal resolution of 1 minute, in order to disaggregate continuous 15-minute data series provided by the Swedish Meteorological and Hydrological Institute (SMHI). The disaggregation was then performed in multiple stochastic realisations, in order to account for, and correct, the uncertainties inherent to the random cascade model. The disaggregation results were assessed by comparing them with the calibration data: two main rainfall parameters, event volume (EV) and event duration (ED), were analysed by determining their behaviour and distribution. The possibility of transferring calibration parameters from one station to another was assessed in a similar manner, again by studying EV and ED for different scenarios. Finally, hyetographs were clustered, compared, and contrasted in order to ascertain previously theorized differences between regions. This research showed the feasibility of applying a random cascade model at very high temporal resolutions in Sweden, while replicating rainfall characteristics from the calibration data quite well. The analysis of the spatial transferability of calibration parameters yielded inconclusive results, as rainfall characteristics were preserved in some cases but not in others. Lastly, distinct regional differences in hyetographs were noted, but no clear conclusions could be drawn owing to the delimitations of this study.
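A bare-bones sketch of the multiplicative microcanonical cascade referred to above; in practice the branching probabilities are calibrated from the 1-minute municipal data per cascade level and volume class, and the final 15-minute-to-1-minute step is non-dyadic, both of which are glossed over here (the probability values below are placeholders):

```python
# Microcanonical cascade: each interval's volume is split exactly between
# its two halves, so mass is conserved at every level ("microcanonical").
import random

P01, P10 = 0.3, 0.3        # placeholder probabilities of all-to-one-half splits

def split(volume):
    u = random.random()
    if u < P01:
        return 0.0, volume                 # everything in the second half
    if u < P01 + P10:
        return volume, 0.0                 # everything in the first half
    w = random.random()                    # uniform splitting: W ~ U(0, 1)
    return w * volume, (1.0 - w) * volume

def cascade(series, levels):
    for _ in range(levels):                # each level doubles the resolution
        series = [half for v in series for half in split(v)]
    return series

# A 15-minute value disaggregated over 4 levels into 16 sub-intervals:
print(cascade([12.0], 4))                  # values sum back to 12.0
```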
70

Neural Network-Based Residential Water End-Use Disaggregation / Neurala nätverk för klassificering av vattenanvändning i hushåll

Pierrou, Cajsa January 2023 (has links)
Sustainable management of finite resources is vital for ensuring livable conditions for both current and future generations. Measuring the total water consumption of residential households at high temporal resolution and automatically disaggregating this single signal into classified end uses (e.g., shower, sink) allows for the identification of behavioural patterns that could be improved to minimise wasteful water consumption. Such disaggregation is not trivial, as water consumption patterns vary greatly depending on consumer behaviour, and further since, at any given time, an unknown number of fixtures may be in use simultaneously. In this work, we approach the disaggregation problem by evaluating the performance of a set of recurrent and convolutional neural network structures, provided approximately one year of high-resolution water consumption data from a single apartment in Sweden. Unlike previous approaches to the problem, we let the models process the full, uninterrupted flow traces (as opposed to extracted segments of water-consuming activity) in order to allow temporal dependencies within and between water-consuming activities to be learned. Out of the four networks applied to the task, we find that a deeper temporal convolutional network structure yields the best overall results on the test data, with a prediction accuracy of 85% and an F1-score above 0.8 averaged over all end-use categories, a performance exceeding that of commercial analysis tools and comparable to components of current state-of-the-art approaches. However, significant decreases in performance are observed for all of the networks, particularly for toilet and washing machine activity, when evaluating the models on unseen and augmented data from the apartment, indicating that the results cannot be fully generalised for use in other households.
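An illustrative skeleton of a temporal convolutional network of the kind found to perform best, a stack of dilated causal 1-D convolutions over the raw flow trace with a per-time-step classification head; the channel widths, dilations, and number of end-use classes are placeholders, not the trained model:

```python
# Dilated causal 1-D convolution stack for per-time-step end-use
# classification of a single flow trace (illustrative architecture).
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    def __init__(self, ch_in, ch_out, dilation):
        super().__init__()
        self.pad = (3 - 1) * dilation       # left padding keeps it causal
        self.conv = nn.Conv1d(ch_in, ch_out, kernel_size=3, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = nn.functional.pad(x, (self.pad, 0))  # pad the past only
        return self.relu(self.conv(x))

class WaterTCN(nn.Module):
    def __init__(self, n_classes=6):        # e.g. shower, sink, toilet, ...
        super().__init__()
        self.blocks = nn.Sequential(
            TCNBlock(1, 32, 1), TCNBlock(32, 32, 2),
            TCNBlock(32, 32, 4), TCNBlock(32, 32, 8),
        )
        self.head = nn.Conv1d(32, n_classes, kernel_size=1)

    def forward(self, flow):                # flow: (batch, 1, time)
        return self.head(self.blocks(flow)) # logits: (batch, classes, time)

logits = WaterTCN()(torch.randn(2, 1, 512))
print(logits.shape)                          # torch.Size([2, 6, 512])
```

Growing dilations widen the receptive field exponentially with depth, which is what lets such a network pick up dependencies within and between water-consuming activities in an uninterrupted trace.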
