
Quantifying Uncertainty in Flood Modeling Using Bayesian Approaches

Tao Huang (15353755) 27 April 2023 (has links)
<p>Floods are among the most common and devastating natural disasters for human society worldwide, and flood risk has been increasing in recent years due to more frequent extreme climatic events. In the United States, one of the key resources providing flood risk information to the public is the Flood Insurance Rate Map (FIRM) administered by the Federal Emergency Management Agency (FEMA), and digitized FIRMs now cover over 90% of the United States population. However, the uncertainty in the modeling process behind FIRMs is rarely investigated. In this study, we use two widely used multi-model methods, Bayesian Model Averaging (BMA) and generalized likelihood uncertainty estimation (GLUE), to evaluate and reduce the impacts of various uncertainties, with respect to modeling settings, evaluation metrics, and algorithm parameters, on the flood modeling of FIRMs. Accordingly, the three objectives of this study are to: (1) quantify the uncertainty in FEMA FIRMs by using BMA and hierarchical BMA approaches; (2) investigate the inherent limitations and uncertainty in existing evaluation metrics of flood models; and (3) estimate the BMA parameters (weights and variances) using the Metropolis-Hastings (M-H) algorithm with multiple Markov chain Monte Carlo (MCMC) chains.</p> <p><br></p> <p>In the first objective, both the BMA and hierarchical BMA (HBMA) approaches are employed to quantify the uncertainty within the detailed FEMA models of the Deep River and the Saint Marys River in the State of Indiana, based on water stage predictions from 150 HEC-RAS 1D unsteady flow model configurations that incorporate four uncertainty sources: bridges, channel roughness, floodplain roughness, and upstream flow input.
Given the ensemble predictions and the observed water stage data in the training period, the BMA weight and variance for each model member are obtained, and the BMA predictions are then validated against observed data from a later period. The results indicate that the BMA prediction is more robust than both the original FEMA model and the ensemble mean. Furthermore, the HBMA framework explicitly shows the propagation of the various uncertainty sources, and both channel roughness and upstream flow input have a larger impact on prediction variance than bridges. It thus gives modelers insight into the relative impact of individual uncertainty sources in the flood modeling process. The results show that probabilistic flood maps developed from the BMA analysis could provide more reliable predictions than the deterministic FIRMs.</p> <p><br></p> <p>In the second objective, the inherent limitations and sampling uncertainty in several commonly used model evaluation metrics, namely the Nash-Sutcliffe efficiency (<em>NSE</em>), the Kling-Gupta efficiency (<em>KGE</em>), and the coefficient of determination (<em>R</em><sup>2</sup>), are investigated systematically, so that the overall performance of flood models can be evaluated comprehensively. These evaluation metrics are then applied to the 1D HEC-RAS models of six reaches located in the states of Indiana and Texas to quantify the uncertainty associated with channel roughness and upstream flow input. The results show that model performances based on the uniform and normal priors are comparable. The distributions of these evaluation metrics differ significantly for the flood model under different high-flow scenarios, further indicating that the metrics should be treated as random statistical variables given both aleatory and epistemic uncertainties in the modeling process.
Additionally, white-noise error in the observations has the least impact on the evaluation metrics.</p> <p><br></p> <p>In the third objective, the Metropolis-Hastings (M-H) algorithm, one of the most widely used MCMC algorithms, is proposed to estimate the BMA parameters (weights and variances), since the reliability of the BMA parameters determines the accuracy of BMA predictions. However, the uncertainty in BMA parameters, which are usually treated as fixed values obtained from the Expectation-Maximization (EM) algorithm, has not been adequately investigated in BMA-related applications over the past few decades. Both numerical experiments and two practical 1D HEC-RAS models in Indiana and Texas are employed to examine the applicability of the M-H algorithm with multiple independent Markov chains. The results show that the BMA weights estimated by the two algorithms are comparable, while the BMA variances obtained from the M-H MCMC algorithm are closer to the given variances in the numerical experiment. Overall, the MCMC approach with multiple chains provides more information on the uncertainty of the BMA parameters, and its water stage predictions outperform those of the default EM algorithm in terms of multiple evaluation metrics as well as algorithm flexibility.</p>
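As a rough illustration of the BMA combination step described above, the sketch below computes the BMA predictive mean and variance from a hypothetical three-member ensemble; the weights and variances are invented for illustration and would, in the study, come from the EM or M-H MCMC estimation:

```python
import numpy as np

# Hypothetical ensemble of 3 model members predicting water stage (m) at
# 3 time steps; weights and variances stand in for values an EM or
# M-H MCMC estimation would produce from training data.
preds = np.array([[2.1, 2.4, 2.8],
                  [2.0, 2.5, 2.9],
                  [2.3, 2.6, 3.1]])      # rows: members, cols: time steps
weights = np.array([0.5, 0.3, 0.2])     # BMA weights, sum to 1
variances = np.array([0.04, 0.09, 0.16])  # per-member predictive variances

def bma_mean(preds, weights):
    """BMA predictive mean: weighted average of member predictions."""
    return weights @ preds

def bma_variance(preds, weights, variances):
    """Predictive variance: between-model spread + within-model variance."""
    mean = bma_mean(preds, weights)
    between = weights @ (preds - mean) ** 2
    within = weights @ variances
    return between + within

print(bma_mean(preds, weights))
print(bma_variance(preds, weights, variances))
```

The predictive variance splits into a between-model term (ensemble spread) and a within-model term, which is why a BMA interval is wider than any single member's.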

IMPROVING NUTRIENT TRANSPORT SIMULATION IN SWAT BY DEVELOPING A REACH-SCALE WATER QUALITY MODEL

Femeena Pandara Valappil (6703574) 02 August 2019 (has links)
<p>Ecohydrological models are extensively used to evaluate land use, land management, and climate change impacts on hydrology and in-stream water quality. The scale at which these models operate influences the complexity of the processes they incorporate. For instance, a large-scale hydrological model such as the Soil and Water Assessment Tool (SWAT), which runs on a daily time step, may ignore sub-daily in-stream processes. The key processes affecting in-stream solute transport (advection, dispersion, and transient storage or dead-zone exchange) can have a considerable effect on predicted stream solute concentrations, especially in localized studies. To represent realistic field conditions, it is therefore necessary to modify the in-stream water quality algorithms of SWAT to include these additional processes. Existing reach-scale solute transport models such as OTIS (One-dimensional Transport with Inflow and Storage) consider these processes but exclude the actual biochemical reactions occurring in the stream, modeling nutrient uptake with an empirical first-order decay equation. Conversely, comprehensive stream water quality models such as QUAL2E (the Enhanced Stream Water Quality Model) incorporate the biochemical reactions but neglect the transient storage exchange component, which is crucial in predicting the peak and timing of solute concentrations. In this study, these two popular models (OTIS and QUAL2E) are merged to integrate all essential solute transport processes into a single in-stream water quality model, the ‘Enhanced OTIS model’. A generalized model with an improved graphical user interface was developed on the MATLAB platform and performed reasonably well for both experimental data and previously published data (R<sup>2</sup>=0.76).
To incorporate this model into large-scale hydrological models, an alternative was needed for estimating the transient storage parameters, which are otherwise derived through calibration against experimental tracer tests. Through a meta-analysis, simple regression models were therefore developed for the dispersion coefficient (D), storage zone area (A<sub>s</sub>), and storage exchange coefficient (α) by relating them to easily obtainable hydraulic characteristics such as discharge, velocity, flow width, and flow depth. For experimental data from two study sites, breakthrough curves and storage potential of conservative tracers were predicted with good accuracy (R<sup>2</sup>>0.5) using the new regression equations. These equations are hence recommended as a tool for obtaining preliminary, approximate estimates of D, A<sub>s</sub>, and α when reach-specific calibration is unfeasible. </p> <p> </p> <p>The existing water quality module in SWAT was replaced with the newly developed ‘Enhanced OTIS model’ along with the regression equations for the storage parameters. Water quality predictions using the modified SWAT model (Mir-SWAT) for a study catchment in Germany showed that the improvements in process representation yield better results for dissolved oxygen (DO), phosphate, and chlorophyll-a. While the existing model simulated extremely low DO values, Mir-SWAT improved these values, with a 0.11 increase in the R<sup>2</sup> value between modeled and measured values. No major improvement was observed for nitrate loads, but modeled phosphate peak loads were brought much closer to measured values with the Mir-SWAT model. A qualitative analysis of Chl-<i>a</i> concentrations also indicated that average and maximum monthly Chl-<i>a</i> values were better predicted with Mir-SWAT than with SWAT, especially for the winter months.
The newly developed in-stream water quality model is expected to serve as a standalone model or be coupled with larger models to improve their representation of solute transport processes and nutrient uptake. The improvements made to SWAT will increase model confidence and widen its applicability to short-term and localized studies that require an understanding of fine-scale solute transport dynamics. </p>
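To make the transient storage (dead-zone) exchange concrete, here is a minimal explicit finite-difference sketch of OTIS-style transport: advection plus dispersion in the main channel with first-order exchange to a storage zone. All parameter values (velocity, dispersion, exchange rate, areas) are hypothetical, and the scheme is purely illustrative, not the thesis's MATLAB implementation:

```python
import numpy as np

# Illustrative OTIS-style transport on a 1 km reach:
#   dC/dt  = -u dC/dx + D d2C/dx2 + alpha (Cs - C)
#   dCs/dt = alpha (A/As) (C - Cs)
# All parameters are hypothetical; time stepping is simple explicit Euler.
nx, dx, dt = 100, 10.0, 1.0          # cells, cell size (m), time step (s)
u, D = 0.3, 1.0                      # velocity (m/s), dispersion (m2/s)
alpha, A, As = 1e-3, 5.0, 1.0        # exchange rate (1/s), areas (m2)

c = np.zeros(nx); c[0:5] = 10.0      # tracer pulse near upstream end (mg/L)
cs = np.zeros(nx)                    # storage-zone concentration

for _ in range(200):
    adv = -u * np.gradient(c, dx)                    # advection
    disp = D * np.gradient(np.gradient(c, dx), dx)   # dispersion
    exch = alpha * (cs - c)                          # channel <-> storage
    cs = cs + dt * alpha * (A / As) * (c - cs)
    c = c + dt * (adv + disp + exch)
    c[0] = 0.0                                       # clean inflow boundary

print(c.max(), cs.max())
```

The storage zone lags behind the channel and then releases tracer back, which is what flattens and delays the breakthrough-curve tail relative to pure advection-dispersion.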

Hydrodynamic modelling of Delta Marsh and simplified methods of discharge estimation for discontinuous inland coastal wetlands

Aminian, Parsa 09 January 2016 (has links)
This thesis details the hydrodynamic research conducted at Delta Marsh as part of the Restoring the Tradition marsh rehabilitation project. Research has indicated that the hydraulic and hydrologic controls on the marsh can have considerable impacts on its ecological function, but the impacts of these controls had not previously been studied. Field hydrography and two-dimensional numerical modelling (using MIKE 21) provided insight into many aspects of the physical behaviour of Delta Marsh. Eighty-five percent of the inflow to Delta Marsh from Lake Manitoba passes through Clandeboye Channel, and these discharge signals propagate as far west as Cadham Bay. Inflow to the marsh disperses quickly and accounts for a small fraction of the water that exits the marsh during subsequent outflow. Thus, Portage Diversion water that enters the marsh through the lake can remain there even if there is a net loss in marsh volume over the season. Wind friction across Lake Manitoba has the greatest impact on short-term fluctuations in marsh volume and on the composition of marsh water, followed by the Portage Diversion and the natural inflows to Lake Manitoba. Expansions to flood diversion infrastructure will considerably impact the composition of Delta Marsh waters. Three methods of wetland discharge estimation were developed and tested. The most promising was the regressed slope Manning method (RSMM), which estimates two-directional channel discharge as a function of the water surface elevations at both ends of a channel. When used in conjunction with the velocity index method, the RSMM can multiply the amount of reliable discharge data collected per research dollar. Thanks to its simple formulation, the RSMM is likely applicable outside of wetland settings as well. / February 2016
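A minimal sketch of the slope-Manning idea behind the RSMM: discharge follows Manning's equation with the water-surface slope taken from the elevations at the two channel ends, and the sign follows the head difference. The channel geometry and roughness below are hypothetical, and the actual RSMM regresses this relationship against field data rather than applying Manning's equation directly:

```python
import math

def slope_manning_q(wse_up, wse_dn, length, n, area, hyd_radius):
    """Two-directional channel discharge (m3/s) from water-surface
    elevations at both ends of a channel, via Manning's equation with
    the water-surface slope. Flow is signed toward the lower end."""
    s = (wse_up - wse_dn) / length           # water-surface slope
    q = (1.0 / n) * area * hyd_radius ** (2.0 / 3.0) * math.sqrt(abs(s))
    return math.copysign(q, s) if s != 0 else 0.0

# Hypothetical channel: 2000 m long, n = 0.03, A = 40 m2, R = 1.2 m
print(slope_manning_q(247.95, 247.80, 2000.0, 0.03, 40.0, 1.2))
```

Because the sign comes from the head difference, the same formula handles reverse (lake-to-marsh) flow without any special casing.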

Integration of a Sedimentation Module to a Hydrologic Model and its Application to a Mercury TMDL Analysis

Marrero, Lilian 03 July 2013 (has links)
This research is part of continued efforts to correlate the hydrology of East Fork Poplar Creek (EFPC) and Bear Creek (BC) with the long-term distribution of mercury within the overland, subsurface, and river sub-domains. The main objective of this study was to add a sedimentation module (ECO Lab) capable of simulating the reactive transport and exchange mechanisms of mercury within sediments and porewater throughout the watershed. The enhanced model was then applied to a Total Maximum Daily Load (TMDL) mercury analysis for EFPC. That application used historical precipitation, groundwater levels, river discharges, and mercury concentration data retrieved from government databases and input to the model. The model was executed to predict flow discharges, total mercury concentrations, and flow duration and mercury mass rate curves at key monitoring stations under various hydrological and environmental conditions and scenarios, while reducing computational time. The computational results provided insight into the relationship between discharges and mercury mass rate curves at various stations throughout EFPC, which is important for understanding and supporting the management of mercury contamination and remediation efforts within EFPC.
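A flow duration curve, one of the products mentioned above, can be sketched in a few lines: rank the discharges in descending order and assign each a Weibull plotting-position exceedance probability. The discharge values here are hypothetical, not EFPC data; a mercury mass rate curve would follow the same recipe applied to discharge × concentration:

```python
import numpy as np

# Minimal flow-duration-curve sketch with hypothetical daily discharges.
q = np.array([1.2, 3.4, 0.8, 5.1, 2.2, 0.5, 4.0, 1.9, 2.8, 0.9])  # m3/s
q_sorted = np.sort(q)[::-1]              # descending: highest flow first
rank = np.arange(1, q.size + 1)
exceed = rank / (q.size + 1)             # Weibull position: P(Q >= q_sorted)
for p, v in zip(exceed, q_sorted):
    print(f"{100 * p:5.1f}%  {v:4.1f} m3/s")
```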

Uncovering the Efficiency Limits to Obtaining Water: On Earth and Beyond

Akshay K Rao (12456060) 26 April 2022 (has links)
<p>The challenges of a changing climate and humanity's desire to explore extraterrestrial environments both necessitate efficient methods of obtaining freshwater. To accommodate next-generation water technology, there is a need to understand and define the energy efficiency of harvesting unconventional water sources over a broad range of environments. Exergy analysis provides a common description of efficiency that can be used to evaluate technologies and water sources for energy feasibility. This work uses robust thermodynamic theory coupled with atmospheric and planetary data to define water capture efficiency, explore its variation across climate conditions, and identify technological niches and development needs. </p> <p><br></p> <p>We find that desalinating saline liquid brines, even highly saline ones, could be the most energetically favorable option for obtaining water beyond Earth. The energy required to access water vapor may be four to ten times that required to access ice deposits; however, vapor harvesting offers the capacity for decentralized systems. For atmospheric water vapor harvesting on Earth, we find that the thermodynamic minimum ranges from zero (RH ≥ 100%) to upwards of 250 times (RH < 10%) the minimum energy requirement of seawater desalination. Sorbents, modelled as metal-organic frameworks (MOFs), have a particular niche in arid and semi-arid regions (20-30% RH). Membrane systems are best at low relative humidity, and their region of applicability is strongly affected by the vacuum pumping efficiency. Dew harvesting is best at higher humidity, and fog harvesting is optimal under super-saturated conditions. Component inefficiencies (e.g., pumps and chillers) are the largest barrier to increasing process-level efficiency and strongly affect the regions where each technology is optimal.
The analysis elucidates a fundamental basis for comparing the energy efficiency of water systems for outer space applications and provides the first thermodynamics-based comparison of classes of atmospheric water harvesting technologies on Earth.</p>
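The RH-dependent thermodynamic minimum can be illustrated with a simplified ideal-mixture estimate: the least work per kilogram of water condensed from air at relative humidity RH scales as (RT/M)ln(1/RH), zero at saturation and diverging as RH falls. This is only a sketch of the scaling, not the thesis's full exergy analysis (which accounts for additional effects, hence the larger multiples quoted above), and the seawater desalination benchmark below is an assumed round value (~1.06 kWh/m³):

```python
import math

R = 8.314              # J/(mol K), universal gas constant
M_W = 0.018            # kg/mol, molar mass of water
SEAWATER_MIN = 3.8e3   # J/kg, assumed least work of seawater desalination

def min_work_vapor(rh, temp_k=298.15):
    """Ideal least work (J per kg of water) to condense vapor from air
    at relative humidity rh: zero at saturation, diverging as rh -> 0."""
    return (R * temp_k / M_W) * math.log(1.0 / rh)

for rh in (1.0, 0.5, 0.1):
    w = min_work_vapor(rh)
    print(f"RH={rh:.0%}: {w / 1e3:7.1f} kJ/kg  ({w / SEAWATER_MIN:5.1f}x desal)")
```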

MUNICIPAL LANDFILL LEACHATE INORGANIC ANALYSIS FOCUSING ON DETECTING VALUABLE METALS

Tristin Michael Pratt (16020944) 19 June 2023 (has links)
<p>Pumped municipal solid waste landfill leachate samples (7 cells from a site in Nebraska, 4 cells from a site in Illinois) were analyzed for 62 elements using Inductively Coupled Plasma-Optical Emission Spectroscopy (ICP-OES). A procedure for complete dissolution of the solids in the leachate was developed. Complete dissolution aims to reduce material loss by eliminating the need for filtration and frees materials entrapped in undissolved solids. The procedure uses centrifugation to separate solid-phase matter from the raw sample to maximize the effect of the acid, and uses chloric, nitric, fluoric, and boric acids with microwave digestion to achieve full dissolution. The dissolved solid fraction precipitates yttrium fluoride and some other metals whose concentrations exceed solubility limits; the precipitate is recovered and redissolved for analysis. Platinum-group, (post-)transition, and lanthanide metals were positively detected in the landfill leachate. Individual metals from these groups were detected in the aqueous phase, the solid phase, or both; solid-phase concentrations are usually at least one order of magnitude greater than liquid-phase concentrations, except where a metal was detected in the liquid phase but not in the solid phase. Noteworthy results include the following. In the solid phase, Al was quantified from 10 to 10<sup>3</sup> 𝜇g/g of solid mass; Sc, Cr, Ti, and Cu from 1 to 50 𝜇g/g of solid mass; and Zr and Eu from 0.5 to ~8 𝜇g/g of solid mass. In the liquid phase, Ti, Cr, Li, Cu, As, and Zr were quantified mostly between 10<sup>-2</sup> and 10<sup>-1</sup> 𝜇g/g of liquid mass, occasionally falling outside those bounds, and Al, Sc, Pt, Co, and V mostly from 10<sup>-3</sup> to 10<sup>-2</sup> 𝜇g/g of liquid mass. Solid-phase metals were positively detected with a minimum limit of detection (LOD) usually around 10<sup>-1</sup> 𝜇g/g of solid mass, including In, Ge, Pb, Ru, Sb, Ta, Hf, Bi, Yb, La, Ti, Pd, Lu, Dy, and Tb. Liquid-phase metals were positively detected with a minimum LOD usually around 10<sup>-5</sup> 𝜇g/g of liquid mass, including Tm, Ge, Au, Pb, Sb, Ta, Hf, Sm, Nb, Ho, Ga, Bi, Yb, Pd, Er, and Cd.</p>

<b>Machine Learning and Remote Sensing Applications for Lake Michigan Coastal Processes</b>

Hazem Usama Abdelhady (18309886) 04 April 2024 (has links)
<p dir="ltr">The recent surge in water levels within the Great Lakes has laid bare the vulnerability of the surrounding coastal areas. Over the past few years, communities along the Great Lakes coast have struggled with widespread coastal transformations, witnessing shoreline retreat, alterations in habitat, significant recession of bluffs and dunes, infrastructure and property damage, coastal flooding, and the failure of coastal protection structures. Unlike ocean coasts, the Great Lakes coastal regions experience a unique confluence of large interannual water level fluctuations, coastal storms, and ice cover dynamics, which complicates ongoing coastal management efforts. To address this multifaceted challenge, the interplay among all these factors and their impact on coastal change must be understood and applied to improve the resilience of Great Lakes coastal areas.</p><p><br></p><p dir="ltr">In this dissertation, several steps were taken to improve knowledge of coastal processes in the Great Lakes, spanning from the initial use of remote sensing to quantify coastal changes, to modeling and predicting shoreline changes, to leveraging machine learning techniques to simulate and forecast influential factors such as waves and ice cover. First, a fully automated shoreline detection algorithm was developed to quantify shoreline changes in Lake Michigan, detect the most vulnerable areas, and determine the main factors responsible for the spatial variability of those changes. Additionally, a reduced-complexity model was designed to simulate shoreline changes in Lake Michigan by considering both waves and water level fluctuations, which significantly improved shoreline change modeling and forecasting for Lake Michigan.
Furthermore, new deep learning frameworks based on the Convolutional Long Short-Term Memory (ConvLSTM) and Convolutional Neural Network (CNN) architectures were introduced to model and extend the current records of wave height and ice cover, adding 70% and 50% more data to the existing wave and ice time series, respectively. Finally, the extended wave and ice time series were used to study long-term trends and the correlation between interannual water level and wave changes, revealing a statistically significant decreasing trend in ice cover over Lake Michigan of 0.6 days/year and an increasing trend in interannual wave variability in the Chicago area.</p>
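The long-term trend estimate quoted above (days of ice cover per year) reduces to an ordinary least-squares slope of annual ice-cover duration against year. The sketch below runs that regression on synthetic data generated around an assumed -0.6 day/year trend, purely to show the computation, not to reproduce the Lake Michigan record:

```python
import numpy as np

# Synthetic annual ice-cover durations: noise around an assumed
# -0.6 day/year trend (illustrative, not observed Lake Michigan data).
rng = np.random.default_rng(0)
years = np.arange(1973, 2023)
ice_days = 60.0 - 0.6 * (years - years[0]) + rng.normal(0, 3, years.size)

# OLS linear fit: slope is the trend in days/year.
slope, intercept = np.polyfit(years, ice_days, 1)
print(f"trend: {slope:.2f} days/year")
```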

Stagnation Impacts on Building Drinking Water Safety: The Pandemic and Microplastics

Kyungyeon Ra (13164972) 28 July 2022 (has links)
<p>The pandemic prompted buildings globally to transition to low or no occupancy as part of social distancing to reduce the spread of Coronavirus Disease (COVID-19). This transition prompted concerns about the chemical and microbiological safety of building drinking water due to stagnation. At the same time, microplastic (MP) pollution received increasing global attention due to MPs' presence in the environment and recent discoveries within water distribution systems and at building faucets. MP sources have primarily been attributed to the drinking water sources themselves, but plastic plumbing components, though less discussed, are known to deteriorate into fragments and smaller pieces that reach faucets. The literature at the time of this work was sparse on stagnation impacts on drinking water quality and on the fate of MPs in plumbing. In particular, health officials issued many differing guidance documents telling building owners to do different things, and no standard guideline was available to reduce the health risks caused by stagnant building drinking water. This dissertation examined three different types of buildings under closed to low water use conditions and conducted bench-scale testing to explore the phenomena observed in the field. Chapter 1 describes water quality impacts in a 7-year-old ‘green’ middle school as it transitioned from summer (low water use) to fall (normal use). Field experiments revealed that more than half of first-draw water samples exceeded the copper (acute) health-based action limit during low water use. Copper concentrations within the school increased with distance from the building entry point. Chapters 2 and 3 report on chemical and microbiological water quality in university buildings (Chapter 2) and an elementary school (Chapter 3).
Chapters 2 and 3 revealed that stagnation negatively impacted chemical and microbiological building water quality (cold and hot), but flushing was effective at remediating high concentrations of heavy metals and <em>Legionella pneumophila</em> at most locations. In large buildings with more complicated plumbing systems, however, flushing did not always improve water quality. Water quality was also found to deteriorate again even after the whole building water system had been flushed. Understanding one's own building system is therefore important for maintaining water quality, as each building's complexity requires specific knowledge and solutions. Chapter 4 describes current knowledge of MPs in drinking water and presents bench-scale experiments on MP fate and transport in building plumbing. This work identified that, while MPs have been reported at building faucets, missing sampling details in the available studies often made results incomparable across studies. Based on this review, it was found that MPs have likely been reaching building faucets for decades but received no characterization until recently. Bench-scale testing with two MPs of different densities in copper and crosslinked polyethylene (PEX) pipes revealed that particle size influenced the amount of MPs retained in a pipe. Research needs were identified to determine the fundamental factors that control MP fate in plumbing and their presence at building faucets. </p>

Integrated Flood Modeling for Improved Understanding of River-Floodplain Hydrodynamics: Moving beyond Traditional Flood Mapping

Siddharth Saksena (7026707) 15 August 2019 (has links)
<div>With increasing focus on large-scale planning and allocation of resources for protection against future flood risk, it is necessary to analyze and remedy the deficiencies of the conventional flood modeling approach through a better understanding of the interactions between river hydrodynamics and subsurface processes. Recent studies have shown that flood inundation modeling and mapping can be improved using physically-based integrated models that incorporate observable data through assimilation and simulate hydrologic fluxes using the fundamental laws of conservation of mass at multiple spatiotemporal scales. However, despite the significance of integrated modeling in hydrology, it has received relatively little attention within the context of flood hazard. The overall aim of this dissertation is to study the heterogeneity in the complex physical processes that govern the watershed response during flooding and to incorporate these effects in integrated models across large scales for improved flood risk estimation. Specifically, this dissertation addresses the following questions: (1) Can incorporating physical processes using integrated models improve the characterization of antecedent conditions and increase the accuracy of the simulated watershed response to flood events? (2) What factors need to be considered for characterizing scale-dependent physical processes in integrated models across large watersheds? (3) How can computational efficiency and process representation be improved for modeling flood events at large scales?
(4) Can the applicability of integrated models be improved for capturing the hydrodynamics of unprecedented flood events in complex urban systems?</div><div><br></div><div>To understand the combined effect of surface-subsurface hydrology and hydrodynamics on streamflow generation and subsequent inundation during floods, the first objective incorporates an integrated surface water-groundwater (SW-GW) modeling approach for simulating flood conditions. The results suggest that an integrated model provides a more realistic simulation of flood hydrodynamics for different antecedent soil conditions. Overall, the findings suggest that the current practice of simulating floods, which assumes an impervious surface, may not provide realistic estimates of flood inundation, and that an integrated approach incorporating all the hydrologic and hydraulic processes in the river system should be adopted.</div><div><br></div><div>The second objective focuses on better characterizing scale-dependent processes in integrated models by comparing two model structures across two spatial scales and analyzing the changes in flood responses. The results indicate that since the characteristic length scales of GW processes are larger than those of SW processes, the intrinsic scale (or resolution) of GW in integrated models should be coarser than that of SW. The results also highlight the degradation of streamflow prediction using a single channel roughness when the stream length scales are increased. A channel roughness distributed along the stream length improves the modeled basin response.
Further, the results highlight the ability of a dimensionless parameter η<sub>1</sub>, representing the ratio of the reach length in the study region to the maximum length of the single stream draining at that point, to identify which streams may require a distributed channel roughness.</div><div><br></div><div>The third objective presents a hybrid flood modeling approach that combines the advantages of loosely-coupled (‘downward’) and integrated (‘upward’) modeling by coupling empirically-based and physically-based approaches within a watershed. The computational efficiency and accuracy of the proposed hybrid approach are tested across three watersheds in Indiana using multiple flood events and comparing the results with fully-integrated models. Overall, the hybrid approach performs comparably to a fully-integrated approach but at much higher computational efficiency, while at the same time providing objective-oriented flexibility to the modeler.</div><div><br></div><div>The fourth objective presents a physically-based but computationally-efficient approach for modeling unprecedented flood events at large scales in complex urban systems. Applying the proposed approach results in accurate simulation of large-scale flood hydrodynamics, demonstrated using Hurricane Harvey as the test case. The results also suggest that the ability to control mesh development using the proposed flexible model structure, so as to incorporate important physical and hydraulic features, is as important as the integration of distributed hydrology and hydrodynamics.</div>

A New Mathematical Framework for Regional Frequency Analysis of Floods

Basu, Bidroha January 2015 (has links) (PDF)
Reliable estimates of design flood quantiles are often necessary at sparsely gauged or ungauged target locations in river basins for various applications in water resources engineering. Developing effective methods for this task has been a long-standing challenge in hydrology for over five decades. Hydrologists often consider regional flood frequency analysis (RFFA) approaches that involve (i) using a regionalization approach to delineate a homogeneous group of watersheds resembling the watershed of the target location, and (ii) using a regional frequency analysis (RFA) approach to transfer peak-flow-related information from gauged watersheds in the group to the target location, with that information serving as the basis for estimating flood quantile(s) for the target site. The work presented in this thesis addresses various shortcomings and issues associated with widely used regionalization and RFA approaches. Regionalization approaches often determine regions by grouping data points in a multidimensional space of attributes depicting watershed hydrology, climatology, topography, land use/land cover, and soils. There are no universally established procedures for identifying appropriate attributes, and modelers use subjective procedures to choose a set of attributes considered common for the entire study area. This practice may not be meaningful, as different sets of attributes could influence extreme flow generation mechanisms in watersheds located in different parts of the study area. Another issue is that practitioners usually give equal importance (weight) to all attributes in regionalization, though some attributes could be more important than others in influencing peak flows. To address this issue, a two-stage clustering approach is developed in the thesis. It facilitates identification of appropriate attributes and their associated weights for use in regionalization of watersheds in the context of flood frequency analysis.
The effectiveness of the approach is demonstrated through a case study on Indiana watersheds. Conventional regionalization approaches can prove effective for delineating regions when data points (depicting watersheds) in the watershed attribute space can be segregated into disjoint groups using straight lines or linear planes. They prove ineffective when (i) the data points are not linearly separable, (ii) the number of attributes and watersheds is large, (iii) there are outliers in the attribute space, or (iv) most watersheds resemble each other in terms of their attributes. In real-world settings, most watersheds resemble each other, regions may not always be separable by straight lines or linear planes, and dealing with outliers and high-dimensional data is inevitable in regionalization. To address this, a fuzzy support vector clustering approach is proposed in the thesis, and its advantage over the commonly used region-of-influence approach and various cluster-analysis-based regionalization methods is demonstrated through a case study on Indiana watersheds. For regional frequency analysis (RFA), the index-flood approach has been widely used over the past five decades. The conventional index-flood (CIF) approach assumes that the values of the scale and shape parameters of the frequency distribution are identical across all sites in a homogeneous region. In practice, this assumption may not be valid even if a region is statistically homogeneous. The logarithmic index-flood (LIF) and population index-flood (PIF) methodologies were proposed to address the problem, but even these make unrealistic assumptions. The PIF method assumes that the ratio of the scale to the location parameter is constant for all sites in a region.
The LIF method, on the other hand, assumes that an appropriate frequency distribution for peak flows can be found in log space, but in reality the distribution of peak flows in log space may not be close to any known theoretical distribution. To address this issue, a new mathematical approach to RFA is proposed in the L-moment and LH-moment frameworks that overcomes the shortcomings of the CIF approach and its related LIF and PIF methods, which make various assumptions but cannot ensure their validity in RFA. For use with the proposed approach, transformation mechanisms are proposed for five commonly used three-parameter frequency distributions (GLO, GEV, GPA, GNO, and PE3) to map the random variable being analyzed from the original space to a dimensionless space in which the distribution of the random variable does not change, and in which deviations of the regional estimates of all the distribution's parameters (location, scale, shape) from their population values, as well as from at-site estimates, are minimal. The proposed approach ensures the validity of all the assumptions of the CIF approach in the dimensionless space, which makes it perform better than the CIF approach and the related LIF and PIF methods. Monte Carlo simulation experiments revealed that the proposed approach is effective even when the form of the regional frequency distribution is mis-specified. A case study on watersheds in the conterminous United States indicated that the proposed approach outperforms index-flood-based methods in real-world scenarios.

In recent decades, the fuzzy clustering approach has gained recognition for regionalization of watersheds, as it can account for partial resemblance of several watersheds in the watershed attribute space. In working with this approach, forming regions and estimating quantiles requires discerning information from the fuzzy-membership matrix, but there are currently no effective procedures for doing so.
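The L-moment framework mentioned above rests on sample L-moments, which are computed from ordered data via probability-weighted moments. The sketch below shows the standard unbiased estimator of the first two sample L-moments (the ratio l2/l1 is the L-CV, a common homogeneity statistic); the peak-flow values are illustrative.

```python
def sample_l_moments(data):
    """First two sample L-moments (l1, l2) via probability-weighted moments b0, b1."""
    x = sorted(data)                      # order statistics, ascending
    n = len(x)
    b0 = sum(x) / n                       # = sample mean
    # b1 = (1/n) * sum over ranks r=2..n of ((r-1)/(n-1)) * x_(r)
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    l1 = b0
    l2 = 2 * b1 - b0
    return l1, l2

peaks = [310.0, 285.0, 402.0, 198.0, 355.0]   # illustrative annual peak flows
l1, l2 = sample_l_moments(peaks)
l_cv = l2 / l1                                 # dimensionless dispersion measure
```

L-moment ratios such as the L-CV are far less sensitive to outliers than conventional moment ratios, which is why L-moment and LH-moment frameworks are preferred in RFA.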
Practitioners often defuzzify the matrix to form disjoint clusters (regions) and use them as the basis for quantile estimation. This defuzzification approach (DFA) discards the information on partial resemblance of watersheds, and because that information cannot be used in quantile estimation, the estimates can have significant error. To avert the loss of information, a threshold strategy (TS) was considered in some prior studies, but it results in under-prediction of quantiles. To address this, a mathematical approach is proposed in the thesis that discerns information from the fuzzy-membership matrix derived through fuzzy clustering for effective quantile estimation. Its effectiveness in estimating flood quantiles relative to DFA and TS is demonstrated through Monte Carlo simulation experiments and a case study on the mid-Atlantic water resources region, USA.

Another issue with the index-flood approach and its related RFA methodologies is that they assume a linear relationship between each of the statistical raw moments (SMs) of peak flows and watershed attributes in a region. Those relationships form the basis for estimating the SMs at the target ungauged or sparsely gauged site, which are then used to estimate the parameters of the flood frequency distribution and the quantiles corresponding to target return periods. In reality, non-linear relationships could exist between SMs and watershed attributes. To address this, simple-scaling and multi-scaling methodologies have been proposed in the literature, which assume that a scaling (power-law) relationship exists between each of the SMs of peak flows at sites in a region and the drainage areas of the corresponding watersheds. In real-world scenarios, drainage area alone may not completely describe a watershed's flood response. Therefore, flood quantile estimates based on such scaling relationships can have large errors.
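The information lost by defuzzification can be made concrete with a target site's membership vector over clusters: the DFA keeps only the argmax cluster, whereas a membership-weighted combination retains partial resemblance. The weighted combination below is one simple way to use the full vector, not necessarily the thesis's estimator; all numbers are illustrative.

```python
# A target site's fuzzy memberships over three candidate regions, and each
# region's 100-year growth factor (illustrative values only).
memberships = {"cluster_1": 0.55, "cluster_2": 0.40, "cluster_3": 0.05}
growth_100 = {"cluster_1": 2.1, "cluster_2": 2.6, "cluster_3": 1.8}

# Defuzzification (DFA): keep only the highest-membership cluster.
hard = growth_100[max(memberships, key=memberships.get)]

# Membership-weighted pooling: every cluster contributes in proportion to
# the site's partial resemblance to it.
soft = sum(memberships[c] * growth_100[c] for c in memberships)
```

Here the site resembles cluster_2 almost as much as cluster_1, yet the DFA ignores cluster_2 entirely; the weighted estimate shifts accordingly, which is the kind of discarded information the thesis's approach aims to recover.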
To address this, a recursive multi-scaling (RMS) approach is proposed that constructs a scaling (power-law) relationship between each of the SMs of peak flows and a set of region-specific watershed attributes chosen in a recursive manner. The approach is shown to outperform the index-flood-based region-of-influence approach, simple- and multi-scaling approaches, and a multiple linear regression method in a leave-one-out cross-validation experiment on watersheds in and around the State of Indiana, USA.

Conventional approaches to flood frequency analysis (FFA) are based on the assumption that peak flows at the target site represent a sample of independent and identically distributed realizations drawn from a stationary, homogeneous stochastic process. This assumption is not valid when flows are affected by changes in climate and/or land use/land cover, or by regulation of rivers through dams, reservoirs, and other artificial diversions or storages. Where evidence of non-stationarity in peak flows is strong, it is not appropriate to use quantile estimates obtained from conventional FFA approaches for hydrologic design and other applications. Downscaling is one option for arriving at future projections of flows at target sites in a river basin for use in FFA. Conventional downscaling methods attempt to downscale General Circulation Model (GCM) simulated climate variables to streamflow at target sites. In real-world scenarios, a correlation structure exists among streamflow records at sites in a study area. An effective downscaling model must be parsimonious, and it should preserve the correlation structure in the downscaled flows to a reasonable extent, though exact reproduction of the structure may not be necessary in a climate change (non-stationary) scenario. A few recent studies attempted to address this issue based on the assumption of spatiotemporal covariance stationarity.
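The scaling relationships underlying the simple-scaling and RMS methods take the form SM = c · A^θ, which is fit by ordinary least squares in log-log space. The sketch below fits such a power law for the first moment against drainage area alone (the simple-scaling case; RMS would recursively bring in further region-specific attributes). The site data are hypothetical.

```python
import math

def fit_power_law(areas, moments):
    """Least-squares fit of moment = c * area**theta, done in log-log space."""
    lx = [math.log(a) for a in areas]
    ly = [math.log(m) for m in moments]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    theta = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
             / sum((x - mx) ** 2 for x in lx))
    c = math.exp(my - theta * mx)
    return c, theta

# Hypothetical region: drainage areas (km^2) and mean annual peak flows (m^3/s)
areas = [50.0, 120.0, 400.0, 900.0]
means = [30.0, 62.0, 170.0, 330.0]

c, theta = fit_power_law(areas, means)
estimate = c * 250.0 ** theta  # transfer the first moment to an ungauged 250 km^2 site
```

The abstract's point is precisely that a single predictor (drainage area) often cannot capture a watershed's flood response, motivating the recursive selection of additional attributes in RMS.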
However, there is a dearth of meaningful efforts, especially for multisite downscaling of flows. To address this, a multivariate support vector regression (MSVR) based methodology is proposed to arrive at flood return levels (quantile estimates) for target locations in a river basin corresponding to different return periods under a climate change scenario. The approach involves (i) using MSVR relationships to downscale GCM-simulated large-scale atmospheric variables (LSAVs) to monthly time series of streamflow at multiple locations in a river basin, (ii) disaggregating the downscaled streamflows at each site from the monthly to the daily time scale using a k-nearest-neighbor disaggregation methodology, and (iii) fitting a time-varying generalized extreme value (GEV) distribution to annual maximum flows extracted from the daily streamflows and estimating flood return levels for the target locations corresponding to different return periods.
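Step (iii) above ultimately reduces to evaluating GEV return levels from fitted parameters. The sketch below shows the standard return-level formula for a GEV with non-zero shape parameter; the parameter values are illustrative, and a time-varying fit as in the thesis would simply let the location and scale depend on a covariate such as time.

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of GEV(mu, sigma, xi), xi != 0:
    z_T = mu + (sigma / xi) * ((-ln(1 - 1/T))**(-xi) - 1).
    In a non-stationary fit, mu and sigma would vary with a covariate (e.g. time)."""
    y = -math.log(1.0 - 1.0 / T)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Illustrative parameters (not from the thesis): location 100, scale 30, shape 0.1
z100 = gev_return_level(100.0, 30.0, 0.1, 100)
z010 = gev_return_level(100.0, 30.0, 0.1, 10)
```

A positive shape parameter gives a heavy upper tail, so return levels grow quickly with the return period, which is why the shape estimate dominates the uncertainty in long-return-period flood quantiles.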
