51

Development and Implementation of Bayesian Computer Model Emulators

Lopes, Danilo Lourenco January 2011 (has links)
Our interest is the risk assessment of rare natural hazards, such as large volcanic pyroclastic flows. Since catastrophic consequences of volcanic flows are rare events, our analysis benefits from the use of a computer model to provide information about these events under natural conditions that may not have been observed in reality. A common problem in the analysis of computer experiments, however, is the high computational cost associated with each simulation of a complex physical process. We tackle this problem by using a statistical approximation (emulator) to predict the output of this computer model at untried values of inputs. A Gaussian process response surface is a technique commonly used in these applications, because it is fast and easy to use in the analysis. We explore several aspects of the implementation of Gaussian process emulators in a Bayesian context. First, we propose an improvement to the implementation of the plug-in approach to Gaussian processes. Next, we evaluate the performance of a spatial model for large data sets in the context of computer experiments. Computer model data can also be combined with field observations in order to calibrate the emulator and obtain statistical approximations to the computer model that are closer to reality. We present an application where we learn the joint distribution of inputs from field data and then bind this auxiliary information to the emulator in a calibration process. One of the outputs of our computer model is a surface of maximum volcanic flow height over some geographical area. We show how the topography of the volcano area plays an important role in determining the shape of this surface, and we propose methods to incorporate geophysical information in the multivariate analysis of computer model output. / Dissertation
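The core idea of this abstract, replacing an expensive simulator with a Gaussian process emulator that predicts outputs at untried inputs, can be sketched in a few lines of numpy. The squared-exponential kernel, its length scale, and the toy "simulator" below are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=0.5, variance=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_new, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP emulator (prior variance 1.0)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_new, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# Stand-in for an expensive simulator run (e.g. one pyroclastic flow simulation)
simulator = lambda x: np.sin(3.0 * x)
x_train = np.linspace(0.0, 2.0, 9)   # nine "expensive" runs at design points
y_train = simulator(x_train)
x_new = np.array([1.1, 1.9])         # untried inputs
mean, var = gp_predict(x_train, y_train, x_new)
```

The emulator's posterior mean interpolates the training runs, and the posterior variance quantifies how far a prediction sits from the design points, which is what makes it usable in a Bayesian analysis.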
52

A Case Study of Risk Management for Groundwater Contaminated Site by Organic Chlorine Solvents.

Lin, Sang-Feng 02 January 2012 (has links)
Soil and groundwater pollution in Taiwan has increased in recent years due to leakage of petroleum products and organic chemicals. Because the country is small and densely populated, such pollution can reach people through water, soil, air, crops, fish, and other pathways, posing direct or indirect health risks. This study focuses on a domestic site contaminated with chlorinated compounds and uses risk assessment methods to analyze the necessity and urgency of remediation and the downstream influence of the contamination; the results are then used to formulate management strategies and projects that control the risk level and impact of the contaminated site. We collected site-related information and, following the health risk assessment methods for soil and groundwater contaminated sites, together with the CRYSTAL BALL and BIOCHLOR software packages adopted by the Environmental Protection Bureau, evaluated a site contaminated with trichloroethylene (TCE) and considered whether it affects the health of nearby residents. Hydrogeological survey data for the site were used to carry out a second-tier health risk assessment. First, the commercial software CRYSTAL BALL was used to perform uncertainty and sensitivity analysis, examining both how variation in individual parameters affects the risk value and the results obtained from different parameter combinations. The analysis confirms that the parameters with the largest effect on risk are those governing pollutant transport, consistent with previous assessment studies; other parameters, such as receptor age group, exposure pathway, exposure time, and water intake, have less influence on the variation in risk. The study also found that the risk posed by the TCE plume changes as groundwater moves downstream, meaning the receptor's distance from the contamination source has a large influence.
A greater distance allows more attenuation of the TCE plume and hence a lower cancer risk for the receptor; conversely, a shorter distance means a higher cancer risk. Subsequently, the BIOCHLOR assessment software from the U.S. Environmental Protection Agency was used to determine whether the site has potential for anaerobic reductive dechlorination and to estimate the possibility of natural attenuation. The site scored three points in the weighted total, which, without more accurate data at this stage, does not prove that the chlorinated compounds can undergo biological decomposition in the anaerobic environment; we therefore recommend looking for other remediation options. Based on the risk assessment results, the study selected the more important sensitive parameters for the site and worked out remediation measures suited to this case according to the schedule of objectives. The investigation found that residents do extract groundwater for agricultural irrigation, but do not drink it directly. To avoid the worst-case situation from a risk standpoint, both regulatory and technical measures are planned. First, as administrative controls on the regulatory side, we consider how to effectively prohibit residents from extracting groundwater: for instance, prohibiting or limiting the installation of wells, providing substitute drinking water, posting notice signs, and monitoring groundwater quality regularly. Second, to prevent the contaminant plume from spreading, technical measures are adopted, for example remediation technologies (currently including soil gas extraction and air injection) and containment technologies (including hydraulic containment, permeable reactive barriers, natural bioremediation, etc.), so as to manage possible events at the site effectively and be well prepared in advance.
Good communication with local residents is also maintained, so that they understand what the remediation involves; this reduces their resistance, helps the remediation progress, and achieves the risk management objective.
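The CRYSTAL BALL step described above is, at its core, Monte Carlo propagation of uncertain exposure parameters through a cancer risk equation. A minimal stdlib sketch of that idea follows; every distribution and parameter value here is an illustrative assumption, not data from the site:

```python
import random

random.seed(7)

# Illustrative TCE oral slope factor, (mg/kg-day)^-1 -- an assumed value
SLOPE_FACTOR = 0.046

def one_draw():
    """One Monte Carlo realization of incremental lifetime cancer risk."""
    conc = random.lognormvariate(-3.0, 0.8)            # TCE in well water, mg/L
    intake = random.uniform(1.0, 3.0)                  # water intake, L/day
    body_weight = max(random.gauss(60.0, 8.0), 30.0)   # kg, truncated below
    dose = conc * intake / body_weight                 # mg/kg-day
    return dose * SLOPE_FACTOR

risks = sorted(one_draw() for _ in range(20000))
mean_risk = sum(risks) / len(risks)
p95_risk = risks[int(0.95 * len(risks))]
```

Sensitivity analysis then follows by varying one input distribution at a time and observing the change in the risk percentiles, which is how transport-related parameters were identified as dominant.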
53

Design and performance of an ammonia measurement system

Boriack, Cale Nolan 25 April 2007 (has links)
Ammonia emissions from animal feeding operations (AFOs) have recently come under increased scrutiny. The US Environmental Protection Agency (EPA) has come under increased pressure from special interest groups to regulate ammonia. Regulation of ammonia is very difficult because every facility has different manure management practices. Different management practices lead to different emissions for every facility. Researchers have been tasked by industry to find best management practices to reduce emissions. The task cannot be completed without equipment that can efficiently and accurately compare emissions. To complete this task, a measurement system was developed and performance tested to measure ammonia. Performance tests included uncertainty analysis, system response, and adsorption kinetics. A measurement system was designed for measurement of gaseous emissions from ground level area sources (GLAS) in order to sample multiple receptors with a single sensor. This multiplexer may be used in both local and remote measurement systems to increase the sampling rate of gaseous emissions. The increased data collection capacity with the multiplexer allows for nearly three times as many samples to be taken in the same amount of time while using the same protocol for sampling. System response analysis was performed on an ammonia analyzer, a hydrogen sulfide analyzer, and tubing used with flux chamber measurement. System responses were measured and evaluated using transfer functions. The system responses for the analyzers were found to be first order with delay in auto mode. The tubing response was found to be a first order response with delay. Uncertainty analysis was performed on an ammonia sampling and analyzing system. The system included an analyzer, mass flow controllers, calibration gases, and analog outputs. The standard uncertainty was found to be 443 ppb when measuring a 16 ppm ammonia stream with a 20 ppm span. 
A laboratory study dealing with the adsorption kinetics of ammonia on a flux chamber was performed to determine if adsorption onto the chamber walls was significant. The study found that the adsorption would not significantly change the concentration of the output flow 30 minutes after a clean chamber was exposed to ammonia concentrations for concentrations above 2.5 ppm.
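The system responses above were identified as first order with delay; that transfer-function model is compact enough to state directly. The gain, time constant, and delay below are illustrative placeholders, not the thesis's fitted values:

```python
import math

def fopdt_step(t, gain=1.0, tau=30.0, delay=10.0):
    """Step response of a first-order-plus-dead-time system:
    y(t) = gain * (1 - exp(-(t - delay)/tau)) for t >= delay, else 0."""
    if t < delay:
        return 0.0
    return gain * (1.0 - math.exp(-(t - delay) / tau))

# Inside the dead time the analyzer has not yet responded
y_early = fopdt_step(5.0)
# One time constant after the delay, the response reaches 63.2% of its final value
y_at_tau = fopdt_step(40.0)   # t = delay + tau
```

Fitting `tau` and `delay` to a measured concentration step is what lets the sampling protocol account for analyzer and tubing lag when multiplexing receptors.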
54

Real-Time Demand Estimation for Water Distribution Systems

Kang, Doo Sun January 2008 (has links)
The goal of a water distribution system (WDS) is to supply the desired quantity of fresh water to consumers at the appropriate time. In order to properly operate a WDS, system operators need information about system states, such as tank water level, nodal pressure, and water quality, at locations across the system. Most water utilities now have some level of SCADA (Supervisory Control and Data Acquisition) system providing nearly real-time monitoring data. However, due to prohibitive metering costs and a lack of applications for the data, only portions of systems are monitored and the use of the SCADA data is limited. This dissertation takes a comprehensive view of real-time demand estimation in water distribution systems. The goal is to develop an optimal monitoring system plan that will collect appropriate field data to determine accurate, precise demand estimates and to understand their impact on model predictions. To achieve that goal, a methodology is developed for real-time demand estimation, with associated uncertainties, using a limited number of field measurements. Further, system-wide nodal pressure and chlorine concentration, and their uncertainties, are predicted using the estimated nodal demands. This dissertation is composed of three journal manuscripts that address these three key steps, beginning with uncertainty evaluation, followed by demand estimation, and finally optimal metering layout. The uncertainties associated with the state estimates are quantified in terms of confidence limits. To compute the uncertainties in real time, alternative schemes that reduce computational effort while providing good statistical approximations are evaluated and verified by Monte Carlo simulation (MCS). The first-order second-moment (FOSM) method provides accurate variance estimates for pressure; however, because of its linearity assumption it has limited predictive ability for chlorine under unsteady conditions.
Latin hypercube sampling (LHS) provides good estimates of prediction uncertainty for chlorine and pressure in steady and unsteady conditions with significantly less effort. For real-time demand estimation, two recursive state estimators are applied: a tracking state estimator (TSE) based on a weighted least squares (WLS) scheme, and a Kalman filter (KF). In addition, in order to identify the field data types usable for demand estimation, comparative studies are performed using pipe flow rate and nodal pressure head as measurements. To reduce the number of unknowns and make the system solvable, nodes with similar user characteristics are grouped and assumed to have the same demand pattern. The uncertainties in state variables are quantified in terms of confidence limits using the approximate methods (i.e., FOSM and LHS). Results show that TSE with pipe flow rates as measurements provides reliable demand estimates. Also, the model predictions computed using the estimated demands match well with the synthetically generated true values. Field measurements are critical to obtaining quality real-time state estimates. However, the limited number of metering locations has been a significant obstacle for real-time studies, and identifying the locations that best gain information is critical. Here, optimal meter placement (OMP) is formulated as a multi-objective optimization problem and solved using a multi-objective genetic algorithm (MOGA) based on Pareto-optimal solutions. Results show that model accuracy and precision should be pursued simultaneously as objectives, since the two measures have a trade-off relationship. GA solutions were improvements over less robust methods and designers' experienced judgment.
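The Kalman-filter demand estimator described above can be sketched with a toy network: grouped nodal demands follow a random walk and are observed through noisy pipe flow meters. The measurement matrix, noise levels, and demand values below are invented for illustration, not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: two demand groups observed through three flow meters (illustrative H)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_demand = np.array([5.0, 3.0])
Q = 0.01 * np.eye(2)        # random-walk process noise on grouped demands
R = 0.25 * np.eye(3)        # flow meter noise covariance (std 0.5)

x = np.zeros(2)             # initial demand estimate
P = 10.0 * np.eye(2)        # initial estimate covariance

for _ in range(200):
    z = H @ true_demand + rng.normal(0.0, 0.5, size=3)  # noisy flow readings
    # Predict step (random-walk demand model: state unchanged, covariance grows)
    P = P + Q
    # Update step (standard Kalman gain and correction)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
```

The diagonal of `P` provides the confidence limits on each group's demand, mirroring the uncertainty quantification step in the abstract.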
55

Decision support algorithms for power system and power electronic design

Heidari, Maziar 10 September 2010 (has links)
The thesis introduces an approach for obtaining higher-level decision support information using electromagnetic transient (EMT) simulation programs. In this approach, a suite of higher-level driver programs (decision support tools) controls the simulator to gain important information about the system being simulated. These tools conduct a sequence of simulation runs, in each of which the study parameters are carefully selected based on observations from the earlier runs in the sequence. In this research, two such tools have been developed in conjunction with the PSCAD/EMTDC electromagnetic transient simulation program. The first tool is an improved optimization algorithm, used for automatic optimization of system parameters to achieve a desired performance. This algorithm improves on the previously reported method of optimization-enabled electromagnetic transient simulation by using an enhanced gradient-based optimization algorithm with constraint handling techniques. In addition, to allow handling of design problems with more than one objective, the thesis proposes to augment the optimization tool with the technique of Pareto optimality. A sequence of optimization runs is conducted to obtain the Pareto frontier, which quantifies the tradeoffs between the design objectives. The frontier can be used by the designer in the decision-making process. The second tool developed in this research helps the designer study the effects of uncertainties in a design. Using a similar multiple-run approach, this sensitivity analysis tool provides surrogate models of the system: simple mathematical functions that represent different aspects of the system's performance. These models allow the designer to analyze the effects of uncertainties on system performance without having to conduct any further time-consuming EMT simulations.
It has also been proposed in this research to add probabilistic analysis capabilities to the sensitivity analysis tool. Since probabilistic analysis of a system using conventional techniques (e.g., Monte Carlo simulation) normally requires a large number of EMT simulation runs, using surrogate models instead of the actual simulation runs yields significant savings in simulation time. A number of examples are used throughout the thesis to demonstrate the application and usefulness of the proposed tools.
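The "sequence of optimization runs" that traces a Pareto frontier can be illustrated with a scalarized bi-objective toy problem, where each weight setting stands in for one optimization-enabled simulation run. The objectives f1 = x², f2 = (x−2)² and their closed-form minimizer are assumptions chosen so the sketch needs no solver:

```python
def pareto_frontier_sketch():
    """Sweep scalarization weights; each weight is one 'optimization run'.

    For w*f1 + (1-w)*f2 with f1 = x^2 and f2 = (x-2)^2, setting the
    derivative to zero gives the minimizer x* = 2*(1-w).
    """
    frontier = []
    for i in range(11):
        w = i / 10.0
        x_star = 2.0 * (1.0 - w)
        frontier.append((x_star ** 2, (x_star - 2.0) ** 2))
    return frontier

frontier = pareto_frontier_sketch()
```

Plotting the (f1, f2) pairs would show the tradeoff curve the designer consults: improving one objective necessarily worsens the other.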
56

On Fuel Coolant Interactions and Debris Coolability in Light Water Reactors

Thakre, Sachin January 2015 (has links)
In the case of a hypothetical severe accident in a light water reactor, core damage may occur and molten fuel may interact with water, resulting in explosive interactions. A fuel-coolant interaction (FCI) consists of many complex phenomena whose characteristics determine the energetics of the interaction. The fuel melt initially undergoes fragmentation after contact with the coolant, which increases the melt surface area exposed to the coolant and causes rapid heat transfer. A substantial amount of research has been done to understand the phenomenology of FCI, yet gaps remain in terms of the uncertainties in describing processes such as breakup/fragmentation of the melt and of droplets. The objective of the present work is to substantiate the understanding of the premixing phase of the FCI process by studying the deformation/pre-fragmentation of melt droplets and the mechanism of melt jet breakup. The focus is the effect of the various influential parameters during the premixing phase that determine the intensity of the energetics, in terms of steam explosion. The study is based on numerical analysis, starting from small scale and proceeding to large-scale FCI. Efforts are also made to evaluate the uncertainties in estimating steam explosion loads at the reactor scale. The fragmented core is expected to form a porous debris bed; a part of the present work therefore also deals with experimental investigations of the coolability of prototypical debris beds. Initially, the phenomenology of FCI and debris bed coolability is introduced, and a review of the state of the art based on previous experimental and theoretical developments is presented. The study starts with numerical investigation of molten droplet hydrodynamics in a water pool, carried out using the Volume of Fluid (VOF) method in the CFD code ANSYS FLUENT. This fundamental study concerns single droplets in a preconditioning phase, i.e., deformation/pre-fragmentation prior to steam explosion. The droplet deformation is studied extensively, including the effect of a pressure pulse on the deformation behavior. The effects of material physical properties such as density, surface tension, and viscosity are investigated. The work is then extended to 3D analysis as part of high-fidelity simulations, in order to overcome possible limitations of 2D simulations. The investigation of FCI processes then continues with analysis of melt jet fragmentation in a water pool, since this is the crucial phenomenon that creates the melt-coolant premixture, the initial condition for a steam explosion. The calculations are carried out assuming non-boiling conditions and the properties of Wood's metal. The jet fragmentation and breakup pattern are carefully observed at various Weber numbers. Moreover, the effects of physical and material properties such as diameter, velocity, density, surface tension, and viscosity on jet breakup length are investigated. After these fundamental studies, the work was extended to reactor-scale FCI energetics, oriented mainly toward the evaluation of uncertainties in estimating the explosion impact loads on the surrounding structures. The uncertainties include the influential parameters in the FCI process and the code uncertainties in the calculations. The FCI code MC3D is used for the simulations, and the PIE (propagation of input errors) method is used for the uncertainty analysis. The last part of the work concerns experimental investigations of debris coolability carried out using the POMECO-HT facility at KTH. The focus is on determining the effect of the bed's prototypical characteristics on its coolability, in terms of inhomogeneity, with a heap-like (triangular) bed and a radially stratified bed, and also the effect of its multi-dimensionality.
For this purpose, four particle beds were constructed: two homogeneous, one with radial stratification, and one with triangular shape. The effectiveness of coolability-enhancing measures such as bottom injection of water and a downcomer (used for natural circulation driven coolability, NCDC) was also investigated. The final chapter includes the summary of the whole work.
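The Weber number that governs the jet breakup regimes observed in the study is a one-line formula worth stating explicitly. Which density (jet or ambient) to use, and the property values below, are illustrative assumptions for the sketch only:

```python
def weber_number(rho, velocity, diameter, sigma):
    """We = rho * U^2 * D / sigma: ratio of inertial to surface-tension forces
    at the jet surface; higher We means finer fragmentation."""
    return rho * velocity ** 2 * diameter / sigma

# Illustrative values: ambient (water) density, a 1 cm jet at 2 m/s, and an
# assumed melt-water interfacial tension of 0.4 N/m (placeholder, not measured)
We = weber_number(rho=1000.0, velocity=2.0, diameter=0.01, sigma=0.4)
```

Sweeping velocity or diameter in this expression reproduces the kind of parameter scan over Weber numbers described for the jet fragmentation simulations.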
58

Underwater acoustic localization and tracking of Pacific walruses in the northeastern Chukchi Sea

Rideout, Brendan Pearce 10 January 2012 (has links)
This thesis develops and demonstrates an approach for estimating the three-dimensional (3D) location of a vocalizing underwater marine mammal using acoustic arrival time measurements at three spatially separated receivers while providing rigorous location uncertainties. To properly account for uncertainty in the measurements of receiver parameters (e.g., 3D receiver locations and synchronization times) and environmental parameters (water depth and sound speed correction), these quantities are treated as unknowns constrained with prior estimates and prior uncertainties. While previous localization algorithms have solved for an unknown scaling factor on the prior uncertainties as part of the inversion, in this work unknown scaling factors on both the prior and arrival time uncertainties are estimated. Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously. Posterior uncertainties for all unknowns are calculated and incorporate both arrival time and prior uncertainties. Simulation results demonstrated that, for the case considered here, linearization errors are generally small and that the lack of an accurate sound speed profile does not necessarily cause large uncertainties or biases in the estimated positions. The primary motivation for this work was to develop an algorithm for locating underwater Pacific walruses in the coastal waters around Alaska. In 2009, an array of approximately 40 underwater acoustic receivers was deployed in the northeastern Chukchi Sea (northwest of Alaska) from August to October to record the vocalizations of marine mammals including Pacific walruses and bowhead whales. Three of these receivers were placed in a triangular arrangement approximately 400 m apart near the Hanna Shoal (northwest of Wainwright, Alaska). 
A sequence of walrus knock vocalizations from this data set was processed using the localization algorithm developed in this thesis, yielding a track whose estimated swim speed is consistent with current knowledge of normal walrus swim speed. An examination of absolute and relative walrus location uncertainties demonstrated the usefulness of considering relative uncertainties for applications where the precise location of the mammal is not important (e.g., estimating swim speed). / Graduate
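The arrival-time localization at the heart of this thesis reduces, in its simplest form, to a nonlinear least-squares fit of source position and emission time. The Gauss-Newton sketch below is a simplified 2-D stand-in (no priors on receiver parameters, fixed sound speed); the geometry and sound speed are assumed values echoing the ~400 m triangular array:

```python
import numpy as np

C = 1480.0  # assumed nominal sound speed in seawater, m/s

def localize(receivers, arrival_times, guess, n_iter=50):
    """Gauss-Newton fit of (x, y, t0) to arrival times t_i = t0 + |p - r_i| / c."""
    p = np.array(guess, dtype=float)          # [x, y, source emission time]
    for _ in range(n_iter):
        diff = p[:2] - receivers              # (n, 2) vectors source -> receivers
        d = np.linalg.norm(diff, axis=1)      # ranges to each receiver
        resid = arrival_times - (p[2] + d / C)
        # Jacobian of the modeled arrival times w.r.t. (x, y, t0)
        J = np.column_stack([diff[:, 0] / (C * d),
                             diff[:, 1] / (C * d),
                             np.ones(len(receivers))])
        p += np.linalg.lstsq(J, resid, rcond=None)[0]
    return p

# Triangular array roughly 400 m on a side, loosely like the Hanna Shoal layout
receivers = np.array([[0.0, 0.0], [400.0, 0.0], [200.0, 350.0]])
true_source = np.array([150.0, 120.0])
true_times = np.linalg.norm(receivers - true_source, axis=1) / C  # t0 = 0
est = localize(receivers, true_times, guess=[180.0, 150.0, 0.0])
```

The full method additionally treats receiver positions, synchronization times, water depth, and a sound speed correction as constrained unknowns, and propagates both arrival-time and prior uncertainties into posterior location uncertainties.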
59

Direct sensitivity techniques in regional air quality models: development and application

Zhang, Wenxian 12 January 2015 (has links)
Sensitivity analysis based on a chemical transport model (CTM) serves as an important approach to better understanding the relationship between trace contaminant levels in the atmosphere and emissions and chemical and physical processes. Previous studies on ozone control identified the high-order Decoupled Direct Method (HDDM) as an efficient tool for sensitivity analysis. Given the growing recognition of the adverse health effects of fine particulate matter (particles with an aerodynamic diameter less than 2.5 micrometers, PM2.5), this dissertation presents the development of an HDDM sensitivity technique for particulate matter and its implementation in a widely used CTM, CMAQ. Compared to previous studies, two new features of the implementation are 1) inclusion of the sensitivities of aerosol water content and activity coefficients, and 2) tracking of the chemical regimes of the embedded thermodynamic model. The new features provide more accurate sensitivities, especially for nitrate and ammonium. Results compare well with brute-force sensitivities and are shown to be more stable and computationally efficient. Next, this dissertation explores applications of HDDM. Source apportionment analysis for the Houston region in September 2006 indicates that nonlinear responses accounted for 3.5% to 33.7% of daily average PM2.5, and that PM2.5 formed rapidly at night, especially in the presence of abundant ozone and under stagnant conditions. Uncertainty analysis based on the HDDM found that, on average, uncertainties in emission rates led to 36% uncertainty in simulated daily average PM2.5 and could explain much, but not all, of the difference between simulated and observed PM2.5 concentrations at two observation sites. HDDM is then applied to assess the impact of flare VOC emissions with temporally variable combustion efficiency.
A detailed study of flare emissions using the 2006 Texas special inventory indicates that daily maximum 8-hour ozone at a monitoring site can increase by 2.9 ppb when combustion efficiency is significantly decreased. The last application in this dissertation integrates the reduced-form model into an electricity generation planning model, enabling representation of the geospatial dependence of air quality-related health costs in the optimization process so as to seek the least-cost plan for power generation. The integrated model can provide useful advice on selecting fuel types and locations for power plants.
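The comparison between direct (DDM-style) sensitivities and "brute force" finite differences mentioned above can be illustrated on a toy concentration-response function. The quadratic response below is an invented stand-in for the CTM, chosen only so the analytic derivatives are known:

```python
def conc(emission):
    """Toy nonlinear response of PM2.5 to an emission rate (illustrative only)."""
    return 10.0 + 2.0 * emission - 0.05 * emission ** 2

def direct_sens(emission):
    """First-order sensitivity dC/dE, as a direct-method derivative would give."""
    return 2.0 - 0.1 * emission

def brute_force_sens(emission, h=1e-4):
    """Central finite difference ('brute force'): two extra model runs per input."""
    return (conc(emission + h) - conc(emission - h)) / (2.0 * h)

def brute_force_sens2(emission, h=1e-2):
    """Second-order sensitivity d2C/dE2, the quantity HDDM computes directly."""
    return (conc(emission + h) - 2.0 * conc(emission) + conc(emission - h)) / h ** 2
```

In a real CTM each finite-difference point costs a full simulation, which is exactly why the direct method's one-run sensitivities are the more efficient and more stable tool.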
60

Efficient Methods for Predicting Soil Hydraulic Properties

Minasny, Budiman January 2000 (has links)
Both empirical and process-simulation models are useful for evaluating the effects of management practices on environmental quality and crop yield. The use of these models is limited, however, because they need many soil property values as input. The first step towards modelling is the collection of input data. Soil properties can be highly variable spatially and temporally, and measuring them is time-consuming and expensive. Efficient methods for estimating soil hydraulic properties, which consider the uncertainty and cost of measurements, form the main thrust of this study. Hydraulic properties are affected by other soil physical and chemical properties; it is therefore possible to develop empirical relations to predict them. Such a quantified relation is called a pedotransfer function. Such functions may be global or restricted to a country or region. The different classification of particle-size fractions used in Australia compared with other countries presents a problem for the immediate adoption of exotic pedotransfer functions. A database of Australian soil hydraulic properties has been compiled. Pedotransfer functions for estimating water retention and saturated hydraulic conductivity from particle size and bulk density for Australian soil are presented. Different approaches for deriving hydraulic transfer functions are presented and compared. Published pedotransfer functions were also evaluated; generally they provide a satisfactory estimation of water retention and saturated hydraulic conductivity, depending on the spatial scale and accuracy of prediction. Several pedotransfer functions were developed in this study to predict water retention and hydraulic conductivity. The pedotransfer functions developed here may predict adequately over large areas, but for site-specific applications local calibration is needed. There is much uncertainty in the input data, and consequently the transfer functions can produce varied outputs.
Uncertainty analysis is therefore needed. A general approach to quantifying uncertainty is the Monte Carlo method: by sampling repeatedly from the assumed probability distributions of the input variables and evaluating the response of the model, the statistical distribution of the outputs can be estimated. A modified Latin hypercube method is presented for sampling joint multivariate probability distributions. This method is applied to quantify the uncertainties in pedotransfer functions of soil hydraulic properties. Hydraulic properties predicted with the pedotransfer functions developed in this study are also used in a field soil-water model to analyse the uncertainty in the prediction of dynamic soil-water regimes.

The use of the disc permeameter in the field conventionally requires placing a layer of sand on the soil surface to provide good contact between the surface and the disc supply membrane. The effect of this sand on water infiltration into the soil, and on the estimate of sorptivity, was investigated in a numerical study and a field experiment on a heavy clay. Placement of sand significantly increased the cumulative infiltration but made little difference to the infiltration rate; infiltration without sand was considerably smaller because of the poor contact between the disc and the soil surface. Estimation of sorptivity from Philip's two-term algebraic model using different methods was also examined, and the field experiment revealed that the error in the infiltration measurement was proportional to the cumulative infiltration. An inverse method for predicting soil hydraulic parameters from disc permeameter data has been developed. A numerical study showed that the inverse method is quite robust in identifying the hydraulic parameters, but application to field data showed that the estimated water retention curve is generally lower than that obtained by laboratory measurement.
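Philip's two-term model mentioned above, I(t) = S·√t + A·t, is linear in the sorptivity S and the coefficient A, so both can be estimated from a cumulative-infiltration record by ordinary least squares. The record below is synthetic and purely illustrative; the thesis compares several estimation methods, of which this is only the simplest.

```python
import numpy as np

def fit_philip_two_term(t, I):
    """Estimate sorptivity S and coefficient A in Philip's two-term
    infiltration model I(t) = S*sqrt(t) + A*t by least squares."""
    M = np.column_stack([np.sqrt(t), t])
    (S, A), *_ = np.linalg.lstsq(M, I, rcond=None)
    return S, A

# Synthetic cumulative-infiltration record (illustrative, not field data)
t = np.linspace(10.0, 3600.0, 60)   # time since start of infiltration, s
S_true, A_true = 0.05, 1e-4         # cm s^-1/2 and cm s^-1 (assumed values)
rng = np.random.default_rng(1)
I = S_true * np.sqrt(t) + A_true * t + rng.normal(0.0, 0.01, t.size)

S_hat, A_hat = fit_philip_two_term(t, I)
```

Because √t and t are strongly correlated over short records, estimates of A are the more fragile of the two in practice, which is one reason different estimation methods give different answers.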
Nevertheless, the estimated near-saturated hydraulic conductivity matched the analytical solution quite well. The author believes that the inverse method can give a reasonable estimate of soil hydraulic parameters; some experimental and theoretical problems were identified and discussed. A formal analysis was carried out to evaluate the efficiency of the different methods for predicting water retention and hydraulic conductivity. The analysis identified the contribution of each individual source of measurement error to the overall uncertainty. For single measurements, the inverse disc-permeameter analysis is economically more efficient than using pedotransfer functions or measuring hydraulic properties in the laboratory. However, given the large spatial variation of soil hydraulic properties, it is perhaps not surprising that many cheap, imprecise measurements, e.g. by hand texturing, are more efficient than a few expensive, precise ones.
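The uncertainty analyses above rest on Monte Carlo propagation with Latin hypercube sampling. The thesis's modified scheme handles joint multivariate input distributions; the sketch below shows only the basic independent-variable version, propagated through a toy pedotransfer function whose ranges and coefficients are invented for illustration.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng=None):
    """Basic Latin hypercube sample on the unit hypercube.

    Each variable's [0, 1) range is divided into n_samples equal strata;
    exactly one point is drawn per stratum, and the stratum orderings are
    randomized independently for each variable.
    """
    rng = np.random.default_rng(rng)
    u = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        strata = rng.permutation(n_samples)              # random stratum order
        u[:, j] = (strata + rng.random(n_samples)) / n_samples
    return u

def toy_ptf(clay, bulk_density):
    """Invented linear pedotransfer function, for illustration only."""
    return 0.1 + 0.004 * clay - 0.02 * bulk_density

# Map unit-cube samples onto assumed input ranges and propagate them
u = latin_hypercube(1000, 2, rng=42)
clay = 5.0 + 55.0 * u[:, 0]      # clay content, % (assumed range)
bd = 1.1 + 0.6 * u[:, 1]         # bulk density, g/cm^3 (assumed range)
outputs = toy_ptf(clay, bd)
out_mean, out_std = outputs.mean(), outputs.std()
```

The stratification guarantees coverage of each input's full range with far fewer runs than plain random sampling, which matters when each model evaluation (here trivially cheap) is a field soil-water simulation.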
