341
Assessing the Performance of HSPF When Using the High Water Table Subroutine to Simulate Hydrology in a Low-Gradient Watershed. Forrester, Michael Scott. 30 May 2012.
Modeling ground-water hydrology is critical in low-gradient, high water table watersheds where ground water is the dominant contribution to streamflow. The Hydrological Simulation Program-FORTRAN (HSPF) model has two different subroutines available to simulate ground water: the traditional ground-water (TGW) subroutine and the high water table (HWT) subroutine. The HWT subroutine has more parameters and requires more data but was created to enhance model performance in low-gradient, high water table watershed applications. The objective of this study was to compare the performance and uncertainty of the TGW and HWT subroutines when applying HSPF to a low-gradient watershed in the Coastal Plain of northeast North Carolina. One hundred thousand Monte Carlo simulations were performed to generate the data needed for model performance comparison. The HWT model generated considerably higher Nash-Sutcliffe efficiency (NSE) values while performing slightly worse when simulating the 50% lowest and 10% highest flows. Model uncertainty was assessed using the Average Relative Interval Length (ARIL) metric. The HWT model operated with more average uncertainty across all flow regimes. Based on the results, the HWT subroutine is preferable when HSPF is applied to a low-gradient watershed and the accuracy of simulated stream discharge is important. In situations where a balance between performance and uncertainty is called for, the choice of which subroutine to employ is less clear-cut. / Master of Science
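Below is a minimal sketch of the two comparison metrics named in this abstract, Nash-Sutcliffe efficiency and the Average Relative Interval Length, applied to a synthetic ensemble standing in for HSPF Monte Carlo output; the array names, the 95% interval width, and the synthetic data are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of a single simulated flow series."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def aril(ensemble, obs, lower_q=2.5, upper_q=97.5):
    """Average Relative Interval Length of an ensemble's uncertainty band.

    ensemble: array of shape (n_simulations, n_timesteps)
    obs:      array of shape (n_timesteps,)
    """
    lo = np.percentile(ensemble, lower_q, axis=0)
    hi = np.percentile(ensemble, upper_q, axis=0)
    return np.mean((hi - lo) / obs)

# Illustrative use with synthetic data standing in for model output.
rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=5.0, size=365)          # "observed" daily flows
ens = obs + rng.normal(0.0, 2.0, size=(1000, obs.size))  # 1000 Monte Carlo runs
print("NSE of ensemble mean:", round(nse(ens.mean(axis=0), obs), 3))
print("ARIL (95% band):     ", round(aril(ens, obs), 3))
```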
342
A risk management process for complex projects. Brown, Robert G. 21 July 2009.
A more effective and efficient method to identify, assess, track and document project risks was explored. Using the systems engineering approach, an adaptable, repeatable risk management process was designed for complex projects (typically multi-million dollar electronics / defense contracts with advanced technology, aggressive schedules and multiple contractors / subcontractors).
Structured tools and techniques were synthesized to increase the probability of risk identification, to facilitate qualitative and quantitative risk assessment, to graphically portray risk reduction priorities and to provide a vehicle for improved communication and traceability of risk reduction activity across the project team.
A description of the process used to survey current risk management methods, to ascertain the critical risk management process requirements and to define a means to prioritize risks for more effective resource allocation is included. / Master of Science
343
Three Essays on Adoption and Impact of Agricultural Technology in Bangladesh. Ahsanuzzaman, Ahsanuzzaman. 23 June 2015.
New agricultural technologies can improve productivity to meet the increased demand for food that places pressure on agricultural production systems in developing countries. Because technological innovation is one of major factors shaping agriculture in both developing and developed countries, it is important to identify factors that help or that hinder the adoption process. Adoption analysis can assist policy makers in making informed decisions about dissemination of technologies that are under consideration. It is also important to estimate the impact of a technology. This dissertation contains three essays that estimate factors affecting integrated pest management (IPM) adoption and the impact of IPM on sweet gourd farming in Bangladesh.
The first essay estimates factors that affect the timing of IPM adoption in Bangladesh. It employs fully parametric and semiparametric duration models, and (i) compares results from different estimation methods to identify the best model for the data, and (ii) identifies factors that affect the length of time before Bangladeshi farmers adopt an agricultural technology. The paper reaches two conclusions: 1) even though the non-parametric estimate of the hazard function indicated a non-monotone model such as the log-normal or log-logistic, no differences are found in the sign and significance of the estimated coefficients between the non-monotone and monotone models; 2) the adoption decision is influenced not directly by economic factors but rather by factors related to information diffusion and farmers' non-economic characteristics such as age and education. In particular, farmers' age and education, membership in an association, training, distance of the farmer's house from local and town markets, and farmers' perception about the use of IPM affect the length of time to adoption. Farm size is the only variable closely related to economic factors that is found to be significant, and it decreases the length of time to adoption.
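As a hedged illustration of the kind of duration analysis described above, and not the essay's actual specification or data, the sketch below fits a non-parametric survival curve plus monotone (Weibull) and non-monotone (log-logistic) accelerated-failure-time models to synthetic adoption times using the `lifelines` package; every covariate name and value is an assumption.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, LogLogisticAFTFitter, WeibullAFTFitter

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "years_to_adoption": rng.weibull(1.5, n) * 5 + 0.1,  # time until IPM adoption
    "adopted": rng.integers(0, 2, n),                    # 0 = right-censored (not yet adopted)
    "age": rng.normal(45, 10, n),
    "education_years": rng.integers(0, 13, n),
    "assoc_member": rng.integers(0, 2, n),
    "farm_size_acres": rng.gamma(2.0, 1.5, n),
})

# Non-parametric view of the time-to-adoption distribution.
km = KaplanMeierFitter().fit(df["years_to_adoption"], event_observed=df["adopted"])

# Monotone (Weibull) vs. non-monotone (log-logistic) parametric AFT models,
# compared on fit quality and on the sign/significance of covariates.
for Model in (WeibullAFTFitter, LogLogisticAFTFitter):
    m = Model().fit(df, duration_col="years_to_adoption", event_col="adopted")
    print(Model.__name__, "AIC:", round(m.AIC_, 1))
    m.print_summary()
```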
The second paper measures Bangladeshi farmers' attitudes toward risk and ambiguity using experimental data. In different sessions, the experiment allows farmers to make decisions alone and to communicate with peers in groups of 3 and 6, to see how social exchanges among peers affect attitudes toward uncertainty. Combining the measured attitudes with household survey data, the paper investigates the factors affecting those attitudes as well as the role of risk aversion and ambiguity aversion in technology choice by farmers who face uncertainty alone, in a group of 3, or in a group of 6. It finds that Bangladeshi farmers in the sample are mostly risk and ambiguity averse. Moreover, their risk and ambiguity aversion differ between when they face uncertain prospects alone and when they can communicate with peer farmers before making decisions. In addition, farmers' demographic characteristics affect both risk and ambiguity aversion. Finally, findings suggest that the roles of risk and ambiguity aversion in technology adoption depend on which measure of uncertainty behavior is incorporated in the adoption model. While risk aversion increases the likelihood of technology adoption when farmers face uncertainty alone, only ambiguity aversion matters, and it reduces the likelihood of technology adoption, when farmers face uncertainty in groups of three. Neither risk aversion nor ambiguity aversion matters when farmers face uncertainty in groups of six.
The third paper presents an impact assessment of integrated pest management on sweet gourd in Bangladesh. It employs an instrumental variable and marginal treatment effects approach to estimate the impact of IPM on the yield and cost of sweet gourd in Bangladesh. The estimation methods consider both homogeneous and heterogeneous treatment effects. The paper finds that IPM adoption has a 7% to 34% yield advantage over traditional pest management practices. Results regarding the effect of IPM adoption on cost are mixed: estimated effects on production costs range from -1.2% to +42%, depending on the estimation method employed. However, most of the cost changes are not statistically significant. Therefore, while we can confidently argue that IPM adoption provides a yield advantage over non-adoption, we do not find a robust effect regarding a cost advantage of adoption. / Ph. D.
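A minimal sketch of the instrumental-variable logic behind such yield estimates, not the paper's actual model or data: two-stage least squares on synthetic data in which adoption is endogenous. The instrument, coefficients, and variable names are assumptions chosen only to show why the second-stage estimate differs from naive OLS.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Synthetic data: adoption is endogenous because unobserved ability `u`
# raises both the chance of adopting and the yield.
z = rng.normal(size=n)                      # instrument (e.g., distance to a demo plot)
u = rng.normal(size=n)                      # unobserved factor affecting both
adopt = ((0.8 * z + u + rng.normal(size=n)) > 0).astype(float)
log_yield = 1.0 + 0.20 * adopt + 0.5 * u + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), adopt])    # structural regressors
Z = np.column_stack([np.ones(n), z])        # constant + excluded instrument

# Stage 1: project the endogenous regressor onto the instruments.
adopt_hat = Z @ np.linalg.lstsq(Z, adopt, rcond=None)[0]

# Stage 2: regress the outcome on the fitted values.
X_hat = np.column_stack([np.ones(n), adopt_hat])
beta_2sls = np.linalg.lstsq(X_hat, log_yield, rcond=None)[0]

beta_ols = np.linalg.lstsq(X, log_yield, rcond=None)[0]
print("OLS adoption effect (biased): ", round(beta_ols[1], 3))
print("2SLS adoption effect:         ", round(beta_2sls[1], 3))
```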
344
Regularization, Uncertainty Estimation and Out of Distribution Detection in Convolutional Neural Networks. Krothapalli, Ujwal K. 11 September 2020.
Classification is an important task in the field of machine learning, and when classifiers are trained on images, a variety of problems can surface during inference. 1) The recent trend of using convolutional neural networks (CNNs) for various machine learning tasks has yielded many successes, and CNNs are surprisingly expressive learners owing to their large number of parameters and numerous stacked layers. This increased model complexity also increases the risk of overfitting to the training data. Increasing the size of the training data using synthetic or artificial means (data augmentation) helps CNNs learn better by reducing overfitting and producing a regularization effect that improves generalization of the learned model. 2) CNNs have proven to be very good classifiers and generally localize objects well; however, the loss functions typically used to train classification CNNs do not penalize inability to localize an object, nor do they take into account an object's relative size in the given image when producing confidence measures. 3) CNNs always produce an output in the space of the learnt classes with high confidence when predicting the class of a given image, regardless of what the image consists of. For example, an ImageNet-1K-trained CNN cannot indicate that a given image contains none of the objects it was trained on when it is provided with an image of a dinosaur (not an ImageNet category) or an image with the main object cut out of it (context only). We approach these three problems using bounding box information and by learning to produce high-entropy predictions on out-of-distribution classes.
To address the first problem, we propose a novel regularization method called CopyPaste. The idea behind our approach is that images from the same class share similar context and can be 'mixed' together without affecting the labels. We use bounding box annotations that are available for a subset of ImageNet images. We consistently outperform the standard baseline and explore the idea of combining our approach with other recent regularization methods as well. We show consistent performance gains on PASCAL VOC07, MS-COCO and ImageNet datasets.
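The abstract does not spell out the CopyPaste mechanics, so the sketch below is only one plausible reading of 'mixing' same-class images with bounding box annotations: paste the annotated object region from one image onto another image of the same class and keep the label unchanged. The function and variable names are assumptions, not the dissertation's implementation.

```python
import numpy as np

def copypaste_same_class(img_a, img_b, box_b):
    """Paste the annotated object from img_b onto img_a (both from the same class).

    img_a, img_b: HxWxC uint8 arrays of identical size.
    box_b:        (x1, y1, x2, y2) bounding box of the object in img_b.
    Returns the mixed image; the label stays the same because both images share a class.
    """
    x1, y1, x2, y2 = box_b
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    return mixed

# Illustrative call with random arrays standing in for two same-class images.
rng = np.random.default_rng(3)
a = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
b = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
mixed = copypaste_same_class(a, b, box_b=(50, 60, 180, 200))
print(mixed.shape)  # the label of `a` (== label of `b`) is reused as-is
```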
For the second problem we employ objectness measures to learn meaningful CNN predictions. Objectness is a measure of likelihood of an object from any class being present in a given image. We present a novel approach to object localization that combines the ideas of objectness and label smoothing during training. Unlike previous methods, we compute a smoothing factor that is adaptive based on relative object size within an image.
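A small sketch of label smoothing driven by relative object size; the specific mapping used here (true-class mass equal to the bounding-box coverage of the image, remainder spread uniformly) is an assumption for illustration, not the dissertation's exact formula.

```python
import numpy as np

def adaptive_smooth_target(box, img_w, img_h, true_class, num_classes):
    """Soft target whose smoothing depends on relative object size.

    box: (x1, y1, x2, y2) bounding box of the object in pixels.
    Assumption: true-class mass = box_area / image_area (an objectness proxy),
    with the remaining mass spread uniformly over all classes.
    """
    x1, y1, x2, y2 = box
    coverage = ((x2 - x1) * (y2 - y1)) / float(img_w * img_h)  # fraction of image covered
    target = np.full(num_classes, (1.0 - coverage) / num_classes)
    target[true_class] += coverage
    return target  # sums to 1

# A small object yields a heavily smoothed target; a large object is near one-hot.
print(adaptive_smooth_target((10, 10, 60, 60), 224, 224, true_class=2, num_classes=5))
print(adaptive_smooth_target((0, 0, 220, 220), 224, 224, true_class=2, num_classes=5))
```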
We present extensive results using ImageNet and OpenImages to demonstrate that CNNs trained using adaptive label smoothing are much less likely to be overconfident in their predictions, as compared to CNNs trained using hard targets. We train CNNs using objectness computed from bounding box annotations that are available for the ImageNet dataset and the OpenImages dataset. We perform extensive experiments with the aim of improving the ability of a classification CNN to learn better localizable features, and we show improvements in object detection performance, calibration, and classification on standard datasets. We also show qualitative results using class activation maps to illustrate the improvements.
Lastly, we extend the second approach to train CNNs on out-of-distribution and context-only images using a uniform probability distribution over the set of target classes for such images. This is a novel way to use uniform smooth labels, as it allows the model to learn better confidence bounds. We sample 1000 classes (mutually exclusive to the 1000 classes in ImageNet-1K) from the larger ImageNet dataset comprising about 22K classes. We compare our approach with standard baselines and provide entropy and confidence plots for in-distribution and out-of-distribution validation sets. / Doctor of Philosophy / Categorization is an important task in everyday life. Humans can classify objects in pictures effortlessly. Machines can also be trained to classify objects in images. With the tremendous growth in the area of artificial intelligence, machines have surpassed human performance on some tasks. However, there are plenty of challenges for artificial neural networks. Convolutional Neural Networks (CNNs) are a type of artificial neural network. 1) Sometimes, CNNs simply memorize the samples provided during training and fail to work well with images that are slightly different from the training samples. 2) CNNs have proven to be very good classifiers and generally localize objects well; however, the objective functions typically used to train classification CNNs do not penalize inability to localize an object, nor do they take into account an object's relative size in the given image. 3) CNNs always produce an output in the space of the learnt classes with high confidence when predicting the class of a given image, regardless of what the image consists of. For example, a CNN trained on ImageNet-1K (a popular dataset) cannot indicate that a given image contains none of the objects it was trained on when it is provided with an image of a dinosaur (not an ImageNet category) or an image with the main object cut out of it (images with background only).
We approach these three different problems using object position information and learning to produce low confidence predictions on out of distribution classes.
To address the first problem, we propose a novel regularization method called CopyPaste. The idea behind our approach is that images from the same class share similar context and can be 'mixed' together without affecting the labels. We use bounding box annotations that are available for a subset of ImageNet images. We consistently outperform the standard baseline and explore the idea of combining our approach with other recent regularization methods as well. We show consistent performance gains on PASCAL VOC07, MS-COCO and ImageNet datasets.
For the second problem we employ objectness measures to learn meaningful CNN predictions. Objectness is a measure of likelihood of an object from any class being present in a given image. We present a novel approach to object localization that combines the ideas of objectness and label smoothing during training. Unlike previous methods, we compute a smoothing factor that is adaptive based on relative object size within an image.
We present extensive results using ImageNet and OpenImages to demonstrate that CNNs trained using adaptive label smoothing are much less likely to be overconfident in their predictions, as compared to CNNs trained using hard targets. We train CNNs using objectness computed from bounding box annotations that are available for the ImageNet dataset and the OpenImages dataset. We perform extensive experiments with the aim of improving the ability of a classification CNN to learn better localizable features, and we show improvements in object detection performance, calibration, and classification on standard datasets. We also show qualitative results to illustrate the improvements.
Lastly, we extend the second approach to train CNNs on out-of-distribution and context-only images using a uniform probability distribution over the set of target classes for such images. This is a novel way to use uniform smooth labels, as it allows the model to learn better confidence bounds. We sample 1000 classes (mutually exclusive to the 1000 classes in ImageNet-1K) from the larger ImageNet dataset comprising about 22K classes. We compare our approach with standard baselines on "in distribution" and "out of distribution" validation sets.
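A hedged sketch of training with uniform targets on out-of-distribution images, written in PyTorch; the equal loss weighting, tensor shapes, and random inputs are assumptions for illustration, not the thesis's training recipe.

```python
import torch
import torch.nn.functional as F

def mixed_loss(logits_in, labels_in, logits_ood):
    """Cross-entropy on in-distribution samples plus a uniform-target term on
    out-of-distribution samples, pushing the model toward maximum-entropy
    (low-confidence) predictions on OOD or context-only images.
    The equal weighting of the two terms is an assumption."""
    num_classes = logits_in.shape[1]
    ce_in = F.cross_entropy(logits_in, labels_in)
    uniform = torch.full_like(logits_ood, 1.0 / num_classes)
    ce_ood = -(uniform * F.log_softmax(logits_ood, dim=1)).sum(dim=1).mean()
    return ce_in + ce_ood

# Illustrative call with random logits standing in for a CNN's outputs.
torch.manual_seed(0)
loss = mixed_loss(torch.randn(8, 1000), torch.randint(0, 1000, (8,)),
                  torch.randn(8, 1000))
print(float(loss))
```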
345
The Use of Central Tendency Measures from an Operational Short Lead-time Hydrologic Ensemble Forecast System for Real-time Forecasts. Adams, Thomas Edwin III. 05 June 2018.
A principal factor contributing to hydrologic prediction uncertainty is modeling error introduced by the measurement and prediction of precipitation. The research presented demonstrates the necessity of using probabilistic methods to quantify hydrologic forecast uncertainty due to the magnitude of precipitation errors. Significant improvements have been made in precipitation estimation that have led to greatly improved hydrologic simulations. However, advancements in the prediction of future precipitation have been marginal. This research shows that gains in forecasted precipitation accuracy have not significantly improved hydrologic forecasting accuracy. The use of forecasted precipitation, referred to as quantitative precipitation forecast (QPF), in hydrologic forecasting remains commonplace. Non-zero QPF is shown to improve hydrologic forecasts, but QPF duration should be limited to 6 to 12 hours for flood forecasting, particularly for fast-responding watersheds. Probabilistic hydrologic forecasting captures the hydrologic forecast error introduced by QPF for all forecast durations. However, public acceptance of probabilistic hydrologic forecasts is problematic. Central tendency measures from a probabilistic hydrologic forecast, such as the ensemble median or mean, have the appearance of a single-valued deterministic forecast. The research presented shows that hydrologic ensemble median and mean forecasts of river stage have smaller forecast errors than current operational methods, with forecast lead time beginning at 36 hours for fast-response basins. Overall, hydrologic ensemble median and mean forecasts display smaller forecast error than current operational forecasts. / Ph. D. / Flood forecasting is uncertain, in part, because of errors in measuring precipitation and in predicting the location and amount of precipitation accumulation in the future. Because of this, the public and other end-users of flood forecasts should understand the uncertainties inherent in forecasts. But there is reluctance by many to accept forecasts that explicitly convey flood forecast uncertainty, such as "there is a 67% chance your house will be flooded". Instead, most prefer "your house will not be flooded" or something like "flood levels will reach 0.5 feet in your house". We hope the latter does not happen, but due to forecast uncertainties, explicit statements such as "flood levels will reach 0.5 feet in your house" will be wrong. If, by chance, flood levels do reach exactly 0.5 feet, that will have been a lucky forecast, very likely involving some skill, but the flood level could have reached 0.43 or 0.72 feet as well. This research presents a flood forecasting method that improves on traditional methods by directly incorporating uncertainty information into flood forecasts that still look like the forecasts people are familiar and comfortable with and can readily understand.
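The comparison described above can be illustrated with a toy verification script: compute the mean absolute error of a single-valued deterministic forecast against the ensemble mean and median. All numbers below are synthetic stand-ins, not results from the research.

```python
import numpy as np

def mae(pred, obs):
    """Mean absolute error of a forecast series against observations."""
    return float(np.mean(np.abs(pred - obs)))

# Synthetic stand-in: `obs` is the verifying river stage, `det` a single-valued
# deterministic forecast, and `ens` an m-member probabilistic ensemble.
rng = np.random.default_rng(4)
obs = 10 + np.cumsum(rng.normal(0, 0.3, size=48))          # hourly river stage (ft)
det = obs + rng.normal(0.5, 0.8, size=obs.size)            # biased deterministic forecast
ens = obs + rng.normal(0.0, 0.8, size=(40, obs.size))      # 40 ensemble members

print("MAE deterministic forecast:", round(mae(det, obs), 3))
print("MAE ensemble mean:         ", round(mae(ens.mean(axis=0), obs), 3))
print("MAE ensemble median:       ", round(mae(np.median(ens, axis=0), obs), 3))
```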
346
Validation and Uncertainty Quantification of Doublet Lattice Flight Loads using Flight Test Data. Olson, Nicholai Kenneth Keeney. 19 July 2018.
This paper presents a framework for tuning, validating, and quantifying uncertainties for flight loads. The flight loads are computed using a Nastran doublet lattice model and are validated using measured data from a flight loads survey for a Cessna Model 525B business jet equipped with Tamarack® Aerospace Group’s active winglet modification, ATLAS® (Active Technology Load Alleviation System). ATLAS® allows for significant aerodynamic improvements to be realized by reducing loads to below the values of the original, unmodified airplane. Flight loads are measured using calibrated strain gages and are used to tune and validate a Nastran doublet-lattice flight loads model. Methods used to tune and validate the model include uncertainty quantification of the Nastran model form and lead to an uncertainty-quantified model which can be used to estimate flight loads at any given flight condition within the operating envelope of the airplane. The methods presented herein improve the efficiency of the loads process and reduce conservatism in design loads through improved prediction techniques. Regression techniques and uncertainty quantification methods are presented to more accurately assess the complexities in comparing models to flight test results. / Master of Science / This paper presents a process for correlating analytical airplane loads models to flight test data and validating the results. The flight loads are computed using Nastran, a structural modeling tool coupled with an aerodynamic loads solver. The flight loads models are correlated to flight test data and are validated using measured data from a flight loads survey for a Cessna Model 525B business jet equipped with Tamarack® Aerospace Group’s active winglet modification, ATLAS® (Active Technology Load Alleviation System). ATLAS® allows for significant aerodynamic improvements and efficiency gains to be realized by reducing loads to below the values of the original, unmodified airplane. Flight loads are measured using a series of strain gage sensors mounted on the wing. These sensors are calibrated to measure aerodynamic loads and are used to tune and validate the Nastran flight loads model. Methods used to tune and validate the model include quantification of error and uncertainties in the model. These efforts lead to a substantially increased understanding of the model limitations and uncertainties, which is especially valuable at the corners of the operating envelope of the airplane. The methods presented herein improve the efficiency of the loads process and reduce conservatism in design loads through improved prediction techniques. The results provide a greater amount of guidance for decision making throughout the design and certification of a load alleviation system and similar airplane aerodynamic improvements.
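As a generic illustration of tuning a loads model to flight-test measurements with quantified uncertainty, and not the thesis's actual regression or data, the sketch below fits a linear correction between predicted and measured wing-root bending moment and returns a prediction interval at a new condition; all values and variable names are assumptions.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in: model-predicted wing-root bending moment vs. the value
# derived from calibrated strain gages over a set of flight-test conditions.
rng = np.random.default_rng(5)
m_pred = rng.uniform(50, 400, size=40)                     # model prediction (kN*m)
m_meas = 0.93 * m_pred + rng.normal(0, 8, size=40)         # flight-test measurement

# Linear tuning model: measured = a + b * predicted.
X = np.column_stack([np.ones_like(m_pred), m_pred])
beta, _res, *_ = np.linalg.lstsq(X, m_meas, rcond=None)
resid = m_meas - X @ beta
dof = len(m_meas) - 2
s2 = resid @ resid / dof

def prediction_interval(x_new, level=0.95):
    """Prediction interval for the tuned load at a new model-predicted value."""
    x = np.array([1.0, x_new])
    var = s2 * (1.0 + x @ np.linalg.inv(X.T @ X) @ x)
    t = stats.t.ppf(0.5 + level / 2, dof)
    center = x @ beta
    return center - t * np.sqrt(var), center + t * np.sqrt(var)

print("tuning coefficients (a, b):", np.round(beta, 3))
print("95% PI at m_pred = 250:", tuple(round(v, 1) for v in prediction_interval(250.0)))
```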
347
Aerodynamic Uncertainty Quantification and Estimation of Uncertainty Quantified Performance of Unmanned Aircraft Using Non-Deterministic Simulations. Hale II, Lawrence Edmond. 24 January 2017.
This dissertation addresses model form uncertainty quantification, non-deterministic simulations, and sensitivity analysis of the results of these simulations, with a focus on application to the analysis of unmanned aircraft systems. The model form uncertainty quantification utilizes equation error to estimate the error between an identified model and flight test results. The errors are then related to aircraft states, and prediction intervals are calculated. This method for model form uncertainty quantification results in uncertainty bounds that vary with the aircraft state: narrower where consistent information has been collected and wider where data are not available. Non-deterministic simulations can then be performed to provide uncertainty-quantified estimates of the system performance. The model form uncertainties could be time-varying, so multiple sampling methods were considered. The two methods utilized were a fixed uncertainty level and a rate-bounded variation in the uncertainty level. For analysis using a fixed uncertainty level, the corner points of the model form uncertainty were sampled, reducing computational time. The second method better represents the uncertainty but requires significantly more simulations to sample it. The uncertainty-quantified performance estimates are compared to estimates based on flight tests to check the accuracy of the results.
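A compact sketch of the fixed-uncertainty-level idea described above: evaluate a performance model at every corner of the model-form uncertainty box instead of drawing a large time-varying sample. The parameters, bounds, and placeholder performance function are illustrative assumptions, not the dissertation's aircraft model.

```python
import itertools
import numpy as np

def simulate_range_km(cd0, cl_alpha, thrust_scale):
    """Placeholder performance model standing in for the aircraft simulation;
    the real study would run a full flight simulation at each sample."""
    return 1200.0 * thrust_scale * (cl_alpha / 5.0) / (1.0 + 20.0 * cd0)

# Model-form uncertainty bounds for three parameters (illustrative numbers).
bounds = {
    "cd0":          (0.020, 0.028),
    "cl_alpha":     (4.6, 5.4),
    "thrust_scale": (0.95, 1.05),
}

# Fixed-uncertainty-level sampling: evaluate every corner of the uncertainty box,
# i.e. 2^k runs rather than a large time-varying Monte Carlo sample.
corners = list(itertools.product(*bounds.values()))
ranges = np.array([simulate_range_km(*c) for c in corners])
print(f"{len(corners)} corner runs -> range between "
      f"{ranges.min():.0f} and {ranges.max():.0f} km")
```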
Sensitivity analysis is performed on the uncertainty quantified performance estimates to provide information on which of the model form uncertainties contribute most to the uncertainty in the performance estimates. The proposed method uses the results from the fixed uncertainty level analysis that utilizes the corner points of the model form uncertainties. The sensitivity of each parameter is estimated based on corner values of all the other uncertain parameters. This results in a range of possible sensitivities for each parameter dependent on the true value of the other parameters. / Ph. D. / This dissertation examines a process that can be utilized to quantify the uncertainty associated with an identified model, the performance of the system accounting for the uncertainty, and the sensitivity of the performance estimates to the various uncertainties. This uncertainty is present in the identified model because of modeling errors and will tend to increase as the states move away from locations where data has been collected. The method used in this paper to quantify the uncertainty attempts to represent this in a qualitatively correct sense. The uncertainties provide information that is used to predict the performance of the aircraft. A number of simulations are performed, with different values for the uncertain terms chosen for each simulation. This provides a family of possible results to be produced. The uncertainties can be sampled in various manners, and in this study were sampled at fixed levels and at time varying levels. The sampling of fixed uncertainty level required fewer samples, improving computational requirements. Sampling with time varying uncertainty better captures the nature of the uncertainty but requires significantly more simulations. The results provide a range of the expected performance based on the uncertainty.
Sensitivity analysis is performed to determine which of the input uncertainties produce the greatest uncertainty in the performance estimates. To account for the uncertainty in the true parameter values, the sensitivity is predicted for a number of possible values of the uncertain parameters. This results in a range of possible sensitivities for each parameter dependent on the true value of the other parameters. The range of sensitivities can be utilized to determine the future testing to be performed.
348
Regulatory and Economic Consequences of Empirical Uncertainty for Urban Stormwater Management. Aguilar, Marcus F. 10 October 2016.
The responsibility for mitigation of the ecological effects of urban stormwater runoff has been delegated to local government authorities through the Clean Water Act's National Pollutant Discharge Elimination Systems' Stormwater (NPDES SW), and Total Maximum Daily Load (TMDL) programs. These programs require that regulated entities reduce the discharge of pollutants from their storm drain systems to the "maximum extent practicable" (MEP), using a combination of structural and non-structural stormwater treatment — known as stormwater control measures (SCMs). The MEP regulatory paradigm acknowledges that there is empirical uncertainty regarding SCM pollutant reduction capacity, but that by monitoring, evaluation, and learning, this uncertainty can be reduced with time. The objective of this dissertation is to demonstrate the existing sources and magnitude of variability and uncertainty associated with the use of structural and non-structural SCMs towards the MEP goal, and to examine the extent to which the MEP paradigm of iterative implementation, monitoring, and learning is manifest in the current outcomes of the paradigm in Virginia.
To do this, three research objectives were fulfilled. First, the non-structural SCMs employed in Virginia in response to the second phase of the NPDES SW program were catalogued, and the variability in what is considered a "compliant" stormwater program was evaluated. Next, the uncertainty of several commonly used stormwater flow measurement devices was quantified in the laboratory and field, and the importance of this uncertainty for regulatory compliance was discussed. Finally, the third research objective quantified the uncertainty associated with structural SCMs, as a result of measurement error and environmental stochasticity. The impacts of this uncertainty are discussed in the context of the large number of structural SCMs prescribed in TMDL Implementation Plans. The outcomes of this dissertation emphasize the challenge that empirical uncertainty creates for cost-effective spending of local resources on flood control and water quality improvements, while successfully complying with regulatory requirements. The MEP paradigm acknowledged this challenge, and while the findings of this dissertation confirm the flexibility of the MEP paradigm, they suggest that the resulting magnitude of SCM implementation has outpaced the ability to measure and functionally define SCM pollutant removal performance. This gap between implementation, monitoring, and improvement is discussed, and several potential paths forward are suggested. / Ph. D. / Responsibility for mitigation of the ecological effects of urban stormwater runoff has largely been delegated to local government authorities through several Clean Water Act programs, which require that regulated entities reduce the discharge of pollutants from their storm drain systems to the “maximum extent practicable” (MEP). The existing definition of MEP requires a combination of structural and non-structural stormwater treatment – known as stormwater control measures (SCMs). The regulations acknowledge that there is uncertainty regarding the ability of SCMs to reduce pollution, but suggest that this uncertainty can be reduced over time, by monitoring and evaluation of SCMs. The objective of this dissertation is to demonstrate the existing sources and magnitude of variability and uncertainty associated with the use of structural and non-structural SCMs towards the MEP goal, and to examine the extent to which the MEP paradigm of implementation, monitoring, and learning appears in the current outcomes of the paradigm in Virginia.
To do this, three research objectives were fulfilled. First, the non-structural SCMs employed in Virginia were catalogued, and the variability in what is considered a “compliant” stormwater program was evaluated. Next, the uncertainty of several commonly used stormwater flow measurement devices was quantified in the laboratory and field, and the importance of this uncertainty for regulatory compliance was discussed. Finally, the third research objective quantified the uncertainty associated with structural SCMs, as a result of measurement error and environmental variability. The impacts of this uncertainty are discussed in the context of the large number of structural SCMs prescribed by Clean Water Act programs. The outcomes of this dissertation emphasize the challenge that uncertainty creates for cost-effective spending of local resources on flood control and water quality improvements, while successfully complying with regulatory requirements. The MEP paradigm acknowledged this challenge, and while the findings of this dissertation confirm the flexibility of the MEP paradigm, they suggest that the resulting magnitude of SCM implementation has outpaced the ability to measure and functionally define SCM pollutant removal performance. This gap between implementation, monitoring, and improvement is discussed, and several potential paths forward are suggested.
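As one hedged illustration of how flow-measurement uncertainty of the kind quantified above can be propagated, the sketch below runs a Monte Carlo analysis through a 90-degree V-notch weir rating; the head, sensor noise, and discharge-coefficient values are assumptions for illustration, not measurements from the dissertation.

```python
import numpy as np

# Monte Carlo propagation of head-measurement and discharge-coefficient
# uncertainty through a 90-degree V-notch weir rating:
#   Q = Cd * (8/15) * sqrt(2 g) * tan(theta/2) * h^2.5
rng = np.random.default_rng(6)
g = 9.81
h_nominal = 0.12                                    # measured head (m)
h = rng.normal(h_nominal, 0.003, size=100_000)      # +/- 3 mm level-sensor noise
cd = rng.normal(0.58, 0.02, size=100_000)           # discharge-coefficient uncertainty

q = cd * (8.0 / 15.0) * np.sqrt(2 * g) * np.tan(np.pi / 4) * h ** 2.5  # m^3/s
lo, hi = np.percentile(q, [2.5, 97.5])
print(f"flow estimate: {q.mean()*1000:.2f} L/s, 95% interval: "
      f"{lo*1000:.2f}-{hi*1000:.2f} L/s ({(hi-lo)/q.mean()*100:.0f}% of the mean)")
```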
349
Rethinking communication in risk interpretation and action. Khan, S., Mishra, Jyoti L., Kuna-hui, E.L., Doyle, E.E.H. 06 June 2017.
Communication is fundamental to the transfer of information between individuals, agencies and organizations, and it is therefore crucial to planning and decision-making, particularly in cases of uncertainty and risk. This paper brings forth some critical aspects of communication that need to be acknowledged and considered while managing risks. Most previous studies and theories on natural hazards and disaster management take a limited perspective on communication, and hence their implications are limited to awareness, warnings and emergency response to selected events. This paper exposes the role of communication as a moderator not just of risk interpretation and action but also of various factors responsible for shaping the overall response, such as individual decision-making under uncertainty, heuristics, past experiences, learning, trust, complexity, scale and the social context. It suggests that communication is a process that influences decision-making in multiple ways, and it therefore plays a critical role in shaping local responses to various risks. It opens up the scope for using communication beyond its current use as a tool to manage emergency situations. An in-depth understanding of ongoing communication and its implications can help to plan risk management more effectively over time, rather than as a short-term response.
350
Denoising and contrast constancy. McIlhagga, William H. January 2004.
Contrast constancy is the ability to perceive object contrast independent of size or spatial frequency, even though these affect both retinal contrast and detectability. Like other perceptual constancies, it is evidence that the visual system infers the stable properties of objects from the changing properties of retinal images. Here it is shown that perceived contrast is based on an optimal thresholding estimator of object contrast, which is identical to the VisuShrink estimator used in wavelet denoising.
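For reference, the VisuShrink estimator mentioned in the abstract applies soft thresholding at the universal threshold sigma * sqrt(2 ln n). The sketch below shows that standard rule on a synthetic 1-D signal using PyWavelets, as a reference point rather than the paper's psychophysical model; the wavelet choice, decomposition level, and test signal are assumptions.

```python
import numpy as np
import pywt

def visushrink_denoise(signal, wavelet="db4", level=4):
    """VisuShrink: soft-threshold wavelet detail coefficients at the universal
    threshold sigma * sqrt(2 * ln n), with sigma estimated from the finest-scale
    coefficients via the median absolute deviation."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)

# Illustrative use on a noisy 1-D signal.
rng = np.random.default_rng(7)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + rng.normal(0, 0.3, size=t.size)
recovered = visushrink_denoise(noisy)
print("noise std before:", round(np.std(noisy - clean), 3),
      "after:", round(np.std(recovered[:t.size] - clean), 3))
```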