341

Assessing the Performance of HSPF When Using the High Water Table Subroutine to Simulate Hydrology in a Low-Gradient Watershed

Forrester, Michael Scott 30 May 2012 (has links)
Modeling ground-water hydrology is critical in low-gradient, high water table watersheds where ground-water is the dominant contribution to streamflow. The Hydrological Simulation Program-FORTRAN (HSPF) model has two different subroutines available to simulate ground water, the traditional ground-water (TGW) subroutine and the high water table (HWT) subroutine. The HWT subroutine has more parameters and requires more data but was created to enhance model performance in low-gradient, high water table watershed applications. The objective of this study was to compare the performance and uncertainty of the TGW and HWT subroutines when applying HSPF to a low-gradient watershed in the Coastal Plain of northeast North Carolina. One hundred thousand Monte Carlo simulations were performed to generate data needed for model performance comparison. The HWT model generated considerably higher Nash-Sutcliffe efficiency (NSE) values while performing slightly worse when simulating the 50% lowest and 10% highest flows. Model uncertainty was assessed using the Average Relative Interval Length (ARIL) metric. The HWT model operated with more average uncertainty throughout all flow regimes. Based on the results, the HWT subroutine is preferable when applying HSPF to a low-gradient watershed and the accuracy of simulated stream discharge is important. In situations where a balance between performance and uncertainty is called for, the choice of which subroutine to employ is less clear cut. / Master of Science
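The comparison in this abstract rests on two metrics, the Nash-Sutcliffe efficiency (NSE) and the Average Relative Interval Length (ARIL). The Python sketch below shows how such metrics could be computed from Monte Carlo ensemble output; the array names, file names, and the choice of a 2.5-97.5 percentile interval are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the observed mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def aril(observed, ensemble, lower_q=2.5, upper_q=97.5):
    """Average Relative Interval Length: mean width of the ensemble's predictive interval
    relative to the observed flow at each time step (illustrative formulation)."""
    lo = np.percentile(ensemble, lower_q, axis=0)
    hi = np.percentile(ensemble, upper_q, axis=0)
    return np.mean((hi - lo) / np.asarray(observed))

# Hypothetical Monte Carlo output: n_runs parameter sets by T daily flows (assumed files)
# ensemble = np.load("hwt_monte_carlo_flows.npy")   # shape (n_runs, T)
# obs = np.load("observed_flows.npy")               # shape (T,)
# print(nse(obs, np.median(ensemble, axis=0)), aril(obs, ensemble))
```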
342

A risk management process for complex projects

Brown, Robert G. 21 July 2009 (has links)
A more effective and efficient method to identify, assess, track and document project risks was explored. Using the systems engineering approach, an adaptable, repeatable risk management process was designed for complex projects (typically multi-million dollar electronics / defense contracts with advanced technology, aggressive schedules and multiple contractors / subcontractors). Structured tools and techniques were synthesized to increase the probability of risk identification, to facilitate qualitative and quantitative risk assessment, to graphically portray risk reduction priorities and to provide a vehicle for improved communication and traceability of risk reduction activity across the project team. A description of the process used to survey current risk management methods, to ascertain the critical risk management process requirements and to define a means to prioritize risks for more effective resource allocation is included. / Master of Science
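As one illustration of the kind of structured prioritization tool this abstract describes, the sketch below scores hypothetical risks by likelihood and consequence and ranks them for resource allocation. The ordinal scales, the likelihood-times-consequence score, and the register entries are invented for illustration and are not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain), assumed ordinal scale
    consequence: int  # 1 (negligible) .. 5 (severe), assumed ordinal scale

    @property
    def exposure(self) -> int:
        # Simple likelihood x consequence score used to rank mitigation priorities
        return self.likelihood * self.consequence

# Hypothetical register entries for a multi-contractor electronics project
register = [
    Risk("Immature RF component technology", likelihood=4, consequence=5),
    Risk("Subcontractor schedule slip", likelihood=3, consequence=4),
    Risk("Requirements churn from customer", likelihood=2, consequence=3),
]

for r in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{r.exposure:>2}  {r.name}")
```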
343

Three Essays on Adoption and Impact of Agricultural Technology in Bangladesh

Ahsanuzzaman, Ahsanuzzaman 23 June 2015 (has links)
New agricultural technologies can improve productivity to meet the increased demand for food that places pressure on agricultural production systems in developing countries. Because technological innovation is one of the major factors shaping agriculture in both developing and developed countries, it is important to identify factors that help or hinder the adoption process. Adoption analysis can assist policy makers in making informed decisions about dissemination of technologies that are under consideration. It is also important to estimate the impact of a technology. This dissertation contains three essays that estimate factors affecting integrated pest management (IPM) adoption and the impact of IPM on sweet gourd farming in Bangladesh. The first essay estimates factors that affect the timing of IPM adoption in Bangladesh. It employs duration models, fully parametric and semiparametric, and (i) compares results from different estimation methods to provide the best model for the data, and (ii) identifies factors that affect the length of time before Bangladeshi farmers adopt an agricultural technology. The paper provides two conclusions: 1) even though the non-parametric estimate of the hazard function indicated a non-monotone model such as log-normal or log-logistic, no differences are found in the sign and significance of the estimated coefficients between the non-monotone and monotone models. 2) the adoption decision is not directly influenced by economic factors but rather by factors related to information diffusion and farmers' non-economic characteristics such as age and education. In particular, farmers' age and education, membership in an association, training, distance of the farmer's house from local and town markets, and farmers' perception about the use of IPM affect the length of time to adoption. Farm size is the only variable closely related to economic factors that is found to be significant, and it decreases the length of time to adoption. The second paper measures Bangladeshi farmers' attitudes toward risk and ambiguity using experimental data. In different sessions, the experiment allows farmers to make decisions alone and to communicate with peers in groups of 3 and 6, to see how social exchanges among peers affect attitudes toward uncertainty. Combining the measured attributes with household survey data, the paper investigates the factors affecting those attributes as well as the role of risk aversion and ambiguity aversion in technology choice by farmers who face uncertainty alone, in a group of 3, or in a group of 6. It finds that Bangladeshi farmers in the sample are mostly risk and ambiguity averse. Moreover, their risk and ambiguity aversion differ between when they face the uncertain prospects alone and when they can communicate with other peer farmers before making decisions. In addition, farmers' demographic characteristics affect both risk and ambiguity aversion. Finally, findings suggest that the roles of risk and ambiguity aversion in technology adoption depend on which measure of uncertainty behavior is incorporated in the adoption model. While risk aversion increases the likelihood of technology adoption when farmers face uncertainty alone, only ambiguity aversion matters and it reduces the likelihood of technology adoption when farmers face uncertainty in groups of three. Neither risk aversion nor ambiguity aversion matters when farmers face uncertainty in groups of six. 
The third paper presents an impact assessment of integrated pest management on sweet gourd in Bangladesh. It employs an instrumental variable and marginal treatment effects approach to estimate the impact of IPM on yield and cost of sweet gourd in Bangladesh. The estimation methods consider both homogeneous and heterogeneous treatment effects. The paper finds that IPM adoption has a 7%-34% yield advantage over traditional pest management practices. Results regarding the effect of IPM adoption on cost are mixed. IPM adoption changes production costs by between -1.2% and +42%, depending on the estimation method employed. However, most of the cost changes are not statistically significant. Therefore, while we confidently argue that IPM adoption provides a yield advantage over non-adoption, we do not find a robust effect regarding a cost advantage of adoption. / Ph. D.
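The first essay's point about a non-monotone hazard can be illustrated numerically: a log-logistic duration distribution has a hazard that rises and then falls, unlike the monotone Weibull. The sketch below fits nothing from the dissertation's data; it simply evaluates the two hazards with assumed parameters using SciPy's `fisk` (log-logistic) and `weibull_min` distributions.

```python
import numpy as np
from scipy import stats

t = np.linspace(0.1, 10.0, 50)          # years until adoption (illustrative grid)

# Log-logistic (scipy's 'fisk') with assumed shape c > 1 gives a hump-shaped hazard
loglogistic = stats.fisk(c=2.0, scale=3.0)
h_loglogistic = loglogistic.pdf(t) / loglogistic.sf(t)

# Weibull with shape c > 1 gives a monotonically increasing hazard
weibull = stats.weibull_min(c=1.5, scale=3.0)
h_weibull = weibull.pdf(t) / weibull.sf(t)

print("log-logistic hazard peaks near t =", t[np.argmax(h_loglogistic)])
print("Weibull hazard keeps rising to the end of the grid:", np.argmax(h_weibull) == len(t) - 1)
```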
344

Methods of Model Uncertainty: Bayesian Spatial Predictive Synthesis

Cabel, Danielle 05 1900 (has links)
This dissertation develops a new method of modeling uncertainty with spatial data called Bayesian spatial predictive synthesis (BSPS) and compares its predictive accuracy to established methods. Spatial data are often non-linear, complex, and difficult to capture with a single model. Existing methods such as model selection or simple model ensembling fail to consider the critical spatially varying model uncertainty problem; different models perform better or worse in different regions. BSPS can capture the model uncertainty by specifying a latent factor coefficient model that varies spatially as a synthesis function. This allows the model coefficients to vary across a region to achieve flexible spatial model ensembling. This method is derived from the theoretically best approximation of the data generating process (DGP), where the predictions are exact minimax. Two Markov chain Monte Carlo (MCMC) based algorithms are implemented in the BSPS framework for full uncertainty quantification, along with a variational Bayes strategy for faster point inference. This method is also extended for general responses. The examples in this dissertation include multiple simulation studies and two real world data applications. Through these examples, the performance and predictive power of BSPS is shown against various standard spatial models, ensemble methods, and machine learning methods. BSPS is able to maintain predictive accuracy as well as maintain interpretability of the prediction mechanisms. / Statistics
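To give a concrete sense of "flexible spatial model ensembling", the sketch below combines two hypothetical model predictions with weights that vary smoothly over space. This is only an illustrative caricature of the idea; it is not the BSPS synthesis function, nor its MCMC or variational machinery, and all coordinates, predictions, and kernel parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D prediction locations and two candidate models' predictions there
coords = rng.uniform(0.0, 1.0, size=(200, 2))
pred_a = np.sin(4 * coords[:, 0])                 # model A: assumed to do better in one part of the region
pred_b = np.cos(4 * coords[:, 1]) + 0.3           # model B: assumed to do better elsewhere

def spatial_weights(xy, center=(0.25, 0.5), length_scale=0.3):
    """Toy spatially varying weight for model A: high near `center`, decaying with distance."""
    d2 = np.sum((xy - np.asarray(center)) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

w_a = spatial_weights(coords)                      # weight on model A at each location
synthesis = w_a * pred_a + (1.0 - w_a) * pred_b    # location-specific convex combination

print("weight on model A ranges from %.2f to %.2f across the region" % (w_a.min(), w_a.max()))
```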
345

Rethinking communication in risk interpretation and action

Khan, S., Mishra, Jyoti L., Kuna-hui, E.L., Doyle, E.E.H. 06 June 2017 (has links)
Yes / Communication is fundamental to the transfer of information between individuals, agencies and organizations, and therefore, it is crucial to planning and decision-making particularly in cases of uncertainty and risk. This paper brings forth some critical aspects of communication that need to be acknowledged and considered while managing risks. Most of the previous studies and theories on natural hazards and disaster management have limited perspective on communication, and hence, its implication is limited to awareness, warnings and emergency response to some selected events. This paper exposes the role of communication as a moderator of not just risk interpretation and action but also various factors responsible for shaping overall response, such as individual decision-making under uncertainty, heuristics, past experiences, learning, trust, complexity, scale and the social context. It suggests that communication is a process that influences decision-making in multiple ways, and therefore, it plays a critical role in shaping local responses to various risks. It opens up the scope for using communication beyond its current use as a tool to manage emergency situations. An in-depth understanding of ongoing communication and its implications can help to plan risk management more effectively over time rather than as a short-term response.
346

Denoising and contrast constancy.

McIlhagga, William H. January 2004 (has links)
No / Contrast constancy is the ability to perceive object contrast independent of size or spatial frequency, even though these affect both retinal contrast and detectability. Like other perceptual constancies, it is evidence that the visual system infers the stable properties of objects from the changing properties of retinal images. Here it is shown that perceived contrast is based on an optimal thresholding estimator of object contrast that is identical to the VisuShrink estimator used in wavelet denoising.
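The VisuShrink estimator referred to here is soft thresholding of wavelet coefficients at the universal threshold sigma * sqrt(2 ln n). A minimal sketch using the PyWavelets package follows; the wavelet choice, decomposition depth, and the MAD-based noise estimate are conventional defaults, not details taken from the paper.

```python
import numpy as np
import pywt  # PyWavelets

def visushrink_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients at the universal threshold sigma*sqrt(2*log n)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate from the finest-scale details (median absolute deviation / 0.6745)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Toy usage with a noisy step edge (illustrative only)
rng = np.random.default_rng(1)
x = np.repeat([0.0, 1.0], 512) + 0.2 * rng.standard_normal(1024)
x_hat = visushrink_denoise(x)
print("noise std before: %.3f, after: %.3f" % (x[:512].std(), x_hat[:512].std()))
```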
347

Chemical Contaminants in Drinking Water: An Integrated Exposure Analysis

Khanal, Rajesh 26 May 1999 (has links)
The objective of this research is to develop an integrated exposure model, which performs uncertainty analysis of exposure to the entire range of chemical contaminants in drinking water via inhalation, ingestion and dermal sorption. The study is focused on a residential environment. The various water devices considered are shower, bath, bathroom, kitchen faucet, washing machine and the dishwasher. All devices impact inhalation exposure, while showering, bathing and washing hands are considered in the analysis of dermal exposure. A set of transient mass balance equations is solved numerically to predict the concentration profiles of a chemical contaminant for three different compartments in a house (shower, bathroom and main house). Inhalation exposure is computed by combining this concentration profile with the occupancy and activity patterns of a specific individual. Mathematical models of dermal penetration, which account for steady and non-steady state analysis, are used to estimate exposure via dermal absorption. Mass transfer coefficients are used to compute the fraction of contaminant remaining in water at the time of ingestion before estimating ingestion exposure. Three chemical contaminants in water (chloroform, chromium and methyl parathion) are considered for detailed analysis. These contaminants cover a wide range in chemical properties. The magnitude of overall exposure and the relative contribution of individual exposure pathways are evaluated for each contaminant. The major pathway of exposure for chloroform is inhalation, which accounts for two-thirds of the total exposure. Dermal absorption and ingestion exposures contribute almost equally to the remaining one-third of total exposure for chloroform. Ingestion accounts for about 60% of total exposure for methyl parathion and the remaining 40% of exposure is via dermal sorption. Nearly all of the total exposure (98%) for chromium is via the ingestion pathway. / Master of Science
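The transient mass balance this abstract describes can be written as a small system of ODEs, one per compartment, with air-exchange terms linking the shower, bathroom, and main house. The sketch below integrates such a system with an explicit Euler step; all volumes, airflow rates, the emission term, and the shower duration are illustrative assumptions, not the thesis's calibrated inputs.

```python
import numpy as np

# Assumed compartment volumes (m^3): shower stall, bathroom, rest of house
V = np.array([2.0, 10.0, 300.0])
# Assumed air-exchange flows (m^3/h): shower->bathroom, bathroom->house, house->outdoors
Q = np.array([5.0, 50.0, 150.0])
E_shower = 20.0        # assumed emission rate of volatilized chemical while showering (mg/h)
shower_minutes = 15

def step(c, dt, showering):
    """One explicit-Euler step of the three-compartment mass balance (concentrations in mg/m^3)."""
    emission = E_shower if showering else 0.0
    dc = np.array([
        (emission - Q[0] * c[0]) / V[0],        # shower stall
        (Q[0] * c[0] - Q[1] * c[1]) / V[1],     # bathroom
        (Q[1] * c[1] - Q[2] * c[2]) / V[2],     # main house
    ])
    return c + dt * dc

dt = 1.0 / 60.0                       # one-minute steps, in hours
c = np.zeros(3)
history = []
for minute in range(8 * 60):          # simulate eight hours
    c = step(c, dt, showering=(minute < shower_minutes))
    history.append(c.copy())

peaks = np.max(history, axis=0)
print("peak concentrations (mg/m^3): shower %.2f, bathroom %.3f, house %.5f" % tuple(peaks))
```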
348

Uncertainty relations in terms of the Gini index for finite quantum systems

Vourdas, Apostolos 29 May 2020 (has links)
Yes / Lorenz values and the Gini index are popular quantities in Mathematical Economics, and are used here in the context of quantum systems with finite-dimensional Hilbert space. They quantify the uncertainty in the probability distribution related to an orthonormal basis. It is shown that Lorenz values are superadditive functions and the Gini indices are subadditive functions. The supremum over all density matrices of the sum of the two Gini indices with respect to position and momentum states is used to define an uncertainty coefficient which quantifies the uncertainty in the quantum system. It is shown that the uncertainty coefficient is positive, and an upper bound for it is given. Various examples demonstrate these ideas.
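A small numeric illustration of Lorenz values and a Gini-type index for a finite probability distribution is sketched below. The normalization follows the standard economics-style, area-based definition; the paper's exact conventions for the quantum setting may differ, so treat this as an assumption-laden toy rather than a reproduction of its formulas.

```python
import numpy as np

def lorenz_values(p):
    """Cumulative sums of the probabilities sorted in increasing order (Lorenz curve ordinates)."""
    p = np.sort(np.asarray(p, dtype=float))
    return np.cumsum(p)

def gini_index(p):
    """Gini index of a probability vector: 0 for the uniform distribution, 1 for a point mass."""
    p = np.asarray(p, dtype=float)
    d = len(p)
    L = lorenz_values(p)
    # Area-based definition: compare the Lorenz curve to the equality (uniform) line
    equality = np.cumsum(np.full(d, 1.0 / d))
    return np.sum(equality - L) / np.sum(equality[:-1])

uniform = np.full(4, 0.25)                   # maximal uncertainty in a 4-dimensional space
peaked = np.array([0.85, 0.05, 0.05, 0.05])  # nearly a single basis state
print(gini_index(uniform), gini_index(peaked))
```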
349

Regularization, Uncertainty Estimation and Out of Distribution Detection in Convolutional Neural Networks

Krothapalli, Ujwal K. 11 September 2020 (has links)
Classification is an important task in the field of machine learning, and when classifiers are trained on images, a variety of problems can surface during inference. 1) The recent trend of using convolutional neural networks (CNNs) for various machine learning tasks has borne many successes, and CNNs are surprisingly expressive in their learning ability due to a large number of parameters and numerous stacked layers. This increased model complexity also increases the risk of overfitting to the training data. Increasing the size of the training data using synthetic or artificial means (data augmentation) helps CNNs learn better by reducing the amount of over-fitting and producing a regularization effect to improve generalization of the learned model. 2) CNNs have proven to be very good classifiers and generally localize objects well; however, the loss functions typically used to train classification CNNs do not penalize inability to localize an object, nor do they take into account an object's relative size in the given image when producing confidence measures. 3) Convolutional neural networks always output in the space of the learnt classes with high confidence while predicting the class of a given image, regardless of what the image consists of. For example, an ImageNet-1K-trained CNN cannot say that a given image contains no objects it was trained on if it is provided with an image of a dinosaur (not an ImageNet category) or an image with the main object cut out of it (context only). We approach these three different problems using bounding box information and by learning to produce high-entropy predictions on out-of-distribution classes. To address the first problem, we propose a novel regularization method called CopyPaste. The idea behind our approach is that images from the same class share similar context and can be 'mixed' together without affecting the labels. We use bounding box annotations that are available for a subset of ImageNet images. We consistently outperform the standard baseline and explore the idea of combining our approach with other recent regularization methods as well. We show consistent performance gains on the PASCAL VOC07, MS-COCO and ImageNet datasets. For the second problem we employ objectness measures to learn meaningful CNN predictions. Objectness is a measure of the likelihood that an object from any class is present in a given image. We present a novel approach to object localization that combines the ideas of objectness and label smoothing during training. Unlike previous methods, we compute a smoothing factor that is adaptive based on relative object size within an image. We present extensive results using ImageNet and OpenImages to demonstrate that CNNs trained using adaptive label smoothing are much less likely to be overconfident in their predictions, as compared to CNNs trained using hard targets. We train CNNs using objectness computed from bounding box annotations that are available for the ImageNet dataset and the OpenImages dataset. We perform extensive experiments with the aim of improving the ability of a classification CNN to learn better localizable features, and show improvements in object detection performance, calibration and classification performance on standard datasets. We also show qualitative results using class activation maps to illustrate the improvements. 
Lastly, we extend the second approach to train CNNs with images belonging to out of distribution and context using a uniform distribution of probability over the set of target classes for such images. This is a novel way to use uniform smooth labels as it allows the model to learn better confidence bounds. We sample 1000 classes (mutually exclusive to the 1000 classes in ImageNet-1K) from the larger ImageNet dataset comprising about 22K classes. We compare our approach with standard baselines and provide entropy and confidence plots for in distribution and out of distribution validation sets. / Doctor of Philosophy / Categorization is an important task in everyday life. Humans can perform the task of classifying objects effortlessly in pictures. Machines can also be trained to classify objects in images. With the tremendous growth in the area of artificial intelligence, machines have surpassed human performance for some tasks. However, there are plenty of challenges for artificial neural networks. Convolutional Neural Networks (CNNs) are a type of artificial neural networks. 1) Sometimes, CNNs simply memorize the samples provided during training and fail to work well with images that are slightly different from the training samples. 2) CNNs have proven to be very good classifiers and generally localize objects well; however, the objective functions typically used to train classification CNNs do not penalize inability to localize an object, nor do they take into account an object's relative size in the given image. 3) Convolutional neural networks always produce an output in the space of the learnt classes with high confidence while predicting the class of a given image regardless of what the image consists of. For example, an ImageNet-1K (a popular dataset) trained CNN can not say if the given image has no objects that it was trained on if it is provided with an image of a dinosaur (not an ImageNet category) or if the image has the main object cut out of it (images with background only). We approach these three different problems using object position information and learning to produce low confidence predictions on out of distribution classes. To address the first problem, we propose a novel regularization method called CopyPaste. The idea behind our approach is that images from the same class share similar context and can be 'mixed' together without affecting the labels. We use bounding box annotations that are available for a subset of ImageNet images. We consistently outperform the standard baseline and explore the idea of combining our approach with other recent regularization methods as well. We show consistent performance gains on PASCAL VOC07, MS-COCO and ImageNet datasets. For the second problem we employ objectness measures to learn meaningful CNN predictions. Objectness is a measure of likelihood of an object from any class being present in a given image. We present a novel approach to object localization that combines the ideas of objectness and label smoothing during training. Unlike previous methods, we compute a smoothing factor that is adaptive based on relative object size within an image. We present extensive results using ImageNet and OpenImages to demonstrate that CNNs trained using adaptive label smoothing are much less likely to be overconfident in their predictions, as compared to CNNs trained using hard targets. We train CNNs using objectness computed from bounding box annotations that are available for the ImageNet dataset and the OpenImages dataset. 
We perform extensive experiments with the aim of improving the ability of a classification CNN to learn better localizable features, and show improvements in object detection performance, calibration and classification performance on standard datasets. We also show qualitative results to illustrate the improvements. Lastly, we extend the second approach to train CNNs with out-of-distribution and context-only images using a uniform distribution of probability over the set of target classes for such images. This is a novel way to use uniform smooth labels, as it allows the model to learn better confidence bounds. We sample 1000 classes (mutually exclusive with the 1000 classes in ImageNet-1K) from the larger ImageNet dataset comprising about 22K classes. We compare our approach with standard baselines on "in distribution" and "out of distribution" validation sets.
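The adaptive label smoothing described in this abstract scales the smoothing by how much of the image the object occupies, and out-of-distribution or context-only images get uniform targets. The sketch below builds such soft targets from a bounding box; the exact mapping from relative object size to the smoothing factor is an assumption for illustration, not the dissertation's formula.

```python
import numpy as np

def adaptive_smooth_targets(true_class, num_classes, box_area, image_area):
    """Soft target vector whose smoothing grows as the object's relative size shrinks (illustrative rule)."""
    objectness = np.clip(box_area / image_area, 0.0, 1.0)  # fraction of the image covered by the object
    smoothing = 1.0 - objectness                            # assumed: small objects -> heavier smoothing
    target = np.full(num_classes, smoothing / num_classes)
    target[true_class] += 1.0 - smoothing
    return target

def out_of_distribution_targets(num_classes):
    """Uniform targets for context-only or out-of-distribution images (maximum-entropy labels)."""
    return np.full(num_classes, 1.0 / num_classes)

# Toy usage: an object box covering 40% of the frame vs. an out-of-distribution image, 1000 classes
print(adaptive_smooth_targets(true_class=7, num_classes=1000, box_area=0.4, image_area=1.0)[7])
print(out_of_distribution_targets(1000)[0])
```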
350

The Use of Central Tendency Measures from an Operational Short Lead-time Hydrologic Ensemble Forecast System for Real-time Forecasts

Adams, Thomas Edwin III 05 June 2018 (has links)
A principal factor contributing to hydrologic prediction uncertainty is modeling error introduced by the measurement and prediction of precipitation. The research presented demonstrates the necessity of using probabilistic methods to quantify hydrologic forecast uncertainty due to the magnitude of precipitation errors. Significant improvements have been made in precipitation estimation that have led to greatly improved hydrologic simulations. However, advancements in the prediction of future precipitation have been marginal. This research shows that gains in forecasted precipitation accuracy have not significantly improved hydrologic forecasting accuracy. The use of forecasted precipitation, referred to as quantitative precipitation forecast (QPF), in hydrologic forecasting remains commonplace. Non-zero QPF is shown to improve hydrologic forecasts, but QPF duration should be limited to 6 to 12 hours for flood forecasting, particularly for fast-responding watersheds. Probabilistic hydrologic forecasting captures hydrologic forecast error introduced by QPF for all forecast durations. However, public acceptance of probabilistic hydrologic forecasts is problematic. Central tendency measures from a probabilistic hydrologic forecast, such as the ensemble median or mean, have the appearance of a single-valued deterministic forecast. The research presented shows that hydrologic ensemble median and mean forecasts of river stage have smaller forecast errors than current operational methods, with forecast lead time beginning at 36 hours for fast-response basins. Overall, hydrologic ensemble median and mean forecasts display smaller forecast error than current operational forecasts. / Ph. D. / Flood forecasting is uncertain, in part, because of errors in measuring precipitation and predicting the location and amount of precipitation accumulation in the future. Because of this, the public and other end-users of flood forecasts should understand the uncertainties inherent in forecasts. But there is reluctance by many to accept forecasts that explicitly convey flood forecast uncertainty, such as "there is a 67% chance your house will be flooded". Instead, most prefer "your house will not be flooded" or something like "flood levels will reach 0.5 feet in your house". We hope the latter does not happen, but due to forecast uncertainties, explicit statements such as "flood levels will reach 0.5 feet in your house" will be wrong. If, by chance, flood levels do exactly reach 0.5 feet, that will have been a lucky forecast, very likely involving some skill, but the flood level could have reached 0.43 or 0.72 feet as well. This research presents a flood forecasting method that improves on traditional methods by directly incorporating uncertainty information into flood forecasts that still look like the forecasts people are familiar and comfortable with, and can understand.
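The comparison in this abstract comes down to scoring the ensemble median (or mean) of stage forecasts against observations alongside a single-valued deterministic forecast at each lead time. A minimal sketch of that bookkeeping is below; the synthetic data and the mean-absolute-error score are placeholders for the study's operational forecasts and verification metrics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic verification set: 500 forecast cases at one lead time
truth = rng.gamma(shape=2.0, scale=3.0, size=500)                   # "observed" river stages (ft), assumed
ensemble = truth[None, :] + rng.normal(0.0, 1.0, size=(40, 500))    # 40-member ensemble with random errors
deterministic = truth + rng.normal(0.3, 1.2, size=500)              # single-valued forecast with a slight bias

def mae(forecast, observed):
    """Mean absolute error of a forecast series."""
    return np.mean(np.abs(forecast - observed))

print("ensemble median MAE:", round(mae(np.median(ensemble, axis=0), truth), 3))
print("ensemble mean   MAE:", round(mae(np.mean(ensemble, axis=0), truth), 3))
print("deterministic   MAE:", round(mae(deterministic, truth), 3))
```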
