191

Effects of Task Load on Situational Awareness During Rear-End Crash Scenarios - A Simulator Study

Nair, Rajiv 02 July 2019 (has links)
This driving simulator study investigates the effect of two distinct types of distraction on drivers' situational awareness and their anticipation of latent and inherent hazards. Rear-end crashes were used as the primary crash configuration to target a specific category of crashes attributable to distraction. The two types of task load used in the experiment were a cognitive distraction (mock cell-phone task) and a visual distraction (iPad task). Forty-eight young participants aged 18-25 years each navigated eight scenarios in a mixed design, with task load (cognitive or visual distraction) as the between-subjects variable and the presence or absence of distraction as the within-subjects variable. All participants drove four scenarios with a distraction and four without. Physiological variables, namely heart rate and heart rate variability, were collected for each participant during the practice drives and after each of the eight experimental drives. After completing each experimental drive, participants filled out a NASA-TLX questionnaire, which quantifies perceived task load on a scale from 1 to 100, with higher scores indicating higher perceived load. Eye movements were also recorded to measure the proportion of latent and inherent hazards that each participant anticipated and mitigated. Standard vehicle data (velocity, acceleration, and lane offset) were collected from the simulator for every drive of each participant. Analysis showed significant differences in velocity, lane offset, and task load index scores between the two groups (the between-subjects factor); the vehicle, heart rate, and TLX data were analyzed using mixed ANOVA. A logistic regression model was also fitted and showed significant effects of velocity, lane offset, TLX scores, and age on participants' hazard anticipation ability. The findings have practical implications for reducing drivers' risk of fatal, serious, or near-crash events.
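As a rough illustration of the logistic regression described in this abstract (hazard anticipation predicted from velocity, lane offset, TLX score, and age), the sketch below shows how such a model could be fit; the data file, column names, and coding are hypothetical placeholders, not the study's actual data or analysis code.

```python
# Hypothetical sketch of the abstract's logistic regression: hazard anticipation
# (1 = hazard anticipated on a drive, 0 = not) regressed on velocity, lane
# offset, NASA-TLX score, and age. File and column names are assumptions.
import pandas as pd
import statsmodels.api as sm

drives = pd.read_csv("simulator_drives.csv")  # one row per participant-drive (hypothetical)

X = sm.add_constant(drives[["velocity", "lane_offset", "tlx_score", "age"]])
y = drives["anticipated"]

model = sm.Logit(y, X).fit()
print(model.summary())  # coefficients, standard errors, and p-values per predictor
```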
192

Dresdner Beiträge zu Quantitativen Verfahren

30 March 2017 (has links)
No description available.
193

Variational Inference for Data-driven Stochastic Programming

Prateek Jaiswal (11210091) 30 July 2021 (has links)
Stochastic programs are standard models for decision-making under uncertainty and have been extensively studied in the operations research literature. In general, stochastic programming involves minimizing an expected cost function, where the expectation is with respect to fully specified stochastic models that quantify the aleatoric or 'inherent' uncertainty in the decision-making problem. In practice, however, the stochastic models are unknown but can be estimated from data, introducing an additional epistemic uncertainty into the decision-making problem. The Bayesian framework provides a coherent way to quantify the epistemic uncertainty through the posterior distribution by combining prior beliefs of the decision-makers with the observed data. Bayesian methods have been used for data-driven decision-making in applications such as inventory management, portfolio design, machine learning, optimal scheduling, and staffing.

Bayesian methods are challenging to implement, mainly because the posterior is computationally intractable, necessitating the computation of approximate posteriors. Broadly speaking, there are two families of methods in the literature for approximate posterior inference. The first comprises sampling-based methods such as Markov chain Monte Carlo. Sampling-based methods are theoretically well understood, but they suffer from issues such as high variance, poor scalability to high-dimensional problems, and complex diagnostics. Consequently, we propose to use optimization-based methods, collectively known as variational inference (VI), that use information projections to compute an approximation to the posterior. Empirical studies have shown that VI methods are computationally faster and scale easily to higher-dimensional problems and large datasets. However, the theoretical guarantees of these methods are not well understood, and VI methods are empirically and theoretically less explored in the decision-theoretic setting.

In this thesis, we first propose a novel VI framework for risk-sensitive data-driven decision-making, which we call risk-sensitive variational Bayes (RSVB). In RSVB, we jointly compute a risk-sensitive approximation to the 'true' posterior and the optimal decision by solving a minimax optimization problem. The RSVB framework includes the naive approach of first computing a VI approximation to the true posterior and then using it in place of the true posterior for decision-making. We show that the RSVB approximate posterior and the corresponding optimal value and decision rules are asymptotically consistent, and we also compute their rates of convergence. We illustrate our theoretical findings in both parametric and nonparametric settings with three examples: the single-product newsvendor model, the multi-product newsvendor model, and Gaussian process classification. Second, we present the Bayesian joint chance-constrained stochastic program (BJCCP) for modeling decision-making problems with epistemically uncertain constraints. We show that using VI methods for posterior approximation can ensure the convexity of the feasible set of (BJCCP), unlike sampling-based methods, and we therefore propose a VI approximation for (BJCCP). We also show that the optimal value computed using the VI approximation of (BJCCP) is statistically consistent. Moreover, we derive the rate of convergence of the optimal value and compute the rate at which a VI approximate solution of (BJCCP) is feasible under the true constraints. We demonstrate the utility of our approach on an optimal staffing problem for an M/M/c queue. Finally, this thesis also contributes to the growing literature on the statistical performance of VI methods. In particular, we establish the frequentist consistency of an approximate posterior computed using a well-known VI method that approximates the posterior distribution by minimizing the Rényi divergence from the 'true' posterior.
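For readers outside the area, the generic variational inference step that the abstract builds on can be written as a Kullback-Leibler projection of the posterior onto a tractable family; the display below uses generic notation and is not the thesis's RSVB minimax formulation.

```latex
% Generic VI objective: approximate the posterior \pi(\theta \mid X_n) by the
% closest member of a tractable family \mathcal{Q} in KL divergence, which is
% equivalent to maximizing the evidence lower bound (ELBO).
\[
  q^{*} \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}}
  \mathrm{KL}\!\left( q(\theta) \,\middle\|\, \pi(\theta \mid X_n) \right)
  \;=\; \operatorname*{arg\,max}_{q \in \mathcal{Q}}\;
  \mathbb{E}_{q}\!\left[ \log p(X_n \mid \theta) \right]
  - \mathrm{KL}\!\left( q(\theta) \,\middle\|\, p(\theta) \right).
\]
```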
194

Early Stopping of a Neural Network via the Receiver Operating Curve.

Yu, Daoping 13 August 2010 (has links) (PDF)
This thesis presents the area under the ROC (receiver operating characteristic) curve, abbreviated AUC, as an alternative measure for evaluating the predictive performance of ANN (artificial neural network) classifiers. Conventionally, neural networks are trained until the total error converges to zero, which may give rise to over-fitting. To ensure that the networks do not over-fit the training data and then fail to generalize to new data, it appears effective to integrate ROC/AUC analysis into the training process and to stop training as early as possible once the AUC is sufficiently large. In order to reduce learning costs on imbalanced data sets with uneven class distributions, random sampling and k-means clustering are implemented to draw a smaller subset of representatives from the original training data. Finally, a confidence interval for the AUC is estimated using a non-parametric approach.
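A minimal sketch of the kind of AUC-based stopping rule this abstract describes, assuming a generic scikit-learn classifier trained incrementally; the model, synthetic data, AUC target, and epoch budget are illustrative assumptions, not the thesis's actual implementation.

```python
# Illustrative AUC-based early stopping (not the thesis's actual code):
# train a small neural network incrementally and stop once the validation
# AUC reaches an assumed target, rather than waiting for the error to vanish.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
target_auc, max_epochs = 0.95, 200  # assumed stopping criteria

for epoch in range(max_epochs):
    clf.partial_fit(X_train, y_train, classes=np.unique(y))  # one training pass
    val_auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
    if val_auc >= target_auc:
        print(f"stopping at epoch {epoch}: validation AUC = {val_auc:.3f}")
        break
```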
195

A Proof of Concept for Crowdsourcing Color Perception Experiments

McLeod, Ryan Nathaniel 01 June 2014 (has links) (PDF)
Accurately quantifying the human perception of color is an unsolved problem. There are dozens of numerical systems for quantifying colors and how we as humans perceive them, but as a whole they are far from perfect. The ability to accurately measure color for reproduction and verification is critical to industries that work with textiles, paints, food and beverages, displays, and media compression algorithms. Because the science of color deals with the body, the mind, and the subjective study of perception, building models of color relies largely on empirical data rather than pure analytical science. Much of this data is extremely dated, comes from small and/or homogeneous data sets, and is hard to compare. While these studies have advanced our understanding of color, making significant further progress without improved datasets has proven difficult if not impossible. I propose new methods of crowdsourcing color experiments through color-accurate mobile devices to help develop a massive, global set of color perception data to aid in creating a more accurate model of human color perception.
196

Combining Machine Learning and Empirical Engineering Methods Towards Improving Oil Production Forecasting

Allen, Andrew J 01 July 2020 (has links) (PDF)
Current methods of production forecasting, such as decline curve analysis (DCA) or numerical simulation, require years of historical production data, and their accuracy is limited by the choice of model parameters. Traditional forecasting methods have proven challenging to apply to unconventional resources because these resources lack long production histories and have extremely variable model parameters. This research proposes a data-driven alternative to reservoir simulation and conventional production forecasting techniques. We create a proxy-well model for predicting cumulative oil production by selecting statistically significant well completion parameters and reservoir information as independent predictor variables in regression-based models. Principal component analysis (PCA) is then applied to extract key features of a well's time-rate production profile, which are used to estimate cumulative oil production. The efficacy of the models is examined on field data from over 400 wells in the Eagle Ford Shale in South Texas, supplied by an industry database. The results of this study can help oil and gas companies determine the estimated ultimate recovery (EUR) of a well and, in turn, inform financial and operational decisions based on available production and well completion data.
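As an illustration of the PCA-plus-regression workflow this abstract outlines (not the author's actual pipeline), the example below compresses per-well production-rate profiles to a few principal components and regresses cumulative production on them together with completion parameters; all data and variable names are assumed placeholders.

```python
# Hypothetical sketch of the described workflow: PCA on time-rate production
# profiles plus completion parameters as predictors of cumulative production.
# The arrays and coefficients are illustrative, not the study's data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_wells, n_months = 400, 24
rate_profiles = rng.lognormal(size=(n_wells, n_months))  # monthly rates per well
completion = rng.normal(size=(n_wells, 3))               # e.g. lateral length, proppant, stages
cum_oil = rate_profiles.sum(axis=1) + completion @ np.array([0.5, 0.3, 0.2])

# Compress each time-rate profile to a handful of principal-component scores.
pca = PCA(n_components=3)
profile_scores = pca.fit_transform(rate_profiles)

# Regress cumulative production on PCA scores plus completion parameters.
X = np.hstack([profile_scores, completion])
model = make_pipeline(StandardScaler(), LinearRegression()).fit(X, cum_oil)
print("R^2 on training data:", model.score(X, cum_oil))
```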
197

Psychometric Properties of a Working Memory Span Task

Alzate Vanegas, Juan M 01 January 2018 (has links)
The intent of this thesis is to examine the psychometric properties of a complex span task (CST) developed to measure working memory capacity (WMC), using measurements obtained from a sample of 68 undergraduate students at the University of Central Florida. The Grocery List Task (GLT), from a prior study of individual differences in WMC and distraction effects on driving performance, promises several design improvements over traditional CSTs and offers potential benefits for studying WMC as well as the serial-position effect. In current theory, the working memory system is composed of domain-general memory storage processes and information processing, the latter involving the use of executive functions. Prior research has found WMC to be associated with attentional measures (i.e., executive attention) and the updating function, and unrelated to the shifting function. The present study replicates these relationships to other latent variables using measures obtained from the GLT, providing convergent and discriminant evidence of validity. In addition, GLT measures correlate strongly with established measures of WMC. Task reliability is assessed through estimates of internal consistency, pairwise comparisons with a cross-validation sample, and an analysis of demographic effects on task measurements.
198

Foundations Of Memory Capacity In Models Of Neural Cognition

Chowdhury, Chandradeep 01 December 2023 (has links) (PDF)
A central problem in neuroscience is to understand how memories are formed as a result of the activities of neurons. Valiant's neuroidal model attempted to address this question by modeling the brain as a random graph and memories as subgraphs within that graph. However, the question of memory capacity within that model has not been explored: how many memories can the brain hold? Valiant introduced the concept of interference between memories as the defining factor for capacity; excessive interference signals that the model has reached capacity. Since then, exploration of capacity has been limited, but recent investigations have examined the capacity of the Assembly Calculus, a derivative of Valiant's neuroidal model. In this paper, we provide rigorous definitions of capacity and interference and present theoretical formulations for the memory capacity of a finite set in which subsets represent memories. We propose that these results can be adapted to suit both the neuroidal model and the Assembly Calculus. Furthermore, we substantiate our claims with simulations that validate the theoretical findings. Our study aims to contribute essential insights into memory capacity in complex cognitive models, offering potential ideas for applications and extensions to contemporary models of cognition.
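To make the capacity-versus-interference idea concrete, here is a toy simulation under assumptions of my own (memories as random fixed-size subsets of a finite neuron set, interference as pairwise overlap above a threshold); it is not the thesis's model or code.

```python
# Toy illustration of capacity vs. interference: memories are random r-subsets
# of an n-element set, and two memories "interfere" when their overlap exceeds
# a threshold. All parameters are assumptions for illustration only.
import itertools
import random

def simulate(n=1000, r=50, overlap_threshold=10, max_memories=200, seed=0):
    random.seed(seed)
    universe = range(n)
    memories = []
    for m in range(1, max_memories + 1):
        memories.append(set(random.sample(universe, r)))
        # Count interfering pairs among the memories stored so far.
        interfering = sum(
            1 for a, b in itertools.combinations(memories, 2)
            if len(a & b) > overlap_threshold
        )
        if interfering > 0:
            return m, interfering  # first point at which interference appears
    return max_memories, 0

capacity, pairs = simulate()
print(f"first interference after {capacity} memories ({pairs} interfering pairs)")
```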
199

DIMENSION REDUCTION, OPERATOR LEARNING AND UNCERTAINTY QUANTIFICATION FOR PROBLEMS OF DIFFERENTIAL EQUATIONS

Shiqi Zhang (12872678) 26 July 2022 (has links)
In this work, we mainly focus on topics related to dimension reduction, operator learning, and uncertainty quantification for problems involving differential equations. The supervised machine learning methods introduced here belong to a newly booming field compared with traditional numerical methods. The main building blocks for our work are Gaussian processes and neural networks.

The first part focuses on supervised dimension reduction problems. A new framework based on rotated multi-fidelity Gaussian process regression is introduced. It can effectively solve high-dimensional problems when the data are insufficient for traditional methods, and an accurate surrogate Gaussian process model of the original problem can be formulated. The second part introduces a physics-assisted Gaussian process framework with active learning for forward and inverse problems of partial differential equations (PDEs). In this work, a Gaussian process regression model is combined with given physical information to find solutions of, or discover unknown coefficients in, given PDEs. Three different models are introduced, and their performance is compared and discussed. Lastly, we propose an attention-based MultiAuto-DeepONet for operator learning in stochastic problems. The goal of this work is to solve operator learning problems related to time-dependent stochastic differential equations (SDEs). The work builds on MultiAuto-DeepONet, and an attention mechanism is applied to improve model performance on specific types of problems. Three different types of attention mechanism are presented and compared. Numerical experiments are provided to illustrate the effectiveness of our proposed models.
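For readers unfamiliar with the Gaussian process building block mentioned above, here is a generic Gaussian process regression surrogate in scikit-learn; the kernel choice and the one-dimensional toy function are assumptions for illustration, not the thesis's rotated multi-fidelity or physics-assisted frameworks.

```python
# Generic Gaussian process regression surrogate (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy one-dimensional target function and a handful of training points.
def f(x):
    return np.sin(3 * x) + 0.5 * x

X_train = np.linspace(0, 3, 8).reshape(-1, 1)
y_train = f(X_train).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# The surrogate returns a predictive mean and an uncertainty estimate.
X_test = np.linspace(0, 3, 100).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print("max predictive std on the test grid:", std.max())
```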
200

Statistical Methods for Small Sample Cognitive Diagnosis

David B Arthur (10165121) 19 April 2024 (has links)
<p dir="ltr">It has been shown that formative assessments can lead to improvements in the learning process. Cognitive Diagnostic Models (CDMs) are a powerful formative assessment tool that can be used to provide individuals with valuable information regarding skill mastery in educational settings. These models provide each student with a ``skill mastery profile'' that shows the level of mastery they have obtained with regard to a specific set of skills. These profiles can be used to help both students and educators make more informed decisions regarding the educational process, which can in turn accelerate learning for students. However, despite their utility, these models are rarely used with small sample sizes. One reason for this is that these models are often complex, containing many parameters that can be difficult to estimate accurately when working with a small number of observations. This work aims to contribute to and expand upon previous work to make CDMs more accessible for a wider range of educators and students.</p><p dir="ltr">There are three main small sample statistical problems that we address in this work: 1) accurate estimation of the population distribution of skill mastery profiles, 2) accurate estimation of additional model parameters for CDMs as well as improved classification of individual skill mastery profiles, and 3) improved selection of an appropriate CDM for each item on the assessment. Each of these problems deals with a different aspect of educational measurement and the solutions provided to these problems can ultimately lead to improvements in the educational process for both students and teachers. By finding solutions to these problems that work well when using small sample sizes, we make it possible to improve learning in everyday classroom settings and not just in large scale assessment settings.</p><p dir="ltr">In the first part of this work, we propose novel algorithms for estimating the population distribution of skill mastery profiles for a popular CDM, the Deterministic Inputs Noisy ``and'' Gate (DINA) model. These algorithms borrow inspiration from the concepts behind popular machine learning algorithms. However, in contrast to these methods, which are often used solely for prediction, we illustrate how the ideas behind these methods can be adapted to obtain estimates of specific model parameters. Through studies involving simulated and real-life data, we illustrate how the proposed algorithms can be used to gain a better picture of the distribution of skill mastery profiles for an entire population students, but can do so by only using a small sample of students from that population. </p><p dir="ltr">In the second part of this work, we introduce a new method for regularizing high-dimensional CDMs using a class of Bayesian shrinkage priors known as catalytic priors. We show how a simpler model can first be fit to the observed data and then be used to generate additional pseudo-observations that, when combined with the original observations, make it easier to more accurately estimate the parameters in a complex model of interest. We propose an alternative, simpler model that can be used instead of the DINA model and show how the information from this model can be used to formulate an intuitive shrinkage prior that effectively regularizes model parameters. This makes it possible to improve the accuracy of parameter estimates for the more complex model, which in turn leads to better classification of skill mastery. 
We demonstrate the utility of this method in studies involving simulated and real-life data and show how the proposed approach is superior to other common approaches for small sample estimation of CDMs.</p><p dir="ltr">Finally, we discuss the important problem of selecting the most appropriate model for each item on assessment. Often, it is not uncommon in practice to use the same CDM for each item on an assessment. However, this can lead to suboptimal results in terms of parameter estimation and overall model fit. Current methods for item-level model selection rely on large sample asymptotic theory and are thus inappropriate when the sample size is small. We propose a Bayesian approach for performing item-level model selection using Reversible Jump Markov chain Monte Carlo. This approach allows for the simultaneous estimation of posterior probabilities and model parameters for each candidate model and does not require a large sample size to be valid. We again demonstrate through studies involving simulated and real-life data that the proposed approach leads to a much higher chance of selecting the best model for each item. This in turn leads to better estimates of item and other model parameters, which ultimately leads to more accurate information regarding skill mastery. </p>
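Since the abstract centers on the DINA model, here is the standard DINA item response function as a small sketch; the skill profile, Q-matrix row, and guessing/slip values are made-up illustrations, and none of the thesis's estimation or model-selection algorithms are shown.

```python
# Standard DINA item response probability (illustrative values only):
# eta = 1 if the student has mastered every skill the item requires (per the
# Q-matrix row); the response probability is then 1 - slip, otherwise guess.
import numpy as np

def dina_prob(alpha, q_row, guess, slip):
    """P(correct response) for one student (alpha) on one item (q_row)."""
    eta = int(np.all(alpha >= q_row))  # all required skills mastered?
    return (1 - slip) ** eta * guess ** (1 - eta)

alpha = np.array([1, 0, 1])   # hypothetical skill mastery profile
q_row = np.array([1, 0, 0])   # item requires only the first skill
print(dina_prob(alpha, q_row, guess=0.2, slip=0.1))  # -> 0.9, i.e. 1 - slip
```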
