221

Bayesian, Frequentist, and Information Geometry Approaches to Parametric Uncertainty Quantification of Classical Empirical Interatomic Potentials

Kurniawan, Yonatan 20 December 2021 (has links)
Uncertainty quantification (UQ) is an increasingly important part of materials modeling. In this paper, we consider the problem of quantifying parametric uncertainty in classical empirical interatomic potentials (IPs). Previous work based on local sensitivity analysis using the Fisher Information has shown that IPs are sloppy, i.e., are insensitive to coordinated changes of many parameter combinations. We confirm these results and further explore the non-local statistics in the context of sloppy model analysis using both Bayesian (MCMC) and Frequentist (profile likelihood) methods. We interface these tools with the Knowledgebase of Interatomic Models (OpenKIM) and study three models based on the Lennard-Jones, Morse, and Stillinger-Weber potentials, respectively. We confirm that IPs have global properties similar to those of sloppy models from fields such as systems biology, power systems, and critical phenomena. These models exhibit a low effective dimensionality in which many of the parameters are unidentifiable, i.e., do not encode any information when fit to data. Because the inverse problem in such models is ill-conditioned, unidentifiable parameters present challenges for traditional statistical methods. In the Bayesian approach, Monte Carlo samples can depend on the choice of prior in subtle ways. In particular, they often "evaporate" parameters into high-entropy, sub-optimal regions of the parameter space. For profile likelihoods, confidence regions are extremely sensitive to the choice of confidence level. To get a better picture of the relationship between data and parametric uncertainty, we sample the Bayesian posterior at several sampling temperatures and compare the results with those of Frequentist analyses. 
In analogy to statistical mechanics, we classify samples as either energy-dominated, i.e., characterized by identifiable parameters in constrained (ground state) regions of parameter space, or entropy-dominated, i.e., characterized by unidentifiable (evaporated) parameters. We complement these two pictures with information geometry to illuminate the underlying cause of this phenomenon. In this approach, a parameterized model is interpreted as a manifold embedded in the space of possible data with parameters as coordinates. We calculate geodesics on the model manifold and find that IPs, like other sloppy models, have bounded manifolds with a hierarchy of widths, leading to low effective dimensionality in the model. We show how information geometry can motivate new, natural parameterizations that improve the stability and interpretation of UQ analysis and further suggest simplified, less-sloppy models.
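The local sensitivity analysis mentioned above rests on the eigenvalue spectrum of the Fisher Information Matrix (FIM): a spread of eigenvalues over many decades is the signature of a sloppy model. As a rough illustration only (not the thesis's actual OpenKIM workflow), the sketch below builds the FIM for a two-parameter Lennard-Jones pair energy from finite-difference sensitivities; the potential form is standard, but the sample separations and parameter values are invented for illustration.

```python
import numpy as np

def lj_energy(r, eps, sigma):
    """Lennard-Jones pair energy at separation r."""
    x = (sigma / r) ** 6
    return 4.0 * eps * (x * x - x)

def fisher_eigenvalues(theta, r_data, h=1e-6):
    """Eigenvalues of J^T J, the Fisher information for a least-squares
    fit with unit noise (local sensitivity analysis)."""
    theta = np.asarray(theta, dtype=float)
    J = np.empty((len(r_data), len(theta)))
    for j in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[j] += h
        tm[j] -= h
        # central finite difference of the model output w.r.t. parameter j
        J[:, j] = (lj_energy(r_data, *tp) - lj_energy(r_data, *tm)) / (2 * h)
    return np.linalg.eigvalsh(J.T @ J)[::-1]  # descending order

r = np.linspace(0.9, 2.5, 50)             # hypothetical sample separations
eigs = fisher_eigenvalues([1.0, 1.0], r)  # (eps, sigma) near their nominal values
print(eigs)  # a wide eigenvalue spread signals sloppy parameter directions
```

For the multi-parameter IPs studied in the thesis, the same construction yields eigenvalues spanning many orders of magnitude, with the smallest eigendirections corresponding to practically unidentifiable parameter combinations.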
222

STOCHASTIC MODELING AND UNCERTAINTY EVALUATION FOR PERFORMANCE PROGNOSIS IN DYNAMICAL SYSTEMS

Wang, Peng 07 September 2017 (has links)
No description available.
223

Parameter Dependencies in an Accumulation-to-Threshold Model of Simple Perceptual Decisions

Nikitin, Vyacheslav Y. January 2015 (has links)
No description available.
224

Discovering interpretable topics in free-style text: diagnostics, rare topics, and topic supervision

Zheng, Ning 07 January 2008 (has links)
No description available.
225

Investigation of Multi-Digit Tactile Integration / Investigation of Multi-Digit Tactile Integration: Evidence for Sub-Optimal Human Performance

Jajarmi, Rose January 2023 (has links)
When examining objects using tactile senses, individuals often incorporate multiple sources of haptic sensory information to estimate the object’s properties. How do our brains integrate various cues to form a single percept of the object? Previous research has indicated that integration of cues across sensory modalities is optimally achieved by weighting each cue according to its variance, such that more reliable cues have more weight in determining the percept. To explore this question in a within-modality haptic setting, we assessed participants’ perception of edges that cross the index, middle, and ring fingers of the right hand. We used a 2-interval forced choice (2IFC) task to measure the acuity of each digit individually, as well as the acuity of all three digits working together, by asking participants to distinguish the locations of two closely spaced plastic edges. In examining the data, we considered three perceptual models: an optimal (Bayesian) model, an unweighted average model, and a winner-take-all model. The results indicate that participants perceived sub-optimally: the acuity of the three digits together did not exceed that of the best individual digit. We further investigated our question by having participants unknowingly undergo a 2IFC cue-conflict condition, in which they believed they were touching a straight edge that was in fact staggered, giving each digit a different positional cue. Our analyses indicate that participants did not undertake optimal cue combination but are inconclusive with respect to which suboptimal strategy they employed. / Thesis / Master of Science (MSc) / This thesis investigates the neural mechanisms behind tactile perception, specifically how the brain combines multiple sensory cues to construct a unified percept when interacting with objects through touch. Typically, optimal sensory integration involves assigning more weight to more reliable cues.
Our research focused on tactile integration by examining participants’ ability to perceive the positions of edges crossing their index, middle, and ring fingers simultaneously. The results indicated that, contrary to predictions, participants exhibited various sub-optimal cue integration strategies: their ability to perceive the combined positions of all three fingers was not superior to that of the best-performing individual finger. We also explored cue-conflict situations, where, unbeknownst to participants, the tactile cues no longer came from a straight edge; the results here reinforced the finding that participants did not consistently employ optimal cue combination strategies. This research offers valuable insights into how the brain processes tactile information.
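The three candidate models named above make concrete, testable predictions for the combined threshold given the single-digit thresholds. A minimal sketch of those predictions, assuming independent Gaussian cue noise (the σ values below are hypothetical, not the thesis's data):

```python
import numpy as np

def predicted_sigma(sigmas, model):
    """Predicted sigma of the combined estimate under three candidate
    integration models, given single-digit sigmas."""
    s = np.asarray(sigmas, dtype=float)
    if model == "optimal":          # inverse-variance (Bayesian) weighting
        return float(np.sqrt(1.0 / np.sum(1.0 / s**2)))
    if model == "unweighted":       # simple average of independent cues
        return float(np.sqrt(np.sum(s**2)) / len(s))
    if model == "winner_take_all":  # rely on the single best digit
        return float(np.min(s))
    raise ValueError(model)

digits = [1.2, 1.5, 2.0]  # hypothetical single-digit sigmas (mm)
for m in ("optimal", "unweighted", "winner_take_all"):
    print(m, round(predicted_sigma(digits, m), 3))
```

Note that the optimal prediction is always below the best single-digit σ, which is why combined acuity failing to exceed the best digit is evidence against optimal integration.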
226

Tensorial Data Low-Rank Decomposition on Multi-dimensional Image Data Processing

Luo, Qilun 01 August 2022 (has links)
How to handle large multi-dimensional datasets such as hyperspectral images and video both efficiently and effectively plays an important role in big-data processing. Work on tensor low-rank decomposition in recent years demonstrates the importance of adequately capturing the tensor structure, which usually yields effective approaches. In this dissertation, we first explore the tensor singular value decomposition (t-SVD) with nonconvex regularization on the multi-view subspace clustering (MSC) problem, then develop two new tensor decomposition models within a Bayesian inference framework for the tensor robust principal component analysis (TRPCA) and tensor completion (TC) problems. Specifically, the following developments for multi-dimensional datasets under the mathematical tensor framework are addressed. (1) Using the t-SVD proposed by Kilmer et al. \cite{kilmer2013third}, we unify Hyper-Laplacian (HL) and exclusive $\ell_{2,1}$ (L21) regularization with Tensor Log-Determinant Rank Minimization (TLD) to identify data clusters from the inherent information of the multiple views. Here the HL regularization maintains the local geometrical structure, making the estimation robust to nonlinearities, while the mixed $\ell_{2,1}$ and $\ell_{1,2}$ regularization provides joint sparsity within clusters as well as exclusive sparsity between clusters. Furthermore, a log-determinant function is used as a tighter tensor rank approximation to discriminate the dimension of features. (2) By considering a tube as an atom of a third-order tensor and constructing a data-driven learning dictionary from the observed noisy data along the tubes of a tensor, we develop a Bayesian dictionary learning model with tensor tubal transformed factorization that identifies the underlying low-tubal-rank structure of the tensor with a data-adaptive dictionary for the TRPCA problem.
With the defined page-wise operators, an efficient variational Bayesian dictionary learning algorithm is established for TRPCA that updates the posterior distributions along the third dimension simultaneously. (3) By introducing a matrix outer product into the tensor decomposition process, we present a new decomposition model for a third-order tensor. The fundamental idea is to decompose tensors in as compact a manner as possible. By incorporating the framework of Bayesian probabilistic inference, a new tensor decomposition model built on the matrix outer product (BPMOP) is developed for the TC and TRPCA problems. Extensive experiments on synthetic data and real-world datasets are conducted for the multi-view clustering, TC, and TRPCA problems to demonstrate the effectiveness of the proposed approaches through detailed comparison with currently available results in the literature.
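The t-SVD underlying development (1) reduces to ordinary matrix SVDs in the Fourier domain: transform along the third mode, decompose each frontal slice, and transform back. A minimal sketch of this construction (following Kilmer et al.'s definition; the random test tensor is illustrative only):

```python
import numpy as np

def t_svd(T):
    """t-SVD of a third-order tensor: DFT along the third mode, an
    ordinary matrix SVD on each frontal slice, then inverse DFT."""
    n1, n2, n3 = T.shape
    m = min(n1, n2)
    Tf = np.fft.fft(T, axis=2)
    Uf = np.zeros((n1, n1, n3), dtype=complex)
    Sf = np.zeros((n1, n2, n3), dtype=complex)
    Vhf = np.zeros((n2, n2, n3), dtype=complex)
    for k in range(n3):
        u, s, vh = np.linalg.svd(Tf[:, :, k])
        Uf[:, :, k], Vhf[:, :, k] = u, vh
        Sf[:m, :m, k] = np.diag(s)  # f-diagonal singular-value slice
    return Uf, Sf, Vhf

def t_reconstruct(Uf, Sf, Vhf):
    """Invert the factorization: multiply slices, inverse DFT."""
    slices = [Uf[:, :, k] @ Sf[:, :, k] @ Vhf[:, :, k]
              for k in range(Uf.shape[2])]
    return np.real(np.fft.ifft(np.stack(slices, axis=2), axis=2))

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 3))
Uf, Sf, Vhf = t_svd(T)
err = np.max(np.abs(T - t_reconstruct(Uf, Sf, Vhf)))
print(err)  # near machine precision
```

Truncating the singular values in `Sf` gives the low-tubal-rank approximations that the models above regularize toward.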
227

A Bayesian approach to fault isolation with application to diesel engine diagnosis

Pernestål, Anna January 2007 (has links)
Both users and legislators place increasing demands on heavy trucks. The vehicles should be more comfortable, reliable and safe. Furthermore, they should consume less fuel and be more environmentally friendly. For example, this means that faults that cause emissions to increase must be detected early. To meet these requirements on comfort and performance, advanced sensor-based computer control systems are used. However, the increased complexity makes the vehicles more difficult for the workshop mechanic to maintain and repair. A diagnosis system that detects and localizes faults is thus needed, both as an aid in the repair process and for detecting and isolating (localizing) faults on board, to guarantee that safety and environmental goals are satisfied. Reliable fault isolation is often a challenging task. Noise, disturbances and model errors can cause problems. Also, two different faults may lead to the same observed behavior of the system under diagnosis, meaning that several faults could possibly explain the observed behavior of the vehicle. In this thesis, a Bayesian approach to fault isolation is proposed. The idea is to compute the probabilities, given ``all information at hand'', that certain faults are present in the system under diagnosis. By ``all information at hand'' we mean qualitative and quantitative information about how probable different faults are, and possibly also data collected during test drives with the vehicle when faults are present. The information may also include knowledge about which observed behavior is to be expected when certain faults are present. The advantage of the Bayesian approach is the possibility of combining information of different characteristics, and of facilitating isolation of previously unknown faults as well as faults for which only vague information is available.
Furthermore, Bayesian probability theory combined with decision theory provides methods for determining the best action to perform to reduce the effects of faults. Using the Bayesian approach to fault isolation to diagnose large and complex systems may lead to computational and complexity problems. In this thesis, these problems are solved in three different ways. First, equivalence classes are introduced for different faults with equal probability distributions. Second, by using the structure of the computations, efficient storage methods can be used. Finally, if the previous two simplifications are not sufficient, it is shown how the problem can be approximated by partitioning it into a set of sub-problems, each of which can be efficiently solved using the presented methods. The Bayesian approach to fault isolation is applied to the diagnosis of the gas flow of an automotive diesel engine. Data collected from real driving situations with implemented faults is used in the evaluation of the methods. Furthermore, the influences of important design parameters are investigated. The experiments show that the proposed Bayesian approach has promising potential for vehicle diagnosis and performs well on this real problem. Compared with more classical methods, e.g. structured residuals, the Bayesian approach used here gives a higher probability of detection and isolation of the true underlying fault.
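The core computation described above, the probability that certain faults are present given all information at hand, is at heart an application of Bayes' rule over a set of fault hypotheses. A deliberately simplified sketch (the priors and likelihoods below are invented numbers, not values from the thesis's diesel-engine study):

```python
import numpy as np

def fault_posterior(priors, likelihoods):
    """Posterior probability of each fault hypothesis given an observation:
    p(fault | obs) is proportional to p(obs | fault) * p(fault)."""
    post = np.asarray(priors, dtype=float) * np.asarray(likelihoods, dtype=float)
    return post / post.sum()  # normalize over the hypothesis set

# Hypothetical numbers: three hypotheses, the first being "no fault".
priors = [0.90, 0.06, 0.04]       # p(fault), e.g. from field statistics
likelihoods = [0.05, 0.60, 0.55]  # p(observed residual pattern | fault)
post = fault_posterior(priors, likelihoods)
print(post.round(3))
```

Even with a strong "no fault" prior, an observation that is unlikely under fault-free operation shifts substantial posterior mass onto the fault hypotheses; the equivalence-class and storage techniques in the thesis address making this computation tractable for many simultaneous fault combinations.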
228

AN INVESTIGATION OF MULTIPLE-DIGIT CUE COMBINATION: PSYCHOPHYSICS AND BAYESIAN MODELING / MULTIPLE-DIGIT CUE COMBINATION

Prodribaba, Nina January 2018 (has links)
In recent years, computational neuroscientists have suggested that human behaviour, including perception, occurs in a manner consistent with Bayesian inference. According to the Bayesian ideal observer model, the observer combines cues from multiple sensory streams as a weighted average based on each cue’s reliability. Most cue-combination research has focused on integration of cues between sensory modalities or within the visual modality. Cue combination within the tactile modality has been studied relatively rarely, and it is still not known whether cues from individual digits combine optimally. In this thesis, we use the ideal observer model to determine whether cues from three different digits are combined optimally. We predicted that cues from multiple digits would be combined according to the optimal cue combination model. To test our hypothesis, we devised a two-interval forced choice (2IFC) task in which participants had to discriminate the distal/proximal location of a 1-mm thick edge across the fingerpad(s) of the index (D2), middle (D3), and ring (D4) fingers. We used a Bayesian adaptive method, the ψ method, to compute participants’ psychometric functions for single-digit (D2, D3, and D4) and multiple-digit (D23, D24, D34, and D234) conditions. We determined the stimulus level ∆x, the distance (mm) between the distal and proximal stimulus locations, at 76% correct probability. This distance corresponds to a sensitivity index d'=1 and is the σ value of the participant’s stimulus measurement distribution. We then used the single-digit σ values to predict optimal cue combination for the multiple-digit combinations. We did not observe optimal cue combination between the digits in this study.
We outline potential implications that these results have for determining how the nervous system combines cues between digits, and suggest theoretical and experimental refinements that might result in the observation of optimal cue combination between digits. / Thesis / Master of Science (MSc)
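The link stated above between the 76%-correct point and d'=1 follows from the equal-variance Gaussian model of the 2IFC task, under which P(correct) = Φ(d'/√2). A quick check of that relation:

```python
import math

def pc_2ifc(d_prime):
    """Percent correct in a 2IFC task under the equal-variance Gaussian
    model: P(correct) = Phi(d' / sqrt(2)), with Phi the standard normal CDF."""
    z = d_prime / math.sqrt(2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # Phi(z) via erf

print(round(pc_2ifc(1.0), 3))  # ≈ 0.760, the 76% point used here
```

This is why tracking the 76%-correct stimulus level ∆x directly yields the σ of the observer's measurement distribution.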
229

Robust Online Trajectory Prediction for Non-cooperative Small Unmanned Aerial Vehicles

Badve, Prathamesh Mahesh 21 January 2022 (has links)
In recent years, unmanned aerial vehicles (UAVs) have seen rapid growth in civilian applications such as aerial photography, agriculture, and communication. Increasing research effort is being devoted to developing sophisticated trajectory prediction methods for UAVs for collision detection and trajectory planning. Existing techniques suffer from problems such as inadequate uncertainty quantification of predicted trajectories. This work adopts particle filters together with the Löwner-John ellipsoid, which approximates the highest posterior density region, for trajectory prediction and uncertainty quantification. The particle filter is tuned and tested on real-world and simulated data sets and compared with the Kalman filter. A parallel computing approach for the particle filter is further proposed. This parallel implementation makes the particle filter faster and more suitable for real-time online applications. / Master of Science / In recent years, unmanned aerial vehicles (UAVs) have seen rapid growth in civilian applications such as aerial photography, agriculture, and communication. Over the coming years, the number of UAVs will increase rapidly. As a result, the risk of mid-air collisions grows, leading to property damage and possible loss of life if a UAV collides with a manned aircraft. Increasing research effort has been devoted to developing sophisticated trajectory prediction methods for UAVs for collision detection and trajectory planning. Existing techniques suffer from problems such as inadequate uncertainty quantification of predicted trajectories. This work adopts particle filters, a Bayesian inference technique, for trajectory prediction. The use of a minimum-volume enclosing ellipsoid to approximate the highest posterior density region for prediction uncertainty quantification is also investigated. The particle filter is tuned and tested on real-world and simulated data sets and compared with the Kalman filter.
A parallel computing approach for the particle filter is further proposed. This parallel implementation makes the particle filter faster and more suitable for real-time online applications.
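A bootstrap particle filter of the kind adopted here maintains a cloud of state hypotheses that is propagated through a motion model, reweighted by each observation's likelihood, and resampled. The following is a minimal 1-D sketch, not the thesis's implementation; the constant-velocity model, noise levels, and simulated track are all assumptions for illustration:

```python
import numpy as np

def particle_filter(obs, n_particles=2000, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for a 1-D state [position, velocity]
    under a constant-velocity model; returns filtered position estimates."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_particles, 2))        # initial particle cloud
    est = []
    for z in obs:
        x[:, 0] += x[:, 1]                           # propagate: pos += vel
        x += q * rng.standard_normal(x.shape)        # process noise
        w = np.exp(-0.5 * ((z - x[:, 0]) / r) ** 2)  # Gaussian obs likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
        x = x[idx]
        est.append(x[:, 0].mean())                   # posterior mean position
    return np.array(est)

true_pos = np.arange(20, dtype=float) * 0.5          # straight-line track
obs = true_pos + 0.5 * np.random.default_rng(1).standard_normal(20)
est = particle_filter(obs)
print(np.abs(est - true_pos).mean())                 # mean filtered error
```

The resampled cloud at each step is the sample set from which a highest-posterior-density region (and its enclosing ellipsoid) would be computed in the multi-dimensional case.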
230

Enhanced Air Transportation Modeling Techniques for Capacity Problems

Spencer, Thomas Louis 02 September 2016 (has links)
Effective and efficient air transportation systems are crucial to a nation's economy and connectedness. These systems involve capital-intensive facilities and equipment and move millions of people and tonnes of freight every day. As air traffic continues to increase, the systems necessary to ensure safe and efficient operation grow ever more complex. Hence, it is imperative that air transport analysts are equipped with the best tools to properly predict and respond to expected air transportation operations. This dissertation aims to improve on the tools currently available to air transportation analysts, while offering new ones. Specifically, it offers the following: 1) a model for predicting arrival runway occupancy times (AROT); 2) a model for predicting departure runway occupancy times (DROT); and 3) a flight planning model. It also explores the use of unmanned aerial vehicles for providing wireless communications services. For the predictive models of AROT and DROT, we fit hierarchical Bayesian regression models to the data, grouped by aircraft type, using airport physical and aircraft operational parameters as the regressors. Recognizing that many existing air transportation models require distributions of AROT and DROT, Bayesian methods are preferred since their outputs are distributions that can be input directly into air transportation modeling programs. Additionally, we show how analysts can decouple AROT and DROT predictions from the traditional four or five aircraft groupings currently in use. Lastly, for the flight planning model, we present a 2-D model using presently available wind data that provides wind-optimal flight routings. We improve over current models by allowing free flight unconnected to pre-existing airways and by offering finer resolution than the current 2.5-degree norm. / Ph. D.
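Wind-optimal routing of the kind described, free flight over a gridded wind field rather than fixed airways, can be sketched as a shortest-time search in which the wind modifies ground speed along each edge. The grid size, airspeed, and wind values below are toy assumptions, not the dissertation's model:

```python
import heapq

def wind_optimal_route(n, m, tailwind, airspeed=10.0):
    """Shortest-time route on an n x m grid from (0,0) to (n-1,m-1),
    where tailwind[i][j] adds to ground speed on edges leaving cell (i,j).
    A free-flight sketch: 4-neighbour moves, unit cell spacing."""
    start, goal = (0, 0), (n - 1, m - 1)
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == goal:
            break
        if d > dist[(i, j)]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < m:
                gs = max(airspeed + tailwind[i][j], 1e-6)  # ground speed
                nd = d + 1.0 / gs                          # time for one cell
                if nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(pq, (nd, (ni, nj)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Toy 3x3 field: a band of 5-unit tailwind along the top-right cells.
wind = [[0.0, 5.0, 5.0], [0.0, 0.0, 5.0], [0.0, 0.0, 0.0]]
path, t = wind_optimal_route(3, 3, wind)
print(path, round(t, 3))
```

The route detours through the tailwind band because three of its four legs are flown at the higher ground speed; a production model would replace the 4-neighbour grid with finer headings and time-varying wind data.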
