441 |
Spectral Bayesian Network and Spectral Connectivity Analysis for Functional Magnetic Resonance Imaging Studies
Meng, Xiangxiang, January 2011 (has links)
No description available.
|
442 |
Bayesian multiresolution dynamic models
Kim, Yong Ku, 25 June 2007 (has links)
No description available.
|
443 |
Likelihood-Free Bayesian Modeling
Turner, Brandon Michael, 15 December 2011 (has links)
No description available.
|
444 |
Investigation of Multi-Digit Tactile Integration: Evidence for Sub-Optimal Human Performance
Jajarmi, Rose, January 2023 (has links)
When examining objects using tactile senses, individuals often incorporate multiple sources of haptic sensory information to estimate the object’s properties. How do our brains integrate various cues to form a single percept of the object? Previous research has indicated that integration from cues across sensory modalities is optimally achieved by weighting each cue according to its variance, such that more reliable cues have more weight in determining the percept. To explore this question in the context of a within-modality haptic setting, we assessed participants’ perception of edges that cross the index, middle, and ring fingers of the right hand. We used a 2-interval forced choice (2IFC) task to measure the acuity of each digit individually, as well as the acuity of all three digits working together, by asking participants to distinguish the locations of two closely spaced plastic edges. In examining the data, we considered three perceptual models: an optimal (Bayesian) model, an unweighted average model, and a winner-take-all model. The results indicate that participants perceived sub-optimally, such that the acuity of the three digits together did not exceed that of the best individual digit. We further investigated our question by having participants unknowingly undergo a 2IFC cue conflict condition, in which they thought they were touching a straight edge that was actually staggered and thus gave each digit a different positional cue. Our analyses indicate that participants did not undertake optimal cue combination but are inconclusive with respect to which suboptimal strategy they employed. / Thesis / Master of Science (MSc) / This thesis investigates the neural mechanisms behind tactile perception, specifically how the brain combines multiple sensory cues to construct a unified percept when interacting with objects through touch. Typically, optimal sensory integration involves assigning more weight to more reliable cues. Our research focused on tactile integration by examining participants’ ability to perceive the positions of edges crossing their index, middle, and ring fingers simultaneously. The results indicated that, contrary to predictions, participants exhibited various sub-optimal cue integration strategies. Their ability to perceive the combined positions of all three fingers was not superior to that of the best-performing individual finger. We also explored cue conflict situations, in which the locations of the tactile cues were no longer from a straight edge, unbeknownst to participants, and the results here reinforced the finding that participants did not consistently employ optimal cue combination strategies. This research offers valuable insights into how the brain processes tactile information.
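The three candidate models above can be made concrete with a small numerical sketch. The snippet below is illustrative only: the per-digit cues and acuities are made-up values, not the thesis data. Under the optimal (Bayesian) model each digit's cue is weighted by its inverse variance, so the predicted three-digit acuity is always at least as good as the best single digit, while the unweighted-average and winner-take-all models predict weaker or no improvement.

```python
import numpy as np

# Hypothetical single-digit location cues (mm) and their measured acuities
# (standard deviations, mm). These numbers are illustrative, not the study's data.
cues = np.array([1.2, 0.8, 1.5])      # index, middle, ring finger position estimates
sigmas = np.array([0.9, 0.6, 1.1])    # per-digit positional noise (smaller = more acute)

# Optimal (Bayesian) combination: inverse-variance weighting.
w = 1.0 / sigmas**2
w /= w.sum()
optimal_estimate = np.sum(w * cues)
optimal_sigma = np.sqrt(1.0 / np.sum(1.0 / sigmas**2))  # never worse than the best digit

# Unweighted average: every digit contributes equally.
average_estimate = cues.mean()
average_sigma = np.sqrt(np.sum(sigmas**2)) / len(sigmas)

# Winner-take-all: rely entirely on the most reliable digit.
best = np.argmin(sigmas)
wta_estimate, wta_sigma = cues[best], sigmas[best]

print(f"optimal:  {optimal_estimate:.2f} mm, sigma {optimal_sigma:.2f}")
print(f"average:  {average_estimate:.2f} mm, sigma {average_sigma:.2f}")
print(f"winner:   {wta_estimate:.2f} mm, sigma {wta_sigma:.2f}")
```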
|
445 |
A Bayesian approach to fault isolation with application to diesel engine diagnosis
Pernestål, Anna, January 2007 (has links)
Users of heavy trucks, as well as legislators, place increasing demands on these vehicles. The vehicles should be more comfortable, reliable and safe. Furthermore, they should consume less fuel and be more environmentally friendly. For example, this means that faults that cause the emissions to increase must be detected early. To meet these requirements on comfort and performance, advanced sensor-based computer control systems are used. However, the increased complexity makes the vehicles more difficult for the workshop mechanic to maintain and repair. A diagnosis system that detects and localizes faults is thus needed, both as an aid in the repair process and for detecting and isolating (localizing) faults on board, to guarantee that safety and environmental goals are satisfied. Reliable fault isolation is often a challenging task. Noise, disturbances and model errors can cause problems. Also, two different faults may lead to the same observed behavior of the system under diagnosis. This means that there are several faults which could possibly explain the observed behavior of the vehicle. In this thesis, a Bayesian approach to fault isolation is proposed. The idea is to compute the probabilities, given "all information at hand", that certain faults are present in the system under diagnosis. By "all information at hand" we mean qualitative and quantitative information about how probable different faults are, and possibly also data collected during test drives with the vehicle when faults are present. The information may also include knowledge about which observed behavior is to be expected when certain faults are present. The advantage of the Bayesian approach is the possibility to combine information of different characteristics, and also to facilitate isolation of previously unknown faults as well as faults for which only vague information is available. Furthermore, Bayesian probability theory combined with decision theory provides methods for determining the best action to perform to reduce the effects of faults. Using the Bayesian approach to fault isolation to diagnose large and complex systems may lead to computational and complexity problems. In this thesis, these problems are solved in three different ways. First, equivalence classes are introduced for different faults with equal probability distributions. Second, by using the structure of the computations, efficient storage methods can be used. Finally, if the previous two simplifications are not sufficient, it is shown how the problem can be approximated by partitioning it into a set of subproblems, each of which can be solved efficiently using the presented methods. The Bayesian approach to fault isolation is applied to the diagnosis of the gas flow of an automotive diesel engine. Data collected from real driving situations with implemented faults is used in the evaluation of the methods. Furthermore, the influence of important design parameters is investigated. The experiments show that the proposed Bayesian approach has promising potential for vehicle diagnosis and performs well on this real problem. Compared with more classical methods, e.g. structured residuals, the Bayesian approach used here gives a higher probability of detection and isolation of the true underlying fault. / Both users and legislators today place increasing demands on the performance of heavy trucks. The vehicles should be comfortable, reliable and safe. In addition, they should have better fuel economy and be more environmentally friendly. This means, for example, that faults which cause increased emissions must be detected at an early stage. To meet these demands on comfort and performance, advanced sensor-based control systems are used. However, the increased complexity makes the vehicles more complicated for a mechanic to maintain, troubleshoot and repair. A diagnosis system that detects and localizes faults is therefore required, both as an aid in the repair process and in order to detect and localize (isolate) faults on board, so that safety requirements and environmental goals can be guaranteed. Reliable fault isolation is often a challenging task. Noise, disturbances and model errors can cause problems, as can the fact that two different faults may lead to the same observed behavior of the system under diagnosis. This means that there are several faults which could possibly explain the observed behavior of the vehicle. In this thesis, a Bayesian approach to fault isolation is proposed. In this method, the probability that a particular fault is present in the system under diagnosis is computed, given "all available information". By "all available information" is meant both qualitative and quantitative information about how probable faults are, and possibly also data collected during test drives with the vehicle while faults are present. The information may also include knowledge about which behavior can be expected to be observed when a particular fault is present. The advantages of the Bayesian method are the possibility to combine information of different characteristics, and also that it enables isolation of previously unknown faults as well as faults for which only vague information is available. Furthermore, Bayesian probability theory can be combined with decision theory to obtain methods for determining the next best action to reduce the effect of faults. The use of the Bayesian method can lead to computational and complexity problems. In this thesis, these problems are handled in three different ways. First, equivalence classes are introduced for faults with identical probability distributions. Second, by using the structure of the computations, efficient storage methods can be used. Finally, if the two previous simplifications are not sufficient, it is shown how the problem can be approximated by a number of subproblems, each of which can be solved efficiently with the presented methods. The Bayesian approach to fault isolation has been applied to the diagnosis of the gas flow of a diesel engine. Data collected from real driving situations with implemented faults is used in the evaluation of the methods. Furthermore, the influence of important parameters on the isolation performance has been investigated. The experiments show that the proposed Bayesian approach has good potential for vehicle diagnosis, and its performance is good on this real problem. Compared with more classical methods based on structured residuals, the Bayesian method gives a higher probability of detection and isolation of the true underlying fault.
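As a rough sketch of the core computation described in this abstract (not the thesis's actual engine models or data), the snippet below applies Bayes' rule to a small set of fault hypotheses, combining prior fault probabilities with observation likelihoods of the kind that could be estimated from test drives with implemented faults; the fault names and all numbers are hypothetical.

```python
import numpy as np

# Hypothetical fault hypotheses for a gas-flow system (illustrative only).
faults = ["no_fault", "boost_sensor", "egr_valve", "intake_leak"]
prior = np.array([0.90, 0.04, 0.03, 0.03])   # prior probability of each fault

# Likelihood p(observation | fault) for one observed residual pattern,
# e.g. estimated from training data collected with faults present.
likelihood = np.array([0.02, 0.55, 0.30, 0.10])

# Bayes' rule: posterior probability of each fault given the observed behavior.
posterior = prior * likelihood
posterior /= posterior.sum()

for f, p in zip(faults, posterior):
    print(f"P({f} | observation) = {p:.3f}")
```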
|
446 |
A Bayesian Network Approach to the Self-organization and Learning in Intelligent Agents
Sahin, Ferat, 25 September 2000 (has links)
A Bayesian network approach to self-organization and learning is introduced for use with intelligent agents. Bayesian networks, with the help of influence diagrams, are employed to create a decision-theoretic intelligent agent. Influence diagrams combine Bayesian networks with utility theory. In this research, an intelligent agent is modeled by its belief, preference, and capabilities attributes. Each agent is assumed to have its own belief about its environment. The belief aspect of the intelligent agent is accomplished by a Bayesian network. The goal of an intelligent agent is said to be the preference of the agent and is represented with a utility function in the decision-theoretic intelligent agent. Capabilities are represented with a set of possible actions of the decision-theoretic intelligent agent. Influence diagrams have utility nodes and decision nodes to handle the preference and capabilities of the decision-theoretic intelligent agent, respectively.
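A minimal sketch of the decision-theoretic step described above, assuming a toy belief distribution and a hypothetical utility table (this is not the IntelliAgent implementation): the agent evaluates the expected utility of each available action under its current belief and picks the maximizer, which is what the decision and utility nodes of an influence diagram encode.

```python
# Minimal decision-theoretic choice: belief over world states, utility table
# U(state, action), and a set of available actions. All values are hypothetical.
belief = {"sheep_near_pen": 0.3, "sheep_far_from_pen": 0.7}
utility = {
    ("sheep_near_pen", "push_toward_pen"): 10.0,
    ("sheep_near_pen", "circle_behind"): 4.0,
    ("sheep_far_from_pen", "push_toward_pen"): 2.0,
    ("sheep_far_from_pen", "circle_behind"): 6.0,
}
actions = ["push_toward_pen", "circle_behind"]

def expected_utility(action):
    # Average the utility of the action over states, weighted by the agent's belief.
    return sum(p * utility[(state, action)] for state, p in belief.items())

best_action = max(actions, key=expected_utility)
print(best_action, expected_utility(best_action))
```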
Learning is accomplished by Bayesian networks in the decision-theoretic intelligent agent. Bayesian network learning methods are discussed extensively in this work. Because intelligent agents will explore and learn about the environment, the learning algorithm should be implemented online. None of the existing Bayesian network learning algorithms supports online learning. Thus, an online Bayesian network learning method is proposed to allow the intelligent agent to learn during its exploration.
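The abstract does not spell out the proposed online learning algorithm, so the following is only a generic sketch of sequential Bayesian parameter learning for one discrete node of a network: the conditional probability table is kept as Dirichlet pseudo-counts and updated one observation at a time, giving the agent usable probabilities at every step of its exploration. The node names and values are hypothetical.

```python
from collections import defaultdict

# Generic online parameter learning for one discrete node with discrete parents:
# keep Dirichlet pseudo-counts and update them one observation at a time.
# This is a standard sequential update, not necessarily the thesis's algorithm.
class OnlineCPT:
    def __init__(self, node_values, alpha=1.0):
        self.node_values = node_values
        self.alpha = alpha                      # Dirichlet prior pseudo-count
        self.counts = defaultdict(float)        # (parent_config, value) -> count

    def update(self, parent_config, value):
        """Incorporate one observed case as it arrives."""
        self.counts[(parent_config, value)] += 1.0

    def prob(self, parent_config, value):
        """Posterior predictive P(value | parents) from the current counts."""
        num = self.counts[(parent_config, value)] + self.alpha
        den = sum(self.counts[(parent_config, v)] for v in self.node_values) \
              + self.alpha * len(self.node_values)
        return num / den

cpt = OnlineCPT(node_values=["low", "high"])
cpt.update(parent_config=("near",), value="high")   # hypothetical observation
print(cpt.prob(("near",), "high"))
```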
Self-organization of the intelligent agents is accomplished because each agent models other agents by observing their behavior. Agents have beliefs not only about the environment but also about other agents. Therefore, an agent makes its decisions according to its model of the environment and its model of the other agents. Even though each agent acts independently, it takes the other agents' behaviors into account when making a decision. This permits the agents to organize themselves for a common task.
To test the proposed intelligent agent's learning and self-organizing abilities, a Windows application was written to simulate multi-agent systems. The software, IntelliAgent, lets the user design decision-theoretic intelligent agents both manually and automatically. The software can also be used for knowledge discovery by applying Bayesian network learning to a database.
Additionally, we have explored a well-known herding problem to obtain sound results for our intelligent agent design. In the problem, a dog tries to herd a sheep to a certain location, i.e., a pen. The sheep tries to avoid the dog by retreating from it. The herding problem is simulated using the IntelliAgent software. Simulations provided good results in terms of the dog's learning ability and its ability to organize its actions according to the behavior of the sheep (the other agent).
In summary, a decision-theoretic approach is applied to the self-organization and learning problems in intelligent agents. Software was written to simulate the learning and self-organization abilities of the proposed agent design. A user manual for the software and the simulation results are presented.
This research is supported by the Office of Naval Research with the grant number N00014-98-1-0779. Their financial support is greatly appreciated. / Ph. D.
|
447 |
Multiset Model Selection and Averaging, and Interactive Storytelling
Maiti, Dipayan, 23 August 2012 (links)
The Multiset Sampler [Leman et al., 2009] has previously been deployed and developed for efficient sampling from complex stochastic processes. We extend the sampler and the surrounding theory to model selection problems. In such problems, efficient exploration of the model space becomes a challenge since independent and ad-hoc proposals might not be able to jointly propose multiple parameter sets which correctly explain a newly proposed model. In order to overcome this we propose a multiset on the model space to enable efficient exploration of multiple model modes with almost no tuning. The Multiset Model Selection (MSMS) framework is based on independent priors for the parameters and model indicators on variables. We show that posterior model probabilities can be easily obtained from multiset averaged posterior model probabilities in MSMS. We also obtain typical Bayesian model averaged estimates for the parameters from MSMS. We apply our algorithm to linear regression, where it allows easy moves between parameter modes of different models, and to probit regression, where it allows jumps between widely varying model-specific covariance structures in the latent space of a hierarchical model.
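For readers unfamiliar with the quantities MSMS targets, the toy sketch below computes posterior model probabilities and Bayesian model averaged coefficients for a small linear regression using a BIC approximation to the marginal likelihood under a uniform model prior. It is a stand-in for illustration only, not the multiset sampler itself, and the synthetic data are assumptions.

```python
import itertools
import numpy as np

# Toy illustration of posterior model probabilities and model-averaged
# coefficients for linear regression, via a BIC approximation to the marginal
# likelihood -- not the multiset sampler, just the quantities it targets.
rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=n)   # synthetic data

def fit(subset):
    """OLS fit with an intercept plus the given predictors; returns (BIC, beta)."""
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return n * np.log(rss / n) + Xs.shape[1] * np.log(n), beta

models = [s for r in range(p + 1) for s in itertools.combinations(range(p), r)]
fits = {s: fit(s) for s in models}
bics = np.array([fits[s][0] for s in models])
weights = np.exp(-(bics - bics.min()) / 2)
weights /= weights.sum()                 # approximate posterior model probabilities

# Bayesian-model-averaged coefficients (a variable contributes 0 when excluded).
bma = np.zeros(p)
for w, s in zip(weights, models):
    beta = fits[s][1]
    for i, j in enumerate(s):
        bma[j] += w * beta[i + 1]        # beta[0] is the intercept
print({str(s): round(float(w), 3) for s, w in zip(models, weights)})
print("model-averaged coefficients:", np.round(bma, 2))
```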
The Storytelling algorithm [Kumar et al., 2006] constructs stories by discovering and connecting latent connections between documents in a network. Such automated algorithms often do not agree with a user's mental map of the data. Hence systems that incorporate feedback through visual interaction from the user are of immediate importance. We propose a visual analytic framework in which such interactions are naturally incorporated into the existing Storytelling algorithm through a redefinition of the latent topic space used in the similarity measure of the network. The document network can be explored using the newly learned normalized topic weights for each document. Hence our algorithm addresses the limitations of human sensemaking capabilities in large document networks by providing a collaborative framework between the underlying model and the user. Our formulation of the problem is a supervised topic modeling problem where the supervision is based on relationships imposed by the user as a set of inequalities derived from tolerances on edge costs from an inverse shortest path problem. We show a probabilistic modeling of the relationships based on auxiliary variables and propose a Gibbs sampling based strategy. We provide detailed results from simulated data and the Atlantic Storm data set. / Ph. D.
|
448 |
Bayesian Modeling of Complex High-Dimensional Data
Huo, Shuning, 07 December 2020 (links)
With the rapid development of modern high-throughput technologies, scientists can now collect high-dimensional complex data in different forms, such as medical images and genomics measurements. However, acquisition of more data does not automatically lead to better knowledge discovery. One needs efficient and reliable analytical tools to extract useful information from complex datasets. The main objective of this dissertation is to develop innovative Bayesian methodologies to enable effective and efficient knowledge discovery from complex high-dimensional data. It contains two parts: the development of computationally efficient functional mixed models and the modeling of data heterogeneity via Dirichlet Diffusion Trees. The first part focuses on tackling the computational bottleneck in Bayesian functional mixed models. We propose a computational framework called the variational functional mixed model (VFMM). This new method facilitates efficient data compression and high-performance computing in basis space. We also propose a new multiple testing procedure in basis space, which can be used to detect significant local regions. The effectiveness of the proposed model is demonstrated through two datasets: a mass spectrometry dataset in a cancer study and a neuroimaging dataset in an Alzheimer's disease study. The second part is about modeling data heterogeneity by using Dirichlet Diffusion Trees. We propose a Bayesian latent tree model that incorporates covariates of subjects to characterize the heterogeneity and uncover the latent tree structure underlying the data. This innovative model may reveal the hierarchical evolution process through branch structures and estimate systematic differences between groups of samples. We demonstrate the effectiveness of the model through a simulation study and real brain tumor data. / Doctor of Philosophy / With the rapid development of modern high-throughput technologies, scientists can now collect high-dimensional data in different forms, such as engineering signals, medical images, and genomics measurements. However, acquisition of such data does not automatically lead to efficient knowledge discovery. The main objective of this dissertation is to develop novel Bayesian methods to extract useful knowledge from complex high-dimensional data. It has two parts: the development of an ultra-fast functional mixed model and the modeling of data heterogeneity via Dirichlet Diffusion Trees. The first part focuses on developing approximate Bayesian methods in functional mixed models to estimate parameters and detect significant regions. Two datasets demonstrate the effectiveness of the proposed method: a mass spectrometry dataset in a cancer study and a neuroimaging dataset in an Alzheimer's disease study. The second part focuses on modeling data heterogeneity via Dirichlet Diffusion Trees. The method helps uncover the underlying hierarchical tree structures and estimate systematic differences between groups of samples. We demonstrate the effectiveness of the method through brain tumor imaging data.
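As a generic illustration of what "data compression and high-performance computing in basis space" can look like (not the VFMM implementation), the sketch below projects simulated functional observations onto an orthonormal cosine basis and keeps only the leading coefficients; the grid size, basis size, and signals are all made up.

```python
import numpy as np

# Generic illustration of working in basis space for functional data: project
# each observed curve onto an orthonormal cosine (DCT-II) basis and keep only
# the leading coefficients as a compressed representation.
rng = np.random.default_rng(1)
T, K = 256, 16                       # grid length and number of retained basis functions
t = np.arange(T)

# Orthonormal cosine basis on the grid.
basis = np.array([np.cos(np.pi * k * (t + 0.5) / T) for k in range(K)])
basis[0] *= np.sqrt(1.0 / T)
basis[1:] *= np.sqrt(2.0 / T)

# Simulated smooth curves plus noise (stand-ins for spectra or image profiles).
curves = np.sin(2 * np.pi * t / T)[None, :] + 0.1 * rng.normal(size=(20, T))

coeffs = curves @ basis.T            # compression: 256 samples -> 16 coefficients per curve
reconstruction = coeffs @ basis      # back-projection for comparison
print(coeffs.shape, np.mean((curves - reconstruction) ** 2))
```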
|
449 |
Multi-Bayesian Approach to Stochastic Feature Recognition in the Context of Road Crack Detection and Classification
Steckenrider, John J., 04 December 2017 (links)
This thesis introduces a multi-Bayesian framework for detection and classification of features in environments abundant with error-inducing noise. The approach takes advantage of Bayesian correction and classification in three distinct stages. The corrective scheme described here extracts useful but highly stochastic features from a data source, whether vision-based or otherwise, to aid in higher-level classification. Unlike many conventional methods, these features’ uncertainties are characterized so that test data can be correctively cast into the feature space with probability distribution functions that can be integrated over class decision boundaries created by a quadratic Bayesian classifier. The proposed approach is specifically formulated for road crack detection and characterization, which is one of the potential applications. For test images assessed with this technique, ground truth was estimated accurately and consistently with effective Bayesian correction, showing a 33% improvement in recall rate over standard classification. Application to road cracks demonstrated successful detection and classification in a practical domain. The proposed approach is extremely effective in characterizing highly probabilistic features in noisy environments when several correlated observations are available either from multiple sensors or from data sequentially obtained by a single sensor. / Master of Science / Humans have an outstanding ability to understand things about the world around them. We learn from our youngest years how to make sense of things and perceive our environment even when it is not easy. To do this, we inherently think in terms of probabilities, updating our belief as we gain new information. The methods introduced here allow an autonomous system to think similarly, by applying a fairly common probabilistic technique to the task of perception and classification. In particular, road cracks are observed and classified using these methods, in order to develop an autonomous road condition monitoring system. The results of this research are promising; cracks are identified and correctly categorized with 92% accuracy, and the additional “intelligence” of the system leads to a 33% improvement in road crack assessment. These methods could be applied in a variety of contexts as the leading edge of robotics research seeks to develop more robust and human-like ways of perceiving the world.
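A minimal sketch of the kind of computation described above, under assumed Gaussian class models: a quadratic (Gaussian) Bayesian classifier is applied to an uncertain feature vector by sampling from the feature's own distribution and averaging the class posteriors, which amounts to integrating that distribution over the quadratic decision boundaries. The class names, means, covariances, and priors below are hypothetical, not the thesis's crack features.

```python
import numpy as np

# Sketch: classify an uncertain feature vector with a quadratic (Gaussian)
# Bayesian classifier by Monte Carlo integration over the feature's distribution.
rng = np.random.default_rng(2)

classes = {
    "no_crack":    (np.array([0.2, 0.1]), np.diag([0.05, 0.04])),
    "small_crack": (np.array([1.0, 0.8]), np.diag([0.10, 0.09])),
    "large_crack": (np.array([2.5, 2.0]), np.diag([0.20, 0.25])),
}
priors = {"no_crack": 0.7, "small_crack": 0.2, "large_crack": 0.1}

def log_gauss(x, mean, cov):
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

def class_posterior(x):
    logp = np.array([np.log(priors[c]) + log_gauss(x, m, S)
                     for c, (m, S) in classes.items()])
    logp -= logp.max()
    p = np.exp(logp)
    return p / p.sum()

# Uncertain test feature: mean and covariance from a corrective (Bayesian) stage.
feat_mean, feat_cov = np.array([0.9, 0.7]), np.diag([0.15, 0.15])
samples = rng.multivariate_normal(feat_mean, feat_cov, size=2000)
avg_posterior = np.mean([class_posterior(s) for s in samples], axis=0)
print(dict(zip(classes, np.round(avg_posterior, 3))))
```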
|
450 |
A Bayesian Approach to Estimating Background Flows from a Passive Scalar
Krometis, Justin, 26 June 2018 (links)
We consider the statistical inverse problem of estimating a background flow field (e.g., of air or water) from the partial and noisy observation of a passive scalar (e.g., the concentration of a pollutant). Here the unknown is a vector field that is specified by a large or infinite number of degrees of freedom. We show that the inverse problem is ill-posed, i.e., there may be many or no background flows that match a given set of observations. We therefore adopt a Bayesian approach, incorporating prior knowledge of background flows and models of the observation error to develop probabilistic estimates of the fluid flow. In doing so, we leverage frameworks developed in recent years for infinite-dimensional Bayesian inference. We provide conditions under which the inference is consistent, i.e., the posterior measure converges to a Dirac measure on the true background flow as the number of observations of the solute concentration grows large. We also define several computationally efficient algorithms adapted to the problem. One is an adjoint method for computation of the gradient of the log likelihood, a key ingredient in many numerical methods. A second is a particle method that allows direct computation of point observations of the solute concentration, leveraging the structure of the inverse problem to avoid approximation of the full infinite-dimensional scalar field. Finally, we identify two interesting example problems with very different posterior structures, which we use to conduct a large-scale benchmark of the convergence of several Markov Chain Monte Carlo methods that have been developed in recent years for infinite-dimensional settings. / Ph. D. / We consider the problem of estimating a fluid flow (e.g., of air or water) from partial and noisy observations of the concentration of a solute (e.g., a pollutant) dissolved in the fluid. Because of observational noise, and because there are cases where the fluid flow will not affect the movement of the pollutant, the fluid flow cannot be uniquely determined from the observations. We therefore adopt a statistical (Bayesian) approach, developing probabilistic estimates of the fluid flow using models of observation error and our understanding of the flow before measurements are taken. We provide conditions under which, as the number of observations grows large, the approach is able to identify the fluid flow that generated the observations. We define several efficient algorithms for computing statistics of the fluid flow, one of which involves approximating the movement of individual solute particles to estimate concentrations only where required by the inverse problem. We identify two interesting example problems for which the statistics of the fluid flow are very different. The first case produces an approximately normal distribution. The second example exhibits highly non-Gaussian structure, where several different classes of fluid flows match the data very well. We use these examples to test the functionality and efficiency of several numerical (Markov Chain Monte Carlo) methods developed in recent years to compute the solution to similar problems.
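The thesis benchmarks MCMC samplers designed for infinite-dimensional settings; as a toy finite-dimensional stand-in (not the passive-scalar forward model), the sketch below runs a preconditioned Crank-Nicolson (pCN) sampler for a linear inverse problem with a Gaussian prior, where the forward operator, noise level, and dimensions are all assumed.

```python
import numpy as np

# Toy preconditioned Crank-Nicolson (pCN) sampler for a discretized linear
# inverse problem y = A u + noise with a Gaussian prior N(0, C) on u.
# This is a generic stand-in, not the thesis's advection-diffusion problem.
rng = np.random.default_rng(3)
d, m = 20, 10
A = rng.normal(size=(m, d)) / np.sqrt(d)           # hypothetical forward operator
C_chol = np.linalg.cholesky(np.eye(d))             # prior covariance factor (identity here)
noise_sigma = 0.1

u_true = rng.normal(size=d)
y = A @ u_true + noise_sigma * rng.normal(size=m)  # synthetic observations

def neg_log_like(u):
    r = y - A @ u
    return 0.5 * np.sum(r**2) / noise_sigma**2

beta, n_iter = 0.2, 5000
u = np.zeros(d)
samples, accepted = [], 0
for _ in range(n_iter):
    xi = C_chol @ rng.normal(size=d)               # draw from the prior N(0, C)
    prop = np.sqrt(1 - beta**2) * u + beta * xi    # pCN proposal (prior-preserving)
    if np.log(rng.uniform()) < neg_log_like(u) - neg_log_like(prop):
        u, accepted = prop, accepted + 1
    samples.append(u.copy())
post_mean = np.mean(samples[1000:], axis=0)
print("acceptance rate:", accepted / n_iter,
      "posterior-mean error:", np.linalg.norm(post_mean - u_true))
```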
|