1

Applications of Bayesian networks in natural hazard assessments

Vogel, Kristin January 2013
Even though quite different in occurrence and consequences, from a modeling perspective many natural hazards share similar properties and challenges. Their complex nature as well as lacking knowledge about their driving forces and potential effects make their analysis demanding: uncertainty about the modeling framework, inaccurate or incomplete event observations, and the intrinsic randomness of the natural phenomenon add up to several interacting layers of uncertainty, which require careful handling. Nevertheless, deterministic approaches are still widely used in natural hazard assessments, carrying the risk of underestimating the hazard, with potentially disastrous consequences. The all-round probabilistic framework of Bayesian networks constitutes an attractive alternative. In contrast to deterministic procedures, it treats response variables as well as explanatory variables as random variables, making no distinction between input and output. Bayesian networks encode the dependency relations between variables in a directed acyclic graph: variables are represented as nodes, and (in-)dependencies between variables as (missing) edges between the nodes. The joint distribution of all variables can thus be decomposed, according to the depicted independencies, into a product of local conditional probability distributions, which are defined by the parameters of the Bayesian network. In this thesis, the Bayesian network approach is applied to several natural hazard domains (seismic hazard, flood damage, and landslide assessments). By learning the network structure and parameters from data, Bayesian networks reveal relevant dependency relations between the included variables and help to gain knowledge about the underlying processes. The problem of Bayesian network learning is cast in a Bayesian framework, treating the network structure and parameters themselves as random variables and searching for the most likely combination of both, which corresponds to the maximum a posteriori (MAP) score of their joint distribution given the observed data. Although well studied in theory, learning Bayesian networks from real-world data is usually not straightforward and requires the adaptation of existing algorithms. Typical problems are the handling of continuous variables, incomplete observations, and the interaction of both. Working with continuous distributions requires assumptions about the admissible families of distributions. To "let the data speak" and avoid wrong assumptions, continuous variables are instead discretized here, allowing a completely data-driven and distribution-free learning. An extension of the MAP score, which treats the discretization itself as a random variable, is developed for an automatic multivariate discretization that takes interactions between the variables into account. The discretization process is nested within the network learning and requires several iterations. When incomplete observations must be handled on top of this, the computational burden grows: iterative procedures for missing-value estimation quickly become infeasible. A more efficient, albeit approximate, method is used instead, estimating missing values based only on the observations of variables directly interacting with the missing variable. Moreover, natural hazard assessments often have a primary interest in a certain target variable. The discretization learned for this variable does not always have the resolution required for good predictive performance. Finer resolutions for (conditional) continuous distributions are achieved with continuous approximations subsequent to the Bayesian network learning, using kernel density estimation or mixtures of truncated exponential functions. All our procedures are completely data-driven. We thus avoid assumptions that require expert knowledge and instead provide domain-independent solutions that are applicable not only in other natural hazard assessments but in a variety of domains struggling with uncertainty. / Although natural hazards differ fundamentally in their causes, manifestations, and consequences, they share many commonalities and challenges when it comes to modeling. Missing knowledge about the underlying forces and their complex interplay complicates the choice of a suitable model structure. Added to this are inaccurate and incomplete observational data as well as random processes inherent in the natural event. All of these different, mutually interacting aspects of uncertainty demand careful treatment in order to avoid erroneous assessments that understate the hazard. Nevertheless, deterministic procedures are widespread in hazard analyses. Bayesian networks view these problems from a probabilistic perspective and thus offer a sensible alternative to deterministic methods. All quantities influenced by chance are treated as random variables. The joint probability distribution of all variables describes the interplay of the various influencing factors and the associated uncertainty and randomness. The dependency structure of the variables can be depicted graphically: variables are represented as nodes in a graph, and the (in-)dependencies between variables as (missing) edges between these nodes. The depicted independencies illustrate how the joint probability distribution can be decomposed into a product of local conditional probability distributions. In the course of this thesis, several natural hazards (earthquakes, floods, and landslides) are modeled with Bayesian networks. In each case, the network structure that best describes the dependencies between the variables is sought, and the parameters of the local conditional probability distributions are estimated in order to fully determine the Bayesian network and its joint probability distribution. A Bayesian network can be defined on the basis of expert knowledge or, as in this thesis, from observational data of the natural event under investigation. The methods used here choose the network structure and parameters such that the resulting probability distribution assigns the highest possible likelihood to the observed data. Since this procedure requires no expert knowledge, it is universally applicable across different areas of hazard assessment. Despite extensive research on the topic, learning Bayesian networks from observational data is not without difficulties. Typical challenges are the handling of continuous variables and incomplete data sets; both are addressed in this thesis, and the solution approaches developed are applied in the case studies. A key issue is algorithmic complexity: especially when continuous variables and incomplete data sets occur in combination, efficient procedures are needed. The methods developed here make it possible to process large data sets with continuous variables and incomplete observations, and thereby make an important contribution to probabilistic hazard assessment.
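The factorization described in this abstract, a joint distribution decomposed into local conditional distributions along a directed acyclic graph, is compact enough to illustrate directly. The sketch below uses a hypothetical three-node hazard chain (rain, landslide, damage); the variables and probabilities are illustrative and not taken from the thesis.

```python
# Minimal sketch of the factorization a Bayesian network encodes:
# P(rain, landslide, damage) = P(rain) * P(landslide | rain) * P(damage | landslide).
# The three binary variables and their probabilities are invented for illustration.

p_rain = {True: 0.3, False: 0.7}
p_landslide = {  # P(landslide | rain)
    True: {True: 0.2, False: 0.8},
    False: {True: 0.02, False: 0.98},
}
p_damage = {  # P(damage | landslide)
    True: {True: 0.6, False: 0.4},
    False: {True: 0.05, False: 0.95},
}

def joint(rain: bool, landslide: bool, damage: bool) -> float:
    """Joint probability as the product of local conditional distributions."""
    return p_rain[rain] * p_landslide[rain][landslide] * p_damage[landslide][damage]

# Sanity check: the joint must sum to 1 over all eight configurations.
total = sum(joint(r, l, d) for r in (True, False)
            for l in (True, False) for d in (True, False))
assert abs(total - 1.0) < 1e-12
print(joint(True, True, True))  # 0.3 * 0.2 * 0.6 = 0.036
```

Learning, as in the thesis, would estimate these conditional tables, and the graph itself, from data rather than fixing them by hand.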
2

Repairing event logs using stochastic process models

Rogge-Solti, Andreas, Mans, Ronny S., van der Aalst, Wil M. P., Weske, Mathias January 2013
Companies strive to improve their business processes in order to remain competitive. Process mining aims to infer meaningful insights from process-related data and has attracted the attention of practitioners, tool vendors, and researchers in recent years. Traditionally, event logs are assumed to describe the as-is situation. But this is not necessarily the case in environments where logging may be compromised, for example by manual logging: hospital staff may need to enter information about a patient's treatment by hand, so events or timestamps may be missing or incorrect. In this paper, we make use of process knowledge captured in process models and provide a method to repair missing events in the logs, thereby facilitating the analysis of incomplete logs. We realize the repair by combining stochastic Petri nets, alignments, and Bayesian networks. We evaluate the results using both synthetic data and real event data from a Dutch hospital. / Companies continually optimize their business processes in order to survive in a competitive environment. The goal of process mining is to extract meaningful insights from process-related data; in recent years it has attracted growing attention among experts, tool vendors, and researchers. Traditionally, it is assumed that event logs reflect the actual as-is situation. This is not necessarily the case, however, when process-relevant events are recorded manually. One example is the hospital, where staff usually document treatments by hand; forgotten or erroneous entries in event logs cannot be ruled out in such settings. This technical report presents a method that uses the knowledge contained in process models and historical data to repair missing entries in event logs, thereby easing the analysis of incomplete event logs. The repair is carried out with a combination of stochastic Petri nets, alignments, and Bayesian networks. The results are evaluated on synthetic data and on real data from a Dutch hospital.
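The repair method itself combines stochastic Petri nets, alignments, and Bayesian networks; the core probabilistic idea, inferring a missing timestamp from its observed neighbors under a duration model, can be sketched much more simply. The following toy example assumes Gaussian activity durations with invented parameters; it is not the paper's algorithm.

```python
# A minimal sketch of the probabilistic repair idea: if activity durations
# are (assumed) normally distributed, a missing event's timestamp between
# two observed neighbors can be estimated by conditioning. All numbers and
# activity names are illustrative.

import numpy as np

# Observed: event A at t=0 h, event C at t=10 h; event B between them is missing.
t_a, t_c = 0.0, 10.0
mu_ab, var_ab = 4.0, 1.0   # assumed duration model for A -> B
mu_bc, var_bc = 5.0, 2.0   # assumed duration model for B -> C

# For Gaussian durations, the posterior of t_b given t_a and t_c is Gaussian;
# its mean is a precision-weighted combination of the two one-sided estimates.
prec_ab, prec_bc = 1.0 / var_ab, 1.0 / var_bc
t_b_mean = ((t_a + mu_ab) * prec_ab + (t_c - mu_bc) * prec_bc) / (prec_ab + prec_bc)
t_b_var = 1.0 / (prec_ab + prec_bc)

print(f"repaired timestamp for B: {t_b_mean:.2f} h (sd {np.sqrt(t_b_var):.2f} h)")
```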
3

Bayesian cognitive modeling of the balancing between goal-directed and habitual behavior

Schwöbel, Sarah 05 November 2020
This thesis proposes a novel way to describe habit learning and the resulting balance between goal-directed and habitual behavior using cognitive computational modeling. The approach builds on experimental evidence that habits may be understood as context-dependent, automated sequences of behavior embedded in a hierarchical model. These assumptions were implemented in a Bayesian model in which goal-directed action sequences are encoded using a Markov decision process and habits arise from a Bayesian prior over such sequences. Simulations show that this modeling approach yields key properties of habit learning, such as increased habit strength with increased training duration. This novel mechanistic description may lead to an improved understanding of habit-learning mechanisms and individual learning trajectories, with possible implications for mental disorders that are believed to be accompanied by a maladaptive balance between goal-directed and habitual control. / This thesis presents a new mechanistic description of habit learning and the resulting balance between goal-directed and habitual behavior, built on a mathematical cognitive model. The approach rests on experimental evidence that habits can be understood as context-dependent, automated behavioral sequences embedded in a hierarchical model. These assumptions are implemented mathematically in a Bayesian model in which goal-directed action is realized as a Markov decision process and habits arise from a Bayesian prior probability over behavioral sequences. Simulations show that this approach reproduces key properties of habit learning, for example that longer training durations lead to stronger habits. This new mechanistic description may contribute to a better understanding of individual learning trajectories and of the mechanisms underlying habit learning. It could also have implications for the understanding of mental disorders in which a maladaptive balance between goal-directed and habitual behavior is assumed.
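The interplay of goal-directed evidence and a habit prior described above can be illustrated with a toy sketch: the posterior over two policies is the product of a utility-based likelihood and a prior built from past choice counts, so repetition alone concentrates the prior. The utilities, counts, and update rule below are illustrative assumptions, not the thesis's full hierarchical model.

```python
# Toy sketch: habits as a prior over action sequences ("policies") that is
# combined with goal-directed evidence. Repetition strengthens the prior,
# mimicking increased habit strength with training duration.

import numpy as np

utilities = np.array([1.0, 0.8])   # goal-directed value of each policy (invented)
counts = np.array([1.0, 1.0])      # pseudo-counts of past policy choices

rng = np.random.default_rng(0)
for trial in range(200):
    likelihood = np.exp(utilities)         # goal-directed evidence
    prior = counts / counts.sum()          # habit: prior from past choices
    posterior = likelihood * prior
    posterior /= posterior.sum()
    choice = rng.choice(2, p=posterior)
    counts[choice] += 1                    # repetition strengthens the habit

print("habit strength (prior) after training:", counts / counts.sum())
```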
4

Application of Saliency Maps for Optimizing Camera Positioning in Deep Learning Applications

Wecke, Leonard-Riccardo Hans 05 January 2024
In the fields of process control engineering and robotics, especially in automatic control, optimization challenges frequently manifest as complex problems with expensive evaluations. This thesis zeroes in on one such problem: the optimization of camera positions for Convolutional Neural Networks (CNNs). CNNs attend to specific points in images that are often not intuitive to human perception, making camera placement critical for performance. The research is guided by two primary questions. The first investigates the role of Explainable Artificial Intelligence (XAI), specifically GradCAM++ visual explanations, in computer vision for aiding the evaluation of different camera positions. Building on this, the second question assesses a novel algorithm that leverages these XAI features against traditional black-box optimization methods. To answer these questions, the study employs a robotic auto-positioning system for data collection, CNN model training, and performance evaluation. A case study on classifying flow regimes in industrial-grade bioreactors validates the method. The proposed approach shows improvements over established techniques such as grid search, random search, Bayesian optimization, and simulated annealing. Future work will focus on gathering more data and including noise to support generalized conclusions.
Contents:
1 Introduction: Motivation; Problem Analysis; Research Question; Structure of the Thesis
2 State of the Art: Literature Research Methodology (Search Strategy; Inclusion and Exclusion Criteria); Blackbox Optimization; Mathematical Notation; Bayesian Optimization; Simulated Annealing; Random Search; Gridsearch; Explainable A.I. and Saliency Maps; Flowregime Classification in Stirred Vessels; Performance Metrics (R2 Score and Polynomial Regression for Experiment Data Analysis; Blackbox Optimization Performance Metrics; CNN Performance Metrics)
3 Methodology: Requirement Analysis and Research Hypothesis; Research Approach: Case Study; Data Collection; Evaluation and Justification
4 Concept: System Overview; Data Flow; Experimental Setup; Optimization Challenges and Approaches
5 Data Collection and Experimental Setup: Hardware Components; Data Recording and Design of Experiments; Data Collection; Post-Experiment
6 Implementation: Simulation Unit; Recommendation Scalar from Saliency Maps; Saliency Map Features as Guidance Mechanism; GradCam++ Enhanced Bayesian Optimization; Benchmarking Unit; Benchmarking
7 Results and Evaluation: Experiment Data Analysis; Recommendation Scalar; Benchmarking Results and Quantitative Analysis (Accuracy Results from the Benchmarking Process; Cumulative Results Interpretation; Analysis of Variability); Answering the Research Questions; Summary
8 Discussion: Critical Examination of Limitations; Discussion of Solutions to Limitations; Practice-Oriented Discussion of Findings
9 Summary and Outlook
/ In the field of process control engineering and robotics, especially in automatic control, complex optimization problems frequently arise. This thesis focuses on optimizing camera placement in applications that use Convolutional Neural Networks (CNNs). Since CNNs highlight specific image features that are not always evident to humans, an intuitively chosen camera position is often not optimal. Two research questions guide this work: the first examines the role of Explainable Artificial Intelligence (XAI) in computer vision in providing features for evaluating camera positions; the second compares an algorithm built on these features with other black-box optimization techniques. A robotic auto-positioning system is used for data collection and experiments. As a solution, a method is presented that combines XAI features, in particular insights from GradCAM++, with a Bayesian optimization algorithm. This method is applied in a case study on classifying flow regimes in industrial bioreactors and shows improved performance compared with established methods. Future research will focus on collecting additional data, including noisy data, and consulting experts for a more cost-effective implementation.
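To make the proposed combination concrete, here is a hedged sketch of Bayesian optimization over a single camera angle in which a saliency-derived recommendation scalar reweights a standard UCB acquisition function. The objective, the saliency_score stand-in, and the weighting scheme are assumptions for illustration, not the thesis's implementation.

```python
# Sketch: saliency-biased Bayesian optimization of one camera angle.
# cnn_accuracy() and saliency_score() are invented stand-ins for the
# expensive CNN evaluation and a GradCAM++-based recommendation scalar.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def cnn_accuracy(angle):          # stand-in for training/evaluating a CNN
    return np.exp(-((angle - 42.0) / 20.0) ** 2)

def saliency_score(angle):        # stand-in for a saliency-derived scalar
    return np.exp(-((angle - 50.0) / 30.0) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 90, size=(3, 1))            # initial camera angles (degrees)
y = np.array([cnn_accuracy(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=15.0), alpha=1e-3)
candidates = np.linspace(0, 90, 181).reshape(-1, 1)

for _ in range(10):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    ucb = mu + 1.5 * sigma                      # standard UCB acquisition
    guided = ucb * (0.5 + 0.5 * saliency_score(candidates.ravel()))  # saliency bias
    x_next = candidates[np.argmax(guided)]
    X = np.vstack([X, x_next])
    y = np.append(y, cnn_accuracy(x_next[0]))

print(f"best angle found: {X[np.argmax(y)][0]:.1f} deg, accuracy {y.max():.3f}")
```

The multiplicative reweighting is one simple way to inject XAI guidance; keeping the 0.5 floor prevents the saliency term from completely suppressing exploration of low-saliency regions.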
5

Quantifying and mathematical modelling of the influence of soluble adenylate cyclase on cell cycle in human endothelial cells with Bayesian inference

Woranush, Warunya, Moskopp, Mats Leif, Noll, Thomas, Dieterich, Peter 22 April 2024
Adenosine-3′,5′-cyclic monophosphate (cAMP) produced by adenylate cyclases (ADCYs) is an established key regulator of cell homoeostasis; its role in cell cycle control, however, remains controversial. This study focussed on the impact of soluble HCO3−-activated ADCY10 on cell cycle progression. Effects were quantified with Bayesian inference, integrating a mathematical model and experimental data. The activity of ADCY10 in human umbilical vein endothelial cells (HUVECs) was either pharmacologically inhibited by KH7 or endogenously activated by HCO3−. Cell numbers in individual cell cycle phases were assessed over time using flow cytometry. Based on these numbers, cell cycle dynamics were analysed with a mathematical model, allowing precise quantification via model parameters that describe the durations of the individual cell cycle phases. Endogenous inactivation of ADCY10 prolonged mean cell cycle times (38.7 ± 8.3 h at 0 mM HCO3− vs 30.3 ± 2.7 h at 24 mM HCO3−), while pharmacological inhibition led to a functional cell cycle arrest, increasing the mean cell cycle time after G0/G1 synchronization to 221.0 ± 96.3 h. All cell cycle phases progressed more slowly upon ADCY10 inactivation; the G1-S transition was quantitatively the most influenced by ADCY10. In conclusion, the data of the present study show that ADCY10 is a key regulator of cell cycle progression, linked specifically to the G1-S transition.
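The kind of model behind such phase-duration estimates can be sketched as a simple compartment model: one pool of cells per cell cycle phase, exponentially distributed phase exit times, and a doubling of cells re-entering G1 at mitosis. The durations below are hypothetical; the study's actual model and Bayesian inference are more detailed.

```python
# Toy compartment model of cell cycle progression (G1 -> S -> G2/M -> 2x G1).
# Mean phase durations are invented; exponential dwell times are an assumption.

import numpy as np

tau = np.array([12.0, 8.0, 5.0])   # mean durations (h) of G1, S, G2/M
k = 1.0 / tau                      # exit rate of each phase

n = np.array([1000.0, 0.0, 0.0])   # start: all cells synchronized in G1
dt, t_end = 0.01, 60.0
for _ in range(int(t_end / dt)):
    flux = k * n                   # cells leaving each phase per hour
    dn = np.array([
        2.0 * flux[2] - flux[0],   # mitosis doubles cells re-entering G1
        flux[0] - flux[1],
        flux[1] - flux[2],
    ])
    n += dt * dn                   # simple Euler step

print("phase fractions after 60 h:", n / n.sum())
print("mean cycle time:", tau.sum(), "h")
```

Fitting the phase fractions over time to flow cytometry counts is what lets Bayesian inference recover the phase durations, as the abstract describes.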
6

Revealing human sensitivity to a latent temporal structure of changes

Marković, Dimitrije, Reiter, Andrea M. F., Kiebel, Stefan J. 22 May 2024
Precisely timed behavior and accurate time perception play a critical role in our everyday lives, as our wellbeing and even survival can depend on well-timed decisions. Although the temporal structure of the world around us is essential for human decision making, we know surprisingly little about how the representation of the temporal structure of our everyday environment impacts decision making. How does the representation of temporal structure affect our ability to generate well-timed decisions? Here we address this question using a well-established dynamic probabilistic learning task. Using computational modeling, we found that human subjects' beliefs about temporal structure are reflected in their choices to either exploit their current knowledge or explore novel options. The model-based analysis reveals large within-group and within-subject heterogeneity. To explain these results, we propose a normative model of how temporal structure is used in decision making, based on the semi-Markov formalism in the active inference framework. We discuss potential key applications of the presented approach in the fields of cognitive phenotyping and computational psychiatry.
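A minimal illustration of the semi-Markov formalism mentioned here: unlike a plain Markov chain, each hidden state carries an explicit dwell-time distribution, so the environment's changes have a latent temporal structure. The two states, gamma dwell-time parameters, and switching rule below are illustrative assumptions.

```python
# Sketch of a semi-Markov environment: states persist for an explicitly
# distributed dwell time before switching. All parameters are invented.

import numpy as np

rng = np.random.default_rng(1)
states = ["stable", "volatile"]
transition = np.array([[0.0, 1.0],     # after its dwell time, each state
                       [1.0, 0.0]])    # switches to the other one
dwell_shape, dwell_scale = [20.0, 5.0], [1.0, 1.0]  # gamma dwell-time params

t, s, trajectory = 0.0, 0, []
while t < 200.0:
    dwell = rng.gamma(dwell_shape[s], dwell_scale[s])  # time spent in state s
    trajectory.append((states[s], t, t + dwell))
    t += dwell
    s = rng.choice(2, p=transition[s])

for name, t0, t1 in trajectory:
    print(f"{name:9s} from {t0:6.1f} to {t1:6.1f}")
```

An agent that represents these dwell-time distributions can anticipate when a change is due, rather than merely reacting to it, which is the sensitivity the study probes.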
7

Intact Context-Dependent Modulation of Conflict Monitoring in Childhood ADHD

Bluschke, Annet, Chmielewski, Witold X., Roessner, Veit, Beste, Christian 18 May 2022
Objective: Conflict monitoring is well known to be modulated by context. This is known as the Gratton effect: the degree of interference is smaller when a stimulus–response conflict has been encountered on the preceding trial. It is unclear to what extent these processes are changed in ADHD. Method: Children with ADHD (combined subtype) and healthy controls performed a modified version of the sequence flanker task. Results: Patients with ADHD made significantly more errors than healthy controls, indicating general performance deficits. However, there were no differences in reaction times, indicating an intact Gratton effect in ADHD. These results were supported by Bayesian statistics. Conclusion: The results suggest that the ability to take contextual information into account during conflict monitoring is preserved in patients with ADHD, even though the disorder is associated with changes in executive control functions overall. These findings are discussed in light of different theoretical accounts of contextual modulations of conflict monitoring. (J. of Att. Dis. 2020; 24(11) 1503-1510)
8

Stochastic Motion Stimuli Influence Perceptual Choices in Human Participants

Fard, Pouyan R., Bitzer, Sebastian, Pannasch, Sebastian, Kiebel, Stefan J. 22 March 2024
In the study of perceptual decision making, it has been widely assumed that random fluctuations of motion stimuli are irrelevant for a participant’s choice. Recently, evidence was presented that these random fluctuations have a measurable effect on the relationship between neuronal and behavioral variability, the so-called choice probability. Here, we test, in a behavioral experiment, whether stochastic motion stimuli influence the choices of human participants. Our results show that for specific stochastic motion stimuli, participants indeed make biased choices, where the bias is consistent over participants. Using a computational model, we show that this consistent choice bias is caused by subtle motion information contained in the motion noise. We discuss the implications of this finding for future studies of perceptual decision making. Specifically, we suggest that future experiments should be complemented with a stimulus-informed modeling approach to control for the effects of apparent decision evidence in random stimuli.
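The logic of the analysis, relating random per-frame stimulus fluctuations to choices, can be sketched with simulated data: a psychophysical-kernel-style comparison of mean motion energy preceding each choice. Everything below is simulated and illustrative; the study's stimuli and model-based analysis are more involved.

```python
# Toy sketch: correlate trial-by-trial random motion fluctuations
# ("motion energy") with binary choices of a simulated observer.

import numpy as np

rng = np.random.default_rng(2)
n_trials, n_frames = 500, 60

# Zero-coherence stimuli: per-frame motion energy is pure noise.
motion_energy = rng.normal(0.0, 1.0, size=(n_trials, n_frames))

# Simulated observer: integrates the frames plus internal noise.
evidence = motion_energy.sum(axis=1) + rng.normal(0.0, 4.0, size=n_trials)
choices = evidence > 0

# Psychophysical-kernel-style check: mean energy preceding each choice.
bias = motion_energy[choices].mean() - motion_energy[~choices].mean()
print(f"mean motion-energy difference between choices: {bias:.3f}")
```

A systematically nonzero difference indicates that the "random" stimulus noise carries decision evidence, which is the effect the study demonstrates in human participants.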
9

Comparative Analysis of Behavioral Models for Adaptive Learning in Changing Environments

Marković, Dimitrije, Kiebel, Stefan J. 16 January 2017
Probabilistic models of decision making under various forms of uncertainty have been applied in recent years to numerous behavioral and model-based fMRI studies. These studies were highly successful in enabling a better understanding of behavior and delineating the functional properties of brain areas involved in decision making under uncertainty. However, as different studies considered different models of decision making under uncertainty, it is unclear which of these computational models provides the best account of the observed behavioral and neuroimaging data. This is an important issue, as not performing model comparison may tempt researchers to over-interpret results based on a single model. Here we describe how one can, in practice, compare different behavioral models and test the accuracy of model comparison and parameter estimation for Bayesian and maximum-likelihood-based methods. We focus our analysis on two well-established hierarchical probabilistic models that aim at capturing the evolution of beliefs in changing environments: Hierarchical Gaussian Filters and Change Point Models. To our knowledge, these two well-established models have never been compared on the same data. We demonstrate, using simulated behavioral experiments, that one can accurately disambiguate between these two models and accurately infer free model parameters and hidden belief trajectories (e.g., posterior expectations, posterior uncertainties, and prediction errors) even when using noisy and highly correlated behavioral measurements. Importantly, we found several advantages of Bayesian inference and Bayesian model comparison over often-used maximum-likelihood schemes combined with the Bayesian Information Criterion. These results stress the relevance of Bayesian data analysis for model-based neuroimaging studies that investigate human decision making under uncertainty.
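The contrast the authors draw between Bayesian model comparison and maximum-likelihood scoring with BIC can be illustrated on a toy problem: BIC penalizes the maximized likelihood by a parameter-count term, whereas the Bayesian evidence integrates the likelihood over the prior. The two Gaussian toy models and the grid integral below are illustrative assumptions, not the models compared in the paper.

```python
# Toy contrast of BIC-based and fully Bayesian model scoring for two
# candidate models of the same data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.normal(0.4, 1.0, size=50)   # generated by a nonzero-mean process
n = len(data)

# Model 1: fixed zero mean, unit variance (no free parameters).
logl_m1 = stats.norm.logpdf(data, 0.0, 1.0).sum()
bic_m1 = -2 * logl_m1                  # k = 0 parameters

# Model 2: free mean mu, unit variance, prior mu ~ N(0, 1).
mu_hat = data.mean()
logl_m2 = stats.norm.logpdf(data, mu_hat, 1.0).sum()
bic_m2 = -2 * logl_m2 + 1 * np.log(n)  # k = 1 parameter

# Bayesian evidence for model 2: integrate the likelihood over the prior.
mu_grid = np.linspace(-3, 3, 2001)
log_joint = (stats.norm.logpdf(data[:, None], mu_grid, 1.0).sum(axis=0)
             + stats.norm.logpdf(mu_grid, 0.0, 1.0))
dmu = mu_grid[1] - mu_grid[0]
log_ev_m2 = np.log(np.exp(log_joint - log_joint.max()).sum() * dmu) + log_joint.max()

print(f"BIC: M1 {bic_m1:.1f} vs M2 {bic_m2:.1f} (lower wins)")
print(f"log evidence: M1 {logl_m1:.1f} vs M2 {log_ev_m2:.1f} (higher wins)")
```

For M1 the evidence equals the likelihood, since there are no parameters to integrate out; for richer hierarchical models the BIC approximation and the true evidence can disagree, which is one source of the advantages the abstract reports for Bayesian model comparison.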
