31. Search for the Standard Model Higgs boson in the dimuon decay channel with the ATLAS detector
Rudolph, Christian, 02 October 2014
The search for the Standard Model Higgs boson of particle physics was one of the main motivations for building the Large Hadron Collider (LHC), currently the world's largest particle physics experiment. This thesis is driven by that same search: the direct decay of the Higgs boson into muons is investigated. This channel has several advantages. The final state, consisting of two muons of opposite charge, is easy to detect and has a clear signature, and the mass resolution is excellent, so that a potential resonance can immediately be characterised in its most fundamental property, its mass. Unfortunately, the decay of the Higgs boson into a pair of muons is very rare: only about 2 out of 10000 produced Higgs bosons exhibit this final state. In addition, the Standard Model process Z/γ∗ → μμ yields a decay with a very similar signature but an occurrence probability that is orders of magnitude higher; for every Higgs boson produced at the LHC at a centre-of-mass energy of 8 TeV, roughly 1.5 million Z bosons are produced.
Two closely related analyses are presented in this thesis. The first is the analysis of the proton-proton collision dataset recorded by the ATLAS detector in 2012 at a centre-of-mass energy of 8 TeV, referred to as the standalone analysis. The second is the combined analysis of the full Run-I dataset, consisting of proton-proton collision data recorded in 2011 and 2012 at centre-of-mass energies of 7 TeV and 8 TeV, respectively. In both cases the dimuon invariant mass distribution is searched for a narrow resonance signature on top of the continuous background distribution. The theoretically expected mass distribution and the mass resolution of the ATLAS detector serve as the basis for developing analytical parametrisations of the signal and background distributions. In this way, the impact of systematic uncertainties arising from an imprecise description of the spectra in Monte Carlo simulations is reduced. Remaining systematic uncertainties on the signal acceptance are determined in a novel way. In addition, a hitherto unique approach is used to assess the systematic uncertainty resulting from the choice of the background parametrisation in the combined analysis: for the first time, the spurious signal method is applied to a generator-level simulated background sample, which allows the influence of the background model on the number of extracted signal events to be determined with unprecedented precision.
Neither analysis reveals a significant excess in the dimuon invariant mass spectrum, so upper exclusion limits are set on the signal strength μ = σ/σ(SM) as a function of the Higgs boson mass. Signal strengths of μ ≥ 10.13 and μ ≥ 7.05 are excluded at the 95% confidence level by the standalone and combined analysis, respectively, in each case for a Higgs boson mass of 125.5 GeV.
The results are also interpreted in view of the recent discovery of the new particle whose properties are compatible with the predictions for a Standard Model Higgs boson with a mass of about 125.5 GeV. In this context, upper limits are set on the branching ratio, BR(H → μμ) ≤ 1.3 × 10^−3, and on the muon Yukawa coupling, λμ ≤ 1.6 × 10^−3, each at the 95% confidence level. / The search for the Standard Model Higgs boson was one of the key motivations to build the world's largest particle physics experiment to date, the Large Hadron Collider (LHC). This thesis is equally driven by this search, and it investigates the direct muonic decay of the Higgs boson. The decay into muons has several advantages: it provides a very clear final state with two muons of opposite charge, which can easily be detected.
In addition, the muonic final state has an excellent mass resolution, such that an observed resonance can be pinned down in one of its key properties: its mass. Unfortunately, the decay of a Standard Model Higgs boson into a pair of muons is very rare: only about two out of 10000 Higgs bosons are predicted to exhibit this decay. On top of that, the non-resonant Standard Model background arising from the Z/γ∗ → μμ process has a very similar signature while possessing a much higher cross-section. For every Higgs boson produced at the LHC at a centre-of-mass energy of 8 TeV, approximately 1.5 million Z bosons are produced. Two related analyses are presented in this thesis: the investigation of 20.7 fb^−1 of proton-proton collision data recorded by the ATLAS detector in 2012, referred to as the standalone analysis, and the combined analysis, a search in the full Run-I dataset consisting of proton-proton collision data recorded in 2011 and 2012, which corresponds to an integrated luminosity of L = 24.8 fb^−1.
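To put these numbers in perspective, the expected signal yield follows from N = L · σ · BR. A rough, illustrative estimate (the Higgs production cross-section used here is an assumed round number, not a value quoted from the thesis):

```latex
% illustrative only: sigma_H at 8 TeV is assumed to be roughly 20 pb
N_{\mathrm{sig}} \approx \mathcal{L}\,\sigma_H\,\mathrm{BR}(H\to\mu\mu)
                \approx 20.7\,\mathrm{fb}^{-1} \times 20\,\mathrm{pb} \times 2\times 10^{-4}
                \approx 80\ \text{events},
```

which is tiny compared to the millions of Z → μμ events in the same mass region and motivates the careful background modelling described below.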
In each case, the dimuon invariant mass spectrum is examined for a narrow resonance on top of the continuous background distribution. The dimuon phenomenology and the ATLAS detector performance serve as the foundations for developing analytical models describing the spectra. Using these analytical parametrisations for the signal and background mass distributions, the sensitivity of the analyses to systematic uncertainties due to Monte Carlo simulation mismodelling is minimised. The remaining systematic uncertainties are addressed in a novel way as signal acceptance uncertainties. In addition, a new approach to assess the systematic uncertainty associated with the choice of the background model is designed for the combined analysis.
For the first time, the spurious signal technique is applied to generator-level simulated background samples, which allows for a precise determination of the background fit bias. No statistically significant excess in the dimuon invariant mass spectrum is observed in either analysis, and upper limits are set on the signal strength μ = σ/σ(SM) as a function of the Higgs boson mass. Signal strengths of μ ≥ 10.13 and μ ≥ 7.05 are excluded for a Higgs boson mass of 125.5 GeV at the 95% confidence level by the standalone and combined analysis, respectively. In light of the discovery of a particle consistent with the predictions for a Standard Model Higgs boson with a mass of mH = 125.5 GeV, the search results are reinterpreted for this special case, setting upper limits on the Higgs boson branching ratio of BR(H → μμ) ≤ 1.3 × 10^−3 and on the muon Yukawa coupling of λμ ≤ 1.6 × 10^−3, both at the 95% confidence level.
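The spurious-signal idea mentioned above can be illustrated with a small toy study: fit a signal-plus-background model to a smooth, essentially statistics-free background-only spectrum and read off the fitted signal yield as the bias induced by the background parametrisation. The following sketch uses generic toy shapes and numbers, not the parametrisations of the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy "generator-level" background: a smooth, high-statistics dimuon mass spectrum.
edges = np.linspace(110.0, 160.0, 101)                    # GeV, 0.5 GeV bins
centres = 0.5 * (edges[:-1] + edges[1:])
true_bkg = 1.0e6 * np.exp(-0.04 * (centres - 110.0)) * (1.0 + 0.002 * (centres - 110.0))

def sig_plus_bkg(m, n_sig, a, b, m_h=125.0, sigma=3.0):
    """Gaussian signal of fixed mass and width on top of a simple exponential background."""
    bin_width = edges[1] - edges[0]
    signal = n_sig * bin_width * np.exp(-0.5 * ((m - m_h) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    background = a * np.exp(-b * (m - 110.0))
    return signal + background

# Fit the (deliberately imperfect) signal-plus-background model to the background-only template.
popt, _ = curve_fit(sig_plus_bkg, centres, true_bkg, p0=[0.0, true_bkg[0], 0.04])

print(f"spurious signal at m_H = 125 GeV: {popt[0]:.0f} fitted events from background mismodelling alone")
```

Any non-zero fitted yield comes purely from the mismatch between the true background shape and the analytic model, which is exactly the bias the technique quantifies.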
32. Braindump – Konzept eines visuellen Systems für das Wissensmanagement am Beispiel der Verwaltung von Internetquellen
Brade, Marius; Lamack, Frank; Groh, Rainer, 16 May 2014
No description available.
33. Search for neutral MSSM Higgs bosons in the fully hadronic di-tau decay channel with the ATLAS detector
Wahrmund, Sebastian, 18 July 2017
The search for additional heavy neutral Higgs bosons predicted in Minimal Supersymmetric Extensions of the Standard Model is presented, using the direct decay channel into two tau leptons which themselves decay hadronically. The study is based on proton-proton collisions recorded in 2011 at a center-of-mass energy of 7 TeV with the ATLAS detector at the Large Hadron Collider at CERN. With a sample size corresponding to an integrated luminosity of 4.5 fb^−1, no significant excess above the expected Standard Model background prediction is observed, and CLs exclusion limits at a 95% confidence level are evaluated for values of the CP-odd Higgs boson mass mA between 140 GeV and 800 GeV within the context of the m_h^max and m_h^mod± benchmark scenarios. The results are combined with searches for neutral Higgs bosons performed using proton-proton collisions at a center-of-mass energy of 8 TeV recorded with the ATLAS detector in 2012, with a corresponding integrated luminosity of 19.5 fb^−1. The combination improves the exclusion limits by 1 to 3 units in tan β.
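For reference, the CLs criterion behind such exclusion limits is the standard one (a general definition, not specific to this analysis): a signal hypothesis is excluded at the 95% confidence level when

```latex
\mathrm{CL}_s \;=\; \frac{\mathrm{CL}_{s+b}}{\mathrm{CL}_b} \;\le\; 0.05 ,
```

where CL_{s+b} and CL_b are the p-values of the observed test statistic under the signal-plus-background and background-only hypotheses, respectively.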
Within the context of this study, the structure of additional interactions during a single proton-proton collision (the “underlying event”) in di-jet final states is analyzed using collision data at a center-of-mass energy of 7 TeV recorded with the ATLAS detector in 2010, corresponding to an integrated luminosity of 37 pb^−1. The contribution of the underlying event is measured up to an energy scale of 800 GeV and compared to the predictions of various models. For several models, significant deviations from the measurements are found, and the results are provided as input for the optimization of simulation algorithms.
34. Braindump – Konzept eines visuellen Systems für das Wissensmanagement am Beispiel der Verwaltung von Internetquellen
Brade, Marius; Lamack, Frank; Groh, Rainer, January 2009
No description available.
35. Unscharfe Suche für Terme geringer Frequenz in einem großen Korpus / Fuzzy Search for Infrequent Terms in a Large Corpus
Gerhards, Karl, 10 January 2011
Until now, infrequent terms have been neglected in search in order to save time and memory. With the help of a cascaded index and the algorithms introduced here, such compromises are no longer necessary. A fast and efficient method was developed to find all terms in the largest freely available corpus of German-language texts by exact search, part-word search and fuzzy search. The process can be extended to include transliterated passages. In addition, documents that contain a term with a modified spelling can also be found by fuzzy search. Time and memory requirements are determined and fall considerably below those of common search engines.
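To illustrate the fuzzy-search component, a query term can be matched against an inverted index while tolerating a bounded number of spelling differences. This is a generic edit-distance lookup, not the cascaded index developed in the thesis:

```python
from collections import defaultdict

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed with the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution / match
        prev = curr
    return prev[-1]

def build_index(documents):
    """Map every term to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def fuzzy_lookup(index, query, max_dist=1):
    """Return documents containing a term within max_dist edits of the query."""
    query = query.lower()
    hits = set()
    for term, doc_ids in index.items():
        if abs(len(term) - len(query)) <= max_dist and edit_distance(term, query) <= max_dist:
            hits |= doc_ids
    return hits

docs = {1: "Goethe schrieb den Faust", 2: "Ghoete wird oft falsch geschrieben"}
print(fuzzy_lookup(build_index(docs), "Goethe", max_dist=2))   # finds both spellings: {1, 2}
```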
36. Two-Stage Vehicle Routing Problems with Profits and Buffers: Analysis and Metaheuristic Optimization Algorithms
Le, Hoang Thanh, 09 June 2023
This thesis considers the Two-Stage Vehicle Routing Problem (VRP) with Profits and Buffers, which generalizes various optimization problems that are relevant for practical applications, such as the Two-Machine Flow Shop with Buffers and the Orienteering Problem. Two optimization problems are considered for the Two-Stage VRP with Profits and Buffers, namely the minimization of total time while respecting a profit constraint and the maximization of total profit under a budget constraint. The former generalizes the makespan minimization problem for the Two-Machine Flow Shop with Buffers, whereas the latter is comparable to the problem of maximizing score in the Orienteering Problem.
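The profit-maximization side corresponds to the classical Orienteering Problem: collect as much profit as possible while the tour stays within a travel-time budget. A minimal sketch of evaluating a candidate route on a toy instance (node coordinates, profits and the budget are made-up illustration values, not instances from the thesis):

```python
import math

# Toy instance: node -> (x, y, profit); node 0 is the start/end depot with no profit.
nodes = {0: (0.0, 0.0, 0.0), 1: (2.0, 1.0, 5.0), 2: (4.0, 0.0, 8.0), 3: (1.0, 3.0, 4.0)}
BUDGET = 12.0   # maximum allowed total travel time

def travel_time(a, b):
    ax, ay, _ = nodes[a]
    bx, by, _ = nodes[b]
    return math.hypot(ax - bx, ay - by)

def evaluate(route):
    """Return (collected profit, total travel time, feasible?) for a depot-to-depot route."""
    time = sum(travel_time(u, v) for u, v in zip(route, route[1:]))
    profit = sum(nodes[v][2] for v in route[1:-1])
    return profit, time, time <= BUDGET

print(evaluate([0, 1, 2, 0]))   # -> (13.0, ~8.47, True) on this toy instance
```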
For the three problems, a theoretical analysis is performed regarding computational complexity, existence of optimal permutation schedules (where all vehicles traverse the same nodes in the same order) and potential gaps in attainable solution quality between permutation schedules and non-permutation schedules. The obtained theoretical results are visualized in a table that gives an overview of various subproblems belonging to the Two-Stage VRP with Profits and Buffers, their theoretical properties and how they are connected.
For the Two-Machine Flow Shop with Buffers and the Orienteering Problem, two metaheuristics 2BF-ILS and VNSOP are presented that obtain favorable results in computational experiments when compared to other state-of-the-art algorithms. For the Two-Stage VRP with Profits and Buffers, an algorithmic framework for Iterative Search Algorithms with Variable Neighborhoods (ISAVaN) is proposed that generalizes aspects from 2BF-ILS as well as VNSOP. Various algorithms derived from that framework are evaluated in an experimental study. The evaluation methodology used for all computational experiments in this thesis takes the performance during the run time into account and demonstrates that algorithms for structurally different problems, which are encompassed by the Two-Stage VRP with Profits and Buffers, can be evaluated with similar methods.
The results show that the most suitable choice for the components in these algorithms is dependent on the properties of the problem and the considered evaluation criteria. However, a number of similarities to algorithms that perform well for the Two-Machine Flow Shop with Buffers and the Orienteering Problem can be identified. The framework unifies these characteristics, providing a spectrum of algorithms that can be adapted to the specifics of the considered Vehicle Routing Problem.
1 Introduction
2 Background
2.1 Problem Motivation
2.2 Formal Definition of the Two-Stage VRP with Profits and Buffers
2.3 Review of Literature on Related Vehicle Routing Problems
2.3.1 Two-Stage Vehicle Routing Problems
2.3.2 Vehicle Routing Problems with Profits
2.3.3 Vehicle Routing Problems with Capacity- or Resource-based Restrictions
2.4 Preliminary Remarks on Subsequent Chapters
3 The Two-Machine Flow Shop Problem with Buffers
3.1 Review of Literature on Flow Shop Problems with Buffers
3.1.1 Algorithms and Metaheuristics for Flow Shops with Buffers
3.1.2 Two-Machine Flow Shop Problems with Buffers
3.1.3 Blocking Flow Shops
3.1.4 Non-Permutation Schedules
3.1.5 Other Extensions and Variations of Flow Shop Problems
3.2 Theoretical Properties
3.2.1 Computational Complexity
3.2.2 The Existence of Optimal Permutation Schedules
3.2.3 The Gap Between Permutation Schedules and Non-Permutation Schedules
3.3 A Modification of the NEH Heuristic
3.4 An Iterated Local Search for the Two-Machine Flow Shop Problem with Buffers
3.5 Computational Evaluation
3.5.1 Algorithms for Comparison
3.5.2 Generation of Problem Instances
3.5.3 Parameter Values
3.5.4 Comparison of 2BF-ILS with other Metaheuristics
3.5.5 Comparison of 2BF-OPT with NEH
3.6 Summary
4 The Orienteering Problem
4.1 Review of Literature on Orienteering Problems
4.2 Theoretical Properties
4.3 A Variable Neighborhood Search for the Orienteering Problem
4.4 Computational Evaluation
4.4.1 Measurement of Algorithm Performance
4.4.2 Choice of Algorithms for Comparison
4.4.3 Problem Instances
4.4.4 Parameter Values
4.4.5 Experimental Setup
4.4.6 Comparison of VNSOP with other Metaheuristics
4.5 Summary
5 The Two-Stage Vehicle Routing Problem with Profits and Buffers
5.1 Theoretical Properties of the Two-Stage VRP with Profits and Buffers
5.1.1 Computational Complexity of the General Problem
5.1.2 Existence of Permutation Schedules in the Set of Optimal Solutions
5.1.3 The Gap Between Permutation Schedules and Non-Permutation Schedules
5.1.4 Remarks on Restricted Cases
5.1.5 Overview of Theoretical Results
5.2 A Metaheuristic Framework for the Two-Stage VRP with Profits and Buffers
5.3 Experimental Results
5.3.1 Problem Instances
5.3.2 Experimental Results for O_{max R, Cmax≤B}
5.3.3 Experimental Results for O_{min Cmax, R≥Q}
5.4 Summary
Bibliography
List of Figures
List of Tables
List of Algorithms
37. Cost-aware sequential diagnostics
Ganter, Bernhard, 19 March 2024
A simple search problem is studied in which a binary n-tuple is to be found in a list, by sequential bit comparisons with cost. The problem can be solved (for small n) using dynamic programming. We show how the “bottom up” part of the algorithm can be organized by means of Formal Concept Analysis.
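One plausible formalization (an illustration, not necessarily the exact setting of the paper): given a list of binary n-tuples and a cost for comparing each bit position, find the cheapest strategy that identifies any tuple in the list by querying bits one at a time. For small n this can be solved by dynamic programming over the sets of still-possible tuples, which is the kind of “bottom up” computation referred to above:

```python
from functools import lru_cache

tuples = ((0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1))    # the list of binary n-tuples
costs = (3.0, 1.0, 2.0)                                  # cost of comparing each bit position

@lru_cache(maxsize=None)
def worst_case_cost(candidates):
    """Minimal worst-case comparison cost to single out one tuple from the candidate set."""
    if len(candidates) <= 1:
        return 0.0
    best = float("inf")
    for bit, cost in enumerate(costs):
        zeros = frozenset(t for t in candidates if t[bit] == 0)
        ones = candidates - zeros
        if not zeros or not ones:       # this bit does not split the candidates; skip it
            continue
        best = min(best, cost + max(worst_case_cost(zeros), worst_case_cost(ones)))
    return best

print(worst_case_cost(frozenset(tuples)))   # -> 3.0 for this toy instance
```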
38. Application of Saliency Maps for Optimizing Camera Positioning in Deep Learning Applications
Wecke, Leonard-Riccardo Hans, 05 January 2024
In the fields of process control engineering and robotics, especially in automatic control, optimization challenges frequently manifest as complex problems with expensive evaluations. This thesis zeroes in on one such problem: the optimization of camera positions for Convolutional Neural Networks (CNNs). CNNs have specific attention points in images that are often not intuitive to human perception, making camera placement critical for performance.
The research is guided by two primary questions. The first investigates the role of Explainable Artificial Intelligence (XAI), specifically GradCAM++ visual explanations, in Computer Vision for aiding in the evaluation of different camera positions. Building on this, the second question assesses a novel algorithm that leverages these XAI features against traditional black-box optimization methods.
To answer these questions, the study employs a robotic auto-positioning system for data collection, CNN model training, and performance evaluation. A case study focused on classifying flow regimes in industrial-grade bioreactors validates the method. The proposed approach shows improvements over established techniques like Grid Search, Random Search, Bayesian optimization, and Simulated Annealing. Future work will focus on gathering more data and including noise for generalized conclusions.
Contents:
1 Introduction
1.1 Motivation
1.2 Problem Analysis
1.3 Research Question
1.4 Structure of the Thesis
2 State of the Art
2.1 Literature Research Methodology
2.1.1 Search Strategy
2.1.2 Inclusion and Exclusion Criteria
2.2 Blackbox Optimization
2.3 Mathematical Notation
2.4 Bayesian Optimization
2.5 Simulated Annealing
2.6 Random Search
2.7 Gridsearch
2.8 Explainable A.I. and Saliency Maps
2.9 Flowregime Classification in Stirred Vessels
2.10 Performance Metrics
2.10.1 R2 Score and Polynomial Regression for Experiment Data Analysis
2.10.2 Blackbox Optimization Performance Metrics
2.10.3 CNN Performance Metrics
3 Methodology
3.1 Requirement Analysis and Research Hypothesis
3.2 Research Approach: Case Study
3.3 Data Collection
3.4 Evaluation and Justification
4 Concept
4.1 System Overview
4.2 Data Flow
4.3 Experimental Setup
4.4 Optimization Challenges and Approaches
5 Data Collection and Experimental Setup
5.1 Hardware Components
5.2 Data Recording and Design of Experiments
5.3 Data Collection
5.4 Post-Experiment
6 Implementation
6.1 Simulation Unit
6.2 Recommendation Scalar from Saliency Maps
6.3 Saliency Map Features as Guidance Mechanism
6.4 GradCam++ Enhanced Bayesian Optimization
6.5 Benchmarking Unit
6.6 Benchmarking
7 Results and Evaluation
7.1 Experiment Data Analysis
7.2 Recommendation Scalar
7.3 Benchmarking Results and Quantitative Analysis
7.3.1 Accuracy Results from the Benchmarking Process
7.3.2 Cumulative Results Interpretation
7.3.3 Analysis of Variability
7.4 Answering the Research Questions
7.5 Summary
8 Discussion
8.1 Critical Examination of Limitations
8.2 Discussion of Solutions to Limitations
8.3 Practice-Oriented Discussion of Findings
9 Summary and Outlook / In process control engineering and robotics, especially in automatic control, complex optimization problems frequently arise. This thesis concentrates on optimizing camera placement in applications that use Convolutional Neural Networks (CNNs). Since CNNs highlight specific features in images that are not always evident to human perception, an intuitively chosen camera position is often not optimal.
Two research questions guide this work: the first investigates the role of Explainable Artificial Intelligence (XAI) in computer vision for providing features with which camera positions can be evaluated; the second compares an algorithm based on these features with other black-box optimization techniques. A robotic auto-positioning system is used for data collection and for the experiments.
As a solution, a method is presented that combines XAI features, in particular those derived from GradCAM++ explanations, with a Bayesian optimization algorithm. The method is applied in a case study on classifying flow regimes in industrial-grade bioreactors and shows improved performance compared to established methods. Future research will focus on collecting more data, including noisy data, and consulting experts to enable a more cost-effective implementation.
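For comparison, the generic black-box loop that such a saliency-guided search is benchmarked against can be sketched in a few lines of Gaussian-process Bayesian optimization with an expected-improvement acquisition. This is a generic illustration, not the implementation of the thesis; the objective standing in for "CNN accuracy at a given camera angle" is a made-up function:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def objective(angle):
    """Stand-in for the CNN accuracy obtained at one camera angle (illustrative only)."""
    return float(np.exp(-0.5 * ((angle - 42.0) / 15.0) ** 2) + 0.02 * rng.normal())

low, high = 0.0, 90.0
X = list(rng.uniform(low, high, size=3))        # a few initial random camera angles
y = [objective(x) for x in X]
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):
    gp.fit(np.array(X).reshape(-1, 1), np.array(y))
    grid = np.linspace(low, high, 500).reshape(-1, 1)
    mu, sigma = gp.predict(grid, return_std=True)
    improvement = mu - max(y)
    z = np.divide(improvement, sigma, out=np.zeros_like(sigma), where=sigma > 0)
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = float(grid[np.argmax(ei)][0])                 # evaluate the most promising angle next
    X.append(x_next)
    y.append(objective(x_next))

print(f"best angle found: {X[int(np.argmax(y))]:.1f} deg with score {max(y):.3f}")
```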
39. Are YouTube videos on cutaneous squamous cell carcinoma a useful and reliable source for patients?
Reinhardt, Lydia; Steeb, Theresa; Harlaß, Matthias; Brütting, Julia; Meier, Friedegund; Berking, Carola, 21 May 2024
A variety of new treatment options for skin cancer patients drives the need for information and education, which is increasingly met by videos and websites [1, 2]. However, distinguishing between high- and low-quality content becomes more difficult as the number of videos increases. Recently, videos addressing patients with melanoma or basal cell carcinoma (BCC) were found to be of predominantly mediocre quality and poor reliability [3, 4]. Until now, no evaluation of videos on cutaneous squamous cell carcinoma (cSCC) has been performed, and no patient guideline currently exists for this entity [5–7]. Therefore, we aimed to systematically identify and evaluate videos on cSCC, the second most common type of skin cancer worldwide after BCC [8]. Our results will contribute to shared decision-making and help physicians and patients to select high-quality videos.
40. Psychological process models and aggregate behavior
Analytis, Pantelis Pipergias, 17 September 2015
This dissertation comprises three independent essays that introduce new process models inspired by research on the psychology of decision making. In the first essay, decision processes involving several attributes are modeled as guided search processes. A theoretical framework is presented that integrates economic models of decision making with search with subjective utility models from psychological research. In the decision processes modeled this way, individuals are assumed to order their alternatives by decreasing utility and to search through them until the expected search costs exceed the corresponding benefits. The performance of three decision models is then evaluated on twelve real-world datasets. The second essay presents the results of two experiments that investigated how people change their judgments when they are exposed to the judgments and confidence levels of other people. A tree model is introduced that captures how judgments are revised on the basis of such information. The model is based on the results of the two experiments: by taking social information into account, it can show how judgments in a group of interacting people converge or polarize. The third essay examines collective behavior in markets for cultural products. People order the options according to their popularity and then choose the first one whose utility lies above a certain good-enough threshold. After each individual choice, the ranking is updated. Within this simple framework, it is demonstrated that such markets are characterized by so-called rich-get-richer dynamics, which lead to inequality in market shares and uncertain financial returns. / This dissertation comprises three independent essays which introduce novel psychologically inspired process models and examine their implications for individual, collective or market behavior. The first essay studies multi-attribute choice as a guided process of search. It puts forward a theoretical framework which integrates work on search and stopping with partial information from economics with psychological subjective utility models from the field of judgment and decision making. The alternatives are searched in order of decreasing estimated utility until the expected cost of search exceeds the relevant benefits. The essay presents the results of a performance comparison of three well-studied multi-attribute choice models. The second essay reports the results of two experiments designed to understand how people revise their judgments of factual questions after being exposed to the opinion and confidence levels of others. It introduces a tree model of judgment revision which is directly derived from the empirical observations. The model demonstrates how opinions in a group of interacting people can converge or polarize over repeated interactions. The third essay studies collective behavior in markets for search products. The decision makers consider the alternatives in order of decreasing popularity and choose the first alternative with utility higher than a certain satisficing threshold. The popularity order is updated after each individual choice.
The presented framework illustrates that such markets are characterized by rich-get-richer dynamics which lead to inequality in the market-share distribution and unpredictability with regard to the final outcome.
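The popularity-feedback mechanism of the third essay can be sketched directly: consumers inspect products from most to least popular and take the first one whose noisy perceived utility clears a satisficing threshold, and every choice feeds back into the ranking. All parameter values below are assumptions for illustration only:

```python
import random

random.seed(1)
N_PRODUCTS, N_CONSUMERS, THRESHOLD = 20, 5000, 0.7
quality = [random.random() for _ in range(N_PRODUCTS)]    # latent product utilities
popularity = [0] * N_PRODUCTS                             # choice counts so far

for _ in range(N_CONSUMERS):
    # Examine products in order of current popularity (ties broken by index).
    for i in sorted(range(N_PRODUCTS), key=lambda i: -popularity[i]):
        perceived = quality[i] + random.gauss(0.0, 0.1)    # noisy utility evaluation
        if perceived >= THRESHOLD:                         # satisficing: take the first good-enough option
            popularity[i] += 1
            break

shares = sorted((count / N_CONSUMERS for count in popularity), reverse=True)
print([round(s, 3) for s in shares[:5]])   # a handful of products capture most of the market
```

Because popular products are inspected first, early random advantages compound, which is the rich-get-richer dynamic described above.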