  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

L'impact des réactions affectives multiples sur la prise de décision : combinaison de l'affect et les mécanismes médiateurs de l’influence affective / The Impact of Multiple Affective Reactions on Decision Making : Combination of Affect and the Mediating Mechanisms of Affective Influence

Efendic, Emir 23 June 2017
While there is plenty of research showing how a single affective reaction impacts a decision, practically no research has looked at the impact of multiple affective reactions. Moreover, the mediating mechanisms of this impact are still debated: several mediation models have been proposed, but never tested and compared at the same time. In this thesis, eight studies were conducted that took a closer look at these two issues. The results show that multiple affective reactions combine to impact the decision and that, in this combination, feelings are averaged. However, the combination only happens when the affective reactions are related to the same decision source (e.g., two reactions associated with a potential reward). When, on the other hand, the affective reactions are associated with two independent decision sources (e.g., one reaction associated with a task and the other with the potential reward), there is no combination, and people rely only on the affectivity associated with the consequential source (i.e., the rewards). Finally, the most consistently obtained mediation model was one in which only immediate affective reactions mediated between the affective source and the decision. The results extend the literature by demonstrating the phenomenon of affective combination along with the boundary conditions that govern its impact on the decision; they offer new insight into what mediates this impact; and they provide solid ground for future work on the impact of multiple affective reactions on decisions.
122

Transformative Decision Rules : Foundations and Applications

Peterson, Martin January 2003
A transformative decision rule alters the representation of a decision problem, either by changing the sets of acts and states taken into consideration, or by modifying the probability or value assignments. Examples of decision rules belonging to this class are the principle of insufficient reason, Isaac Levi's condition of E-admissibility, Luce and Raiffa's merger-of-states rule, and the de minimis principle. In this doctoral thesis transformative decision rules are analyzed from a foundational point of view, and applied to two decision-theoretic problems: (i) How should a rational decision maker model a decision problem in a formal representation ('problem specification', 'formal description')? (ii) What role can transformative decision rules play in the justification of the principle of maximizing expected utility? The thesis consists of a summary and seven papers. In Papers I and II certain foundational issues concerning transformative decision rules are investigated, and a number of formal properties of this class of rules are proved: convergence, iterativity, and permutability. In Paper III it is argued that there is in general no unique representation of a decision problem that is strictly better than all alternative representations. In Paper IV it is shown that the principle of maximizing expected utility can be decomposed into a sequence of transformative decision rules. A set of axioms is proposed that together justify the principle of maximizing expected utility. It is shown that the suggested axiomatization provides a resolution of Allais' paradox that cannot be obtained by Savage-style, nor by von Neumann and Morgenstern-style, axiomatizations. In Paper V the axiomatization from Paper IV is further elaborated and compared to the axiomatizations proposed by von Neumann and Morgenstern, and by Savage. The main results in Paper VI are two impossibility theorems for catastrophe-averse decision rules, demonstrating that, given a few reasonable desiderata for such rules, there is no rule that can fulfill all of them. In Paper VII transformative decision rules are applied to extreme risks, i.e., potential outcomes of an act for which the probability is low but whose (negative) value is high.
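The decomposition idea can be illustrated with a small sketch (not taken from the thesis; the problem data and function names are hypothetical): the principle of insufficient reason is a transformative rule that maps a problem under ignorance into one under risk by attaching uniform probabilities, after which the effective rule of maximizing expected utility applies.

```python
# Hedged illustration: a transformative rule (insufficient reason)
# composed with an effective rule (maximize expected utility).

def insufficient_reason(problem):
    """Transform: attach uniform probabilities to the states."""
    n_states = len(next(iter(problem["utilities"].values())))
    return {**problem, "probs": [1.0 / n_states] * n_states}

def maximize_expected_utility(problem):
    """Effective rule applied to the transformed representation."""
    probs = problem["probs"]
    eu = {act: sum(p * u for p, u in zip(probs, utils))
          for act, utils in problem["utilities"].items()}
    return max(eu, key=eu.get)

# Decision matrix: utility of each act in each of three states.
problem = {"utilities": {"act_a": [10, 0, 0], "act_b": [4, 4, 4]}}
best = maximize_expected_utility(insufficient_reason(problem))  # → "act_b"
```

Under uniform probabilities, act_a has expected utility 10/3 while act_b has 4, so the safe act is chosen; a different transformative rule would yield a different representation and possibly a different choice.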
123

Decision making under compound uncertainty : experimental study of ambiguity attitudes and sequential choice behavior / Prise de décision en situation d'incertitude composée : étude expérimentale des attitudes face à l'ambiguïté et des comportements de choix séquentiels

Nebout, Antoine 02 December 2011
This thesis belongs to the domain of decision theory under uncertainty and aims to understand, describe, and represent individual choices in various decision contexts. Our work focuses on the fact that economic behavior is often influenced by the structure and the timing of the resolution of uncertainty. In a first experimental part, we confronted subjects with different types of uncertainty, namely risk (known probabilities), uncertainty (unknown probabilities), compound risk, and compound uncertainty, generated using special random devices. Chapter 1 analyzes the heterogeneity of attitudes towards ambiguity, compound risk, and compound uncertainty, whereas Chapter 2 uses rank-dependent expected utility as a measuring tool to investigate these attitudes at the individual level. Chapter 3 confronts the interpretation of ambiguity in terms of second-order beliefs with the experimental data and proposes a method for eliciting the function that encapsulates attitudes toward ambiguity in the "recursive" or multistage models of decision under uncertainty. The second part of the thesis deals with individual decision making under risk in a dynamic context and consists of two independent experimental studies. Both rely on the decomposition of the independence axiom into three dynamic axioms: consequentialism, dynamic consistency, and reduction of compound lotteries. Chapter 4 reports experimental data on violations of each of the three axioms. Chapter 5 presents a conceptual categorization of individual behavior in sequential decision problems under risk, especially for agents who do not conform to the independence axiom, and reports an experiment specially designed to test the predictions of this categorization.
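The reduction-of-compound-lotteries axiom mentioned above can be made concrete with a short sketch (an illustration under my own toy numbers, not the thesis's experimental stimuli): a two-stage lottery over simple lotteries reduces to a single-stage lottery by multiplying through the probabilities, and an expected-utility maximizer is indifferent between the two forms.

```python
# Hedged sketch: reducing a compound lottery and computing its
# expected utility for a risk-neutral agent.

def reduce_compound(stage1):
    """stage1: list of (prob, simple_lottery); a simple lottery is a
    list of (prob, outcome) pairs. Returns {outcome: total_prob}."""
    reduced = {}
    for p1, lottery in stage1:
        for p2, outcome in lottery:
            reduced[outcome] = reduced.get(outcome, 0.0) + p1 * p2
    return reduced

def expected_utility(lottery, u):
    return sum(p * u(x) for x, p in lottery.items())

compound = [(0.5, [(0.4, 100), (0.6, 0)]),   # coin flip, then a gamble...
            (0.5, [(1.0, 50)])]              # ...or a sure 50
simple = reduce_compound(compound)           # {100: 0.2, 0: 0.3, 50: 0.5}
eu = expected_utility(simple, lambda x: x)   # risk-neutral utility → 45.0
```

Subjects who violate the axiom treat `compound` and `simple` differently even though they induce the same distribution over outcomes, which is exactly the behavior the thesis's dynamic experiments probe.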
124

Um estudo comparativo das técnicas de validação cruzada aplicadas a modelos mistos / A comparative study of cross-validation techniques applied to mixed models

Cunha, João Paulo Zanola 28 May 2019
Assessing a model's predictions by estimating its expected risk is an important step in choosing an efficient predictor for future observations. However, this assessment should not use the same data on which the predictor was built, since doing so generally yields estimates below the true expected risk of the model. Cross-validation techniques (K-fold, leave-one-out, hold-out, and bootstrap) are recommended in this case: they split the data into training and validation samples, so that the predictor is built and its risk evaluated on different data sets. This work presents a review of these techniques and their particularities in estimating the expected risk. The techniques were evaluated on two mixed models, with Normal and Logistic distributions, and their performances compared through simulation studies. Finally, the methodologies were applied to a real data set.
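The train/validation split at the heart of these techniques can be sketched in a few lines (a minimal, dependency-free illustration using a trivial training-mean predictor under squared-error loss, not the thesis's mixed models):

```python
# Hedged sketch of K-fold cross-validation: the predictor is fit on
# K-1 folds and its risk is evaluated on the held-out fold, so fitting
# and evaluation never use the same observations.
import random

def k_fold_risk(y, k=5, seed=0):
    """Cross-validated squared-error risk of the training-mean predictor."""
    idx = list(range(len(y)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]        # disjoint validation folds
    risks = []
    for fold in folds:
        held_out = set(fold)
        train = [y[i] for i in idx if i not in held_out]
        mean = sum(train) / len(train)           # "fit" on training folds
        risks.append(sum((y[i] - mean) ** 2 for i in fold) / len(fold))
    return sum(risks) / k                        # average validation risk

rng = random.Random(1)
data = [rng.gauss(0.0, 1.0) for _ in range(100)]
risk = k_fold_risk(data)
```

Computing the risk on the training sample itself would, as the abstract notes, underestimate the expected risk; the held-out evaluation avoids that optimism.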
125

Zadávání veřejných zakázek z pohledu zadavatele / Public Procurement from the Perspective of the Contracting Authority

Židková, Michaela January 2016
The aim of this master's thesis is primarily to develop a methodological framework for a contracting authority that receives an abnormally low bid. The theoretical part outlines the basic terms and definitions, analyzes a public tender from the perspective of the contracting authority, and defines an abnormally low bid price. The practical part applies the framework to a case study in which, for a set of selected mechanical trade items, the unit costing prices of construction work are examined to indicate where the costs of individual items may exceed the limit values.
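One common screening heuristic for abnormally low bids (a hypothetical illustration with invented threshold and bid values, not the thesis's framework) is to flag any bid that falls more than a chosen fraction below the mean of the remaining bids:

```python
# Hedged sketch: flag a bid as potentially abnormally low when it is
# below `threshold` times the mean of the other bids. The 0.85
# threshold and the offer values are illustrative assumptions.

def abnormally_low(bids, threshold=0.85):
    flagged = []
    for i, bid in enumerate(bids):
        others = bids[:i] + bids[i + 1:]
        mean_others = sum(others) / len(others)
        if bid < threshold * mean_others:
            flagged.append(bid)
    return flagged

offers = [9.8e6, 10.1e6, 10.4e6, 7.2e6]   # CZK, invented values
low = abnormally_low(offers)              # → [7200000.0]
```

A flagged bid would then trigger the authority's duty to request a justification from the bidder rather than automatic rejection.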
126

Portfolio risk measures and option pricing under a Hybrid Brownian motion model

Mbona, Innocent January 2017
The 2008/9 financial crisis intensified the search for realistic return models that capture real market movements. The assumed statistical distribution of financial returns plays a crucial role in the evaluation of risk measures and the pricing of financial instruments. In this dissertation, we present an empirical study of traditional portfolio risk measures and option pricing under the hybrid Brownian motion model developed by Shaw and Schofield. Under this model, we derive probability density functions with a fat-tailed property, such that "25-sigma" or worse events are more probable. We then estimate Value-at-Risk (VaR) and Expected Shortfall (ES) using four equity stocks listed on the Johannesburg Stock Exchange, as well as the FTSE/JSE Top 40 index. We apply the historical method and the variance-covariance (VC) method in the valuation of VaR. Under the VC method, we adopt the GARCH(1,1) model to deal with the volatility-clustering phenomenon. We backtest the VaR results and discuss our findings for each probability density function. Furthermore, we apply the hybrid model to price European-style options and compare its pricing performance to the classical Black-Scholes model. / Dissertation (MSc), University of Pretoria, 2017
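The historical method mentioned above is the simplest of the VaR/ES estimators: sort the observed losses and read the risk measures off the empirical tail. A minimal sketch (simulated Gaussian losses standing in for real returns; the hybrid-model densities are not reproduced here):

```python
# Hedged sketch of the historical method: VaR is the empirical
# alpha-quantile of losses, ES is the mean loss beyond VaR.
import random

def historical_var_es(losses, alpha=0.99):
    ordered = sorted(losses)          # ascending losses (positive = loss)
    k = int(alpha * len(ordered))     # index of the alpha-quantile
    var = ordered[k]
    tail = ordered[k:]                # losses at or beyond VaR
    es = sum(tail) / len(tail)        # mean tail loss
    return var, es

rng = random.Random(42)
losses = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
var99, es99 = historical_var_es(losses)   # ES always exceeds VaR
```

For standard-normal losses the 99% VaR is near 2.33 and the ES near 2.67, which the empirical estimates approach as the sample grows; under a fat-tailed density the gap between ES and VaR widens, which is the phenomenon the dissertation's densities are built to capture.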
127

Parallelisierung Ersatzmodell-gestützter Optimierungsverfahren / Parallelization of Surrogate-Model-Based Optimization Methods

Schmidt, Hansjörg 12 February 2009
Numerical simulations play an ever larger role in the development of new products. They make it possible to test a new product relatively cheaply before an expensive prototype has to be built, which creates the desire to automate parts of the design process. But even with the most modern algorithms and computers, some of these simulations are very time-consuming, i.e., in the range of minutes to hours. Examples from the automotive domain are chain-drive simulations, flow simulations, and crash simulations; mathematically, these amount to solving differential-algebraic equations and partial differential equations. The goals of the partially automated design process are functional feasibility and properties that are as good as possible, such as performance or cost. This thesis considers optimization problems in which evaluating the objective function requires a numerical simulation. To solve such problems in acceptable time, one needs optimization methods that find good approximations of the global optimum with few function evaluations. This thesis considers surrogate-model-based optimization methods that use a Kriging approximation; these methods meet the requirements above but are only partially parallelizable. The thesis is structured as follows. The optimization fundamentals needed for this work are presented in the second chapter. The third chapter deals with the theory of the Kriging approximation. The use of a surrogate model for optimization and the parallelization of the resulting methods are the subject of the fourth chapter. In the fifth chapter the presented methods are verified numerically and recommendations for their application are given. The sixth chapter gives an overview of chain-drive design and the use of the presented algorithms. The final chapter summarizes the goals achieved and suggests further improvements and research topics.
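The parallelization constraint described above can be sketched schematically (assumptions: a cheap inverse-distance surrogate rather than Kriging, a toy 1-D objective, and batch proposals; none of this reproduces the thesis's methods): the surrogate proposes a small batch of promising points, and only the expensive simulations for that batch run concurrently.

```python
# Hedged sketch of a surrogate-assisted loop with batch-parallel
# evaluation of the expensive objective.
from concurrent.futures import ThreadPoolExecutor

def expensive_objective(x):
    # stand-in for a minutes-long numerical simulation
    return (x - 0.3) ** 2

def surrogate(x, pts):
    """Cheap inverse-distance-weighted prediction from evaluated points."""
    weights = [(1.0 / ((x - xi) ** 2 + 1e-12), yi) for xi, yi in pts]
    return sum(w * yi for w, yi in weights) / sum(w for w, _ in weights)

pts = [(0.0, expensive_objective(0.0)), (1.0, expensive_objective(1.0))]
grid = [i / 100 for i in range(101)]
for _ in range(5):
    seen = {xi for xi, _ in pts}
    candidates = [x for x in grid if x not in seen]
    # propose a batch of promising points from the cheap surrogate ...
    batch = sorted(candidates, key=lambda x: surrogate(x, pts))[:4]
    # ... and run the expensive simulations for the batch in parallel
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(expensive_objective, batch))
    pts.extend(zip(batch, results))
best_x, best_y = min(pts, key=lambda p: p[1])
```

The sequential bottleneck is visible here: each batch must finish before the surrogate can be refitted and the next batch proposed, which is why such methods are only partially parallelizable.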
128

Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

Ben Issaid, Chaouki 12 May 2015
Experimental design can be vital when experiments are resource-intensive and time-consuming. In this work, we carry out experimental design in the Bayesian framework. To measure the amount of information that can be extracted from the data in an experiment, we use the expected information gain as the utility function, specifically the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data about the model parameters. One of the major difficulties in evaluating the expected information gain is that it naturally involves nested integration over a possibly high-dimensional domain. We use the Multilevel Monte Carlo (MLMC) method to accelerate the computation of this nested high-dimensional integral. The advantages are twofold. First, MLMC can significantly reduce the cost of the nested integral for a given tolerance by using an optimal distribution of samples among the different sample averages of the inner integrals. Second, the MLMC method imposes fewer assumptions, such as the asymptotic concentration of posterior measures required, for instance, by the Laplace approximation (LA). We test the MLMC method on two numerical examples. The first is the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Poisson equation: the sensors are placed at the locations where the pressure is measured, and the conductivity field is modeled as a piecewise-constant random vector with two parameters. The second is a chemical Enhanced Oil Recovery (EOR) core-flooding experiment assuming homogeneous permeability: we measure the cumulative oil recovery from a horizontal core flooded by water, surfactant, and polymer at different injection rates. The model parameters consist of the endpoint relative permeabilities, the residual saturations, and the relative-permeability exponents for the three phases: water, oil, and microemulsions. We also compare the performance of MLMC to the LA and the direct Double Loop Monte Carlo (DLMC). We show that, for the aforementioned examples, MLMC combined with LA turns out to be the best method in terms of computational cost.
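The nested integral that MLMC accelerates is easiest to see in the baseline Double Loop Monte Carlo estimator. A toy sketch for a one-parameter Gaussian model (my own illustrative model, not either of the thesis's examples; the MLMC acceleration itself is not implemented):

```python
# Hedged sketch of DLMC for the expected information gain:
# theta ~ N(0,1), y | theta ~ N(theta, sigma^2). The outer loop
# averages log p(y|theta) - log p(y); the inner loop estimates the
# marginal likelihood p(y) with fresh prior draws.
import math
import random

def log_norm_pdf(y, mu, sd):
    return -0.5 * ((y - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

def eig_dlmc(sigma, n_outer=1000, n_inner=200, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_outer):
        theta = rng.gauss(0.0, 1.0)
        y = rng.gauss(theta, sigma)
        log_lik = log_norm_pdf(y, theta, sigma)
        # inner loop: Monte Carlo estimate of the marginal p(y)
        inner = [math.exp(log_norm_pdf(y, rng.gauss(0.0, 1.0), sigma))
                 for _ in range(n_inner)]
        total += log_lik - math.log(sum(inner) / n_inner)
    return total / n_outer

# For this conjugate model the gain is 0.5 * log(1 + 1/sigma^2) analytically.
estimate = eig_dlmc(sigma=1.0)
```

The cost is the product of the two sample sizes, which is exactly the multiplicative burden that the multilevel construction reduces by distributing samples optimally across inner-loop accuracies.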
129

Backtesting Expected Shortfall: the design and implementation of different backtests / Validering av Expected Shortfall: design och tillämpning av olika metoder

Wimmerstedt, Lisa January 2015
In recent years, the question of whether Expected Shortfall can be backtested has been a hot topic, following Gneiting's 2011 finding that Expected Shortfall lacks a mathematical property called elicitability. However, newer research indicates that backtesting Expected Shortfall is in fact possible and need not be very difficult. The purpose of this thesis is to show that Expected Shortfall is backtestable by providing six different examples of how a backtest could be designed without relying on elicitability. The different approaches are tested and their performances compared against each other. The material can serve as guidance for the initial steps of implementing an Expected Shortfall backtest in practice.
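One way such a backtest can be designed without elicitability is illustrated below, in the spirit of Acerbi and Szekely's second test statistic (a sketch under my own simulation setup, not a reproduction of any of the thesis's six designs): under a correct model the statistic has expectation zero, and markedly negative values signal understated risk.

```python
# Hedged sketch of an ES backtest statistic: sum realized tail losses
# scaled by predicted ES; compare against the expected tail mass T*p.
import random

def es_backtest_z(losses, var_pred, es_pred, p=0.025):
    T = len(losses)
    z = sum(l / e for l, v, e in zip(losses, var_pred, es_pred) if l > v)
    return z / (T * p) - 1.0

rng = random.Random(7)
# Simulated P&L: i.i.d. N(0,1) losses, with the matching true 97.5%
# VaR (about 1.96) and ES (about 2.34) used as the model's predictions,
# so the statistic should land near zero.
losses = [rng.gauss(0.0, 1.0) for _ in range(5000)]
z = es_backtest_z(losses, [1.96] * 5000, [2.34] * 5000)
```

Since E[loss · 1{loss > VaR}] = p · ES under a correct model, the normalized sum centers on zero without ever requiring ES to be elicitable, which is the key point the thesis develops across its six designs.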
130

Estimating expected shortfall using an unconditional peaks-over-threshold method under an extreme value approach

Wahlström, Rikard January 2021
Value-at-Risk (VaR) has long been the standard risk measure in financial risk management. However, VaR suffers from critical shortcomings as a risk measure when it comes to quantifying the most severe risks, which became especially apparent during the financial crisis of 2007–2008. An alternative risk measure addressing the shortcomings of VaR, known as expected shortfall (ES), is gaining popularity and is set to replace VaR as the standard measure of financial risk. This thesis introduces how extreme value theory can be applied to estimating ES using an unconditional peaks-over-threshold method, including an introduction to the theoretical foundations of the method. The method is also applied to five different assets, chosen as proxies for the broader asset classes of equity, fixed income, currencies, commodities, and cryptocurrencies. In terms of ES, we find that cryptocurrencies are the riskiest asset class and fixed income the safest.
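The peaks-over-threshold recipe can be condensed into a short sketch (with two stated simplifications: the generalized Pareto parameters are estimated by the method of moments rather than maximum likelihood, and the data are simulated Pareto losses rather than real asset returns):

```python
# Hedged sketch of unconditional POT: fit a GPD to excesses over a
# threshold u, then plug the estimates into the closed-form tail
# VaR/ES formulas.
import random

def pot_var_es(losses, u, p=0.99):
    exc = [l - u for l in losses if l > u]        # excesses over threshold
    n, nu = len(losses), len(exc)
    m = sum(exc) / nu                             # mean excess
    s2 = sum((e - m) ** 2 for e in exc) / (nu - 1)
    xi = 0.5 * (1.0 - m * m / s2)                 # GPD moment estimators
    beta = 0.5 * m * (m * m / s2 + 1.0)
    var = u + beta / xi * (((n / nu) * (1 - p)) ** (-xi) - 1.0)
    es = var / (1 - xi) + (beta - xi * u) / (1 - xi)
    return var, es

rng = random.Random(3)
losses = [rng.paretovariate(3) for _ in range(20_000)]  # heavy-tailed toy data
var99, es99 = pot_var_es(losses, u=2.0)
```

For Pareto(3) losses the true 99% VaR is about 4.64 and the true ES about 6.96, so the estimates can be sanity-checked against the analytic tail; with real returns the threshold choice u becomes the delicate step, which the thesis discusses.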
