1

Aggregation of Group Prioritisations for Energy Rationing with an Additive Group Decision Model : A Case Study of the Swedish Emergency Preparedness Planning in case of Power Shortage

Petersen, Rebecca January 2016 (has links)
Electricity is the backbone of our industrialised society and economy. To avoid a catastrophic situation, a plan for how to act during a power shortage is crucial. Previous research shows that decision models support decision makers in providing efficient energy rationing during power shortages in the Netherlands, the United States and Canada. The existing research needs to be expanded with a group decision model to enable group decisions. This study follows a case study approach in which the Swedish emergency preparedness plan in case of power shortage, named Styrel, is explored and used to evaluate properties of a proposed group decision model. The study consists of a qualitative phase and a quantitative phase, the latter including a Monte Carlo simulation of group decisions in Styrel evaluated with correlation analysis. The qualitative results show that participants in Styrel experience the group decisions as time-consuming and unstructured. The current decision support is not used in either of the two counties included in the study, with the motivation that the preferences provided by the decision support are misleading. The proposed group decision model includes a measurable value function assigning values to priority classes for electricity users, an additive model to represent the preferences of individual decision makers, and an additive group decision model to aggregate the preferences of several individual decision makers into a group decision. The simulation indicates that the proposed group decision model, evaluated in Styrel, is sensitive to significant changes and more robust to moderate changes in preference differences between priority classes.
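The abstract does not spell out the model's numbers, but a minimal sketch of what an additive group decision model of this shape could look like follows; the priority-class values, user names, and equal decision-maker weights are illustrative assumptions, not figures from the thesis.

```python
"""Illustrative sketch (not the thesis's exact formulation) of an
additive group decision model over priority-class assignments."""

# Measurable value function: maps each priority class (1 = highest
# priority) to a value on a 0..1 scale. These values are assumed for
# illustration only.
PRIORITY_CLASS_VALUE = {1: 1.0, 2: 0.8, 3: 0.6, 4: 0.4, 5: 0.2, 6: 0.0}

def individual_value(assignment: dict[str, int]) -> dict[str, float]:
    """Additive model for one decision maker: each electricity user
    receives the value of the priority class that decision maker chose."""
    return {user: PRIORITY_CLASS_VALUE[cls] for user, cls in assignment.items()}

def group_value(assignments: list[dict[str, int]],
                weights: list[float] | None = None) -> dict[str, float]:
    """Additive group model: a weighted sum of the individual values.
    Equal decision-maker weights are assumed when none are given."""
    if weights is None:
        weights = [1.0 / len(assignments)] * len(assignments)
    values = [individual_value(a) for a in assignments]
    return {u: sum(w * v[u] for w, v in zip(weights, values))
            for u in assignments[0]}

# Example: three decision makers prioritise two electricity users.
dm_assignments = [
    {"hospital": 1, "shopping_mall": 6},
    {"hospital": 1, "shopping_mall": 5},
    {"hospital": 2, "shopping_mall": 6},
]
print(group_value(dm_assignments))  # higher group value = higher priority
```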
2

Essays on Robust Social Preferences under Uncertainty / 不確実性下の頑健性を持つ社会選好に関する小論

Li, Chen 23 March 2023 (has links)
Kyoto University / New-system doctorate by coursework / Doctor of Economics / Degree No. Kou 24381 / Economics Doctorate No. 668 / Shelf mark 新制||経||303 (University Library) / Graduate School of Economics, Kyoto University, Department of Economics / Examination committee: Professor Tadashi Sekiguchi (chair), Professor Chiaki Hara, Professor Jonathan Charles Scott Newton / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Economics / Kyoto University / DGAM
3

Democracy and the Common Good : A Study of the Weighted Majority Rule

Berndt Rasmussen, Katharina January 2013 (has links)
In this study I analyse the performance of a democratic decision-making rule: the weighted majority rule. It assigns to each voter a number of votes that is proportional to her stakes in the decision. It has been shown that, for collective decisions with two options, the weighted majority rule in combination with self-interested voters maximises the common good when the latter is understood in terms of either the sum-total or prioritarian sum of the voters' well-being. The main result of my study is that this argument for the weighted majority rule (that it maximises the common good) can be improved along the following three main lines. (1) The argument can be adapted to other criteria of the common good, such as sufficientarian, maximin, leximin or non-welfarist criteria. I propose a generic argument for the collective optimality of the weighted majority rule that works for all of these criteria. (2) The assumption of self-interested voters can be relaxed. First, common-interest voters can be accommodated. Second, even if voters are less than fully competent in judging their self-interest or the common interest, the weighted majority rule is weakly collectively optimal, that is, it almost certainly maximises the common good given a large number of voters. Third, even for smaller groups of voters, the weighted majority rule still has some attractive features. (3) The scope of the argument can be extended to decisions with more than two options. I state the conditions under which the weighted majority rule maximises the common good even in multi-option contexts. I also analyse the possibility and the detrimental effects of strategic voting. Furthermore, I argue that self-interested voters have reason to accept the weighted majority rule.
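To make the rule concrete, here is a minimal sketch of the weighted majority rule for a two-option decision; the stakes and ballots are invented for illustration, and the tie case is ignored.

```python
"""Sketch of the weighted majority rule for a two-option decision,
under the study's setting of self-interested voters. Stakes and
ballots are illustrative, not taken from the thesis."""

def weighted_majority(ballots: list[str], stakes: list[float]) -> str:
    """Each voter's vote counts in proportion to her stake in the
    decision; the option with the larger stake-weighted total wins."""
    totals: dict[str, float] = {}
    for option, stake in zip(ballots, stakes):
        totals[option] = totals.get(option, 0.0) + stake
    return max(totals, key=totals.get)

# Three voters; the third has the most at stake. If each voter votes
# for the option that benefits her, the stake-weighted outcome tracks
# the sum-total of well-being differences.
ballots = ["A", "A", "B"]
stakes = [1.0, 2.0, 4.0]   # voter 3's well-being difference is largest
print(weighted_majority(ballots, stakes))  # -> "B"
```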
4

On Social Choice in Social Networks

Becirovic, Ema January 2017 (has links)
Social choice becomes a part of everyday life when groups of people are faced with decisions to make. We often adjust our personal beliefs with respect to our friends, and we are inherently dependent on the happiness of those near us. In this thesis, we investigate an existing empathy model that is used to select a winner from a set of alternatives using scoring winner selection methods. We show that a slight modification of the model is enough to enable superior winner selection methods that are based on pairwise comparisons of alternatives. In summary, there is essentially no reason to use scoring winner selection methods in the proposed models, as a more desirable result is achieved by using the pairwise methods.
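The thesis contrasts scoring winner selection methods with methods based on pairwise comparisons. The sketch below shows one representative of each family, Borda (scoring) and Copeland (pairwise); the empathy-weighting step of the model itself is not reproduced, and the ballots are invented to show that the two families can pick different winners.

```python
"""Contrast between a scoring winner selection method (Borda) and a
pairwise-comparison method (Copeland). Ballots are illustrative."""
from itertools import combinations

def borda(profiles: list[list[str]]) -> str:
    """Scoring method: position i in a ballot over m alternatives
    earns m-1-i points; the highest total wins."""
    m = len(profiles[0])
    scores: dict[str, int] = {a: 0 for a in profiles[0]}
    for ballot in profiles:
        for i, alt in enumerate(ballot):
            scores[alt] += m - 1 - i
    return max(scores, key=scores.get)

def copeland(profiles: list[list[str]]) -> str:
    """Pairwise method: an alternative scores a point for each
    alternative it beats in a head-to-head majority comparison."""
    alts = profiles[0]
    wins: dict[str, int] = {a: 0 for a in alts}
    for a, b in combinations(alts, 2):
        a_over_b = sum(1 for ballot in profiles
                       if ballot.index(a) < ballot.index(b))
        if a_over_b * 2 > len(profiles):
            wins[a] += 1
        elif a_over_b * 2 < len(profiles):
            wins[b] += 1
    return max(wins, key=wins.get)

# 3 voters rank A>B>C, 2 rank B>C>A: Borda picks B, while Copeland
# picks A, the Condorcet winner, so the two families can disagree.
ballots = 3 * [["A", "B", "C"]] + 2 * [["B", "C", "A"]]
print(borda(ballots), copeland(ballots))  # -> B A
```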
5

Essays on matching and preference aggregation

Bonkoungou, Somouaoga 02 1900 (has links)
No description available.
6

Agrégation de classements avec égalités : algorithmes, guides à l'utilisateur et applications aux données biologiques / Rank aggregation with ties: algorithms, user guidance and applications to biological data

Brancotte, Bryan 25 September 2015 (has links)
Rank aggregation consists in building a consensus among a set of rankings (ordered elements). Although this problem has numerous applications (consensus among user votes, consensus among results ordered differently by several search engines, and so on), computing an exact consensus is rarely feasible in real applications, as the problem is NP-hard. Many approximation algorithms and heuristics have therefore been designed. However, their performance, both in running time and in the quality of the consensus produced, varies widely and depends on the datasets to be aggregated. Several studies have compared these algorithms, but they have generally not considered the case, common in real datasets, of ties between elements in the rankings (elements placed at the same rank). Choosing a consensus algorithm suited to a given dataset is therefore a particularly important problem to study, given the large number of applications, and it is an open problem in the sense that none of the existing studies answers it. More formally, a consensus ranking is a ranking that minimises the sum of the distances between this consensus and each of the input rankings. Like much of the state of the art, we consider the generalized Kendall-Tau distance, as well as variants, in our studies. This thesis makes three contributions. First, we propose new complexity results for the cases encountered in real data, where rankings may be incomplete and several elements may be tied. We isolate the different "parameters" that can explain variations in the results produced by aggregation algorithms (for example, the use of the generalized Kendall-Tau distance or of variants, or pre-processing of the datasets by unification or projection). We propose a guide for characterising a user's context and needs, in order to guide the choice of both a pre-processing of the data and the distance used to compute the consensus, and we adapt existing algorithms to this new context. Second, we evaluate these algorithms on a large and varied collection of datasets, both real and synthetic, reproducing real-world characteristics such as similarity between rankings, the presence of ties, and different pre-processings. This large-scale evaluation required a new method for generating synthetic data with similarities, based on a Markov-chain model. The evaluation isolated the dataset characteristics that impact the performance of the aggregation algorithms, and led to a guide that characterises a user's needs and advises on which algorithm to favour. A web platform for reproducing and extending these analyses is available (rank-aggregation-with-ties.lri.fr). Finally, we demonstrate the value of the rank aggregation approach in two use cases.
We propose a tool that reformulates users' text queries on the fly using biomedical terminologies, then queries biological databases, and finally produces a consensus of the results obtained for each reformulation (conqur-bio.lri.fr). Compared with the reference platform, the tool clearly improves the quality of the results. We also compute consensuses between lists of workflows established by experts, in the context of similarity between scientific workflows, and observe that the computed consensuses agree with the experts in a large majority of cases.
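To make the consensus objective concrete, here is a small sketch of the generalized Kendall-Tau distance between rankings with ties and of the summed-distance objective; the tie penalty p = 0.5 is one common convention, not necessarily the thesis's, and a brute-force minimisation of the objective is exponential, which is why approximation algorithms and heuristics are studied.

```python
"""Illustrative sketch of the generalized Kendall-Tau distance between
rankings with ties, and the consensus objective: find the ranking
minimising the summed distance to all input rankings."""
from itertools import combinations

Ranking = list[set[str]]  # list of "buckets"; elements in a bucket are tied

def position(r: Ranking) -> dict[str, int]:
    """Map each element to the index of its bucket."""
    return {e: i for i, bucket in enumerate(r) for e in bucket}

def kendall_tau_ties(r1: Ranking, r2: Ranking, p: float = 0.5) -> float:
    """Pairwise disagreement count: cost 1 when the two rankings order
    a pair in opposite directions, cost p when exactly one ties it."""
    pos1, pos2 = position(r1), position(r2)
    dist = 0.0
    for a, b in combinations(pos1, 2):
        d1, d2 = pos1[a] - pos1[b], pos2[a] - pos2[b]
        if d1 * d2 < 0:                   # opposite strict orders
            dist += 1.0
        elif (d1 == 0) != (d2 == 0):      # tied in exactly one ranking
            dist += p
    return dist

def consensus_cost(candidate: Ranking, inputs: list[Ranking]) -> float:
    """Objective to minimise over all candidate rankings with ties."""
    return sum(kendall_tau_ties(candidate, r) for r in inputs)

r1 = [{"A"}, {"B", "C"}]         # A first, B and C tied second
r2 = [{"B"}, {"A"}, {"C"}]
print(kendall_tau_ties(r1, r2))  # (A,B) opposed: 1; (B,C) tied once: 0.5 -> 1.5
print(consensus_cost([{"A"}, {"B"}, {"C"}], [r1, r2]))  # -> 1.5
```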
