  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
541

Resource management in computer clusters : algorithm design and performance analysis / Gestion des ressources dans les grappes d’ordinateurs : conception d'algorithmes et analyse de performance

Comte, Céline 24 September 2019 (has links)
The growing demand for cloud-based services encourages operators to maximize resource efficiency within computer clusters. This motivates the development of new technologies that make resource management more flexible. However, exploiting this flexibility to reduce the number of computers also requires efficient resource-management algorithms whose performance is predictable under stochastic demand. In this thesis, we design and analyze such algorithms using the framework of queueing theory.

Our abstraction of the problem is a multi-server queue with several customer classes. Servers have heterogeneous capacities, and the customers of each class enter the queue according to an independent Poisson process. Each customer can be processed in parallel by several servers, subject to compatibility constraints described by a bipartite graph between classes and servers, and each server applies the first-come-first-served policy to its compatible customers. We first prove that, if the service requirements are independent and exponentially distributed with unit mean, this simple policy yields the same average performance as balanced fairness, an extension of processor-sharing known to be insensitive to the distribution of the service requirements. A more general form of this result, relating order-independent queues to Whittle networks, is also proved. Lastly, we derive new formulas to compute performance metrics.

These theoretical results are then put into practice. We first propose a scheduling algorithm that extends the principle of round-robin to a cluster where each incoming job is assigned to a pool of computers by which it can subsequently be processed in parallel. Our second proposal is a token-based load-balancing algorithm for clusters where jobs have assignment constraints. Both algorithms are approximately insensitive to the job size distribution and adapt dynamically to demand. Their performance can be predicted by applying the formulas derived for the multi-server queue.
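The token-based load-balancing idea can be sketched in a few lines. The sketch below is a hypothetical illustration: the token counts, the greedy choice of the compatible server with the most free tokens, and all identifiers are assumptions for exposition, not the algorithm analyzed in the thesis.

```python
# Hypothetical sketch of a token-based dispatcher for jobs with
# assignment constraints. Identifiers and policy details are assumptions.

class TokenLoadBalancer:
    """Each server starts with a fixed number of tokens (admission slots).

    An incoming job takes a token from a compatible server (here: the one
    with the most free tokens); if no compatible server has a free token,
    the job is rejected. Completing a job returns the token, which is how
    the dispatcher adapts dynamically to demand.
    """

    def __init__(self, tokens_per_server):
        self.free = dict(tokens_per_server)  # server -> free tokens

    def assign(self, compatible_servers):
        candidates = [s for s in compatible_servers if self.free.get(s, 0) > 0]
        if not candidates:
            return None  # blocked: no free token among compatible servers
        server = max(candidates, key=lambda s: self.free[s])
        self.free[server] -= 1
        return server

    def release(self, server):
        self.free[server] += 1  # job finished: hand the token back

lb = TokenLoadBalancer({"s1": 2, "s2": 1})
a = lb.assign(["s1", "s2"])  # "s1" (most free tokens)
b = lb.assign(["s2"])        # "s2"
c = lb.assign(["s2"])        # None: s2 has no token left
lb.release(b)
d = lb.assign(["s2"])        # "s2" again after the token is returned
```

Returning tokens on completion is what makes such a scheme self-adapting; the thesis analyzes the performance and approximate insensitivity of policies of this kind through the multi-server queue model.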
542

Att särskilt beakta ett nationellt prov : En kvalitativ studie om hur lärare uppfattar och tillämpar förordningen om att särskilt beakta provresultatet vid betygssättning / Giving special consideration to a national test: A qualitative study of how teachers perceive and apply the regulation to give special consideration to test results when grading

Sandberg, Erik January 2021 (has links)
This master thesis aims to shed light on teachers' responses to the regulation stating that a student's results on a national test must be given "special consideration" in the teacher's grading. Ten teachers took part in the study during the autumn of 2020, all teachers of social science subjects in Swedish secondary school (school years 7-9). A method based on semi-structured focus group interviews is used to answer three research questions. The study's theoretical framework derives from curriculum theory (i.e. teacher agency, selective traditions, and theory associated with assessment). A thematic content analysis identifies two categories of factors contributing to the teachers' perceptions and application of the regulation. The first category consists of factors relating to the teachers' views on the purpose of the tests, their principles of construction, their content, and their level of difficulty. The second category includes factors related to teachers' professional practice and agency. The results of the study show how teachers' responses to this relatively new policy (introduced in 2018) are affected by circumstances relating to students' need for adaptations (e.g. poor language skills). The main conclusion is that teachers respond to this change of policy in different ways. It could thus be argued that further and more detailed information is needed in order to successfully implement the changes and reach a consensus among the teachers. Until then, a reasonable assumption is that challenges regarding equality and justice in terms of students' knowledge levels and final grades will remain.
543

Ekonomie a etika jako předpoklad obecného dobra a udržitelného rozvoje společnosti / Economics and ethics as prerequisites for the common good and sustainable development of society

Červený, Petr January 2016 (has links)
The presented thesis deals with the basic premises of the common good and sustainable development of society. It addresses the themes of freedom, fairness, trust, responsibility, morality, ethics, and economics, tested against the specific phenomenon of the global financial crisis of 2008. Attention is focused especially on the thinking and behavior of the people involved, which played a significant role on the part of institutions, politicians, economists, financiers, and ordinary citizens alike. The thesis identifies the key factors that led to the formation of the crisis and, in this context, suggests possible solutions to prevent similar crises in the future.
544

Towards fairness in Kidney Exchange Programs

St-Arnaud, William 08 1900 (has links)
The preferred treatment for chronic kidney disease is transplantation. However, many patients can only find direct donors that are not fully compatible with them. Kidney Exchange Programs (KEPs) can help these patients by swapping the donors of multiple patient-donor pairs in order to accommodate them. Usually, the objective is to maximize the total number of transplants that can be realized as part of an exchange plan. Many optimal solutions can co-exist, and since a large part of them feature different subsets of patients who obtain a compatible donor, the question of who is selected becomes relevant. Often, this problem is not even addressed, and the first solution returned by a solver is chosen as the exchange plan to be performed. This can lead to bias against some patients and is therefore not considered a fair approach. Moreover, it is the responsibility of computer scientists to have control over the output of the algorithms they design.

To resolve this issue, we explore the use of multiple optimal solutions and how to pick an exchange plan among them. We propose the use of randomized policies for selecting an optimal solution, first by enumerating the solutions. This task is achieved through both integer programming and constraint programming methods. We also introduce a new concept called individual fairness, in a bid to find a fair policy over the enumerated solutions by making use of multiple metrics. We scale the method to larger instances by adding column generation as part of the enumeration with the $L_1$ metric. When evaluating individual fairness, we systematically review other fairness schemes such as Aristotle's principle, Rawlsian justice, Nash's principle of fairness, and Shapley values. We analyze their mathematical descriptions and their pros and cons. Finally, we motivate the need to consider solutions that are not optimal in the number of transplants. For the selection of a good policy over this larger set of solutions, we motivate the need to balance utility and our individual fairness measure. We use the Nash Social Welfare Program to achieve this, and we also propose a decomposition methodology that extends the machinery for an efficient enumeration of solutions.
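The effect of randomizing over optimal exchange plans can be illustrated on a toy instance. The data below are invented, and the brute-force enumeration stands in for the integer and constraint programming methods of the thesis (which also handle longer cycles and much larger instances).

```python
from itertools import combinations
from fractions import Fraction

# Toy kidney-exchange instance (hypothetical data): pairs 0..4, where a
# 2-cycle (i, j) means pairs i and j can swap donors with each other.
two_cycles = [(0, 1), (1, 2), (2, 3), (3, 4)]

def disjoint(plan):
    pairs = [p for cyc in plan for p in cyc]
    return len(pairs) == len(set(pairs))  # no pair used in two cycles

# Brute-force enumeration of all exchange plans (sets of disjoint cycles).
plans = [plan for r in range(len(two_cycles) + 1)
         for plan in combinations(two_cycles, r) if disjoint(plan)]
best = max(len(p) for p in plans)               # maximum number of swaps
optimal = [p for p in plans if len(p) == best]  # 3 optimal plans here

# Uniform randomized policy over the optimal plans: each patient's
# probability of receiving a transplant.
prob = {i: Fraction(sum(any(i in cyc for cyc in plan) for plan in optimal),
                    len(optimal))
        for i in range(5)}
# Patients 1 and 3 are served in every optimal plan; 0, 2, 4 with prob 2/3.
```

On this instance a deterministic solver that always returns the same optimal plan would serve a fixed four patients and never the fifth, which is exactly the bias a randomized policy over the enumerated optima avoids.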
545

Modelling human behaviour in social dilemmas using attributes and heuristics

Ebenhöh, Eva 16 October 2007 (has links)
A question concerning not only modellers but also practitioners is: under what circumstances can mutual cooperation be established and maintained by a group of people facing a common pool dilemma? Before this question of institutional influences can be addressed, a different way of modelling human behaviour is needed, one that does not draw on the rational actor paradigm, because such modelling must be able to integrate the various deviations from that theory shown in economic experiments. We have chosen a new approach based on laboratory and field observations of actual human behaviour. We model human decision making as using an adaptive toolbox, following the notion of Gigerenzer: humans draw on a number of simple heuristics that are meaningful in a certain situation but may be useless in another. This is incorporated into our agent-based model by having agents perceive their environment, draw on a pool of heuristics to choose an appropriate one, and apply that heuristic. Behavioural differences can be incorporated in two ways. First, each agent has a number of attributes that differ in value; for example, there are more and less cooperative agents. The second behavioural difference lies in the way in which heuristics are chosen. With this modelling approach we contribute to a new way of modelling human behaviour, one that is simple enough to be included in more complex models while at the same time realistic enough to cover the actual decision-making processes of humans. Modellers should be able to use this approach without needing to go deep into psychological, sociological, or economic theory. Stakeholders in social dilemmas who may be confronted with such a model should understand why an agent decides the way it does.
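The agent architecture described above (perceive the situation, select an applicable heuristic from the pool, apply it) can be sketched in a few lines. The heuristic names and the contribution rules below are illustrative assumptions, not the heuristics used in the thesis.

```python
# Minimal sketch of the adaptive-toolbox idea: an agent scans its ordered
# pool of heuristics and applies the first one that fits the situation.
# All names and rules here are invented for illustration.

class Agent:
    def __init__(self, cooperativeness, heuristics):
        self.cooperativeness = cooperativeness  # behavioural attribute
        self.heuristics = heuristics            # ordered pool of (test, rule)

    def decide(self, situation):
        for applies, rule in self.heuristics:
            if applies(situation):              # heuristic is meaningful here
                return rule(self, situation)
        return 0                                # fallback: contribute nothing

# Two toy heuristics for a common-pool contribution decision.
imitate_group = (lambda s: "last_mean_contribution" in s,
                 lambda a, s: s["last_mean_contribution"])
fair_share = (lambda s: "endowment" in s,
              lambda a, s: a.cooperativeness * s["endowment"])

agent = Agent(cooperativeness=0.5, heuristics=[imitate_group, fair_share])
x = agent.decide({"last_mean_contribution": 4})  # imitation applies: 4
y = agent.decide({"endowment": 10})              # fair share applies: 5.0
```

Behavioural differences enter in both of the ways the abstract names: through attribute values such as `cooperativeness`, and through the order and content of each agent's heuristic pool.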
546

Employees Expectation from Leaders' Ethics in Decision Making

Easter, Shirley 01 January 2019 (has links)
The presence of unethical behavior continues to plague the global business community, and its impact in the finance industry is widely thought to be more devastating than ever before. Scholarly literature provides little understanding of what drives ethical decision making or of the processes involved, and little evidence that ethical standards have been developed as part of leadership decision-making training in finance. The purpose of this qualitative single case study was to explore the drivers and processes in the development of training that supports the ethical choices leaders make in their decision-making role within the finance industry, as well as to understand what effects those decisions have on followers and on organizational culture. The research question examined the processes and training involved in ethical decision making in the field of finance. Rawls' justice as fairness theory provided the theoretical framework. The data were collected by interviewing 7 purposefully selected directors and managers in the financial industry and were analyzed using a constant comparative approach and the development of vignettes based on Stake. The results showed that leaders were not always able to make sound ethical decisions, pointing to the need for ethical standards. When these standards and values are compromised, leadership behaviors can affect organizational culture, as they tend to decrease the commitment, performance, and motivation of employees while increasing absenteeism and turnover, thus adversely affecting company operations and incurring costs. The study results can have implications for social change through the development of higher ethical standards and adequate moral training.
547

Privacy evaluation of fairness-enhancing pre-processing techniques

Taillandier, Jean-Christophe 12 1900 (has links)
The prevalence of decision-making algorithms, based on increasingly powerful pattern-recognition machine learning models, has brought a growing wave of concern about discrimination and fairness in those algorithms' predictions, as well as their impact on the equity and treatment of minority or under-represented groups. This in turn has fuelled the development of new techniques to mitigate those issues and helped outline the challenges related to them.

In this work, we analyse recent advances in fairness-enhancing pre-processing techniques and evaluate how they control the fairness-utility trade-off and the dataset's ability to be used successfully in downstream tasks. We focus on three techniques that attempt to hide a sensitive attribute in a dataset: two based on Generative Adversarial Network architectures (LAFTR [67] and GANSan [6]), and one deterministic transformation of the dataset relying on density functions (Disparate Impact Remover [33]). First, we analyse the control over the fairness-utility trade-off that each of these techniques offers. We then attempt to revert the transformation each technique applies to the data, using a variation of an auto-encoder built specifically for this purpose, which we call the reconstructor. Lastly, we show that even though these techniques offer practical guarantees on specific fairness metrics, basic machine learning classifiers are often able to successfully predict the sensitive attribute from the transformed data, effectively enabling discrimination. This creates what we believe is a major issue in fairness-enhancing technique research, due in large part to the intricate relationship between fairness and privacy.
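A toy, single-feature version of the quantile-repair idea behind the Disparate Impact Remover makes the mechanism concrete. This is a simplified sketch on invented data, not the cited implementation: each group's value at a given within-group quantile is mapped to a common target (the median across groups of the values at that quantile).

```python
# Toy quantile repair: map each group's feature values onto a common
# distribution so the repaired feature no longer reveals the group.
# Assumes equal-sized groups and distinct values; illustration only.
import statistics

def repair(values, groups):
    by_group = {}
    for v, g in zip(values, groups):
        by_group.setdefault(g, []).append(v)
    ranks = {g: sorted(vs) for g, vs in by_group.items()}
    repaired = []
    for v, g in zip(values, groups):
        q = ranks[g].index(v) / (len(ranks[g]) - 1)  # quantile in own group
        target = statistics.median(                  # cross-group target value
            vs[round(q * (len(vs) - 1))] for vs in ranks.values())
        repaired.append(target)
    return repaired

scores = [1, 2, 3, 4, 10, 20, 30, 40]  # group B scores are 10x group A's
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
fixed = repair(scores, groups)
# After repair both groups carry the same values, rank for rank.
```

After this repair the two groups have identical score distributions, so a classifier cannot recover the group from this one feature; the finding above is that with full, correlated feature sets the sensitive attribute often remains predictable anyway.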
548

Eorzean Justice : A Mixed Methods study on distributive system fairness and use within a Distributive Justice based society

Bridger, Matthew January 2022 (has links)
The purpose of this study was to provide data concerning Distributive Justice system use and perceived fairness by citizens who have participated in a Distributive Justice-based society over a long period of time. These data fill a gap in the academic literature, which has previously focused on datasets from citizens of non-Distributive Justice-based societies or on participants in short-term experiments. Using a questionnaire, players of the online game Final Fantasy XIV were asked a series of questions concerning their perceptions of fairness and their use of various distributive models, as well as questions on their perceived relationships to one another and the time they had spent within Final Fantasy XIV. The study produced four main results. First, Utilitarian and Need-based distribution systems were seen as the most fair and were the most used. Second, perceived relationships between participants did not majorly affect the perceived fairness of distribution systems. Third, more time spent in Final Fantasy XIV did affect the perceived fairness of various types of distribution systems. Finally, the results indicated a Sufficiency-based reasoning for using Utilitarian and Need-based resource distribution to individuals until groups hit a sufficiency level, at which point the distribution systems changed to meet new criteria.
549

Capability, Social Justice and Education in the Niger Delta

Edozie, Imoh Colins 05 September 2019 (has links)
No description available.
550

Operationalizing FAccT : A Case Study at the Swedish Tax Agency / FAccT i praktiken : En fallstudie på Skatteverket

Jansson, Daniel, Strallhofer, Daniel January 2020 (has links)
Fairness, accountability, and transparency (FAccT) in machine learning is an interdisciplinary area that concerns the design, development, deployment, and maintenance of ethical AI and ML. Examples of research challenges in the field are detecting biased models, accountability issues that arise with systems that make decisions without human intervention or oversight, and black-box issues where decisions made by an AI system are untraceable. Whereas previous research within the FAccT domain typically uses only one perspective to investigate ethical AI, this paper takes the opposite approach of considering all three perspectives together to conduct a holistic case study. The aim of this paper is to provide tangible insights into how organizations can work with ethical AI and ML. The empirical evidence is gathered from the advanced data analytics (ADA) team at the Swedish Tax Agency in the form of interviews and quantitative data from a model developed by the team. Most notably, the quantitative and qualitative results show that the data set used to train the model is biased and that there are risks with the current modus operandi due to (1) disagreeing views on accountability and (2) differences in literacy and understanding of ML and AI. Furthermore, this paper features examples of how newly proposed frameworks such as SMACTR (a large-scale AI systems audit framework), datasheets, and model cards can be used by ADA in the development process to address these issues, along with the potential benefits and caveats of the frameworks themselves. We also showcase how theoretical models such as Larsson's seven nuances of transparency and Bovens' accountability framework can be applied in a practical setting, and provide supporting evidence of their respective applicability. Finally, we discuss the implications of taking a collective approach to FAccT, the importance of ethics and transparency, and comparisons of the frameworks used.
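The "model cards" framework mentioned in the abstract amounts to shipping a structured documentation record alongside a model. The sketch below is a minimal illustration; the field names are loosely inspired by the model-cards proposal, and every value is invented, not taken from the ADA team's actual model.

```python
# Minimal sketch of a model-card record. Field names are illustrative;
# all values are hypothetical examples, not real agency data.
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    evaluation_metrics: dict = field(default_factory=dict)
    ethical_considerations: list = field(default_factory=list)

card = ModelCard(
    name="risk-flagging-model",
    intended_use="Prioritize cases for manual review; not for automated decisions.",
    training_data="Historical audited cases (a potentially biased sample).",
    evaluation_metrics={"auc": 0.81, "demographic_parity_gap": 0.07},
    ethical_considerations=["Training data may over-represent previously audited groups."],
)
record = asdict(card)  # serializable form, e.g. for publishing with the model
```

Keeping the card as structured data rather than free text lets an audit process like SMACTR check mechanically that the documented intended use and known biases are present before deployment.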
