11 |
Defining a Security Council Mandate in Humanitarian Interventions: The Legal Status of Explanations of Vote. Hedenstierna, Sophie, January 2015 (has links)
No description available.
|
12 |
An Experimental Investigation of Causal Explanations for Depression and Willingness to Accept Treatment. Salem, Taban, 10 August 2018 (has links)
The present study experimentally investigated the effects of causal explanations for depression on treatment-seeking behavior and beliefs. Participants at a large Southern university (N = 139; 78% female; mean age 19.77 years) received bogus screening results indicating high depression risk, then viewed an explanation of depression etiology (fixed biological vs. malleable) before receiving a treatment referral (antidepressant vs. psychotherapy). Participants generally accepted the cover story at face value, but some expressed doubts about the screening task’s ability to properly assess their individual depression. Among these skeptics, those given a fixed biological explanation for depression were relatively unwilling to accept either treatment, whereas those given a malleable explanation were much more willing to accept psychotherapy. Importantly, differences in skepticism were not due to levels of actual depressive symptoms. The present findings indicate that information about the malleability of depression may have a protective effect for persons who would otherwise not accept treatment.
|
13 |
Applying Toulmin's Argumentation Framework to Explanations in a Reform Oriented Mathematics Class. Brinkerhoff, Jennifer Alder, 12 July 2007 (has links) (PDF)
This study looks at conceptual explanations given in a reform-oriented mathematics class for preservice secondary mathematics teachers and extends Toulmin's argumentation framework to account for some of the complexities of the explanations given by these students. This study explains the complexities that arose in applying Toulmin's framework to explanations and extends the framework by accounting for the features of conceptual explanations. The complexities of these explanations are that they are made up of multiple arguments that build on each other to reach a final conclusion and that they are also dependent on the social aspects of the class in which they are situated. Recognizing that some statements have dual purposes in the explanation and that there are varying levels of justification used in the explanations helped to account for the first complexity of explanations. The classification of class conventions helps to account for the social influences on explanations. This study differs from other studies that use Toulmin's framework to analyze formal proofs or to identify taken-as-shared understanding in a classroom. This study instead focuses on using the framework to analyze the components of explanations and to provide insight into the structure of conceptually oriented explanations. This study contributes to the existing research by extending Toulmin's argumentation framework to account for how social influences help determine the appropriate components of an explanation.
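To make the structural idea concrete, here is a rough, editor-supplied sketch rather than the study's own coding scheme: each argument is recorded with Toulmin's data, claim, warrant, and backing, and two arguments are chained so that the first claim serves as data for the second, mirroring the multi-argument explanations described above. The mathematical content is invented purely for illustration.

```python
# Illustrative only: a minimal encoding of Toulmin-style arguments, showing how an
# explanation can chain arguments so that one claim becomes the next argument's data.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Argument:
    data: List[str]                 # grounds offered in support of the claim
    claim: str                      # the conclusion being advanced
    warrant: str                    # why the data are taken to support the claim
    backing: Optional[str] = None   # support for the warrant itself (e.g., a class convention)

# Argument 1 establishes an intermediate result.
arg1 = Argument(
    data=["f(x) = x^2 is a polynomial"],
    claim="f is continuous everywhere",
    warrant="polynomials are continuous",
    backing="theorem stated earlier in the course",
)

# Argument 2 reuses arg1's claim as part of its data, building towards the final conclusion.
arg2 = Argument(
    data=[arg1.claim, "f(2) = 4 > 0"],
    claim="f stays positive near x = 2",
    warrant="a continuous function that is positive at a point is positive on a neighbourhood of it",
)

explanation = [arg1, arg2]   # the explanation is a sequence of linked arguments
for i, a in enumerate(explanation, 1):
    print(f"Argument {i}: {', '.join(a.data)}  =>  {a.claim}  (warrant: {a.warrant})")
```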
|
14 |
Media Meltdown? Causal Self-Attributions in the US Press Following the 2016 Presidential Election. Michel, Eva-Maria, 03 July 2018 (has links)
No description available.
|
15 |
Explainable and Network-based Approaches for Decision-making in Emergency Management. Tabassum, Anika, 19 October 2021 (has links)
Critical Infrastructures (CIs), such as power, transportation, and healthcare, refer to systems, facilities, technologies, and networks vital to national security, public health, and the socio-economic well-being of people. CIs play a crucial role in emergency management. For example, the recent Hurricane Ida, the Texas winter storm, and the Colonial Pipeline cyber-attack, all of which occurred in the US during 2021, show that CIs are highly inter-dependent, with complex interactions. Power system failures and the shutdown of natural gas pipelines, in turn, led to debilitating impacts on communication, waste systems, public health, and more. Consider power failures during a disaster, such as a hurricane. Subject Matter Experts (SMEs) such as emergency management authorities may be interested in several decision-making tasks. Can we identify disaster phases in terms of the severity of damage by analyzing changes in power failures? Can we tell the SMEs which power grids or regions are the most affected during each disaster phase and need immediate action to recover? Answering these questions can help SMEs respond quickly and send resources for fast recovery from damage. Can we systematically show how the failure of different power grids may impact the CIs as a whole through their inter-dependencies? This can help SMEs better prepare and mitigate risks by improving system resiliency.
In this thesis, we explore problems that help emergency management authorities carry out decision-making tasks efficiently during a disaster. Our research has two primary directions: guiding decision-making in resource allocation, and planning to improve system resiliency. Our work is done in collaboration with the Oak Ridge National Laboratory and contributes impactful research grounded in real-life CIs and disaster power-failure data.
1. Explainable resource allocation: In contrast to current interpretable or explainable models, which provide answers for understanding a model's output, we view explanations as answers that guide resource-allocation decision-making. In this thesis, we focus on developing a novel model and algorithm to identify disaster phases from changes in power failures, and to pinpoint the regions that may be most affected at each disaster phase so that SMEs can send resources for fast recovery.
2. Networks for improving system resiliency: We view CIs as a large heterogeneous network with infrastructure components as nodes and dependencies as edges. Our goal is to construct a visual analytic tool and develop a domain-inspired model to identify the important components and connections on which SMEs need to focus, so they can better prepare and mitigate the risk of a disaster (a small illustrative network sketch follows this abstract). / Doctor of Philosophy / Critical Infrastructure Systems (CIs) encompass multiple infrastructures vital to public life and national security, e.g., power, water, and transportation. The US Federal Emergency Management Agency (FEMA) aims to protect the nation and its citizens by mitigating all hazards during natural or man-made disasters. To do so, it needs to adopt different decision-making strategies efficiently: for example, during an ongoing disaster, deciding when to send resources quickly, which regions to send resources to first, and so on. It also needs to plan how to prepare for a future disaster and which CIs need maintenance to improve system resiliency.
We explore several data-mining problems that can guide FEMA towards developing efficient decision-making strategies. Our thesis emphasizes explainable and network-based models and algorithms that support decision-making operations for emergency management experts by leveraging critical infrastructure data.
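As a toy illustration of the network view sketched in item 2 above, the following editor-supplied snippet models a handful of invented infrastructure components as a directed dependency graph, ranks them by betweenness centrality, and runs a crude what-if removal. It is not the thesis's domain-inspired model or data; the components, dependencies, and choice of centrality measure are all assumptions made for the example.

```python
# Hypothetical sketch: critical infrastructures as a dependency network, with a
# simple centrality ranking standing in for the thesis's domain-inspired model.
import networkx as nx

G = nx.DiGraph()
# Edge u -> v means "v depends on u" (all components and links are invented).
G.add_edges_from([
    ("power_grid", "water_treatment"),
    ("power_grid", "hospital"),
    ("power_grid", "cell_tower"),
    ("natural_gas", "power_grid"),
    ("cell_tower", "emergency_dispatch"),
    ("water_treatment", "hospital"),
    ("fuel_depot", "emergency_dispatch"),
])

# Rank components by betweenness centrality: components that sit on many
# dependency paths are candidates for hardening or priority restoration.
ranking = sorted(nx.betweenness_centrality(G).items(),
                 key=lambda kv: kv[1], reverse=True)
for component, score in ranking:
    print(f"{component:>20s}  betweenness={score:.3f}")

# A crude what-if: removing the power grid leaves some services with no supplier.
G_failed = G.copy()
G_failed.remove_node("power_grid")
unreachable = [n for n in G_failed.nodes
               if G_failed.in_degree(n) == 0 and G.in_degree(n) > 0]
print("components losing all suppliers if the grid fails:", unreachable)
```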
|
16 |
Réponses manquantes : Débogage et Réparation de requêtes / Query Debugging and Fixing to Recover Missing Query Results. Tzompanaki, Aikaterini, 14 December 2015 (has links)
The growing amount of data comes with an increasing number of data-transformation programs, typically queries, and with the need to analyse and understand their results: (a) why does a given answer appear in the result? or (b) why is a given piece of information missing from it? The first question requires finding the origin or provenance of the results in the database, a problem that has been studied extensively for some twenty years. Explaining the absence of answers from a query result, on the other hand, has so far received little attention. Answering a Why-Not question consists of providing explanations for the missing answers. These explanations identify why and how the data relevant to the missing answers are absent or eliminated by the query. Our work assumes that the database is not the source of error and therefore seeks to provide explanations based on (the operators of) the query, which can then be refined by modifying the "faulty" operators. This thesis develops formal and algorithmic tools for debugging and fixing SQL queries in order to handle Why-Not questions. Our first contribution, inspired by a critical study of the state of the art, uses a query tree to search for the "faulty" operators. It handles a class of queries including SPJA, union and aggregation. The NedExplain algorithm developed in this setting has been validated formally and experimentally. It produces better-quality explanations while being more efficient than the state of the art. This approach, however, turns out to be sensitive to the choice of the query tree used to search for explanations. Our second contribution is a more general notion of explanation, in the form of a polynomial that captures all the combinations of conditions that must be modified for the missing answers to appear in the result. This method applies to the class of conjunctive queries with inequalities. Starting from a first, naive algorithm, Ted, which does not scale, a second algorithm, Ted++, was carefully designed to eliminate, among other things, the repeated evaluation of sub-queries involving Cartesian products. As with the first approach, an experimental evaluation demonstrated the quality and efficiency of Ted++. Regarding query fixing, our contribution lies in exploiting polynomial explanations to guide the modifications of the initial query, which allows more relevant refinements to be generated. "Faulty" joins are repaired in an original way using outer joins. All of the repair techniques are implemented in FixTed, enabling a performance study and a comparative study. Finally, Ted++ and FixTed have been assembled into a platform for debugging and fixing relational queries. / With the increasing amount of available data and data transformations, typically specified by queries, the need to understand them also increases. “Why are there medicine books in my sales report?” or “Why are there not any database books?” For the first question we need to find the origins or provenance of the result tuples in the source data.
However, reasoning about missing query results, specified by Why-Not questions such as the latter question above, had until recently not received the attention it deserves. Why-Not questions can be answered by providing explanations for the missing tuples. These explanations identify why and how data pertinent to the missing tuples were not properly combined by the query. Essentially, the causes lie either in the input data (e.g., erroneous or incomplete data) or at the query level (e.g., a query operator like join). Assuming that the source data contain all the necessary relevant information, we can identify the responsible query operators, forming query-based explanations. This information can then be used to propose query refinements modifying the responsible operators of the initial query such that the refined query result contains the expected data. This thesis proposes a framework targeted towards SQL query debugging and fixing to recover missing query results based on query-based explanations and query refinements. Our contribution to query debugging consists of two different approaches. The first one is a tree-based approach. First, we provide the formal framework around Why-Not questions, missing from the state of the art. Then, we review the state of the art in detail, showing how it may lead to inaccurate explanations or fail to provide an explanation. We further propose the NedExplain algorithm that computes correct explanations for SPJA queries and unions thereof, thus considering more operators (aggregation) than the state of the art. Finally, we experimentally show that NedExplain outperforms the state of the art both in terms of time performance and explanation quality. However, we show that this approach leads to explanations that differ for equivalent query trees, thus providing incomplete information about what is wrong with the query. We address this issue by introducing a more general notion of explanations, using polynomials. The polynomial captures all the combinations in which the query conditions should be fixed in order for the missing tuples to appear in the result. This method is targeted towards conjunctive queries with inequalities. We further propose two algorithms: Ted, which naively interprets the definitions for polynomial explanations, and the optimized Ted++. We show that Ted does not scale well w.r.t. the size of the database. On the other hand, Ted++ is capable of efficiently computing the polynomial, relying on schema and data partitioning and on the advantageous replacement of expensive database evaluations by mathematical calculations. Finally, we experimentally evaluate the quality of the polynomial explanations and the efficiency of Ted++, including a comparative evaluation. For query fixing, we propose a new approach for refining a query by leveraging polynomial explanations. Based on the input data, we propose how to change the query conditions pinpointed by the explanations by adjusting the constant values of the selection conditions. In the case of joins, we introduce a novel type of query refinement using outer joins. We further devise the techniques to compute query refinements in the FixTed algorithm, and discuss how our method has the potential to be more efficient and effective than the related work. Finally, we have implemented both Ted++ and FixTed in a system prototype. The query debugging and fixing platform, EFQ for short, allows users to interactively debug and fix their queries when they have Why-Not questions.
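To make Why-Not questions and query-based explanations concrete, here is a deliberately naive, editor-supplied sketch; the table, data, and query are invented, and the method is far simpler than NedExplain or Ted++ (it ignores joins, aggregation, and condition polynomials). It re-checks each selection condition in isolation against the tuple the user expected and reports the condition that filters it out.

```python
# A toy "Why-Not" check, loosely inspired by query-based explanations.
# All table names, columns, and data are hypothetical illustrations.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE books (title TEXT, category TEXT, price REAL)")
cur.executemany(
    "INSERT INTO books VALUES (?, ?, ?)",
    [("SQL Basics", "database", 55.0),
     ("Healing Herbs", "medicine", 20.0),
     ("Query Tuning", "database", 80.0)],
)

conditions = ["category = 'database'", "price < 40"]   # the query's selections
query = "SELECT title FROM books WHERE " + " AND ".join(conditions)
print("result:", cur.execute(query).fetchall())        # -> [] : no cheap database books

# Why-Not question: why is 'SQL Basics' missing from the result?
missing = "SQL Basics"
for cond in conditions:
    # Re-check each selection in isolation on the tuple we expected to see.
    probe = f"SELECT 1 FROM books WHERE title = ? AND {cond}"
    if cur.execute(probe, (missing,)).fetchone() is None:
        print(f"explanation: condition `{cond}` rejects '{missing}'")
```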
|
17 |
Explainable AI techniques for sepsis diagnosis: Evaluating LIME and SHAP through a user study. Norrie, Christian, January 2021 (has links)
Artificial intelligence has had a large impact on many industries and transformed some domains quite radically. There is tremendous potential in applying AI to the field of medical diagnostics. A major issue with applying these techniques to some domains is the inability of AI models to provide an explanation or justification for their predictions. This creates a problem wherein a user may not trust an AI prediction, or legal requirements for justifying decisions are not met. This thesis overviews how two explainable AI techniques (Shapley Additive Explanations and Local Interpretable Model-Agnostic Explanations) can establish a degree of trust for the user in the medical diagnostics field. These techniques are evaluated through a user study. The results suggest that supplementing classifications or predictions with a post-hoc visualization increases interpretability by a small margin. Further investigation and research utilizing a user study survey or interview is suggested to increase the interpretability and explainability of machine learning results.
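The following editor-supplied sketch shows the kind of post-hoc, per-prediction explanation evaluated in such a study. The "sepsis" feature names, data, and model are synthetic stand-ins rather than the thesis's actual setup, and LIME is used here simply because it is one of the two techniques named above.

```python
# Hypothetical sketch: a post-hoc LIME explanation for one "sepsis risk" prediction.
# The data, feature names, and classifier are invented illustrations.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = ["heart_rate", "temperature", "resp_rate", "wbc_count", "lactate"]
X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["no sepsis", "sepsis"],
    discretize_continuous=True,
)

# Explain a single prediction: which features pushed the model towards "sepsis"?
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=5)
for rule, weight in exp.as_list():
    print(f"{rule:>25s}  weight={weight:+.3f}")
```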
|
18 |
Assessment of Predictive Models for Improving Default Settings in Streaming Services / Bedömning av prediktiva modeller för att förbättra standardinställningar i streamingtjänster. Lattouf, Mouzeina, January 2020 (has links)
Streaming services provide different settings where customers can choose a sound and video quality based on personal preference. The majority of users never make an active choice; instead, they get a default quality setting that is chosen automatically for them based on some parameters, like internet connection quality. This thesis explores personalising the default audio setting, with the intention of improving the user experience. It achieves this by leveraging machine learning trained on the fraction of users who have made active choices in changing the quality setting. The idea behind this thesis work is the assumption that similarity among users who make an active choice can be leveraged to improve the user experience. The work set out to study which type of data from different categories (demographic, product, and consumption) is most predictive of a user's taste in sound quality. A case study was conducted to achieve the goals of this thesis. Five predictive model prototypes were trained, evaluated, compared, and analysed using two different algorithms, XGBoost and Logistic Regression, targeting two regions: Sweden and Brazil. Feature importance analysis was conducted using SHapley Additive exPlanations (SHAP), a unified framework for interpreting predictions with a game-theoretic approach, and by measuring coefficient weights to determine the most predictive features. Besides exploring feature impact, the thesis also answers how reasonable it is to generalise these models to non-selecting users by performing hypothesis testing. The project also covered bias analysis between users with and without active quality settings and how that affects the models. The models with XGBoost had higher performance. The results showed that demographic and product data had a higher impact on model predictions in both regions. However, different regions did not have the same data features as the most predictive ones, so differences in feature importance were observed between regions and also between platforms. The results of the hypothesis testing did not give a valid reason to assume that the models work for non-selecting users. However, the method is negatively affected by other factors, such as small changes in big datasets that impact statistical significance. Data bias was found in some data features, which indicated a correlation but not the causation behind the patterns. The results of this thesis additionally show how machine learning can improve the user experience with regard to default sound-quality settings, by leveraging models of similarity among users who have changed the sound quality to what is most suitable for them. / Streaming services offer different settings where customers can choose sound and video quality based on personal preferences. The majority of users never make an active choice; instead, they are assigned a default quality setting that is chosen automatically based on certain parameters, such as internet connection quality. This thesis investigates personalising the default audio setting, with the aim of improving the user experience. This is achieved by applying machine learning to the fraction of users who have actively changed the quality setting. The assumption that similarity among users who make an active choice can be leveraged to affect the user experience was the idea behind this thesis work. It was carried out to study which type of data from the categories demographic, product, and consumption is most predictive of a user's taste in sound quality.
A case study was conducted to achieve the goals of this thesis. Five predictive model prototypes were trained, evaluated, compared, and analysed using two different algorithms, XGBoost and Logistic Regression, targeting two regions: Sweden and Brazil. Feature importance analysis was carried out with SHapley Additive exPlanations (SHAP), a unified framework for interpreting predictions with a game-theoretic approach, and by measuring coefficient weights to determine the most predictive features. Besides exploring feature impact, the thesis also answers how reasonable it is to generalise these models to non-selecting users by performing hypothesis testing. The project also covered bias analysis between users with and without active quality settings and how that affects the models. The models with XGBoost had higher performance. The results showed that demographic and product data had a higher impact on model predictions in both regions. However, different regions did not have the same features as the most predictive ones, and differences in feature importance were observed between regions and also between platforms. The results of the hypothesis testing did not give compelling reason to believe that the models would work for non-selecting users. However, the method was negatively affected by other factors, such as small changes in large datasets that affect statistical significance. Data bias was found in some features, indicating a correlation but not the causation behind the patterns. The results of this thesis additionally show how machine learning can improve the user experience with regard to default sound-quality settings, by leveraging models of similarity among users who have changed the sound quality to what suits them best.
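As an editor-supplied illustration of the modelling setup described above, the sketch below trains the two model families mentioned, XGBoost and logistic regression, and compares a SHAP-based feature ranking with coefficient magnitudes. The data and feature names are synthetic assumptions, not the thesis's datasets, and the exact shape returned by SHAP can vary slightly between library versions.

```python
# Hypothetical sketch: global feature ranking via mean |SHAP| for XGBoost,
# compared with absolute coefficients for logistic regression on scaled inputs.
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["age_band", "platform", "plays_per_day", "connection_quality"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)  # y = 1: user raised the quality setting

# Gradient-boosted trees: rank features by mean absolute SHAP value.
booster = xgb.XGBClassifier(n_estimators=100, max_depth=3)
booster.fit(X, y)
shap_values = shap.TreeExplainer(booster).shap_values(X)    # (n_samples, n_features)
shap_rank = np.abs(shap_values).mean(axis=0)

# Logistic regression: rank features by absolute coefficient weight.
logit = LogisticRegression(max_iter=1000).fit(StandardScaler().fit_transform(X), y)
coef_rank = np.abs(logit.coef_[0])

for name, s, c in zip(feature_names, shap_rank, coef_rank):
    print(f"{name:>20s}  mean|SHAP|={s:.3f}  |coef|={c:.3f}")
```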
|
19 |
Effects of genetic and experiential explanations for killing on subsequent bug-killing behaviour and moral acceptance of killing. Ismail, Ibrahim, January 2008 (has links)
This study examined people’s attitudes towards killing bugs and their bug-killing behaviour in the context of nature vs. nurture explanations of bug killing. Previous research shows that exposure to genetic (i.e., nature) explanations could have undesirable effects on people’s attitudes and behaviour, compared to exposure to experiential (i.e., nurture) explanations. Genetic explanations for killing may affect attitudes towards killing and killing behaviour because they suggest that killing behaviour is predetermined or programmed by nature. Such explanations may also be used by individuals to overcome guilt and dissonance from prior killing or killing in which they are about to participate. This study tested the idea that exposure to genetic explanations for bug killing would lead people to view killing bugs as more morally acceptable, as well as lead them to kill more bugs. A sample of university students was randomly assigned to three conditions, in which they read either genetic or experiential explanations for why people kill bugs or read a neutral passage. The study utilised a procedure in which participants were led to believe that they were killing bugs (although in actuality no bugs were killed), to observe their killing behaviour in a self-paced killing task. Half of the participants were also asked to kill a bug prior to the self-paced killing task. Results showed that participants who read genetic explanations viewed bug killing as more morally acceptable compared to participants who read experiential explanations, and this occurred particularly among those who engaged in the prior killing task. However, no similar effects emerged for the number of bugs killed, though there was a positive correlation between the moral acceptance of bug killing and the number of bugs killed. Implications of genetic explanations with respect to aggression and killing are discussed.
|
20 |
Contrôle de la propagation et de la recherche dans un solveur de contraintes / Controlling propagation and search within a constraint solver. Prud'homme, Charles, 28 February 2014 (has links)
Constraint programming is often described, utopically, as a declarative paradigm in which the user describes the problem and the solver solves it. The reality of constraint solvers is, of course, more complex, and the need to customise modelling and solving techniques grows with the users' level of expertise. This thesis is about enriching the arsenal of techniques available in constraint solvers. On the one hand, we study the contribution of an explanation system to the exploration of the search space, in the specific setting of local search. Two generic neighbourhood heuristics, each exploiting explanations in its own way, are described. The first is based on the difficulty of repairing a partially destroyed solution; the second relies on the non-optimal nature of the current solution. These heuristics uncover the internal structure of the problems at hand in order to build good-quality neighbours for large neighbourhood search. They are complementary to other generic neighbourhood heuristics, with which they can be combined effectively. In addition, we propose to make the explanation system lazy in order to minimise its footprint. On the other hand, we take stock of the know-how related to propagation engines in constraint solvers. This knowledge is exploited operationally through a dedicated language that makes it possible to customise propagation within a solver, by providing implementation structures and by defining control points in the solver. This language offers high-level concepts that allow the user to ignore the solver's implementation details while retaining a good level of flexibility and certain guarantees. It allows the expression of propagation schemas specific to the internal structure of each problem. The implementation and experiments were carried out in the Choco constraint solver. This thesis has resulted in a new version of the tool that is globally more efficient and natively explained. / Constraint programming is often described, idealistically, as a declarative paradigm in which the user describes the problem and the solver solves it. Obviously, the reality of constraint solvers is more complex, and the need for customization of modeling and solving techniques changes with the level of expertise of users. This thesis focuses on enriching the arsenal of available techniques in constraint solvers. On the one hand, we study the contribution of an explanation system to the exploration of the search space in the specific context of a local search. Two generic neighborhood heuristics, each exploiting explanations in its own way, are described. The first one is based on the difficulty of repairing a partially destroyed solution, the second one is based on the non-optimal nature of the current solution. These heuristics discover the internal structure of the problems to build good neighbors for large neighborhood search. They are complementary to other generic neighborhood heuristics, with which they can be combined effectively. In addition, we propose to make the explanation system lazy in order to minimize its footprint. On the other hand, we undertake an inventory of know-how relative to propagation engines of constraint solvers.
These data are used operationally through a domain-specific language that allows users to customize the propagation schema, providing implementation structures and defining check points within the solver. This language offers high-level concepts that allow the user to ignore the implementation details, while maintaining a good level of flexibility and some guarantees. It allows the expression of propagation schemas specific to the internal structure of each problem solved. Implementation and experiments were carried out in the Choco constraint solver, developed in this thesis. This has resulted in a new version of the tool that is globally more efficient and natively explained.
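To illustrate the large neighborhood search setting in which the explanation-based heuristics operate, here is a generic, editor-supplied destroy-and-repair skeleton in plain Python. The thesis's heuristics are implemented inside the Choco solver (a Java library) and select the relaxed variables using explanations rather than at random; the task-assignment toy problem below is invented for the example.

```python
# A generic destroy-and-repair LNS skeleton (illustrative only; the thesis's
# neighborhoods are explanation-based and implemented inside the Choco solver).
import random

random.seed(0)
N_TASKS, N_WORKERS = 12, 4
cost = [[random.randint(1, 20) for _ in range(N_WORKERS)] for _ in range(N_TASKS)]

def objective(assign):
    return sum(cost[t][w] for t, w in enumerate(assign))

def repair(assign, relaxed):
    # Greedy repair: reassign each relaxed task to its cheapest worker.
    for t in relaxed:
        assign[t] = min(range(N_WORKERS), key=lambda w: cost[t][w])
    return assign

best = [random.randrange(N_WORKERS) for _ in range(N_TASKS)]   # initial solution
best_obj = objective(best)

for it in range(100):
    # Destroy: relax a random subset of the variables (an explanation-based
    # heuristic would instead pick the variables blamed for non-optimality).
    relaxed = random.sample(range(N_TASKS), k=4)
    candidate = repair(best[:], relaxed)
    cand_obj = objective(candidate)
    if cand_obj < best_obj:                 # accept only improving neighbors
        best, best_obj = candidate, cand_obj
        print(f"iter {it:3d}: improved objective to {best_obj}")

print("final assignment:", best, "objective:", best_obj)
```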
|