41 |
Self-Regulatory Deficits and Childhood Trauma Histories: Bridging Two Causal Explanations for Sexually Abusive Behavior
Lasher, M. P.; Stinson, Jill D. (01 October 2015)
No description available.
|
42 |
A Machine Learning Ensemble Approach to Churn Prediction: Developing and Comparing Local Explanation Models on Top of a Black-Box Classifier
Olofsson, Nina (January 2017)
Churn prediction methods are widely used in Customer Relationship Management and have proven valuable for retaining customers. To obtain high predictive performance, recent studies rely on increasingly complex machine learning methods, such as ensemble or hybrid models. However, the more complex a model is, the more difficult it becomes to understand how its decisions are actually made. Previous studies on machine learning interpretability have taken a global perspective on understanding black-box models. This study explores the use of local explanation models for explaining the individual predictions of a Random Forest ensemble model. Churn prediction was studied on the users of Tink, a finance app.

This thesis takes local explanations one step further by comparing churn indicators across different user groups. Three sets of groups were created based on differences in three user features. The importance scores of all globally identified churn indicators were then computed for each group with the help of local explanation models. The results showed no significant differences between the groups regarding the globally most important churn indicators. Instead, differences were found for globally less important churn indicators, concerning the type of information that users stored in the app. In addition to comparing churn indicators between user groups, this study produced a well-performing Random Forest ensemble model able to explain the reason behind churn predictions for individual users. The model proved significantly better than a number of simpler models, with an average AUC of 0.93.
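The abstract does not name the local explanation method used on top of the Random Forest, but the general idea can be sketched with a LIME-style local surrogate: perturb the instance of interest, query the black box, and fit a proximity-weighted linear model whose coefficients act as local feature importances. Everything below (the Gaussian perturbation, the kernel width, the function names) is an illustrative assumption, not the thesis's implementation.

```python
import numpy as np

def local_linear_explanation(predict_proba, x, n_samples=2000,
                             kernel_width=0.75, seed=0):
    """LIME-style local surrogate: perturb x, weight the perturbed
    samples by proximity to x, and fit a weighted linear model whose
    coefficients serve as local feature importances for the black-box
    prediction at x."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Sample perturbations around the instance of interest.
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))
    y = predict_proba(Z)                     # black-box churn probabilities
    # Exponential kernel: nearby perturbations count more.
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[1:]                          # per-feature local importances
```

On a toy black box such as `lambda Z: 1 / (1 + np.exp(-(3*Z[:, 0] - Z[:, 1])))`, the returned importances are largest (and positive) for the first feature, negative for the second, and near zero for features the black box ignores.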
|
43 |
Explanations In Contextual Graphs: A Solution To Accountability In Knowledge Based Systems
Sherwell, Brian W (01 January 2005)
In order for intelligent systems to be viable, widely used tools, a user must be able to understand how the system reaches a decision. Without understanding how the system arrived at an answer, a user will be less likely to trust its decision. One way to increase a user's understanding of how the system functions is to employ explanations that account for the output produced. There have been attempts to explain intelligent systems over the past three decades; however, each has suffered from a separation between the logic used to produce the output and the logic used to produce the explanation. By using the representational paradigm of Contextual Graphs, it is proposed that explanations can be produced that overcome these shortcomings. Two temporal forms of explanation are proposed: a pre-explanation and a post-explanation. The pre-explanation is intended to help the user understand the decision-making process; the post-explanation is intended to help the user understand how the system arrived at its final decision. Both explanations are intended to give the user a greater understanding of the logic used to compute the system's output, thereby enhancing the system's credibility and utility. A prototype system was constructed to be used as a decision support tool in a National Science Foundation research program. The researcher spent the last year at the NSF collecting the knowledge implemented in the prototype system.
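The core claim above is that an explanation becomes trustworthy when it is generated by the same logic that produced the decision. A minimal way to sketch this with a contextual graph: contextual nodes branch on an attribute of the current context, action nodes do work, and logging each branch taken during traversal yields a post-explanation for free. The node types, the toy NSF-style funding graph, and all names below are hypothetical illustrations, not the prototype described in the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """A leaf step in the decision process."""
    name: str
    next: object = None

@dataclass
class ContextualNode:
    """Branches on one attribute of the current context."""
    attribute: str
    branches: dict = field(default_factory=dict)  # context value -> subgraph

def run_with_explanation(node, context):
    """Traverse the contextual graph, recording every branch taken.
    The trace doubles as a post-explanation: it is produced by the
    exact path that produced the decision, not by separate logic."""
    trace = []
    while node is not None:
        if isinstance(node, ContextualNode):
            value = context[node.attribute]
            trace.append(f"because {node.attribute} = {value!r}")
            node = node.branches[value]
        else:
            trace.append(f"do: {node.name}")
            node = node.next
    return trace
```

For example, a two-level graph branching first on `budget` and then on `merit` yields a trace like `["because budget = 'low'", "because merit = 'strong'", "do: fund partially"]`.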
|
44 |
Development Of Seventh Grade Pre-algebra Students' Mathematical Problem Solving Through Written Explanations And Justifications
Jones, Rebecca (01 January 2008)
In this action research study, seventh grade pre-algebra students in a mathematics classroom shared their explanation and justification processes through group work. Prior to the start of the study, students were given a written pre-test to determine their current conceptual thinking in mathematics. Over the next nine weeks, the teacher engaged the students in problem-solving activities that included reasoning skills, communication, and making connections through discussion with their peers. Following nine weeks of written and verbal discourse, students were given a post-test to determine changes in their conceptual thinking. Overall, students' grades, journal writings, and test scores showed positive gains, with the greatest changes occurring in written explanations of their conceptual thinking in mathematics.
|
45 |
The Effects Of Problem Solving Strategy Instruction, Journal Writing And Discourse On 6th Grade Advanced Mathematics Student Performance
Wittcop, Melissa (01 January 2008)
There were two purposes to this study. The first was for me, as a teacher, to try something new in my instruction and grow from it. The second focused on the students: I wanted to see what level of performance in problem solving my students were currently at, and how the use of journaling and discourse affected their problem-solving abilities. A problem-solving unit was taught heuristically in order to introduce students to the various strategies that could be used in problem solving. Math journals were also used for problem solving and reflection. Classroom discourse in discussions of problem-solving situations was used as a means of identifying the strategies used to solve each problem. Explanations and justifications were then used in writing and discourse to support students' solutions and methods. An analytic problem-solving rubric was used to score the problems solved by the students. These scores, along with the explanations, justifications, and discourse, were used as data and analyzed for common themes. The results of this study demonstrate overall improvement in student performance in problem solving. The heuristic instruction students received on problem-solving strategies improved their ability not only to select an appropriate strategy but also to implement it. This unit, along with the problem-solving prompts solved in the journals, improved the students' performance in explanations. It was discourse, combined with all the previous instruction, that finally improved student performance in justification.
|
46 |
Concise Justifications Versus Detailed Proofs for Description Logic Entailments
Borgwardt, Stefan (29 December 2023)
We discuss explanations in Description Logics (DLs), a family of logics used for knowledge representation. Initial work on explaining consequences for DLs had focused on justifications, which are minimal subsets of axioms that entail the consequence. More recently, it was proposed that proofs can provide more detailed information about why a consequence follows. Moreover, several measures have been proposed to estimate the comprehensibility of justifications and proofs, for example, their size or the complexity of logical expressions. In this paper, we analyze the connection between these measures, e.g. whether small justifications necessarily give rise to small proofs. We use a dataset of DL proofs that was constructed last year based on the ontologies of the OWL Reasoner Evaluation 2015. We find that, in general, less complex justifications indeed correspond to less complex proofs, and discuss some exceptions to this rule.
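A justification, as defined above, is a subset-minimal set of axioms that still entails the consequence. A standard black-box way to compute one is deletion-based minimization: try to drop each axiom and keep the removal only if the entailment survives. Real DL systems plug a reasoner into the entailment check; the sketch below substitutes a much simpler stand-in, forward chaining over propositional Horn rules, purely for illustration.

```python
def entails(axioms, goal, facts):
    """Forward chaining over Horn rules (body -> head); a stand-in
    for a real Description Logic reasoner."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in axioms:
            if head not in known and set(body) <= known:
                known.add(head)
                changed = True
    return goal in known

def justification(axioms, goal, facts):
    """Deletion-based minimization: remove each axiom in turn and keep
    the removal only if the goal still follows from the remainder.
    The result is one subset-minimal justification (which one depends
    on the iteration order)."""
    just = list(axioms)
    for ax in list(axioms):
        candidate = [a for a in just if a != ax]
        if entails(candidate, goal, facts):
            just = candidate
    return just
```

For instance, given the rules A→B, B→C, A→C, X→C and the fact A, the goal C has two justifications ({A→C} and {A→B, B→C}); deletion order determines which one is returned.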
|
47 |
A tactic of displacement: explaining patterns of internal displacement in the Syrian civil war
Stevens, Lucy (29 September 2023)
The Syrian civil war and the displacement crisis it caused changed the international community's understanding of forced migration in the contemporary context. More than a decade after the conflict began, over half the population of Syria remains displaced, indicating the continued importance of this crisis. The literature has overwhelmingly focused on those Syrians who crossed international borders. However, those who remain internally displaced, and the patterns their displacement within Syria has taken, provide insightful information on the drivers of forced migration more widely. By looking at subnational variation in migration patterns, this study seeks to answer the question: what explains patterns of internal forced displacement within Syria? I argue that the patterns seen throughout the Syrian civil war are an outcome of state policies that push displacement onto certain populations and regions of the country as a method of helping ensure regime victory. These tactics go beyond common decision-making explanations, putting culpability for displacement back onto government actors. A qualitative examination of strategies employed by the Syrian regime during the civil war, as well as a spatial and temporal analysis of IDP movements within Syria between 2016 and 2019, shows evidence of the tactics used by the regime that have driven Syrian internal displacement.
|
48 |
Machine Learning Survival Models: Performance and Explainability
Alabdallah, Abdallah (January 2023)
Survival analysis is an essential field of statistics and machine learning, with critical applications such as medical research and predictive maintenance. In these domains, understanding models' predictions is paramount. While machine learning techniques are increasingly applied to enhance the predictive performance of survival models, they simultaneously sacrifice transparency and explainability. Survival models, in contrast to regular machine learning models, predict functions rather than point estimates like regression and classification models do. This creates a challenge for explaining such models with off-the-shelf machine learning explanation techniques, such as Shapley values and counterfactual examples. Censoring is another major issue in survival analysis, where the target time variable is not fully observed for all subjects. Moreover, in predictive maintenance settings, recorded events do not always map to actual failures: some components may be replaced because they are considered faulty, or about to fail, based on an expert's opinion. Censoring and noisy labels create modeling and evaluation problems that need to be addressed during the development and evaluation of survival models.

Considering these challenges and the differences from regular machine learning models, this thesis aims to bridge this gap by facilitating the use of machine learning explanation methods to produce plausible and actionable explanations for survival models. It also aims to enhance survival modeling and evaluation, revealing better insight into the differences among compared survival models.

We propose two methods for explaining survival models, both of which rely on discovering survival patterns in the model's predictions that group the studied subjects into significantly different survival groups. Each pattern reflects a specific survival behavior common to all subjects in its group. We use these patterns to explain the predictions of the studied model in two ways. In the first, we employ a classification proxy model that captures the relationship between the descriptive features of subjects and the learned survival patterns; explaining this proxy model using Shapley values provides insight into the feature attribution of belonging to a specific survival pattern. In the second, we address the "what if?" question by generating plausible and actionable counterfactual examples that would change the predicted pattern of the studied subject, giving insight into the actionable changes required to enhance the survivability of subjects.

We also propose a variational-inference-based generative model for estimating the time-to-event distribution. The model relies on a regression-based loss function able to handle censored cases, and on sampling to estimate the conditional probability of event times. Moreover, we propose a decomposition of the C-index into a weighted harmonic average of two quantities: the concordance among the observed events and the concordance between observed and censored cases. These two quantities, weighted by a factor representing the balance between them, can reveal differences between survival models that are invisible in the total concordance index alone. This gives insight into the performance of different models and its relation to the characteristics of the studied data.

Finally, as part of enhancing survival modeling, we propose an algorithm that can correct erroneous event labels in predictive-maintenance time-to-event data. We adopt an expectation-maximization-like approach, using a genetic algorithm to find better labels that maximize the survival model's performance. Over iterations, the algorithm builds confidence about event assignments, which improves the search in subsequent iterations until convergence. Experiments on real and synthetic data show that our proposed methods enhance performance in survival modeling and can reveal the underlying factors contributing to the explainability of survival models' behavior and performance.
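The C-index decomposition idea can be illustrated by splitting Harrell's concordant pairs into the two groups the abstract names: pairs where both subjects experienced the event, and pairs where the later subject is censored. The sketch below only performs that split (the total C then equals the pair-count-weighted average of the two components); the thesis's specific harmonic weighting is not reproduced here, and tied risk scores are ignored for brevity.

```python
def c_index_components(times, events, risks):
    """Split Harrell's C-index pairs into event-event pairs and
    event-censored pairs. Reporting the two concordances separately,
    with their pair counts, exposes differences between models that a
    single total C-index can hide."""
    conc_ee = n_ee = conc_ec = n_ec = 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                      # only observed events anchor a pair
        for j in range(n):
            if i == j or times[j] <= times[i]:
                continue                  # j must survive longer than i
            concordant = risks[i] > risks[j]
            if events[j]:                 # both subjects had the event
                n_ee += 1
                conc_ee += concordant
            else:                         # the later subject is censored
                n_ec += 1
                conc_ec += concordant
    c_ee = conc_ee / n_ee if n_ee else float("nan")
    c_ec = conc_ec / n_ec if n_ec else float("nan")
    total = (conc_ee + conc_ec) / (n_ee + n_ec)
    return c_ee, c_ec, total
```

Two models with the same total C can differ sharply in `c_ee` versus `c_ec`, which is exactly the kind of difference the decomposition is meant to surface.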
|
49 |
The Effects of Web-Based Peer Review on Student Writing
Wooley, Ryan S. (13 December 2007)
No description available.
|
50 |
Developing a model of communication for pre-service elementary teachers' written mathematical explanations
Ishii, Drew K. (13 July 2005)
No description available.
|