81 |
Matter and Explanation. On Aristotle's Metaphysics Book H / Matière et Explication. Sur le livre Η de la Métaphysique d'Aristote. Seminara, Simone Giuseppe, 13 June 2014 (has links)
The main aim of my work – "Matter and Explanation. On Aristotle's Metaphysics Book Η" – is to show the argumentative unity of Book Η (VIII), which has usually been regarded as a mere collection of appendices to the preceding Book Ζ. In my thesis, in line with the dominant trend in the specialist literature of recent years, I take up the main suggestion provided by M. Burnyeat in "A Map of Metaphysics Ζ" (2001). According to Burnyeat, Η completes the enquiry of Ζ by developing Ζ17's fresh start on the analysis of sensible substances. From Ζ17 onwards, Aristotle regards the notion of substance in its explanatory role as "principle and cause" and, as a consequence, searches for "the cause by reason of which a certain matter is some definite thing". Burnyeat's suggestion has so far been taken to cast Η as the place where this search is carried out, so that Η would play a merely didactic and expository role, making explicit the methodological principle established in Ζ17. In my work I aim to show that in Book Η Aristotle does not confine himself to a mere exposition of earlier results. On the contrary, he carries out a deep revision of the substantial status of matter, that is, of the ontological subject whose organization must be explained. This revision concerns those criteria which in Book Ζ had, in different ways, contributed to imposing a deflationary reading of the notion of ὕλη. In Η, by contrast, matter is treated as the subject underlying physical changes and in its dispositional role within biological wholes. This framework culminates in Η6, where Aristotle shows the explanatory primacy of his own hylomorphism over the Platonic doctrine of Forms.
|
82 |
Methods and measures for statistical fault localisation. Landsberg, David, January 2016 (has links)
Fault localisation is the process of finding the causes of a given error, and is one of the most costly elements of software development. One of the most efficient approaches to fault localisation appeals to statistical methods. These methods are characterised by their ability to estimate how faulty a program artefact is as a function of statistical information about a given program and test suite. However, the major problem facing statistical approaches is their effectiveness -- particularly with respect to finding single (or multiple) faults in large programs typical of the real world. A solution to this problem hinges on discovering new formal properties of faulty programs and developing scalable statistical techniques which exploit them. In this thesis I address this by identifying new properties of faulty programs, developing the formal frameworks and methods which are formally proven to exploit them, and demonstrating that many of our new techniques substantially and statistically significantly outperform competing algorithms at given fault localisation tasks (using p = 0.01) on what (to our knowledge) is one of the largest-scale sets of experiments in fault localisation to date. This research is thus designed to corroborate the following thesis statement: that the new algorithms presented in this thesis are effective and efficient at software fault localisation and outperform state-of-the-art statistical techniques at a range of fault localisation tasks. In more detail, the major thesis contributions are as follows:

1. We perform a thorough investigation into the existing framework of spectrum-based fault localisation (sbfl), which currently stands at the cutting edge of statistical fault localisation. To improve on the effectiveness of sbfl, our first contribution is to introduce and motivate many new statistical measures which can be used within this framework. First, we show that many are well motivated for the task of sbfl. Second, we formally prove equivalence properties of large classes of measures. Third, we show that many of the measures perform competitively with the existing measures in experimentation -- in particular, our new measure m9185 outperforms all existing measures on average in terms of effectiveness and, along with Kulczynski2, is in a class of measures which statistically significantly outperforms all other measures at finding a single fault in a program (p = 0.01).

2. Having investigated sbfl, our second contribution is to motivate, introduce, and formally develop a new formal framework which we call probabilistic fault localisation (pfl). pfl is similar to sbfl insofar as it can leverage any suspiciousness measure, and is designed to directly estimate the probability that a given program artefact is faulty. First, we formally prove that pfl is theoretically superior to sbfl insofar as it satisfies and exploits a number of desirable formal properties which sbfl does not. Second, we experimentally show that pfl methods (namely, our measure pfl-ppv) substantially and statistically significantly outperform the best performing sbfl measures at finding a fault in large multiple-fault programs (p = 0.01). Furthermore, we show that for many of our benchmarks it is theoretically impossible to design strictly rational sbfl measures which outperform given pfl techniques.

3. Having addressed the problem of localising a single fault in a program, we address the problem of localising multiple faults. Accordingly, our third major contribution is the introduction and motivation of a new algorithm MOpt(g), which optimises any ranking-based method g (such as pfl/sbfl/Barinel) for the task of multiple fault localisation. First, we prove that MOpt(g) formally satisfies and exploits a newly identified formal property of multiple fault optimality. Second, we experimentally show that there are values for g such that MOpt(g) substantially and statistically significantly outperforms given ranking-based fault localisation methods at the task of finding multiple faults (p = 0.01).

4. Having developed methods for localising faults as a function of a given test suite, we finally address the problem of optimising test suites for the purposes of fault localisation. Accordingly, we first present an algorithm which leverages model checkers to improve a given test suite by making it satisfy a property of single bug optimality. Second, we experimentally show that on small benchmarks single-bug-optimal test suites can be generated (from scratch) efficiently when the algorithm is used in conjunction with the cbmc model checker, and that the test suites generated can be used effectively for fault localisation.
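To make the sbfl setting concrete, the sketch below shows how a suspiciousness measure is computed from program spectra, i.e. the per-statement counts of failing and passing executions. It is a minimal illustration, not the thesis's code: the Ochiai measure used here is a standard measure from the sbfl literature, since the definitions of the thesis's own measures (m9185, pfl-ppv) are not given in this abstract.

```python
import math

def spectra(coverage, outcomes):
    """For each statement, compute the standard sbfl spectrum counts.

    coverage: list of sets of statements covered by each test
    outcomes: list of booleans, True if the corresponding test failed
    Returns {stmt: (ef, ep, nf, np)} -- executed/not-executed counts
    split by failing/passing tests.
    """
    stmts = set().union(*coverage)
    total_f = sum(outcomes)
    total_p = len(outcomes) - total_f
    result = {}
    for s in stmts:
        ef = sum(1 for cov, fail in zip(coverage, outcomes) if s in cov and fail)
        ep = sum(1 for cov, fail in zip(coverage, outcomes) if s in cov and not fail)
        result[s] = (ef, ep, total_f - ef, total_p - ep)
    return result

def ochiai(ef, ep, nf, np):
    """Ochiai suspiciousness: ef / sqrt((ef + nf) * (ef + ep))."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# Rank statements by suspiciousness, most suspicious first.
coverage = [{1, 2, 3}, {1, 3}, {1, 2}]
outcomes = [True, False, False]          # only the first test fails
ranking = sorted(spectra(coverage, outcomes).items(),
                 key=lambda kv: ochiai(*kv[1]), reverse=True)
```

Any measure compared in the thesis can be slotted in for ochiai, since sbfl suspiciousness measures are functions of the same four spectrum counts.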
|
83 |
The justificatory structure of OWL ontologies. Bail, Samantha Patricia, January 2013 (has links)
The Web Ontology Language OWL is based on the highly expressive description logic SROIQ, which allows OWL ontology users to employ out-of-the-box reasoners to compute information that is not only explicitly asserted but also entailed by the ontology. Explanation facilities for entailments of OWL ontologies form an essential part of ontology development tools, as they support users in detecting and repairing errors in potentially large and highly complex ontologies, thus helping to ensure ontology quality. Justifications, minimal subsets of an ontology that are sufficient for an entailment to hold, are currently the prevalent form of explanation in OWL ontology development tools. They have been found to significantly reduce the time and effort required to debug erroneous entailments. A large number of entailments, however, have not only one but many justifications, which can make it considerably more challenging for a user to find a suitable repair for the entailment.

In this thesis, we investigate the relationships between multiple justifications for both single and multiple entailments, with the goal of exploiting this justificatory structure in order to devise new coping strategies for multiple justifications. We describe various aspects of the justificatory structure of OWL ontologies, such as shared axiom cores and structural similarities. We introduce a model for measuring user effort in the debugging process and propose debugging strategies that exploit the justificatory structure in order to reduce user effort. Finally, an analysis of a large corpus of ontologies from the biomedical domain reveals that OWL ontologies used in practice frequently exhibit a rich justificatory structure.
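Because a justification is a minimal subset of axioms sufficient for an entailment, a deletion-based contraction makes the notion concrete. This is a generic sketch, not the thesis's tooling: the entails argument below is a hypothetical stand-in for a description logic reasoner's entailment check.

```python
def one_justification(axioms, entails, goal):
    """Shrink an axiom set to one minimal subset still entailing the goal.

    axioms:  list of axioms such that entails(axioms, goal) is True
    entails: hypothetical oracle entails(axiom_list, goal) -> bool,
             standing in for a DL reasoner's entailment check
    Returns a justification: removing any one axiom breaks the entailment.
    Correct for monotonic logics such as SROIQ, where discarding an
    axiom can never create new entailments.
    """
    just = list(axioms)
    for ax in list(just):
        rest = [a for a in just if a is not ax]
        if entails(rest, goal):   # ax is not needed for this entailment
            just = rest
    return just
```

Finding all justifications of an entailment, the raw material of the justificatory structure studied in the thesis, requires a hitting-set-style search built on top of this single-justification step.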
|
84 |
The effect of explanations and monetary incentives on effort allocation decisions. Guymon, Ronald Nathan, 01 January 2008 (has links)
In this study I examine the joint effect of explanations and monetary incentives on employees' effort allocation decisions in a multi-action setting. A rich literature in economics indicates that monetary incentives substantially influence employees' decisions. This literature demonstrates that the size of the incentive for a given performance measure should reflect the measure's sensitivity, congruence, and precision. Research in psychology demonstrates the decision-influencing effects of explanations (a non-monetary factor) on employees' decisions through perceptions of fairness. I expect that effort allocation decisions are influenced both by explanations and by monetary incentives: I hypothesize that providing reasonable and complete explanations substantively alters agents' action choices relative to a setting with monetary incentives alone. Using student subjects in experiments, I find that monetary incentives matter. Moreover, for sizeable monetary incentives, providing a detailed explanation modifies behavior favorably relative to when an unclear explanation is provided. However, for all of the considered monetary incentives, merely requesting a desired course of action is also enough to modify behavior favorably. This study contributes to the accounting literature by providing evidence of a decision-influencing benefit associated with the use of explanations, such as the causal maps employed by firms adopting the balanced scorecard. It also contributes to the organizational justice literature by providing evidence regarding the interaction effect of multiple antecedents of justice.
|
85 |
A Layman's Interpretation of the Provisions of a 20-Year Pay Life Insurance Policy. James, Albert W., Jr., 08 1900 (has links)
This thesis presents an attempt to simplify the language used in life insurance provisions.
|
86 |
Enhancing Students' Ability to Correct Misconceptions in Natural Selection with Refutational Texts and Self-Explanation. January 2020 (has links)
This study examined the effects of different constructed response prompts and text types on students' revision of misconceptions, comprehension, and causal reasoning. The participants were randomly assigned to prompt (self-explain, think-aloud) and text type (refutational, non-refutational) in a 2x2, between-subjects design. While reading, the students were prompted to write responses at regular intervals in the text. After reading, students were administered the conceptual inventory of natural selection (CINS), for which a higher score indicates fewer misconceptions of natural selection. Finally, students were given text comprehension questions, and reading skill and prior knowledge measures. Linear mixed effects (LME) models showed that students with better reading skill and more prior knowledge had higher CINS scores and better comprehension than less skilled students, but there were no effects of text type or prompt. Linguistic analysis of students' responses demonstrated a relationship of prompt, text, and reading skill on students' causal reasoning. Less skilled students exhibited greater causal reasoning when self-explaining a non-refutational text compared to less skilled students prompted to think aloud, and to less skilled students who read the refutational text. The results of this study demonstrate a relationship between reading skill and misconceptions about natural selection. Furthermore, the linguistic analyses suggest that less skilled students' causal reasoning improves when they are prompted to self-explain. / Dissertation/Thesis / Masters Thesis Psychology 2020
|
87 |
Roblocks: An Educational System for AI Planning and Reasoning. January 2019 (has links)
This research introduces Roblocks, a user-friendly system for learning Artificial Intelligence (AI) planning concepts using mobile manipulator robots. It uses a visual programming interface based on block-structured programming to make AI planning concepts easier to grasp for those who are new to robotics and AI planning. Users accomplish desired tasks by dynamically assembling puzzle-shaped blocks encoding the robot's possible actions, allowing them to carry out tasks like navigation, planning, and manipulation by connecting blocks instead of writing code. Roblocks has two levels. In the first level, users re-arrange a jumbled set of actions of a plan into the correct order so that a given goal can be achieved. In the second level, they select actions of their choice, but at each step only those actions applicable in the current state are made available, thereby pruning the vast number of possible actions down to the truly feasible and relevant ones. Both levels include a simulation in which the user's plan is executed. Moreover, if the user's plan is invalid or fails to achieve the given goal condition, an explanation for the failure is provided in plain English. This makes it easier for everyone (especially non-roboticists) to understand the cause of the failure. / Dissertation/Thesis / Working of Roblocks / Masters Thesis Computer Science 2019
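The pruning in the second level reflects a basic notion from classical planning: an action is offered only if its preconditions hold in the current state. Below is a minimal sketch under that assumption; the Action encoding and the pickup example are hypothetical illustrations, not Roblocks' actual block representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # facts that must hold before executing
    add_effects: frozenset    # facts made true by executing
    del_effects: frozenset    # facts made false by executing

def applicable(state, actions):
    """Return only the actions whose preconditions hold in the state."""
    return [a for a in actions if a.preconditions <= state]

def apply(state, action):
    """Progress the state through one action (STRIPS-style update)."""
    return (state - action.del_effects) | action.add_effects

# Hypothetical example: the robot may only pick up a block it can reach.
state = frozenset({"at(table)", "clear(block1)"})
pickup = Action("pickup(block1)",
                frozenset({"at(table)", "clear(block1)"}),
                frozenset({"holding(block1)"}),
                frozenset({"clear(block1)"}))
assert pickup in applicable(state, [pickup])
state = apply(state, pickup)   # now holding(block1), no longer clear(block1)
```

Offering the learner only the output of applicable at each step is what keeps the block palette small and every choice executable.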
|
88 |
A Historical Approach to Understanding Explanatory Proofs Based on Mathematical Practices. Oshiro, Erika, 23 February 2019 (has links)
My dissertation focuses on mathematical explanation as found in proofs, examined from a historical point of view that stresses the importance of mathematical practices. Current philosophical theories of explanatory proofs emphasize the structure and content of proofs without regard to external factors that influence a proof's explanatory power. As a result, the major philosophical views have been shown to be inadequate in capturing general aspects of explanation. I argue that, in addition to form and content, a proof's explanatory power depends on its targeted audience. History is useful here, because from it we are able to follow the transition from a first-generation proof, which is usually non-explanatory, to its explanatory version. By tracking the similarities and differences between these proofs, we gain a better understanding of what makes a proof explanatory according to mathematicians who have the relevant background to evaluate it as such.
My first chapter discusses why history is important for understanding mathematical practices. I describe two kinds of history: one that presents a narrative of events, which influenced developments in mathematics both directly and indirectly, and another, typically used in mathematical research, which concentrates only on technical developments. I contend that both versions of the past benefit the philosopher. History used in research gives us an idea of what mathematicians desire or find to be important, while history written by historians shows us what effects these have on mathematical practices.
The next two chapters are about explanatory proofs. My second chapter examines the main theories of mathematical explanation. I argue that these theories are short-sighted as they only consider what appears in a proof without considering the proof’s purported audience or background knowledge necessary to understand the proof. In the third chapter, I propose an alternative way of analyzing explanatory proofs. Here, I suggest looking at a theorem’s history, which includes its successive proofs, as well as the mathematicians who wrote them. From this, we can better understand how and why mathematicians prove theorems in multiple ways, which depends on the purposes of these theorems.
The last chapter is a case study on the computer proof of the Four Color Theorem by Appel and Haken. Here, I compare and contrast what philosophers and mathematicians have had to say about the proof. I argue that the main philosophical worry regarding the theorem—its unsurveyability—did not make a strong impact on the mathematical community and would have hindered mathematical development in computer-assisted proofs. By studying the history of the theorem, we learn that Appel and Haken relied on the strategy of Kempe’s flawed proof from the 1800s (which, obviously, did not involve a computer). Two later proofs, also aided by computer, were developed using similar methods. None of these proofs are explanatory, but not because of their massive lengths. Rather, the methods used in these proofs are a series of calculations that exhaust all possible configurations of maps.
|
89 |
Explanation of the Fast Fourier Transform and Some Applications. Endo, Alan Kazuo, 01 May 1981 (has links)
This report describes the Fast Fourier Transform and some of its applications. It describes the continuous Fourier transform and some of its properties. Finally, it describes the Fast Fourier Transform and its applications to hurricane risk analysis, ocean wave analysis, and hydrology.
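For readers new to the algorithm, a minimal radix-2 Cooley-Tukey implementation illustrates the divide-and-conquer idea behind the FFT. The sketch is illustrative and is not drawn from the report itself.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two.

    Splits the signal into even- and odd-indexed halves, transforms each,
    and combines them with twiddle factors, giving O(n log n) work rather
    than the O(n^2) of the direct DFT sum.
    """
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# A constant signal transforms to a single DC spike of height n.
print(fft([1, 1, 1, 1]))   # [(4+0j), 0j, 0j, 0j]
```

Halving the problem at every level is the source of the speedup that makes large-scale applications such as those in the report feasible.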
|
90 |
Using Hypertext and Case-based Explanation to Help Learners Access Explanations to Unexpected Grammar Forms Encountered in Native Speech Examples. Packer, Kenneth B., 14 December 2012 (has links) (PDF)
Three hypertext implementation strategies were evaluated against one another and against a control group to determine which best supported the language learner. Each version was applied to four languages with diverse grammatical structures: Mandarin Chinese, Japanese, Portuguese, and Spanish. Language students were tested to determine how useful each strategy was in facilitating rapid and accurate explanation of grammatical structures embedded in native speech examples. Speed and accuracy were also measured as respondents applied a targeted grammar structure to the construction of their own unique sentences. Results were also analyzed across the four languages to judge whether the hypertext strategies were viable for each. The strategy variant that directed learners to a more detailed and specific explanation was more successful in assisting language learners than those offering generalized explanations. Moreover, the strategies seemed to provide the same relative benefit across the tested languages, suggesting they are portable and applicable even to languages not studied here. Variance in outcomes among the languages studied was also strongly correlated with the degree of difference in grammatical structure between a tested language and English, the learners' typical native language.
|