
On the Logic of Theory Change: Extending the AGM Model

Fermé, Eduardo January 2011
This thesis consists of six articles and a comprehensive summary.
• The purpose of the summary is to introduce the AGM theory of belief change and to exemplify the diversity and significance of the research that the AGM article has inspired over the last 25 years. The research areas associated with AGM are divided into three parts: criticisms, where we discuss some of the more common criticisms of AGM; extensions, where the most common extensions and variations of AGM are presented; and applications, where we provide an overview of applications and connections with other areas of research.
• Article I elaborates on the connection between partial meet contractions [AGM85] and kernel contractions [Han94a] in belief change theory. Although the two kinds of functions are equivalent for belief sets, they are not equivalent for belief bases. A way to define incision functions (used in kernel contractions) from selection functions (used in partial meet contractions), and vice versa, is presented. It is explained under which conditions there are exact correspondences between selection and incision functions, so that the same contraction operations can be obtained by using either of them.
• Article II proposes an axiomatic characterization for ensconcement-based contraction functions, the belief base functions proposed by Williams, and relates them to other kinds of base contraction functions.
• Article III adapts the Fermé and Hansson model of Shielded Contraction [FH01], as well as Hansson et al.'s Credibility-Limited Revision [HFCF01], to belief bases, joining two of the many variations of the AGM model [AGM85]: those in which knowledge is represented through belief bases instead of logical theories, and those in which the object of the epistemic change does not get priority over the existing information, as is the case in the AGM model.
• Article IV introduces revision by comparison, a refined method for changing beliefs by specifying constraints on the relative plausibility of propositions. Like the earlier belief revision models, the proposed method is qualitative, in the sense that no numbers are needed to specify the posterior plausibility of the new information. The method uses reference beliefs to determine the degree of entrenchment of the newly accepted piece of information. Two kinds of semantics for this idea are proposed and a logical characterization of the new model is given.
• Article V focuses on the extension of AGM that allows a belief base to be changed by a set of sentences instead of a single sentence. In [FH94], Fuhrmann and Hansson presented an axiomatic characterization of Multiple Contraction and a construction based on AGM Partial Meet Contraction. This essay proposes another way to construct functions for their model: Multiple Kernel Contraction, a modification of Kernel Contraction, proposed by Hansson [Han94a] to construct classical AGM contractions and belief base contractions.
• Article VI relates the AGM model to the DFT model proposed by Carlos Alchourrón [Alc93]. Alchourrón devoted his last years to the analysis of the notion of defeasible conditionalization. His definition of the defeasible conditional is given in terms of a strict implication operator and a modal operator f, which is interpreted as a revision function at the language level. This essay points out that this underlying revision function is more general than AGM revision. In addition, a complete characterization of that more general kind of revision is given, one that permits the unification of revision models given by other authors. / QC 20110211
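The pairing of selection and incision functions discussed in Article I can be made concrete with a small propositional example. The following is a minimal sketch, assuming a toy belief base {p, p->q, q} contracted by q; the base, the formula encoding and the particular incision and selection below are illustrative choices, not the article's constructions.

```python
# Kernel contraction (incision function) vs. partial meet contraction
# (selection function) on a toy belief base; illustrative only.
from itertools import chain, combinations, product

ATOMS = ("p", "q")

# Formulas as (label, truth function over a valuation dict).
P       = ("p",    lambda v: v["p"])
P_IMP_Q = ("p->q", lambda v: (not v["p"]) or v["q"])
Q       = ("q",    lambda v: v["q"])

def entails(premises, goal):
    """Classical entailment, checked by brute-force truth tables."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(f(v) for _, f in premises) and not goal[1](v):
            return False
    return True

def subsets(base):
    return chain.from_iterable(combinations(base, r) for r in range(len(base) + 1))

def kernels(base, goal):
    """Minimal subsets of the base that entail the goal."""
    ks = [set(s) for s in subsets(base) if entails(s, goal)]
    return [k for k in ks if not any(other < k for other in ks)]

def remainders(base, goal):
    """Maximal subsets of the base that do not entail the goal."""
    rs = [set(s) for s in subsets(base) if not entails(s, goal)]
    return [r for r in rs if not any(other > r for other in rs)]

base = {P, P_IMP_Q, Q}

# Kernel contraction by q: the incision must remove at least one
# element from every kernel; {p, q} hits both kernels {q} and {p, p->q}.
kernel_result = base - {P, Q}

# Partial meet contraction by q: the selection keeps some remainders;
# here it keeps those containing p->q, and we intersect them.
selected = [r for r in remainders(base, Q) if P_IMP_Q in r]
partial_meet_result = set.intersection(*selected)

# Both constructions yield the same contracted base here: {p->q}.
print(sorted(l for l, _ in kernel_result))
print(sorted(l for l, _ in partial_meet_result))
```

Choosing the incision and the selection to agree, as done here by hand, is exactly the correspondence Article I characterizes in general.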

Analysis of multilateral software confidentiality requirements

Onabajo, Adeniyi 31 August 2009
Ensuring the privacy and confidentiality concerns of data owners is an important aspect of a secure information system. This is particularly important for integrated systems, which allow data exchange across organizations. Governments, regulatory bodies and organizations provide legislation, regulations and guidelines for information privacy and security to ensure proper data handling. These are usually specified in natural language, contain default requirements and exceptions, and are often ambiguous. In addition, interacting concerns, which are often multilayered and come from different stakeholders, e.g., jurisdictions, need to be considered in software development. As with other security concerns, analysis of confidentiality concerns should be integrated into the early phases of software development in order to facilitate early identification of defects, such as incompleteness and inconsistencies, in the requirements. This dissertation presents research conducted to develop a method for detecting these defects using goal models that support defaults and exceptions. The goal models are derived from annotations of the natural language sources. A prototype tool was also developed to support the method. The evaluations conducted indicate that the method and tool provide benefits, including distinguishing requirement interferences and conflicts, exception handling, and navigation between annotated documents and the goal models. Although the current limitations of the method include a manual, user-driven annotation step, the method provides features that assist in the early analysis of confidentiality requirements from natural language sources.
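To make the idea of defaults, exceptions and cross-stakeholder conflicts concrete, here is a minimal sketch of conflict detection over confidentiality rules. The rule format, the rule names and the example regulations are invented for exposition; this is not the dissertation's goal-model notation or tool.

```python
# Illustrative conflict detection over confidentiality rules with
# defaults and exceptions; all names below are hypothetical.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Rule:
    source: str                           # e.g. a regulation or stakeholder
    data: str                             # the data item the rule governs
    allow: bool                           # default verdict: may it be disclosed?
    exceptions: frozenset = frozenset()   # contexts where the default flips

def verdict(rule, context):
    """Apply the default, flipping it when an exception context applies."""
    return (not rule.allow) if context in rule.exceptions else rule.allow

def conflicts(rules, contexts):
    """Pairs of rules that disagree on the same data item in some context."""
    found = []
    for i, a in enumerate(rules):
        for b in rules[i + 1:]:
            if a.data != b.data:
                continue
            for c in contexts:
                if verdict(a, c) != verdict(b, c):
                    found.append((a.source, b.source, a.data, c))
    return found

rules = [
    Rule("privacy-act", "health-record", allow=False,
         exceptions=frozenset({"emergency"})),
    Rule("hospital-policy", "health-record", allow=False),
]
print(conflicts(rules, contexts=["routine", "emergency"]))
# -> the two sources disagree on health-record disclosure in an emergency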

Reasoning about Cyber Threat Actors

January 2018
Reasoning about the activities of cyber threat actors is critical to defending against cyber attacks. However, this task is difficult for a variety of reasons. In simple terms, it is difficult to determine who the attacker is, what the attacker's goals are, and how they will carry out their attacks. These three questions essentially entail understanding the attacker's use of deception, the capabilities available, and the intent of launching the attack. These three issues are highly inter-related. If an adversary can hide their intent, they can better deceive a defender. If an adversary's capabilities are not well understood, then determining their goals becomes difficult, as the defender is uncertain whether the adversary has the necessary tools to accomplish them. However, understanding of these aspects is also mutually supportive: if we have a clear picture of capabilities, intent can be better deciphered; if we understand intent and capabilities, a defender may be able to see through deception schemes. In this dissertation, I present three pieces of work that tackle these questions to obtain a better understanding of cyber threats. First, we introduce a new reasoning framework to address deception. We evaluate the framework by building a dataset from a DEFCON capture-the-flag exercise to identify the person or group responsible for a cyber attack, and demonstrate that the framework not only handles cases of deception but also provides transparent decision making in identifying the threat actor. The second task uses a cognitive learning model to determine the intent, that is, the goals of the threat actor on the target system. The third task looks at understanding the capabilities of threat actors by identifying at-risk systems from hacker discussions on darkweb websites; to achieve this, we gather discussions relating to malicious hacking from more than 300 darkweb websites. / Dissertation/Thesis / Doctoral Dissertation Computer Engineering 2018
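To make the attribution problem concrete, here is a deliberately naive sketch of indicator-based scoring with a deception discount. It is purely illustrative: the dissertation's reasoning framework is not reproduced here, and every actor profile, indicator and weight below is invented.

```python
# Toy attribution scoring over observed indicators; not the
# dissertation's framework, and all profiles are hypothetical.
from collections import Counter

# Known actor profiles: indicator -> how often the actor exhibits it.
PROFILES = {
    "actor-A": Counter({"exploit-x": 5, "packer-y": 2}),
    "actor-B": Counter({"exploit-x": 1, "c2-domain-z": 4}),
}

def score(observed, discount):
    """Rank actors by indicator overlap; `discount` down-weights
    easily spoofed indicators to reflect possible deception."""
    results = {}
    for actor, profile in PROFILES.items():
        results[actor] = sum(profile[i] * discount.get(i, 1.0) for i in observed)
    return sorted(results.items(), key=lambda kv: -kv[1])

observed = ["exploit-x", "c2-domain-z"]
# Treat the reused public exploit as weak evidence (easy to plant),
# so the harder-to-spoof C2 infrastructure dominates the ranking.
print(score(observed, discount={"exploit-x": 0.2}))
```

The point of the discount parameter is the deception issue the abstract raises: an indicator an adversary can cheaply plant should move the attribution verdict far less than one that is costly to fake.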

The theory and pedagogy of semantic inconsistency in critical reasoning

Dixon, Scott Walton 05 1900
One aspect of critical reasoning is the analysis and appraisal of claims and arguments. A typical problem when analysing and appraising arguments is inconsistent statements. Although some inconsistencies may have deleterious effects on rationality and action, not all of them do. As educators, we also have an obligation to teach this evaluation in a way that does justice to our normal reasoning practices and judgements of inconsistency. Thus, there is a need to distinguish the acceptable inconsistencies from those that are not, and to impart that information to students. We might ask: what is the best concept of inconsistency for critical reasoning and pedagogy? While the answer might appear obvious to some, the history of philosophy shows that there are many concepts of “inconsistency”, the most common of which comes from classical logic and its reliance on opposing truth-values. The current exemplar of this is the standard truth-functional account from propositional logic. This conception is first shown to be problematic practically, conceptually and pedagogically. Especially challenging from the classical perspective are the principles of ex contradictione quodlibet and ex falso quodlibet, which may poison the well against any notion of inconsistency, something that should not be done unreflectively. Ultimately, the classical account of inconsistency is rejected. In its place, a semantic conception of inconsistency is argued for and shown to handle natural reasoning cases effectively. This novel conception uses conceptual antonym theory to explain semantic contrast and gradation, even in the absence of non-canonical antonym pairs. The semantic conception of inconsistency also fits with an interrogative argument model that exploits inconsistency to display semantic contrast in reasons and conclusions. A method for determining substantive inconsistencies follows from this argument model in a straightforward manner. This conceptual fit is then incorporated into the pedagogy of critical reasoning, resulting in a natural approach to reasoning that students can apply to practical matters of everyday life, including inconsistency. Thus, the best conception of inconsistency for critical reasoning and its pedagogy is the semantic, not the classical. / Philosophy Practical and Systematic Theology / D. Phil
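The explosion worry the abstract raises can be made concrete with the standard derivation usually attributed to C. I. Lewis; the lines below are textbook material, not the thesis's own formalism.

```latex
% Ex contradictione quodlibet: from an inconsistency, an arbitrary
% conclusion q follows classically (the Lewis argument).
\begin{align*}
1.\quad & p \land \lnot p  && \text{premise (the inconsistent pair)} \\
2.\quad & p                && \text{from 1, } \land\text{-elimination} \\
3.\quad & \lnot p          && \text{from 1, } \land\text{-elimination} \\
4.\quad & p \lor q         && \text{from 2, } \lor\text{-introduction} \\
5.\quad & q                && \text{from 3 and 4, disjunctive syllogism}
\end{align*}
```

Logics that tolerate inconsistency standardly block the disjunctive-syllogism step, which is what makes the classical account contestable as a model of ordinary reasoning about inconsistent claims.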

A Lightweight Defeasible Description Logic in Depth: Quantification in Rational Reasoning and Beyond

Pensel, Maximilian 02 December 2019
Description Logics (DLs) are increasingly successful knowledge representation formalisms, useful for any application requiring implicit derivation of knowledge from explicitly known facts. A prominent example domain that has benefited from these formalisms since the 1990s is the biomedical field, which contributes a vast number of facts and relations between low- and high-level concepts, such as the constitution of cells or the interactions between studied illnesses, their symptoms and their remedies. DLs are well suited to handling large formal knowledge repositories and computing inferable coherences throughout such data, relying on their well-founded first-order semantics. In particular, DLs of reduced expressivity have proven tremendously valuable for handling large ontologies due to their computational tractability. In spite of these assets and their prevailing influence, classical DLs are not well suited to adequately modelling some of the most intuitive forms of reasoning. The capability for abductive reasoning is imperative for any field subject to incomplete knowledge and the motivation to complete it with typical expectations. When such default expectations receive contradicting evidence, an abductive formalism is able to retract previously drawn, conflicting conclusions. Common examples include human reasoning and the default characterisation of properties in biology, such as the normal arrangement of organs in the human body. Treatment of such defeasible knowledge must be aware of exceptional cases, such as a human suffering from the congenital condition situs inversus, and must therefore accommodate the ability to retract defeasible conclusions in a non-monotonic fashion. Specifically tailored non-monotonic semantics have been continuously investigated for DLs over the past 30 years. A particularly promising approach is rooted in the research by Kraus, Lehmann and Magidor on preferential (propositional) logics and Rational Closure (RC). The biggest advantages of RC are its good behaviour in terms of formal inference postulates and the efficient computation of defeasible entailments, which relies on a tractable reduction to classical reasoning in the underlying formalism. A major contribution of this work is a reorganisation of the core of this reasoning method into an abstract framework formalisation. This framework is easily instantiated to provide the reduction method for RC in DLs, as well as more advanced closure operators such as Relevant or Lexicographic Closure. In spite of their practical aptitude, we discovered that all reduction approaches fail to provide any defeasible conclusions for elements that occur only in the relational neighbourhood of the inspected elements. More explicitly, a distinguishing advantage of DLs over propositional logic is the capability to model binary relations and to describe aspects of a related concept in terms of existential and universal quantification. Previous approaches to RC (and more advanced closures) are not able to derive typical behaviour for the concepts that occur within such quantification. The main contribution of this work is to introduce stronger semantics for the lightweight DL EL_bot that are capable of inferring the expected entailments while maintaining a close relation to the reduction method. We achieve this by introducing a new kind of first-order interpretation that allocates defeasible information directly on its elements. This allows us to compare the typicality of such interpretations in terms of the defeasible information satisfied at elements in the relational neighbourhood. A typicality preference relation then provides the means to single out the sets of models with maximal typicality. Based on this notion, we introduce two types of nested rational semantics, a sceptical and a selective variant, each capable of deriving the missing entailments under RC for arbitrarily nested quantified concepts. As a proof of the versatility of our new semantics, we also show that the stronger Relevant Closure can be imbued with typical information in the successors of binary relations. An extensive investigation into the computational complexity of our new semantics shows that the sceptical nested variant comes at considerable additional effort, while the selective variant stays within the complexity of classical reasoning in the underlying DL, which remains tractable in our case.
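The "tractable reduction to classical reasoning" behind RC can be illustrated in miniature. Below is a minimal propositional sketch of the Lehmann-Magidor ranking procedure, assuming a toy bird/penguin default base; the formula encoding and helper names are invented for exposition and are not the thesis's DL constructions.

```python
# Rational Closure ranking by repeated classical entailment checks;
# a propositional toy, not the thesis's EL_bot machinery.
from itertools import product

ATOMS = ("bird", "penguin", "flies")

def holds(formula, v):
    """Evaluate a formula given as a nested tuple over the valuation v."""
    op = formula[0]
    if op == "atom": return v[formula[1]]
    if op == "not":  return not holds(formula[1], v)
    if op == "imp":  return (not holds(formula[1], v)) or holds(formula[2], v)
    raise ValueError(op)

def entails(premises, goal):
    """Classical entailment via brute-force truth tables."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(holds(p, v) for p in premises) and not holds(goal, v):
            return False
    return True

def materialize(antecedent, consequent):
    return ("imp", antecedent, consequent)

def rc_ranks(defaults):
    """Partition defaults into ranks: rank i collects the defaults that
    remain exceptional for i rounds (Lehmann & Magidor's ranking)."""
    ranks, remaining = [], list(defaults)
    while remaining:
        mats = [materialize(a, c) for a, c in remaining]
        normal = [(a, c) for a, c in remaining
                  if not entails(mats, ("not", a))]
        if not normal:           # everything still exceptional: leftover rank
            ranks.append(remaining)
            break
        ranks.append(normal)
        remaining = [d for d in remaining if d not in normal]
    return ranks

bird, peng, fly = ("atom", "bird"), ("atom", "penguin"), ("atom", "flies")
defaults = [
    (bird, fly),              # birds typically fly
    (peng, bird),             # penguins are typically birds
    (peng, ("not", fly)),     # penguins typically do not fly
]
for i, rank in enumerate(rc_ranks(defaults)):
    print(i, rank)            # rank 0: the flying-bird default; rank 1: penguins
```

The weakness the abstract identifies shows up exactly here: because the reduction only ever reasons about the materialized antecedents, an element reached through a quantifier (say, the successor of some relation) never receives any of this typicality information, which is what the nested rational semantics repair.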
