81

Implementation of a logic-based access control system with dynamic policy updates and temporal constraints

Crescini, Vino Fernando, University of Western Sydney, College of Health and Science, School of Computing and Mathematics, January 2006
As information systems evolve to cope with the ever-increasing demands of today's digital world, so does the need for more effective means of protecting information. In the early days of computing, information security started out as a branch of information technology. Over the years, several advances in information security have been made and, as a result, it is now considered a discipline in its own right. The most fundamental function of information security is to ensure that information flows to authorised entities and, at the same time, to prevent unauthorised entities from accessing the protected information. In a typical information system, an access control system provides this function. Advances in the field of information security have produced a number of access control models and implementations. However, as information technology evolves, the need for a better access control system increases. This dissertation proposes an effective, yet flexible access control system: the Policy Updater access control system. Policy Updater is a fully-implemented access control system that provides policy evaluations as well as dynamic policy updates. These functions are provided by the use of a logic-based language, L, to represent the underlying access control policies, constraints and policy update rules. The system performs authorisation query evaluations, as well as conditional and dynamic policy updates, by translating language L policies to normal logic programs in a form suitable for evaluation using the well-known stable model semantics. In this thesis, we show the underlying mechanisms that make up the Policy Updater system, including the theoretical foundations of its formal language, the system structure, a full discussion of implementation issues and a performance analysis. Lastly, the thesis proposes a non-trivial extension of the Policy Updater system that is capable of supporting temporal constraints. This is made possible by the integration of the well-established temporal interval algebra into the extended authorisation language LT, which can also be translated into a normal logic program for evaluation. The formalisation of this extension, together with the full implementation details, is included in this dissertation. / Doctor of Philosophy (PhD)
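To make the translation concrete, here is a minimal sketch of how an access control policy might look once rendered as a normal logic program for stable-model evaluation. The predicate names (holds, permit, deny) and the rules are our illustrative assumptions, not the actual syntax of Crescini's language L:

    % Hypothetical access-control policy as a normal logic program
    % (clingo-style syntax; predicate and rule names are illustrative).
    subject(alice).  subject(bob).
    action(read).    action(write).
    object(report).
    holds(alice, analyst).        % Alice holds the analyst role
    % Analysts may read the report.
    permit(S, read, report) :- subject(S), holds(S, analyst).
    % Default denial: deny any request that is not explicitly permitted.
    deny(S, A, O) :- subject(S), action(A), object(O),
                     not permit(S, A, O).

Under the stable model semantics this program has a single model in which permit(alice, read, report) holds and all other requests are denied; a dynamic policy update would add or retract rules, after which the stable model is recomputed.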
82

Maintaining Integrity Constraints in Semantic Web

Fang, Ming, 10 May 2013
As an expressive knowledge representation language for the Semantic Web, the Web Ontology Language (OWL) plays an important role in areas such as science and commerce. The problem of maintaining integrity constraints arises because OWL employs the Open World Assumption (OWA) as well as the Non-Unique Name Assumption (NUNA). These assumptions are typically suitable for representing knowledge distributed across the Web, where complete knowledge about a domain cannot be assumed, but they make it challenging to use OWL itself for closed-world integrity constraint validation. Integrity constraints (ICs) on ontologies have to be enforced; otherwise, conflicting results would be derivable from the same knowledge base (KB). Current approaches to incorporating ICs into OWL are based on its query language SPARQL, on alternative semantics, or on logic programming. These methods usually suffer from the limited types of constraints they can handle and/or from inherent computational expense. This dissertation presents a comprehensive and efficient approach to maintaining integrity constraints. The design enforces data consistency throughout the OWL life cycle, including the processes of OWL generation, maintenance, and interaction with other ontologies. For OWL generation, the Paraconsistent model is used to maintain integrity constraints during the relational-database-to-OWL translation process. Then a new rule-based language with set extension is introduced as a platform that allows users to specify constraints, along with a demonstration of 18 commonly used constraints written in this language. In addition, a new constraint maintenance system, called Jena2Drools, is proposed and implemented to show its effectiveness and efficiency. To further handle inconsistencies among multiple distributed ontologies, this work constructs a framework that breaks global constraints down into several sub-constraints for efficient parallel validation.
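As a rough illustration of why negation as failure supplies the closed-world reading that OWA-based reasoning lacks, the sketch below checks a simple constraint ("every person must have a known ssn") as a logic program. The predicates are hypothetical, and this is not the syntax of Jena2Drools or of the rule-based language described above:

    % Closed-world integrity check via negation as failure
    % (illustrative predicates, not Jena2Drools syntax).
    person(alice).  person(bob).
    ssn(alice, "123-45-6789").
    has_ssn(P) :- ssn(P, _).
    % A person with no asserted ssn violates the constraint.
    violation(P) :- person(P), not has_ssn(P).

Here violation(bob) is derived because no ssn fact for bob is asserted; under the open-world assumption alone, the missing assertion would merely be unknown rather than a violation.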
83

ModuleInducer: Automating the Extraction of Knowledge from Biological Sequences

Korol, Oksana, 14 October 2011
In the past decade, fast advancements have been made in the sequencing, digitalization and collection of biological data. However, the bottleneck remains at the point of analysing the data and extracting patterns from it. We have developed a method aimed at widening this bottleneck by automating knowledge extraction from biological data. Our approach discovers patterns in a set of DNA sequences based on the location of transcription factor binding sites or other biological markers, with an emphasis on discovering relationships. A variety of statistical and computational methods exist to analyze such data. However, they either require an initial hypothesis, which is later tested, or classify the data based on its attributes. Our approach does not require an initial hypothesis, and the classification it produces is based on the relationships between attributes. The value of such an approach is that it is able to uncover new knowledge about the data by inducing a general theory based on basic known rules. The core of our approach lies in an inductive logic programming engine, which, based on positive and negative examples as well as background knowledge, is able to induce a descriptive, human-readable theory describing the data. An application provides end-to-end analysis of DNA sequences. A simple-to-use Web interface accepts a set of related sequences to be analyzed, an optional set of negative example sequences to contrast the main set, and a set of possible genetic markers as position-specific scoring matrices. A Java-based backend formats the sequences, determines the location of the genetic markers inside them, and passes the information to the ILP engine, which induces the theory. The model assumed in our background knowledge is a set of basic interactions between biological markers in any DNA sequence. This makes our approach applicable to a wide variety of biological problems, including the detection of cis-regulatory modules and the analysis of ChIP-Sequencing experiments. We have evaluated our method in the context of such applications on two real-world datasets, as well as on a number of specially designed synthetic datasets. The approach has been shown to have merit even in situations where no significant classification could be determined.
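The flavour of the ILP setting can be sketched as follows: background facts locate marker hits in each sequence, positive and negative examples label the sequences, and the engine searches for a rule that covers the positives but not the negatives. All predicate names and the induced rule below are hypothetical, not ModuleInducer's actual schema:

    % Hypothetical ILP input and induced theory (illustrative only).
    % Background knowledge: marker hits and their positions.
    site(seq1, gata1, 120).   site(seq1, tal1, 155).
    site(seq2, gata1, 300).   site(seq2, tal1, 328).
    site(seq3, gata1, 80).    % gata1 alone, no nearby tal1 site
    positive(seq1).  positive(seq2).   % sequences with the module
    negative(seq3).                    % contrast sequence
    % A theory the engine might induce: a gata1 site followed
    % closely (within 50 bases) by a tal1 site.
    module(S) :- site(S, gata1, P1), site(S, tal1, P2),
                 P2 > P1, P2 - P1 < 50.

The induced rule covers seq1 and seq2 but not seq3, which is the kind of human-readable theory the abstract describes.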
84

Inconsistency and Incompleteness in Relational Databases and Logic Programs

Viswanath, Navin, 08 July 2009
The aim of this thesis is to study the role played by negation in databases and to develop data models that can handle inconsistent and incomplete information. We develop models that also allow incompleteness through disjunctive information, under both the Closed World Assumption (CWA) and the Open World Assumption (OWA), in relational databases. In the area of logic programming, extended logic programs allow the explicit representation of negative information. As a result, a number of extended logic programs have an inconsistent semantics. We present a translation of extended logic programs to normal logic programs that is more tolerant of inconsistencies. Extended logic programs have also been used widely to compute the repairs of an inconsistent database. We present some preliminary ideas on how source information can be incorporated into the repair program in order to produce a subset of the set of all repairs based on a preference for certain sources over others.
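One standard way to reduce an extended logic program to a normal one is to replace each classically negated atom with a fresh atom; the sketch below shows this textbook translation on the usual birds example. The thesis's inconsistency-tolerant translation differs in its details, so treat this only as background:

    % Extended rules (classical negation written as -):
    %   flies(X)  <- bird(X), not -flies(X).
    %   -flies(X) <- penguin(X).
    % Normal-program translation: encode -flies as the fresh atom
    % neg_flies.
    bird(tweety).  bird(opus).  penguin(opus).
    flies(X)     :- bird(X), not neg_flies(X).
    neg_flies(X) :- penguin(X).
    % A strict translation would also reject contradictory models:
    % :- flies(X), neg_flies(X).

In the unique stable model, tweety flies and opus does not; an inconsistency-tolerant variant relaxes the final constraint so that a contradiction in one part of a program does not trivialize the whole semantics.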
86

Integrating top-down and bottom-up approaches in inductive logic programming: applications in natural language processing and relational data mining

Tang, Lap Poon Rupert, 28 August 2008
Not available / text
87

Automated reasoning about actions

Lee, Joohyung, 28 August 2008
Not available / text
88

The complexity of constraint satisfaction problems and symmetric Datalog

Egri, László, January 2007
Constraint satisfaction problems (CSPs) provide a unified framework for studying a wide variety of computational problems naturally arising in combinatorics, artificial intelligence and database theory. To any finite domain D and any constraint language Γ (a finite set of relations over D), we associate the constraint satisfaction problem CSP(Γ): an instance of CSP(Γ) consists of a list of variables x1, x2, ..., xn and a list of constraints of the form "(x7, x2, ..., x5) ∈ R" for some relation R in Γ. The goal is to determine whether the variables can be assigned values in D such that all constraints are simultaneously satisfied. The computational complexity of CSP(Γ) is entirely determined by the structure of the constraint language Γ and, thus, one wishes to identify classes of Γ such that CSP(Γ) belongs to a particular complexity class. / In recent years, logical and algebraic perspectives have been particularly successful in classifying CSPs. A major weapon in the arsenal of the logical perspective is the database-theory-inspired logic programming language called Datalog. A Datalog program can be used to solve a restricted class of CSPs by either accepting or rejecting a (suitably encoded) set of input constraints. Inspired by Dalmau's work on linear Datalog and Reingold's breakthrough that undirected graph connectivity is in logarithmic space, we use a new restriction of Datalog called symmetric Datalog to identify a class of CSPs solvable in logarithmic space. We establish that expressibility in symmetric Datalog is equivalent to expressibility in a specific restriction of second-order logic called Symmetric Restricted Krom Monotone SNP, which has already received attention for its close relationship with logarithmic space. / We also give a combinatorial description of a large class of CSPs lying in L by showing that they are definable in symmetric Datalog. The main result of this thesis is that directed st-connectivity and a closely related CSP cannot be defined in symmetric Datalog. Because undirected st-connectivity can be defined in symmetric Datalog, this result also sheds new light on the computational differences between the undirected and directed st-connectivity problems.
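The restriction is easiest to see in the canonical example: undirected st-connectivity is definable in symmetric Datalog because each linear recursive rule is paired with its symmetric counterpart (the head and body IDB atoms exchanged), as in this sketch:

    % Undirected st-connectivity in symmetric Datalog.
    reach(X)  :- source(X).
    reach(Y)  :- reach(X), edge(X, Y).
    reach(X)  :- reach(Y), edge(X, Y).   % symmetric counterpart
    connected :- reach(X), target(X).
    % Example instance:
    edge(a, b).  edge(b, c).  source(a).  target(c).

No such symmetric pairing exists for directed st-connectivity; the thesis's main result makes this precise by showing that the directed problem is not definable in symmetric Datalog.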
89

REVISION PROGRAMMING: A KNOWLEDGE REPRESENTATION FORMALISM

Pivkina, Inna Valentinovna, 01 January 2001
The topic of the dissertation is revision programming. It is a knowledge representation formalism for describing constraints on databases, knowledge bases, and belief sets, and for providing a computational mechanism to enforce them. Constraints are represented by sets of revision rules. Revision rules can be quite complex and usually take the form of conditions (for instance: if these elements are present and those elements are absent, then this element must be absent). In addition to being a logical constraint, a revision rule specifies a preferred way to satisfy the constraint. The justified revisions semantics assigns to any database a set (possibly empty) of revisions. Each revision satisfies the constraints, and all deletions and additions of elements in the transition from the initial database to the revision are derived from revision rules. Revision programming and logic programming are closely related. We established an elegant embedding of revision programs into logic programs which does not increase the size of a program. The initial database is used in the transformation of a revision program into the corresponding logic program, but it is not represented in the logic program. The connection naturally led to extensions of the revision programming formalism which correspond to existing extensions of logic programming. More specifically, disjunctive and nested versions of revision programming were introduced. We also studied annotated revision programs, which allow annotations such as confidence factors, multiple experts, etc. Annotations were assumed to be elements of a complete infinitely distributive lattice. We proposed a justified revisions semantics for annotated revision programs which agrees with intuitions. Next, we introduced a definition of the well-founded semantics for revision programming. It assigns to a revision problem a single "intended" model which is computable in polynomial time. Finally, we extended the syntax of revision problems by allowing variables and implemented translators of revision programs into logic programs, as well as a grounder for revision programs. The implementation allows us to compute justified revisions using existing implementations of the stable model semantics for logic programs.
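For a sense of the notation, here is a tiny revision program and its justified revision, written in the usual in/out rule style; the example is ours, chosen purely for illustration:

    % Revision rules: if b is present and c is absent, a must be
    % present; if a is present, d must be absent.
    in(a)  <- in(b), out(c).
    out(d) <- in(a).
    % For the initial database {b, d}, the unique justified revision
    % is {a, b}: the first rule adds a (b is in, c is out), the
    % second rule then removes d, and b persists by inertia.

Both the addition of a and the deletion of d are derived from revision rules, as the semantics requires; a candidate such as {a, b, d} is rejected because keeping d would contradict the second rule.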
90

Computing stable models of logic programs

Singhi, Soumya, 01 January 2003
The solution of any search problem lies in its search space. A search is a systematic examination of candidate solutions of a search problem. In this thesis, we present a search heuristic that we call cr-smodels. cr-smodels prunes the search space to reach the solution of a problem quickly. The idea is to pick an atom for branching that lowers the growth rate of the linear recurrence and thus minimizes the remaining search space. Our goal in developing cr-smodels is a search heuristic that is efficient on a wide range of problems. We test cr-smodels on a wide range of randomly generated benchmarks. We observed that randomly generated graphs with no Hamiltonian cycle were often trivial to solve. Since Hamiltonian cycle is an important benchmark problem, our other goal is to develop techniques that generate hard instances of graphs with no Hamiltonian cycle.
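Because Hamiltonian cycle is the benchmark family discussed above, the standard ASP encoding below shows the kind of program whose stable models a solver such as smodels searches for (clingo-style syntax; the three-node instance is a toy example of ours):

    % Hamiltonian cycle: guess a subset of edges that forms a single
    % cycle through all nodes (standard encoding, toy instance).
    { hc(X, Y) } :- edge(X, Y).            % guess a subset of edges
    :- hc(X, Y), hc(X, Z), Y != Z.         % at most one outgoing edge
    :- hc(X, Y), hc(Z, Y), X != Z.         % at most one incoming edge
    reached(Y) :- hc(a, Y).                % walk from the fixed node a
    reached(Y) :- reached(X), hc(X, Y).
    :- node(X), not reached(X).            % every node must be reached
    node(a). node(b). node(c).
    edge(a, b). edge(b, c). edge(c, a).

Each stable model corresponds to a Hamiltonian cycle, and a branching heuristic like the one described decides which hc atom the solver splits on first; graphs with no Hamiltonian cycle yield programs with no stable model, which is why hard unsatisfiable instances matter as benchmarks.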
