41

Application of local semantic analysis in fault prediction and detection

Shao, Danhua 06 October 2010 (has links)
To improve the quality of software systems, change-based fault prediction and scope-bounded checking have been used to predict or detect faults during software development. In fault prediction, changes to program source code, such as added or deleted lines, are used to predict potential faults. In fault detection, scope-bounded checking of programs is an effective technique for finding subtle faults. The central idea is to check all program executions up to a given bound. The technique takes two basic forms: scope-bounded static checking, where all bounded executions of a program are transformed into a formula that represents the violation of a correctness property and any solution to the formula represents a counterexample; and scope-bounded testing, where a program is tested against all (small) inputs up to a given bound on the input size. Although the accuracy of change-based fault prediction and scope-bounded checking has been evaluated experimentally, both have limitations in effectiveness and efficiency. Previous change-based fault prediction approaches consider only the code modified by a change and ignore the code impacted by it. Scope-bounded testing considers only the correctness specifications and ignores the internal structure of the program. Although scope-bounded static checking does consider the internal structure of programs, formulae translated from structurally complex programs can overwhelm the backend analyzer and fail to yield a result within a reasonable time. To improve the effectiveness and efficiency of these approaches, we introduce local semantic analysis into change-based fault prediction and scope-bounded checking. We use data-flow analysis to uncover internal dependencies within a program. Based on these dependencies, we identify the code segments impacted by a change and apply fault prediction metrics to the impacted code.
Empirical studies with real data showed that semantic analysis is effective and efficient in predicting faults in large changes and short-interval changes. When generating inputs for scope-bounded testing, we use control-flow analysis to guide test generation so that code coverage can be achieved with a minimal number of tests. To increase the scalability of scope-bounded checking, we split a bounded program into smaller sub-programs according to data-flow and control-flow analysis. The problem of scope-bounded checking for the given program thus reduces to several sub-problems, each of which requires the constraint solver to check a less complex formula, thereby likely reducing the solver's overall workload. Experimental results show that our approach provides significant speed-ups over the traditional approach.
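The scope-bounded testing form described above can be sketched in a few lines; the function under test, the correctness property, and the bound here are hypothetical stand-ins for illustration, not the thesis's tools:

```python
from itertools import product

def absolute_difference(a, b):
    # Hypothetical implementation under test; buggy for a < b,
    # where it returns a negative number instead of |a - b|.
    return a - b

def scope_bounded_test(fn, bound):
    """Test fn against all integer input pairs in [-bound, bound].

    Returns the first input pair violating the correctness property
    (a non-negative result), or None if every bounded input passes.
    """
    for a, b in product(range(-bound, bound + 1), repeat=2):
        if fn(a, b) < 0:
            return (a, b)
    return None

counterexample = scope_bounded_test(absolute_difference, 2)  # a small scope suffices
```

The "small scope" intuition is that most faults, like this one, already show up on tiny inputs, so exhaustively checking a small bound is a surprisingly effective detector.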
42

Sobre o conceito semântico de satisfação / On the semantic concept of satisfaction

Alves, Carlos Roberto Teixeira 14 December 2015 (has links)
This work presents the current treatment of the semantic notion of satisfaction for first-order logic, discusses the main problem with Tarski's definition of this notion, namely the use of infinite sequences to satisfy formulas, and proposes an alternative that circumvents this problem. The notion established by Tarski became the standard solution in discussions of the subject and gave rise to rich tools for working with languages, in particular Model Theory. From a philosophical point of view, however, it is important to broaden perspectives and look at the problem from a new angle. Our proposal is to avoid the counterintuitive idea of using infinite sequences of objects to satisfy finite formulas, given that these infinite sequences are composed almost entirely of 'superfluous terms' that are expendable in the satisfaction process yet must be, and are, enumerated and indexed in it. It would be desirable to settle the question using sequences without such 'superfluous terms'. We propose a first-order language structure that dispenses with variables and constants. The notion of satisfaction in this case is distinct, which widens the possibilities and provides an alternative to satisfaction by infinite sequences.
Finally, we show how our solution can produce the satisfaction of formulas of a first-order language within a framework where satisfaction is interpreted according to certain specific criteria and can be carried out by finite sequences, differing essentially from Tarski's solution.
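Tarski's satisfaction clause for the existential quantifier, in its standard textbook form (not specific to this thesis), makes the role of the sequences explicit: a sequence satisfies an existentially quantified formula iff some sequence differing from it in at most one position satisfies the body, so only the finitely many positions indexed by free variables ever matter:

```latex
s \models \exists x_i\, \varphi
\quad\Longleftrightarrow\quad
\exists s' \,\bigl(\, s' \models \varphi \ \text{ and } \ s'_j = s_j \ \text{ for all } j \neq i \,\bigr)
```

Every position $j \neq i$ not occupied by a free variable of $\varphi$ is one of the 'superfluous terms' the thesis objects to.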
43

Deep Learning Black Box Problem

Hussain, Jabbar January 2019 (has links)
The use of neural networks in deep learning is growing rapidly due to their ability to outperform other machine learning algorithms on many kinds of problems. One big disadvantage of deep neural networks, however, is that the internal logic by which they reach a desired output is opaque and unexplainable. This behavior of deep neural networks is known as the "black box" problem. This leads to the following question: how prevalent is the black box problem in the research literature during a specific period of time? Black box problems are usually addressed by so-called rule extraction, so the second research question is: what rule extraction methods have been proposed to solve such problems? To answer the research questions, a systematic literature review was conducted to collect data on the topics of the black box and rule extraction. Printed and online articles published in high-ranking journals and conference proceedings were selected to investigate and answer the research questions. The unit of analysis was a set of journal and conference articles related to these topics. The results show that interest in black box problems has gradually increased over time, mainly because of new technological developments. The thesis also provides an overview of the different methodological approaches used in rule extraction methods.
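The simplest, "pedagogical" (input-output) style of rule extraction that such surveys cover can be illustrated with a toy sketch; the scoring function below is a hypothetical stand-in for a trained network's decision function, and real extraction methods are far more sophisticated:

```python
from itertools import product

def black_box(x1, x2, x3):
    # Hypothetical stand-in for a trained network's decision function.
    score = 2.0 * x1 + 1.5 * x2 - 3.0 * x3
    return score > 1.0

def extract_rules(model, n_inputs):
    """Enumerate all boolean inputs and keep those the model accepts.

    Each accepted input vector is read off as a conjunctive IF-THEN
    rule: the crudest input-output style of rule extraction, feasible
    only because the input space here is tiny.
    """
    rules = []
    for bits in product([0, 1], repeat=n_inputs):
        if model(*bits):
            conjuncts = [f"x{i + 1}={b}" for i, b in enumerate(bits)]
            rules.append(" AND ".join(conjuncts) + " => positive")
    return rules

rules = extract_rules(black_box, 3)
```

Decompositional methods instead inspect the network's weights directly, which scales better but ties the extracted rules to one architecture.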
44

Un modèle de données pour bibliothèques numériques / A data model for digital libraries

Yang, Jitao 30 May 2012 (has links)
Digital libraries are complex information systems, storing digital resources (e.g., text, images, sound, audio) as well as knowledge about digital or non-digital resources; this knowledge is referred to as metadata. We propose a data model for digital libraries supporting resource identification, use of metadata, and re-use of stored resources, as well as a query language supporting discovery of resources. The model we propose is inspired by the architecture of the Web, which forms a solid, universally accepted basis for the notions and services expected from a digital library. We formalize our model as a first-order theory, in order to be able to express the basic concepts of digital libraries without being constrained by any technical considerations. The axioms of the theory give the formal semantics of the notions of the model and, at the same time, provide a definition of the knowledge that is implicit in a digital library. The theory is then translated into a Datalog program that, given a digital library, makes it possible to efficiently complete the library with the knowledge implicit in it. The goal of our research is to contribute to the information management technology of digital libraries. We demonstrate the theoretical feasibility of our digital library model by showing that it can be efficiently implemented. Moreover, we demonstrate the model's practical feasibility by providing a full translation of the model into RDF and of the query language into SPARQL. We provide a sound and complete calculus for reasoning on the RDF graphs resulting from the translation. Based on this calculus, we prove the correctness of both translations, showing that the translation functions preserve the semantics of the digital library and of its query language.
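The Datalog-completion step can be sketched as a naive fixpoint computation; the `partOf` relation and the library facts below are hypothetical examples, not the thesis's actual axioms:

```python
def complete(facts, rules):
    """Apply Datalog-style rules to a fact base until fixpoint.

    facts: set of (predicate, arg1, arg2) triples.
    rules: functions mapping the current fact set to derivable facts.
    """
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts) - facts
        if not new:  # fixpoint: nothing new derivable
            return facts
        facts |= new

def part_of_transitivity(facts):
    # partOf(x, y), partOf(y, z) -> partOf(x, z)
    derived = set()
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "partOf" and y1 == y2:
                derived.add(("partOf", x, z))
    return derived

library = {("partOf", "chapter", "book"), ("partOf", "book", "collection")}
completed = complete(library, [part_of_transitivity])
```

The derived fact that the chapter is part of the collection is exactly the kind of implicit knowledge the completion makes queryable.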
45

Logics of Knowledge and Cryptography : Completeness and Expressiveness

Cohen, Mika January 2007 (has links)
An understanding of cryptographic protocols requires that we examine the knowledge of protocol participants and adversaries: When a participant receives a message, does she know who sent it? Does she know that the message is fresh, and not merely a replay of some old message? Does a network spy know who is talking to whom? This thesis studies logics of knowledge and cryptography. Specifically, it addresses the problem of how to make the concept of knowledge reflect feasible computability within a Kripke-style semantics. The main contributions are as follows.
1. A generalized Kripke semantics for first-order epistemic logic and cryptography, where the latter is modeled using private constants and arbitrary cryptographic operations, as in the Applied Pi-calculus.
2. An axiomatization of first-order epistemic logic which is sound and complete relative to an underlying theory of cryptographic terms, and to an omega-rule for quantifiers. Besides standard axioms and rules from first-order epistemic logic, the axiomatization includes some novel axioms for the interaction between knowledge and cryptography.
3. Epistemic characterizations of static equivalence and Dolev-Yao message deduction.
4. A generalization of Kripke semantics for propositional epistemic logic and symmetric cryptography.
5. Decidability, soundness, and completeness for propositional BAN-like logics with respect to message-passing systems. Completeness and decidability are generalized to logics induced from an arbitrary base of protocol-specific assumptions.
6. An epistemic definition of message deduction. The definition lies between weaker and stronger versions of Dolev-Yao deduction, and coincides with the weaker Dolev-Yao deduction on all atomic messages. For composite messages, the definition withstands a well-known counterexample to Dolev-Yao deduction.
7. Protocol examples using mixes, a Crowds-style protocol, and electronic payments.
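Dolev-Yao message deduction, which several of the contributions characterize epistemically, can be sketched as a closure computation; the message encoding and the example protocol observation below are illustrative assumptions, and the composition rules (pairing up, encrypting) are omitted since only decomposition matters for what the spy learns here:

```python
def dolev_yao_closure(knowledge):
    """Close a set of messages under Dolev-Yao decomposition:
    unpairing, and decryption whenever the key is already known.

    Messages are atoms (strings) or tuples ("pair", m1, m2) and
    ("enc", body, key).
    """
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for msg in list(known):
            parts = []
            if isinstance(msg, tuple) and msg[0] == "pair":
                parts = [msg[1], msg[2]]
            elif isinstance(msg, tuple) and msg[0] == "enc" and msg[2] in known:
                parts = [msg[1]]  # key known, so the body is deducible
            for p in parts:
                if p not in known:
                    known.add(p)
                    changed = True
    return known

# A careless protocol sends the key alongside the ciphertext.
observed = {("pair", ("enc", "secret", "k"), "k")}
known = dolev_yao_closure(observed)
```

The closure first splits the pair, learning the key, and on the next pass decrypts the ciphertext, so the spy deduces the secret.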
46

Learning with Markov logic networks : transfer learning, structure learning, and an application to Web query disambiguation

Mihalkova, Lilyana Simeonova 18 March 2011 (has links)
Traditionally, machine learning algorithms assume that training data is provided as a set of independent instances, each of which can be described as a feature vector. In contrast, many domains of interest are inherently multi-relational, consisting of entities connected by a rich set of relations. For example, the participants in a social network are linked by friendships, collaborations, and shared interests. Likewise, the users of a search engine are related by searches for similar items and clicks on shared sites. The ability to model and reason about such relations is essential, not only because better predictive accuracy is achieved by exploiting this additional information, but also because frequently the goal is to predict whether a set of entities are related in a particular way. This thesis falls within the area of Statistical Relational Learning (SRL), which combines ideas from two traditions within artificial intelligence, first-order logic and probabilistic graphical models, to address the challenge of learning from multi-relational data. We build on one particular SRL model, Markov logic networks (MLNs), which consist of a set of weighted first-order-logic formulae and provide a principled way of defining a probability distribution over possible worlds. We develop algorithms for learning MLN structure both from scratch and by transferring a previously learned model, as well as an application of MLNs to the problem of Web query disambiguation. The ideas we present are unified by two main themes: the need to deal with limited training data and the use of bottom-up learning techniques. Structure learning, the task of automatically acquiring a set of dependencies among the relations in the domain, is a central problem in SRL. We introduce BUSL, an algorithm for learning MLN structure from scratch that proceeds in a bottom-up fashion, breaking away from the top-down learning tradition typical of SRL.
Our approach first constructs a novel data structure called a Markov network template that is used to restrict the search space for clauses. Our experiments in three relational domains demonstrate that BUSL dramatically reduces the search space for clauses and attains significantly higher accuracy than a structure learner that follows a top-down approach. Accurate and efficient structure learning can also be achieved by transferring a model obtained in a source domain related to the current target domain of interest. We view transfer as a revision task and present an algorithm that diagnoses a source MLN to determine which of its parts transfer directly to the target domain and which need to be updated. This analysis focuses the search for revisions on the incorrect portions of the source structure, thus speeding up learning. Transfer learning is particularly important when target-domain data is limited, such as when data on only a few individuals is available from domains with hundreds of entities connected by a variety of relations. We also address this challenging case and develop a general transfer learning approach that makes effective use of such limited target data in several social network domains. Finally, we develop an application of MLNs to the problem of Web query disambiguation in a more privacy-aware setting, where the only information available about a user is that captured in a short search session of 5-6 previous queries on average. This setting contrasts with previous work that typically assumes the availability of long user-specific search histories. To compensate for the scarcity of user-specific information, our approach exploits the relations between users, search terms, and URLs. We demonstrate the effectiveness of our approach in the presence of noise and show that it outperforms several natural baselines on a large data set collected from the MSN search engine.
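The way an MLN's weighted formulas define a distribution over possible worlds can be sketched on a toy example; the single formula, its weight, and the one-constant domain below are illustrative assumptions, not part of the thesis:

```python
from itertools import product
from math import exp

# Toy MLN: one formula, Smokes(x) => Cancer(x), with a hypothetical
# weight, grounded over a single constant. A possible world is then
# just a pair of truth values (smokes, cancer).
WEIGHT = 1.5

def n_satisfied(smokes, cancer):
    # Number of true groundings of the implication in this world.
    return 1 if (not smokes) or cancer else 0

worlds = list(product([False, True], repeat=2))
unnormalized = {w: exp(WEIGHT * n_satisfied(*w)) for w in worlds}
z = sum(unnormalized.values())  # partition function
prob = {w: u / z for w, u in unnormalized.items()}
```

Worlds that satisfy more groundings of the weighted formula get exponentially more probability mass, so the lone world violating the implication (smokes but no cancer) is the least likely rather than impossible: the soft-constraint reading of first-order formulas that MLNs provide.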
47

Axiomatized Relationships between Ontologies

Chui, Carmen 21 November 2013 (has links)
This work focuses on the axiomatized relationships between different ontologies of varying levels of expressivity. Motivated by experiences in the decomposition of first-order logic ontologies, we partially decompose the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) into modules. By leveraging automated reasoning tools to semi-automatically verify the modules, we provide an account of the meta-theoretic relationships found between DOLCE and other existing ontologies. As well, we examine the composition process required to determine relationships between DOLCE modules and the Process Specification Language (PSL) ontology. Then, we propose an ontology based on the semantically-weak Computer Integrated Manufacturing Open System Architecture (CIMOSA) framework by augmenting its constructs with terminology found in PSL. Finally, we attempt to map two semantically-weak product ontologies together to analyze the applications of ontology mappings in e-commerce.
49

Αλληλεπιδραστικό σύστημα μετατροπής προτάσεων φυσικής γλώσσας σε κατηγορηματική λογική πρώτης τάξης με αυτόματη εισαγωγή προτάσεων και δημιουργία υποδείξεων για το χρήστη / An interactive system for converting natural language sentences into first-order predicate logic, with automatic sentence entry and generation of hints for the user

Περίκος, Ισίδωρος 07 April 2011 (has links)
Knowledge representation is a fundamental topic of artificial intelligence. In everyday life people use natural language to communicate, but natural language cannot be used for knowledge representation in computer systems, mainly because it lacks clear semantics. A basic knowledge representation language is First-Order Logic (FOL), the main representative of logic-based representation languages, which is part of almost any introductory AI course and textbook. Teaching FOL as a knowledge representation and reasoning language includes many aspects. One of them is the translation of natural language (NL) sentences into FOL formulas, often called logic formalization of NL sentences. It is an ad hoc process; there is no specific algorithm that can be automated within a computer, mainly because NL lacks the clear semantics that FOL has. During this thesis, a web-based interactive system was developed whose main aim is to provide a structured process to students and guide them in translating an NL sentence into a FOL formula. An assistant system was also created to automate the entry of new sentences: the teacher inserts a sentence in natural language together with its FOL formula, the formula is automatically analyzed, and the information necessary for the translation is extracted and stored. The thesis also implements a semi-automatic help generation system, whose aim is to recognize students' errors and provide help and guidance during the stages of the conversion process.
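A single step of such logic formalization can be sketched for one fixed sentence pattern; this toy translator is an illustrative assumption, not the thesis's system, which instead guides students through the ad hoc process interactively:

```python
import re

def translate(sentence):
    """Translate sentences of the fixed form 'All A are B' into a FOL string.

    Only this one pattern is handled; general logic formalization is
    ad hoc and far richer, which is why tutoring systems walk students
    through it step by step instead of automating it outright.
    """
    match = re.fullmatch(r"All (\w+) are (\w+)\.?", sentence.strip())
    if match is None:
        raise ValueError("unsupported sentence form")
    a, b = (w.capitalize() for w in match.groups())
    return f"forall x ({a}(x) -> {b}(x))"

formula = translate("All humans are mortal.")
```

Even this pattern hides choices a student must make explicitly, such as which noun becomes the antecedent predicate and which the consequent.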
50

Validating reasoning heuristics using next generation theorem provers

Steyn, Paul Stephanes 31 January 2009 (has links)
The specification of enterprise information systems using formal specification languages enables the formal verification of these systems. Reasoning about the properties of a formal specification is a tedious task that can be greatly facilitated by an automated reasoner. However, set theory is a cornerstone of many formal specification languages and poses demanding challenges to automated reasoners. To this end, a number of heuristics have been developed to aid the Otter theorem prover in finding short proofs for set-theoretic problems. This dissertation investigates the applicability of these heuristics to next-generation theorem provers. / Computing / M.Sc. (Computer Science)
