101

The diagrammatic specification and automatic generation of geometry subroutines

Li, Yulin, Ph. D. 20 October 2010
Programming has advanced a great deal since the appearance of the stored-program architecture. Through the successive generations of machine codes, assembly languages, high-level languages, and object-oriented languages, the drive has been toward program descriptions that express more meaning in a shorter space. This trend continues today with domain-specific languages. However, conventional languages rely on a textual formalism (commands, statements, lines of code) to capture the programmer's intent, which, regardless of its level of abstraction, imposes inevitable overheads: before successful programming activities can take place, the syntax has to be mastered, names and keywords memorized, the library routines learned, and so on. Existing visual programming languages avoid some of these overheads, but do not release the programmer from the task of specifying the program logic, which consumes the main portion of programming time and is also the major source of difficult bugs. Our work aims to minimize the demands a formalism imposes on the programmer of geometric subroutines beyond what is inherent in the problem itself. Our approach frees the programmer from syntactic constraints and generates logically correct programs automatically from program descriptions in the form of diagrams. To write a program, the programmer simply draws a few diagrams to depict the problem context and specifies all the necessary parameters through menu operations. Diagrams are succinct, easy to learn, and intuitive to use. They are much easier to modify than code, and they help the user visualize and analyze the problem, in addition to providing information to the computer. Furthermore, diagrams describe a situation rather than a task and thus are reusable for different tasks: in general, a single diagram can generate many programs. For these reasons, we have chosen diagrams as the main specification mechanism. In addition, we leverage the power of automatic inference to reason about diagrams and generic components (the building blocks of our programs) and discover the logic for assembling these components into correct programs. To facilitate inference, symbolic facts encode the entities present in the diagrams, their spatial relationships, and the preconditions and effects of reusable components. We have developed a reference implementation and tested it on a number of real-world examples to demonstrate the feasibility and efficacy of our approach.
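To make the assembly-by-inference idea concrete, here is a minimal sketch (the component names, fact strings, and search strategy are invented for illustration and are not taken from the thesis, which reasons over actual diagrams):

```python
# Minimal sketch of assembling precondition/effect components into a
# program by forward search. Facts are plain strings standing in for
# the symbolic encoding of diagram entities and spatial relations.
from collections import namedtuple

Component = namedtuple("Component", ["name", "preconditions", "effects"])

initial_facts = frozenset({"point(p)", "segment(s)"})
goal_facts = {"foot(f)", "distance(d)"}

library = [
    Component("drop_perpendicular", {"point(p)", "segment(s)"}, {"foot(f)"}),
    Component("measure", {"point(p)", "foot(f)"}, {"distance(d)"}),
]

def assemble(facts, plan, depth=0):
    """Depth-first search for a component sequence reaching the goal."""
    if goal_facts <= facts:
        return plan
    if depth > 10:  # keep the toy search bounded
        return None
    for c in library:
        if c.preconditions <= facts and not (c.effects <= facts):
            result = assemble(facts | c.effects, plan + [c.name], depth + 1)
            if result is not None:
                return result
    return None

print(assemble(initial_facts, []))  # ['drop_perpendicular', 'measure']
```

In this toy version the "diagram" is already flattened into facts; the point is only that once preconditions and effects are symbolic, assembling components into a correct program becomes a search problem.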
102

DATALOG WITH CONSTRAINTS: A NEW ANSWER-SET PROGRAMMING FORMALISM

East, Deborah J. 01 January 2001
Knowledge representation and search are two fundamental areas of artificial intelligence. Knowledge representation is the area of artificial intelligence which deals with capturing, in a formal language, the properties of objects and the relationships between objects. Search is a systematic examination of all possible candidate solutions to a problem that is described as a theory in some knowledge representation formalism. We compare traditional declarative programming formalisms such as PROLOG and DATALOG with answer-set programming formalisms such as logic programming with the stable model semantics. In this thesis we develop an answer-set formalism we call DC. The logic of DC is based on the logic of propositional schemata and a version of the Closed World Assumption. Two important features of the DC logic are that it supports modeling of the cardinalities of sets and Horn clauses. These two features facilitate the modeling of search problems. The DC system includes an implementation of a grounder and a solver. The grounder for the DC system grounds instances of problems while retaining the structure of the cardinality of sets; the resulting theories are thus more concise. In addition, the solver for the DC system exploits the structure of the cardinality of sets to perform more efficient search. The second feature, Horn clauses, is used when transitive closure eliminates the need for additional variables. The semantics of the Horn clauses are retained in the grounded theories, which also results in more concise theories. Our goal in developing DC is to provide the computer science community with a system which facilitates the modeling of problems, is easy to use, is efficient, and captures the class of NP-search problems. We show experimental results comparing DC to other systems. These results show that DC is always competitive with state-of-the-art answer-set programming systems and, for many problems, more efficient.
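One concrete way to see the conciseness argument: a purely clausal grounding of "exactly one of n atoms" needs O(n^2) clauses, whereas a formalism with cardinality atoms keeps it as a single construct. A hedged sketch (hypothetical Python encoding, not DC's actual input syntax):

```python
# Expanding "exactly one of xs" into plain propositional clauses:
# one "at least one" clause plus pairwise exclusions ("-" = negation).
from itertools import combinations

def exactly_one_clauses(xs):
    at_least = [list(xs)]                                  # x1 v ... v xn
    at_most = [[f"-{a}", f"-{b}"] for a, b in combinations(xs, 2)]
    return at_least + at_most

atoms = [f"color(v,{c})" for c in ("r", "g", "b")]
clauses = exactly_one_clauses(atoms)
print(len(clauses))   # 4 clauses for 3 atoms; grows quadratically with n
print(clauses[1])     # ['-color(v,r)', '-color(v,g)']
```

A grounder that retains the cardinality structure, as DC's does, avoids this blow-up and hands the solver a construct it can branch on directly.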
103

A case-based system for lesson plan construction

Saad, Aslina January 2011
Planning for teaching imposes a significant burden on teachers, as teachers need to prepare different lesson plans for different classes according to various constraints. Statistical evidence shows that lesson planning in the Malaysian context is done in isolation and lesson plan sharing is limited. The purpose of this thesis is to investigate whether a case-based system can reduce the time teachers spend on constructing lesson plans. A case-based system, SmartLP, was designed. In this system, a case consists of a problem description and solution pair, and an attribute-value representation is used for the case. SmartLP is a synthesis type of CBR system, which attempts to create a new solution by combining parts of previous solutions during adaptation. The five activities in the CBR cycle (retrieve, reuse, revise, review and retain) are created via three types of design: application, architectural and user interface. The inputs are the requirements and constraints of the curriculum and the student facilities available, and the output is the solution, i.e. the appropriate elements of a lesson plan. The retrieval module offers five types of search: advanced, hierarchical, Boolean, basic and browsing. Solving a problem in this system involves obtaining a problem description, measuring the similarity of the current problem to previous problems stored in a database, retrieving one or more similar cases and attempting to reuse the solution of the retrieved cases, possibly after adaptation. Case adaptation for multiple lesson plans helps teachers to customise the retrieved plan to suit their constraints. This is followed by case revision, which allows users to access and revise their constructed lesson plans in the system. Validation mechanisms, through case verification, ensure that the retained cases are of quality. A formative study was conducted to investigate the effects of SmartLP on performance. The study revealed that all the lesson plans constructed with SmartLP assistance took significantly less time than the control lesson plans constructed without SmartLP assistance, even though the control group had access to computers and other tools. No significant difference in writing quality, measured by a scoring system, was observed relative to the control group, who constructed lesson plans on the same tasks without receiving any assistance. The limitations of SmartLP are indicated and the focus of further research is proposed. Keywords: case-based system, CBR approach, knowledge acquisition, knowledge representation, case representation, evaluation, lesson planning.
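The retrieval step described above is essentially weighted nearest-neighbour matching over attribute-value pairs. A minimal sketch (attribute names, weights, and cases are invented; SmartLP's actual similarity measure may differ):

```python
# Weighted similarity retrieval over attribute-value lesson-plan cases.
cases = [
    {"subject": "biology", "level": 4, "duration": 60, "plan": "plan-017"},
    {"subject": "biology", "level": 2, "duration": 40, "plan": "plan-102"},
    {"subject": "physics", "level": 4, "duration": 60, "plan": "plan-055"},
]
weights = {"subject": 0.5, "level": 0.3, "duration": 0.2}

def similarity(query, case):
    """Exact match on symbolic attributes, scaled distance on numeric ones."""
    s = weights["subject"] * (query["subject"] == case["subject"])
    s += weights["level"] * (1 - abs(query["level"] - case["level"]) / 5)
    s += weights["duration"] * (1 - abs(query["duration"] - case["duration"]) / 120)
    return s

query = {"subject": "biology", "level": 3, "duration": 60}
best = max(cases, key=lambda c: similarity(query, c))
print(best["plan"])  # plan-017, the closest stored case
```

Adaptation would then modify the retrieved plan's elements against the query's constraints before the plan is revised and, if verified, retained.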
104

A framework to support semantic interoperability in product design and manufacture

Chungoora, Nitishal January 2010
It has been recognised that the ability to communicate the meaning of concepts and their intent within and across system boundaries, for supporting key decisions in product design and manufacture, is impaired by the semantic interoperability issues that are presently encountered. This work contributes to the field of semantic interoperability in product design and manufacture. Particular attention is given to the understanding and application of relevant concepts from the computer science world, notably ontology-based approaches, to help resolve semantic interoperability problems. A novel ontological approach, identified as the Semantic Manufacturing Interoperability Framework (SMIF), is proposed following an exploration of the important requirements to be satisfied. The framework, built on top of a Common Logic-based ontological formalism, consists of a manufacturing foundation that captures the semantics of core feature-based design and manufacture concepts, over which the specialisation of domain models can take place. Furthermore, the framework supports mechanisms for reconciling semantics, thereby improving the knowledge sharing capability between heterogeneous domains that need to interoperate and have been based on the same manufacturing foundation. This work also analyses a number of test case scenarios, in which the framework has been deployed for knowledge representation and the reconciliation of models involving products with standard hole features and their related machining process sequences. The test cases show that SMIF provides effective support towards achieving semantic interoperability in product design and manufacture. Proposed extensions to the framework are also identified so as to provide a view on future work.
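As a hedged illustration of the reconciliation idea (plain Python standing in for the Common Logic formalism; all concept and term names are hypothetical): two domain models can be aligned when their terms specialise the same concept of the shared manufacturing foundation.

```python
# Foundation concept with its expected properties.
foundation = {"Hole": {"has_diameter", "has_depth"}}

# Each domain term declares the foundation concept it specialises.
design_model = {"CounterboreHole": "Hole"}
manufacturing_model = {"DrilledHole": "Hole"}

def reconcilable(term_a, model_a, term_b, model_b):
    """Terms are reconcilable if rooted in the same foundation concept."""
    return model_a[term_a] == model_b[term_b]

print(reconcilable("CounterboreHole", design_model,
                   "DrilledHole", manufacturing_model))  # True
```

The real framework does far more (logical axioms, verification of mappings), but the shared-foundation pattern is the core of why heterogeneous models become interoperable.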
105

Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons

Almassian, Amin 23 March 2016
Real-time processing of space-and-time-variant signals is imperative for perception and real-world problem-solving. In the brain, spatio-temporal stimuli are converted into spike trains by sensory neurons and projected to the neurons in subcortical and cortical layers for further processing. Reservoir Computing (RC) is a neural computation paradigm that is inspired by cortical Neural Networks (NN). It is promising for real-time, on-line computation of spatio-temporal signals. An RC system incorporates a Recurrent Neural Network (RNN) called a reservoir, the state of which is changed by a trajectory of perturbations caused by a spatio-temporal input sequence. A trained, non-recurrent, linear readout layer interprets the dynamics of the reservoir over time. The Echo-State Network (ESN) [1] and the Liquid-State Machine (LSM) [2] are two popular and canonical types of RC system. The former uses non-spiking analog sigmoidal neurons (and, more recently, Leaky Integrator (LI) neurons) and a normalized random connectivity matrix in the reservoir, whereas the reservoir in the latter is composed of Leaky Integrate-and-Fire (LIF) neurons, distributed in a 3-D space, which are connected with dynamic synapses through a probability function. The major difference between analog neurons and spiking neurons lies in their neuron model dynamics and their inter-neuron communication mechanism. However, RC systems share a mysterious common property: they exhibit their best performance when the reservoir dynamics undergo a criticality [1-6], governed by the reservoir's connectivity parameters (|λmax| ≈ 1 in ESN, λ ≈ 2 and w in LSM), which is referred to as the edge of chaos in [3-5]. In this study, we are interested in exploring the possible reasons for this commonality, despite the differences imposed by the different neuron types on the reservoir dynamics. We address this concern from the perspective of information representation in both spiking and non-spiking reservoirs. We measure the Mutual Information (MI) between the state of the reservoir and a spatio-temporal spike-train input, as well as that between the reservoir and a linearly inseparable function of the input, temporal parity. In addition, we derive a Mean Cumulative Mutual Information (MCMI) quantity from MI to measure the amount of stable memory in the reservoir and its correlation with performance on the temporal parity task. We complement our investigation by conducting isolated spoken-digit recognition and spoken-digit sequence-recognition tasks. We hypothesize that a performance analysis of these two tasks will agree with our MI and MCMI results with regard to the impact of stable memory on task performance. It turns out that, in all reservoir types and in all the tasks conducted, reservoir performance peaks when the amount of stable memory in the reservoir is maximized. Likewise, in the chaotic regime (when the network connectivity parameter is greater than a critical value), the absence of stable memory in the reservoir appears to be an evident cause of the performance decrease in all conducted tasks. Our results also show that the reservoir with LIF neurons possesses a higher stable memory of the input (quantified by input-reservoir MCMI) and outperforms the reservoirs with analog sigmoidal and LI neurons in processing the temporal parity and spoken-digit recognition tasks. From an efficiency standpoint, the reservoir with 100 LIF neurons outperforms the reservoir with 500 LI neurons in spoken-digit recognition tasks.
The sigmoidal reservoir falls short of solving this task. The optimum input-reservoir MCMIs we obtained for the reservoirs with LIF, LI, and sigmoidal neurons are 4.21, 3.79, and 3.71, and the corresponding output-reservoir MCMIs are 2.92, 2.51, and 2.47, respectively. In our isolated spoken-digit recognition experiments, the maximum mean performance achieved by the reservoirs with N = 500 LIF, LI, and sigmoidal neurons is 97%, 79%, and 2%, respectively; the reservoirs with N = 100 neurons solve the task with 80%, 68%, and 0.9%, respectively. Our study sheds light on the impact of the information representation and memory of the reservoir on the performance of RC systems. The results of our experiments reveal the advantage of using LIF neurons in RC systems for computing spike trains to solve memory-demanding, real-world, spatio-temporal problems. Our findings have applications in engineering nano-electronic RC systems that can be used to solve real-world spatio-temporal problems.
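For readers unfamiliar with ESNs, the sketch below (a generic numpy implementation written for this summary, not the thesis's code) shows the two ingredients the abstract keeps returning to: a reservoir scaled so that |λmax| is near 1, and a linear readout trained to recover a delayed input, a crude probe of stable memory:

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps = 100, 200

# Random reservoir, rescaled so the spectral radius is just below 1
# (the criticality regime associated with peak RC performance).
W = rng.standard_normal((n, n)) / np.sqrt(n)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))

W_in = rng.standard_normal(n)
u = rng.standard_normal(steps)          # toy one-dimensional input stream
x = np.zeros(n)
states = np.empty((steps, n))
for t in range(steps):
    x = np.tanh(W @ x + W_in * u[t])    # analog sigmoidal neuron update
    states[t] = x

# Least-squares linear readout on a 3-step-delayed copy of the input.
target = np.roll(u, 3)
w_out, *_ = np.linalg.lstsq(states, target, rcond=None)
print("readout MSE:", float(np.mean((states @ w_out - target) ** 2)))
```

Pushing the scaling factor well above 1 drives such a network into the chaotic regime, where, consistent with the abstract's findings, the memory of past inputs (and hence readout accuracy) collapses.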
106

Attitude and Adoption: Understanding Climate Change Through Predictive Modeling

Jackson B Bennett 12 August 2019
Climate change has emerged as one of the most critical issues of the 21st century. It stands to impact communities across the globe, forcing individuals and governments alike to adapt to a new environment. While it is critical for governments and organizations to make strides to change business as usual, individuals also have the ability to make an impact. The goal of this thesis is to study the beliefs that shape climate-related attitudes and the factors that drive the adoption of sustainable practices and technologies, using a foundation in statistical learning. Previous research has studied the factors that influence both climate-related attitude and adoption, but comparatively little has been done to leverage recent advances in statistical learning and computing power to advance our understanding of these topics. As increasingly large amounts of relevant data become available, it will be pivotal not only to use these emerging sources to derive novel insights on climate change, but also to develop and improve statistical frameworks designed with climate change in mind. This thesis presents two novel applications of statistical learning to climate change, one of which includes a more general framework that can easily be extended beyond the field of climate change. Specifically, the work consists of two studies: (1) a robust integration of social media activity with climate survey data to relate climate-talk to climate-thought, and (2) the development and validation of a statistical learning model to predict renewable energy installations using social, environmental, and economic predictors. The analysis presented in this thesis supports decision makers by providing new insights into the factors that drive climate attitude and adoption.
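A hedged sketch of the flavour of the second study (synthetic data and invented feature names; the thesis's actual predictors, data, and model may differ):

```python
# Predicting renewable installations from social, environmental, and
# economic features with a standard regression model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.normal(55, 15, n),     # median_income (hypothetical, k$)
    rng.normal(5.2, 1.0, n),   # solar_irradiance (kWh/m^2/day)
    rng.uniform(0, 1, n),      # climate_concern_index (survey-derived)
])
# Synthetic target: installations rise with irradiance and concern.
y = 2.0 * X[:, 1] + 5.0 * X[:, 2] + 0.01 * X[:, 0] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```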
107

From natural language specifications to formal specifications via an ontology as a pivot model

Sadoun, Driss 17 June 2014
The main objective of system development is to address requirements. As such, success in its realisation depends largely on the requirement specification phase, which aims to describe precisely and unambiguously all the characteristics of the system to be developed. In order to arrive at a set of requirements, a user needs analysis is carried out involving different parties (stakeholders). System requirements are generally written in natural language (NL) to guarantee a wider understanding. However, since NL texts can contain semantic ambiguities, implicit information, or other inconsistencies, this can lead to diverse interpretations. Hence, it is not easy to specify a complete and consistent set of requirements, and the specified requirements must therefore be formally checked. Specifications written in NL are not considered formal and do not allow a direct application of formal verification methods; NL requirements must therefore be transformed into formal specifications. The work presented in this thesis was carried out in this framework. The main difficulty of such a transformation lies in the size of the gap between NL requirements and formal specifications. The objective of this work is to propose an approach for the automatic verification of user requirements which are written in natural language and describe a system's expected behaviour. Our approach uses the potential offered by a representation model based on a logical formalism. Our contribution has three main aspects: 1) an OWL-DL ontology based on description logics, used as a pivot representation model that links NL requirements to formal specifications; 2) an approach for instantiating the pivot representation model, based on an analysis driven by the semantics of the ontology, which automatically transforms NL requirements into their conceptual representations; and 3) an approach exploiting the logical formalism of the ontology to automatically translate the pivot representation model into a formal specification language called Maude.
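A toy sketch of the three-stage pipeline (the pattern, ontology instance, and output are drastic simplifications invented for this summary; the thesis uses semantic analysis over OWL-DL and the real Maude language, not regexes):

```python
# NL requirement -> pivot (ontology-instance-like) representation ->
# Maude-style rewrite rule emitted as text.
import re

sentence = "When the user presses start, the system enters running mode."

m = re.search(r"When the (\w+) presses (\w+), the (\w+) enters (\w+)", sentence)
agent, event, system, state = m.groups()

# Pivot representation: an instance of a hypothetical Transition concept.
instance = {"type": "Transition", "agent": agent,
            "trigger": event, "target": state}

# Emission toward the formal level: a rewrite rule in Maude-like syntax
# (the 'idle' source state is assumed here, not extracted).
rule = f"rl [{event}] : {system}(idle) => {system}({state}) ."
print(instance)
print(rule)
```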
108

Explanation in systems that use influence diagrams as a knowledge representation formalism

Castiñeira, Maria Inés 18 October 1996
This work discusses the need to represent and handle uncertainty in problem solving by knowledge-based systems, and how this can be done using belief networks. This form of knowledge representation combines probability theory and decision theory, to represent uncertainty, with graph theory, which is well suited to representing the dependence relations between the model variables. Systems that use belief networks for knowledge representation are called Bayesian or normative systems. This work investigates and adopts influence diagrams (belief networks that represent uncertainty, decisions and user preferences) to develop a normative decision support system. The problem of explanation in Bayesian systems, relatively new compared with that of rule-based systems, is addressed: comprehensible explanations for probabilistic reasoning systems are a prerequisite for wider acceptance of Bayesian methods. Two schemes for explaining influence diagrams are proposed: sensitivity analysis and qualitative probabilistic networks, aiming to derive general conclusions and to understand qualitatively the relations between the actions and events of the model. A graphical decision support tool that represents the user's problem as an influence diagram has been implemented in Smalltalk. This application not only represents and evaluates decision problems but also incorporates the explanation facilities mentioned above. The possibility of observing graphically what happens to the model as variable values change (sensitivity analysis) gives a better understanding of the problem by identifying the variables that influence the decisions, and helps to refine the values of the variables involved. Furthermore, qualitative probabilistic networks allow appropriate abstractions and simplifications of the model, i.e., the qualitative relations of the model are obtained from its quantitative level. The general conclusions obtained serve both to narrow the space of optimal strategies and to understand qualitatively the relations between the actions and events that form part of the model.
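The sensitivity-analysis facility can be pictured on a toy decision (numbers invented; the implemented tool operates on full influence diagrams, not this two-branch example):

```python
# One-way sensitivity analysis: sweep P(event) and watch where the
# expected-utility-optimal decision flips.
utilities = {
    ("act", True): 100, ("act", False): -40,
    ("wait", True): 20, ("wait", False): 20,
}

def best_decision(p_event):
    eu = {d: p_event * utilities[(d, True)]
             + (1 - p_event) * utilities[(d, False)]
          for d in ("act", "wait")}
    return max(eu, key=eu.get), eu

for p in (0.1, 0.3, 0.5, 0.7):
    d, eu = best_decision(p)
    print(f"P(event)={p:.1f} -> {d:4s} (EU act={eu['act']:.0f}, wait={eu['wait']:.0f})")
# The optimum flips from 'wait' to 'act' near P(event) = 0.43, showing
# which variable values actually matter for the recommendation.
```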
109

Contribution to the methods of argumentation for decision making: application to arbitration within the cereal industry

Bourguet, Jean-Rémi 16 December 2010
The objective of our work is to design a theoretical and methodological framework for decision support within a knowledge representation model, illustrated by a case study from the cereal industry. The specific application domain is the definition of food quality, which involves different points of view (nutritional value, flavour quality, hygiene assurance of the products) and different stakeholders (industry workers, researchers, members of the general public) whose intentions diverge. The basis of our approach is the use of argumentation systems from the AI literature. Argumentation systems are formal frameworks that aim to represent arguments and the interactions between them, and to determine which statements are inferable from a set of arguments deemed coherent; these statements may, for example, correspond to beliefs or to decisions to be made. One of the most abstract formal frameworks, a benchmark in the field, is that proposed by Dung in 1995. In this framework, an argumentation system is defined by a finite set of arguments and a binary relation on this set, called the attack relation. One can also view such a system as a labeled graph whose vertices are the arguments and whose arcs represent the direct attack relation. An argument indirectly "attacks" another if there exists a path of odd length from the first to the second, and "defends" it if there exists a path of even length. An argument is inferable if it belongs to a set of arguments with certain properties relating to the notions of attack and defence; it is in this sense that the acceptability of arguments is said to be collective. Dung's argumentation framework has been extended, notably by adding preferences between arguments. These, aggregated with the attacks, yield a "defeat" relation, which changes the computation of the collective acceptability of arguments. On this basis, we propose a method for determining the equivalence between two argumentation systems, in order to unify these abstract preference-based argumentation frameworks. A contextual preference-based argumentation framework is then proposed (preferences and attacks between arguments have contextual validity), and methods for aggregating attacks and preferences and for merging contexts are investigated with respect to the consistency of the collectively accepted arguments. Consistency is obtained when such sets contain no conflicts in terms of the information conveyed and the conclusions and/or decisions supported by their arguments. Our approach draws on three well-known trends in argumentation. First, we propose a nested view of the argument that meets the expectations of the "micro" trend, which seeks to define the internal structure of arguments. Second, we propose to generate attacks between arguments based on the actions they support or reject; this also addresses the concerns of the "macro" trend, which treats the relations between arguments with a view to computing collective acceptability. Finally, we investigate some aspects of the "rhetoric" trend, namely the definition of audiences that give contextual strength to arguments and generate preferences; this last aspect notably allows us to establish contextual recommendations. The entire approach, illustrated through situational examples and an application case, is embodied in an argumentation-based arbitration model, itself partly implemented in a knowledge representation and reasoning formalism (conceptual graphs).
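The attack/defence notions recalled above can be made concrete in a few lines of code. The sketch below computes Dung's grounded extension by iterating the characteristic function (the example graph is invented):

```python
# Grounded extension: least fixpoint of "the set of arguments defended
# by the current set", starting from the empty set.
args = {"a", "b", "c", "d"}
attacks = {("a", "b"), ("b", "c"), ("c", "d")}

def defended(x, s):
    """x is acceptable w.r.t. s if every attacker of x is attacked by s."""
    attackers = {y for (y, z) in attacks if z == x}
    return all(any((w, y) in attacks for w in s) for y in attackers)

grounded, prev = set(), None
while grounded != prev:
    prev = set(grounded)
    grounded = {x for x in args if defended(x, prev)}
print(sorted(grounded))  # ['a', 'c']: 'a' is unattacked and defends 'c'
```

Preference-based extensions then replace the raw attack relation by a defeat relation (attacks filtered or reversed by preferences) before the same acceptability machinery is applied.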
110

Querying large knowledge bases: algorithms for rewriting conjunctive queries in the presence of existential rules

König, Mélanie 24 October 2014
The issue of querying a knowledge base in the presence of an ontology, known as Ontology-Based Query Answering (OBQA), consists of taking general knowledge, typically a domain ontology, into account when evaluating queries. In this thesis, ontological knowledge is represented by first-order logical formulas called existential rules, also known as Datalog+/- rules or tuple-generating dependencies. We adopt a well-known approach which consists of rewriting the query with the rules so as to reduce the problem to a classical database query answering problem. We define a theoretical framework for studying algorithms that rewrite a conjunctive query into a union of conjunctive queries, together with a generic rewriting algorithm that takes a rewriting operator as a parameter. We then propose several rewriting operators and develop various optimisations, which we evaluate on benchmarks of the domain.
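The generic algorithm can be pictured as a saturation loop over a set of conjunctive queries. A deliberately simplified sketch (variable-free atoms and single-atom rule heads; real existential rules require piece-unification, which this toy omits):

```python
# Rewrite a query into a union of conjunctive queries (UCQ) by applying
# a rewriting operator until no new rewriting appears.
rules = [  # (body, head): during rewriting, a head atom becomes its body
    (("student(x)",), "person(x)"),
    (("phdStudent(x)",), "student(x)"),
]

def one_step(query):
    """Rewriting operator: replace one atom by a rule body producing it."""
    out = set()
    for i, atom in enumerate(query):
        for body, head in rules:
            if head == atom:
                out.add(tuple(sorted(set(query[:i] + body + query[i+1:]))))
    return out

ucq = frontier = {("person(x)",)}
while frontier:
    new = {q for f in frontier for q in one_step(f)} - ucq
    ucq = ucq | new
    frontier = new
print(sorted(ucq))
# [('person(x)',), ('phdStudent(x)',), ('student(x)',)]
```

The operators studied in the thesis differ precisely in how `one_step` is defined and pruned; the surrounding loop and its termination-by-fixpoint argument stay the same.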
