41

Les systèmes cognitifs dans les réseaux autonomes : une méthode d'apprentissage distribué et collaboratif situé dans le plan de connaissance pour l'auto-adaptation / Cognitive systems in autonomic networks : a distributed and collaborative learning method in the knowledge plane for self-adaptation

Mbaye, Maïssa 17 December 2009
One of the major challenges for the coming decades in the field of information and communication technologies is the realization of the autonomic networking paradigm. Its aim is to make network equipment capable of self-management, that is, able to self-configure, self-optimize, self-protect and self-heal while respecting the high-level objectives of its designers. The major autonomic networking architectures rely mainly on a closed control loop that lets network equipment self-adapt (self-configure and self-optimize) in response to events arising in its environment. The knowledge plane, an approach strongly promoted by the research community in recent years, proposes using cognitive systems (learning and reasoning) to close this control loop. However, although the major autonomic management architectures integrate learning modules as black boxes, little research has truly examined the contents of these boxes. In this context, we studied the potential contribution of machine learning and proposed a distributed and collaborative learning method. We formalize the self-adaptation problem as a state-action learning problem. This formalization lets us define a method for learning self-adaptation strategies that relies on the history of observed transitions and uses inductive logic programming to discover new strategies from those already found. We also define a knowledge-sharing algorithm that makes network components collaborate in order to accelerate the learning process. Finally, we tested the proposed approach on a DiffServ network and showed how it transposes to multimedia streaming over 802.11 wireless networks.
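The state-action formalization and the knowledge-sharing step can be illustrated with a minimal sketch. The snippet below is not the thesis implementation: the class and method names, the DiffServ-style states and actions, and the utility values are invented for illustration, and the actual method learns strategies with inductive logic programming rather than a simple lookup table.

```python
# Hypothetical sketch: self-adaptation cast as learning (state -> action)
# strategies from a history of observed transitions, plus a naive
# knowledge-sharing step between network elements.
from collections import defaultdict

class StrategyLearner:
    def __init__(self):
        # strategies[state][action] = best observed utility for taking action in state
        self.strategies = defaultdict(dict)

    def observe(self, state, action, reward):
        """Record one transition from the local history of observed adaptations."""
        best = self.strategies[state].get(action, float("-inf"))
        self.strategies[state][action] = max(best, reward)

    def best_action(self, state):
        """Return the action with the highest observed utility for this state."""
        candidates = self.strategies.get(state)
        if not candidates:
            return None  # no strategy learned yet for this state
        return max(candidates, key=candidates.get)

    def share(self, other):
        """Knowledge sharing: import a peer's strategies to speed up learning."""
        for state, actions in other.strategies.items():
            for action, reward in actions.items():
                self.observe(state, action, reward)

# Two routers learn DiffServ-style adaptations independently, then share them.
a, b = StrategyLearner(), StrategyLearner()
a.observe(("EF", "loss_high"), "increase_ef_weight", 0.8)
b.observe(("AF", "delay_high"), "increase_af_weight", 0.6)
a.share(b)
print(a.best_action(("AF", "delay_high")))  # -> increase_af_weight
```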
42

Learning OWL Class Expressions

Lehmann, Jens 09 June 2010
With the advent of the Semantic Web and Semantic Technologies, ontologies have become one of the most prominent paradigms for knowledge representation and reasoning. The popular ontology language OWL, based on description logics, became a W3C recommendation in 2004 and a standard for modelling ontologies on the Web. In the meantime, many studies and applications using OWL have been reported in research and industrial environments, many of which go beyond Internet usage and employ the power of ontological modelling in other fields such as biology, medicine, software engineering, knowledge management, and cognitive systems. However, recent progress in the field faces a lack of well-structured ontologies with large amounts of instance data, because engineering such ontologies requires a considerable investment of resources. Nowadays, knowledge bases often provide large volumes of data without sophisticated schemata. Hence, methods for automated schema acquisition and maintenance are sought. Schema acquisition is closely related to solving typical classification problems in machine learning, e.g. the detection of chemical compounds causing cancer. In this work, we investigate both the underlying machine learning techniques and their application to knowledge acquisition in the Semantic Web.

In order to leverage machine-learning approaches for solving these tasks, it is necessary to develop methods and tools for learning concepts in description logics or, equivalently, class expressions in OWL. In this thesis, it is shown that methods from Inductive Logic Programming (ILP) are applicable to learning in description logic knowledge bases. The results provide foundations for the semi-automatic creation and maintenance of OWL ontologies, in particular in cases when extensional information (i.e. facts, instance data) is abundantly available, while corresponding intensional information (schema) is missing or not expressive enough to allow powerful reasoning over the ontology in a useful way. Such situations often occur when extracting knowledge from different sources, e.g. databases, or in collaborative knowledge engineering scenarios, e.g. using semantic wikis. It can be argued that being able to learn OWL class expressions is a step towards enriching OWL knowledge bases in order to enable powerful reasoning, consistency checking, and improved querying. In particular, plugins for OWL ontology editors based on learning methods are developed and evaluated in this work.

The developed algorithms are not restricted to ontology engineering and can handle other learning problems. Indeed, they lend themselves to generic use in machine learning in the same way as ILP systems do. The main difference, however, is the employed knowledge representation paradigm: ILP traditionally uses logic programs for knowledge representation, whereas this work rests on description logics and OWL. This difference is crucial when considering Semantic Web applications as target use cases, as such applications hinge centrally on the chosen knowledge representation format for knowledge interchange and integration. The work in this thesis can be understood as a broadening of the scope of research and applications of ILP methods. This goal is particularly important since the number of OWL-based systems is already increasing rapidly and can be expected to grow further in the future.

The thesis starts by establishing the necessary theoretical basis and continues with the specification of algorithms. It also contains their evaluation and, finally, presents a number of application scenarios. The research contributions of this work are threefold.

The first contribution is a complete analysis of desirable properties of refinement operators in description logics. Refinement operators are used to traverse the target search space and are, therefore, a crucial element in many learning algorithms. Their properties (completeness, weak completeness, properness, redundancy, infinity, minimality) indicate whether a refinement operator is suitable for being employed in a learning algorithm. The key research question is which of those properties can be combined. It is shown that there is no ideal, i.e. complete, proper, and finite, refinement operator for expressive description logics, which indicates that learning in description logics is a challenging machine learning task. A number of other new results for different property combinations are also proven. The need for these investigations had already been expressed in several articles prior to this PhD work. The theoretical limitations established by these investigations provide clear criteria for the design of refinement operators. In the analysis, as few assumptions as possible were made regarding the description language used.

The second contribution is the development of two refinement operators. The first operator supports a wide range of concept constructors; it is shown to be complete and can be extended to a proper operator. It is the most expressive operator designed for a description language so far. The second operator uses the lightweight language EL and is weakly complete, proper, and finite. It is straightforward to extend it to an ideal operator if required, making it the first published ideal refinement operator in description logics. While the two operators differ considerably in their technical details, they both use background knowledge efficiently.

The third contribution is the actual learning algorithms using the introduced operators. New redundancy-elimination and infinity-handling techniques are introduced in these algorithms. According to the evaluation, the algorithms produce very readable solutions, while their accuracy is competitive with the state of the art in machine learning. Several optimisations for achieving scalability of the introduced algorithms are described, including a knowledge base fragment selection approach, a dedicated reasoning procedure, and a stochastic coverage computation approach.

The research contributions are evaluated on benchmark problems and in use cases. Standard statistical measurements such as cross validation and significance tests show that the approaches are very competitive. Furthermore, the ontology engineering case study provides evidence that the described algorithms can solve the target problems in practice. A major outcome of the doctoral work is the DL-Learner framework. It provides the source code for all algorithms and examples as open source and has been incorporated in other projects.
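To make the notion of a refinement operator concrete, the sketch below shows a toy downward refinement operator over a tiny fragment of class expressions (named classes and existential restrictions). It is only an illustration under invented names and a hand-written class hierarchy; the operators analysed in the thesis, and those implemented in DL-Learner, cover far more constructors and are designed with the properties discussed above in mind.

```python
# Toy downward refinement operator over a minimal class-expression language.
# All class names, roles, and the hierarchy are made up for the example.
from dataclasses import dataclass

CLASS_HIERARCHY = {"Top": ["Person", "Publication"], "Person": ["Researcher"], "Publication": []}
ROLES = ["authorOf"]

@dataclass(frozen=True)
class Some:
    """Existential restriction: (EXISTS role.filler)."""
    role: str
    filler: object

def refine(expr):
    """Yield downward refinements, i.e. strictly more specific class expressions."""
    if expr == "Top":
        # Specialise Top into a direct subclass or an existential restriction.
        for sub in CLASS_HIERARCHY["Top"]:
            yield sub
        for role in ROLES:
            yield Some(role, "Top")
    elif isinstance(expr, str):
        # Specialise a named class into one of its subclasses.
        for sub in CLASS_HIERARCHY.get(expr, []):
            yield sub
    elif isinstance(expr, Some):
        # Refine inside the filler of an existential restriction.
        for filler in refine(expr.filler):
            yield Some(expr.role, filler)

# One expansion step of the search space a learning algorithm would traverse:
for r in refine(Some("authorOf", "Top")):
    print(r)  # Some(filler='Person'), Some(filler='Publication'), Some(filler=Some(...))
```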
43

[en] CAUSAL REASONING AND INDUCTION IN DAVID HUME / [pt] RACIOCÍNIO CAUSAL E INFERÊNCIA INDUTIVA NO PENSAMENTO DE DAVID HUME

CARLOS JACINTO NASCIMENTO MOTTA 25 November 2005
[en] The aim of this work is to present the results of my master's research, which sought to highlight some characteristics of David Hume's approach to induction. According to the standard interpretation, Hume is responsible for showing that our reason is not able to justify any of our inductive reasonings. Hume's problem is also a problem about the rationality of science: since its main method, induction, cannot receive a rational foundation, it seems licit to assert that the result of any inductive inference is irrational. In order to delineate precisely the field in which the Humean criticism operates, this text shows how Hume presents his theories of causal reasoning in his A Treatise of Human Nature, defines the exact characteristics of causal reasoning according to him, and compares this analysis with the readings offered by some of his main commentators, seeking to bring to light the shortcomings of those interpretations. We then discuss some of the most celebrated interpretations of Hume's philosophy, focusing on the texts of Mackie, Beauchamp, and Mappes. The final chapter aims to show the rational characteristics that can be attributed to Humean causal reasoning, emphasizing the particular character of its inferences. Finally, we show how the origin of the copy principle can be an instance of Hume's own use of inductive inference, which leads us to heterodox considerations concerning his view of rationality.
44

Representation of Compositional Relational Programs

Paçacı, Görkem January 2017
Usability aspects of programming languages are often overlooked, yet have a substantial effect on programmer productivity. These issues are even more acute in the field of Inductive Synthesis, where programs are automatically generated from sample expected input and output data, and the programmer needs to be able to comprehend, and confirm or reject, the suggested programs. A promising method of Inductive Synthesis, CombInduce, which is particularly suitable for synthesizing recursive programs, is a candidate for improvements in usability, as the target language Combilog is not user-friendly. The method requires the target language to be strictly compositional, hence devoid of variables, yet have the expressiveness of definite clause programs. This sets up a challenging problem for establishing a user-friendly but equally expressive target language. Alternatives to Combilog, such as Quine's Predicate-functor Logic and Schönfinkel and Curry's Combinatory Logic, also do not offer a practical notation: finding a more usable representation is imperative. This thesis presents two distinct approaches towards more convenient representations which still maintain compositionality. The first is Visual Combilog (VC), a system for visualizing Combilog programs. In this approach Combilog remains the target language for synthesis, but programs can be read and modified by interacting with the equivalent diagrams instead. VC is implemented as a split-view editor that maintains the equivalent Combilog and VC representations on the fly, automatically transforming them as necessary. The second approach is Combilog with Name Projection (CNP), a textual iteration of Combilog that replaces numeric argument positions with argument names. The result is a language where argument names make the notation more readable, yet compositionality is preserved by avoiding variables. Compositionality is demonstrated by implementing CombInduce with CNP as the target language, revealing that programs with the same level of recursive complexity can be synthesized in CNP equally well, and establishing that the underlying method of synthesis can also work with CNP. Our evaluations of the user-friendliness of both representations are supported by a range of methods from Information Visualization, Cognitive Modelling, and Human-Computer Interaction. The increased usability of both representations is confirmed by empirical user studies, an often neglected aspect of language design.
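As a rough illustration of the idea behind CNP, composing relations by argument name rather than through logic variables, the sketch below models relations as lists of name-keyed rows and composes them with a variable-free join, rename, and projection. The function names and the grandparent example are hypothetical and do not reflect CNP's actual syntax or CombInduce's synthesis machinery.

```python
# Variable-free relational composition with named arguments (illustrative only).
def and_(rel1, rel2):
    """Conjunction as a join on shared argument names; no logic variables needed."""
    return [{**a, **b} for a in rel1 for b in rel2
            if all(a[k] == b[k] for k in a.keys() & b.keys())]

def rename(rel, mapping):
    """Rename argument names (a stand-in for name projection over arguments)."""
    return [{mapping.get(k, k): v for k, v in row.items()} for row in rel]

def proj(rel, names):
    """Keep only the listed argument names."""
    return [{k: row[k] for k in names} for row in rel]

parent = [{"parent": "ann", "child": "bob"}, {"parent": "bob", "child": "cid"}]

# grandparent composed point-free: join parent with a renamed copy of itself.
grandparent = proj(and_(parent, rename(parent, {"parent": "child", "child": "grandchild"})),
                   ["parent", "grandchild"])
print(grandparent)  # [{'parent': 'ann', 'grandchild': 'cid'}]
```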
