  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
291

Answering Deep Queries Specified in Natural Language with Respect to a Frame Based Knowledge Base and Developing Related Natural Language Understanding Components

January 2015
abstract: Question answering has been under active research for decades, but it has recently taken the spotlight following IBM Watson's success on Jeopardy! and the spread of digital assistants such as Apple's Siri, Google Now, and Microsoft Cortana to every smartphone and browser. However, most question-answering research targets factual questions rather than deep ones such as ``How'' and ``Why'' questions. In this dissertation, I take a different approach to this problem: the answers to deep questions need to be formally defined before they can be found. Because these answers must be defined with respect to something more structured than raw natural language text, I define Knowledge Description Graphs (KDGs), a graphical structure containing information about events, entities, and classes. I then propose formulations and algorithms to construct KDGs from a frame-based knowledge base, define the answers to various ``How'' and ``Why'' questions with respect to KDGs, and show how to obtain those answers from KDGs using Answer Set Programming. Moreover, I discuss how to derive missing information when constructing KDGs from an under-specified knowledge base, and how to answer many factual question types with respect to the knowledge base. Having defined the answers to various questions with respect to a knowledge base, I extend this research to specify deep questions and the knowledge base in natural language text, and to generate natural language text from those specifications. Toward these goals, I developed NL2KR, a system that helps translate natural language into formal language. I show NL2KR's use in translating ``How'' and ``Why'' questions and in generating simple natural language sentences from natural language KDG specifications. Finally, I discuss applications of the components I developed in Natural Language Understanding. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2015
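The abstract does not spell out the KDG data structure; as a rough illustration only, a graph with typed nodes (events, entities, classes) and labeled edges, over which a ``Why'' answer is read off a hypothetical `cause` edge, might be sketched like this (all names are invented for the example, and this is not the dissertation's actual formalism):

```python
from collections import defaultdict

class KDG:
    """Toy sketch of a Knowledge Description Graph: typed nodes
    (events, entities, classes) connected by labeled edges."""
    def __init__(self):
        self.node_type = {}            # node name -> "event" | "entity" | "class"
        self.edges = defaultdict(set)  # (source, label) -> set of targets

    def add_node(self, name, ntype):
        self.node_type[name] = ntype

    def add_edge(self, src, label, dst):
        self.edges[(src, label)].add(dst)

    def why(self, event):
        # A "Why" answer here is simply the set of events reachable
        # via a hypothetical "cause" edge.
        return sorted(self.edges[(event, "cause")])

g = KDG()
g.add_node("rain", "event")
g.add_node("wet_street", "event")
g.add_edge("wet_street", "cause", "rain")
print(g.why("wet_street"))  # ['rain']
```

In the dissertation itself the answers are extracted with Answer Set Programming rather than Python traversal; the sketch only shows the kind of structure a graph-based answer definition presupposes.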
292

Improving AI Planning by Using Extensible Components

January 2016
abstract: Despite incremental improvements over decades, academic planning solutions see relatively little use in many industrial domains, even though planning paradigms are relevant to those problems. This work observes four shortfalls of existing academic solutions that contribute to this lack of adoption. To address these shortfalls, it defines model-independent semantics for planning and introduces an extensible planning library. This library is shown to produce feasible results on an existing benchmark domain, to overcome the usual modeling limitations of traditional planners, and to accommodate domain-dependent knowledge about the problem structure within the planning process. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2016
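As a hedged illustration of the classical planning paradigm the abstract refers to (not the extensible library it introduces), a minimal breadth-first forward-search planner over STRIPS-style operators can be sketched as follows; operator names and fluents are invented:

```python
from collections import deque

def plan(initial, goal, operators):
    """Breadth-first forward search over STRIPS-style operators.
    Each operator is (name, preconditions, add_list, delete_list);
    states are frozensets of fluents."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:            # goal fluents all hold
            return steps
        for name, pre, add, delete in operators:
            if pre <= state:         # operator applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                      # goal unreachable

ops = [
    ("open_door", frozenset({"at_door"}), frozenset({"door_open"}), frozenset()),
    ("walk", frozenset({"door_open"}), frozenset({"inside"}), frozenset({"at_door"})),
]
print(plan({"at_door"}, frozenset({"inside"}), ops))  # ['open_door', 'walk']
```

The shortfalls the dissertation targets are precisely the rigidity of fixed state/operator encodings like this one; the sketch marks the baseline such an extensible library moves beyond.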
293

Representing and Reasoning about Dynamic Multi-Agent Domains: An Action Language Approach

January 2018
abstract: Reasoning about actions forms the basis of many tasks, such as prediction, planning, and diagnosis in a dynamic domain. Within the reasoning-about-actions community, a broad class of languages, called action languages, has been developed, together with a methodology for their use in representing and reasoning about dynamic domains. With a few notable exceptions, the focus of these efforts has largely centered on single-agent systems. Agents rarely operate in a vacuum, however, and almost in parallel, substantial work has been done within the dynamic epistemic logic community towards understanding how the actions of an agent may affect not just its own knowledge and beliefs, but those of its fellow agents as well. What is less understood by both communities is how to represent and reason about both the direct and indirect effects of both ontic and epistemic actions within a multi-agent setting. This dissertation presents ongoing research towards a framework for representing and reasoning about dynamic multi-agent domains involving both classes of actions. The contributions of this work are as follows: the formulation of a precise mathematical model of a dynamic multi-agent domain based on the notion of a transition diagram; the development of the multi-agent action languages mA+ and mAL based upon this model, as well as preliminary investigations of their properties and implementations via logic programming under the answer set semantics; precise formulations of the temporal projection and planning problems within a multi-agent context; and an investigation of the application of the proposed approach to the representation of, and reasoning about, scenarios involving the modalities of knowledge and belief. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2018
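To make the ontic/epistemic distinction concrete, here is a toy transition function, not the mA+/mAL semantics itself: ontic actions change the world state, while a (truthful, public) announcement changes only the agents' belief sets. All names are illustrative:

```python
def transition(state, beliefs, action):
    """Toy two-class action semantics. state: set of fluents;
    beliefs: agent name -> set of fluents the agent believes;
    action: ("ontic", (add, remove)) or ("announce", fluent)."""
    kind, payload = action
    if kind == "ontic":
        add, remove = payload
        return (state - remove) | add, beliefs       # world changes, beliefs don't
    if kind == "announce":
        fluent = payload
        new_beliefs = {agent: bs | {fluent} for agent, bs in beliefs.items()}
        return state, new_beliefs                    # beliefs change, world doesn't
    raise ValueError("unknown action kind: " + kind)

state = {"door_open"}
beliefs = {"a": set(), "b": set()}
state, beliefs = transition(state, beliefs, ("announce", "door_open"))
print(beliefs["a"])  # {'door_open'}
```

Real multi-agent action languages also handle indirect effects, private observation, and nested beliefs (beliefs about beliefs), which this flat sketch deliberately omits.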
294

Immersion dans des documents scientifiques et techniques : unités, modèles théoriques et processus / Immersion in scientific and technical documents : units, theoretical models and processes

Andreani, Vanessa, 23 September 2011
This thesis addresses the issue of accessing scientific and technical information conveyed by large document collections. To enable users to find the information relevant to them, we worked on a model meeting the requirement of flexibility imposed by our industrial application context; to do so, we postulated the necessity of segmenting information from documents into ontological facets. The resulting model enables documentary immersion through three types of complementary processes: endogenous processes (exploiting the corpus to analyze the corpus), exogenous processes (using external resources), and anthropogenous processes (in which the user's skills are treated as a resource). All three grant the user a central role in the system, as an interpreting agent and as a creator of knowledge, provided that the user is placed in an industrial or specialised context.
295

Mapeamento sistemático sobre o uso de ontologias em informática médica / A systematic mapping study on the use of ontologies in medical informatics

Mota, Moises Roberto de Araujo, 06 July 2013
The number of studies on the use of ontologies in medical informatics has grown over the years, showing the interest in developing semantically grounded technologies for this new area of science. However, little has been documented about how the area has developed, which hampers the creation of relevant projects in medical informatics, since it is not possible to map the main research opportunities, gaps, and needs of the area. This work therefore maps the overall development of the area, specifically the use of ontologies in medical informatics, in order to highlight and help fill those needs. To that end, we used the systematic research methodology known as systematic mapping, which follows a clearly defined, transparent, and rigorous research protocol, enabling the study to be evaluated, validated, and reproduced by the interested scientific community. In this way, the current knowledge found in the literature on the use of ontologies in medical informatics was gathered. Automatic and manual searches returned 23,788 studies related to this research area. After a rigorous selection, this number dropped to 511 relevant papers, on which the deeper analyses for the proposed systematic mapping were performed. Seven characteristics of these studies were identified, which allowed the main growth trends of the area as a whole to be observed. The quality assessment of the selected works supports the conclusions of this research, considering that around 95% of them present strong evidence for their results. Despite these results, we found some gaps with respect to the depth of this search. We conclude that the use of ontologies in medical informatics has grown as expected, considering the reuse of ontologies and the integration and interoperability of systems and different ontologies. As for opportunities, we identified a need for methods for the evaluation, validation, correctness, completeness, and maintenance of new or already established ontologies, as well as for applications and studies related to telemedicine, public health, education, robotics, evidence-based research, and financial management, focused on medical informatics.
296

Answer Set Programming and Other Computing Paradigms

January 2013
abstract: Answer Set Programming (ASP) is one of the most prominent and successful knowledge representation paradigms. The success of ASP is due to its expressive non-monotonic modeling language and its efficient computational methods, which originate in the building of propositional satisfiability solvers. The wide adoption of ASP has motivated several extensions of its modeling language to enhance expressivity, such as incorporating aggregates and interfaces with ontologies. Also, to overcome the grounding bottleneck in ASP computation, there is increasing interest in integrating ASP with other computing paradigms, such as Constraint Programming (CP) and Satisfiability Modulo Theories (SMT). Due to the non-monotonic nature of the ASP semantics, such enhancements turned out to be non-trivial, and the existing extensions are not fully satisfactory. We observe that one main reason for the difficulties is rooted in the propositional semantics of ASP, which is limited in handling first-order constructs (such as aggregates and ontologies) and functions (such as constraint variables in CP and SMT) in natural ways. This dissertation presents a unifying view of these extensions by treating them as instances of formulas with generalized quantifiers and intensional functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow generalized quantifiers, which cover aggregates, DL-atoms, constraints, and SMT theory atoms as special cases. Using this unifying framework, we study and relate different extensions of ASP. We also present a tight integration of ASP with SMT, based on which we enhance the action language C+ to handle reasoning about continuous changes. Our framework yields a systematic approach to studying and extending non-monotonic languages. / Dissertation/Thesis / Ph.D. Computer Science 2013
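The stable model semantics underlying ASP can be illustrated with a brute-force checker for small ground normal programs; this is a didactic sketch of the Gelfond-Lifschitz definition, not how ASP solvers actually work:

```python
from itertools import chain, combinations

def answer_sets(rules, atoms):
    """Enumerate stable models of a ground normal program by brute force.
    Each rule is (head, positive_body, negative_body), with bodies as sets."""
    def reduct_consequences(candidate):
        # Gelfond-Lifschitz reduct: drop rules whose negative body intersects
        # the candidate, then compute the least model of the remaining
        # definite rules by fixpoint iteration.
        definite = [(h, pos) for h, pos, neg in rules if not (neg & candidate)]
        model = set()
        changed = True
        while changed:
            changed = False
            for head, pos in definite:
                if pos <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
    # A candidate is stable iff it equals the least model of its own reduct.
    return [set(s) for s in subsets if reduct_consequences(set(s)) == set(s)]

# The classic even loop:  p :- not q.   q :- not p.
rules = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(answer_sets(rules, ["p", "q"]))  # [{'p'}, {'q'}]
```

The two stable models of the even loop show the non-monotonic character the abstract emphasizes; production solvers like clingo use grounding plus conflict-driven search rather than subset enumeration.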
297

UMA ONTOLOGIA PARA REPRESENTAÇÃO DO CONHECIMENTO DO DOMINIO DA QUIMICA ANALITICA COM ADIÇÃO DE NOVOS AGENTES E FUNCIONALIDADES PARA ANÁLISE E MONITORAMENTO DE COMBUSTIVEIS. / AN ONTOLOGY FOR REPRESENTING KNOWLEDGE OF THE ANALYTICAL CHEMISTRY DOMAIN, WITH NEW AGENTS AND FEATURES FOR FUEL ANALYSIS AND MONITORING.

Corrêa, Paulo José Melo Gomes, 14 September 2009
This research spans the areas of electrical engineering and the analytical chemistry of petroleum and biofuels, aiming to improve the chemical analysis steps of the Fuel Quality Monitoring Program (PMQC) instituted by the Brazilian National Agency of Petroleum and Biofuels (ANP) and executed in the state of Maranhão by the Laboratory of Analysis and Research in Analytical Chemistry of Petroleum and Biofuels (LAPQAP). To this end, improvements are proposed to the intelligent fuel quality monitoring and control system (SIMCQC) used by the laboratory to support decision making. The addition of new agents to the multi-agent society is also proposed, with the goal of increasing the number of automated chemical analysis techniques in SIMCQC, along with further functionality. For the communication mechanism, a fuel ontology modeled with the Ontology Development 101 methodology is presented; its purpose is to represent the domain knowledge of chemical analysis and to supply a content language for communication among the agents of the society. To reach these objectives, we used artificial intelligence techniques, the JESS (Java Expert System Shell) inference engine, ontology technology to represent the domain knowledge and serve as the vocabulary of the communication process, the JADE (Java Agent DEvelopment Framework) middleware as the execution environment, and the PASSI multi-agent development methodology for system modeling.
298

Cooperative knowledge discovery from cooperative activity : application on design projects / Découverte de connaissance coopérative à partir de l'activité coopérative : application sur des projets de conception

Dai, Xinghang, 17 July 2015
Modern design projects tend to be more and more complex and multi-disciplinary in terms of both organization and process, and are carried out collaboratively. The cooperative knowledge (related to negotiation and organization) produced in such projects is usually lost. Knowledge management enables a company to reuse its experience in order to improve organizational learning, and several knowledge engineering methods have been defined to capture expert knowledge. However, no existing approach succeeds in capturing cooperative knowledge, due to its particular features: cooperative knowledge is produced in cooperative activities, and no single actor can claim to explain the cooperative activity globally and without personal bias. How to reuse the cooperative knowledge of design projects is therefore a new challenge in knowledge management. In this thesis, "Knowledge discovery from cooperative activities, application on design projects", the term "knowledge discovery" is redefined according to knowledge engineering approaches and guided by the spirit of knowledge management. The nature of cooperative knowledge is studied, and a novel classification approach is proposed to discover knowledge from cooperative activities. The approach is further elaborated in the context of design projects and demonstrated on examples from software engineering, eco-design, and mechanical design.
299

Semantics and Implementation of Knowledge Operators in Approximate Databases / Semantik och implementation för kunskapsoperatorer i approximativa databaser

Sjö, Kristoffer, January 2004
In order that epistemic formulas might be coupled with approximate databases, it is necessary to have a well-defined semantics for the knowledge operator and a method of reducing epistemic formulas to approximate formulas. In this thesis, two possible definitions of a semantics for the knowledge operator are proposed for use together with an approximate relational database: one based upon logical entailment (the dominant notion of knowledge in the literature), for which sound and complete rules for reduction to approximate formulas are explored and found not to be applicable to all formulas; and one based upon algorithmic computability (in order to be practically feasible), for which the correspondence to the entailment-based operator on the one hand, and to the deductive capability of the agent on the other, is explored. Also, an inductively defined semantics for a "know whether" operator is proposed and tested. Finally, an algorithm implementing the above is proposed, implemented in Java, and tested.
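One way to picture an entailment-based knowledge operator over an approximate relation, sketched here with invented names rather than the thesis's actual formalism: a tuple is known to hold only if it lies in the lower approximation, i.e. it holds in every completion of the database:

```python
class ApproxRelation:
    """Toy approximate relation: a lower bound (tuples certainly in the
    relation) and an upper bound (tuples possibly in it)."""
    def __init__(self, lower, upper):
        assert lower <= upper, "lower approximation must be within upper"
        self.lower, self.upper = lower, upper

    def knows(self, t):
        # Entailment-style K-operator: t is known iff it is in the
        # lower approximation, hence true in every completion.
        return t in self.lower

    def knows_whether(self, t):
        # "Know whether": t is certainly in, or certainly out
        # (outside the upper approximation).
        return t in self.lower or t not in self.upper

r = ApproxRelation(lower={("alice",)}, upper={("alice",), ("bob",)})
print(r.knows(("alice",)), r.knows_whether(("bob",)), r.knows_whether(("carol",)))
# True False True
```

The interesting case is ("bob",): it is possible but not certain, so the agent neither knows it nor knows whether it holds, which is exactly the gap the thesis's reduction rules must handle.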
300

Extending the Stream Reasoning in DyKnow with Spatial Reasoning in RCC-8

Lazarovski, Daniel, January 2012
Autonomous systems require a lot of information about the environment in which they operate in order to perform different high-level tasks. The information is made available through various sources, such as remote and on-board sensors, databases, GIS, the Internet, etc. The sensory input in particular becomes available incrementally and can be represented as streams. High-level tasks often require some sort of reasoning over the input data; however, raw streaming input is often not suitable for the higher-level representations needed for reasoning. DyKnow is a stream processing framework that provides functionalities to represent the knowledge needed for reasoning from streaming inputs. DyKnow has been used within a platform for task planning and execution monitoring for UAVs, where execution monitoring is performed using formula progression with monitor rules specified as temporal logic formulas. In this thesis we present an analysis of providing spatio-temporal functionalities to the formula progressor and extend formula progression with spatial reasoning in RCC-8. The resulting implementation is capable of evaluating spatio-temporal logic formulas using progression over streaming data. In addition, a ROS implementation of the formula progressor is presented as part of a spatio-temporal stream reasoning architecture in ROS. / Collaborative Unmanned Aircraft Systems (CUAS)
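Formula progression over a stream can be sketched for a tiny propositional LTL fragment; this toy monitor, with an invented tuple encoding of formulas and no spatial (RCC-8) atoms, rewrites the formula at each new state and collapses to False as soon as a monitor rule is violated:

```python
def conj(a, b):
    if a is False or b is False: return False
    if a is True: return b
    if b is True: return a
    return ("and", a, b)

def disj(a, b):
    if a is True or b is True: return True
    if a is False: return b
    if b is False: return a
    return ("or", a, b)

def progress(f, state):
    """One progression step: given the current state (a set of true atoms),
    rewrite f into the obligation that must hold on the rest of the stream."""
    if f is True or f is False:
        return f
    kind = f[0]
    if kind == "atom":
        return f[1] in state
    if kind == "and":
        return conj(progress(f[1], state), progress(f[2], state))
    if kind == "or":
        return disj(progress(f[1], state), progress(f[2], state))
    if kind == "always":      # G f  ==  f and next G f
        return conj(progress(f[1], state), f)
    if kind == "eventually":  # F f  ==  f or next F f
        return disj(progress(f[1], state), f)
    raise ValueError("unknown formula kind: " + kind)

# Monitor "always altitude_ok" over a stream; it fails at the third sample.
f = ("always", ("atom", "altitude_ok"))
for state in [{"altitude_ok"}, {"altitude_ok"}, set()]:
    f = progress(f, state)
print(f)  # False
```

The thesis's extension replaces the atomic propositions with RCC-8 spatial relations evaluated over streaming data, but the progression loop itself has this shape.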
