  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Partitioning semantics for entity resolution and link repairs in bibliographic knowledge bases / Sémantique de partitionnement pour l'identification d'entités et réparation de liens dans une base de connaissances bibliographiques

Guizol, Léa 21 November 2014 (has links)
We propose a qualitative entity resolution approach for repairing links in a bibliographic knowledge base. Our research question is: "How can erroneous links in a bibliographic knowledge base be detected and repaired using qualitative methods?" The proposed approach has two major parts. The first contribution is a partitioning semantics based on symbolic criteria, used to detect erroneous links. The second is a repair algorithm that restores link quality. We implemented our approach, proposed a qualitative and quantitative evaluation of the partitioning semantics, and proved properties of the algorithms used for link repair.
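The partitioning idea can be illustrated with a small sketch. This is a hypothetical toy, not the thesis's actual semantics: the criteria, record fields, and diagnosis rule below are all invented. Records are grouped into blocks on which every symbolic criterion agrees, and an authority link becomes suspect when records attributed to the same person fall into different blocks.

```python
# Toy partition-based link diagnosis; all names and criteria are invented.

def partition(records, criteria):
    """Group records: a record joins a block iff every criterion agrees
    with the block's first member."""
    groups = []
    for rec in records:
        for group in groups:
            if all(crit(rec, group[0]) for crit in criteria):
                group.append(rec)
                break
        else:
            groups.append([rec])
    return groups

def suspect_links(records, criteria):
    """An authority link is suspect if records pointing to the same person
    identifier end up in different partition blocks."""
    groups = partition(records, criteria)
    block_of = {id(rec): i for i, grp in enumerate(groups) for rec in grp}
    by_person = {}
    for rec in records:
        by_person.setdefault(rec["person_id"], set()).add(block_of[id(rec)])
    return {pid for pid, blocks in by_person.items() if len(blocks) > 1}

same_name = lambda a, b: a["name"] == b["name"]
same_domain = lambda a, b: a["domain"] == b["domain"]

records = [
    {"person_id": "aut1", "name": "L. Guizol", "domain": "CS"},
    {"person_id": "aut1", "name": "L. Guizol", "domain": "CS"},
    {"person_id": "aut1", "name": "L. Guizol", "domain": "Biology"},  # likely someone else
]
print(suspect_links(records, [same_name, same_domain]))  # {'aut1'}
```

Here the third record disagrees with the others on the domain criterion, so the shared `aut1` link is flagged for repair.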
112

The Perceived Impact of Technology-Based Informal Learning on Membership Organizations

Unknown Date (has links)
Educational leadership goes beyond the boundaries of the classroom; the skills needed by talent development professionals in business closely align with those needed in traditional educational leadership positions, as both are responsible for the development and growth of others. Traditionally, the role of professional membership associations such as the Association for Talent Development (ATD, formerly known as the American Society for Training and Development), the group dedicated to individuals in the field of workplace learning and development, is to provide learning opportunities, set standards, identify best practices in their respective fields, and allow members to network with other professionals who share their interests. However, with the rapid increase in the use of technology and social networking, individuals can now access a vast amount of information for free online via tools such as LinkedIn, Facebook, Google, and YouTube. Where has this left organizations that typically charged for access to this type of information in the past? Surveys and interviews were conducted with ATD members in this mixed-methods study to answer the following research questions:
1. What are the perceptions of Association for Talent Development (ATD) members regarding the effect of technology-based informal learning on the role of ATD?
2. How do ATD members utilize technology for informal learning?
3. Are there factors such as gender, age, ethnicity, educational level, or length of time in the field that predict a member's likelihood to utilize technology for informal learning?
4. Are there certain ATD competency areas for which informal learning is preferred over non-formal or formal learning?
The significance of the study includes identifying how the Association for Talent Development (ATD, formerly ASTD) can continue to support professionals in our constantly evolving technological society, as well as advancing the field by contributing research that connects technology-based informal learning to the roles of membership organizations. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2015. / FAU Electronic Theses and Dissertations Collection
113

Análise de pós-design para aplicações de planejamento em IA. / Post-design analysis for AI planning applications.

Vaquero, Tiago Stegun 22 January 2011 (has links)
Since the end of the 1990s there has been increasing interest in applying AI planning techniques to solve real-life problems. In addition to the characteristics of academic problems, such as the need to reason about actions, real-life problems require detailed knowledge elicitation, engineering, and management. Such applications call for a systematic design process in which Knowledge Engineering and Requirements Engineering techniques and tools play a fundamental role.
Research on Knowledge Engineering for planning and scheduling has created tools and techniques to support the design process of planning domain models. However, given the natural incompleteness of knowledge, practical experience in real applications such as space exploration has shown that, even with a disciplined design process, requirements from different viewpoints (e.g. stakeholders, experts, users) still emerge after plan generation, analysis, and execution. The central thesis of this dissertation is that a post-design analysis phase in the development of AI planning applications leads to richer knowledge models and, consequently, to higher planner performance and higher-quality plans. We investigate how hidden knowledge and requirements can be acquired and reused during a plan analysis phase that follows model design, and how they affect planning performance. We describe a post-design framework called postDAM that combines (1) a knowledge engineering tool for requirements acquisition and plan evaluation, (2) a virtual prototyping environment for the analysis and simulation of plans, (3) a database system for storing plan evaluations, and (4) an ontological reasoning system for knowledge reuse and discovery. Our framework demonstrates that post-design analysis supports the discovery of missing requirements and guides the model refinement cycle. We present three case studies using benchmark domains and eight state-of-the-art planners. Our results demonstrate that significant improvements in plan quality and an increase in planning speed of up to three orders of magnitude can be achieved through a careful post-design process. We also demonstrate that rationales captured from users during plan evaluations can be useful and reusable in further plan evaluations and in new application designs. We argue that such a post-design process is critical for deploying planning technology in real-world applications.
To our knowledge, this is the first work to investigate post-design analysis for AI planning applications.
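The post-design cycle described above (evaluate generated plans against requirements acquired from users, then fold the newly discovered requirements back into the model) can be sketched as follows. This is a hedged illustration in the spirit of postDAM; the plan format, requirement checks, and `refine` step are invented for the example, not the framework's actual interfaces.

```python
# Toy post-design evaluation loop; data structures are invented.

def evaluate_plan(plan, requirements):
    """Return the names of acquired requirements that the plan violates."""
    return [name for name, check in requirements.items() if not check(plan)]

def refine(model, violated):
    """Fold newly discovered requirements back into the domain model."""
    model = dict(model)
    model["constraints"] = model["constraints"] | set(violated)
    return model

# A toy "plan" is just a list of (action, duration) pairs.
plan = [("load", 2), ("drive", 5), ("unload", 2)]

requirements = {
    # A hidden requirement that only surfaced when users inspected plans:
    "total-duration<=8": lambda p: sum(d for _, d in p) <= 8,
    "ends-with-unload": lambda p: p[-1][0] == "unload",
}

model = {"actions": {"load", "drive", "unload"}, "constraints": set()}
violated = evaluate_plan(plan, requirements)
model = refine(model, violated)
print(violated)               # ['total-duration<=8']
print(model["constraints"])   # {'total-duration<=8'}
```

The point of the sketch is the feedback loop: each evaluation round can surface requirements that were not in the original model, and the refined model then constrains the next round of plan generation.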
114

A predicated network formalism for commonsense reasoning.

January 2000 (has links)
Chiu, Yiu Man Edmund. / Thesis submitted in: December 1999. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2000. / Includes bibliographical references (leaves 239-248). / Abstracts in English and Chinese.
Contents:
Abstract --- p.i
Acknowledgments --- p.iii
Chapter 1: Introduction --- p.1
  1.1 The Beginning Story --- p.2
  1.2 Background --- p.3
    1.2.1 History of Nonmonotonic Reasoning --- p.3
    1.2.2 Formalizations of Nonmonotonic Reasoning --- p.6
    1.2.3 Belief Revision --- p.13
    1.2.4 Network Representation of Knowledge --- p.17
    1.2.5 Reference from Logic Programming --- p.21
    1.2.6 Recent Work on Network-type Automatic Reasoning Systems --- p.22
  1.3 A Novel Inference Network Approach --- p.23
  1.4 Objectives --- p.23
  1.5 Organization of the Thesis --- p.24
Chapter 2: The Predicate Inference Network PIN --- p.25
  2.1 Preliminary Terms --- p.26
  2.2 Overall Structure --- p.27
  2.3 Object Layer --- p.29
    2.3.1 Virtual Object --- p.31
  2.4 Predicate Layer --- p.33
    2.4.1 Node Values --- p.34
    2.4.2 Information Source --- p.35
    2.4.3 Belief State --- p.36
    2.4.4 Predicates --- p.37
    2.4.5 Prototypical Predicates --- p.37
    2.4.6 Multiple Inputs for a Single Belief --- p.39
    2.4.7 External Program Call --- p.39
  2.5 Variable Layer --- p.40
  2.6 Inter-Layer Links --- p.42
  2.7 Chapter Summary --- p.43
Chapter 3: Computation for PIN --- p.44
  3.1 Computation Functions for Propagation --- p.45
    3.1.1 Computational Functions for Combinative Links --- p.45
    3.1.2 Computational Functions for Alternative Links --- p.49
  3.2 Applying the Computation Functions --- p.52
  3.3 Relations Represented in PIN --- p.55
    3.3.1 Relations Represented by Combinative Links --- p.56
    3.3.2 Relations Represented by Alternative Links --- p.59
  3.4 Chapter Summary --- p.61
Chapter 4: Dynamic Knowledge Update --- p.62
  4.1 Operations for Knowledge Update --- p.63
  4.2 Logical Expression --- p.63
  4.3 Applicability of Operators --- p.64
  4.4 Add Operation --- p.65
    4.4.1 Add a fully instantiated single predicate proposition with no virtual object --- p.66
    4.4.2 Add a fully instantiated pure disjunction --- p.68
    4.4.3 Add a fully instantiated expression which is a conjunction --- p.71
    4.4.4 Add a human biased relation --- p.74
    4.4.5 Add a single predicate expression with virtual objects --- p.76
    4.4.6 Add an IF-THEN rule --- p.80
  4.5 Remove Operation --- p.88
    4.5.1 Remove a Belief --- p.88
    4.5.2 Remove a Rule --- p.91
  4.6 Revise Operation --- p.94
    4.6.1 Revise a Belief --- p.94
    4.6.2 Revise a Rule --- p.96
  4.7 Consistency Maintenance --- p.97
    4.7.1 Logical Suppression --- p.98
    4.7.2 Example on Handling Inconsistent Information --- p.99
  4.8 Chapter Summary --- p.102
Chapter 5: Knowledge Query --- p.103
  5.1 Domains of Quantification --- p.104
  5.2 Reasoning through Recursive Rules --- p.109
    5.2.1 Infinite Looping Control --- p.110
    5.2.2 Proof of the finite termination of recursive rules --- p.111
  5.3 Query Functions --- p.117
  5.4 Type I Queries --- p.119
    5.4.1 Querying a Simple Single Predicate Proposition (Type I) --- p.122
    5.4.2 Querying a Belief with Logical Connective(s) (Type I) --- p.128
  5.5 Type II Queries --- p.132
    5.5.1 Querying Single Predicate Expressions (Type II) --- p.134
    5.5.2 Querying an Expression with Logical Connectives (Type II) --- p.143
  5.6 Querying an Expression with Virtual Objects --- p.152
    5.6.1 Type I Queries Involving Virtual Objects --- p.152
    5.6.2 Type II Queries Involving Virtual Objects --- p.156
  5.7 Chapter Summary --- p.157
Chapter 6: Uniqueness and Finite Termination --- p.159
  6.1 Proof Structure --- p.160
  6.2 Proof for Completeness and Finite Termination of Domain Searching Procedure --- p.161
  6.3 Proofs for Type I Queries --- p.167
    6.3.1 Proof for Single Predicate Expressions --- p.167
    6.3.2 Proof of Type I Queries on Expressions with Logical Connectives --- p.172
    6.3.3 General Proof for Type I Queries --- p.174
  6.4 Proofs for Type II Queries --- p.175
    6.4.1 Proof for Type II Queries on Single Predicate Expressions --- p.176
    6.4.2 Proof for Type II Queries on Disjunctions --- p.178
    6.4.3 Proof for Type II Queries on Conjunctions --- p.179
    6.4.4 General Proof for Type II Queries --- p.181
  6.5 Proof for Queries Involving Virtual Objects --- p.182
  6.6 Uniqueness and Finite Termination of PIN Queries --- p.183
  6.7 Chapter Summary --- p.184
Chapter 7: Lifschitz's Benchmark Problems --- p.185
  7.1 Structure --- p.186
  7.2 Default Reasoning --- p.186
    7.2.1 Basic Default Reasoning --- p.186
    7.2.2 Default Reasoning with Irrelevant Information --- p.187
    7.2.3 Default Reasoning with Several Defaults --- p.188
    7.2.4 Default Reasoning with a Disabled Default --- p.190
    7.2.5 Default Reasoning in Open Domain --- p.191
    7.2.6 Reasoning about Unknown Exceptions I --- p.193
    7.2.7 Reasoning about Unknown Exceptions II --- p.194
    7.2.8 Reasoning about Unknown Exceptions III --- p.196
    7.2.9 Priorities between Defaults --- p.198
    7.2.10 Priorities between Instances of a Default --- p.199
    7.2.11 Reasoning about Priorities --- p.199
  7.3 Inheritance --- p.200
    7.3.1 Linear Inheritance --- p.200
    7.3.2 Tree-Structured Inheritance --- p.202
    7.3.3 One-Step Multiple Inheritance --- p.203
    7.3.4 Multiple Inheritance --- p.204
  7.4 Uniqueness of Names --- p.205
    7.4.1 Unique Names Hypothesis for Objects --- p.205
    7.4.2 Unique Names Hypothesis for Functions --- p.206
  7.5 Reasoning about Action --- p.206
  7.6 Autoepistemic Reasoning --- p.206
    7.6.1 Basic Autoepistemic Reasoning --- p.206
    7.6.2 Autoepistemic Reasoning with Incomplete Information --- p.207
    7.6.3 Autoepistemic Reasoning with Open Domain --- p.207
    7.6.4 Autoepistemic Default Reasoning --- p.208
Chapter 8: Comparison with PROLOG --- p.214
  8.1 Introduction of PROLOG --- p.215
    8.1.1 Brief History --- p.215
    8.1.2 Structure and Inference --- p.215
    8.1.3 Why Compare PIN with Prolog --- p.216
  8.2 Representation Power --- p.216
    8.2.1 Closed World Assumption and Negation as Failure --- p.216
    8.2.2 Horn Clauses --- p.217
    8.2.3 Quantification --- p.218
    8.2.4 Built-in Functions --- p.219
    8.2.5 Other Representation Issues --- p.220
  8.3 Inference and Query Processing --- p.220
    8.3.1 Unification --- p.221
    8.3.2 Resolution --- p.222
    8.3.3 Computation Efficiency --- p.225
  8.4 Knowledge Updating and Consistency Issues --- p.227
    8.4.1 PIN and AGM Logic --- p.228
    8.4.2 Knowledge Merging --- p.229
  8.5 Chapter Summary --- p.229
Chapter 9: Conclusion and Discussion --- p.230
  9.1 Conclusion --- p.231
    9.1.1 General Structure --- p.231
    9.1.2 Representation Power --- p.231
    9.1.3 Inference --- p.232
    9.1.4 Dynamic Update and Consistency --- p.233
    9.1.5 Soundness and Completeness Versus Efficiency --- p.233
  9.2 Discussion --- p.234
    9.2.1 Different Selection Criteria --- p.234
    9.2.2 Link Order --- p.235
    9.2.3 Inheritance Reasoning --- p.236
  9.3 Future Work --- p.237
    9.3.1 Implementation --- p.237
    9.3.2 Application --- p.237
    9.3.3 Probabilistic and Fuzzy PIN --- p.238
    9.3.4 Temporal Reasoning --- p.238
Bibliography --- p.239
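The thesis's contents describe value propagation over a network with combinative and alternative links. A toy analogue of that idea can be sketched as follows; the min/max combination functions and the numeric belief values are placeholders invented for illustration, not the PIN computational functions defined in the thesis.

```python
# Toy fixed-point propagation over an inference network with two link kinds.

def propagate(nodes, links):
    """links: (sources, target, kind). A 'combinative' link combines
    evidence that must hold jointly (min); an 'alternative' link takes
    the strongest single source (max). Iterate to a fixed point."""
    changed = True
    while changed:
        changed = False
        for sources, target, kind in links:
            vals = [nodes[s] for s in sources]
            new = min(vals) if kind == "combinative" else max(vals)
            if new != nodes.get(target):
                nodes[target] = new
                changed = True
    return nodes

nodes = {"bird(tweety)": 1.0, "has_wings(tweety)": 0.8, "penguin(tweety)": 0.0}
links = [
    (["bird(tweety)", "has_wings(tweety)"], "flies(tweety)", "combinative"),
    (["flies(tweety)", "penguin(tweety)"], "moves(tweety)", "alternative"),
]
print(propagate(nodes, links)["flies(tweety)"])  # 0.8
```

The sketch only conveys the structural idea of layered nodes whose beliefs are recomputed from their incoming links until nothing changes.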
115

Software Engineering Using design RATionale

Burge, Janet E 02 May 2005 (has links)
For a number of years, members of the Artificial Intelligence (AI) in Design community have studied Design Rationale (DR), the reasons behind decisions made while designing. DR is invaluable as an aid for revising, maintaining, documenting, evaluating, and learning the design. The presence of DR would be especially valuable for software maintenance. The rationale would provide insight into why the system is the way it is by giving the reasons behind the design decisions, could help to indicate where changes might be needed during maintenance if design goals change, and help the maintainer avoid repeating earlier mistakes by explicitly documenting alternatives that were tried earlier that did not work. Unfortunately, while everyone agrees that design rationale is useful, it is still not used enough in practice. Possible reasons for this are that the uses proposed for rationale are not compelling enough to justify the effort involved in its capture and that there are few systems available to support rationale use and capture. We have addressed this problem by developing and evaluating a system called SEURAT (Software Engineering Using RATionale) which integrates with a software development environment and goes beyond mere presentation of rationale by inferencing over it to check for completeness and consistency in the reasoning used while a software system is being developed and maintained. We feel that the SEURAT system will be invaluable during development and maintenance of software systems. During development, SEURAT will help the developers ensure that the systems they build are complete and consistent. During maintenance, SEURAT will provide insight into the reasons behind the choices made by the developers during design and implementation. 
The benefits of DR are clear but only with appropriate tool support, such as that provided by SEURAT, can DR live up to its full potential as an aid for revising, maintaining, and documenting the software design and implementation.
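Inferencing over rationale for completeness and consistency, as the abstract describes, can be illustrated with a small sketch. The schema (decisions holding alternatives with pro/con arguments and a selected flag) and the scoring rule are assumptions made for the example, not SEURAT's actual rationale model.

```python
# Toy rationale checker; schema and scoring are invented, not SEURAT's.

def check_rationale(decisions):
    """Flag incomplete or questionable decisions: no alternative selected,
    or a selected alternative whose support (pros minus cons) is not maximal."""
    issues = []
    for dec in decisions:
        score = {alt: len(pros) - len(cons)
                 for alt, (pros, cons, _) in dec["alternatives"].items()}
        selected = [a for a, (_, _, sel) in dec["alternatives"].items() if sel]
        if not selected:
            issues.append((dec["name"], "no alternative selected"))
        elif score[selected[0]] < max(score.values()):
            issues.append((dec["name"], "better-supported alternative exists"))
    return issues

decisions = [
    {"name": "storage", "alternatives": {
        "sql":   (["mature", "tooling"], [], False),
        "files": (["simple"], ["no queries"], True),   # selected but weaker
    }},
    {"name": "cache", "alternatives": {"lru": (["fast"], [], False)}},
]
print(check_rationale(decisions))
```

A maintainer reading this output learns both where reasoning is unfinished and where the recorded arguments contradict the choice that was made, which is exactly the kind of signal the abstract argues makes captured rationale worth its cost.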
116

Visual soccer match analysis

Machado, Vinícius Fritzen January 2016 (has links)
Soccer is a fascinating sport that captures the attention of millions of people around the world. Professional soccer teams, as well as the broadcasting media, have a deep interest in the analysis of soccer matches. Statistical summaries are the most widely used way to describe a soccer match; however, they often fail to capture the evolution of the game and the changes of strategy that occur. In this work, we present the Visual Soccer Match Analysis (VSMA) system, a tool for understanding the different aspects associated with the evolution of a soccer match. Our tool receives as input the coordinates of each player throughout the match, together with the related events. We present a visual design that makes it possible to quickly identify relevant patterns in the match. Our approach was developed in conjunction with colleagues from the physical education field with expertise in soccer analysis. We validated the system's utility using data from several matches, together with expert evaluations.
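Given per-player coordinates per frame, the kind of signal such a visual design might encode can be sketched as a per-frame team summary. The input format (a list of frames, each a list of `(x, y)` player positions) and the choice of centroid plus spread are assumptions for illustration, not VSMA's actual pipeline.

```python
# Toy per-frame team summary from player coordinates (invented format).
from math import hypot

def team_summary(frames):
    """For each frame, return the team centroid and its spread (mean
    distance of players to the centroid): two simple signals that track
    tactical compactness over the match."""
    out = []
    for positions in frames:
        xs = [x for x, _ in positions]
        ys = [y for _, y in positions]
        cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
        spread = sum(hypot(x - cx, y - cy) for x, y in positions) / len(positions)
        out.append(((cx, cy), spread))
    return out

frames = [
    [(0.0, 0.0), (10.0, 0.0)],   # stretched line
    [(4.0, 0.0), (6.0, 0.0)],    # compact line
]
for (cx, cy), spread in team_summary(frames):
    print(f"centroid=({cx:.1f}, {cy:.1f}) spread={spread:.1f}")
```

Plotting such a summary over time is one way a tool can expose strategy changes that a single end-of-match statistic hides.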
117

Framework para representação do conhecimento de projeto de produto aplicando o paradigma da orientação a objetos / Framework for representing product design knowledge applying the object oriented paradigm

Barros, Alexandre Monteiro de January 2017 (has links)
The design of complex technical products and systems requires understanding at the system and subsystem level in order to formulate solutions that are efficient and integrated with their context. To support this understanding, design knowledge should be represented at levels of abstraction appropriate to the design phase. The conceptual design phase requires forms of representation that reach a high level of abstraction, allowing the exploration of concepts that lead to creative solutions. The object-oriented paradigm, which is grounded in abstraction, comes from software engineering, but it can also be applied to the design of physical artifacts because it represents real-world elements in a simple, accessible, high-level language. In addition, the object-oriented paradigm supports the reuse of design knowledge thanks to its ability to structure information in a suitable format. The present work proposes a framework for representing product design knowledge using the object-oriented paradigm. First, the conceptual elements of the thesis and their relationships were identified; then the framework model and its method of application were defined. The framework uses a diagrammatic representation language that can evolve from a mind map, with diverse and loosely ordered elements, into a structured network of classes and relationships in a class model. A class model can concentrate knowledge about the design, serving as a general structure that connects and relates the different blocks of information associated with the products and systems being developed. The applicability of the framework was verified by specialists in the design field through the development of a product design at the conceptual level and the completion of an evaluation questionnaire.
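The core idea, abstract design concepts refined into concrete solutions through a class hierarchy, can be shown in a few lines. The `Component`/`EnergySource`/`Battery` example is invented for illustration; it is not taken from the framework itself.

```python
# Invented example: object-oriented abstraction applied to design knowledge.

class Component:
    """Abstract design concept: anything that fulfils a function."""
    def __init__(self, function):
        self.function = function

class EnergySource(Component):
    """More specific abstract concept, still solution-neutral."""
    def __init__(self, capacity_wh):
        super().__init__(function="provide energy")
        self.capacity_wh = capacity_wh

class Battery(EnergySource):
    """Concrete design solution refining the abstract concept."""
    def __init__(self, capacity_wh, chemistry):
        super().__init__(capacity_wh)
        self.chemistry = chemistry

cell = Battery(capacity_wh=50, chemistry="Li-ion")
print(cell.function, cell.capacity_wh, cell.chemistry)
```

Because `Battery` inherits from `EnergySource`, knowledge attached to the abstract concept (here, its function) is reused by every concrete solution, which is the reuse property the abstract emphasizes.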
118

Um arcabouço cognitivamente inspirado para representação de conhecimento e raciocínio / A cognitively inspired framework for knowledge representation and reasoning

Carbonera, Joel Luis January 2016 (has links)
Human beings are able to develop complex knowledge structures that can be used flexibly to deal with the environment in appropriate ways. These knowledge structures constitute a core that supports cognitive processes such as perception, categorization, and planning. Artificial Intelligence, as a research field, aims at developing means of reproducing these cognitive capabilities in artificial agents. For this reason, investigating approaches that allow knowledge to be represented in flexible ways is highly relevant. In order to overcome some typical limitations of the classical theory of knowledge representation, which is adopted by several approaches proposed in Artificial Intelligence, this work proposes a cognitively inspired framework for knowledge representation and reasoning that integrates aspects of three different cognitive theories of how concepts are represented in human cognition: the classical theory, prototype theory, and exemplar theory. The resulting framework supports compositionality, typicality, the representation of atypical instances of concepts, and the representation of the variability of the individuals classified by each concept. Consequently, the proposed framework also supports both logical and similarity-based reasoning. The main contributions of this work are the theoretical conception and formalization of a cognitively inspired framework for knowledge representation and reasoning, a classification reasoning approach that combines logical and similarity-based reasoning over the proposed representation, two approaches for selecting representative exemplars of each concept, and an approach for extracting concept prototypes.
This thesis also presents a system for the automatic interpretation of depositional processes that adopts the proposed framework. Experiments performed on a classification task suggest that the proposed framework provides classifications that are more informative than those provided by a purely classical approach.
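The hybrid classification idea (classical necessary conditions filter candidate concepts, then similarity to prototypes and exemplars ranks the survivors) can be sketched in miniature. The feature dictionaries, overlap similarity, and two-step rule below are assumptions made for the example, not the thesis's formalization.

```python
# Toy hybrid classifier: classical filter + prototype/exemplar similarity.

def similarity(a, b):
    """Simple feature-overlap similarity between attribute dicts."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def classify(instance, concepts):
    # 1. Classical step: keep only concepts whose necessary conditions hold.
    candidates = [c for c in concepts
                  if all(instance.get(k) == v for k, v in c["necessary"].items())]
    # 2. Similarity step: rank survivors by closeness to prototype or any exemplar,
    #    so atypical instances can still match via a stored exemplar.
    best = max(candidates,
               key=lambda c: max(similarity(instance, e)
                                 for e in [c["prototype"]] + c["exemplars"]))
    return best["name"]

concepts = [
    {"name": "bird",
     "necessary": {"animal": True},
     "prototype": {"animal": True, "flies": True, "feathers": True},
     "exemplars": [{"animal": True, "flies": False, "feathers": True}]},  # penguin-like
    {"name": "plane",
     "necessary": {"animal": False},
     "prototype": {"animal": False, "flies": True},
     "exemplars": []},
]
print(classify({"animal": True, "flies": False, "feathers": True}, concepts))  # bird
```

Note how the non-flying instance still classifies as a bird: the stored atypical exemplar carries it, which is the behavior a purely classical (definition-only) approach cannot express.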
119

Représentation sémantique des biomarqueurs d’imagerie dans le domaine médical / Semantic representation of imaging biomarkers in the medical field

Amdouni, Emna 07 December 2017 (has links)
En médecine personnalisée, les mesures et les descriptions radiologiques jouent un rôle important. En particulier, elles facilitent aux cliniciens l'établissement du diagnostic, la prise de décision thérapeutique ainsi que le suivi de la réponse au traitement. On peut citer à titre d'exemple les critères d'évaluation RECIST (en anglais Response Evaluation Criteria in Solid Tumors). De nombreuses études de corrélation radiologie-pathologie montrent que les caractéristiques d'imagerie quantitatives et qualitatives sont associées aux altérations génétiques et à l'expression des gènes. Par conséquent, une gestion appropriée des phénotypes d'imagerie est nécessaire pour faciliter leur utilisation et leur réutilisation dans de multiples études concernant les mesures radiologiques. Dans la littérature, les mesures radiologiques qui caractérisent les processus biologiques des sujets imagés sont appelées biomarqueurs d'imagerie. L'objectif principal de cette thèse est de proposer une conceptualisation ontologique des biomarqueurs d'imagerie afin de rendre leur sens explicite et formel et d'améliorer le reporting structuré des images. La première partie de la thèse présente une ontologie générique qui définit les aspects fondamentaux du concept de biomarqueur d'imagerie, à savoir : les caractéristiques biologiques mesurées, les protocoles de mesure et les rôles des biomarqueurs d'imagerie dans la prise de décision. La deuxième partie de la thèse traite des problèmes de modélisation sémantique liés à la description des données d'observation en neuro-imagerie en utilisant les connaissances biomédicales existantes. Ainsi, elle propose des solutions « pertinentes » aux situations les plus typiques qui doivent être modélisées dans le glioblastome.
/ In personalized medicine, radiological measurements and observations play an important role; in particular, they help clinicians make their diagnosis, select the appropriate treatment, and monitor the therapeutic response to an intervention, as for example with the Response Evaluation Criteria in Solid Tumors (RECIST). Many radiology-pathology correlation studies show that quantitative and qualitative imaging features are associated with genetic alterations and gene expression. Therefore, suitable management of imaging phenotypes is needed to facilitate their use and reuse in multiple studies regarding radiological measurements. In the literature, radiological measurements that characterize biological processes of imaged subjects are called imaging biomarkers. The main objective of this thesis is to propose an ontological conceptualisation of imaging biomarkers to make their meaning explicit and formal and to improve the structured reporting of images. The first part of the thesis presents a generic ontology that defines the basic aspects of the imaging biomarker concept, namely: the measured biological characteristic, the measurement protocols, and the role of the biomarker in decision making. The second part of the thesis addresses important semantic modeling challenges related to the description of neuro-imaging data using existing biomedical knowledge, and proposes “relevant” solutions to the most typical situations that need to be modeled in glioblastoma.
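The abstract names three facets that the proposed ontology attaches to an imaging biomarker: the measured biological characteristic, the measurement protocol, and the role in decision making. A minimal sketch of that shape, using plain subject/predicate/object triples rather than the thesis's actual ontology (every identifier below, including the RECIST-derived example biomarker, is an invented illustration):

```python
# Toy triple store describing one hypothetical imaging biomarker
# along the three facets mentioned in the abstract.
triples = {
    ("TumourLongestDiameter", "rdf:type", "ImagingBiomarker"),
    ("TumourLongestDiameter", "measuresCharacteristic", "TumourSize"),
    ("TumourLongestDiameter", "hasMeasurementProtocol", "RECIST_1_1"),
    ("TumourLongestDiameter", "hasDecisionRole", "TreatmentResponseMonitoring"),
}

def facets(biomarker, graph):
    """Collect the predicate/object pairs describing one biomarker,
    i.e. the structured content a report could be checked against."""
    return {p: o for s, p, o in graph if s == biomarker and p != "rdf:type"}

print(facets("TumourLongestDiameter", triples))
```

Making these facets explicit is what allows the same biomarker record to be queried, validated, and reused across studies, which is the motivation the abstract gives for structured reporting.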
120

Consequence-based reasoning for SRIQ ontologies

Bate, Andrew January 2016 (has links)
Description logics (DLs) are knowledge representation formalisms with numerous applications and well-understood model-theoretic semantics and computational properties. SRIQ is a DL that provides the logical underpinning for the semantic web language OWL 2, which is the W3C standard for knowledge representation on the web. A central component of most DL applications is an efficient and scalable reasoner, which provides services such as consistency testing and classification. Despite major advances in DL reasoning algorithms over the last decade, however, ontologies are still encountered in practice that cannot be handled by existing DL reasoners. Consequence-based calculi are a family of reasoning techniques for DLs. Such calculi have proved very effective in practice and enjoy a number of desirable theoretical properties. Up to now, however, they had been proposed only for Horn DLs (which do not support disjunctive reasoning) or for DLs without cardinality constraints. In this thesis we present a novel consequence-based algorithm for TBox reasoning in SRIQ, a DL that supports both disjunctions and cardinality constraints. Combining the two features is non-trivial, since the intermediate consequences that need to be derived during reasoning cannot be captured using DLs themselves. Furthermore, cardinality constraints require reasoning over equality, which we handle using the framework of ordered paramodulation, a state-of-the-art method for equational theorem proving. We thus obtain a calculus that can handle an expressive DL while still enjoying all the favourable properties of existing consequence-based algorithms, namely optimal worst-case complexity, one-pass classification, and pay-as-you-go behaviour. To evaluate the practicability of our calculus, we implemented it in Sequoia, a new DL reasoning system.
Empirical results show substantial robustness improvements over well-established algorithms and implementations, and performance competitive with closely related work.
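The core idea behind consequence-based classification, as described in the abstract, is to derive all entailed subsumptions by saturating the TBox under inference rules. A heavily simplified sketch of that idea, restricted to atomic concept inclusions A ⊑ B under the single transitivity rule (nowhere near the SRIQ calculus of the thesis, which handles disjunctions, cardinality constraints, and equality; the axiom set is an invented toy example):

```python
def saturate(axioms):
    """Saturate a set of atomic inclusions (A, B) meaning A ⊑ B under
    the rule: A ⊑ B and B ⊑ C entail A ⊑ C, until a fixpoint.
    The fixpoint contains the full classification in one pass."""
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(derived):
            for (c, d) in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return derived

# Toy TBox: a small subsumption chain.
tbox = {("GlioblastomaPatient", "CancerPatient"),
        ("CancerPatient", "Patient"),
        ("Patient", "Person")}

closure = saturate(tbox)
print(("GlioblastomaPatient", "Person") in closure)  # entailed transitively
```

Real consequence-based calculi work with far richer clause shapes than these pairs, which is exactly why supporting disjunctions and cardinality constraints together, as the thesis does, is non-trivial.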
