  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Formalisation, acquisition et mise en œuvre de connaissances pour l’intégration virtuelle de bases de données géographiques : les spécifications au cœur du processus d’intégration / Formalisation, acquisition and implementation of specifications knowledge for geographic databases integration

Abadie, Nathalie, 20 November 2012
This PhD thesis deals with the integration of topographic databases: making the correspondence relationships between heterogeneous databases explicit so that they can be used jointly. Automating this integration process requires automating the detection of the various kinds of heterogeneity that may occur between the topographic databases to be integrated, which in turn requires knowledge about the content of each database. The goal of this thesis is therefore the formalisation, acquisition and exploitation of the knowledge needed to carry out a virtual integration process for vector geographic databases. The first step of such a process is matching the databases' conceptual schemas. To this end, we rely on a particular knowledge source: the specifications of the topographic databases, which describe their data capture rules. These specifications are first used as the main resource for building a domain ontology of topography through ontology learning. In a first schema-matching approach, this ontology, created from the texts of IGN's database specifications, serves as a background knowledge source for terminological and structural matching techniques. In a second approach, inspired by semantics-based matching techniques, the ontology supports the representation, in the OWL 2 language, of the selection and geometric representation rules for geographic entities described in the specifications, and their exploitation by a reasoning system in a semantic schema-matching application.
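To make the terminological-matching idea concrete, here is a minimal sketch (not the thesis implementation): two invented schema class lists are matched using a tiny synonym table standing in for the background ontology, with string similarity as a fallback. All schema and synonym names below are assumptions for illustration only.

```python
# Toy terminological schema matching with a synonym "ontology" as background
# knowledge. Schemas and synonym groups are invented for this example.
from difflib import SequenceMatcher

SYNONYMS = [
    {"road", "thoroughfare", "street"},
    {"building", "construction"},
    {"river", "watercourse"},
]

def same_concept(a: str, b: str) -> bool:
    """True if both terms fall in one synonym group of the ontology."""
    return any(a in group and b in group for group in SYNONYMS)

def term_similarity(a: str, b: str) -> float:
    """Ontology matches score 1.0; otherwise fall back to string similarity."""
    a, b = a.lower(), b.lower()
    if a == b or same_concept(a, b):
        return 1.0
    return SequenceMatcher(None, a, b).ratio()

def match_schemas(schema_a, schema_b, threshold=0.7):
    """Greedy one-way matching: each class of schema_a is paired with its
    best counterpart in schema_b if the score clears the threshold."""
    matches = {}
    for cls_a in schema_a:
        best = max(schema_b, key=lambda cls_b: term_similarity(cls_a, cls_b))
        if term_similarity(cls_a, best) >= threshold:
            matches[cls_a] = best
    return matches

pairs = match_schemas(["Road", "Building", "River"],
                      ["Thoroughfare", "Construction", "Watercourse"])
```

Without the synonym table, none of these pairs would clear the threshold on string similarity alone, which is the role background knowledge plays in the approach described above.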
262

Relational transfer across reinforcement learning tasks via abstract policies. / Transferência relacional entre tarefas de aprendizado por reforço via políticas abstratas.

Koga, Marcelo Li, 21 November 2013
When designing intelligent agents that must solve sequential decision problems, we often lack the knowledge needed to build a complete model of the problem at hand. Reinforcement learning enables an agent to learn behavior by acquiring experience through trial-and-error interactions with the environment. However, knowledge is usually built from scratch, and learning the optimal policy may take a long time. In this work, we improve learning performance through transfer learning: the knowledge acquired in previous source tasks is used to accelerate learning in new target tasks. If the tasks are similar, the transferred knowledge guides the agent towards faster learning. We explore a relational representation that describes relationships among objects and their properties. This representation simplifies abstraction and the extraction of similarities among tasks, enabling the generalization of solutions that can be reused across different but related tasks. This work presents two model-free algorithms for online learning of abstract policies: AbsSarsa(λ) and AbsProb-RL. The former builds a deterministic abstract policy from value functions, while the latter builds a stochastic abstract policy through direct search in the space of policies. We also propose the S2L-RL agent architecture, which contains two levels of learning: an abstract level and a ground level. The agent builds a ground policy and an abstract policy simultaneously; the abstract policy can not only accelerate learning on the current task but also guide the agent in a future task. Experiments in a robotic navigation environment show that these techniques are effective in improving the agent's learning performance, especially during the early stages of the learning process, when the agent is completely unaware of the new task.
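The core idea of learning over abstract states can be illustrated with a deliberately simplified sketch. This is not AbsSarsa(λ) or AbsProb-RL: it is a plain temporal-difference (Q-learning) update on an invented gridworld, where the relational-style abstraction keeps only the direction of the goal, so one small Q-table covers many ground states.

```python
# Toy abstract-policy learning: ground states (x, y) are abstracted to the
# sign of the displacement to the goal. Environment and parameters invented.
import random

ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
GOAL, SIZE = (4, 4), 5

def step(state, action):
    dx, dy = ACTIONS[action]
    nx = min(max(state[0] + dx, 0), SIZE - 1)
    ny = min(max(state[1] + dy, 0), SIZE - 1)
    return (nx, ny), (10.0 if (nx, ny) == GOAL else -1.0)

def abstract(state):
    """Relational-style abstraction: only the direction of the goal matters."""
    sign = lambda v: (v > 0) - (v < 0)
    return (sign(GOAL[0] - state[0]), sign(GOAL[1] - state[1]))

def train(episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}
    def best(sa):
        return max(ACTIONS, key=lambda a: Q.get((sa, a), 0.0))
    for _ in range(episodes):
        s = (0, 0)
        while s != GOAL:
            sa = abstract(s)
            a = rng.choice(list(ACTIONS)) if rng.random() < eps else best(sa)
            s2, r = step(s, a)
            target = r + gamma * max(Q.get((abstract(s2), b), 0.0) for b in ACTIONS)
            Q[(sa, a)] = Q.get((sa, a), 0.0) + alpha * (target - Q.get((sa, a), 0.0))
            s = s2
    return Q

def reaches_goal(Q, max_steps=20):
    """Greedy rollout of the learned abstract policy from the start state."""
    s, steps = (0, 0), 0
    while s != GOAL and steps < max_steps:
        a = max(ACTIONS, key=lambda act: Q.get((abstract(s), act), 0.0))
        s, _ = step(s, a)
        steps += 1
    return s == GOAL

Q = train()
```

Because the Q-table is indexed by abstract states, the same learned policy would transfer unchanged to any grid size or goal position, which is the transfer effect the thesis exploits with far richer relational abstractions.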
263

Transfer Learning for Medication Adherence Prediction from Social Forums Self-Reported Data

Kyle Haas, 17 January 2019
Medication non-adherence and non-compliance left unaddressed can compound into severe medical problems for patients. Identifying patients who are likely to become non-adherent can help reduce these problems. Despite these benefits, monitoring adherence at scale is cost-prohibitive. Social forums offer an easily accessible, affordable, and timely alternative to traditional methods based on claims data. This study investigates the potential of medication adherence prediction based on social forum data for diabetes and fibromyalgia therapies by using transfer learning from the Medical Expenditure Panel Survey (MEPS).

Predictive adherence models are developed using both survey and social forum data and different random forest (RF) techniques. The first implementation uses binned inputs from k-means clustering. The second is based on ternary trees instead of the widely used binary decision trees. Both techniques can handle missing data, a prevalent characteristic of social forum data.

The results of this study show that transfer learning between survey models and social forum models is possible. Using MEPS survey data and the techniques above to derive RF models, less than 5% difference in accuracy was observed between the MEPS test dataset and the social forum test dataset. Along with these RF techniques, another RF implementation with imputed means for the missing values was developed and shown to predict adherence for social forum patients with an accuracy above 70%.

This thesis shows that a model trained with verified survey data can complement traditional medication adherence models by predicting adherence from unverified, self-reported data in a dynamic and timely manner. Furthermore, this model provides a method for discovering objective insights from subjective social reports. Additional investigation is needed to improve the prediction accuracy of the proposed model and to assess biases that may be inherent to self-reported adherence measures in social health networks.
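The ternary-split idea can be sketched in a few lines. This is a hypothetical illustration, not the study's model: a single decision stump routes records left, right, or down a third branch reserved for missing values, instead of forcing imputation. The feature, threshold, and labels are invented.

```python
# A ternary decision stump: one numeric split plus a dedicated branch for
# missing (None) values. Feature and labels are invented for illustration.
def ternary_stump(threshold, left_label, right_label, missing_label):
    """Return a classifier splitting on one numeric feature, with a
    third branch handling missing values explicitly."""
    def classify(value):
        if value is None:
            return missing_label
        return left_label if value < threshold else right_label
    return classify

# Toy adherence predictor: split on self-reported doses missed per week.
# Routing missing reports to "non-adherent" is a conservative assumption.
predict = ternary_stump(threshold=2,
                        left_label="adherent",
                        right_label="non-adherent",
                        missing_label="non-adherent")

labels = [predict(v) for v in (0, 5, None)]
```

In a forest, each tree would learn its own missing-branch label from the training data rather than using a fixed default, which is what lets the technique exploit the informativeness of missingness itself.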
264

Extração de conhecimento de laudos de radiologia torácica utilizando técnicas de processamento estatístico de linguagem natural. / Knowledge extraction from reports of radiology thoracic using techniques of statistical processing of natural language.

Leandro Zerbinatti, 15 April 2010
This work presents a study in health informatics that analyses chest radiology reports using statistical natural language processing methods, with the aim of supporting interoperability between health systems. Two thousand chest radiology reports were used for knowledge extraction, identifying the words, n-grams and phrases that compose them. Zipf's law was applied, showing that a few words make up the majority of the reports and that most words have no statistical significance. Based on the identified terms, a translation and comparison was carried out against a standardized medical vocabulary with international terminology, SNOMED CT. Terms that had a complete and direct correspondence with the translated terms were incorporated into the reference terms, together with the class to which each term belongs and its identifier. Another 200 chest radiology reports were selected for a term-tagging experiment against this reference. The efficiency obtained at this stage, i.e. the percentage of the reports labeled, was 45.55%. Articles, prepositions and pronouns were then incorporated into the reference terms under a linkage concept class; it is important to note that these terms add no health knowledge to the text. The efficiency rose to 73.23%, a significant improvement over the previous result. The work concludes with some ways of applying the tagged reports to system interoperability, using ontologies, the HL7 CDA (Clinical Document Architecture) and the archetype model of the openEHR Foundation.
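The frequency-and-coverage measurement described above can be illustrated on toy data. This sketch is not the study's pipeline: the three miniature "reports", the reference term list, and the linkage words are all invented, and coverage here is simply the fraction of tokens found in the reference set.

```python
# Word frequencies (Zipf-like skew) and tagging coverage before/after adding
# linkage words (articles, prepositions). Reports and terms are invented.
from collections import Counter

reports = [
    "opacidade no lobo superior direito",
    "sem alteracoes no parenquima pulmonar",
    "opacidade no lobo inferior esquerdo",
]

tokens = [w for r in reports for w in r.split()]
freq = Counter(tokens)  # a handful of words dominate, as Zipf's law predicts

reference = {"opacidade", "lobo", "superior", "inferior",
             "direito", "esquerdo", "parenquima", "pulmonar", "alteracoes"}
linkage = {"no", "sem"}  # carry no clinical meaning, only connect terms

def coverage(terms):
    """Fraction of report tokens matched by the given term set."""
    return sum(1 for w in tokens if w in terms) / len(tokens)

before = coverage(reference)
after = coverage(reference | linkage)
```

The jump from `before` to `after` mirrors, in miniature, the study's jump from 45.55% to 73.23% once linkage words were added to the reference terms.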
265

Semantic Representation of a Heterogeneous Document Corpus for an Innovative Information Retrieval Model : Application to the Construction Industry / Représentation Sémantique de Corpus de Documents Hétérogènes pour un Modèle de Recherche d'Information Novateur : Application au Domaine du Bâtiment

Charbel, Nathalie, 21 December 2018
Recent advances in Information and Communication Technology (ICT) have transformed several industries. Adopting Semantic Web technologies has proven beneficial for Information Retrieval (IR) applications, enabling a better representation of the data and reasoning capabilities over it. Industrial applications remain rare, however, because several issues are still unresolved, such as representing heterogeneous interdependent documents in semantic data models and presenting search results together with relevant contextual information. In this thesis, we address two main challenges. The first is the representation of the collective knowledge embedded in a heterogeneous document corpus, covering both the domain-specific content of the documents and structural aspects such as their metadata and their inter- and intra-document dependencies (e.g., references between documents or parts of documents). The second is the construction of search results, from this heterogeneous corpus, that help users interpret the information relevant to their queries and track cross-document dependencies. To cope with these challenges, we first propose a semantic representation of the heterogeneous document corpus as a semantic graph covering both its structural and domain-specific dimensions. We then introduce a novel data structure for query answers, extracted from this graph, which combines the directly relevant information with its structural and domain-specific context. To produce such answers, we propose a complete query processing pipeline comprising query interpretation, search, ranking and presentation modules; our contributions focus on the search and ranking modules. The proposal is generic and applicable to different domains. In this thesis, however, it has been evaluated in the Architecture, Engineering and Construction (AEC) industry, using real-world construction projects.
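A minimal, invented example can make the "query answer with structural context" idea concrete. This is not the thesis's graph model: the corpus below is two hypothetical construction documents, containment and cross-reference edges are plain dictionaries, and a hit is returned together with its parent document and its citation links.

```python
# Toy corpus graph: sections belong to documents; cross-references link
# sections. All document names and contents are invented.
contains = {
    "specs.pdf": ["specs#s1", "specs#s2"],
    "contract.pdf": ["contract#s1"],
}
references = {
    "contract#s1": ["specs#s2"],  # the contract clause cites a spec section
}
text = {
    "specs#s1": "general requirements",
    "specs#s2": "thermal insulation of external walls",
    "contract#s1": "insulation works shall follow the technical specs",
}

def parent_of(section):
    return next(doc for doc, secs in contains.items() if section in secs)

def answer(query):
    """Return each matching section with its structural context."""
    hits = []
    for sec, body in text.items():
        if query in body:
            hits.append({
                "section": sec,
                "document": parent_of(sec),
                "cites": references.get(sec, []),
                "cited_by": [s for s, tgts in references.items() if sec in tgts],
            })
    return hits

result = answer("insulation")
```

Instead of two isolated snippets, the user sees that the matching contract clause cites the matching spec section, which is exactly the cross-document dependency tracking the thesis argues search results should surface.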
266

A framework for modelling spatial proximity

Brennan, Jane, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2009
The concept of proximity is an important aspect of human reasoning. Despite the diversity of applications that require proximity measures, the most intuitive notion is that of spatial nearness. The aim of this thesis is to investigate the underpinnings of the notion of nearness, to propose suitable formalisations, and to apply them to the processing of GIS data. More particularly, this work offers a framework for spatial proximity that supports the development of more intuitive tools for users of geographic data processing applications. Many existing spatial reasoning formalisms do not account for proximity at all, while others stipulate it by using natural language expressions as symbolic values. Some approaches associate spatial relations with fuzzy membership grades, calculated for locations on a map using Euclidean distance. However, distance is not the only factor that influences the perception of nearness. Previous work therefore suggests that nearness should be defined from a more basic notion of influence area. I argue that this approach is flawed, and that nearness should instead be defined from a new, richer notion of impact area that takes both the nature of an object and its surrounding environment into account. A suitable notion of nearness considers the impact areas of both objects whose degree of nearness is assessed, as opposed to the common approach of considering only one of the two objects, treated as a reference against which the nearness of the other is judged. Cognitive findings are incorporated to make the framework more relevant to the spatial cognition of Geographic Information Systems (GIS) users, who bring a wealth of knowledge about physical space, particularly geographic space, to the processing of GIS data. This is taken into account by introducing the notion of context, which represents either an expert in the context field or information from the context field as collated by an expert. To evaluate the framework and show its practical implications, experiments are conducted on a GIS dataset incorporating expert knowledge from the Touristic Road Travel domain.
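A toy formalisation can show why using both impact areas matters. This sketch is not the thesis's framework: the object kinds, their impact radii, and the linear grading function are all invented; the only point carried over is that nearness is graded against the combined reach of both objects rather than a single reference object.

```python
# Graded nearness from the impact areas of BOTH objects. Radii are assumed
# values (in km) chosen only for illustration.
import math

IMPACT_RADIUS = {"airport": 5.0, "house": 0.5, "town": 3.0}

def nearness(kind_a, pos_a, kind_b, pos_b):
    """Degree in [0, 1]: grades linearly with distance, reaching 0 at
    twice the combined impact radii of the two objects."""
    d = math.dist(pos_a, pos_b)
    reach = IMPACT_RADIUS[kind_a] + IMPACT_RADIUS[kind_b]
    return max(0.0, 1.0 - d / (2 * reach)) if reach else 0.0

# An airport is still somewhat "near" a town 10 km away, while two houses
# 10 km apart are not near at all: the nature of the objects changes the answer.
airport_town = nearness("airport", (0, 0), "town", (10, 0))
house_house = nearness("house", (0, 0), "house", (10, 0))
```

A distance-only measure would assign both pairs the same score; letting each object contribute its own impact area is what breaks that symmetry.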
267

Semantics and Implementation of Knowledge Operators in Approximate Databases / Semantik och implementation för kunskapsoperatorer i approximativa databaser

Sjö, Kristoffer, January 2004
In order for epistemic formulas to be coupled with approximate databases, it is necessary to have a well-defined semantics for the knowledge operator and a method of reducing epistemic formulas to approximate formulas. In this thesis, two possible semantics for the knowledge operator are proposed for use with an approximate relational database:

* One based upon logical entailment (the dominating notion of knowledge in the literature); sound and complete rules for reduction to approximate formulas are explored and found not to be applicable to all formulas.

* One based upon algorithmic computability (in order to be practically feasible); its correspondence to the above operator on the one hand, and to the deductive capability of the agent on the other, is explored.

Also, an inductively defined semantics for a "know whether" operator is proposed and tested. Finally, an algorithm implementing the above is proposed, implemented in Java, and tested.
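An invented miniature can convey the flavour of these operators (it is not the thesis's semantics, and the thesis's implementation is in Java): an approximate relation is modelled as a pair of lower and upper tuple sets, K holds for a tuple exactly when it lies in the lower approximation, and "know whether" holds when the tuple is decided either way.

```python
# Approximate relation as (lower, upper) tuple sets, with a knowledge
# operator K and a "know whether" operator. All data is invented.
def make_approx(lower, upper):
    assert lower <= upper, "lower approximation must be contained in upper"
    return {"lower": set(lower), "upper": set(upper)}

def K(rel, t):
    """Known to hold: t is in every consistent completion of the relation."""
    return t in rel["lower"]

def know_whether(rel, t):
    """Decided either way: t is known to hold or known not to hold."""
    return t in rel["lower"] or t not in rel["upper"]

# Toy relation: ("a",) certainly holds, ("b",) is merely possible,
# ("c",) is ruled out entirely.
r = make_approx(lower={("a",)}, upper={("a",), ("b",)})
```

Note that `know_whether` is true for `("c",)` even though `K` is false for it: the agent knows the tuple does not hold, which is exactly the distinction the inductively defined "know whether" semantics is meant to capture.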
268

Phase Space Navigator: Towards Automating Control Synthesis in Phase Spaces for Nonlinear Control Systems

Zhao, Feng, 01 April 1991
We develop a novel autonomous control synthesis strategy, the Phase Space Navigator, for the automatic synthesis of nonlinear control systems. The Phase Space Navigator generates global control laws by synthesizing flow shapes of dynamical systems and by planning and navigating system trajectories in their phase spaces. Parsing phase spaces into trajectory flow pipes provides a way to reason efficiently about phase space structures and to search for global control paths. The strategy is particularly suitable for synthesizing high-performance control systems that do not lend themselves to traditional design and analysis techniques.
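A crude, hypothetical stand-in for "parsing phase spaces into flow pipes" can be sketched by simulating trajectories of a simple nonlinear system and grouping initial conditions by the equilibrium they settle to. The damped-pendulum dynamics, the Euler integrator, and all parameters below are assumptions for illustration, not the paper's method.

```python
# Group initial conditions of a damped pendulum by the equilibrium their
# trajectories settle to: a toy proxy for trajectory "flow pipes".
import math

def simulate(theta, omega, dt=0.01, steps=5000, damping=0.5):
    """Euler-integrate theta'' = -damping*theta' - sin(theta); return final angle."""
    for _ in range(steps):
        alpha = -damping * omega - math.sin(theta)
        theta += omega * dt
        omega += alpha * dt
    return theta

def attractor(theta0, omega0):
    """Label the trajectory by the multiple of 2*pi it settles near."""
    return round(simulate(theta0, omega0) / (2 * math.pi))

# Initial conditions near rest all flow into the same pipe (basin 0).
grid = [(-0.2, 0.0), (0.2, 0.0), (0.0, 0.3)]
pipes = {}
for ic in grid:
    pipes.setdefault(attractor(*ic), []).append(ic)
```

Trajectories sharing a label here would be candidates for the same flow pipe; the actual strategy reasons about flow shapes far more carefully in order to plan control paths between regions, rather than merely classifying endpoints.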
269

Associative classification, linguistic entity relationship extraction, and description-logic representation of biomedical knowledge applied to MEDLINE

Rak, Rafal, 11 1900
MEDLINE, a large and constantly growing collection of biomedical article references, has been the source of numerous investigations related to textual information retrieval and knowledge capture, including article categorization, bibliometric analysis, semantic query answering, and biological concept recognition and relationship extraction. This dissertation discusses the design and development of novel methods that contribute to two of these tasks: document categorization and relationship extraction. The two investigations result in a fast tool for building descriptive models capable of categorizing documents under multiple labels, and a highly effective method for extracting a broad range of relationships between entities embedded in text. Additionally, an application is presented that represents the extracted knowledge in the strictly defined but highly expressive structure of an ontology. The classification of documents is based on association rules consisting of frequent patterns of words appearing in documents and the classes those patterns are likely to be assigned to. Model building relies on a tree enumeration technique and dataset projection; the resulting algorithm offers two tree traversal strategies, breadth-first and depth-first. The classification scenario involves two alternative thresholding strategies, based either on the document-independent confidence of the rules or on a similarity measure between a rule and a document. The presented classification tool is shown to be faster than other methods and is the first associative classification solution to incorporate multiple classes and information about the recurrence of words in documents. The extraction of relations between entities embedded in text utilizes the output of a constituent parser and a set of manually developed tree-like patterns. Both serve as input to a novel algorithm that solves the newly formulated problem of constrained constituent tree inclusion with regular expression matching. The proposed relation extraction method is shown to be parser-independent and to outperform dependency-parser-based and machine-learning-based solutions in terms of effectiveness. The extracted knowledge is further embedded in an existing ontology, which, together with a structure-driven modification of the ontology, results in a comprehensible, inference-consistent knowledge base: a tangible representation of knowledge and a potential component of applications such as semantically enhanced query answering systems.
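A heavily simplified, hypothetical version of the associative-classification idea can be sketched as follows: mine word patterns frequent within the training set, keep each pattern together with its class confidence, and label a new document by its best applicable rule. The documents, classes, and thresholds are invented; the real tool's tree enumeration, dataset projection, and recurrence handling are omitted.

```python
# Toy associative classifier: rules are (word pattern, class, confidence).
# Training data and class names are invented for illustration.
from collections import Counter
from itertools import combinations

train = [
    ({"protein", "binding", "receptor"}, "molecular"),
    ({"protein", "binding", "kinase"}, "molecular"),
    ({"patient", "trial", "dose"}, "clinical"),
    ({"patient", "trial", "outcome"}, "clinical"),
]

def mine_rules(data, max_len=2, min_support=2):
    """Enumerate word patterns up to max_len, keep the frequent ones as
    rules scored by confidence (class count / total pattern count)."""
    pattern_class, pattern_total = Counter(), Counter()
    for words, label in data:
        for n in range(1, max_len + 1):
            for pat in combinations(sorted(words), n):
                pattern_total[pat] += 1
                pattern_class[(pat, label)] += 1
    return [(pat, label, cnt / pattern_total[pat])
            for (pat, label), cnt in pattern_class.items()
            if cnt >= min_support]

def classify(rules, words):
    """Pick the class of the best applicable rule (confidence, then length)."""
    applicable = [(conf, len(pat), label)
                  for pat, label, conf in rules if set(pat) <= words]
    return max(applicable)[2] if applicable else None

rules = mine_rules(train)
label = classify(rules, {"protein", "binding", "assay"})
```

The breadth-first versus depth-first traversal choice in the actual tool concerns how this pattern space is enumerated; the brute-force `combinations` call above stands in for that enumeration on toy-sized data only.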
270

Exploratory and Exploitative Knowledge Sharing in Interorganizational Relationships

Im, Ghiyoung, 06 December 2006
A growing body of research investigates the role that organizational learning plays in generating superior firm performance. Researchers, however, have given limited attention to this learning effect in the context of long-term interorganizational relationships. This paper focuses on a specific aspect of learning, namely explorative and exploitative knowledge sharing, and examines its impact on sustained performance. We examine interorganizational design mechanisms and digitally enabled knowledge representation as antecedents of knowledge sharing. The empirical context is the dyadic relationship between a supply chain solutions vendor and its customers for two major classes of supply chain services. Our theoretical predictions are tested using data collected from both sides of this customer-vendor dyad. The findings suggest that a dual emphasis on exploration and exploitation is important for sustained relationship performance for customers: customers consider balancing exploration and exploitation important, whereas the vendor emphasizes exploitation only.
