141

Towards the French Biomedical Ontology Enrichment / Vers l'enrichissement d'ontologies biomédicales françaises

Lossio-Ventura, Juan Antonio 09 November 2015 (has links)
En biomédecine, le domaine du « Big Data » (l'infobésité) pose le problème de l'analyse de gros volumes de données hétérogènes (i.e. vidéo, audio, texte, image). Les ontologies biomédicales, modèles conceptuels de la réalité, peuvent jouer un rôle important afin d'automatiser le traitement des données, les requêtes et la mise en correspondance des données hétérogènes. Il existe plusieurs ressources en anglais mais elles sont moins riches pour le français. Le manque d'outils et de services connexes pour les exploiter accentue ces lacunes. Dans un premier temps, les ontologies ont été construites manuellement. Au cours de ces dernières années, quelques méthodes semi-automatiques ont été proposées. Ces techniques semi-automatiques de construction/enrichissement d'ontologies sont principalement induites à partir de textes en utilisant des techniques du traitement du langage naturel (TALN). Les méthodes de TALN permettent de prendre en compte la complexité lexicale et sémantique des données biomédicales : (1) lexicale pour faire référence aux syntagmes biomédicaux complexes à considérer et (2) sémantique pour traiter l'induction du concept et du contexte de la terminologie. Dans cette thèse, afin de relever les défis mentionnés précédemment, nous proposons des méthodologies pour l'enrichissement/la construction d'ontologies biomédicales fondées sur deux principales contributions. La première contribution est liée à l'extraction automatique de termes biomédicaux spécialisés (complexité lexicale) à partir de corpus. De nouvelles mesures d'extraction et de classement de termes composés d'un ou plusieurs mots ont été proposées et évaluées. L'application BioTex implémente les mesures définies. La seconde contribution concerne l'extraction de concepts et le lien sémantique de la terminologie extraite (complexité sémantique). Ce travail vise à induire des concepts pour les nouveaux termes candidats et à déterminer leurs liens sémantiques, c'est-à-dire les positions les plus pertinentes au sein d'une ontologie biomédicale existante. Nous avons ainsi proposé une approche d'extraction de concepts qui intègre de nouveaux termes dans l'ontologie MeSH. Les évaluations, quantitatives et qualitatives, menées par des experts et non-experts sur des données réelles, soulignent l'intérêt de ces contributions. / Big Data in the biomedical domain raises a major issue: the analysis of large volumes of heterogeneous data (e.g. video, audio, text, image). Ontologies, conceptual models of reality, can play a crucial role in biomedicine to automate data processing, querying, and the matching of heterogeneous data. Various English resources exist, but considerably fewer are available in French, and there is a strong lack of related tools and services to exploit them. Initially, ontologies were built manually. In recent years, a few semi-automatic methodologies have been proposed. Semi-automatic construction/enrichment of ontologies is mostly induced from texts by using natural language processing (NLP) techniques. NLP methods have to take into account the lexical and semantic complexity of biomedical data: (1) lexical refers to the complex phrases to take into account, (2) semantic refers to sense and context induction of the terminology. In this thesis, we propose methodologies for the enrichment/construction of biomedical ontologies based on two main contributions, in order to tackle the previously mentioned challenges.
The first contribution is the automatic extraction of specialized biomedical terms (lexical complexity) from corpora. New ranking measures for single- and multi-word term extraction have been proposed and evaluated. In addition, we present the BioTex software, which implements the proposed measures. The second contribution concerns concept extraction and the semantic linkage of the extracted terminology (semantic complexity). This work seeks to induce semantic concepts for new candidate terms and to find their semantic links, i.e. the most relevant locations of the new candidate terms in an existing biomedical ontology. We proposed a concept extraction methodology that integrates new terms into the MeSH ontology. The quantitative and qualitative experiments conducted on real data, with experts and non-experts, highlight the relevance of these contributions.
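To give a flavor of the kind of termhood ranking the first contribution builds on, the sketch below implements the classic C-value measure over toy frequency counts. It is only a baseline illustration: the terms, counts, and the small single-word adjustment are invented, and these are not the measures actually proposed in the thesis or implemented in BioTex.

```python
import math
from collections import defaultdict

def c_value(term_freq):
    """Classic C-value termhood score, shown as a baseline illustration of
    multi-word term ranking (not the thesis's own measures)."""
    # For each candidate, collect the longer candidates that contain it.
    nested_in = defaultdict(list)
    for term in term_freq:
        for longer in term_freq:
            if term != longer and f" {term} " in f" {longer} ":
                nested_in[term].append(longer)

    scores = {}
    for term, freq in term_freq.items():
        # max(..., 2) keeps single-word terms from being zeroed out
        # (a small deviation from the original formula, for readability).
        length_weight = math.log2(max(len(term.split()), 2))
        if nested_in[term]:
            penalty = sum(term_freq[t] for t in nested_in[term]) / len(nested_in[term])
            scores[term] = length_weight * (freq - penalty)
        else:
            scores[term] = length_weight * freq
    return scores

# Invented frequency counts for three candidate terms.
freqs = {"soft tissue": 25, "soft tissue sarcoma": 12, "blood pressure": 40}
print(sorted(c_value(freqs).items(), key=lambda kv: -kv[1]))
```

Under these toy counts, the longer, more specific candidate "soft tissue sarcoma" outranks its nested substring "soft tissue", which is the behavior such termhood measures aim for.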
142

Semantic knowledge extraction from relational databases

Mogotlane, Kgotatso Desmond 05 1900 (has links)
M. Tech. (Information Technology, Department of Information and Communications Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology / One of the main research topics in the Semantic Web is the semantic extraction of knowledge stored in relational databases through ontologies, because ontologies are core components of the Semantic Web. Several tools, algorithms and frameworks are therefore being developed to enable the automatic conversion of relational databases into ontologies. Ontologies produced with these tools, algorithms and frameworks need to be valid and competent for them to be useful in Semantic Web applications within the target knowledge domains. However, the main challenge is that many existing automatic ontology construction tools, algorithms and frameworks fail to address the issues of ontology verification and ontology competency evaluation. This study investigates possible solutions to these challenges. The study began with a literature review in the Semantic Web field. The review led to the conceptualisation of a framework for semantic knowledge extraction to deal with the above-mentioned challenges. The proposed framework had to be evaluated in a real-life knowledge domain, so a knowledge domain was chosen as a case study. The data was collected and the business rules of the domain analysed to develop a relational data model. The data model was then implemented as a test relational database using the Oracle RDBMS. Thereafter, Protégé plugins were applied to automatically construct ontologies from the relational database. The resulting ontologies were validated by matching their structures against existing conceptual database-to-ontology mapping principles. The matching results show the performance and accuracy of the Protégé plugins in automatically converting relational databases into ontologies. Finally, the study evaluated the resulting ontologies against the requirements of the knowledge domain. The domain requirements were modelled as competency questions (CQs) and mapped to the ontology through the design, execution and analysis of SPARQL queries, whose results were compared against users' views of the CQ answers. Experiments show that, although users have different views of the answers to the CQs, executing the SPARQL translations of the CQs against the ontology does produce output instances that satisfy users' expectations. This indicates that an ontology generated from a relational database by Protégé plugins embodies enough domain and semantic features to be useful in Semantic Web applications.
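As an illustration of how a competency question can be turned into a SPARQL query and run against a generated ontology, here is a minimal sketch using rdflib. The file name, namespace, and class/property names are hypothetical stand-ins, not those of the ontology produced in the study.

```python
from rdflib import Graph

# Hypothetical file and vocabulary; the study's ontology was generated from an
# Oracle relational database with Protégé plugins.
g = Graph()
g.parse("university.owl", format="xml")  # assumes an RDF/XML serialization

# Competency question (illustrative): "Which students are enrolled in which courses?"
cq = """
PREFIX ex: <http://example.org/university#>
SELECT ?student ?course
WHERE {
    ?student a ex:Student ;
             ex:enrolledIn ?course .
}
"""
for row in g.query(cq):
    print(row.student, row.course)
```

Comparing the instances returned by such queries with the answers domain users expect is essentially how the CQ-based competency evaluation described above proceeds.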
143

Contribution to the elaboration of a decision support system based on modular ontologies for ecological labelling / Contribution à l’élaboration d’un système d’aide à la décision basé sur les ontologies modulaires pour la labellisation écologique.

Xu, Da 15 November 2017 (has links)
L’usine du futur et les performances environnementales sont de nos jours au cœur des préoccupations. Les produits et services éco-labellisés sont de plus en plus populaires. En plus des coûts financiers engendrés, les processus d’éco-labellisation sont longs et complexes, ce qui démotive parfois les fabricants et les fournisseurs de services à demander des certifications. Dans ce contexte, ce travail de recherche propose une démarche et une plateforme d’aide à la décision visant à améliorer et à accélérer ce processus afin de démocratiser l’accès à la certification écologique. Les bases de connaissances traditionnelles étant généralement peu interopérables, difficiles à réutiliser et ne supportant pas les inférences, la plateforme proposée repose sur une base de connaissances composée de diverses ontologies de domaine construites selon la documentation officielle européenne sur les écolabels. Cette base est composée de modules d'ontologies interconnectées couvrant divers produits et services. Elle permet d’automatiser le raisonnement sur ces connaissances et de les interroger en tenant compte de la sémantique. Un schéma de modularisation orienté suivant le domaine et la catégorie du produit, et portant sur les critères d’écolabels européens des produits détergents, est utilisé comme cas d'application. Afin de permettre une réutilisation aisée des modules d'ontologie pour différents groupes de produits, ce schéma de modularisation fait la distinction entre la connaissance de base du domaine et les connaissances variables concernant les critères de labélisation de chaque groupe. La méthode de raisonnement utilisée exploite les mécanismes d'inférence sur des règles SWRL, et fournit des résultats argumentés pour l’aide à la décision. La modélisation adoptée pour la représentation des connaissances n’est pas uniquement dédiée à la plateforme proposée. Elle permet également une exploitation des connaissances via des outils du Web sémantique. Afin de favoriser la réutilisation des modules d'ontologie, une approche de contextualisation pour la fédération d’ontologies a été proposée. Elle permet de pallier les inconvénients de « owl:imports ». Contrairement aux approches existantes, où il est nécessaire de réaliser soit un mapping, soit d’ajouter des relations sémantiques modifiant les modules d’ontologies de base, notre approche ne modifie pas ces ontologies et ne nécessite pas l’importation de tous leurs concepts. Pour faciliter la mise en œuvre de cette approche, nous proposons un nouveau plug-in pour l'éditeur d'ontologie « Protégé ». / With rising concern for sustainability and environmental performance, eco-labeled products and services are becoming more and more popular. In addition to the financial costs, the long and complex eco-labeling process sometimes discourages manufacturers and service providers from seeking certification. In this research work, we propose a decision support process and implement a decision support platform aimed at improving and accelerating the eco-labeling process in order to democratize access to eco-label certification. The decision support platform is based on a comprehensive knowledge base composed of various domain ontologies constructed according to the official eco-label criteria documentation. Traditional knowledge bases built on the relational data model are poorly interoperable, lack inference support, and are difficult to reuse.
In our research, the knowledge base is composed of interconnected ontology modules covering various products and services, and allows reasoning and semantic querying. A domain-centric modularization scheme built on the EU Eco-label laundry detergent product criteria is introduced as an application case. This modularization scheme separates entity knowledge from rule knowledge so that the ontology modules can be easily reused in other domains. We explore a reasoning methodology based on inference with SWRL (Semantic Web Rule Language) rules, which supports decision making with explanations. Through standard RDF (Resource Description Framework) and OWL (Web Ontology Language) query interfaces, the assets of the decision support platform can stimulate domain knowledge sharing and be reused in other applications. In order to foster the reuse of ontology modules, we also propose a user-centric approach for federating contextual ontologies (mapping and integration). This approach creates an ontology federation through a contextual configuration that avoids the drawbacks of "owl:imports". Instead of placing mappings or new semantics in the ontology modules, our approach keeps the extra contextual information separate, without modifying the original ontologies or importing all of their concepts. This contextualization makes it easier to support more expressive semantics for ontology integration itself, and it also helps application agents access and reuse the ontologies. To realize this approach, we developed a new plug-in for the Protégé ontology editor.
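A minimal sketch of the kind of SWRL-based classification described above, written with Owlready2. The class names, the property, and the 0.5 threshold are invented stand-ins rather than actual EU Eco-label criteria, and running the reasoner requires Java plus the Pellet reasoner bundled with Owlready2.

```python
from owlready2 import get_ontology, Thing, DataProperty, Imp, sync_reasoner_pellet

# Invented mini-ontology; the thesis's modules cover the full EU Eco-label
# criteria for laundry detergents and are far richer than this.
onto = get_ontology("http://example.org/ecolabel.owl")

with onto:
    class Detergent(Thing): pass
    class EcoCompliant(Detergent): pass
    class hasPhosphorusContent(DataProperty):
        domain = [Detergent]
        range = [float]

    # SWRL-style rule: a detergent below an (assumed) phosphorus threshold
    # is classified as eco-compliant.
    rule = Imp()
    rule.set_as_rule(
        "Detergent(?d), hasPhosphorusContent(?d, ?p), lessThan(?p, 0.5) "
        "-> EcoCompliant(?d)"
    )

product = onto.Detergent("sample_product")
product.hasPhosphorusContent = [0.2]

# Pellet applies the rule and should infer sample_product as EcoCompliant.
sync_reasoner_pellet(infer_property_values=True, infer_data_property_values=True)
print(list(onto.EcoCompliant.instances()))
```

The explanatory value lies in the fact that each inferred classification can be traced back to the rule and the property values that triggered it, which is the "decision making with explanation" idea the abstract refers to.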
144

Web ontologies e rappresentazione della conoscenza. Concetti e strumenti per la didattica / Web Ontologies and Knowledge Representation

CARMINATI, VERA MARIA 02 April 2007 (has links)
Il lavoro mette in luce le reciproche implicazioni di due mondi, quello delle tecnologie e quello dell'educazione, rispetto a temi di interesse condiviso: l'evoluzione della Rete in termini semantici attraverso l'impiego di ontologie informatiche e i complessi rapporti tra formalismi per la rappresentazione della conoscenza e didattica. La ricostruzione storica delle relazioni tra didattica, tecnologie e sistemi di espressione e comunicazione dei saperi ci ha condotto alla ricomprensione del Semantic Web nell'archeologia delle forme di rappresentazione della conoscenza, osservate con attenzione alle loro potenzialità didattiche e in rapporto all'evoluzione della cultura occidentale. La trattazione intende provvedere un modello per la lettura delle intersezioni tra Web Ontologies e scienze dell'educazione. Con uno sguardo al panorama internazionale della ricerca educativa su questi temi, si sono isolate e descritte alcune esperienze significative di impiego dell'approccio ontologico in ambienti e sistemi per l'e-learning, per calare nella realtà delle applicazioni e degli strumenti il discorso teorico proposto. / The work highlights the mutual implications of two worlds, technology and education, with respect to shared concerns: the semantic evolution of the Internet through the use of computational ontologies, and the complex relationships between knowledge representation formalisms and didactics. The historical reconstruction of the relationships among didactics, technologies and cultural resources leads us to see the Semantic Web as a stage in the archaeology of knowledge representation, regarded from the perspectives of cultural transmission and educational mediation. The work provides a model for reading the intersections between Web Ontologies and the sciences of education. With regard to the international research panorama on these themes, we single out and describe some significant experiences that put the theory into practice, analyzing tools and applications that apply the ontological approach to the development of e-learning environments and systems.
145

CASSANDRA: drug gene association prediction via text mining and ontologies

Kissa, Maria 28 January 2015 (has links) (PDF)
The amount of biomedical literature has been increasing rapidly during the last decade. Text mining techniques can harness this large-scale data, shed light on complex drug mechanisms, and extract relation information that can support computational polypharmacology. In this work, we introduce CASSANDRA, a fully corpus-based and unsupervised algorithm which uses MEDLINE-indexed titles and abstracts to infer drug-gene associations and assist drug repositioning. CASSANDRA measures the Pointwise Mutual Information (PMI) between biomedical terms derived from the Gene Ontology (GO) and Medical Subject Headings (MeSH). Based on the PMI scores, drug and gene profiles are generated, and candidate drug-gene associations are inferred by computing the relatedness of their profiles. Results show that an Area Under the Curve (AUC) of up to 0.88 can be achieved. The algorithm can successfully identify direct drug-gene associations with high precision and prioritize them over indirect ones. Validation shows that the statistically derived profiles from the literature perform as well as (and at times better than) the manually curated profiles. In addition, we examine CASSANDRA's potential for drug repositioning. For all FDA-approved drugs repositioned over the last 5 years, we generate profiles from publications before 2009 and show that the new indications rank high in these profiles. In summary, co-occurrence-based profiles derived from the biomedical literature can accurately predict drug-gene associations and provide insights into potential repositioning cases.
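A worked sketch of the PMI score at the core of the profile construction; the corpus and co-occurrence counts below are invented, and CASSANDRA's actual profiles over GO and MeSH annotations involve much more than this single computation.

```python
import math

def pmi(cooc, count_x, count_y, total):
    """Pointwise Mutual Information from document counts:
    PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) )."""
    p_xy = cooc / total
    p_x = count_x / total
    p_y = count_y / total
    return math.log2(p_xy / (p_x * p_y))

# Invented counts: abstracts mentioning a drug, a GO/MeSH term, and both.
N = 100_000        # abstracts in the corpus
drug_docs = 1_200  # abstracts mentioning the drug
term_docs = 3_000  # abstracts mentioning the term
both_docs = 300    # abstracts mentioning both

print(round(pmi(both_docs, drug_docs, term_docs, N), 2))  # ≈ 3.06
```

A positive score means the drug and the term co-occur more often than chance would predict; a vector of such scores over many terms is what constitutes a profile whose relatedness to gene profiles can then be computed.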
146

An ontology engineering approach for knowledge acquisition and exploitation from textual documents: towards knowledge and learning objects / Une approche d'ingénierie ontologique pour l'acquisition et l'exploitation des connaissances à partir de documents textuels : vers des objets de connaissances et d'apprentissage

Zouaq, Amal January 2007 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal
147

LORESA: a learning object recommender system based on semantic annotations / LORESA : un système de recommandation d'objets d'apprentissage basé sur les annotations sémantiques

Benlizidia, Sihem January 2007 (has links)
Master's thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal
148

A content-based recommendation approach using a fuzzy domain ontology and a crisp user preference ontology / Abordagem de recomendação baseada em conteúdo utilizando ontologia fuzzy de domínio e ontologia crisp de preferência do usuário

Baldárrago, Arturo Elias Urquizo 30 July 2012 (has links)
Financiadora de Estudos e Projetos / This dissertation presents an approach for developing content-based recommendation applications with a focus on the use of a domain-specific fuzzy ontology together with a user preference ontology. The approach falls into two stages: Ontology Engineering and Recommendation System Engineering. In the Ontology Engineering stage, a domain ontology with fuzzy relationships and a user ontology are built. The user ontology is set up as an instance of the domain ontology, but it is modeled in a way that allows it to store each user's preferences. Using the ontologies produced in Ontology Engineering provides a gain in precision for the results obtained by applications in the Recommendation System Engineering stage. For evaluation purposes, we instantiated the proposed approach in the development of a recommender system for the field of electronic commerce, focusing on the mobile device commerce domain. Following the experimental methodology, an evaluation was conducted in order to assess the approach's impact on the accuracy of the results provided by the developed recommender system. The results showed that the use of our approach contributed to increasing the accuracy of the results in terms of prediction, classification and ranking. The contributions of this work include: the approach for developing content-based recommendation applications using a domain-specific fuzzy ontology together with a user preference ontology; the definition of the UPFON methodology, which is part of the approach, for constructing fuzzy ontologies; an instantiation of a fuzzy ontology for the mobile device domain; and a strategy to capture and propagate user preferences by means of ontologies. / Esta dissertação apresenta uma abordagem para o desenvolvimento de aplicações de recomendação baseadas em conteúdo utilizando ontologia específica de domínio e ontologia de preferência de usuário. Tal abordagem está dividida em duas etapas: a Engenharia de Ontologia e a Engenharia do Sistema de Recomendação. Na Engenharia de Ontologia são construídas: uma ontologia de domínio com relacionamentos difusos; e uma ontologia crisp de usuário definida como uma instância da ontologia de domínio, porém modelada de forma que permita refletir as preferências de cada usuário para o domínio instanciado. A utilização das ontologias produzidas na Engenharia de Ontologia proporciona um ganho de precisão nos resultados obtidos por aplicações desenvolvidas conforme a abordagem proposta. Para fins de avaliação, a abordagem proposta foi instanciada no domínio de comércio de dispositivos móveis. Seguindo a metodologia experimental, foi conduzida uma experimentação com o objetivo de avaliar o impacto da abordagem na precisão dos resultados fornecidos pelo Sistema de Recomendação. Os resultados evidenciaram que o uso da abordagem proposta colaborou para o incremento da precisão dos resultados.
As contribuições deste trabalho incluem: a abordagem para o desenvolvimento de aplicações de recomendação baseadas em conteúdo utilizando ontologia fuzzy específica de domínio e ontologia de preferência de usuário; a definição da metodologia de construção de ontologias fuzzy chamada UPFON; a instanciação de uma ontologia fuzzy no domínio dos dispositivos móveis e a estratégia para capturar as preferências do usuário e propagá-las em uma ontologia crisp de usuário.
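The scoring idea can be sketched as follows: fuzzy membership degrees from the domain ontology are weighted by the crisp user preferences. The concepts, degrees and items below are invented, and the UPFON ontologies in the dissertation are of course far richer than two Python dictionaries.

```python
# Minimal sketch (assumed structures, not the UPFON ontologies themselves):
# the fuzzy domain ontology gives each item a membership degree per concept,
# and the crisp user ontology stores the user's preference per concept.

item_memberships = {                      # fuzzy domain ontology (degrees in [0, 1])
    "phone_a": {"large_screen": 0.9, "long_battery": 0.4, "low_price": 0.2},
    "phone_b": {"large_screen": 0.3, "long_battery": 0.8, "low_price": 0.7},
}
user_preferences = {"large_screen": 1, "long_battery": 1, "low_price": 0}  # crisp

def score(item):
    degrees = item_memberships[item]
    return sum(degrees[c] * w for c, w in user_preferences.items())

ranking = sorted(item_memberships, key=score, reverse=True)
print(ranking)  # ['phone_a', 'phone_b'] under these toy numbers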
149

Modélisation des connaissances et raisonnement à base d'ontologies spatio-temporelles : application à la robotique ambiante d'assistance / Knowledge modeling and reasoning based on spatio-temporal ontologies : application to ambient assisted-robotics

Ayari, Naouel 15 December 2016 (has links)
Dans cette thèse, nous proposons un cadre générique pour la modélisation et la gestion du contexte dans le cadre des systèmes intelligents ambiants et robotiques. Les connaissances contextuelles considérées sont de plusieurs types et issues de perceptions multimodales : connaissances spatiales et/ou temporelles, changement d’états et de propriétés d’entités, énoncés en langage naturel. Pour ce faire, nous avons proposé une extension du langage NKRL (Narrative Knowledge Representation and Reasoning) pour parvenir à une représentation unifiée des connaissances contextuelles, qu'elles soient spatiales, temporelles ou spatio-temporelles, et effectuer les raisonnements associés. Nous avons exploité l’expressivité des ontologies n-aires sur lesquelles repose le langage NKRL pour pallier les problèmes rencontrés dans les approches de représentation des connaissances spatiales et dynamiques à base d’ontologies binaires, communément utilisées en intelligence ambiante et en robotique. Il en résulte une modélisation plus riche, plus fine et plus cohérente du contexte permettant une meilleure adaptation des services d’assistance à l’utilisateur dans le cadre des systèmes intelligents ambiants et robotiques. La première contribution concerne la modélisation des connaissances spatiales et/ou temporelles et des changements de contexte, et les inférences spatiales, temporelles ou spatio-temporelles. La deuxième contribution concerne, quant à elle, le développement d’une méthodologie permettant d’effectuer un traitement syntaxique et une annotation sémantique pour extraire, à partir d’un énoncé en langage naturel, des connaissances contextuelles spatiales ou temporelles en NKRL. Ces contributions ont été validées et évaluées en termes de performances (temps de traitement, taux d’erreurs, et taux de satisfaction des usagers) dans le cadre de scénarios mettant en œuvre différentes formes de services : assistance au bien-être, assistance de type aide sociale, assistance à la préparation d’un repas. / In this thesis, we propose a generic framework for modeling and managing context in ambient and robotic intelligent systems. The contextual knowledge considered is of several types and derived from multimodal perceptions: spatial and/or temporal knowledge, changes in the states and properties of entities, and natural language statements. To do this, we propose an extension of the Narrative Knowledge Representation and Reasoning (NKRL) language to reach a unified representation of contextual knowledge, whether spatial, temporal or spatio-temporal, and to perform the associated reasoning. We exploit the expressiveness of the n-ary ontologies on which the NKRL language is based to overcome the problems encountered in spatial and dynamic knowledge representation approaches based on binary ontologies, which are commonly used in ambient intelligence and robotics. The result is a richer, finer and more coherent modeling of context, allowing better adaptation of user assistance services in ambient and robotic intelligent systems. The first contribution concerns the modeling of spatial and/or temporal knowledge and contextual changes, and the associated spatial, temporal or spatio-temporal inferences. The second contribution concerns the development of a methodology for carrying out syntactic processing and semantic annotation in order to extract spatial or temporal contextual knowledge in NKRL from a natural language statement.
These contributions have been validated and evaluated in terms of performance (processing time, error rate, and user satisfaction rate) in scenarios involving different forms of services: well-being assistance, social assistance, and assistance with the preparation of a meal.
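To make the binary-versus-n-ary point concrete, here is a small illustration in plain Python structures (not actual NKRL syntax, and with role labels only loosely modeled on NKRL's): a role-based n-ary occurrence keeps the participants and the temporal qualification of an event together, which a flat set of binary triples cannot do without ambiguity.

```python
# Binary-triple view of "the user moved the cup from the kitchen to the table
# at 10:05": the roles and the time end up scattered across statements.
binary_triples = [
    ("user_1", "moves", "cup_3"),
    ("cup_3", "locatedIn", "kitchen"),   # before or after the move? ambiguous
    ("cup_3", "locatedOn", "table_2"),
]

# N-ary, role-based view: one structured occurrence, with an explicit
# predicate, named roles, and a temporal qualification attached to the whole.
nary_occurrence = {
    "predicate": "MOVE",
    "roles": {
        "SUBJ": "user_1",
        "OBJ": "cup_3",
        "SOURCE": "kitchen",
        "DESTINATION": "table_2",
    },
    "temporal": {"date-1": "2016-12-15T10:05:00"},
}
```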
150

Amélioration de la qualité des données produits échangées entre l'ingénierie et la production à travers l'intégration de systèmes d'information dédiés / Quality Improvement of product data exchanged between engineering and production through the integration of dedicated information systems.

Ben Khedher, Anis 27 February 2012 (has links)
Le travail présenté dans ce mémoire de thèse apporte sa contribution à l'amélioration de la qualité des données échangées entre la production et les services d'ingénierie dédiés à la conception du produit et du système de production associé. Cette amélioration de la qualité des données passe par l'étude des interactions entre la gestion du cycle de vie du produit et la gestion de la production. Ces deux concepts étant supportés, tout ou partie, par des systèmes d'information industriels, l'étude de leurs interactions a ensuite conduit à l'intégration de ces systèmes d'information (PLM, ERP et MES). Dans un contexte de forte concurrence et de mondialisation, les entreprises sont obligées d'innover et de minimiser les coûts, notamment ceux de production. Face à ces enjeux, le volume des données de production et leur fréquence de modification ne cessent d'augmenter en raison de la réduction constante de la durée de vie et de mise sur le marché des produits, de la personnalisation accrue des produits et enfin de la généralisation des démarches d'amélioration continue en production. La conséquence directe est alors la nécessité de formaliser et de gérer l'ensemble des données de production devant être fournies aux opérateurs de production et aux machines. Suite à une analyse du point de vue de la qualité des données pour chaque architecture existante, démontrant ainsi leur incapacité à répondre à cette problématique, une architecture basée sur l'intégration des trois systèmes d'information directement impliqués dans la production (PLM, ERP et MES) a été proposée. Cette architecture nous a menés à deux sous-problématiques complémentaires qui sont respectivement la construction d'une architecture basée sur des Web Services permettant d'améliorer l'accessibilité, la sécurité et la complétude des données échangées, et la construction d'une architecture d'intégration, basée sur les ontologies, permettant d'offrir des mécanismes d'intégration basés sur la sémantique dans le but d'assurer la bonne interprétation des données échangées. Enfin, la maquette de l'outil logiciel supportant la solution proposée et permettant d'assurer l'intégration des données échangées entre ingénierie et production a été réalisée. / This research work contributes to improving the quality of data exchanged between production and the engineering units dedicated to product design and production system design. This improvement is achieved by studying the interactions between product lifecycle management and production management. These two concepts are supported, wholly or partly, by industrial information systems; the study of their interactions then led to the integration of these information systems (PLM, ERP and MES). In a highly competitive, globalized environment, companies are forced to innovate and reduce costs, especially production costs. Faced with these challenges, the volume of production data and the frequency of its modification keep increasing, due to the steady reduction of product lifetimes and time to market, increased product customization, and the generalization of continuous improvement in production. Consequently, all production data that must be provided to production operators and machines needs to be formalized and managed. After analyzing data quality for each existing architecture and demonstrating their inability to address this problem, an architecture based on the integration of the three information systems directly involved in production (PLM, ERP and MES) has been proposed.
This architecture leads to two complementary sub-problems. The first is the development of an architecture based on Web services to improve the accessibility, security and completeness of the exchanged data. The second is the construction of an integration architecture based on ontologies, offering semantics-based integration mechanisms in order to ensure the correct interpretation of the exchanged data. Finally, a prototype of the software tool supporting the proposed solution and ensuring the integration of the data exchanged between engineering and production was developed.
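As a toy illustration of the semantics-based integration idea (with invented field names and system vocabularies, not the thesis's actual model), a shared ontology concept can serve as the pivot when translating fields between the PLM, ERP and MES views of the same product data:

```python
# Sketch (assumed vocabularies): a shared ontology concept is the pivot so
# that a PLM export and an MES import agree on the meaning of each field.
shared_ontology = {
    "part_number": {"PLM": "ItemId", "ERP": "MaterialNumber", "MES": "PartNo"},
    "revision":    {"PLM": "Rev",    "ERP": "Version",        "MES": "RevisionCode"},
}

def translate(record, source, target):
    """Rename the fields of a record from one system's vocabulary to another's."""
    reverse = {cols[source]: concept for concept, cols in shared_ontology.items()}
    return {shared_ontology[reverse[k]][target]: v for k, v in record.items()}

plm_export = {"ItemId": "A-100", "Rev": "C"}
print(translate(plm_export, "PLM", "MES"))  # {'PartNo': 'A-100', 'RevisionCode': 'C'}
```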
