421 |
Decision making for ontology matching under the theory of belief functions / Prise de décision lors de l'appariement des ontologies dans le cadre de la théorie des fonctions de croyance
Essaid, Amira, 01 June 2015
L'appariement des ontologies est une tâche primordiale pour pallier le problème d'hétérogénéité sémantique et ainsi assurer une interopérabilité entre les applications utilisant différentes ontologies. Il consiste en la mise en correspondance de chaque entité d'une ontologie source avec une entité d'une ontologie cible, par application de techniques d'alignement fondées sur des mesures de similarité. Individuellement, aucune mesure de similarité ne permet d'obtenir un alignement parfait. C'est pour cette raison qu'il est intéressant de tenir compte de la complémentarité des mesures afin d'obtenir un meilleur alignement. Dans cette thèse, nous proposons un processus de décision crédibiliste pour l'appariement des ontologies. Étant données deux ontologies, on procède à leur appariement par application de trois techniques. Les alignements obtenus sont modélisés dans le cadre de la théorie des fonctions de croyance, et des règles de combinaison sont utilisées pour combiner les résultats d'alignement. Une étape de prise de décision s'avère ensuite nécessaire ; nous proposons pour cela une règle de décision fondée sur une distance et capable de décider sur une union d'hypothèses. Cette règle est utilisée dans notre processus afin d'identifier, pour chaque entité source, la ou les entités cibles correspondantes. / Ontology matching is a solution to mitigate the effect of semantic heterogeneity. Matching techniques, based on similarity measures, are used to find correspondences between ontologies. Using a single similarity measure does not guarantee a perfect alignment. For that reason, it is necessary to use more than one similarity measure, to take advantage of the strengths of each, and then to combine the different outcomes. In this thesis, we propose a credibilistic decision process based on the theory of belief functions. First, we model the alignments obtained from a matching process within the theory of belief functions. Then, we combine the different outcomes using appropriate combination rules. Since decision making is a crucial step in any such process, and since most decision rules in belief function theory can only commit to a single element, we propose a decision rule, based on a distance measure, that can make a decision on a union of elements (i.e. identify, for each source entity, its corresponding target entities).
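The pipeline this abstract describes (model each matcher's output as a mass function, combine with a rule such as Dempster's, then decide with a distance-based criterion that may select a union of hypotheses) can be sketched generically. This is only an illustrative sketch, not the thesis's exact rule: the frame, the mass values, and the use of the Jousselme distance to categorical mass functions are assumptions made here for demonstration.

```python
from itertools import combinations
from math import sqrt

def powerset(frame):
    """All non-empty subsets of the frame of discernment."""
    items = sorted(frame)
    return [frozenset(c) for r in range(1, len(items) + 1)
            for c in combinations(items, r)]

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination, renormalized by the conflict."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

def jousselme(m1, m2, subsets):
    """Jousselme distance between two mass functions (Jaccard-weighted)."""
    x = [m1.get(a, 0.0) - m2.get(a, 0.0) for a in subsets]
    total = 0.0
    for i, a in enumerate(subsets):
        for j, b in enumerate(subsets):
            total += x[i] * x[j] * len(a & b) / len(a | b)
    return sqrt(0.5 * total)

def decide(m, frame):
    """Choose the subset (possibly a union of hypotheses) whose
    categorical mass function is closest to the combined mass m."""
    subsets = powerset(frame)
    return min(subsets, key=lambda a: jousselme(m, {a: 1.0}, subsets))
```

For instance, combining m1 = {{a}: 0.6, {a,b}: 0.4} with m2 = {{a}: 0.5, {b}: 0.2, {a,b}: 0.3} concentrates the mass on {a}, so `decide` returns the singleton {a}; with more balanced evidence the same rule could return the union {a, b} instead of forcing a single target.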
|
422 |
Construction et évolution d'une ressource termino-ontologique dédiée à la représentation de relations n-aires / Construction and evolution of an Ontological and Terminological Resource dedicated to the representation of n-ary relations
Touhami, Rim, 05 September 2014
Les ontologies sont devenues incontournables pour définir des vocabulaires standardisés ainsi qu'une représentation partagée d'un domaine d'intérêt. La notion de Ressource Termino-Ontologique (RTO) permet d'associer une partie terminologique et/ou linguistique aux ontologies afin d'établir une distinction claire entre la manifestation linguistique (le terme) et la notion qu'elle dénote (le concept). Les RTO sont actuellement au cœur de nombreuses méthodes, outils et applications de l'Ingénierie des Connaissances (IC), discipline de l'Intelligence Artificielle permettant en particulier de développer des méthodes et des outils de capitalisation de connaissances. L'objectif de cette thèse, qui s'inscrit dans les problématiques de l'IC, est de capitaliser des données expérimentales issues de documents textuels (articles scientifiques, rapports de projet, etc.) afin de pouvoir les réutiliser dans des outils d'aide à la décision. Nous avons d'abord défini la notion de relation n-aire permettant de relier plusieurs arguments, et l'avons modélisée dans une nouvelle RTO, baptisée naRyQ. Cette notion de relation n-aire nous a permis de modéliser des mesures expérimentales (e.g. diffusivité de l'oxygène dans un aliment, perméabilité à l'oxygène d'un emballage, broyage d'une biomasse, etc.) réalisées sur différents objets d'étude (produit alimentaire, emballage, procédé de transformation, etc.). Afin d'implémenter la plateforme de capitalisation, nommée @Web, nous avons modélisé la RTO naRyQ en OWL/SKOS et défini l'ensemble des contraintes de cohérence qu'elle doit respecter. Enfin, une RTO étant amenée à évoluer pour répondre aux besoins de changement, nous avons proposé une méthode de gestion de l'évolution de cette RTO qui permet de maintenir sa cohérence de manière préventive. Cette méthode est implémentée dans un plug-in Protégé nommé DynarOnto. / This PhD thesis in Artificial Intelligence deals with knowledge engineering. Ontology, which can be defined as a controlled vocabulary allowing a community to share a common representation of a given area, is one of the key elements of knowledge engineering. Our framework is the capitalization of experimental data extracted from scientific documents (scientific articles, project reports, etc.) in order to feed decision support systems. The capitalization is guided by an ontological and terminological resource (OTR). An OTR associates an ontology with a terminological and/or a linguistic part in order to establish a clear distinction between the term and the notion it denotes (the concept). Experimental data can be represented by n-ary relations linking the arguments of an experiment, i.e. experimental measurements (e.g. oxygen diffusivity in food, oxygen permeability in packaging, biomass grinding, etc.), with studied objects (food, packaging, transformation process, etc.). We have defined the n-ary relation concept and an n-ary Relation between Quantitative experimental data OTR, called naRyQ. Our modeling relies on the W3C languages OWL2-DL and SKOS. Moreover, we have studied the evolution of such an OTR, extending existing work by taking into account i) the specificity of our OTR, which deals with interdependent concepts, and ii) its representation language. To this end, we have proposed a preventive ontology evolution methodology defining elementary and composite changes based on a set of consistency constraints defined for our naRyQ OTR. Our contributions are implemented in two systems: our naRyQ OTR is now the core of the existing capitalization system @Web, and our evolution method is implemented in a Protégé plug-in called DynarOnto.
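The n-ary pattern this abstract relies on reifies the measurement itself as an individual whose properties point to each argument, in the spirit of the W3C n-ary relations design pattern. The minimal sketch below illustrates that shape with plain subject-predicate-object triples; every identifier (`ex:measurement_001`, the role names, the values) is hypothetical and not taken from naRyQ.

```python
def nary_relation(relation_id, relation_type, **arguments):
    """Expand one n-ary relation instance into plain subject-predicate-object
    triples: the relation is reified as an individual, and each argument
    becomes one property of that individual."""
    triples = {(relation_id, "rdf:type", relation_type)}
    for role, value in arguments.items():
        triples.add((relation_id, role, value))
    return triples

# One experimental measurement with three arguments (all names hypothetical).
triples = nary_relation(
    "ex:measurement_001", "ex:OxygenDiffusivityRelation",
    hasStudiedObject="ex:applePuree",   # the studied-object argument
    hasTemperature="20 Celsius",        # a control-parameter argument
    hasResult="1.2e-9 m2/s",            # the quantitative result argument
)
```

In an actual OWL/SKOS resource the same shape would be expressed as an individual typed by a relation concept, with one object or datatype property per argument role.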
|
423 |
Efficient Storage and Domain-Specific Information Discovery on Semistructured Documents
Farfan, Fernando R., 12 November 2009
The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data to encourage its global adoption. Current techniques to store semistructured documents either map them to relational databases or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed down the large-scale adoption of XML in actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that undermine the massive adoption of XML. Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge in leveraging semistructured data is to perform effective information discovery on it. Previous work has addressed this problem in a generic (i.e., domain-independent) way, but the process can be improved if knowledge about the specific domain is taken into consideration. This dissertation had two general goals. The first was to devise novel techniques to efficiently store and process semistructured documents, with two specific aims: we proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives, and we developed a Double-Lazy Parser for semistructured documents which introduces lazy behavior in both the pre-parsing and progressive-parsing phases of the standard Document Object Model parsing mechanism. The second goal was to construct a user-friendly and efficient engine for performing information discovery over domain-specific semistructured documents. This goal also had two aims: we presented a framework that exploits domain-specific knowledge, in the form of domain ontologies, to improve the quality of the information discovery process, and we proposed meaningful evaluation metrics to compare the results of search systems over semistructured documents.
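The two-phase idea behind lazy parsing (a cheap pre-parsing pass that only records where subtrees live, and a progressive phase that materializes a subtree on demand) can be sketched roughly as follows. This is a toy illustration with a naive tag scanner, not the dissertation's Double-Lazy Parser, and it ignores complications such as comments, processing instructions, or attribute values containing `>`.

```python
import re
import xml.etree.ElementTree as ET

TAG = re.compile(r"<(/?)([A-Za-z_][\w.-]*)[^>]*?(/?)>")

def child_spans(xml_text):
    """Pre-parsing pass: record the character spans of the root's direct
    children without building any DOM nodes."""
    spans, depth, start = [], 0, None
    for m in TAG.finditer(xml_text):
        closing, selfclosing = m.group(1) == "/", m.group(3) == "/"
        if selfclosing:
            if depth == 1:                 # a childless direct child
                spans.append((m.start(), m.end()))
        elif not closing:
            if depth == 1:                 # a direct child opens here
                start = m.start()
            depth += 1
        else:
            depth -= 1
            if depth == 1:                 # that direct child just closed
                spans.append((start, m.end()))
    return spans

def parse_child(xml_text, spans, i):
    """Progressive phase: materialize only the i-th subtree, on demand."""
    s, e = spans[i]
    return ET.fromstring(xml_text[s:e])
```

A query touching only the third top-level record then pays the DOM-construction cost for that record alone, which is the access pattern lazy parsers exploit.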
|
424 |
[en] A METHOD AND AN ENVIRONMENT FOR THE SEMANTIC WEB APPLICATIONS DEVELOPMENT / [pt] UM MÉTODO E UM AMBIENTE PARA O DESENVOLVIMENTO DE APLICAÇÕES NA WEB SEMÂNTICA
MAURICIO HENRIQUE DE SOUZA BOMFIM, 29 July 2011
[pt] A crescente disponibilização de dados e ontologias segundo os padrões da Web Semântica tem levado à necessidade de criação de métodos e ferramentas de desenvolvimento de aplicações que considerem a utilização e disponibilização dos dados distribuídos na rede segundo estes padrões. O objetivo desta dissertação é apresentar um método, incluindo processos e modelos, e um ambiente para o desenvolvimento de aplicações na Web Semântica. Mais especificamente, este trabalho apresenta a evolução do método SHDM (Semantic Hypermedia Design Method), que é um método para o desenvolvimento de aplicações hipermídia na Web Semântica, e o Synth, que é um ambiente de desenvolvimento de aplicações projetadas segundo o método SHDM. / [en] The growing availability of data and ontologies according to the Semantic Web standards has led to the need for methods and tools for application development that take into account the use and availability of data distributed on the network according to these standards. The goal of this dissertation is to present a method, including processes and models, and an environment for Semantic Web application development. More specifically, this work presents the evolution of SHDM (Semantic Hypermedia Design Method), a method for the development of Semantic Web hypermedia applications, and Synth, an environment for building applications designed according to SHDM.
|
425 |
Extração e consulta de informações do Currículo Lattes baseada em ontologias / Ontology-based Queries and Information Extraction from the Lattes CV
Eduardo Ferreira Galego, 06 November 2013
A Plataforma Lattes é uma excelente base de dados de pesquisadores para a sociedade brasileira, adotada pela maioria das instituições de fomento, universidades e institutos de pesquisa do País. Entretanto, é limitada quanto à exibição de dados sumarizados de um grupo de pessoas, como por exemplo um departamento de pesquisa ou os orientandos de um ou mais professores. Diversos projetos já foram desenvolvidos propondo soluções para este problema, alguns inclusive desenvolvendo ontologias a partir do domínio de pesquisa. Este trabalho tem por objetivo integrar todas as funcionalidades destas ferramentas em uma única solução, a SOS Lattes. Serão apresentados os resultados obtidos no desenvolvimento desta solução e como o uso de ontologias auxilia nas atividades de identificação de inconsistências de dados, consultas para construção de relatórios consolidados e regras de inferência para correlacionar múltiplas bases de dados. Além disso, procura-se por meio deste trabalho contribuir com a expansão e disseminação da área de Web Semântica, por meio da criação de uma ferramenta capaz de extrair dados de páginas Web e disponibilizar sua estrutura semântica. Os conhecimentos adquiridos durante a pesquisa poderão ser úteis ao desenvolvimento de novas ferramentas atuando em diferentes ambientes. / The Lattes Platform is an excellent database of researchers for Brazilian society, adopted by most Brazilian funding agencies, universities, and research institutes. However, it is limited when it comes to displaying summarized data about a group of people, such as a research department or the students supervised by one or more professors. Several projects have already been developed proposing solutions to this problem, some of them even developing ontologies for the research domain. This work aims to integrate all the functionality of these tools into a single solution, SOS Lattes. The results obtained in the development of this solution are presented, along with how ontologies help identify data inconsistencies, support queries for building consolidated reports, and provide inference rules for correlating multiple databases. This work also intends to contribute to the expansion and dissemination of the Semantic Web by creating a tool that can extract data from Web pages and publish their semantic structure. The knowledge gained during the research may be useful for the development of new tools operating in different environments.
|
426 |
Využití ontologií k modelovaní flexibilní výroby v Průmyslu 4.0 / Utilization of Ontologies for Flexible Production Modelling in Industry 4.0
Matyáš, Petr, January 2021
The topic of this master's thesis is ontologies and methods for designing them using OWL. The thesis aims to provide a manual for the design of ontologies and to demonstrate its use in the environment of Industry 4.0. The manual itself is preceded by chapters presenting the relevant Semantic Web technologies from a state-of-the-art point of view. The thesis concludes with the author's evaluation of the suitability of the given technologies.
|
427 |
CASSANDRA: drug gene association prediction via text mining and ontologies
Kissa, Maria, 20 January 2015
The amount of biomedical literature has been increasing rapidly during the last decade. Text mining techniques can harness this large-scale data, shed light on complex drug mechanisms, and extract relation information that can support computational polypharmacology. In this work, we introduce CASSANDRA, a fully corpus-based and unsupervised algorithm which uses MEDLINE-indexed titles and abstracts to infer drug gene associations and assist drug repositioning. CASSANDRA measures the Pointwise Mutual Information (PMI) between biomedical terms derived from the Gene Ontology (GO) and Medical Subject Headings (MeSH). Based on the PMI scores, drug and gene profiles are generated, and candidate drug gene associations are inferred by computing the relatedness of their profiles.
Results show that an Area Under the Curve (AUC) of up to 0.88 can be achieved. The algorithm can successfully identify direct drug gene associations with high precision and prioritize them over indirect ones. Validation shows that the statistically derived profiles from the literature perform as well as (and at times better than) manually curated profiles.
In addition, we examine CASSANDRA’s potential for drug repositioning. For all FDA-approved drugs repositioned over the last five years, we generate profiles from publications before 2009 and show that the new indications rank high in these profiles. In summary, co-occurrence-based profiles derived from the biomedical literature can accurately predict drug gene associations and provide insights into potential repositioning cases.
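The profile-and-relatedness idea can be illustrated with a toy co-occurrence version (the actual CASSANDRA scores drugs and genes against GO and MeSH annotation terms over MEDLINE; the documents, terms, and the use of cosine similarity below are illustrative assumptions, not the thesis's exact formulation):

```python
from math import log2

def pmi_profiles(doc_terms, entities):
    """Build a sparse PMI profile for each entity from per-document term sets."""
    n = len(doc_terms)
    vocab = set().union(*doc_terms) - set(entities)

    def p(*terms):
        # Empirical probability that all given terms co-occur in a document.
        return sum(all(t in d for t in terms) for d in doc_terms) / n

    profiles = {}
    for e in entities:
        profiles[e] = {t: log2(p(e, t) / (p(e) * p(t)))
                       for t in vocab if p(e, t) > 0}
    return profiles

def relatedness(p1, p2):
    """Cosine similarity between two sparse PMI profiles."""
    num = sum(p1[t] * p2[t] for t in set(p1) & set(p2))
    n1 = sum(v * v for v in p1.values()) ** 0.5
    n2 = sum(v * v for v in p2.values()) ** 0.5
    return num / (n1 * n2) if n1 and n2 else 0.0
```

A drug and a gene that tend to appear with the same ontology terms end up with similar profiles and hence a high relatedness score, which is what flags them as a candidate association.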
|
428 |
Knowledge Integration and Representation for Biomedical Analysis
Alachram, Halima, 04 February 2021
No description available.
|
429 |
Ontologien als semantische Zündstufe für die digitale Musikwissenschaft?
Münnich, Stefan, 20 December 2019
Ontologien spielen eine zentrale Rolle für die formalisierte Repräsentation von Wissen und Informationen sowie für die Infrastruktur des sogenannten Semantic Web. Trotz früherer Initiativen der Bibliotheken und Gedächtnisinstitutionen hat sich die deutschsprachige Musikwissenschaft insgesamt nur sehr zögerlich dem Thema genähert. Im Rahmen einer Bestandsaufnahme werden neben der Erläuterung grundlegender Konzepte, Herausforderungen und Herangehensweisen bei der Modellierung von Ontologien daher auch vielversprechende Modelle und bereits erprobte Anwendungsbeispiele für eine ‚semantische‘ digitale Musikwissenschaft identifiziert. / Ontologies play a crucial role in the formalized representation of knowledge and information, as well as in the infrastructure of the Semantic Web. Despite early initiatives driven by libraries and memory institutions, German-language musicology as a whole has been slow to approach the subject. In a survey, the author addresses basic concepts, challenges, and approaches to ontology design, and identifies models and use cases with promising applications for a 'semantic' digital musicology.
|
430 |
[pt] MODELAGEM DE EVENTOS DE TRÂNSITO COM BASE EM CLIPPING DE GRANDES MASSAS DE DADOS DA WEB / [en] TRAFFIC EVENTS MODELING BASED ON CLIPPING OF HUGE QUANTITY OF DATA FROM THE WEB
LUCIANA ROSA REDLICH, 28 January 2015
[pt] Este trabalho consiste no desenvolvimento de um modelo que auxilie na análise de eventos ocorridos no trânsito das grandes cidades. Utilizando uma grande massa de dados publicados na Internet, em especial no Twitter, por usuários comuns, este trabalho fornece uma ontologia para eventos de trânsito publicados em notícias da internet e uma aplicação que usa o modelo proposto para realizar consultas aos eventos modelados. Para isso, as notícias publicadas em linguagem natural são processadas, isto é, as entidades relevantes no texto são identificadas e depois estruturadas de tal forma que seja feita uma análise semântica da notícia publicada. As notícias publicadas são estruturadas no modelo proposto de eventos e, com isso, é possível que sejam feitas consultas sobre suas propriedades e relacionamentos, facilitando assim a análise do processo do trânsito e dos eventos ocorridos nele. / [en] This work develops a model to assist the analysis of traffic events in big cities. It provides an ontology for traffic events reported in news published on the Internet, as well as a prototype of a software architecture that uses the proposed model to answer queries about the events, drawing on the huge quantity of data published on the Internet by regular users, especially on Twitter. To do so, news published in natural language is processed: the relevant entities in the text are identified and then structured in order to allow a semantic analysis of the published news. The reported news is structured according to the proposed event model, so that queries about the events' properties and relationships can be answered. As a consequence, this work facilitates the analysis of the traffic process and of the events occurring in it.
|