1 |
Automatic data integration with generalized mapping definitions. Tian, Aibo, 18 September 2014 (has links)
Data integration systems provide uniform access to a set of heterogeneous structured data sources. An essential component of a data integration system is the mapping between the federated data model and each data source. The scale of interconnection among data sources in the big data era gives new impetus to automating the mapping process. Despite decades of research on data integration, generating mappings still requires extensive labor. The thesis of this research is that progress on automatic data integration has been limited by a narrow definition of mapping. The common mapping process is to find correspondences between pairs of entities in the data models and to create logic expressions over the correspondences as executable mappings. This does not cover all issues in real-world applications. This research aims to overcome the problem in two ways: (1) by generalizing the common mapping definition for relational databases; and (2) by addressing the problem in a more general framework, the Semantic Web. The Semantic Web provides flexible graph-based data models and reasoning capabilities, as in knowledge representation systems. The new graph data model introduces opportunities for new mapping definitions. Mapping definitions and solutions for both relational databases and the Semantic Web are compared. In this dissertation, I propose two generalizations of mapping problems. First, the common schema matching definition for relational databases is generalized from finding correspondences between pairs of attributes to finding correspondences consisting of relations, attributes, and data values. This generalization solves real-world issues that were not previously covered. The same generalization can be applied to ontology matching in the Semantic Web. The second piece of work generalizes the ontology mapping definition from finding correspondences between pairs of entities to pairs of graph paths (sequences of entities).
As a path provides more context than a single entity, mapping between paths can address two challenges in data integration: the missing mapping challenge and the ambiguous mapping challenge. Combining the two proposed generalizations, I demonstrate a complete data integration system using Semantic Web techniques. The complete system includes components for automatic ontology mapping and query reformulation, and semi-automatically federates the query results from multiple data sources.
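The path-based generalization can be illustrated with a small sketch (hypothetical entity and property names, not the dissertation's implementation): correspondences relate graph paths rather than single entities, so two paths that end at the same entity can still be mapped to different targets.

```python
# Hypothetical path-to-path correspondences: each key is a path in the source
# ontology graph, each value the corresponding path in the target ontology.
# Entity-level mapping alone would be ambiguous here, since both source paths
# end at "University"; the surrounding path supplies the disambiguating context.
path_mapping = {
    ("Person", "worksFor", "University"): ("Employee", "employer", "School"),
    ("Person", "studiesAt", "University"): ("Student", "enrolledIn", "School"),
}

def reformulate(path):
    """Rewrite a source-ontology path into the target vocabulary, if mapped."""
    return path_mapping.get(path)

print(reformulate(("Person", "worksFor", "University")))
```

A query mentioning the source path can then be reformulated against the target ontology by substituting the mapped path.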
|
2 |
A Tool to Support Ontology Creation Based on Incremental Mini-Ontology Merging. Lian, Zonghui, 26 March 2008 (has links) (PDF)
This thesis addresses the problem of tool support for semi-automatic ontology mapping and merging. Solving this problem contributes to ontology creation and evolution by relieving users of tedious and time-consuming work. This thesis shows that a tool can be built that takes a “mini-ontology” and a “growing ontology” as input and makes it possible to produce an extended growing ontology as output, whether manually, semi-automatically, or automatically. Characteristics of this tool include: (1) a graphical, interactive user interface with features that allow users to map and merge ontologies, and (2) a framework supporting pluggable, semi-automatic, and automatic mapping and merging algorithms.
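The core merge step can be sketched in a few lines (a minimal sketch with a hypothetical concept-to-parent data model; the actual tool provides a GUI and pluggable matching algorithms): concepts of the mini-ontology that correspond to existing concepts are aligned, unmatched ones are added.

```python
# Minimal sketch of one incremental merge step. Ontologies are modeled as
# dicts mapping each concept to its parent concept (None for the root).
growing = {"Vehicle": None, "Car": "Vehicle"}   # the growing ontology
mini    = {"Car": "Vehicle", "Truck": "Vehicle"}  # a mini-ontology to fold in

def merge(growing, mini, correspondences):
    """Merge a mini-ontology into the growing ontology.

    `correspondences` maps mini-ontology concepts to their counterparts in
    the growing ontology; concepts without a counterpart are added as new."""
    merged = dict(growing)
    for concept, parent in mini.items():
        target = correspondences.get(concept, concept)
        if target not in merged:
            merged[target] = correspondences.get(parent, parent)
    return merged

result = merge(growing, mini, {"Car": "Car", "Vehicle": "Vehicle"})
```

Here "Car" is recognized as already present, while "Truck" is added under "Vehicle"; in the tool, the correspondences would come from a user or a plugged-in matching algorithm.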
|
3 |
A Unified Approach for Dealing with Ontology Mappings and their Defects. Abbas, Muhammad Aun, 14 December 2016 (has links)
An ontology mapping is a set of correspondences.
Each correspondence relates artifacts, such as concepts and properties, of one ontology to artifacts of another ontology. In the last few years, much attention has been paid to establishing mappings between ontologies. Ontology mapping is widely and effectively used for interoperability and integration tasks (data transformation, query answering, or web-service composition, to name a few), and in the creation of new ontologies. On the one hand, checking the (logical) correctness of ontology mappings has become a fundamental prerequisite of their use. On the other hand, given two ontologies, several mappings between them can be obtained by using different ontology matching methods or simply stated manually. Using several mappings between two ontologies within a single application, or synthesizing one mapping that takes advantage of the original ones, may cause errors in the application or in the synthesized mapping, because the original mappings may be contradictory (conflicting). In both situations, correctness is usually formalized and verified in the context of fully formalized ontologies (e.g. in logics), even though some “weak” notions of correctness have been proposed for ontologies that are informally represented, or represented in formalisms that prevent a formalization of correctness (such as UML). Verifying correctness is usually performed within one single formalism, requiring on the one hand that the ontologies be represented in this unique formalism and, on the other hand, that a formal representation of the mapping be provided, equipped with notions related to correctness (such as consistency). In practice, there exist several heterogeneous formalisms for expressing ontologies, ranging from informal (text, UML and others) to formal (logical and algebraic).
This implies that, to apply existing approaches, heterogeneous ontologies must be translated (or just transformed, when the original ontology is informally represented or when a full, equivalence-preserving translation is not possible) into one common formalism, mappings must each time be reformulated, and only then can correctness be established. Even when this is possible, it may lead to mappings that are correct under one translation and incorrect under another. Indeed, correctness (e.g. consistency) depends on the underlying formalism in which ontologies and mappings are expressed. Different interpretations of correctness exist within the formal and even the informal approaches, raising the question of what correctness actually is. In this dissertation, correctness is reformulated in the context of heterogeneous ontologies by using the theory of Galois connections. Specifically, ontologies are represented as lattices and mappings as functions between those lattices. Lattices are natural structures for directly representing ontologies, without changing the original formalisms in which the ontologies are expressed. As a consequence, the (unified) notion of correctness is reformulated using the Galois connection condition, leading to the new notion of compatible and incompatible mappings. It is formally shown that the new notion covers the reviewed correctness notions, provided in distinct state-of-the-art formalisms, and at the same time naturally covers heterogeneous ontologies. The usage of the proposed unified approach is demonstrated by applying it to upper ontology mappings. The notion of compatible and incompatible ontology mappings is also applied to domain ontologies, highlighting that incompatible ontology mappings give incorrect results when used for ontology merging.
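The compatibility condition can be made concrete on toy lattices (a simplified sketch with hypothetical concept names and a finite-order encoding, not the dissertation's formal machinery): two mappings f and g between lattices are compatible when they form a Galois connection, i.e. f(x) ≤ y if and only if x ≤ g(y).

```python
from itertools import product

# Two tiny concept lattices, each given as a set of elements plus a
# strict "more specific than" order relation (pairs (x, y) meaning x <= y).
L1 = {"Thing", "Agent", "Person"}
le1 = {("Person", "Agent"), ("Agent", "Thing"), ("Person", "Thing")}
L2 = {"Entity", "Actor", "Human"}
le2 = {("Human", "Actor"), ("Actor", "Entity"), ("Human", "Entity")}

def leq(order, x, y):
    """Reflexive order test over the given relation."""
    return x == y or (x, y) in order

f = {"Thing": "Entity", "Agent": "Actor", "Person": "Human"}  # mapping L1 -> L2
g = {"Entity": "Thing", "Actor": "Agent", "Human": "Person"}  # mapping L2 -> L1

def compatible(f, g):
    """f, g are compatible iff they form a Galois connection:
    f(x) <= y  holds exactly when  x <= g(y), for all x in L1, y in L2."""
    return all(
        leq(le2, f[x], y) == leq(le1, x, g[y])
        for x, y in product(L1, L2)
    )

print(compatible(f, g))  # these two order-preserving inverse maps are compatible
```

Perturbing g (e.g. sending "Human" to "Agent") breaks the condition, which is the kind of defect that, per the abstract, leads to incorrect ontology-merging results.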
|
4 |
Machine Learning-Based Ontology Mapping Tool to Enable Interoperability in Coastal Sensor Networks. Bheemireddy, Shruthi, 11 December 2009 (has links)
In today’s world, ontologies are widely used for data integration tasks and for solving information heterogeneity problems on the web, because of their capability to provide explicit meaning to information. The growing need to resolve the heterogeneities between different information systems within a domain of interest has led to the rapid development of individual ontologies by different organizations. These ontologies, designed for a particular task, can be a unique representation of their project’s needs. Thus, integrating distributed and heterogeneous ontologies by finding semantic correspondences between their concepts has become the key to achieving interoperability among different representations. In this thesis, an advanced instance-based ontology matching algorithm is proposed to enable data integration tasks in ocean sensor networks, whose data are highly heterogeneous in syntax, structure, and semantics. This provides a solution to the ontology mapping problem in such systems based on machine-learning and string-based methods.
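The instance-based idea can be sketched as follows (toy sensor values and hypothetical concept names; the thesis combines machine-learning and string-based methods, of which only a simple instance-overlap measure is shown here): concepts whose observed instance values overlap strongly are likely to correspond.

```python
def jaccard(a, b):
    """Jaccard similarity of two instance-value sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Instance values observed under two heterogeneous sensor schemas:
source_instances = {"SeaTemp": {"12.1", "12.4", "13.0"},
                    "Salinity": {"35.1", "35.3"}}
target_instances = {"WaterTemperature": {"12.4", "13.0", "13.2"},
                    "Sal": {"35.1", "35.3", "35.4"}}

def best_match(concept, candidates):
    """Map a source concept to the target concept with the most similar instances."""
    return max(candidates, key=lambda c: jaccard(source_instances[concept],
                                                 target_instances[c]))

print(best_match("SeaTemp", target_instances))
```

"SeaTemp" shares most of its values with "WaterTemperature" and none with "Sal", so instance overlap recovers the correspondence even though the concept names differ syntactically.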
|
5 |
Semantic Enrichment of Ontology Mappings. Arnold, Patrick, 4 January 2016 (has links) (PDF)
Schema and ontology matching play an important part in the fields of data integration and the Semantic Web. Given two heterogeneous data sources, metadata matching usually constitutes the first step in the data integration workflow: the analysis and comparison of two input resources such as schemas or ontologies. The result is a list of correspondences between the two schemas or ontologies, often called a mapping or alignment. Many tools and research approaches have been proposed to determine those correspondences automatically. However, most match tools do not provide any information about the relation type that holds between matching concepts, for the simple but important reason that most common match strategies are too simple and heuristic to allow any sophisticated relation type determination.
Knowing the specific type holding between two concepts, e.g., whether they are in an equality, subsumption (is-a) or part-of relation, is very important for advanced data integration tasks, such as ontology merging or ontology evolution. It is also very important for mappings in the biological or biomedical domain, where is-a and part-of relations may exceed the number of equality correspondences by far. Such more expressive mappings allow much better integration results and have scarcely been in the focus of research so far.
In this doctoral thesis, the determination of the correspondence types in a given mapping is the focus of interest, which is referred to as semantic mapping enrichment. We introduce and present the mapping enrichment tool STROMA, which obtains a pre-calculated schema or ontology mapping and for each correspondence determines a semantic relation type. In contrast to previous approaches, we will strongly focus on linguistic laws and linguistic insights. By and large, linguistics is the key for precise matching and for the determination of relation types. We will introduce various strategies that make use of these linguistic laws and are able to calculate the semantic type between two matching concepts. The observations and insights gained from this research go far beyond the field of mapping enrichment and can be also applied to schema and ontology matching in general.
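One such linguistic law can be sketched in code (a simplified illustration under assumed rules, not STROMA's implementation): in English noun compounds the head noun is normally the last token, so a compound whose head matches the other concept suggests an is-a relation, as in "apple tree" is-a "tree".

```python
def relation_type(concept_a, concept_b):
    """Guess the semantic relation between two matching concepts
    using the compound-head heuristic (last token = head noun)."""
    a, b = concept_a.lower().split(), concept_b.lower().split()
    if a == b:
        return "equal"
    if a[-1] == b[-1] and len(a) > len(b):
        return "is-a"          # e.g. "river bank" is-a "bank"
    if b[-1] == a[-1] and len(b) > len(a):
        return "inverse is-a"  # the longer compound is the more specific one
    return "related"

print(relation_type("apple tree", "tree"))
```

Real strategies must of course handle exocentric compounds and multiword heads; this is only the base pattern the linguistic strategies build on.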
Since generic strategies have certain limits and may not be able to determine the relation type between more complex concepts, like a laptop and a personal computer, background knowledge plays an important role in this research as well. For example, a thesaurus can help to recognize that these two concepts are in an is-a relation. We will show how background knowledge can be effectively used in this instance, how it is possible to draw conclusions even if a concept is not contained in it, how the relation types in complex paths can be resolved and how time complexity can be reduced by a so-called bidirectional search. The developed techniques go far beyond the background knowledge exploitation of previous approaches, and are now part of the semantic repository SemRep, a flexible and extendable system that combines different lexicographic resources.
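The bidirectional search idea can be sketched as follows (toy relation graph with hypothetical concepts; SemRep's actual structures and relation semantics are richer): expanding frontiers from both concepts at once keeps the explored neighbourhoods small before they meet.

```python
# Undirected toy relation graph over concepts in a background-knowledge resource:
edges = {"laptop": {"portable computer"},
         "portable computer": {"laptop", "personal computer"},
         "personal computer": {"portable computer", "computer"},
         "computer": {"personal computer"}}

def bidirectional_path_length(start, goal):
    """Length of the shortest relation path, expanding from both ends;
    returns None if the concepts are not connected."""
    if start == goal:
        return 0
    front_a, front_b = {start}, {goal}
    dist = 0
    while front_a and front_b:
        dist += 1
        # Always expand the smaller frontier to limit the explored neighbourhood.
        if len(front_a) > len(front_b):
            front_a, front_b = front_b, front_a
        front_a = {n for c in front_a for n in edges.get(c, ())}
        if front_a & front_b:
            return dist
    return None

print(bidirectional_path_length("laptop", "personal computer"))
```

In the repository itself, the relation types along the met path would then be composed to derive the overall relation (e.g. chaining two is-a edges yields is-a).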
Furthermore, we show how additional lexicographic resources can be developed automatically by parsing Wikipedia articles. The proposed Wikipedia relation extraction approach yields several million additional relations, which constitute significant additional knowledge for mapping enrichment. The extracted relations were also added to SemRep, which thus became a comprehensive background knowledge resource. To improve the quality of the repository, different techniques were used to discover and delete irrelevant semantic relations.
Several experiments show that STROMA obtains very good results with respect to relation type detection. In a comparative evaluation, it achieved considerably better results than related applications. This corroborates the overall usefulness and strengths of the implemented strategies, which were developed with particular emphasis on the principles and laws of linguistics.
|
6 |
Semantic Enrichment of Ontology Mappings. Arnold, Patrick, 15 December 2015 (has links)
|
7 |
A Language for Inconsistency-Tolerant Ontology Mapping. Sengupta, Kunal, 1 September 2015 (links)
No description available.
|
8 |
Measures of Evaluation of Ontology Alignments. Bispo Junior, Esdras Lins, 4 August 2011 (links)
In the ontology matching field, different metrics are used to evaluate the resulting alignments. Alignment-based metrics adopt the basic principle of verifying a proposed alignment against a reference alignment. Some of these metrics do not achieve good results because (i) they cannot always distinguish between a totally wrong alignment and one which is almost correct; and (ii) they cannot estimate the effort required for the user to refine the resulting alignment. This work presents a new approach to evaluating ontology alignments: a measure that uses the queries usually already posed to the original ontologies to assess the quality of the proposed alignment. We also present some satisfactory results of our approach compared with widely used existing metrics.
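Limitation (i) is easy to reproduce with classical alignment-based scoring (toy correspondences with hypothetical concept names; the thesis's own query-based measure is not shown here): a near-miss correspondence and an absurd one are penalized identically.

```python
def precision_recall(proposed, reference):
    """Classical alignment-based evaluation: compare a proposed alignment
    (set of correspondence pairs) against a reference alignment."""
    correct = len(proposed & reference)
    precision = correct / len(proposed) if proposed else 0.0
    recall = correct / len(reference) if reference else 0.0
    return precision, recall

reference = {("Author", "Writer"), ("Paper", "Article")}
near_miss = {("Author", "Writer"), ("Paper", "Publication")}  # semantically close
absurd    = {("Author", "Writer"), ("Paper", "Writer")}       # nonsensical pair

print(precision_recall(near_miss, reference))
print(precision_recall(absurd, reference))
```

Both alignments receive the same precision and recall, even though refining the near-miss would cost the user far less effort; a query-based measure can separate the two by checking which alignment still answers the ontologies' typical queries correctly.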
|
9 |
Ontology Matching Based on Class Context: To Solve the Interoperability Problem of the Semantic Web. Lera Castro, Isaac, 17 May 2012 (links)
When we look at the amount of resources devoted to converting between formats, that is, to making information systems useful, we realise that our communication model is inefficient. The transformation of information, like the transformation of energy, is limited by the efficiency of its converters. In this work, we propose a new way to "convert" information: a mapping algorithm for semantic information based on the context of that information, which redefines the framework where this paradigm merges with multiple techniques. Our main goal is to offer a new view in which further progress can be made and, ultimately, to streamline and minimize the communication chain in integration processes.
|
10 |
Data Warehouse Change Management Based on Ontology. Tsai, Cheng-Sheng, 12 July 2003 (links)
In the thesis, we provide a solution to the schema change problem. In a data warehouse system, if schema changes occur in a data source, the overall system loses the consistency between the data sources and the data warehouse, and these schema changes render the data warehouse obsolete. We have developed three stages to handle schema changes occurring in databases: change detection, diagnosis, and handling. Recommendations are generated by the DB-agent and sent to the DW-agent to notify the DBA of what a schema change affects in the star schema, and where. In the study, we mainly handle seven kinds of schema change in a relational database, covering not only non-adding schema changes but also adding schema changes. In our experiments, a non-adding schema change achieves a high correct-mapping rate when using the traditional mappings between a data warehouse and a database. An adding schema change, by contrast, involves many uncertainties in diagnosis and handling; for this reason, we compare the similarity between an added relation or attribute and the ontology's concepts or concept attributes to generate a good recommendation. The evaluation results show that the proposed approach is capable of detecting these schema changes correctly and of making appropriate recommendations to the DBA about the changes.
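The change-detection stage can be sketched as a schema diff (hypothetical relation and attribute names; diagnosis and handling of the detected changes, including the ontology-based similarity comparison, are not shown):

```python
def detect_changes(old_schema, new_schema):
    """Diff two relational schemas, each given as a dict mapping a
    relation name to its set of attribute names."""
    changes = []
    for rel in old_schema.keys() - new_schema.keys():
        changes.append(("drop_relation", rel))
    for rel in new_schema.keys() - old_schema.keys():
        # An 'adding' change: its meaning must be diagnosed, e.g. by
        # comparing the new relation against ontology concepts.
        changes.append(("add_relation", rel))
    for rel in old_schema.keys() & new_schema.keys():
        for attr in old_schema[rel] - new_schema[rel]:
            changes.append(("drop_attribute", rel, attr))
        for attr in new_schema[rel] - old_schema[rel]:
            changes.append(("add_attribute", rel, attr))
    return sorted(changes)

old = {"sales": {"id", "date", "amount"}}
new = {"sales": {"id", "date", "amount", "region"}, "returns": {"id", "date"}}
print(detect_changes(old, new))
```

Non-adding changes (drops) can be propagated through the existing mappings directly, while each detected adding change would be passed on to the diagnosis stage.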
|