  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Integration of Ontology Alignment and Ontology Debugging for Taxonomy Networks

Ivanova, Valentina January 2014 (has links)
Semantically-enabled applications, such as ontology-based search and data integration, take into account the semantics of the input data in their algorithms. Such applications often use ontologies, which model the application domains in question, as well as alignments, which provide information about the relationships between the terms in the different ontologies. The quality and reliability of the results of such applications depend directly on the correctness and completeness of the ontologies and alignments they utilize. Traditionally, ontology debugging discovers defects in ontologies and alignments and provides means for improving their correctness and completeness, while ontology alignment establishes the relationships between the terms in the different ontologies, thus addressing completeness of alignments. This thesis focuses on the integration of ontology alignment and debugging for taxonomy networks which are formed by taxonomies, the most widely used kind of ontologies, connected through alignments. The contributions of this thesis include the following. To the best of our knowledge, we have developed the first approach and framework that integrate ontology alignment and debugging, and allow debugging of modelling defects both in the structure of the taxonomies as well as in their alignments. As debugging modelling defects requires domain knowledge, we have developed algorithms that employ the domain knowledge intrinsic to the network to detect and repair modelling defects. Further, a system has been implemented and several experiments with real-world ontologies have been performed in order to demonstrate the advantages of our integrated ontology alignment and debugging approach. For instance, in one of the experiments with the well-known ontologies and alignment from the Anatomy track in Ontology Alignment Evaluation Initiative 2010, 203 modelling defects (concerning incomplete and incorrect information) were discovered and repaired.
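The defect-detection idea above — using the domain knowledge intrinsic to the network to find incomplete information — can be sketched minimally: given two taxonomies and an alignment between them, an is-a relation implied by one taxonomy but absent from its aligned counterpart is a candidate modelling defect. The taxonomies, alignment, and concept names below are hypothetical, and the thesis's actual algorithms are considerably more involved.

```python
# Sketch: detect candidate incompleteness defects in taxonomy A by
# comparing it, through an alignment, with taxonomy B.

def ancestors(taxonomy, concept):
    """All ancestors of a concept via is-a edges (map: child -> parents)."""
    seen = set()
    stack = [concept]
    while stack:
        c = stack.pop()
        for p in taxonomy.get(c, []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def missing_is_a(tax_a, tax_b, alignment):
    """Report is-a relations implied by tax_b (via aligned concepts)
    but missing from tax_a: candidate incompleteness defects."""
    defects = []
    for a_child, b_child in alignment.items():
        for b_anc in ancestors(tax_b, b_child):
            # map the B-side ancestor back to an A-side concept, if aligned
            a_anc = next((a for a, b in alignment.items() if b == b_anc), None)
            if a_anc and a_anc not in ancestors(tax_a, a_child):
                defects.append((a_child, a_anc))
    return defects

# Taxonomy A lacks the edge femur -> bone_a that taxonomy B implies.
tax_a = {"femur": ["limb_part"], "bone_a": ["limb_part"]}
tax_b = {"thigh_bone": ["bone"], "bone": ["anatomical_structure"]}
alignment = {"femur": "thigh_bone", "limb_part": "anatomical_structure",
             "bone_a": "bone"}
print(missing_is_a(tax_a, tax_b, alignment))  # -> [('femur', 'bone_a')]
```

Repairing the reported defect would mean either adding the missing is-a edge to taxonomy A or rejecting the alignment mapping as incorrect — the two cases the abstract describes as incomplete versus incorrect information.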
22

Learning ontology from Web documents for supporting Web query

Hsueh, Ju-Fen 28 August 2003 (has links)
This thesis proposes an ontology-based query expansion mechanism. Automatic query expansion has facilitated web page search in several ways: an external knowledge resource can help users identify the search domain and effective keywords. An ontology serves as the metadata of a knowledge domain, and queries can be expanded in different ways based on it. In this research, an ontology learning process is implemented: with no initial ontology as a backbone, a domain ontology is constructed semi-automatically from World Wide Web documents. Three expansion approaches based on concepts and their relations are proposed. The ontology learning results and the expansion approaches are evaluated by comparing the different search results in a typical IR system.
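As a rough illustration of the kind of expansion described (not the thesis's implementation), a query term can be expanded with concepts reachable through selected ontology relations. The toy ontology and relation names below are invented for the sketch.

```python
# Sketch: expand query terms with related concepts from a toy ontology.
# Ontology content and relation names are illustrative assumptions.

ontology = {
    "laptop": {"synonyms": ["notebook"], "broader": ["computer"],
               "narrower": ["ultrabook"]},
    "computer": {"synonyms": ["pc"], "broader": ["hardware"],
                 "narrower": ["laptop"]},
}

def expand_query(terms, ontology, relations=("synonyms", "narrower")):
    """Expand each term with concepts reachable via the chosen relations."""
    expanded = list(terms)
    for term in terms:
        for rel in relations:
            for related in ontology.get(term, {}).get(rel, []):
                if related not in expanded:
                    expanded.append(related)
    return expanded

print(expand_query(["laptop"], ontology))
# -> ['laptop', 'notebook', 'ultrabook']
```

The thesis's three expansion approaches could plausibly correspond to different choices of the `relations` parameter (e.g. synonyms only, broader concepts, or narrower concepts), though that mapping is an assumption here.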
23

A Prototype for Automating Ontology Learning and Ontology Evolution

Wohlgenannt, Gerhard, Belk, Stefan, Schett, Matthias January 2013 (has links) (PDF)
Ontology learning supports ontology engineers in the complex task of creating an ontology. Updating ontologies at regular intervals greatly increases the need for expensive expert contributions, which naturally leads to efforts to automate the process wherever applicable. This paper presents a model for automated ontology learning and a prototype which demonstrates the feasibility of the proposed approach in learning lightweight domain ontologies. The system learns ontologies from heterogeneous sources periodically and delegates all evaluation processes, e.g. the verification of new concept candidates, to a crowdsourcing framework which currently relies on Games with a Purpose. Furthermore, we sketch ontology evolution experiments to trace trends and patterns facilitated by the system. (authors' abstract)
24

A Top-Domain Ontology for Software Testing

Srikanth, Rakesh, Ahmad, Asman January 2016 (has links)
In the software testing process, a large amount of information is required and generated. This information can be stored as knowledge that needs to be managed and maintained using principles of knowledge management. Ontologies can act as a bridge by representing this testing knowledge in an accessible and understandable way.
25

The development of a fuzzy expert system to help top decision makers in political and investment domains

Alshayji, Sameera January 2012 (has links)
The world’s increasing interconnectedness and the recent increase in the number of notable regional and international events pose ever greater challenges for political decision-making, especially the decision to strengthen bilateral economic relationships between friendly nations. Typically, such critical decisions are influenced by factors and variables based on heterogeneous and vague information spread across different domains. A serious problem the decision-maker faces is the difficulty of building efficient political decision support systems (DSS) from such heterogeneous factors. One must take many factors into account, for example language (natural or human language), the availability, or lack thereof, of precise data (vague information), and possible consequences (rule conclusions). The basic concept is a linguistic variable whose values are words rather than numbers and are therefore closer to human intuition; a common language is thus needed to describe such information, which requires human knowledge for interpretation. To achieve robustness and efficiency of interpretation, we need a method that can generate high-level knowledge and information integration. Fuzzy logic is based on natural language and tolerant of imprecise data, which makes it well suited to this situation. In this thesis, we propose to use an ontology to integrate the scattered information resources from the political and investment domains. The process starts with understanding each concept and extracting key ideas and relationships between sets of information by constructing an object-paradigm ontology. Re-engineering according to the object paradigm (OP) improved the quality of the developed ontology, since this conceptualization yields a more expressive, reusable object and temporal ontology. Fuzzy logic was then integrated with the ontology.
A fuzzy ontology membership value reflecting the strength of an inter-concept relationship was used consistently to represent pairs of concepts across the ontology. Each concept is assigned a fixed numerical value representing its consistency, computed as a function of the strength of all the relationships associated with the concept. Fuzzy expert systems enable one to weigh the consequences (rule conclusions) of certain choices based on vague information. Rule conclusions follow from rules composed of two parts: the if antecedent (input) and the then consequent (output). With fuzzy expert systems, one uses the Fuzzy Logic Toolbox graphical user interface (GUI) tools to build a fuzzy inference system (FIS) to aid decision-making. This research comprises four main phases to develop a prototype architecture for an intelligent DSS that can help top political decision makers.
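A minimal sketch of the kind of fuzzy if-then inference described above, in plain Python rather than the Fuzzy Logic Toolbox the thesis uses; the input variables, membership functions, and rule weights are illustrative assumptions, not the actual model:

```python
# Sketch: two fuzzy if-then rules with min as fuzzy AND and a
# weighted-average defuzzification. All variables are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(stability, trade_volume):
    """Return a 0-1 score for how strongly to pursue a bilateral
    relationship, from two crisp inputs on a 0-10 scale."""
    # Antecedents: fuzzify the crisp inputs
    high_stability = tri(stability, 4, 10, 16)
    low_stability = tri(stability, -6, 0, 6)
    high_trade = tri(trade_volume, 4, 10, 16)

    # Rules combine antecedents with min (fuzzy AND)
    r1 = min(high_stability, high_trade)   # -> strengthen (consequent 0.9)
    r2 = low_stability                     # -> hold back (consequent 0.1)

    # Weighted-average defuzzification over the fired rules
    total = r1 + r2
    return (r1 * 0.9 + r2 * 0.1) / total if total else 0.5

print(round(infer(8, 7), 3))   # high stability, decent trade -> 0.9
print(round(infer(1, 7), 3))   # low stability dominates      -> 0.1
```

In a full Mamdani-style FIS the consequents would themselves be fuzzy sets aggregated before defuzzification; the fixed consequent values above make this a simplified Sugeno-style variant.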
26

Evaluating a Semantic Approach to Address Data Interoperability

Tewolde, Noh Teamrat January 2014 (has links)
Semantic approaches have been used to facilitate interoperability in different fields of study. Current literature, however, shows that the semantic approach has not been used to facilitate the interoperability of addresses across domains. Addresses are important reference data used to identify locations and/or delivery points. Interoperability of address data across address or application domains is important because it allows address data, addressing software and tools to be shared across domains. The aim of this research has been to evaluate how a semantic (ontology-based) approach could facilitate address data interoperability, and what the challenges and benefits of such an approach are. To test the hypothesis and answer the research problems, a multi-tier ontology architecture was designed to integrate address data with different levels of granularity across domains. A four-tier hierarchy of ontologies was argued to be the optimal architecture for address data interoperability. At the top of the hierarchy is the Foundation Tier, which includes vocabularies for location-related information together with semantic language rules and concepts. The second tier holds an address reference ontology (the Base Address Ontology, BAO), developed to facilitate interoperability across address domains; developing an optimal address reference ontology was one of the major goals of the research. Different domain ontologies were developed at the third tier, extending the vocabulary of the BAO with domain-specific concepts. At the bottom of the hierarchy are application ontologies designed for specific purposes within one or more address domains. Multiple scenarios of address data usage were considered to answer the research questions from different perspectives.
Two interoperable address systems were developed as proof of concept for the semantic approach. These interoperable environments were created using the UKdata+UPUdata ontology and the UKpostal ontology, which illustrate different use cases of ontologies that facilitate interoperability. Ontology reasoning, inference, and SPARQL query tools were used to share, exchange, and process address data across address domains. In the UKdata+UPUdata ontology, ontology inference exchanged address data attributes between the UK administrative and UK postal service address data systems. SPARQL queries were, furthermore, run to extract and process information from different perspectives of a single address domain and from the combined perspectives of the two (UK administrative and UK postal) address domains. The second interoperable system (the UKpostal ontology) illustrated the use of ontology inference tools to share address data between two address data systems that provide different perspectives of one domain. / Dissertation (MSc)--University of Pretoria, 2014. / tm2015 / Computer Science / MSc / Unrestricted
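In the spirit of the cross-domain queries described above (though not using the actual ontologies), a toy in-memory triple store can show how a shared address resource lets a question posed from one domain be answered with data from the other; all predicates and data below are hypothetical.

```python
# Sketch: administrative and postal views of the same address, linked
# through the shared resource 'addr1'. Predicates are invented.

triples = {
    ("addr1", "hasMunicipality", "Pretoria"),     # administrative view
    ("addr1", "hasProvince", "Gauteng"),
    ("addr1", "hasPostcode", "0002"),             # postal view
    ("addr1", "hasDeliveryPoint", "PO Box 12"),
}

def query(triples, subject=None, predicate=None):
    """Return objects matching an optional subject/predicate pattern,
    a crude stand-in for a SPARQL basic graph pattern."""
    return sorted(o for s, p, o in triples
                  if (subject is None or s == subject)
                  and (predicate is None or p == predicate))

# Cross-domain lookup: the postal postcode of an administratively
# identified address -- possible only because both views share 'addr1'.
print(query(triples, subject="addr1", predicate="hasPostcode"))  # -> ['0002']
```

A real implementation would state the link as an OWL axiom (e.g. identity or subclass relations between the two domains' address classes) and let a reasoner infer the shared attributes, rather than relying on a common subject identifier.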
27

AUTOMATIC SELECTION OF MEDIATING ONTOLOGY FOR ALIGNING BIOMEDICAL ONTOLOGIES

Xia, Weiguo 23 November 2015 (has links)
No description available.
28

Ontology Based Framework for Conceptualizing Human Affective States and Their Influences

Abaalkhail, Rana 12 November 2018 (has links)
The study of human affective states and their influences has long been a research interest in psychology. Fortunately, the affective computing paradigm allows us to use theories and findings from psychology in the representation and development of human affective applications. However, because of the complexity of the subject, it is possible to misunderstand concepts shared via human and/or computer communications. With technological innovations such as the Semantic Web and the Web Ontology Language (OWL), there is a growing need for computers to better understand human affective states and their influences. An ontology can be used to represent human affective states and their influences in a machine-understandable format; indeed, ontologies provide powerful tools for making sense of data. This thesis proposes HASIO, a Human Affective States and their Influences Ontology, designed on the basis of existing psychological theories. HASIO was developed to represent the knowledge necessary to model affective states and their influences in a computerized format. It describes the human affective states (Emotion, Mood and Sentiment) and their influences (Personality, Need and Subjective well-being) and conceptualizes their models and recognition methods. HASIO also represents the relationships between affective states and the factors that influence them. We surveyed and analyzed existing ontologies of human affective states and their influences to establish the significance and benefit of developing HASIO. We follow the Methontology approach, a comprehensive engineering methodology for ontology building, to design and build HASIO. An important aspect in determining the ontology scope is competency questions (CQs); we formulate the HASIO CQs by analyzing resources from psychological theories, available lexicons and existing ontologies.
In this thesis, we present the development, modularization and evaluation of HASIO. HASIO benefits from modularization by dividing the whole ontology into self-contained modules that are easy to reuse and maintain. For validation, the ontology is evaluated through a question answering system (HASIOQA), a task-based evaluation, for which we design and develop a natural language interface. Moreover, the proposed ontology was checked with the Ontology Pitfall Scanner for verification and correctness against several criteria. Furthermore, HASIO was applied to sentiment analysis on different Twitter datasets: we designed and developed a tweet polarity calculation algorithm, compared the ontology-based results with a machine learning technique, and demonstrated the advantage of using an ontology in sentiment analysis.
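A lexicon-based polarity calculation of the general kind described can be sketched as follows; the lexicon, negation handling, and output labels are illustrative assumptions, not the thesis's algorithm (which draws its sentiment knowledge from the ontology rather than a flat word list).

```python
# Sketch: score a tweet by summing lexicon word scores, flipping the
# sign of the next sentiment word after a negation. Lexicon is invented.

LEXICON = {"good": 1, "great": 2, "happy": 1, "bad": -1, "awful": -2, "sad": -1}
NEGATIONS = {"not", "no", "never"}

def polarity(tweet):
    """Classify a tweet as positive, negative, or neutral."""
    score, negate = 0, False
    for word in tweet.lower().split():
        word = word.strip(".,!?")
        if word in NEGATIONS:
            negate = True            # negation applies to next sentiment word
        elif word in LEXICON:
            score += -LEXICON[word] if negate else LEXICON[word]
            negate = False
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("Not a good day, really awful"))  # -> negative
print(polarity("great and happy"))               # -> positive
```

Comparing such a rule-based scorer against a trained classifier on labelled tweets is the kind of ontology-versus-machine-learning comparison the abstract reports.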
29

ONTOSELF: A 3D ONTOLOGY VISUALIZATION TOOL

Somasundaram, Ramanathan 17 April 2007 (has links)
No description available.
30

Requirements-Oriented Methodology for Evaluating Ontologies

Yu, Jonathan, Jonathan.Yu@csiro.au January 2009 (has links)
Ontologies play key roles in many applications today. Whether an ontology is newly specified or reused from elsewhere, it is therefore important to determine its suitability for the application at hand. This need is addressed by ontology evaluation, which determines qualities of an ontology using methodologies, criteria or measures. However, to address the ontology requirements of a given application, it is necessary to determine the appropriate set of criteria and measures. In this thesis, we propose a Requirements-Oriented Methodology for Evaluating Ontologies (ROMEO). ROMEO outlines a methodology for determining appropriate ontology evaluation methods, incorporating a suite of existing ontology evaluation criteria and measures. It helps ontology engineers determine relevant evaluation measures for a given set of ontology requirements by linking these requirements to existing measures through a set of questions. There are three main parts to ROMEO. First, ontology requirements are elicited from a given application and form the basis for an appropriate evaluation of ontologies. Second, appropriate questions are mapped to each ontology requirement. Third, relevant ontology evaluation measures are mapped to each of those questions. From the ontology requirements of an application, ROMEO thus determines appropriate evaluation methods by mapping applicable questions to the requirements and those questions to appropriate measures. In this thesis, we apply the ROMEO methodology to obtain appropriate ontology evaluation methods for ontology-driven applications through case studies of Lonely Planet and Wikipedia. Since the mappings determined by ROMEO depend on the analysis of the ontology engineer, these mappings need validation.
In addition to proposing the ROMEO methodology, this thesis therefore proposes a method for the empirical validation of ROMEO mappings. We report on two empirical validation experiments, carried out in controlled environments, that examine the performance of a set of ontologies over a set of tasks; the ontologies vary on the specific quality or measure being examined. The experiments validate two mappings between questions and their associated measures, drawn from the case studies of Lonely Planet and Wikipedia. As these mappings are application-independent, they may be reusable in subsequent applications of the ROMEO methodology. Using a ROMEO mapping from the Lonely Planet case study, we validate a mapping of a coverage question to the F-measure; this experiment was inconclusive and requires further analysis. Using a ROMEO mapping from the Wikipedia case study, we carry out a separate validation experiment examining a mapping between an intersectedness question and the tangledness measure; its results showed the mapping to be valid. For future work, we propose additional validation experiments for other mappings identified between questions and measures.
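The coverage-to-F-measure mapping mentioned above rests on precision and recall over term sets: how much of the ontology's vocabulary is relevant, and how much of the application's required vocabulary is covered. A small sketch with invented term sets (not the case-study data):

```python
# Sketch: F-measure as a coverage measure over ontology vs. required
# vocabularies. The term sets are illustrative.

def f_measure(ontology_terms, required_terms, beta=1.0):
    """Harmonic combination of precision (fraction of the ontology that
    is relevant) and recall (fraction of the requirement covered)."""
    overlap = len(ontology_terms & required_terms)
    if overlap == 0:
        return 0.0
    precision = overlap / len(ontology_terms)
    recall = overlap / len(required_terms)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

required = {"hotel", "hostel", "camping", "restaurant"}
ontology = {"hotel", "hostel", "museum", "restaurant"}
print(round(f_measure(ontology, required), 3))  # -> 0.75
```

Setting `beta` above 1 weights recall more heavily, which may be the more natural choice for a coverage question where missing required terms matters more than extra ones.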
