1. Detecting and managing relationships between ontologies. Allocca, Carlo, January 2011.
The Semantic Web can be seen as a web-scale knowledge base in which ontologies are intended to provide knowledge engineering and artificial intelligence support for modelling intelligent applications. Semantic Web Search Engines (SWSEs), such as Watson, Swoogle and Sindice, represent the state of the art in computer-based systems that support users in finding, and therefore reusing, the formalised knowledge and semantic data (ontologies) available online. However, finding ontologies is a complex process. It generally requires formulating multiple queries, browsing many pages of results and assessing the returned ontologies against each other to obtain a relevant and adequate subset for the intended use. In this thesis we mainly investigate the following research hypothesis: part of the difficulty in searching for ontologies through SWSE systems comes from the lack of structure in the search results, where ontologies that are implicitly related to each other are presented as disconnected; making explicit the relationships between ontologies in the results of SWSEs improves their efficiency and user satisfaction in ontology search. We use an ontology-based framework both to identify the characteristics of such relationships and to design mechanisms for detecting and managing them in a large-scale ontology repository. We show how this framework is used on top of the repository of the Watson semantic web search engine, and integrated with its interface, to provide explicit information regarding the various types of relationships that ontologies share in the results from Watson. We evaluate the benefit of this approach to ontology search in a user study, showing through multiple indicators that finding ontologies is easier and more efficient when information regarding relationships between ontologies is used to provide links and structuring mechanisms in the results of the search engine.
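The kind of relationship detection described in this abstract can be illustrated with a minimal sketch, assuming each ontology is reduced to a set of term URIs. The relation labels below are illustrative only, not necessarily the taxonomy used in the Watson framework:

```python
# Hypothetical sketch: classifying the relationship between two ontologies
# in a search-result set by comparing their sets of term URIs. The relation
# names here are invented for illustration.

def detect_relationship(terms_a, terms_b):
    """Classify the relationship between two ontologies by term overlap."""
    a, b = set(terms_a), set(terms_b)
    if a == b:
        return "equivalent"
    if a < b:
        return "included-in"   # a is a proper subset of b
    if a > b:
        return "includes"
    if a & b:
        return "overlaps"
    return "disjoint"

pizza_v1 = {"ex:Pizza", "ex:Topping"}
pizza_v2 = {"ex:Pizza", "ex:Topping", "ex:Base"}
print(detect_relationship(pizza_v1, pizza_v2))  # included-in
```

A search interface could use such labels to group implicitly related ontologies (e.g. successive versions) instead of listing them as disconnected results.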

2. Semantically-enabled keyword search for expert witness discovery applied to a legal professional network. Sitthisarn, Siraya, January 2012.
Legal professionals often need to discover expert witnesses with specialised expertise and experience to give an expert opinion in a legal dispute resolution case. The common practice is that legal professionals use their personal networks and internet searches to discover and verify suitable expert witnesses. In addition, they may use online systems such as directory services and social network sites (e.g. LinkedIn). However, these systems describe experts using broad categories and shallow vocabularies, making it difficult to identify expert witnesses for a specific domain such as toy safety dispute cases. Keyword searches in these systems are usually based on conventional data models or unstructured text files, which means that although the search results have high recall, they have low precision, so many irrelevant expert witnesses are identified. This thesis reports on the potential of using semantic web technology and social networking to better support expert witness discovery and improve the precision of keyword search. The case study used in this research was drawn from the toy safety disputes domain, and the research was primarily advised by a barrister with good knowledge of this area of law. The thesis reports on a novel "semantically-enabled keyword search for expert witness discovery" that has been developed. A semantically enriched expert witness knowledge base has been built to enhance the expert witness profile for use within the social network. The semantic data model enabled information about expert witnesses to be stored and retrieved with higher precision and recall. Unfortunately, formal semantic query languages (such as SPARQL) used to search the knowledge base require the user to understand the ontology and master the syntax. For this reason, a prototype "Semantic and Keyword interface engine" (SKengine) was developed. The SKengine automatically generates and selects a set of SPARQL queries derived from the user-input keywords: it extracts the possible meanings of the keywords from the domain-specific knowledge base, then generates and selects the SPARQL query that best fits the keywords entered by the user. Finally, the generated SPARQL query is executed to retrieve the selected expert witness information from the knowledge base, and the result of the semantic query is returned to the user. To generate the SPARQL query, the SKengine uses a novel "fix-root query graph construction" algorithm, which was demonstrated to be sufficient for the discovery of expert witnesses. The algorithm avoids generating query trees with irrelevant roots that are not involved in expert witness discovery. The experimental results showed that the prototype significantly improved the precision and relevance of the query results. In addition, an evaluation was conducted to assess the time performance of the SKengine.
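A rough sketch of the keyword-to-SPARQL idea described above follows. All vocabulary names (the `ew:` prefix, `ExpertWitness`, `hasExpertise`) and the lexicon entries are invented for illustration and are not the thesis's actual ontology; the fixed root mirrors the "fix-root" idea of always anchoring the query at the expert-witness class:

```python
# Hypothetical sketch: translating user keywords into a SPARQL query with a
# fixed root. The toy lexicon maps each keyword to an (property, value) pair;
# all names are illustrative assumptions.

LEXICON = {
    "toy": ("ew:hasExpertise", '"toy safety"'),
    "chemistry": ("ew:hasExpertise", '"chemistry"'),
}

def build_query(keywords):
    # Fixed root: every query tree is anchored at the expert-witness class,
    # avoiding trees rooted at irrelevant entities.
    patterns = ["?expert a ew:ExpertWitness ."]
    for kw in keywords:
        if kw in LEXICON:
            prop, value = LEXICON[kw]
            patterns.append(f"?expert {prop} {value} .")
    body = "\n  ".join(patterns)
    return f"SELECT ?expert WHERE {{\n  {body}\n}}"

print(build_query(["toy"]))
```

A real system would rank several candidate queries per keyword sense and execute the best-fitting one against the knowledge base.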

3. Generic searches that result in semantic web matchmaking and ranking. He, Xin, January 2011.
The Semantic Web, as a complement to the World Wide Web, has attracted much attention from academic and industrial organisations. Part of the Semantic Web vision is to provide web-scale access to semantically described content. In the same way as web pages are the basic building blocks of the conventional Web, RDF resources are the fundamental components of the Semantic Web. Therefore, searching for resources on the Semantic Web is equivalent in importance to the retrieval of conventional web pages. In recent years, research efforts have focused on generic querying and matchmaking approaches tailored to Semantic Web data, known as Semantic Web search engines. However, these systems have disadvantages, especially in the indexing and ranking schemes they deploy. In this study, by analysing the limitations of the existing efforts and considering the specific way in which semantic data is stored, a Semantic Web query solution is proposed, powered by an engine called xhSearch. xhSearch is primarily a unary relation-centred system: it does not assume that the resources in RDF datasets belong to any specific domain, or that the structure of each resource is known prior to the parsing of RDF datasets. This thesis demonstrates how RDF graph structures are indexed and explored using a specific tree-based model; how query performance is improved by using internal identifiers; and how textual information is effectively searched by reusing existing information retrieval technologies. Moreover, a ranking mechanism is proposed that takes into account multiple factors, including relevance, importance, and query length. The experimental tests performed with xhSearch have demonstrated scalable performance.

4. Developing a compositional ontology alignment framework for unifying business and engineering domains. Azzam, Said Rabah, January 2012.
In the context of the Semantic Web, ontologies refer to the consensual and formal description of shared concepts in a domain. Ontologies are said to be a way to aid communication between humans and machines, and also between machines for agent communication. The importance of ontologies for providing a shared understanding of common domains, and as a means of data exchange at the syntactic and semantic level, has increased considerably in recent years. Ontology management has therefore become a significant task in making distributed and heterogeneous knowledge bases available to end users. Ontology alignment is the process by which ontologies from different domains can be matched and processed further together, hence sharing a common understanding of the structure of information among different people. This research starts from a comprehensive review of the current development of ontologies, the concepts of ontology alignment and relevant approaches. The first motivation of this work is to summarise the common features of ontology alignment and to identify underdeveloped areas of ontology alignment. It then examines how complex businesses can be designed and managed through semantic modelling, which helps define data entities and the relationships between them, provides the ability to abstract different kinds of data, and gives an understanding of how the data elements relate. The main contribution of this work is a framework for handling an important category of ontology alignment based on the logical composition of classes, especially the case where a class from one domain becomes a logical prerequisite (assumption) of a class from a different domain (commitment), which only holds if the class from the first domain is valid. Under this logic, previously unalignable or misaligned classes can be aligned in a significantly improved manner. The well-known rely/guarantee method has been adopted to express such relationships between newly alignable classes clearly. The proposed methodology has been implemented and evaluated on a realistic case study.

5. Link integrity for the Semantic Web. Vesse, Robert, January 2012.
The usefulness and usability of data on the Semantic Web is ultimately reliant on the ability of clients to retrieve Resource Description Framework (RDF) data from the Web. When RDF data is unavailable, clients reliant on that data may either fail to function entirely or behave incorrectly. As a result, there is a need to investigate and develop techniques that aim to ensure that some data is still retrievable, even in the event that the primary source of the data is unavailable. Since this is essentially the classic link integrity problem from hypermedia and the Web, we look at the range of techniques suggested by past research and attempt to adapt them to the Semantic Web. Having studied past research, we identified two potentially promising strategies for solving the problem: 1) Replication and Preservation; and 2) Recovery. Using techniques developed to implement these strategies for hypermedia and the Web as a starting point, we designed our own implementations, adapted appropriately for the Semantic Web. We describe the design, implementation and evaluation of our adaptations before going on to discuss the implications of using such techniques. In this research we show that such approaches can successfully apply link integrity to a variety of Semantic Web datasets, but that further research is needed before such solutions can be widely deployed.
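A minimal sketch of the "Replication and Preservation" strategy described above: RDF retrieved from the live Web is cached, and the cache serves as a fallback replica when the primary source becomes unavailable. The fetch function and cache layout are stand-ins, not the thesis's actual implementation:

```python
# Illustrative sketch of replication-based link integrity: successful
# retrievals are preserved as replicas; when the primary source fails,
# the replica is served instead. All names are invented for illustration.

cache = {}

def fetch_live(url):
    raise IOError("source unavailable")   # simulate a dead link

def fetch_with_integrity(url, live_fetch):
    try:
        data = live_fetch(url)
        cache[url] = data                 # preserve a replica on success
        return data, "live"
    except IOError:
        if url in cache:
            return cache[url], "cache"    # recover from the replica
        raise

# A replica preserved from an earlier successful fetch:
cache["http://example.org/data.rdf"] = "<ex:s> <ex:p> <ex:o> ."
data, origin = fetch_with_integrity("http://example.org/data.rdf", fetch_live)
print(origin)   # cache
```

The "Recovery" strategy would instead attempt to locate an alternative source for the missing data, e.g. by querying other endpoints for triples about the same resource.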

6. A Bayesian network model for entity-oriented semantic web search. Koumenides, Christos, January 2013.
The rise of standards for semi-structured, machine-processable information and the increasing awareness of the potential of a Semantic Web are leading the way towards a more meaningful Web of data. Questions regarding the location and retrieval of relevant data remain fundamental to achieving a good integration of disparate resources and the effective delivery of data items to the needs of particular applications and users. We consider the basis of such a framework to be an Information Retrieval system that can cope with semi-structured data. This thesis examines the development of an Information Retrieval model to support text-based search over formal Semantic Web knowledge bases. Our semantic search model adopts Bayesian networks as a unifying modelling framework to represent, and make explicit in the retrieval process, the presence of multiple relations that potentially link semantic resources together or with primitive data values, as is customary with Semantic Web data. We achieve this by developing a generative model capable of expressing Semantic Web data and exposing their structure to statistical scrutiny and the generation of inference procedures. We employ a variety of techniques to bring together a unified ranking strategy with a sound mathematical foundation and potential for further extensions and modifications. Part of our goal in designing this model has been to enable reasoning with more complex or expressive information requests, with semantics specified explicitly by users or incorporated via more implicit bindings. The foundations of the model offer a rich and extensible setting to satisfy an interesting set of queries and incorporate a variety of techniques for fusing probabilistic evidence, both new and familiar. Empirical evaluation of the model is carried out using conventional Recall/Precision effectiveness metrics to demonstrate its performance over a collection of government catalogue records transposed into RDF. Statistical significance tests are employed to compare different implementations of the model over query sets of varying complexity.
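One familiar technique for fusing probabilistic evidence in Bayesian networks is the noisy-OR combination; the sketch below uses it purely to illustrate the flavour of multi-relation evidence fusion, and is not the scoring function actually defined in this thesis. Each relation linking a resource to a query term contributes an evidence probability, and the combined score rises with each additional piece of evidence:

```python
# Illustrative noisy-OR evidence fusion: a resource is relevant if any of
# the independent evidence sources "fires". The per-relation probabilities
# below are invented; the thesis's actual model may combine evidence
# differently.

def noisy_or(evidence):
    """P(relevant) = 1 - prod(1 - p_i) over per-relation evidence p_i."""
    p = 1.0
    for e in evidence:
        p *= (1.0 - e)
    return 1.0 - p

# A resource matched via its title (strong evidence) and via a linked
# resource's description (weak evidence):
score = noisy_or([0.8, 0.3])
print(round(score, 2))   # 0.86
```

The appeal of such combination rules is that partial, weak matches across several relations can still accumulate into a high relevance score, which plain keyword matching over one field cannot express.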

7. Towards ontology design patterns to model multiple classification criteria of domain concepts in the Semantic Web. Rodriguez Castro, Benedicto, January 2012.
This thesis explores a very recurrent modeling scenario in ontology design: real-world concepts that can be classified according to multiple criteria. Current ontology modeling guidelines do not explicitly consider this aspect in the representation of such concepts. This void leaves ample room for ad-hoc practices that can lead to unexpected or undesired results in ontology artifacts. The aim is to identify best practices and design patterns for representing such concepts in OWL DL ontologies suitable for deployment on the Web of Data and the Semantic Web. To assist with these issues, an initial set of basic design guidelines is put forward that mitigates the opportunity for ad-hoc modeling decisions in the development of ontologies for the problem scenario described. These guidelines rely upon an existing simplified methodology for facet analysis from the field of Library and Information Science. The outcome of this facet analysis is a Faceted Classification Scheme (FCS) for the concept in question, where in most cases a facet corresponds to a classification criterion. The Value Partition, Class As Property Value and Normalisation Ontology Design Patterns (ODPs) are revisited to produce an ontology representation of an FCS. A comparative analysis between an FCS and the Normalisation ODP in particular revealed key similarities between the elements in the generic structure of both knowledge representation paradigms. These similarities make it possible to establish a series of mappings to transform an FCS into an OWL DL ontology that contains a valid representation of the classification criteria involved in the characterization of the domain concept. An existing FCS example in the domain of "Dishwasher Detergent" and existing ontology examples in the domains of "Pizza", "Wine" and "Fault" (in the context of a computer system) are used to illustrate the outcome of this research.
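The FCS-to-ontology mapping described above can be sketched as follows, assuming the general shape of the Normalisation ODP: each facet (classification criterion) becomes its own superclass, and each facet value a subclass of it, yielding orthogonal class trees. The facet names echo the "Dishwasher Detergent" example, but the specific values and emitted Turtle are invented for illustration:

```python
# Illustrative sketch: emitting a Normalisation-ODP-style class structure
# (as Turtle text) from a faceted classification scheme. Facet values and
# prefixes are invented; this is not the thesis's actual mapping.

fcs = {
    "PhysicalForm": ["Powder", "Gel", "Tablet"],
    "Application": ["HandWash", "MachineWash"],
}

def fcs_to_turtle(concept, facets):
    lines = [f":{concept} a owl:Class ."]
    for facet, values in facets.items():
        lines.append(f":{facet} a owl:Class .")        # one tree per criterion
        for v in values:
            lines.append(f":{v} rdfs:subClassOf :{facet} .")
    return "\n".join(lines)

print(fcs_to_turtle("DishwasherDetergent", fcs))
```

A particular detergent would then be described by intersecting values drawn from the orthogonal trees (e.g. a powder for machine wash), rather than by a single tangled hierarchy mixing the criteria.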

8. Archaeology and the Semantic Web. Isaksen, Leif, January 2011.
This thesis explores the application of Semantic Web technologies to the discipline of Archaeology. Part One (Chapters 1-3) offers a discussion of historical developments in this field. It begins with a general comparison of the supposed benefits of semantic technologies and notes that they partially align with the needs of archaeologists. This is followed by a literature review which identifies two different perspectives on the Semantic Web: Mixed-Source Knowledge Representation (MSKR), which focuses on data interoperability between closed systems, and Linked Open Data (LOD), which connects decentralized, open resources. Part One concludes with a survey of 40 Cultural Heritage projects that have used semantic technologies and finds that they are indeed divided between these two visions. Part Two (Chapters 4-7) uses a case study, Roman Port Networks, to explore ways of facilitating MSKR. Chapter 4 describes a simple ontology and vocabulary framework, by means of which independently produced digital datasets pertaining to amphora finds at Roman harbour sites can be combined. The following chapters describe two entirely different approaches to converting legacy data to an ontology-compliant semantic format. The first, TRANSLATION, uses a 'Wizard'-style toolkit. The second, 'Introducing Semantics', is a wiki-based cookbook. Both methods are evaluated and found to be technically capable but socially impractical. The final chapter argues that the reason for this impracticality is the small-to-medium scale typical of MSKR projects. This does not allow for sufficient analytical return on the high level of investment required of project partners to convert and work with data in a new and unfamiliar format. It further argues that the scale at which such investment pays off is only likely to arise in an open and decentralized data landscape. 
Thus, for Archaeology to benefit from semantic technologies would require a severe sociological shift from current practice towards openness and decentralization. Whether such a shift is either desirable or feasible is raised as a topic for future work.

9. Role of description logic reasoning in ontology matching. Reul, Quentin H., January 2012.
Semantic interoperability is essential on the Semantic Web to enable different information systems to exchange data. Ontology matching has been recognised as a means to achieve semantic interoperability on the Web by identifying similar information in heterogeneous ontologies. Existing ontology matching approaches have two major limitations. The first relates to similarity metrics, which provide a pessimistic value when considering complex objects such as strings and conceptual entities. The second relates to the role of description logic reasoning: in particular, most approaches disregard implicit information about entities as a source of background knowledge. In this thesis, we first present a new similarity function, called the degree of commonality coefficient, which computes the overlap between two sets based on the similarity between their elements. The results of our evaluations show that the degree of commonality performs better than traditional set similarity metrics in the ontology matching task. Secondly, we have developed the Knowledge Organisation System Implicit Mapping (KOSIMap) framework, which differs from existing approaches by using description logic reasoning (i) to extract implicit information as background knowledge for every entity, and (ii) to remove inappropriate correspondences from an alignment. The results of our evaluation show that the use of description logic reasoning in the ontology matching task can increase coverage. We identify people interested in ontology matching and reasoning techniques as the target audience of this work.
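The motivation for a degree-of-commonality style metric can be sketched as follows: exact-match set overlap (as in Jaccard) scores near-identical labels like "author" and "authors" as completely different, whereas crediting each element with its best similarity to any element of the other set does not. The string metric and the averaging scheme below are illustrative assumptions, not necessarily the coefficient's exact definition in this thesis:

```python
# Illustrative sketch of similarity-aware set overlap: each element is
# credited with its best match in the other set, and the two directional
# averages are combined. difflib's ratio is a stand-in string metric.

from difflib import SequenceMatcher

def sim(a, b):
    return SequenceMatcher(None, a, b).ratio()

def degree_of_commonality(s1, s2):
    if not s1 or not s2:
        return 0.0
    best1 = sum(max(sim(x, y) for y in s2) for x in s1) / len(s1)
    best2 = sum(max(sim(x, y) for y in s1) for x in s2) / len(s2)
    return (best1 + best2) / 2

a = {"author", "title"}
b = {"authors", "name"}
print(degree_of_commonality(a, a))            # 1.0
print(round(degree_of_commonality(a, b), 2))  # strictly between 0 and 1
```

Jaccard overlap of `a` and `b` would be exactly 0, so the similarity-aware version is strictly less pessimistic on near-matching vocabularies.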