About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Correctness-Aware High-Level Functional Matching Approaches For Semantic Web Services

Elgedawy, Islam Moukhtar, islam_elgedawy@yahoo.com.au January 2007 (has links)
Existing service matching approaches trade precision for recall, creating the need for humans to choose the correct services, which is a major obstacle to automating the service matching and service aggregation processes. To overcome this problem, the matchmaker must automatically determine the correctness of the matching results according to the defined users' goals. That is, only services achieving users' goals are considered correct. This requires the high-level functional semantics of services, users, and application domains to be captured in a machine-understandable format. It also requires the matchmaker to determine the achievement of users' goals without invoking the services. We propose the G+ model to capture the high-level functional specifications of services and users (namely goals, achievement contexts, and external behaviors), providing the basis for automated goal-achievement determination; we also propose the concept substitutability graph to capture application domains' semantics. To avoid the false negatives that result from adopting existing constraint and behavior matching approaches during service matching, we also propose new constraint and behavior matching approaches that match constraints with different scopes and behavior models with different numbers of state transitions. Finally, we propose two correctness-aware matching approaches (direct and aggregate) that semantically match and aggregate semantic web services according to their G+ models, providing the required theoretical proofs and the corresponding verifying simulation experiments.
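The abstract stays at the level of models and proofs, but its central idea — a match counts as correct only if the service's effect achieves the user's goal, with concept equality relaxed through a substitutability relation — can be made concrete with a small sketch. The substitutability pairs, the goal/effect structures, and the service data below are hypothetical placeholders, not taken from the thesis.

```python
# Minimal sketch of correctness-aware matching: a service is returned only if
# its advertised effect achieves the user's goal, where concept equality is
# relaxed to substitutability (e.g. "ETicket" may substitute for "Reservation").
# All data here is illustrative, not from the thesis.

# Hypothetical substitutability graph: concept -> concepts it can substitute for.
SUBSTITUTABILITY = {
    "ETicket": {"Ticket", "Reservation"},
    "Ticket": {"Reservation"},
}

def substitutes(concept: str, target: str) -> bool:
    """True if `concept` equals `target` or may substitute for it."""
    return concept == target or target in SUBSTITUTABILITY.get(concept, set())

def achieves(service_effect: dict, goal: dict) -> bool:
    """A service achieves a goal if every goal concept is covered by a
    substitutable effect concept and the goal's constraints are satisfied."""
    concepts_ok = all(
        any(substitutes(e, g) for e in service_effect["concepts"])
        for g in goal["concepts"]
    )
    constraints_ok = all(
        service_effect["constraints"].get(k) == v
        for k, v in goal["constraints"].items()
    )
    return concepts_ok and constraints_ok

if __name__ == "__main__":
    services = {
        "BookFlight": {"concepts": ["ETicket"], "constraints": {"paid": True}},
        "QuoteFlight": {"concepts": ["PriceQuote"], "constraints": {"paid": False}},
    }
    goal = {"concepts": ["Reservation"], "constraints": {"paid": True}}
    correct = [name for name, eff in services.items() if achieves(eff, goal)]
    print(correct)  # only services whose effects achieve the goal: ['BookFlight']
```
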
102

Utökning av LaTeX med stöd för semantisk information (Extending LaTeX with Support for Semantic Information)

Löfqvist, Ronny January 2007 (has links)
The semantic web is a vision of the Internet's future, where machines and humans can understand the same information. To make this possible, documents have to be provided with metadata in a general language. W3C has created the Web Ontology Language (OWL) for this purpose. This report presents the creation of a LaTeX package that makes it possible to include metadata in PDF files. It also presents how annotations can be created and bound to the generated metadata. With the help of this package it is easy to create PDF documents with automatically generated metadata and annotations.
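The thesis implements this inside LaTeX itself; as a rough analogy for what "metadata embedded in the PDF" means, the sketch below writes Dublin Core XMP fields into an existing PDF with the pikepdf library. The file names and field values are made up, and this is not the package described in the report.

```python
# Illustrative only: attach XMP metadata (Dublin Core fields) to a PDF.
# This mimics the *effect* of the LaTeX package described above, not its mechanism.
# Requires: pip install pikepdf
import pikepdf

with pikepdf.open("report.pdf") as pdf:           # hypothetical input file
    with pdf.open_metadata() as meta:             # XMP metadata block
        meta["dc:title"] = "Utökning av LaTeX med stöd för semantisk information"
        meta["dc:creator"] = ["Ronny Löfqvist"]
        meta["dc:description"] = "PDF with machine-readable semantic metadata"
    pdf.save("report-with-metadata.pdf")
```
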
103

Surviving the Information Explosion: How People Find Their Electronic Information

Alvarado, Christine, Teevan, Jaime, Ackerman, Mark S., Karger, David 15 April 2003 (has links)
We report on a study of how people look for information within email, files, and the Web. When locating a document or searching for a specific answer, people relied on their contextual knowledge of their information target to help them find it, often associating the target with a specific document. They appeared to prefer to use this contextual information as a guide in navigating locally in small steps to the desired document rather than directly jumping to their target. We found this behavior was especially true for people with unstructured information organization. We discuss the implications of our findings for the design of personal information management tools.
104

Context Mediation in the Semantic Web: Handling OWL Ontology and Data Disparity through Context Interchange

Tan, Philip Eik Yeow, Tan, Kian Lee, Madnick, Stuart E. 01 1900 (has links)
The COntext INterchange (COIN) strategy is an approach to solving the problem of interoperability of semantically heterogeneous data sources through context mediation. COIN has used its own notation and syntax for representing ontologies. More recently, the OWL Web Ontology Language is becoming established as the W3C recommended ontology language. We propose the use of the COIN strategy to solve context disparity and ontology interoperability problems in the emerging Semantic Web – both at the ontology level and at the data level. In conjunction with this, we propose a version of the COIN ontology model that uses OWL and the emerging rules interchange language, RuleML. / Singapore-MIT Alliance (SMA)
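COIN's classic motivating case is numeric data reported under different contexts (for example, different currencies or scale factors), where a mediator converts values so that sources and receivers can be compared. The toy converter below is only meant to make that idea concrete — the context attributes, exchange rates, and figures are invented, and real COIN mediation works at the query-rewriting level over ontologies.

```python
# Toy context mediation: each source reports "revenue" under its own context
# (currency and scale factor). A mediator converts values into the receiver's
# context before comparison. Contexts, rates, and figures are illustrative.

EXCHANGE_TO_USD = {"USD": 1.0, "JPY": 0.0067, "EUR": 1.08}   # assumed rates

def mediate(value: float, source_ctx: dict, receiver_ctx: dict) -> float:
    """Convert a value from the source's context into the receiver's context."""
    in_usd = value * source_ctx["scale"] * EXCHANGE_TO_USD[source_ctx["currency"]]
    return in_usd / (receiver_ctx["scale"] * EXCHANGE_TO_USD[receiver_ctx["currency"]])

if __name__ == "__main__":
    sources = {
        "tokyo_feed":  {"value": 12_500, "ctx": {"currency": "JPY", "scale": 1_000_000}},
        "boston_feed": {"value": 90,     "ctx": {"currency": "USD", "scale": 1_000_000}},
    }
    receiver_ctx = {"currency": "USD", "scale": 1_000_000}    # report in millions of USD
    for name, record in sources.items():
        usd_millions = mediate(record["value"], record["ctx"], receiver_ctx)
        print(f"{name}: {usd_millions:.1f} M USD")
```
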
105

Learning Applications based on Semantic Web Technologies

Palmér, Matthias January 2012 (has links)
The interplay between learning and technology is a growing field that is often referred to as Technology Enhanced Learning (TEL). Within this context, learning applications are software components that are useful for learning purposes, such as textbook replacements, information gathering tools, communication and collaboration tools, knowledge modeling tools, rich lab environments that allow experiments, etc. When developing learning applications, the choice of technology depends on many factors: for instance, who the intended end-users are and how many there are, whether in-application collaboration must be supported, platform restrictions, the expertise of the developers, and requirements to interoperate with other systems or applications. This thesis provides guidance on how to develop learning applications based on Semantic Web technology. The focus on Semantic Web technology is due to its basic design, which allows expression of knowledge at web scale. It also allows keeping track of who said what, providing subjective expressions in parallel with more authoritative knowledge sources. The intended readers of this thesis include practitioners such as software architects and developers as well as researchers in TEL and other related fields.

The empirical part of this thesis is the experience from the design and development of two learning applications and two supporting frameworks. The first learning application is the web application Confolio/EntryScape, which allows users to collect files and online material into personal and shared portfolios. The second learning application is the desktop application Conzilla, which provides a way to create and navigate a landscape of interconnected concepts. Based upon the experience of design and development as well as on more theoretical considerations outlined in this thesis, three major obstacles have been identified.

The first obstacle is the lack of non-expert, user-friendly solutions for presenting and editing Semantic Web data that are not hard-coded to use a specific vocabulary. The thesis presents five categories of tools that support editing and presentation of RDF. It also discusses a concrete software solution together with a list of the most important features that have crystallized during six major iterations of development.

The second obstacle is the lack of solutions that can handle both private and collaborative management of resources together with related Semantic Web data. The thesis presents five requirements for a reusable read/write RDF framework and a concrete software solution that fulfills these requirements. A list of features that have appeared during four major iterations of development is also presented.

The third obstacle is the lack of recommendations for how to build learning applications based on Semantic Web technology. The thesis presents seven recommendations in terms of architectures, technologies, frameworks, and type of application to focus on. In addition, as part of the preparatory work to overcome the three obstacles, the thesis also presents a categorization of applications and a derivation of the relations between standards, technologies, and application types.
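The first obstacle — presenting RDF without hard-coding a vocabulary — can be hinted at with a few lines of rdflib: instead of reading a fixed set of known properties, the view simply iterates over whatever predicate/object pairs a resource happens to have and falls back to rdfs:label for display names. The data and URIs below are invented; Confolio/EntryScape's actual rendering machinery is considerably richer.

```python
# Vocabulary-agnostic display of an RDF resource with rdflib: iterate over all
# predicate/object pairs rather than reading a fixed, hard-coded set of properties.
# The graph content and URIs are illustrative only.
from rdflib import Graph, Literal, Namespace, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.entry1, RDFS.label, Literal("My portfolio entry")))
g.add((EX.entry1, EX.topic, Literal("Semantic Web")))
g.add((EX.entry1, EX.createdBy, EX.matthias))
g.add((EX.matthias, RDFS.label, Literal("Matthias")))

def display_name(node) -> str:
    """Prefer an rdfs:label if one exists, otherwise show the raw node."""
    label = g.value(node, RDFS.label)
    return str(label) if label is not None else str(node)

# Render every statement about the resource, whatever vocabulary it uses.
for predicate, obj in g.predicate_objects(EX.entry1):
    print(f"{display_name(predicate)}: {display_name(obj)}")
```
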
106

Maintaining Integrity Constraints in Semantic Web

Fang, Ming 10 May 2013 (has links)
As an expressive knowledge representation language for the Semantic Web, the Web Ontology Language (OWL) plays an important role in areas such as science and commerce. The problem of maintaining integrity constraints arises because OWL employs the Open World Assumption (OWA) as well as the Non-Unique Name Assumption (NUNA). These assumptions are typically suitable for representing knowledge distributed across the Web, where complete knowledge about a domain cannot be assumed, but they make it challenging to use OWL itself for closed-world integrity constraint validation. Integrity constraints (ICs) on ontologies have to be enforced; otherwise conflicting results would be derivable from the same knowledge base (KB). Current approaches to incorporating ICs into OWL are based on its query language SPARQL, on alternative semantics, or on logic programming. These methods usually suffer from the limited types of constraints they can handle and/or from inherent computational expense. This dissertation presents a comprehensive and efficient approach to maintaining integrity constraints. The design enforces data consistency throughout the OWL life cycle, including the processes of OWL generation, maintenance, and interaction with other ontologies. For OWL generation, the Paraconsistent model is used to maintain integrity constraints during the relational-database-to-OWL translation process. A new rule-based language with set extension is then introduced as a platform that allows users to specify constraints, along with a demonstration of 18 commonly used constraints written in this language. In addition, a new constraint maintenance system, called Jena2Drools, is proposed and implemented to show its effectiveness and efficiency. To further handle inconsistencies among multiple distributed ontologies, this work constructs a framework that breaks down global constraints into several sub-constraints for efficient parallel validation.
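For the SPARQL-based line of work the abstract mentions, a closed-world constraint is typically phrased as a query that looks for violations: if the query finds any bindings, the constraint is broken. The sketch below checks a made-up constraint ("every Employee must have a supervisor") with rdflib; the vocabulary and data are hypothetical, and the thesis's own rule language and Jena2Drools system are not shown here.

```python
# Closed-world integrity check phrased as a SPARQL violation query (rdflib).
# Constraint (illustrative): every ex:Employee must have an ex:supervisor.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.alice, RDF.type, EX.Employee))
g.add((EX.alice, EX.supervisor, EX.carol))
g.add((EX.bob, RDF.type, EX.Employee))      # bob has no supervisor -> violation

VIOLATIONS = """
PREFIX ex: <http://example.org/>
SELECT ?person WHERE {
    ?person a ex:Employee .
    FILTER NOT EXISTS { ?person ex:supervisor ?boss }
}
"""

violators = [str(row.person) for row in g.query(VIOLATIONS)]
if violators:
    print("constraint violated by:", violators)   # e.g. http://example.org/bob
else:
    print("constraint satisfied")
```
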
107

Spatial Ontology for the Production Domain of Petroleum Geology

Liadey, Dickson M. 11 May 2012 (has links)
The availability of useful information for research strongly depends on well-structured relationships between consistently defined concepts (terms) in a domain, and this can be achieved through ontologies. Ontologies are models of the knowledge of a specific domain, such as petroleum geology, in a computer-understandable format. Knowledge is a collection of facts, and facts are represented as RDF triples (subject-predicate-object); a domain ontology is therefore a collection of many RDF triples that represent facts of that domain. The SWEET ontologies are upper-level (foundation) ontologies consisting of thousands of very general concepts drawn from Earth system science and related fields. The work in this thesis deals with scientific knowledge representation in which the SWEET ontologies are extended with more specific and specialized concepts used in petroleum geology. Thus, petroleum geology knowledge modeling is presented in this thesis.
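Since the abstract frames the ontology as a collection of RDF triples extending SWEET's general concepts, a minimal sketch of that pattern is shown below: a petroleum-geology term declared as a subclass of a more general, SWEET-like concept. The namespaces and class names are placeholders, not the actual identifiers used in the thesis.

```python
# Sketch of extending an upper ontology with domain-specific subclasses, stated
# as plain RDF triples with rdflib. URIs below are placeholders, not real SWEET terms.
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

SWEET = Namespace("http://example.org/sweet/")      # placeholder for a SWEET namespace
PETRO = Namespace("http://example.org/petro#")

g = Graph()
g.bind("petro", PETRO)

# Domain concept defined as a specialization of a general upper-level concept.
g.add((PETRO.Reservoir, RDF.type, OWL.Class))
g.add((PETRO.Reservoir, RDFS.subClassOf, SWEET.RockBody))
g.add((PETRO.Reservoir, RDFS.label, Literal("Petroleum reservoir")))

# A fact about an individual, again just a subject-predicate-object triple.
g.add((PETRO.FieldA, RDF.type, PETRO.Reservoir))

print(g.serialize(format="turtle"))
```
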
108

Ordering, Indexing, and Searching Semantic Data: A Terminology Aware Index Structure

Pound, Jeffrey January 2008 (has links)
Indexing data for efficient search is a core problem in many domains of computer science. As applications centered around semantic data sources become more common, the need for more sophisticated indexing and querying capabilities arises. In particular, the need to search for specific information in the presence of a terminology or ontology (i.e., a set of logic-based rules that describe concepts and their relations) becomes especially important, as the information a user seeks may exist only as an entailment of the explicit data by means of the terminology. This variant on traditional indexing and search problems forms the foundation of a range of possible technologies for semantic data. In this work, we propose an ordering language for specifying partial orders over semantic data items modeled as descriptions in a description logic. We then show how these orderings can be used as the basis of a search-tree index for processing concept searches in the presence of a terminology. We study in detail the properties of the orderings and the associated index structure, and also explore a relationship between ordering descriptions called order refinement. A sound and complete procedure for deciding refinement is given. We also empirically evaluate a prototype implementation of our index structure, validating its potential efficacy in semantic query problems.
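The abstract's central point — that a query concept should also retrieve items that only match by entailment through the terminology — can be caricatured with a tiny subsumption hierarchy: the search answers a query by also accepting anything the terminology says is subsumed by it. The hierarchy, items, and the naive linear scan below are purely illustrative; the thesis builds a proper ordered search-tree index over description-logic descriptions.

```python
# Naive terminology-aware search: an item matches a query concept if its concept
# equals the query or is subsumed by it according to the terminology.
# Hierarchy and data are illustrative; the real work uses a search-tree index.

# Hypothetical terminology: child concept -> direct parent concepts.
SUBSUMED_BY = {
    "GraduateStudent": {"Student"},
    "Student": {"Person"},
    "Professor": {"Person"},
}

def ancestors(concept: str) -> set[str]:
    """All concepts that subsume `concept`, computed by walking the hierarchy."""
    seen, frontier = set(), {concept}
    while frontier:
        c = frontier.pop()
        for parent in SUBSUMED_BY.get(c, set()):
            if parent not in seen:
                seen.add(parent)
                frontier.add(parent)
    return seen

def concept_search(query: str, items: dict[str, str]) -> list[str]:
    """Return items whose concept is the query or is entailed to be subsumed by it."""
    return [name for name, concept in items.items()
            if concept == query or query in ancestors(concept)]

if __name__ == "__main__":
    items = {"alice": "GraduateStudent", "bob": "Professor", "acme": "Company"}
    print(concept_search("Student", items))  # ['alice'] found only via entailment
    print(concept_search("Person", items))   # ['alice', 'bob']
```
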
109

Providing Resources to Target User Groups through Customization of Web Site

Shao, Hong, Amirfallah, Aida January 2012 (has links)
In this thesis, we use a group-based semantic-expansion approach to design a new personalised system framework; the Semantic Web and group preferences offer a solution to this problem. Ontologies and semantic techniques are applied in different components of the framework. Information is gathered from different resources, and each resource might use a different identifier for the same concept; therefore, Semantic Web technologies are used to determine whether two identifiers refer to the same concept. In addition, we build group preferences into the personalization system: if the system fails to obtain personal preferences from a new user, the group preferences support the system in providing recommendations to that user according to the group classification.
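The fallback the abstract describes — use personal preferences when they exist, otherwise borrow the preferences of the group the user is classified into — is simple enough to sketch directly; the groups, preferences, and resources here are invented for illustration.

```python
# Sketch of group-preference fallback for a new (cold-start) user: if no personal
# preferences are known, recommend from the preferences of the user's group.
# Groups, preferences, and resources are illustrative only.

GROUP_PREFERENCES = {
    "engineering_students": ["lab-manuals", "datasheets"],
    "literature_students": ["critical-essays", "archives"],
}

PERSONAL_PREFERENCES = {
    "user42": ["datasheets", "job-postings"],   # an existing user with history
}

def recommend(user_id: str, group: str) -> list[str]:
    """Personal preferences if available; otherwise fall back to group preferences."""
    personal = PERSONAL_PREFERENCES.get(user_id)
    if personal:
        return personal
    return GROUP_PREFERENCES.get(group, [])

print(recommend("user42", "engineering_students"))    # personal history wins
print(recommend("new_user", "engineering_students"))  # group preference fallback
```
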
