141 |
A framework for semantic web implementation based on context-oriented controlled automatic annotation. Hatem, Muna Salman, January 2009.
The Semantic Web is the vision of the future Web. Its aim is to enable machines to process Web documents in a way that makes it possible for computer software to "understand" the meaning of the document contents. Each document on the Semantic Web is to be enriched with meta-data that express the semantics of its contents. Many infrastructures, technologies and standards have been developed and have proven their theoretical use for the Semantic Web, yet very few applications have been created, and most of the current Semantic Web applications were developed for research purposes. This project investigates the major factors restricting the widespread adoption of Semantic Web applications. We identify the two most important requirements for a successful implementation as the automatic production of semantically annotated documents and the creation and maintenance of a semantics-based knowledge base. This research proposes a framework for Semantic Web implementation based on context-oriented controlled automatic annotation; for short, we call the framework the Semantic Web Implementation Framework (SWIF) and the system that implements this framework the Semantic Web Implementation System (SWIS). The proposed architecture provides for a Semantic Web implementation of stand-alone websites that automatically annotates Web pages before they are uploaded to the Intranet or Internet, and maintains persistent storage of Resource Description Framework (RDF) data for both the domain memory, denoted by Control Knowledge, and the meta-data of the Web site's pages. We believe that the presented implementation of the major parts of SWIS introduces a system competitive with current state-of-the-art annotation tools and knowledge management systems, because it handles input documents in the context in which they are created, in addition to the automatic learning and verification of knowledge using only the available computerized corporate databases. In this work, we introduce the concept of Control Knowledge (CK), which represents the application's domain memory, and use it to verify the extracted knowledge. Learning is based on the number of occurrences of the same piece of information in different documents. We introduce the concept of verifiability in the context of annotation by comparing the extracted text's meaning with the information in the CK, with the help of the proposed database table Verifiability_Tab. We use the linguistic concept of thematic roles in investigating and identifying the correct meaning of words in text documents, which helps correct relation extraction. The verb lexicon used contains the argument structure of each verb together with the thematic structure of the arguments. We also introduce a new method to chunk conjoined statements and to identify the missing subject of the produced clauses. We use semantic classes of verbs that relate a list of verbs to a single property in the ontology, which helps disambiguate the verb in the input text and enables better information extraction and annotation. Consequently, we propose the following definition for the annotated document, or what is sometimes called the 'Intelligent Document': 'The Intelligent Document is the document that clearly expresses its syntax and semantics for human use and software automation'. This work introduces a promising improvement to the quality of the automatically generated annotated document and to the quality of the automatically extracted information in the knowledge base.
Our approach to using Semantic Web technology opens new opportunities for diverse application areas. E-learning applications, for example, can be greatly improved and become more effective.
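As a small illustration of the idea of verb semantic classes that map several verbs onto a single ontology property, here is a minimal Python sketch. The verbs, semantic classes, thematic roles and property names are invented for this example and are not taken from SWIS.

    # Hypothetical verb lexicon: verbs, semantic classes and thematic roles
    # below are invented for illustration; they are not the SWIS lexicon.
    VERB_LEXICON = {
        "manages":  ("management", ("Agent", "Theme")),
        "heads":    ("management", ("Agent", "Theme")),
        "supplies": ("supply",     ("Agent", "Recipient")),
    }

    # A semantic class groups several verbs under one ontology property.
    CLASS_TO_PROPERTY = {
        "management": "org:manages",
        "supply":     "org:suppliesTo",
    }

    def extract_relation(subject, verb, obj):
        """Map a simple subject-verb-object clause to an RDF-style triple."""
        entry = VERB_LEXICON.get(verb)
        if entry is None:
            return None                      # unknown verb: no annotation produced
        semantic_class, roles = entry
        prop = CLASS_TO_PROPERTY[semantic_class]
        # The thematic structure says the syntactic subject fills the Agent role,
        # so it becomes the RDF subject and the other argument the object.
        return (subject, prop, obj)

    print(extract_relation("Alice", "heads", "SalesDept"))
    # ('Alice', 'org:manages', 'SalesDept')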
|
142 |
Serviços Web Semânticos: da modelagem à composição / Semantic web services: from modeling to composition. Prazeres, Cássio Vinícius Serafim, 31 July 2009.
The automation of the discovery, composition and invocation of Web Services is an important step toward the success of the Semantic Web. If no single Web Service satisfies the functionality required by a user, an alternative is to combine existing services that solve parts of the problem in order to reach a complete solution. Web Services composition can be achieved manually or automatically. When composing services manually, Web Service developers can take advantage of their expertise and knowledge about the services being composed and the target service. This thesis addresses issues and presents contributions related to the process of automating Web Services composition. The automatic composition of Web Services requires the description and publication of the services in a way that models the knowledge (explicit semantics) that the developer uses to perform the manual composition. Automatic Web Service discovery is also a crucial step toward automatic composition, because it is a prerequisite for selecting the candidate services for a composition. Research on Semantic Web Services explores the use of Semantic Web standards to enrich Web Service descriptions with explicit semantics. Three main lines of investigation are adopted in this thesis to explore the process of automatic composition of Web Services.
They are the following: Semantic Web Services modeling; automatic discovery of Semantic Web Services; and automatic composition of Semantic Web Services. The main contributions of this thesis include: the RALOWS platform for modeling Web applications as Semantic Web Services, with applications for running remote experiments as a case study; an algorithm for the automatic discovery of Semantic Web Services; an approach based on graphs and minimum-cost paths to the automatic composition of Semantic Web Services; and an infrastructure and tools to support the description, publishing, discovery and composition of Semantic Web Services.
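As a rough illustration of the graph and minimum-cost-path view of composition named above (a sketch of the general idea, not the thesis's actual algorithm), the following Python code models each service as an edge from its input concept to its output concept and searches for the cheapest chain that turns the available input into the requested output. All service names and costs are invented.

    import heapq

    # Hypothetical services: (input concept, output concept, cost, name).
    SERVICES = [
        ("CityName",    "Coordinates", 1.0, "geocode"),
        ("Coordinates", "Forecast",    2.0, "weatherByCoords"),
        ("CityName",    "Forecast",    5.0, "weatherByCity"),
    ]

    def compose(start, goal):
        """Find the cheapest chain of services from `start` to `goal` (Dijkstra)."""
        queue = [(0.0, start, [])]          # (accumulated cost, concept, chain so far)
        best = {}
        while queue:
            cost, concept, chain = heapq.heappop(queue)
            if concept == goal:
                return cost, chain
            if best.get(concept, float("inf")) <= cost:
                continue
            best[concept] = cost
            for src, dst, c, name in SERVICES:
                if src == concept:
                    heapq.heappush(queue, (cost + c, dst, chain + [name]))
        return None

    print(compose("CityName", "Forecast"))
    # (3.0, ['geocode', 'weatherByCoords'])  -- cheaper than the direct service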
|
144 |
An Indexation and Discovery Architecture for Semantic Web Services and its Application in Bioinformatics. Yu, Liyang, 09 June 2006.
Recently much research effort has been devoted to the discovery of relevant Web services. It is widely recognized that adding semantics to service descriptions is the solution to this challenge. Web services with explicit semantic annotation are called Semantic Web Services (SWS). This research proposes an indexation and discovery architecture for SWS, together with a prototype application in the area of bioinformatics. In this approach, an SWS repository is created and maintained by crawling both ontology-oriented UDDI registries and Web sites that host SWS. For a given service request, the proposed system invokes the matching algorithm and returns a candidate set in which different degrees of matching are considered. This approach can add more flexibility to the current industry standards by offering more choices to both service requesters and publishers. Also, the prototype developed in this research shows the value that can be added by using SWS in application areas such as bioinformatics.
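The abstract does not spell out the matching algorithm, so the following Python sketch only illustrates one common convention for graded matching of semantic service descriptions: comparing the requested output concept with the advertised one against a class hierarchy and ranking the result as exact, plug-in, subsumes or fail. The hierarchy, service names and degree definitions are assumptions for this illustration, not the thesis's algorithm.

    # Invented mini-ontology: child -> parent (single inheritance for simplicity).
    PARENT = {
        "ProteinSequence": "Sequence",
        "DNASequence":     "Sequence",
        "Sequence":        "BioData",
    }

    def ancestors(concept):
        out = set()
        while concept in PARENT:
            concept = PARENT[concept]
            out.add(concept)
        return out

    def degree_of_match(requested, advertised):
        """Rank how well an advertised output concept matches the requested one."""
        if requested == advertised:
            return 3, "exact"
        if advertised in ancestors(requested):
            return 2, "plug-in"    # advertised concept is more general than requested
        if requested in ancestors(advertised):
            return 1, "subsumes"   # advertised concept is more specific than requested
        return 0, "fail"

    candidates = {"blastService": "Sequence", "pdbService": "ProteinSequence"}
    ranked = sorted(candidates.items(),
                    key=lambda kv: degree_of_match("ProteinSequence", kv[1])[0],
                    reverse=True)
    print(ranked)   # pdbService (exact) is ranked above blastService (plug-in)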
|
145 |
Ein Integrations- und Darstellungsmodell für verteilte und heterogene kontextbezogene Informationen / An Integration and Representation Model for Distributed and Heterogeneous Contextual Information. Goslar, Kevin, 07 February 2007.
Die "Kontextsensitivität" genannte systematische Berücksichtigung von Umweltinformationen durch Anwendungssysteme kann als Querschnittsfunktion im betrieblichen Umfeld in vielen Bereichen einen Nutzen stiften. Wirklich praxistaugliche kontextsensitive Anwendungssysteme, die sich analog zu einem mitdenkenden menschlichen Assistenten harmonisch in die ablaufenden Vorgänge in der Realwelt einbringen, haben einen enormen Bedarf nach umfassenden, d.h. diverse Aspekte der Realwelt beschreibenden Kontextinformationen, die jedoch prinzipbedingt verteilt in verschiedenen Datenquellen, etwa Kontexterfassungssystemen, Endgeräten sowie prinzipiell auch in beliebigen anderen, z.T. bereits existierenden Anwendungen entstehen. Ziel dieser Arbeit ist die Verringerung der Komplexität des Beschaffungsvorganges von verteilten und heterogenen Kontextinformationen durch Bereitstellung einer einfach verwendbaren Methode zur Darstellung eines umfassenden, aus verteilten und heterogenen Datenquellen zusammengetragenen Kontextmodells. Im Besonderen werden durch diese Arbeit zwei Probleme addressiert, zum einen daß ein Konsument von umfassenden Kontextinformationen mehrere Datenquellen sowohl kennen und zugreifen können und zum anderen über die zwischen den einzelnen Kontextinformationen in verschiedenen Datenquellen existierenden, zunächst nicht modellierten semantischen Verbindungen Bescheid wissen muß. Das dazu entwickelte Kontextinformationsintegrations- und -darstellungsverfahren kombiniert daher ein die Beschaffung und Integration von Kontextinformationen aus diversen Datenquellen modellierendes Informationsintegrationsmodell mit einem Kontextdarstellungsmodell, welches die abzubildende Realweltdomäne basierend auf ontologischen Informationen durch in problemspezifischer Weise erweiterte Verfahren des Semantic Web in einer möglichst intuitiven, wiederverwendbaren und modularen Weise modelliert. Nach einer fundierten Anforderungsanalyse des entwickelten Prinzips wird dessen Verwendung und Nutzen basierend auf der Skizzierung der wichtigsten allgemeinen Verwendungsmöglichkeiten von Kontextinformationen im betrieblichen Umfeld anhand eines komplexen betrieblichen Anwendungsszenarios demonstriert. Dieses beinhaltet ein Nutzerprofil, das von diversen Anwendungen, u.a. einem kontextsensitiven KFZ-Navigationssystem, einer Restaurantsuchanwendung sowie einem Touristenführer verwendet wird. Probleme hinsichtlich des Datenschutzes, der Integration in existierende Umgebungen und Abläufe sowie der Skalierbarkeit und Leistungsfähigkeit des Verfahrens werden ebenfalls diskutiert. / Context-awareness, which is the systematic consideration of information from the environment of applications, can provide significant benefits in the area of business and technology. To be really useful, i.e. harmonically support real-world processes as human assistants do it, practical applications need a comprehensive and detailed contextual information base that describes all relevant aspects of the real world. As a matter of principle, comprehensive contextual information arises in many places and data sources, e.g. in context-aware infrastructures as well as in "normal" applications, which may have knowledge about the context based on their functionality to support a certain process in the real world. This thesis facilitates the use of contextual information by reducing the complexity of the procurement process of distributed and heterogenous contextual information. 
Particularly, it addresses two problems: a consumer of comprehensive contextual information must know and be able to access several different data sources, and must know how to combine the contextual information taken from different and isolated data sources into a meaningful representation of the context. Especially the latter information cannot be modelled using the current state of the art. These problems are addressed by the development of an integration and representation model for contextual information that makes it possible to compose comprehensive context models from information held in distributed and heterogeneous data sources. This model combines an information integration model for distributed and heterogeneous information (which consists of an access model for heterogeneous data sources, an integration model and an information relation model) with a representation model for context that formalizes the representation of the respective real-world domain, i.e. of the real-world objects and their semantic relations, in an intuitive, reusable and modular way based on ontologies. The resulting model consists of five layers that represent different aspects of the information integration solution. The achievement of the objectives is rated based on a requirement analysis of the problem domain. The technical feasibility and usefulness of the model are demonstrated by the implementation of an engine that supports the approach, as well as by a complex application scenario consisting of a user profile that integrates information from several data sources and a number of context-aware applications, e.g. a context-aware navigation system, a restaurant finder and an enhanced tourist guide that use the user profile. Problems regarding security and social effects, the integration of this solution into existing environments and infrastructures, as well as technical issues such as the scalability and performance of this model, are discussed too.
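Purely as an illustration of the integration idea (a minimal sketch with invented source names and attributes, not the five-layer model described above), the following Python code pulls fragments of contextual information from two hypothetical data sources and merges them into one ontology-style context model for a user.

    # Two hypothetical context sources; names and attributes are invented.
    def calendar_source(user):
        return {"ctx:currentActivity": "meeting", "ctx:until": "2007-02-07T15:00"}

    def positioning_source(user):
        return {"ctx:location": "Dresden, Room E08"}

    def build_context_model(user, sources):
        """Merge per-source fragments into one context model keyed by property names."""
        model = {"ctx:user": user}
        for source in sources:
            for prop, value in source(user).items():
                model[prop] = value          # later sources override earlier ones
        return model

    print(build_context_model("kevin", [calendar_source, positioning_source]))
    # {'ctx:user': 'kevin', 'ctx:currentActivity': 'meeting',
    #  'ctx:until': '2007-02-07T15:00', 'ctx:location': 'Dresden, Room E08'}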
|
146 |
Portale und Ontologien (Portals and Ontologies). Zimmermann, Kerstin, 24 June 2005.
Kerstin Zimmermann, DERI Innsbruck, presented conventional portals as well as so-called "ontologies". These grow out of the elementary questions of "What" (topic), "Who" (person), "When" (time/event), "Where" (location) and "How" (meta). The crucial point about ontologies: building them takes considerable effort, but this effort pays off in added value. More at http://sw-portal.deri.org/ontologies/swportal.html
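To make the five dimensions tangible, here is a minimal sketch describing a single resource along the What/Who/When/Where/How axes as simple property-value pairs. The example record is invented; the actual ontology is the one linked above.

    # Invented example record structured along the five elementary questions.
    resource = {
        "what":  {"topic": "Semantic Web portals"},
        "who":   {"person": "Jane Doe"},
        "when":  {"event": "Project workshop", "date": "2005-06-24"},
        "where": {"location": "Innsbruck"},
        "how":   {"meta": {"format": "slides", "language": "en"}},
    }

    def describe(record):
        for dimension, values in record.items():
            print(f"{dimension:5s} -> {values}")

    describe(resource)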
|
147 |
Portale und Ontologien (Portals and Ontologies). Zimmermann, Kerstin, 21 August 2007.
The original document has been converted to PDF format.
Kerstin Zimmermann, DERI Innsbruck, presented conventional portals as well as so-called "ontologies". These grow out of the elementary questions of "What" (topic), "Who" (person), "When" (time/event), "Where" (location) and "How" (meta). The crucial point about ontologies: building them takes considerable effort, but this effort pays off in added value. More at http://sw-portal.deri.org/ontologies/swportal.html
|
148 |
Inhaltsbasierte Erschließung und Suche in multimedialen Objekten (Content-based Indexing and Search in Multimedia Objects). Sack, Harald; Waitelonis, Jörg, 25 January 2012.
Our cultural memory stores ever larger amounts of information and data, yet only a vanishingly small share of this content can currently be searched and accessed through digital channels.
The projects mediaglobe and yovisto make the growing stock of audiovisual documents findable and usable, and accompany media archives into the digital future.
mediaglobe aims to use automated and semantic methods to index audiovisual documents on German contemporary history and make them available. The vision of mediaglobe is web-based access to comprehensive digital AV content in media archives. To this end, mediaglobe offers numerous automated methods for analyzing audiovisual data, such as structural analysis, text recognition in video, speech analysis and genre analysis. The use of semantic technologies links the results of the AV analysis and improves the quality and quantity of multimedia search results. A rights-management tool provides information on the availability of the content. Innovative and intuitively usable interfaces turn access to cultural heritage into an active experience.
mediaglobe brings together the project partners Hasso-Plattner-Institut für Softwaresystemtechnik (HPI), Medien-Bildungsgesellschaft Babelsberg, FlowWorks and the archive of defa Spektrum. mediaglobe is funded by the Bundesministerium für Wirtschaft und Technologie (Federal Ministry of Economics and Technology) within the research program »THESEUS – Neue Technologien für das Internet der Dienste« (New Technologies for the Internet of Services).
The video search engine yovisto, by contrast, specializes in recordings of academic lectures and implements exploratory and semantic search strategies.
yovisto supports a multi-stage 'exploratory' search process in which searchers can explore the holdings of the underlying media archive along many different paths according to their current interest, so that at the end of this search process they discover information whose existence they had previously been unaware of. To make this possible, yovisto combines automated semantic media analysis with user-generated metadata for the content-based indexing of AV data, thereby enabling pinpoint content-based search in video archives.
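As a loose illustration of the kind of pinpoint, content-based video search described above (a minimal sketch with invented data; it does not reproduce yovisto's actual analysis pipeline), the code below indexes video segments by tags coming from automated analysis and from users, and returns (video, timestamp) hits for a query term.

    # Invented index: each entry is (video id, start second, tags from automated
    # analysis and from user annotations).
    SEGMENTS = [
        ("lecture-042", 0,   {"introduction", "semantic web"}),
        ("lecture-042", 310, {"rdf", "triple store"}),
        ("lecture-043", 95,  {"sparql", "rdf"}),
    ]

    def search(term):
        """Return (video, second) positions whose segment tags contain the term."""
        return [(video, start) for video, start, tags in SEGMENTS if term in tags]

    print(search("rdf"))
    # [('lecture-042', 310), ('lecture-043', 95)] -- jump straight to the relevant scenes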
|
149 |
Managing and Consuming Completeness Information for RDF Data Sources. Darari, Fariz, 04 July 2017.
The ever increasing amount of Semantic Web data gives rise to the question: how complete is the data? Though data on the Semantic Web is generally incomplete, many parts of the data are indeed complete, such as the children of Barack Obama and the crew of Apollo 11. This thesis studies how to manage and consume completeness information about Semantic Web data. In particular, we first discuss how completeness information can guarantee the completeness of query answering. Next, we propose optimization techniques for completeness reasoning and conduct experimental evaluations to show the feasibility of our approaches. We also provide a technique to check the soundness of queries with negation via reduction to query completeness checking. We further enrich completeness information with timestamps, which makes it possible to check up to which point in time query answers are complete. We then introduce two demonstrators, CORNER and COOL-WD, to show how our completeness framework can be realized. Finally, we investigate an automated method to generate completeness statements from text on the Web via relation cardinality extraction.
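To give a flavour of how completeness statements can guarantee complete query answers (a deliberately simplified sketch; the thesis develops a far more general reasoning framework, and the toy data below is invented), the following code marks certain (subject, property) combinations as complete and checks whether a query pattern is covered by such a statement.

    # Invented toy dataset and completeness statements.
    DATA = {
        ("BarackObama", "child"):      {"Malia", "Sasha"},
        ("Apollo11",    "crewMember"): {"Armstrong", "Aldrin", "Collins"},
        ("BarackObama", "award"):      {"NobelPeacePrize"},   # possibly incomplete
    }

    # (subject, property) pairs the data source declares to be complete.
    COMPLETE = {("BarackObama", "child"), ("Apollo11", "crewMember")}

    def answer(subject, prop):
        """Return the answers plus a flag saying whether they are guaranteed complete."""
        values = DATA.get((subject, prop), set())
        return values, (subject, prop) in COMPLETE

    print(answer("BarackObama", "child"))   # answers plus True  -> guaranteed complete
    print(answer("BarackObama", "award"))   # answers plus False -> no completeness guarantee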
|
150 |
Statistical Extraction of Multilingual Natural Language Patterns for RDF Predicates: Algorithms and Applications. Gerber, Daniel, 07 June 2016.
The Data Web has undergone a tremendous growth period.
It currently consists of more than 3,300 publicly available knowledge bases describing millions of resources from various domains, such as life sciences, government or geography, with over 89 billion facts.
In the same way, the Document Web has grown to a state where approximately 4.55 billion websites exist, 300 million photos are uploaded to Facebook and 3.5 billion Google searches are performed on average every day.
However, there is a gap between the Document Web and the Data Web: knowledge bases available on the Data Web are most commonly extracted from structured or semi-structured sources, while the majority of the information available on the Web is contained in unstructured sources such as news articles, blog posts, photos, forum discussions, etc.
As a result, data on the Data Web not only misses a significant fragment of the available information but also suffers from a lack of timeliness, since typical extraction methods are time-consuming and can only be carried out periodically.
Furthermore, provenance information is rarely taken into consideration and therefore gets lost in the transformation process.
In addition, users are accustomed to entering keyword queries to satisfy their information needs.
With the availability of machine-readable knowledge bases, lay users could be empowered to issue more specific questions and get more precise answers.
In this thesis, we address the problem of Relation Extraction, one of the key challenges in closing the gap between the Document Web and the Data Web, in four ways.
First, we present a distant supervision approach that allows finding multilingual natural language representations of formal relations already contained in the Data Web.
We use these natural language representations to find sentences on the Document Web that contain unseen instances of this relation between two entities.
Second, we address the problem of data timeliness by presenting a framework for real-time RDF extraction from data streams, and we utilize this framework to extract RDF from RSS news feeds.
Third, we present a novel fact validation algorithm, based on natural language representations, that is able not only to verify or falsify a given triple, but also to find trustworthy sources for it on the Web and to estimate a time scope in which the triple holds true.
The features used by this algorithm to determine whether a website is indeed trustworthy serve as provenance information and thereby help to create metadata for facts in the Data Web.
Finally, we present a question answering system that uses the natural language representations to map natural language questions to formal SPARQL queries, allowing lay users to make use of the large amounts of data available on the Data Web to satisfy their information needs.
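The following minimal sketch illustrates the general distant-supervision idea named in the first contribution: known (subject, object) pairs of a relation from a knowledge base are projected onto sentences, and the text between the two mentions is collected as a candidate natural-language pattern. The example pairs and sentences are invented; this is a sketch of the general technique, not the thesis's actual algorithm or pattern scoring.

    import re
    from collections import Counter

    # Invented KB pairs for one relation and invented example sentences.
    KB_PAIRS = [("Albert Einstein", "Ulm"), ("Marie Curie", "Warsaw")]
    SENTENCES = [
        "Albert Einstein was born in Ulm in 1879.",
        "Marie Curie was born in Warsaw and later moved to Paris.",
        "Ulm is proud of Albert Einstein.",
    ]

    def extract_patterns(pairs, sentences):
        """Collect the text between known subject/object mentions as patterns."""
        patterns = Counter()
        for subj, obj in pairs:
            for sentence in sentences:
                match = re.search(re.escape(subj) + r"(.+?)" + re.escape(obj), sentence)
                if match:
                    patterns[match.group(1).strip()] += 1
        return patterns

    print(extract_patterns(KB_PAIRS, SENTENCES))
    # Counter({'was born in': 2}) -- a candidate pattern for the relation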
|