  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Semantic Geospatial Search and Ranking in the Context of the Geographical Information System TerraFly

Alkhawaja, Mortadha Ali 01 January 2010 (has links)
Modern Web-based GIS systems have responded significantly to Semantic Web technology, as it offers opportunities to overcome interoperability and integration problems. The need is especially acute for systems that intend to provide more than a map with basic geographical information. More sophisticated systems can go beyond navigation services and integrate several data sources, thereby providing a richer, wider, and more usable information service for business, government, and other domains. Search is an essential part of any GIS because of the huge amount of data, carrying different meanings, stored in single or distributed data sources. A model is presented that focuses on searching geospatial information to answer query semantics rather than query syntax. The model uses the most recent standards approved by the Semantic Web community, and was applied to TerraFly, a Web-based GIS. Since ranking is a critical factor in the quality of any search engine, a ranking algorithm is also proposed and evaluated.
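As a rough illustration of the kind of semantic-plus-spatial ranking this abstract describes, the sketch below scores results by concept overlap with the query, discounted by geographic distance. The weighting scheme, field names, and scoring formula are hypothetical; the thesis's actual algorithm is not reproduced here.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 points."""
    rlat1, rlat2 = math.radians(lat1), math.radians(lat2)
    dlat = rlat2 - rlat1
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(rlat1) * math.cos(rlat2) * math.sin(dlon / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def rank(results, query_concepts, query_point, alpha=0.7):
    """Rank results by semantic overlap with the query, blended with proximity.

    Each result is a dict with "name", "concepts" (a set), "lat", "lon".
    alpha weights semantics against distance; both are assumptions.
    """
    scored = []
    for r in results:
        overlap = len(query_concepts & r["concepts"]) / len(query_concepts)
        dist = haversine_km(query_point[0], query_point[1], r["lat"], r["lon"])
        proximity = 1.0 / (1.0 + dist)  # decays smoothly with distance
        scored.append((alpha * overlap + (1 - alpha) * proximity, r["name"]))
    return [name for score, name in sorted(scored, reverse=True)]
```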
2

Exploring potential improvements to term-based clustering of web documents

Aračić, Damir, January 2007 (has links) (PDF)
Thesis (M.S.)--Washington State University, December 2007. / Includes bibliographical references (p. 66-69).
3

Semantic information systems engineering : a query-based approach for semi-automatic annotation of web services

Al Asswad, Mohammad Mourhaf January 2011 (has links)
There has been increasing interest in Semantic Web services (SWS) as a proposed solution to facilitate automatic discovery, composition, and deployment of existing syntactic Web services. Successful implementation and wider adoption of SWS by research and industry depend, however, on the existence of effective and easy-to-use methods for describing service semantics. Unfortunately, Web service semantic annotation is currently performed manually. Manual annotation is a difficult, error-prone, and time-consuming task, and few approaches exist that aim to semi-automate it. Existing approaches are difficult to use because they require ontology building. Moreover, they employ ineffective matching methods and suffer from the Low Percentage Problem, which occurs when only a small number of service elements, relative to the total number of elements, are annotated in a given service. This research addresses the Web service annotation problem by developing a semi-automatic annotation approach that allows SWS developers to annotate their syntactic services effectively and easily. The proposed approach does not require application ontologies to model service semantics. Instead, a standard query template is used: this template is filled with data and semantics extracted from WSDL files to produce query instances. The input of the annotation approach is the WSDL file of a candidate service and a set of ontologies; the output is an annotated WSDL file. The approach comprises five phases: (1) concept extraction; (2) concept filtering and query filling; (3) query execution; (4) results assessment; and (5) SAWSDL annotation. The query execution engine makes use of name-based and structural matching techniques. The name-based matching is carried out by CN-Match, a novel matching method and tool developed and evaluated in this research.
The proposed annotation approach is evaluated using a set of existing Web services and ontologies, with Precision (P), Recall (R), F-Measure (F), and the percentage of annotated elements as evaluation metrics. The evaluation reveals that the approach is effective: relative to manual results, accurate and almost complete annotations are obtained. In addition, a high percentage of annotated elements is achieved because the approach makes use of effective ontology-extension mechanisms.
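The name-based matching step could, in spirit, resemble the token-overlap sketch below: WSDL element names are split into word tokens and compared against ontology concept labels. CN-Match's actual method is not described in this abstract, so the tokenisation, Jaccard measure, and threshold here are stand-ins.

```python
import re

def tokens(name):
    """Split a WSDL or ontology identifier into lower-case word tokens."""
    parts = re.sub(r"[_\-]", " ", name)
    parts = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", parts)  # split camelCase
    return set(parts.lower().split())

def name_similarity(a, b):
    """Jaccard overlap of the token sets of two element names."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def best_annotation(wsdl_element, concept_labels, threshold=0.5):
    """Pick the concept label that best matches the element name, if any.

    Returns None when no candidate clears the (assumed) threshold,
    mirroring the idea that weak matches should not produce annotations.
    """
    best = max(concept_labels, key=lambda c: name_similarity(wsdl_element, c))
    return best if name_similarity(wsdl_element, best) >= threshold else None
```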
4

Semantic highlighting : an approach to communicating information and knowledge through visual metadata

Hussam, Ali January 1999 (has links)
No description available.
5

Composição automática de serviços web semânticos : uma abordagem com times assíncronos e operadores genéticos / Automatic composition of semantic web services : an approach with asynchronous teams and genetic operators

Tizzo, Neil Paiva 20 August 2018 (has links)
Advisors: Eleri Cardozo, Juan Manuel Adán Coello / Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Previous issue date: 2012 / Abstract: The automation of the composition of Web services is, in the view of the author, one of the most important problems in the area of Web services. Among other reasons, only automatic composition can deal with a changing environment where services are permanently inserted, removed, and modified. Existing methods for automatic service composition have several limitations. Some handle a very limited number of control-flow patterns, while others do not consider the semantic markup of services. In addition, in many cases there is no quantitative evaluation of a method's performance. The objective of this thesis is therefore to propose a method for the automatic composition of semantic Web services that considers the five basic types of control flow identified by the Workflow Management Coalition, namely sequence, parallel split, synchronization, exclusive choice, and simple merge, as well as the loop control flow, classified as a structural pattern. The rules that describe the composition of services are hybrid, based on semantics and on information-retrieval techniques.
Services are described in OWL-S, an ontology expressed in OWL that allows the semantic description of the IOPE attributes (input, output, precondition, and effect) of a service; only the input and output parameters were taken into consideration in this work. To validate the approach, a prototype was implemented that uses asynchronous teams (A-Teams) with agents based on genetic algorithms to carry out composition according to the sequence, parallel-split, and synchronization flow patterns. The experimental evaluation of the composition algorithm employed a public collection of semantic Web services comprising more than 1000 service descriptions. Performance evaluations in several typical scenarios, measured by average response time and by the number of times the fitness function is evaluated, are also presented. For the simplest composition cases, the algorithm reduced the response time by approximately 97% relative to a blind search; this reduction grows as the composition complexity increases. / Doctorate / Computer Engineering / Doctor of Electrical Engineering
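A genetic agent in an A-Team needs a fitness function to compare candidate compositions. The sketch below scores a candidate service chain by the fraction of input/output constraints it satisfies, where each service's inputs must be covered by the goal inputs or by outputs of earlier services. It is an illustrative stand-in, not the thesis's actual fitness function.

```python
def fitness(chain, services, goal_inputs, goal_outputs):
    """Fraction of input/output constraints satisfied by a candidate chain.

    `services` maps a service name to its (inputs, outputs) concept sets.
    A perfect sequential composition scores 1.0.
    """
    available = set(goal_inputs)
    satisfied = needed = 0
    for name in chain:
        inputs, outputs = services[name]
        needed += len(inputs)
        satisfied += len(inputs & available)  # inputs already producible
        available |= outputs                  # outputs become available downstream
    needed += len(goal_outputs)
    satisfied += len(set(goal_outputs) & available)
    return satisfied / needed if needed else 1.0
```

A genetic operator would then mutate or recombine chains, keeping the fittest candidates in the shared A-Team memory.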
6

Role of description logic reasoning in ontology matching

Reul, Quentin H. January 2012 (has links)
Semantic interoperability is essential on the Semantic Web to enable different information systems to exchange data. Ontology matching has been recognised as a means to achieve semantic interoperability on the Web by identifying similar information in heterogeneous ontologies. Existing ontology matching approaches have two major limitations. The first relates to similarity metrics, which yield pessimistic values when comparing complex objects such as strings and conceptual entities. The second relates to the role of description logic reasoning: in particular, most approaches disregard implicit information about entities as a source of background knowledge. In this thesis, we first present a new similarity function, called the degree of commonality coefficient, which computes the overlap between two sets based on the similarity between their elements. Our evaluations show that the degree of commonality performs better than traditional set-similarity metrics in the ontology matching task. Secondly, we have developed the Knowledge Organisation System Implicit Mapping (KOSIMap) framework, which differs from existing approaches in using description logic reasoning (i) to extract implicit information as background knowledge for every entity, and (ii) to remove inappropriate correspondences from an alignment. Our evaluation shows that the use of description logic reasoning in the ontology matching task can increase coverage. We identify people interested in ontology matching and reasoning techniques as the target audience of this work.
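One plausible formulation of a set-overlap measure built from element-level similarities, in the spirit of the degree of commonality coefficient (the thesis's exact definition may differ): average each element's best match in the other set, symmetrised over both directions. Unlike Jaccard, partial matches contribute, so the score is less pessimistic.

```python
def doc_coefficient(set_a, set_b, sim):
    """Overlap of two sets from element-level similarity (a sketch).

    `sim(a, b)` returns a similarity in [0, 1]. Each element is credited
    with its best match in the other set; both directions are averaged.
    """
    if not set_a or not set_b:
        return 0.0
    a_to_b = sum(max(sim(a, b) for b in set_b) for a in set_a) / len(set_a)
    b_to_a = sum(max(sim(a, b) for a in set_a) for b in set_b) / len(set_b)
    return (a_to_b + b_to_a) / 2
```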
7

Traitement et raisonnement distribués des flux RDF / Distributed RDF stream processing and reasoning

Ren, Xiangnan 19 November 2018 (has links)
Real-time processing of data streams emanating from sensors is becoming a common task in industrial scenarios. In an Internet of Things (IoT) context, data are emitted from heterogeneous stream sources, i.e., coming from different domains and data models. This requires that IoT applications efficiently handle data-integration mechanisms. The processing of RDF data streams has hence become an important research field. This trend enables a wide range of innovative applications where the real-time and reasoning aspects are pervasive. The key implementation goal of such applications is to efficiently handle massive incoming data streams and to support advanced data-analytics services such as anomaly detection. However, a modern RDF Stream Processing (RSP) engine has to address the volume and velocity characteristics encountered in the Big Data era. In an ongoing industrial project, we found that a stream processing engine available 24/7 usually faces massive data volumes, dynamically changing data structures, and shifting workload characteristics, all of which affect the engine's performance and reliability. To address these issues, we propose Strider, a hybrid, adaptive, distributed RDF Stream Processing engine that optimizes the logical query plan according to the state of the data streams. Strider has been designed to guarantee important industrial properties such as scalability, high availability, fault tolerance, high throughput, and acceptable latency. These guarantees are obtained by building the engine's architecture on state-of-the-art Big Data components: Apache Spark and Apache Kafka. Moreover, an increasing number of processing jobs executed over RSP engines require reasoning mechanisms, which usually come at the cost of a trade-off between data throughput, latency, and the computational cost of expressive inferences.
We therefore extend Strider to support real-time reasoning with RDFS+ (i.e., RDFS + owl:sameAs) ontology expressivity. We combine Strider with a query-rewriting approach for SPARQL that benefits from an intelligent encoding of the knowledge base. The system is evaluated along different dimensions and over multiple datasets to demonstrate its performance. Finally, we explore RDF stream reasoning over ontologies expressed in a fragment of Answer Set Programming (ASP), motivated by the fact that more and more streaming applications require more expressive and complex reasoning tasks. The main challenge is to cope with the large-volume and high-velocity dimensions in a scalable, inference-enabled manner; recent efforts in this area still miss the aspect of system scalability for stream reasoning. We thus explore the ability of modern distributed computing frameworks to process highly expressive knowledge-inference queries over Big Data streams. We consider queries expressed in a positive fragment of LARS (a temporal logic framework based on Answer Set Programming) and propose solutions based on the two main execution models adopted by major parallel and distributed execution frameworks: Bulk Synchronous Parallel (BSP) and Record-at-A-Time (RAT). We implement our solution, named BigSR, and conduct a series of evaluations. Our experiments show that BigSR achieves throughput beyond a million triples per second using a rather small cluster of machines.
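RDFS+ reasoning includes owl:sameAs equivalences, which within a single micro-batch can be canonicalised with a union-find structure before query evaluation, as in this simplified sketch. Strider's actual Spark/Kafka implementation is far more involved; this only illustrates the sameAs-closure idea on a plain Python batch.

```python
class UnionFind:
    """Minimal union-find for grouping owl:sameAs-equivalent resources."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def rewrite_batch(triples):
    """Canonicalise subjects/objects of one micro-batch under owl:sameAs.

    sameAs triples are consumed to build equivalence classes; the remaining
    triples are rewritten onto one canonical representative per class.
    """
    uf = UnionFind()
    for s, p, o in triples:
        if p == "owl:sameAs":
            uf.union(s, o)
    return {(uf.find(s), p, uf.find(o))
            for s, p, o in triples if p != "owl:sameAs"}
```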
8

Xodx – Konzeption und Implementierung eines Distributed Semantic Social Network Knotens / Xodx – Design and Implementation of a Distributed Semantic Social Network Node

Arndt, Natanael 26 February 2018 (has links)
Operation of a node in a Distributed Semantic Social Network. The node comprises functions for creating a personal profile description, managing friendship relations, and communicating with other participants in the network. The resulting implementation is already in practical use on low-performance, low-cost, energy-efficient hardware. In addition, its scaling behaviour was examined in a test setup with several nodes.
9

A generic architecture for semantic enhanced tagging systems

Magableh, Murad January 2011 (has links)
The Social Web, or Web 2.0, has recently gained popularity because of its low cost and ease of use. Social tagging sites (e.g. Flickr and YouTube) offer new principles for end-users to publish and classify their content (data). Tagging systems contain free-keywords (tags) generated by end-users to annotate and categorise data. Lack of semantics is the main drawback in social tagging due to the use of unstructured vocabulary. Therefore, tagging systems suffer from shortcomings such as low precision, lack of collocation, synonymy, multilinguality, and use of shorthands. Consequently, relevant contents are not visible, and thus not retrievable while searching in tag-based systems. On the other hand, the Semantic Web, so-called Web 3.0, provides a rich semantic infrastructure. Ontologies are the key enabling technology for the Semantic Web. Ontologies can be integrated with the Social Web to overcome the lack of semantics in tagging systems. In the work presented in this thesis, we build an architecture to address a number of tagging systems drawbacks. In particular, we make use of the controlled vocabularies presented by ontologies to improve the information retrieval in tag-based systems. Based on the tags provided by the end-users, we introduce the idea of adding “system tags” from semantic, as well as social, resources. The “system tags” are comprehensive and wide-ranging in comparison with the limited “user tags”. The system tags are used to fill the gap between the user tags and the search terms used for searching in the tag-based systems. We restricted the scope of our work to tackle the following tagging systems shortcomings: - The lack of semantic relations between user tags and search terms (e.g. synonymy, hypernymy), - The lack of translation mediums between user tags and search terms (multilinguality), - The lack of context to define the emergent shorthand writing user tags. 
To address the first shortcoming, we use the WordNet ontology as a semantic lingual resource from which system tags are extracted. For the second, we use the MultiWordNet ontology to recognise cross-language linkages between different languages. Finally, to address the third, we use tag clusters obtained from the Social Web to create a context for defining the meaning of shorthand tags. A prototype of our architecture was implemented. In the prototype system, we built our own database to host videos imported from a real tag-based system (YouTube). The user tags associated with these videos were also imported and stored in the database. For each user tag, our algorithm adds a number of system tags that come either from semantic ontologies (WordNet or MultiWordNet) or from tag clusters imported from the Flickr website. Each system tag added to annotate the imported videos therefore has a relationship with one of the user tags on that video: synonymy, hypernymy, similar term, related term, translation, or a clustering relation. To evaluate the suitability of the proposed system tags, we developed an online environment where participants submit search terms and retrieve two groups of videos to be evaluated, each group produced from one distinct type of tag: user tags or system tags. The videos in the two groups are drawn from the same database and evaluated by the same participants in order to obtain a consistent and reliable evaluation. Since user tags are what tag-based systems currently search over, we take their efficiency as the reference against which the efficiency of the new system tags is compared. To compare the relevance between the search terms and each group of retrieved videos, we applied a statistical test.
According to Wilcoxon Signed-Rank test, there was no significant difference between using either system tags or user tags. The findings revealed that the use of the system tags in the search is as efficient as the use of the user tags; both types of tags produce different results, but at the same level of relevance to the submitted search terms.
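The system-tag expansion can be pictured as below. The lexicon is a toy stand-in for WordNet, MultiWordNet, and Flickr tag clusters, and the relation names and data shapes are illustrative, not the thesis's schema.

```python
# Stand-in lexical resource; the thesis draws these relations from
# WordNet (synonym/hypernym), MultiWordNet (translation) and Flickr
# tag clusters (context for shorthand tags).
LEXICON = {
    "car": {
        "synonym": ["automobile"],
        "hypernym": ["vehicle"],
        "translation": ["coche"],
    },
    "pic": {
        "context": ["picture", "photo"],  # shorthand resolved via tag clusters
    },
}

def system_tags(user_tags):
    """Expand user tags into (system_tag, relation, source_user_tag) triples."""
    expanded = []
    for tag in user_tags:
        for relation, terms in LEXICON.get(tag, {}).items():
            expanded.extend((term, relation, tag) for term in terms)
    return expanded
```

Search would then match query terms against both the user tags and these typed system tags, making synonymous, translated, or shorthand content retrievable.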
10

Uma arquitetura para gerenciamento e recomendação de ações baseadas em contexto lógico mediante dispositivos móveis / An architecture for managing and recommending actions based on logical context on mobile devices

Dametto, Andrigo 12 March 2013 (has links)
Previous issue date: 2012 / Abstract: This work develops a software architecture for Android mobile devices that collects physical location context (information gathered outdoors) and derives logical location context (information that requires processing of the collected data to be obtained indoors). This information is stored in a Semantic Web structure over which inferences are made to generate a further logical context: recommendations for using resources available on the mobile device that the user has previously used at a given time and place. The functionality of the architecture is verified by building a prototype on the Android platform.
One challenge of this work is to collect the logical location context of the device in indoor locations, such as buildings and houses, where the signal strength of the Global Positioning System (GPS) is too weak to be usable; the accelerometer and gyroscope sensors present in mobile devices are therefore used to calculate the device's displacement. The indoor location is integrated with the outdoor location, forming a continuous path. The information collected in the physical context is stored in an ontology on the mobile device and synchronized with a remote server. Another challenge is the development of a software agent that, from the data stored in the local ontology, makes inferences over the Semantic Web data and provides recommendations for using a given resource, based only on historical usage data of these resources, relating proximity to a certain place with frequency in time: the same hour of the day, the same day of the week, or the same day of the month. The architecture developed here, called the Context Manager, is integrated with two other studies not presented in this work: a Semantic Desktop, tasked with identifying the resource being used and sending it to the Context Manager; and a Context Federation, serving as a remote server that receives the context data collected by the Context Manager. Storing the collected context in a Semantic Web structure makes it possible to combine this information with context collected from other devices, characterizing a piece of equipment, an individual, or a society. The expected outcome of the architecture is the highest possible accuracy of the identified geographical position and the coherence of the recommendations for using the resources available on the mobile device at a given time and place.
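The frequency-based recommendation idea (same place, same hour of day, same day of week or month) can be sketched as a simple scoring function over historical usage records. The record format, weights, and scoring are assumptions for illustration, not the thesis's design.

```python
from collections import Counter

def recommend(history, place, hour, weekday, top_n=3):
    """Rank device resources by past use near the same place, hour and weekday.

    `history` is a list of (resource, place, hour, weekday) usage records;
    equal unit weights per matching dimension are an assumption.
    """
    scores = Counter()
    for resource, p, h, w in history:
        if p != place:
            continue                  # only consider the current location
        scores[resource] += 1         # base score: used at this place before
        if h == hour:
            scores[resource] += 1     # same hour of day
        if w == weekday:
            scores[resource] += 1     # same day of week
    return [r for r, _ in scores.most_common(top_n)]
```

In the thesis, this kind of ranking is obtained by inference over an ontology of usage history rather than an in-memory counter.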
