  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Construindo ontologias a partir de recursos existentes: uma prova de conceito no domínio da educação. / Building ontologies from existing resources: a proof of concept in the education domain.

Regina Claudia Cantele 07 April 2009 (has links)
Na Grécia antiga, Aristóteles (384-322 aC) reuniu todo conhecimento de sua época para criar a Enciclopédia. Na última década surgiu a Web Semântica representando o conhecimento organizado em ontologias. Na Engenharia de Ontologias, o Aprendizado de Ontologias reúne os processos automáticos ou semi-automáticos de aquisição de conhecimento a partir de recursos existentes. Por outro lado, a Engenharia de Software faz uso de vários padrões para permitir a interoperabilidade entre diferentes ferramentas como os criados pelo Object Management Group (OMG) Model Driven Architecture (MDA), Meta Object Facility (MOF), Ontology Definition Metamodel (ODM) e XML Metadata Interchange (XMI). Já o World Wide Web Consortium (W3C) disponibilizou uma arquitetura em camadas com destaque para a Ontology Web Language (OWL). Este trabalho propõe um framework para reunir estes conceitos fundamentado no ODM, no modelo OWL, na correspondência entre metamodelos, nos requisitos de participação para as ferramentas e na seqüência de atividades a serem aplicadas até obter uma representação inicial da ontologia. Uma prova de conceito no domínio da Educação foi desenvolvida para testar esta proposta. / In ancient Greece, Aristotle (384-322 BCE) endeavored to collect all the knowledge of his time to create the Encyclopedia. In the last decade, Berners-Lee and collaborators envisioned the Web as a structured repository, an organization they called the Semantic Web. Usually, domain knowledge is organized in ontologies. As a consequence, a great number of researchers in Ontology Engineering are working on methods and techniques to build ontologies. Ontology Learning comprises the automatic or semi-automatic processes that perform knowledge acquisition from existing resources.
On the other hand, Software Engineering uses a collection of theories, methodologies and techniques to support information abstraction, and several standards have been adopted to allow interoperability between different tools, such as those created by the Object Management Group (OMG): Model Driven Architecture (MDA), Meta Object Facility (MOF), Ontology Definition Metamodel (ODM) and XML Metadata Interchange (XMI). The World Wide Web Consortium (W3C) released a layered architecture for implementing the Semantic Web, with emphasis on the Web Ontology Language (OWL). This work proposes a framework that combines these concepts, based on the ODM, on the OWL model, on the correspondence between metamodels and on the participation requirements for tools; in it, a sequence of activities is defined to be applied until an initial representation of the ontology is obtained. A proof of concept in the Education domain was developed to test this proposal.
242

Educação a distância e a WEB Semântica: modelagem ontológica de materiais e objetos de aprendizagem para a plataforma COL. / e-Learning and the Semantic Web: learning materials and objects for the CoL platform.

Moysés de Araujo 11 September 2003 (has links)
A World Wide Web está se tornando uma grande biblioteca virtual, onde a informação sobre qualquer assunto está disponível a qualquer hora e em qualquer lugar, com ou sem custo, criando oportunidades em várias áreas do conhecimento humano, dentre as quais a Educação não é exceção. Embora muitas aplicações educacionais baseadas na Web tenham sido desenvolvidas nos últimos anos, alguns problemas nesta área não foram resolvidos, entre as quais está a pesquisa de materiais e objetos de aprendizagem mais inteligentes e eficientes, pois como as informações na World Wide Web não são estruturadas e organizadas, as máquinas não podem “compreender” e nem “interpretar” o significado das informações semânticas. Para dar uma nova infra-estrutura para a World Wide Web está surgindo uma nova tecnologia conhecida com Web Semântica, cuja finalidade é estruturar e organizar as informações para buscas mais inteligentes e eficientes, utilizando-se principalmente do conceito de ontologia. Este trabalho apresenta uma proposta de modelagem ontológica de materiais e objetos de aprendizagem baseada nas tecnologias da Web Semântica para a plataforma de ensino a distância CoL - Cursos on LARC. Esta proposta estende esta plataforma adicionando-lhe a capacidade de organizar e estruturar seus materiais de aprendizagem, de forma a que pesquisas mais “inteligentes” e estruturadas possam ser realizadas, nestes materiais e propiciando a possibilidade de reutilização do conteúdo desses materiais. / The World Wide Web is turning into a huge virtual library, where information about any subject is available at any time and in any place, with or without fees, creating opportunities in several areas of human knowledge. Education is no exception among these areas. Although many Web-based educational applications have been developed in recent years, some problems in this area have not been solved yet.
Among these is the search for more intelligent and effective learning objects and materials: since the information on the World Wide Web is neither structured nor organized, machines can neither “understand” nor “interpret” the meaning of semantic information. To give the World Wide Web a new infrastructure, a new technology known as the Semantic Web is being developed. It aims to structure and organize information for more intelligent and effective searches, relying mainly on the concept of ontology. This work presents an ontological modeling of learning materials and objects, based on Semantic Web technologies, for the distance-education platform CoL (Courses on LARC). The proposal extends the platform by adding the ability to organize and structure its learning materials, so that more “intelligent” and structured searches can be performed on them and their contents can be reused.
243

Integração de recursos da web semântica e mineração de uso para personalização de sites / Integrating semantic web resources and web usage mining for website personalization

Rigo, Sandro Jose January 2008 (has links)
Um dos motivos para o crescente desenvolvimento da área de mineração de dados encontra-se no aumento da quantidade de documentos gerados e armazenados em formato digital, estruturados ou não. A Web contribui sobremaneira para este contexto e, de forma coerente com esta situação, observa-se o surgimento de técnicas específicas para utilização nesta área, como a mineração de estrutura, de conteúdo e de uso. Pode-se afirmar que esta crescente oferta de informação na Web cria o problema da sobrecarga cognitiva. A Hipermídia Adaptativa permite minorar este problema, com a adaptação de hiperdocumentos e hipermídia aos seus usuários segundo suas necessidades, preferências e objetivos. De forma resumida, esta adaptação é realizada relacionando-se informações sobre o domínio da aplicação com informações sobre o perfil de usuários. Um dos tópicos importantes de pesquisa em sistemas de Hipermídia Adaptativa encontra-se na geração e manutenção do perfil dos usuários. Dentre as abordagens conhecidas, existe um contínuo de opções, variando desde cadastros de informações preenchidos manualmente, entrevistas, até a aquisição automática de informações com acompanhamento do uso da Web. Outro ponto fundamental de pesquisa nesta área está ligado à construção das aplicações, sendo que recursos da Web Semântica, como ontologias de domínio ou anotações semânticas de conteúdo podem ser observados no desenvolvimento de sistemas de Hipermídia Adaptativa. Os principais motivos para tal podem ser associados com a inerente flexibilidade, capacidade de compartilhamento e possibilidades de extensão destes recursos. Este trabalho descreve uma arquitetura para a aquisição automática de perfis de classes de usuários, a partir da mineração do uso da Web e da aplicação de ontologias de domínio. 
O objetivo principal é a integração de informações semânticas, obtidas em uma ontologia de domínio descrevendo o site Web em questão, com as informações de acompanhamento do uso obtidas pela manipulação dos dados de sessões de usuários. Desta forma é possível identificar mais precisamente os interesses e necessidades de um usuário típico. Integra o trabalho a implementação de aplicação de Hipermídia Adaptativa a partir de conceitos de modelagem semântica de aplicações, com a utilização de recursos de serviços Web, para validação experimental da proposta. / One of the reasons for the increasing development observed in the Data Mining area is the rise in the quantity of documents generated and stored in digital format, structured or not. The Web plays a central role in this context, and some specific techniques can be observed, such as structure, content and usage mining. This increasing offer of information on the Web brings the problem of cognitive overload. Adaptive Hypermedia reduces this problem by presenting the contents of selected documents in accordance with the user's needs, preferences and objectives. Briefly put, this adaptation is carried out on the basis of the relationship between information concerning the application domain and information concerning the user profile. One of the important points in Adaptive Hypermedia systems research is the generation and maintenance of user profiles. Some approaches seek to create the user profile from registration data, others incorporate the results of interviews, and some aim at the automatic acquisition of information by tracking usage. Another fundamental research point is related to application construction, where the use of Semantic Web resources, such as semantic annotations and domain ontologies, can be observed. This work describes an architecture for automatic acquisition of user-class profiles, using domain ontologies and Web usage mining.
The main objective is the integration of usage data, obtained from user sessions, with a semantic description, obtained from a domain ontology. In this way it is possible to identify more precisely the interests and needs of a typical user. The implementation of an Adaptive Hypermedia application, based on the concepts of semantic application modeling and on the Web services resources integrated into the proposal, permitted greater flexibility and more experimentation possibilities.
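The integration this abstract describes, joining session-level usage data with concepts from a domain ontology, can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual architecture: the page-to-concept mapping, page URLs and concept names are all invented for the example.

```python
from collections import Counter

# Hypothetical mapping from site pages to domain-ontology concepts
# (in the thesis this association comes from the ontology describing the site).
PAGE_CONCEPTS = {
    "/courses/ml-intro": ["MachineLearning", "Course"],
    "/courses/db-design": ["Databases", "Course"],
    "/papers/ontology-survey": ["Ontology", "Publication"],
}

def profile_from_sessions(sessions):
    """Aggregate ontology concepts over the pages visited in user sessions,
    yielding a normalized concept-interest distribution for a user class."""
    counts = Counter()
    for session in sessions:
        for page in session:
            counts.update(PAGE_CONCEPTS.get(page, []))
    total = sum(counts.values())
    return {concept: n / total for concept, n in counts.items()}

profile = profile_from_sessions([
    ["/courses/ml-intro", "/papers/ontology-survey"],
    ["/courses/ml-intro"],
])
# Concepts behind frequently visited pages dominate the resulting profile.
```

An adaptive application could then rank or preselect content whose ontology concepts score highest in this distribution.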
244

Template-Based Question Answering over Linked Data using Recursive Neural Networks

January 2018 (has links)
abstract: The Semantic Web contains large amounts of related information in the form of knowledge graphs such as DBpedia. These knowledge graphs are typically enormous and are not easily accessible for users, as they require specialized knowledge of query languages (such as SPARQL) as well as deep familiarity with the ontologies used by these knowledge graphs. So, to make these knowledge graphs more accessible (even for non-experts), several question answering (QA) systems have been developed over the last decade. Due to the complexity of the task, the approaches undertaken include techniques from natural language processing (NLP), information retrieval (IR), machine learning (ML) and the Semantic Web (SW). At a high level, most question answering systems approach the task as a conversion from the natural language question to its corresponding SPARQL query. These systems then use the query to retrieve the desired entities or literals. One approach to this problem, used by most systems today, is to apply deep syntactic and semantic analysis to the input question to derive the SPARQL query. This has resulted in the evolution of natural language processing pipelines with common characteristics such as answer type detection, segmentation, phrase matching, part-of-speech tagging, named entity recognition, named entity disambiguation, syntactic or dependency parsing, semantic role labeling, etc. This has led to NLP pipeline architectures that integrate components, each solving a specific aspect of the problem and passing its results to subsequent components for further processing, e.g. DBpedia Spotlight for named entity recognition, RelMatch for relational mapping, etc. A major drawback of this approach is error propagation, a common problem in NLP: mistakes early in the pipeline can adversely affect successive steps further down.
Another approach is to use query templates, either manually generated or extracted from existing benchmark datasets such as Question Answering over Linked Data (QALD), to generate the SPARQL queries; a template is essentially a predefined query with various slots that need to be filled. This approach shifts the question answering problem into a classification task, where the system needs to match the input question to the appropriate template (class label). This thesis proposes a neural network approach to automatically learn to classify natural language questions into their corresponding templates using recursive neural networks. An obvious advantage of using neural networks is eliminating the need for laborious feature engineering, which can be cumbersome and error prone. The input question is encoded into a vector representation. The model is trained and evaluated on the LC-QuAD dataset (Large-scale Complex Question Answering Dataset), which was created explicitly for machine learning based QA approaches that learn complex SPARQL queries. The dataset consists of 5000 questions along with their corresponding SPARQL queries over the DBpedia dataset, spanning 5042 entities and 615 predicates. These queries were annotated with 38 unique templates that the model attempts to classify. The resulting model is evaluated against both the LC-QuAD dataset and the Question Answering over Linked Data (QALD-7) dataset. The recursive neural network achieves a template classification accuracy of 0.828 on the LC-QuAD dataset and 0.618 on the QALD-7 dataset. When the top-2 most likely templates are considered, the model achieves an accuracy of 0.945 on the LC-QuAD dataset and 0.786 on the QALD-7 dataset. After slot filling, the overall system achieves a macro F-score of 0.419 on the LC-QuAD dataset and a macro F-score of 0.417 on the QALD-7 dataset. / Dissertation/Thesis / Masters Thesis Software Engineering 2018
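The template-plus-slot-filling pipeline the abstract outlines can be illustrated with a toy sketch. Here a trivial keyword rule stands in for the recursive neural network classifier, and the template strings, slot names and the example entity/predicate URIs are all hypothetical, not taken from the thesis or from LC-QuAD:

```python
# Two toy SPARQL templates; the doubled braces escape Python's str.format.
TEMPLATES = {
    "simple_fact": "SELECT ?answer WHERE {{ <{entity}> <{predicate}> ?answer }}",
    "count": "SELECT (COUNT(?x) AS ?answer) WHERE {{ ?x <{predicate}> <{entity}> }}",
}

def classify(question):
    """Stand-in for the template classifier (the thesis trains a recursive
    neural network on question vectors to pick one of 38 templates)."""
    return "count" if question.lower().startswith("how many") else "simple_fact"

def build_query(question, entity, predicate):
    """Classify the question, then fill the chosen template's slots."""
    template = TEMPLATES[classify(question)]
    return template.format(entity=entity, predicate=predicate)

q = build_query("How many rivers flow through India?",
                "http://dbpedia.org/resource/India",
                "http://dbpedia.org/ontology/country")
# q is a COUNT query over DBpedia with both slots filled.
```

The real system's difficulty lies in the two stages this sketch trivializes: learning the classification and linking question phrases to the correct entities and predicates.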
245

[en] SEMANTIC MODELING DESIGN OF WEB APPLICATION / [pt] MODELAGEM SEMÂNTICA DE APLICAÇÕES NA WWW

FERNANDA LIMA 13 October 2003 (has links)
[pt] Este trabalho apresenta um modelo para projeto e implementação de aplicações hipermídia no contexto da Web semântica. A partir dos princípios o Object Oriented Hypermedia Design Method, utilizamos as noções de ontologias para definir o modelo conceitual de uma aplicação, estendendo o poder expressivo daquele método. Os modelos de navegação são definidos utilizando-se uma linguagem de consulta que permite referências tanto ao esquema de dados quanto às suas instâncias, possibilitando a definição de estruturas de navegação flexíveis e abrangentes. Adicionalmente, propomos a utilização de estruturas de acesso facetadas para o apoio à escolha de objetos de navegação utilizando múltiplos critérios. Finalmente, apresentamos uma arquitetura de implementação que permite a utilização direta da especificação da aplicação na derivação da implementação da aplicação final. / [en] In this thesis we present a method for the design and implementation of web applications for the Semantic Web. Based on the Object Oriented Hypermedia Design Method approach, we used ontology concepts to define an application conceptual model, extending the expressive power of the original method. The navigational models definitions use a query language capable of querying both schema and instances, enabling the specification of flexible access structures. Additionally, we propose the use of faceted access structures to improve the selection of navigational objects organized by multiple criteria. Finally, we present an implementation architecture that allows the direct use of the application specifications when deriving a final application implementation.
246

[en] ONTOLOGIES USE IN B2C DOMAIN / [pt] UTILIZAÇÃO DE ONTOLOGIAS NO DOMÍNIO B2C

FRANCISCO JOSE ZAMITH GUIMARAES 12 September 2003 (has links)
[pt] A principal dificuldade dentro do domínio B2C está em aumentar a utilidade da WWW para o comércio eletrônico através da melhoria das possibilidades oferecidas ao consumidor. Apesar de a WWW permitir ao comprador ter acesso a uma grande quantidade de informação, obter a informação do fornecedor certo que venda o produto desejado a um preço razoável, pode ser uma tarefa muito custosa. Uma das formas de melhorar essa situação é através do uso de agentes inteligentes de busca de informação, isto é, agentes de compra, que auxiliam os compradores a encontrar produtos de seu interesse. Para que isso ocorra esbarra-se em uma dificuldade inerente à própria WWW: a mistura da linguagem natural, imagens e informação de layout de HTML são uma das maiores barreiras para a automatização do comércio eletrônico, pois a semântica da informação é somente compreensível por seres humanos. Desta forma espera-se conseguir agentes de compra mais eficientes quando associados ao uso de ontologias, e lojas virtuais que tenham anotações especiais que sigam uma ontologia. Nessa dissertação fazemos um estudo sobre as principais tecnologias envolvidas no desenvolvimento de ontologias em Ciência da Computação. Fazemos também um estudo de caso sobre a aplicação de ontologias dentro do domínio de B2C, visando assim avaliar o potencial e as dificuldades existentes para o desenvolvimento desse tipo de aplicação. / [en] The main difficulty associated with the B2C domain is increasing the usefulness of the WWW for electronic commerce through the improvement of the services provided to the consumer. Even though the WWW allows the buyer to access a great amount of information, obtaining information from the right supplier, one that sells the desired product at a reasonable price, can be a very costly task.
One of the ways of improving the Web's functionality is through the use of intelligent information-search agents, that is, purchase agents that help buyers find products of interest. For that to happen we need to overcome an inherent difficulty of the WWW: the mixture of natural language, images and layout information in HTML is one of the greatest barriers to the automation of electronic commerce, because the semantics of the information is comprehensible only to human beings. To address this problem we expect to produce more efficient purchase agents by associating them with ontologies, together with virtual stores whose special annotations follow an ontology. In this dissertation we study the main technologies involved in ontology development in computer science. We also develop a case study on the application of ontologies to the B2C domain, seeking to evaluate the potential of, and the difficulties in, developing this type of application.
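A purchase agent of the kind described above can be sketched minimally: stores annotate products against a shared ontology's class hierarchy, and the agent matches a buyer's query concept against those annotations, subclasses included. The tiny hierarchy, product data and price constraint below are invented for illustration and are not from the dissertation's case study.

```python
# Hypothetical fragment of a shared product ontology (subclass relations).
SUBCLASS_OF = {"Laptop": "Computer", "Desktop": "Computer", "Computer": "Product"}

def is_a(concept, ancestor):
    """True if `concept` equals `ancestor` or is a (transitive) subclass of it."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = SUBCLASS_OF.get(concept)
    return False

def find_offers(products, wanted_concept, max_price):
    """Match ontology-annotated products against the buyer's concept and budget."""
    return [p for p in products
            if is_a(p["concept"], wanted_concept) and p["price"] <= max_price]

catalog = [
    {"name": "UltraBook 13", "concept": "Laptop", "price": 900},
    {"name": "Tower X", "concept": "Desktop", "price": 1200},
]
offers = find_offers(catalog, "Computer", 1000)
# Only the laptop satisfies both the concept match and the price constraint.
```

The point of the ontology here is that a query for "Computer" finds laptops and desktops without string matching, which plain HTML pages cannot support.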
247

Ontology learning from folksonomies.

January 2010 (has links)
Chen, Wenhao. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2010. / Includes bibliographical references (p. 63-70). / Abstracts in English and Chinese.
Contents:
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Ontologies and Folksonomies --- p.1
Chapter 1.2 --- Motivation --- p.3
Chapter 1.2.1 --- Semantics in Folksonomies --- p.3
Chapter 1.2.2 --- Ontologies with basic level concepts --- p.5
Chapter 1.2.3 --- Context and Context Effect --- p.6
Chapter 1.3 --- Contributions --- p.6
Chapter 1.4 --- Structure of the Thesis --- p.8
Chapter 2 --- Background Study --- p.10
Chapter 2.1 --- Semantic Web --- p.10
Chapter 2.2 --- Ontology --- p.12
Chapter 2.3 --- Folksonomy --- p.14
Chapter 2.4 --- Cognitive Psychology --- p.17
Chapter 2.4.1 --- Category (Concept) --- p.17
Chapter 2.4.2 --- Basic Level Categories (Concepts) --- p.17
Chapter 2.4.3 --- Context and Context Effect --- p.20
Chapter 2.5 --- F1 Evaluation Metric --- p.21
Chapter 2.6 --- State of the Art --- p.23
Chapter 2.6.1 --- Ontology Learning --- p.23
Chapter 2.6.2 --- Semantics in Folksonomy --- p.26
Chapter 3 --- Ontology Learning from Folksonomies --- p.28
Chapter 3.1 --- Generating Ontologies with Basic Level Concepts from Folksonomies --- p.29
Chapter 3.1.1 --- Modeling Instances and Concepts in Folksonomies --- p.29
Chapter 3.1.2 --- The Metric of Basic Level Categories (Concepts) --- p.30
Chapter 3.1.3 --- Basic Level Concepts Detection Algorithm --- p.31
Chapter 3.1.4 --- Ontology Generation Algorithm --- p.34
Chapter 3.2 --- Evaluation --- p.35
Chapter 3.2.1 --- Data Set and Experiment Setup --- p.35
Chapter 3.2.2 --- Quantitative Analysis --- p.36
Chapter 3.2.3 --- Qualitative Analysis --- p.39
Chapter 4 --- Context Effect on Ontology Learning from Folksonomies --- p.43
Chapter 4.1 --- Context-aware Basic Level Concepts Detection --- p.44
Chapter 4.1.1 --- Modeling Context in Folksonomies --- p.44
Chapter 4.1.2 --- Context Effect on Category Utility --- p.45
Chapter 4.1.3 --- Context-aware Basic Level Concepts Detection Algorithm --- p.46
Chapter 4.2 --- Evaluation --- p.47
Chapter 4.2.1 --- Data Set and Experiment Setup --- p.47
Chapter 4.2.2 --- Result Analysis --- p.49
Chapter 5 --- Potential Applications --- p.54
Chapter 5.1 --- Categorization of Web Resources --- p.54
Chapter 5.2 --- Applications of Ontologies --- p.55
Chapter 6 --- Conclusion and Future Work --- p.57
Chapter 6.1 --- Conclusion --- p.57
Chapter 6.2 --- Future Work --- p.59
Bibliography --- p.63
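The table of contents above names a metric of basic level categories (Chapter 3.1.2) grounded in category utility from cognitive psychology. A minimal sketch of category utility over tag features follows; the formula is the standard one (CU averaged over clusters of the gain in feature predictability), while the toy tag data and clustering are invented here, not taken from the thesis:

```python
def category_utility(clusters):
    """Category utility of a clustering, where each instance is a set of tags.
    CU = (1/K) * sum_k P(c_k) * (sum_f P(f|c_k)^2 - sum_f P(f)^2)."""
    instances = [inst for cluster in clusters for inst in cluster]
    n = len(instances)
    features = {f for inst in instances for f in inst}
    # Baseline predictability of each tag over all instances.
    base = sum((sum(f in inst for inst in instances) / n) ** 2 for f in features)
    cu = 0.0
    for cluster in clusters:
        p_c = len(cluster) / n
        within = sum((sum(f in inst for inst in cluster) / len(cluster)) ** 2
                     for f in features)
        cu += p_c * (within - base)
    return cu / len(clusters)

# Two tag-coherent clusters score higher than one undifferentiated cluster.
coherent = [[{"python", "code"}, {"python", "script"}],
            [{"cat", "pet"}, {"dog", "pet"}]]
merged = [coherent[0] + coherent[1]]
```

In a basic-level-concept detector, candidate concept nodes whose clusters maximize this kind of utility are the ones promoted into the learned ontology.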
248

Utilização de web semântica para seleção de informações de web services no registro UDDI: uma abordagem com qualidade de serviço / Using the Semantic Web to select Web Services information in the UDDI registry: an approach with quality of service

Nakamura, Luis Hideo Vasconcelos 15 February 2012 (has links)
Este projeto de mestrado aborda a utilização de recursos da Web Semântica na seleção de informações sobre Web Services no registro UDDI (Universal Description, Discovery, and Integration). Esse registro possui a limitação de apenas armazenar informações funcionais de Web Services. As informações não funcionais que incluem as informações de qualidade de serviço (QoS - Quality of Service) não são contempladas e dessa forma dificulta a escolha do melhor serviço pelos clientes. Neste projeto, a representação da base de conhecimento com informações sobre os provedores, clientes, acordos, serviços e a qualidade dos serviços prestados foi feita por meio de uma ontologia. Essa ontologia é utilizada pelo módulo UDOnt-Q (Universal Discovery with Ontology and QoS) que foi projetado para servir de plataforma para algoritmos de busca e composição de serviços com qualidade. Embora a utilização de semântica possa ser empregada para a composição e automatização de serviços, o foco deste trabalho é a garantia de qualidade de serviço em Web Services. Os algoritmos desenvolvidos empregam recursos da Web Semântica para classificar e selecionar os Web Services adequados de acordo com as informações de qualidade que estão armazenados na ontologia. O módulo e os algoritmos foram submetidos a avaliações de desempenho que revelaram problemas de desempenho com relação a abordagem adotada durante o processo de inferência da ontologia. Tal processo é utilizado para a classificação das informações dos elementos presentes na ontologia. Contudo, uma vez que as informações foram inferidas, o processo de busca e seleção de serviços comprovou a viabilidade de utilização do módulo e de um dos seus algoritmos de seleção. / This master's project addresses the use of Semantic Web resources in the selection of information about Web Services in the UDDI (Universal Description, Discovery, and Integration) registry. This registry has the limitation of storing only functional information about Web Services.
The nonfunctional information, which includes quality of service (QoS) information, is not covered, and this makes it difficult for customers to choose the best service. In this project, the knowledge base with information about providers, customers, agreements, services and the quality of the services provided was represented through an ontology. This ontology is used by the UDOnt-Q (Universal Discovery with Ontology and QoS) module, which was designed to serve as a platform for quality-aware service search and composition algorithms. Although semantics can also be employed for the composition and automation of services, the focus of this work is guaranteeing quality of service in Web Services. The developed algorithms employ Semantic Web resources to classify and select the appropriate Web Services according to the quality information stored in the ontology. The module and the algorithms were subjected to performance evaluations, which revealed performance problems related to the approach taken during the ontology inference process. This process is used to classify the information of the elements present in the ontology. However, once the information had been inferred, the service search and selection process proved the viability of using the module and one of its selection algorithms.
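The selection step the abstract describes, ranking candidate services by the quality attributes recorded for them, can be sketched as a simple weighted score. This is an illustration only: the attribute names, weights and candidate services are invented, and UDOnt-Q itself works over an ontology rather than plain dictionaries.

```python
# Hypothetical QoS attributes and client-supplied preference weights.
WEIGHTS = {"availability": 0.5, "throughput": 0.3, "reliability": 0.2}

def qos_score(service):
    """Weighted sum of a service's normalized QoS attributes (all in [0, 1])."""
    return sum(WEIGHTS[attr] * service[attr] for attr in WEIGHTS)

def select_best(services):
    """Return the candidate with the highest weighted QoS score."""
    return max(services, key=qos_score)

candidates = [
    {"name": "ServiceA", "availability": 0.99, "throughput": 0.70, "reliability": 0.90},
    {"name": "ServiceB", "availability": 0.95, "throughput": 0.95, "reliability": 0.85},
]
best = select_best(candidates)
# ServiceB wins: its higher throughput outweighs ServiceA's availability edge.
```

Storing these attributes in an ontology (rather than in a flat registry) is what lets the module infer and compare quality classes across providers, at the inference cost the evaluation exposed.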
249

[en] AN ACCESS CONTROL MODEL FOR THE DESIGN OF SEMANTIC WEB APPLICATIONS / [pt] MODELO DE CONTROLE DE ACESSO NO PROJETO DE APLICAÇÕES NA WEB SEMÂNTICA

MAIRON DE ARAUJO BELCHIOR 27 April 2012 (has links)
[pt] O modelo Role-based Access Control (RBAC) fornece uma maneira para gerenciar o acesso às informações de uma organização, reduzindo-se a complexidade e os custos administrativos e minimizando-se os erros. Atualmente existem diversos métodos de desenvolvimento de aplicações na Web Semântica e na Web em geral, porém nenhum dos modelos produzidos por estes métodos abrange a descrição de diretivas relacionadas ao controle de acesso de forma integrada com os outros modelos produzidos por estes métodos. O objetivo desta dissertação é integrar o controle de acesso no projeto de aplicações na Web Semântica (e na Web em geral). Mais especificamente, este trabalho apresenta uma extensão do método SHDM (Semantic Hypermedia Design Method) para a inclusão do modelo RBAC e de um modelo de políticas baseada em regras de forma integrada com os outros modelos deste método. O método SHDM é um método para o projeto de aplicações hipermídia para a web semântica. Uma arquitetura de software modular foi proposta e implementada no Synth, que é um ambiente de desenvolvimento de aplicações projetadas segundo o método SHDM. / [en] The Role-based Access Control (RBAC) model provides a way to manage access to an organization's information while reducing the complexity and cost of security administration in large networked applications. Several design methods for Semantic Web (and Web in general) applications have been proposed, but none of them produces a specialized model for describing access control policies integrated with the other models they produce. The goal of this dissertation is to integrate access control into the design of Semantic Web applications. More specifically, this work presents an extension of the SHDM method (Semantic Hypermedia Design Method) that includes the RBAC model and a rule-based policy model integrated with the other models of this method. SHDM is a model-driven approach to designing web applications for the Semantic Web.
A modular software architecture was proposed and implemented in Synth, a development environment for applications designed according to the SHDM method.
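The core RBAC idea the dissertation builds on is small enough to sketch: users are assigned roles, roles carry permissions, and an access decision is made by role membership rather than per-user grants. The roles, users and permission names below are illustrative, not taken from the SHDM extension or from Synth.

```python
# Hypothetical role and user assignments; in RBAC the role layer is what keeps
# administration cheap, since permissions are granted to roles, not users.
ROLE_PERMISSIONS = {
    "editor": {"read", "write"},
    "viewer": {"read"},
}
USER_ROLES = {
    "alice": {"editor"},
    "bob": {"viewer"},
}

def has_permission(user, permission):
    """Access check: does any of the user's roles carry the permission?"""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

The dissertation's contribution is not this check itself but expressing such policies, plus rule-based ones, in models integrated with SHDM's other design models.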
250

Towards Semantically Enabled Complex Event Processing

Keskisärkkä, Robin January 2017 (has links)
The Semantic Web provides a framework for semantically annotating data on the web, and the Resource Description Framework (RDF) supports the integration of structured data represented in heterogeneous formats. Traditionally, the Semantic Web has focused primarily on more or less static data, but information on the web today is becoming increasingly dynamic. RDF Stream Processing (RSP) systems address this issue by adding support for streaming data and continuous query processing. To some extent, RSP systems can be used to perform complex event processing (CEP), where meaningful high-level events are generated based on low-level events from multiple sources; however, there are several challenges with respect to using RSP in this context. Event models designed to represent static event information lack several features required for CEP, and are typically not well suited for stream reasoning. The dynamic nature of streaming data also greatly complicates the development and validation of RSP queries. Therefore, reusing queries that have been prepared ahead of time is important to be able to support real-time decision-making. Additionally, there are limitations in existing RSP implementations in terms of both scalability and expressiveness, where some features required in CEP are not supported by any of the current systems. The goal of this thesis work has been to address some of these challenges and the main contributions of the thesis are: (1) an event model ontology targeted at supporting CEP; (2) a model for representing parameterized RSP queries as reusable templates; and (3) an architecture that allows RSP systems to be integrated for use in CEP. The proposed event model tackles issues specifically related to event modeling in CEP that have not been sufficiently covered by other event models, includes support for event encapsulation and event payloads, and can easily be extended to fit specific use-cases. 
The model for representing RSP query templates was designed as an extension to SPIN, a vocabulary that supports modeling of SPARQL queries as RDF. The extended model supports the current version of the RSP Query Language (RSP-QL) developed by the RDF Stream Processing Community Group, along with some of the most popular RSP query languages. Finally, the proposed architecture views RSP queries as individual event processing agents in a more general CEP framework. Additional event processing components can be integrated to provide support for operations that are not supported in RSP, or to provide more efficient processing for specific tasks. We demonstrate the architecture in implementations for scenarios related to traffic-incident monitoring, criminal-activity monitoring, and electronic healthcare monitoring.
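The second contribution above, parameterized RSP queries as reusable templates, has a rough string-level analogue: a query text with named parameters is instantiated with concrete values ahead of time, so validated queries can be reused for real-time decisions. The thesis models this in RDF via an extension of SPIN; the RSP-QL-like query text, parameter names and vehicle-speed scenario below are invented for illustration.

```python
# A parameterized query template in the style of RSP-QL (window over a stream,
# filter on a threshold). Doubled braces escape Python's str.format syntax.
TEMPLATE = (
    "REGISTER STREAM :speeders AS "
    "SELECT ?vehicle WHERE {{ WINDOW :w [RANGE PT{range_s}S STEP PT{step_s}S] "
    "{{ ?vehicle :speed ?s . FILTER (?s > {limit}) }} }}"
)

def instantiate(template, **params):
    """Fill a reusable query template with concrete parameter values."""
    return template.format(**params)

# A traffic-monitoring instantiation: a 60-second window sliding every 10
# seconds, flagging vehicles above a 110 km/h limit.
query = instantiate(TEMPLATE, range_s=60, step_s=10, limit=110)
```

The same template could be re-instantiated with different windows or thresholds for the other monitoring scenarios (criminal activity, healthcare) without rewriting or revalidating the query structure.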
