51

Automated Injection of Curated Knowledge Into Real-Time Clinical Systems: CDS Architecture for the 21st Century

January 2018
abstract: Clinical Decision Support (CDS) is primarily associated with alerts, reminders, order entry, rule-based invocation, diagnostic aids, and on-demand information retrieval. While valuable, these foci have been in production use for decades and do not provide a broader, interoperable means of plugging structured clinical knowledge into live electronic health record (EHR) ecosystems for purposes of orchestrating the user experiences of patients and clinicians. To date, the gap between knowledge representation and user-facing EHR integration has been considered an “implementation concern” requiring unscalable manual human effort and governance coordination. Drafting a questionnaire engineered to meet the HL7 CDS Knowledge Artifact specification, for example, carries no reasonable expectation that it can be imported and deployed into a live system without significant burden. Dramatically reducing the time-and-effort gap in the research and application cycle could be revolutionary. Doing so, however, requires both a floor-to-ceiling precoordination of functional boundaries in the knowledge management lifecycle and a formalization of the human processes by which this occurs. This research introduces ARTAKA: Architecture for Real-Time Application of Knowledge Artifacts, a concrete floor-to-ceiling technological blueprint by which both provider health IT (HIT) and vendor organizations can incrementally and dynamically introduce value into existing systems. This is made possible by the service-ization of curated knowledge artifacts, which are then injected into a highly scalable backend infrastructure through automated orchestration via public marketplaces. Supplementary examples of client app integration are also provided. Compilation of knowledge into platform-specific form is left flexible, insofar as implementations comply with ARTAKA’s Context Event Service (CES) communication and Health Services Platform (HSP) Marketplace service packaging standards. Toward the goal of interoperable human processes, ARTAKA’s treatment of knowledge artifacts as a specialized form of software allows knowledge engineering to operate as a type of software engineering practice. Thus, nearly a century of software development processes, tools, policies, and lessons offers immediate benefit, in some cases with remarkable parity. Analyses of experimentation are provided, with guidelines on how selected aspects of software development life cycles (SDLCs) apply to knowledge artifact development in an ARTAKA environment. Portions of this culminating document have also been initiated with Standards Developing Organizations (SDOs) with the intent of ultimately producing normative standards, and active relationships with other bodies have been established. / Dissertation/Thesis / Doctoral Dissertation, Biomedical Informatics, 2018
52

A REAL-TIME REASONING SERVICE FOR THE INTERNET OF THINGS

RUHAN DOS REIS MONTEIRO 17 January 2019
The growth of the Internet of Things (IoT) has brought the opportunity to create applications in several areas through the use of sensors and actuators. One of the problems encountered in IoT systems is the difficulty of adding semantic relations to the raw data produced by sensors and of inferring new facts from these relations. Moreover, because many IoT applications are online and must react instantly to the sensor data they collect, that data needs to be analyzed in real time. Streams are sequences of time-varying data elements that should not be treated as data to be stored forever and queried on demand; streaming data must be consumed quickly through continuous queries that analyze it and produce new, relevant data (i.e., streams of output/result events). The ability to infer new semantic relationships over streaming data is called stream reasoning. We propose a semantic model and a mechanism for real-time data stream processing and reasoning based on Complex Event Processing (CEP), RDF (Resource Description Framework), and OWL (Web Ontology Language). This work presents a middleware service that supports continuous reasoning over data produced by sensors. The main advantages of our approach are: (a) it considers time as a key relationship between pieces of information; (b) stream processing can be implemented using CEP; and (c) it is general enough to be applied to any Data Stream Management System (DSMS). The service was developed in the Laboratory for Advanced Collaboration (LAC), and a case study in the fire-detection domain was conducted and implemented, elucidating the use of real-time reasoning over streams.
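To make the fire-detection case study concrete, below is a minimal sketch of the kind of continuous CEP rule such a middleware could evaluate, assuming an Esper-style engine and its Java API (the abstract specifies CEP but not a particular engine). The `TemperatureReading` event class, the 30-second window, and the 60-degree threshold are illustrative assumptions, not details taken from the thesis.

```java
import com.espertech.esper.client.*;

public class FireDetectionSketch {
    // Illustrative event type; the thesis's actual sensor schema may differ.
    public static class TemperatureReading {
        private final String roomId;
        private final double celsius;
        public TemperatureReading(String roomId, double celsius) {
            this.roomId = roomId;
            this.celsius = celsius;
        }
        public String getRoomId() { return roomId; }
        public double getCelsius() { return celsius; }
    }

    public static void main(String[] args) {
        Configuration config = new Configuration();
        config.addEventType("TemperatureReading", TemperatureReading.class);
        EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(config);

        // Continuous query: per-room average temperature over a 30-second
        // sliding window; fire an alert when the average exceeds 60 degrees.
        EPStatement alert = engine.getEPAdministrator().createEPL(
            "select roomId, avg(celsius) as avgTemp " +
            "from TemperatureReading.win:time(30 sec) " +
            "group by roomId having avg(celsius) > 60.0");

        alert.addListener((newEvents, oldEvents) ->
            System.out.println("Possible fire in room " + newEvents[0].get("roomId")
                + ", avg temp = " + newEvents[0].get("avgTemp")));

        // Simulated sensor stream.
        engine.getEPRuntime().sendEvent(new TemperatureReading("lab-1", 72.5));
    }
}
```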
53

A distributed service delivery platform for automotive environments : enhancing communication capabilities of an M2M service platform for automotive application

Glaab, Markus January 2018
The automotive domain is changing. On the way to more convenient, safe, and efficient vehicles, the role of electronic controllers, and particularly of software, has grown significantly for many years, and vehicles have become software-intensive systems. Furthermore, vehicles are connected to the Internet to enable Advanced Driver Assistance Systems and enhanced In-Vehicle Infotainment functionalities. This widens the automotive software and system landscape beyond the physical vehicle boundaries to include external backend servers in the cloud. Moreover, connectivity facilitates new kinds of distributed functionalities, making the vehicle part of an Intelligent Transportation System (ITS) and thus an important example of a future Internet of Things (IoT). Manufacturers, however, are confronted with the challenging task of integrating this ever-increasing range of functionalities, with heterogeneous or even contradictory requirements, into a homogeneous overall system. This requires new software platforms and architectural approaches. In this regard, connectivity to fixed-side backend systems not only introduces additional challenges, but also enables new approaches for addressing them. The vehicle-to-backend approaches currently emerging are dominated by proprietary solutions, in clear contradiction to the requirements of ITS scenarios, which call for interoperability across the broad range of vehicles and manufacturers. Therefore, this research aims at the development and propagation of a new concept of a universal distributed Automotive Service Delivery Platform (ASDP) as an enabler for future automotive functionalities, not limited to ITS applications. Since Machine-to-Machine (M2M) communication is considered a primary building block for the IoT, emergent standards such as the oneM2M service platform are selected as the initial architectural hypothesis for the realisation of an ASDP. Accordingly, this project describes a oneM2M-based ASDP as a reference configuration of the oneM2M service platform for automotive environments. The research shows the general applicability of the oneM2M service platform for the proposed ASDP, but it also identifies shortcomings of the current oneM2M platform with respect to the capabilities needed for efficient communication and data exchange policies. It is pointed out that, for example, distributed traffic-efficiency or vehicle-maintenance functionalities are not efficiently handled by the standard, which may also have negative privacy impacts. Following this analysis, the research proposes novel enhancements to the oneM2M service platform, such as application-data-dependent criteria for data exchange and policy aggregation. The feasibility and advances of the newly proposed approach are evaluated by means of a proof-of-concept implementation and experiments with selected automotive scenarios. The results show the benefits of the proposed enhancements for a oneM2M-based ASDP, without neglecting to indicate their advantages for other domains of the oneM2M landscape where they could be applied as well.
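The proposed enhancement, application-data-dependent criteria for data exchange, can be pictured as a policy hook that inspects a message's payload before the platform forwards it. The sketch below is a minimal, hypothetical illustration in plain Java; the class names and the speed-based criterion are assumptions for illustration, not the oneM2M or ASDP API.

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of an application-data-dependent exchange policy:
// a message is forwarded to the backend only if its payload satisfies
// every criterion registered for its application.
public class ExchangePolicySketch {
    record VehicleMessage(String appId, double speedKmh, double fuelLevel) {}

    // Example criterion (illustrative threshold): traffic-efficiency data
    // is only worth forwarding while the vehicle is actually moving.
    static final Predicate<VehicleMessage> TRAFFIC_EFFICIENCY =
        msg -> msg.speedKmh() > 5.0;

    static boolean shouldForward(VehicleMessage msg,
                                 List<Predicate<VehicleMessage>> criteria) {
        // Policy aggregation: forward only if all registered criteria hold.
        return criteria.stream().allMatch(p -> p.test(msg));
    }

    public static void main(String[] args) {
        VehicleMessage parked = new VehicleMessage("traffic-app", 0.0, 0.6);
        System.out.println(shouldForward(parked, List.of(TRAFFIC_EFFICIENCY))); // false
    }
}
```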
54

CONTINUOUS SERVICE DISCOVERY IN IOT

FELIPE OLIVEIRA CARVALHO 28 July 2017
The popularization of the Internet of Things (IoT) has sparked a growing opportunity for the creation of applications in various areas by combining the use of sensors and/or actuators. In IoT environments, the role of elements called gateways is to provide an intermediate communication layer between IoT devices and cloud services. A crucial factor for the construction of large-scale applications is that IoT devices can be used in a transparent manner, in a service-oriented paradigm where details of communication and configuration are handled by the gateways. In the service model, applications discover the high-level interfaces of the devices and do not have to deal with the underlying details, which are handled by the gateways. In scenarios of high dynamism and mobility (with connections and disconnections of devices occurring all the time), this discovery and configuration must occur continuously. Traditional service discovery protocols, such as Universal Plug and Play (UPnP) or the Service Location Protocol (SLP), were not designed with the high dynamicity of IoT environments in mind. In this sense, we introduce Complex Event Processing (CEP), a technology for real-time processing of heterogeneous event streams that allows CQL (Continuous Query Language) queries to search for events of interest. In a model where events related to sensor discovery are sent to a CEP stream, expressive queries can be written so that an application continuously discovers services of interest. This work presents an extension of MHub/CDDL to support continuous service discovery in IoT using CEP. MHub/CDDL (Mobile Hub / Context Data Distribution Layer) is a middleware for service discovery and context quality management in IoT, developed in a partnership between the Laboratory for Advanced Collaboration (LAC) at PUC-Rio and the Laboratório de Sistemas Distribuídos Inteligentes (LSDi) at Universidade Federal do Maranhão (UFMA). The implementation is done on the Android (Java) platform, and a case study in the smart parking domain is conducted and implemented, elucidating the use of the continuous discovery mechanism.
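As a rough sketch of how CEP supports continuous discovery, the fragment below registers two Esper EPL statements: one that continuously reports announcements of a service of interest, and one that uses a temporal pattern to detect a device that has stopped sending heartbeats. The `ServiceAnnouncement` and `Heartbeat` event types and their attributes are assumptions for illustration, not the MHub/CDDL schema.

```java
import com.espertech.esper.client.*;

public class ContinuousDiscoverySketch {
    // Hypothetical gateway events; attribute names are illustrative.
    public static class ServiceAnnouncement {
        private final String deviceId, serviceType;
        public ServiceAnnouncement(String deviceId, String serviceType) {
            this.deviceId = deviceId; this.serviceType = serviceType;
        }
        public String getDeviceId() { return deviceId; }
        public String getServiceType() { return serviceType; }
    }
    public static class Heartbeat {
        private final String deviceId;
        public Heartbeat(String deviceId) { this.deviceId = deviceId; }
        public String getDeviceId() { return deviceId; }
    }

    public static void main(String[] args) {
        Configuration cfg = new Configuration();
        cfg.addEventType("ServiceAnnouncement", ServiceAnnouncement.class);
        cfg.addEventType("Heartbeat", Heartbeat.class);
        EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(cfg);

        // Continuously discover services of interest as they are announced.
        engine.getEPAdministrator().createEPL(
            "select deviceId from ServiceAnnouncement where serviceType = 'parking-spot-sensor'")
            .addListener((n, o) -> System.out.println("Discovered: " + n[0].get("deviceId")));

        // Treat an announced device as gone if no heartbeat arrives for 60 seconds.
        engine.getEPAdministrator().createEPL(
            "select a.deviceId as deviceId from pattern [ every a=ServiceAnnouncement " +
            "-> (timer:interval(60 sec) and not Heartbeat(deviceId = a.deviceId)) ]")
            .addListener((n, o) -> System.out.println("Lost: " + n[0].get("deviceId")));

        engine.getEPRuntime().sendEvent(new ServiceAnnouncement("gw-42", "parking-spot-sensor"));
    }
}
```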
55

Depersonalization Under Academic Stress: Frequency, Predictors, and Consequences

Schweden, Tabea L.K., Wolfradt, Uwe, Jahnke, Sara, Hoyer, Jürgen 26 May 2020
Background: Based on the assumption that depersonalization symptoms are relevant to the maintenance of test anxiety, we examined their frequency, psychological predictors, association with anxiety symptoms, and association with test performance. Sampling and Methods: In Study 1, 203 students rated their test anxiety severity and depersonalization in their last oral examination. In Study 2, among 67 students, we assessed test anxiety 1 week before an oral examination; depersonalization, safety behaviors, self-focused attention, and negative appraisals of depersonalization directly after the examination; and post-event processing 1 week later. Results: In Study 1, 47.3% reported at least one moderate depersonalization symptom. In Study 2, test anxiety and negative appraisals of depersonalization significantly predicted depersonalization. Depersonalization was linked to a higher intensity of safety behaviors and post-event processing, but not to self-focused attention. It was not related to performance. Conclusion: Results are limited by the non-random sampling and the small sample size of Study 2. However, by showing that depersonalization contributes to the processes that maintain test anxiety, the findings confirm that depersonalization – normally understood as an adaptive mechanism for coping with stressful events – can become maladaptive.
56

DISTRIBUTED CEP FOR CONTEXT-AWARE ADAPTIVE ACQUIREMENT AND PROCESSING OF INFORMATION

FERNANDO BENEDITO VERAS MAGALHAES 07 June 2021
The current dissemination of IoT increases the deployment of stream processing solutions for monitoring and controlling elements of the real world. One of those solutions is Complex Event Processing (CEP). Initially, a single computer or cluster would concentrate the entire CEP execution. However, centralized execution of CEP is not suitable for coping with the high volume, velocity, and volatility of IoT sensors' data streams. Instead, applications using CEP should deploy a distributed CEP Event Processing Network, preferably with CEP agents both in the cloud and on edge devices. Deciding how the processing is split among these tiers and their devices can be just as important. That said, being aware of each device's current context, for instance its location and available sensors, can help to collect and (partially) process the data on devices close to the data's production site. This work presents a context-aware distributed CEP platform called Global CEP Manager (GCM). GCM is a service of the ContextNet middleware that supports the context-based deployment and dynamic rearrangement of CEP queries to CEP engines executing in the cloud, on stationary edge devices, and on M-Hubs, ContextNet's mobile edge devices. GCM uses the ContextMatcher, which is also part of this work. ContextMatcher is a module for ContextNet applications that enables the delivery of messages to nodes whose context matches a specified set of contextual requirements.
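As an illustration of the context-matching idea, the sketch below checks a node's context attributes against a set of contextual requirements before a CEP query would be deployed to it. This is a minimal, hypothetical reading of ContextMatcher's role; the actual representation of context and the module's matching rules are not described in the abstract.

```java
import java.util.Map;

// Hypothetical sketch: a node's current context is a set of attribute/value
// pairs, and a CEP query is deployed to the node only if it satisfies every
// contextual requirement. Names are illustrative, not the ContextNet API.
public class ContextMatcherSketch {
    static boolean matches(Map<String, Object> nodeContext,
                           Map<String, Object> requirements) {
        return requirements.entrySet().stream()
            .allMatch(r -> r.getValue().equals(nodeContext.get(r.getKey())));
    }

    public static void main(String[] args) {
        Map<String, Object> node = Map.of("location", "building-A",
                                          "sensor", "temperature");
        Map<String, Object> req  = Map.of("location", "building-A");
        System.out.println(matches(node, req)); // true: deploy the query here
    }
}
```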
57

Semantically-enabled stream processing and complex event processing over RDF graph streams

Gillani, Syed 04 November 2016
French abstract not provided by the author. / There is a paradigm shift in the nature and processing means of today's data: data used to be mostly static, stored in large databases to be queried. Today, with the advent of new applications and means of collecting data, most applications on the Web and in enterprises produce data in a continuous manner under the form of streams. Thus, the users of these applications expect to process a large volume of data with fresh, low-latency results. This has led to the introduction of Data Stream Management Systems (DSMSs) and the Complex Event Processing (CEP) paradigm, each with distinctive aims: DSMSs are mostly employed to process traditional query operators (mostly stateless), while CEP systems focus on temporal pattern matching (stateful operators) to detect changes in the data that can be thought of as events. In the past decade or so, a number of scalable and performance-intensive DSMSs and CEP systems have been proposed. Most of them, however, are based on relational data models, which raises the question of support for heterogeneous data sources, i.e., the variety of the data. Work on RDF Stream Processing (RSP) systems partly addresses the challenge of variety by promoting the RDF data model. Nonetheless, challenges like volume and velocity are overlooked by existing approaches. These challenges require customised optimisations that treat RDF as a first-class citizen and scale the process of continuous graph pattern matching. To gain insights into these problems, this thesis focuses on developing scalable RDF graph stream processing and semantically-enabled CEP systems (i.e., Semantic Complex Event Processing, SCEP). In addition to our optimised algorithmic and data structure methodologies, we also contribute to the design of a new query language for SCEP. Our contributions in these two fields are as follows:
• RDF Graph Stream Processing. We first propose an RDF graph stream model, where each data item/event within a stream comprises an RDF graph (a set of RDF triples). Second, we implement customised indexing techniques and data structures to continuously process RDF graph streams in an incremental manner.
• Semantic Complex Event Processing. We extend the idea of RDF graph stream processing to enable SCEP over such RDF graph streams, i.e., temporal pattern matching. Our first contribution in this context is a new query language that encompasses the RDF graph stream model and employs a set of expressive temporal operators such as sequencing, Kleene-+, negation, optional, conjunction, disjunction, and event selection strategies. Based on this, we implement a scalable system that employs a non-deterministic finite automata model to evaluate these operators in an optimised manner.
We leverage techniques from diverse fields, such as relational query optimisation, incremental query processing, and sensor and social networks, in order to solve real-world problems. We have applied our proposed techniques to a wide range of real-world and synthetic datasets to extract knowledge from RDF structured data in motion. Our experimental evaluations confirm our theoretical insights and demonstrate the viability of our proposed methods.
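As a rough illustration of how an automaton evaluates the sequencing operator, the sketch below matches the temporal pattern SEQ(A, B) over a simple event list: each A event starts a partial run, and a later B event completes every pending run. The event representation and the run-consumption policy are simplifying assumptions, not the thesis's automaton model (which also covers Kleene-+, negation, and the other operators over RDF graphs).

```java
import java.util.ArrayList;
import java.util.List;

// Simplified NFA-style evaluation of the temporal pattern SEQ(A, B):
// report a match whenever a B event follows some earlier A event.
public class SeqPatternSketch {
    record Event(String type, long timestamp) {}

    public static void main(String[] args) {
        List<Event> stream = List.of(
            new Event("A", 1), new Event("C", 2), new Event("B", 3));

        List<Event> pendingA = new ArrayList<>(); // partial runs: A seen, waiting for B
        for (Event e : stream) {
            if (e.type().equals("A")) {
                pendingA.add(e);                  // start a new partial match
            } else if (e.type().equals("B")) {
                for (Event a : pendingA) {        // complete every pending run
                    System.out.println("SEQ(A,B) matched: " + a + " -> " + e);
                }
                pendingA.clear();                 // simplistic consumption policy
            }
        }
    }
}
```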
58

Obtaining sequential patterns in data streams meeting Big Data requirements

Carvalho, Danilo Codeco 06 June 2016
Financial support: Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). / The growing amount of data produced daily by both businesses and individuals on the web has increased the demand for analyzing and extracting knowledge from this data. While over the last two decades the solution was to store the data and run data mining algorithms over it, this has now become unviable even for supercomputers. In addition, the requirements of the Big Data age go far beyond the large amount of data to analyze: response-time requirements and data complexity carry more weight in many real-world domains. New models have been researched and developed, often proposing distributed computing or different ways to handle data stream mining. Current research shows that an alternative for data stream mining is to join a real-time event handling mechanism with a classic association rule or sequential pattern mining algorithm. This work presents a data stream mining approach that meets the Big Data response-time requirement by linking the Esper real-time event handling mechanism with the Incremental Miner of Stretchy Time Sequences (IncMSTS) algorithm. The results show that it is possible to bring a static data mining algorithm to the data stream environment and preserve the trends in the discovered patterns, even though it is not possible to continuously read all the data arriving on the stream.
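A minimal sketch of the coupling described here, assuming Esper's Java API: a length-batch window hands bounded batches of events to an incremental miner. The `SequenceMiner` interface and the `Purchase` event are stand-ins for illustration; IncMSTS's actual API is not given in the abstract.

```java
import com.espertech.esper.client.*;
import java.util.ArrayList;
import java.util.List;

public class StreamMiningBridgeSketch {
    // Stand-in for IncMSTS; the real algorithm's interface may differ.
    interface SequenceMiner { void update(List<String> batch); }

    public static class Purchase {
        private final String item;
        public Purchase(String item) { this.item = item; }
        public String getItem() { return item; }
    }

    public static void main(String[] args) {
        Configuration cfg = new Configuration();
        cfg.addEventType("Purchase", Purchase.class);
        EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(cfg);

        SequenceMiner miner = batch ->
            System.out.println("Mining incremental batch: " + batch);

        // Tumbling batches of 100 events keep the miner's input bounded even
        // when the full stream cannot be read continuously.
        EPStatement batches = engine.getEPAdministrator().createEPL(
            "select item from Purchase.win:length_batch(100)");
        batches.addListener((newEvents, oldEvents) -> {
            List<String> items = new ArrayList<>();
            for (EventBean e : newEvents) items.add((String) e.get("item"));
            miner.update(items);
        });

        // Simulated stream: the listener fires once per full batch.
        for (int i = 0; i < 200; i++)
            engine.getEPRuntime().sendEvent(new Purchase("item-" + (i % 5)));
    }
}
```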
59

A domain-specific language for abstracting complex event processing solutions

DINIZ, Herbertt Barros Mangueira 04 March 2016
The increasing concentration of population in large cities makes the scarcity of resources and the competition for physical space ever more evident. In this context arises the need for solutions aligned with the "Smart Cities" initiative, solutions that seek to centralize monitoring and control in order to support decision making. These ICT sources, however, form complex structures and generate a large volume of data, presenting enormous challenges and opportunities. One of the main technological tools used in this context is Complex Event Processing (CEP), which can be considered a good solution for dealing with the increasing availability of large volumes of data in real time. CEP engines capture events in a simplified way, using expression languages to define and execute processing rules. Despite the proven efficiency of these tools, the fact that rules are expressed at a low level restricts their use to expert users and hinders the creation of solutions. To reduce the complexity of CEP tools, some solutions have adopted a Model-Driven Development (MDD) approach in order to produce an abstraction layer that allows rules to be created without the user necessarily being a specialist in CEP languages. Many of these solutions, however, end up harder to handle than the conventional low-level language itself. This work aims to build a Graphical User Interface (GUI) for creating CEP rules using MDD, making development more intuitive through a model adapted to the needs of non-specialist users.
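To picture the abstraction layer such a GUI would sit on, here is a minimal sketch of a rule model, as a GUI might populate it, being translated into an EPL statement. The model's fields and the generated query shape are illustrative assumptions, not the dissertation's metamodel.

```java
// Hypothetical rule model an MDD layer might expose to a GUI; the fields and
// the generated EPL shape are illustrative, not the dissertation's metamodel.
public class RuleModelSketch {
    record RuleModel(String eventType, String attribute,
                     String operator, double threshold, int windowSeconds) {}

    // Translate the high-level model into a low-level EPL statement.
    static String toEpl(RuleModel m) {
        return String.format(
            "select * from %s.win:time(%d sec) where %s %s %s",
            m.eventType(), m.windowSeconds(),
            m.attribute(), m.operator(), m.threshold());
    }

    public static void main(String[] args) {
        RuleModel m = new RuleModel("TrafficReading", "vehicleCount", ">", 120.0, 60);
        System.out.println(toEpl(m));
        // select * from TrafficReading.win:time(60 sec) where vehicleCount > 120.0
    }
}
```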
60

A MOBILE AND ONLINE OUTLIER DETECTION OVER MULTIPLE DATA STREAMS: A COMPLEX EVENT PROCESSING APPROACH FOR DRIVING BEHAVIOR DETECTION

IGOR OLIVEIRA VASCONCELOS 24 July 2017
Driving is a daily task that allows individuals to travel faster and more comfortably; however, more than half of fatal crashes are related to reckless driving. Reckless maneuvers can be detected accurately by analyzing data related to driver-vehicle interactions, for instance abrupt turns, acceleration, and deceleration. Although algorithms for online anomaly detection exist, they are usually designed to run on computers with high computational power, and they typically target scale through parallel computing, grid computing, or cloud computing. This thesis presents an online anomaly detection approach based on Complex Event Processing to enable driving behavior classification. In addition, we investigate whether mobile devices with limited computational power, such as smartphones, can be used for online detection of driving behavior. To do so, we first model and evaluate three online anomaly detection algorithms in the data stream processing paradigm, which receive data from the smartphone and the in-vehicle embedded sensors as input. The advantages that stream processing provides lie in the fact that it reduces the amount of data transmitted from the mobile device to servers/the cloud, reduces the energy/battery usage caused by transmitting sensor data, and allows operation even when the mobile device is disconnected. To classify the drivers, a statistical mechanism used in document mining that evaluates the importance of a word in a collection of documents, called inverse document frequency, has been adapted to identify the importance of an anomaly in a data stream and then quantitatively evaluate how cautious or reckless drivers' maneuvers are. Finally, an evaluation of the approach (using the algorithm that achieved the best result in the first step) was carried out through a case study of the driving behavior of 25 drivers in a real-world scenario. The results show a classification accuracy of 84 percent and an average processing time of 100 milliseconds.
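For reference, the inverse document frequency statistic that the thesis adapts is shown below, together with one plausible stream analogue. The adaptation shown (stream windows as "documents", anomaly types as "terms") is an illustrative reading of the abstract, not the thesis's exact formulation.

```latex
% Standard inverse document frequency: N documents, df(t) of them contain term t.
\mathrm{idf}(t) = \log\frac{N}{\mathrm{df}(t)}

% Illustrative stream analogue (an assumption of this sketch): over N windows of
% a driver's data stream, an anomaly type a observed in n_a windows gets weight
\mathrm{weight}(a) = \log\frac{N}{n_a}
% so rarer (more telling) anomalies weigh more in the driver's recklessness score.
```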
