41 |
[en] DSCEP: AN INFRASTRUCTURE FOR DECENTRALIZED SEMANTIC COMPLEX EVENT PROCESSING / [pt] DSCEP: UMA INFRAESTRUTURA DISTRIBUÍDA PARA PROCESSAMENTO DE EVENTOS COMPLEXOS SEMÂNTICOS
VITOR PINHEIRO DE ALMEIDA, 28 October 2021 (has links)
[en] Many applications require the processing of event streams from different
sources in combination with large amounts of background knowledge. Semantic
CEP is a paradigm explicitly designed for that. It extends complex event
processing (CEP) with RDF support and uses a network of operators to process
RDF streams combined with RDF knowledge bases. Another popular class of
systems designed for a similar purpose is the RDF stream processors (RSPs).
These are systems that extend SPARQL (the RDF query language) with stream
processing capabilities. Semantic CEP and RSPs have similar purposes but
focus on different things. The former focuses on scalability and distributed
processing, while the latter tends to focus on the intricacies of RDF stream
processing per se. In this thesis, we propose the use of RSP engines as building
blocks for Semantic CEP. We present an infrastructure, called DSCEP, that
allows the encapsulation of existing RSP engines into CEP-like operators so
that these can be seamlessly interconnected in a distributed, decentralized
operator network. DSCEP handles the hurdles of such interconnection, such
as reliable communication, stream aggregation and slicing, event identification
and time-stamping, etc., allowing users to concentrate on the queries. We also
discuss how DSCEP can be used to speed up monolithic SPARQL queries: by
splitting them into parallel subqueries that can be executed by the operator
network or even by splitting the input stream into multiple operators with the
same query running in parallel. Additionally, we evaluate the impact of the
knowledge base on the processing time of SPARQL continuous queries.
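The input-stream splitting described in this abstract can be sketched roughly as follows; this is a toy illustration with invented names (round-robin partitioning and a simple filter standing in for a SPARQL continuous query), not DSCEP's actual API:

```python
from itertools import cycle

def split_stream(events, n_partitions):
    """Round-robin split of an input event stream across n parallel operators."""
    partitions = [[] for _ in range(n_partitions)]
    for slot, event in zip(cycle(range(n_partitions)), events):
        partitions[slot].append(event)
    return partitions

def run_operator(query, partition):
    """Each operator runs the same continuous query over its slice of the stream."""
    return [query(e) for e in partition if query(e) is not None]

def merge_by_timestamp(results):
    """Aggregate the per-operator outputs back into one stream, ordered by timestamp."""
    merged = [r for part in results for r in part]
    return sorted(merged, key=lambda e: e["ts"])

# Toy stand-in for a continuous query: keep temperature readings above a threshold.
query = lambda e: e if e["kind"] == "temp" and e["value"] > 30 else None
events = [{"ts": t, "kind": "temp", "value": 25 + t} for t in range(10)]
parts = split_stream(events, 3)
out = merge_by_timestamp(run_operator(query, p) for p in parts)
```

Each partition can be processed by an independent operator; the merge step restores a single, timestamp-ordered output stream.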
|
42 |
General dynamic Yannakakis: Conjunctive queries with theta joins under updates
Idris, Muhammad; Ugarte, Martín; Vansummeren, Stijn; Voigt, Hannes; Lehner, Wolfgang. 17 July 2023 (has links)
The ability to efficiently analyze changing data is a key requirement of many real-time analytics applications. In prior work, we have proposed general dynamic Yannakakis (GDYN), a general framework for dynamically processing acyclic conjunctive queries with θ-joins in the presence of data updates. Whereas traditional approaches face a trade-off between materialization of subresults (to avoid inefficient recomputation) and recomputation of subresults (to avoid the potentially large space overhead of materialization), GDYN is able to avoid this trade-off. It intelligently maintains a succinct data structure that supports efficient maintenance under updates and from which the full query result can quickly be enumerated. In this paper, we consolidate and extend the development of GDYN. First, we give a full formal proof of GDYN's correctness and complexity. Second, we present a novel algorithm for computing GDYN query plans. Finally, we instantiate GDYN to the case where all θ-joins are inequalities and present an extended experimental comparison against state-of-the-art engines. Our approach consistently outperforms the competitor systems, with improvements of multiple orders of magnitude in both time and memory consumption.
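The incremental-evaluation idea that GDYN builds on can be illustrated for the simplest case, a single equi-join maintained under insertions: index both sides and emit only the delta produced by each update instead of recomputing the join. This toy sketch uses invented names and omits GDYN's join-tree machinery for θ-joins:

```python
from collections import defaultdict

class IncrementalEquiJoin:
    """Maintain R join S on a key under insertions: keep one hash index per
    side and, for each update, emit only the newly produced result tuples."""
    def __init__(self):
        self.r_index = defaultdict(list)
        self.s_index = defaultdict(list)
        self.result_count = 0

    def insert_r(self, key, payload):
        self.r_index[key].append(payload)
        delta = [(payload, s) for s in self.s_index[key]]  # new matches only
        self.result_count += len(delta)
        return delta

    def insert_s(self, key, payload):
        self.s_index[key].append(payload)
        delta = [(r, payload) for r in self.r_index[key]]  # new matches only
        self.result_count += len(delta)
        return delta

j = IncrementalEquiJoin()
j.insert_r(1, "r1")
j.insert_s(1, "s1")          # joins with r1
j.insert_s(2, "s2")          # no partner yet
delta = j.insert_r(2, "r2")  # joins with s2
```

Each insertion costs work proportional to its number of new matches, rather than the size of the full join result.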
|
43 |
[en] AN ENERGY-AWARE IOT GATEWAY, WITH CONTINUOUS PROCESSING OF SENSOR DATA / [pt] UM ENERGY-AWARE IOT GATEWAY, COM PROCESSAMENTO CONTÍNUO DE DADOS DE SENSOR
LUIS EDUARDO TALAVERA RIOS, 30 August 2016 (has links)
[en] Few studies have investigated and proposed a middleware solution for
the Internet of Mobile Things (IoMT), where the smart things (Smart Objects)
can be moved, or else can move autonomously, but remain accessible
from any other computer over the Internet. In this context, there is a need
for energy-efficient gateways to provide connectivity to a great variety of
Smart Objects. Proposed solutions have shown that mobile devices (smartphones
and tablets) are a good option to become universal intermediaries
by providing a connection point to nearby Smart Objects with short-range
communication technologies. However, they only focus on the transmission
of raw sensor data (obtained from connected Smart Objects) to the cloud
where processing (e.g. aggregation) is performed. Internet Communication
is a strong battery-draining activity for mobile devices; moreover, bandwidth
may not be sufficient when large amounts of information are being
received from the Smart Objects. Hence, we argue that some of the processing
should be pushed as close as possible to the sources. In this regard,
Complex Event Processing (CEP) is often used for real-time processing of
heterogeneous data and could be a key technology to be included in the
gateways. It provides a way to describe the processing as expressive queries
that can be dynamically deployed or removed on-the-fly, making it suitable
for applications that have to deal with dynamic adaptation of local
processing. This dissertation describes an extension of a mobile middleware
with the inclusion of continuous processing of sensor data, its design and
prototype implementation for Android. Experiments have shown that our
implementation delivers a good reduction in energy and bandwidth consumption.
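The idea of pushing processing toward the sources can be sketched as a gateway-side tumbling-window aggregation: uploading one summary per window instead of every raw reading cuts both bandwidth and radio-induced battery drain. A minimal sketch with invented names, not the middleware's actual API:

```python
def gateway_aggregate(readings, window=5):
    """Tumbling-window average computed at the gateway: instead of uploading
    every raw sensor reading, upload one summary per complete window,
    reducing uploads by a factor of `window`."""
    uploads = []
    for start in range(0, len(readings) - len(readings) % window, window):
        win = readings[start:start + window]
        uploads.append(sum(win) / window)
    return uploads

raw = [20, 22, 21, 23, 24, 30, 31, 29, 30, 30]   # raw sensor stream
summaries = gateway_aggregate(raw, window=5)      # 2 uploads instead of 10
```

In a CEP setting the window and aggregation function would be expressed as a continuous query that can be deployed or removed at runtime.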
|
44 |
Parallel Execution of Order Dependent Grouping Functions
Peters, Mathias. 29 October 2024 (has links)
Advances in information technologies and decreasing costs for storage and compute capacity have led to exponential growth of the data available electronically worldwide. Systems capable of processing these large amounts of data with the goal of analyzing and extracting information are essential for both research and business. Analytical data processing systems employ various optimizations to execute queries efficiently.
Partial Aggregation (PA) using GroupBy and decomposable aggregation functions is a common optimization approach in analytical query processing. Analytical systems execute PA in two stages: During the first stage, they create partial groups to compute partial aggregates. During the second stage, the partial aggregates are grouped and aggregated again to produce the final result. The main benefits of PA are an increased potential of parallel execution during the first stage and a reduction of intermediate result sizes by aggregating over the partial groups. So far, existing approaches to PA only use an order-agnostic grouping function on sets to create groups.
There are grouping functions that depend on ordered input and on information about previously processed input items to associate a given input item with its group. Staged execution of order-dependent grouping functions is more difficult than for order-agnostic grouping functions: systems must compute correct partial states during the first stage and combine them during the final stage. Despite their high practical relevance, approaches for efficient parallel execution of such functions exist only in a limited form.
In this thesis, we present a novel approach for parallelizing aggregation for three order-dependent grouping functions: Sessionization, Regular Expression Matching (REM), and Complex Event Recognition (CER). Our approach of computing the three grouping functions in stages combined with decomposable aggregation functions allows for efficient parallel execution in state-of-the-art shared-nothing compute environments.
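The two-stage execution of an order-dependent grouping function can be sketched for the sessionization case: stage one sessionizes each ordered chunk independently, and the combine stage fuses sessions that straddle a chunk boundary. This is a minimal illustration under our own assumptions (gap-based sessions over plain timestamps), not the thesis's actual operators:

```python
def sessions(chunk, gap):
    """Stage 1: gap-based sessionization of one ordered chunk of timestamps.
    Returns a list of (start, end) sessions."""
    out = []
    for t in chunk:
        if out and t - out[-1][1] <= gap:
            out[-1] = (out[-1][0], t)   # extend the current session
        else:
            out.append((t, t))          # open a new session
    return out

def merge(left, right, gap):
    """Stage 2: combine per-chunk partial sessions; fuse the boundary pair
    if the gap condition also holds across the chunk split."""
    if left and right and right[0][0] - left[-1][1] <= gap:
        fused = (left[-1][0], right[0][1])
        return left[:-1] + [fused] + right[1:]
    return left + right

ts = [1, 2, 3, 10, 11, 12, 30]
a, b = sessions(ts[:4], gap=2), sessions(ts[4:], gap=2)
total = merge(a, b, gap=2)   # same result as sessionizing ts sequentially
```

The two chunks can be processed in parallel on different workers; only the boundary sessions need to be reconciled in the combine stage.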
|
45 |
Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates
Idris, Muhammad. 05 March 2019 (has links) (PDF)
Responsive analytics are rapidly taking over the traditional data analytics dominated by the post-fact approaches in traditional data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system to react to updates occurring at high speed and detect patterns, trends, and anomalies. These kinds of solutions find applications in Financial Systems, Industrial Control Systems, Business Intelligence and on-line Machine Learning among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results or their basic elements in a query language, where the main task then is to maintain query results under frequent updates efficiently. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis where the data is refreshed periodically and in batches, and stream processing solutions process streams of data from transient sources as flows of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental evaluation of queries, which is based on the relational incremental view maintenance model and mostly focuses on queries that feature equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they generally process queries that mostly feature comparisons of temporal attributes (e.g.
timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded sizes. Temporal comparisons constitute inequality constraints while non-temporal comparisons can be either equality or inequality constraints. Hence these systems mostly process inequality joins. As a starting point for our research, we postulate the thesis that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just like in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. To this end, the existing approaches to dynamic evaluation in both kinds of systems present a trade-off between memory footprint and update processing cost. More specifically, systems that avoid materialization of query (sub)results incur high update latency, and systems that materialize (sub)results incur a high memory footprint. We are interested in investigating the possibility of building a model that can address this trade-off. In particular, we overcome this trade-off by investigating the possibility of a practical dynamic evaluation algorithm for queries that appear in both kinds of systems and present a main-memory data representation that allows query (sub)results to be enumerated without materialization and can be maintained efficiently under updates. We call this representation the Dynamic Constant Delay Linear Representation (DCLR). We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a sub-class of queries); 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); 3) they take space linear in the size of the database; 4) they can be maintained efficiently under updates.
We first study DCLRs with the above-described properties for the class of acyclic conjunctive queries featuring equi-joins with projections and present a dynamic evaluation algorithm called the Dynamic Yannakakis (DYN) algorithm. Then, we present the generalization of the DYN algorithm to the class of acyclic queries featuring multi-way theta-joins with projections and call it Generalized DYN (GDYN). We devise DCLRs with the above properties for acyclic conjunctive queries, and the working of DYN and GDYN over DCLRs is based on a particular variant of join trees, called Generalized Join Trees (GJTs), that guarantees the above-described properties of DCLRs. We define GJTs and present algorithms to test a conjunctive query featuring theta-joins for acyclicity and to generate GJTs for such queries. We extend the classical GYO algorithm from testing a conjunctive query with equalities for acyclicity to testing a conjunctive query featuring multi-way theta-joins with projections for acyclicity. We further extend the GYO algorithm to generate GJTs for queries that are acyclic. GDYN is hence a unified framework based on DCLRs that enables processing of queries that appear in streaming systems as well as in BI systems in a unified main-memory model and addresses the space-time trade-off. We instantiate GDYN to the particular case where all theta-joins involve only equalities and inequalities and call this instantiation IEDYN. We implement DYN and IEDYN as query compilers that generate executable programs in the Scala programming language and provide all the necessary data structures and their maintenance and enumeration methods in a continuous stream processing model. We evaluate DYN and IEDYN against state-of-the-art BI and streaming systems on both industrial and synthetically generated benchmarks. We show that DYN and IEDYN outperform the existing systems by over an order of magnitude in both memory footprint and update processing time.
/ Doctorat en Sciences de l'ingénieur et technologie / info:eu-repo/semantics/nonPublished
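The space-time idea behind DCLRs can be illustrated, for a single equi-join only, by keeping per-side indexes of linear size and enumerating join results on demand instead of materializing them. The class and method names below are invented for illustration; actual DCLRs additionally support theta-joins, projections, and logarithmic tuple lookup via join trees:

```python
from collections import defaultdict

class JoinView:
    """Keep only per-side key indexes (space linear in the database); never
    materialize the join result. Updates cost O(1); results are enumerated
    on demand from the indexes."""
    def __init__(self):
        self.r = defaultdict(list)
        self.s = defaultdict(list)

    def insert(self, side, key, tup):
        (self.r if side == "R" else self.s)[key].append(tup)

    def enumerate(self):
        # Walk only the keys live on both sides; each step emits one tuple.
        for key in self.r.keys() & self.s.keys():
            for x in self.r[key]:
                for y in self.s[key]:
                    yield (x, y)

v = JoinView()
v.insert("R", 1, "a"); v.insert("R", 2, "b")
v.insert("S", 1, "x"); v.insert("S", 1, "y")
results = sorted(v.enumerate())
```

Note that "b" never appears in an output tuple, yet no space was spent storing join results: the output is produced purely from the indexes at enumeration time.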
|
46 |
Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates
Idris, Muhammad. 10 April 2019 (has links)
Responsive analytics are rapidly taking over the traditional data analytics dominated by the post-fact approaches in traditional data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system to react to updates occurring at high speed and detect patterns, trends, and anomalies. These kinds of solutions find applications in Financial Systems, Industrial Control Systems, Business Intelligence and on-line Machine Learning among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results or their basic elements in a query language, where the main task then is to maintain these results under frequent updates efficiently. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis where the data is refreshed periodically and in batches, and stream processing solutions process streams of data from transient sources as a flow (or set of flows) of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems.
In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental evaluation of queries, which is based on the relational incremental view maintenance model and mostly focuses on queries that feature equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they generally process queries that mostly feature comparisons of temporal attributes (e.g., timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded sizes. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints; hence these systems mostly process inequality joins. As a starting point, we postulate the thesis that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just like in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. To this end, the existing approaches to dynamic evaluation in both kinds of systems present a trade-off between memory footprint and update processing cost. More specifically, systems that avoid materialization of query (sub)results incur high update latency, and systems that materialize (sub)results incur a high memory footprint. We are interested in investigating the possibility of building a model that can address this trade-off. In particular, we overcome this trade-off by investigating the possibility of a practical dynamic evaluation algorithm for queries that appear in both kinds of systems, and we present a main-memory data representation that allows query (sub)results to be enumerated without materialization and can be maintained efficiently under updates. We call this representation the Dynamic Constant Delay Linear Representation (DCLR).
We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a sub-class of queries); 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); 3) they take space linear in the size of the database; 4) they can be maintained efficiently under updates. We first study the DCLRs with the above-described properties for the class of acyclic conjunctive queries featuring equi-joins with projections and present the dynamic evaluation algorithm. Then, we present the generalization of this algorithm to the class of acyclic queries featuring multi-way theta-joins with projections. We devise DCLRs with the above properties for acyclic conjunctive queries, and the working of the dynamic algorithms over DCLRs is based on a particular variant of join trees, called Generalized Join Trees (GJTs), that guarantees the above-described properties of DCLRs. We define GJTs and present algorithms to test a conjunctive query featuring theta-joins for acyclicity and to generate GJTs for such queries. To do this, we extend the classical GYO algorithm from testing a conjunctive query with equalities for acyclicity to testing a conjunctive query featuring multi-way theta-joins with projections for acyclicity. We further extend the GYO algorithm to generate GJTs for queries that are acyclic. We implemented our algorithms in a query compiler that takes SQL queries as input and generates executable Scala code: a trigger program to process queries and maintain results under updates. We tested our approach against state-of-the-art main-memory BI and CEP systems. Our evaluation results have shown that our DCLR-based approach is over an order of magnitude more efficient than existing systems in both memory footprint and update processing cost.
We have also shown that enumeration of query results without materialization in DCLRs is comparable to (and in some cases more efficient than) enumerating from materialized query results.
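As background for the GYO extension described above, here is a minimal sketch of the classical GYO acyclicity test for conjunctive queries with equality joins (the implementation details are ours). It repeatedly removes attributes unique to one hyperedge and hyperedges contained in another; the query is acyclic iff the hypergraph reduces to nothing:

```python
from collections import Counter

def is_acyclic(hyperedges):
    """Classical GYO reduction for the equality case: each hyperedge is the
    attribute set of one atom. Repeatedly (1) drop attributes occurring in
    exactly one hyperedge, (2) drop empty hyperedges and hyperedges contained
    in another. Acyclic iff everything reduces away."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        counts = Counter(a for e in edges for a in e)
        for e in edges:                      # rule 1: isolated attributes
            unique = {a for a in e if counts[a] == 1}
            if unique:
                e -= unique
                changed = True
        kept = []
        for i, e in enumerate(edges):        # rule 2: subsumed or empty edges
            if not e or any(i != j and e <= f for j, f in enumerate(edges)):
                changed = True
            else:
                kept.append(e)
        edges = kept
    return not edges

# Chain R(a,b), S(b,c), T(c,d) is acyclic; triangle R(a,b), S(b,c), T(c,a) is not.
acyclic = is_acyclic([{"a", "b"}, {"b", "c"}, {"c", "d"}])
cyclic = is_acyclic([{"a", "b"}, {"b", "c"}, {"c", "a"}])
```

The thesis's extension additionally handles multi-way theta-joins and emits generalized join trees rather than just a yes/no answer.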
|