151

Designing Random Sample Synopses with Outliers

Lehner, Wolfgang, Rösch, Philipp, Gemulla, Rainer 12 August 2022 (has links)
Random sampling is one of the most widely used means to build synopses of large datasets because random samples can be used for a wide range of analytical tasks. Unfortunately, the quality of the estimates derived from a sample is negatively affected by the presence of 'outliers' in the data. In this paper, we show how to circumvent this shortcoming by constructing outlier-aware sample synopses. Our approach extends the well-known outlier indexing scheme to multiple aggregation columns.
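As a rough illustration of the general idea behind outlier-aware sample synopses (a hedged sketch, not the paper's algorithm): keep extreme values of the aggregation column exactly, sample the remaining values uniformly, and combine the scaled sample estimate with the exact outlier contribution.

```python
import random
import statistics

def outlier_aware_sum_estimate(values, sample_frac=0.1, z_cutoff=3.0, seed=42):
    """Estimate sum(values) from a small synopsis: exact outliers + uniform sample."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values) or 1.0
    # Outliers are kept exactly; the remaining values are sampled uniformly.
    outliers = [v for v in values if abs(v - mean) > z_cutoff * stdev]
    regular = [v for v in values if abs(v - mean) <= z_cutoff * stdev]
    rng = random.Random(seed)
    k = max(1, int(sample_frac * len(regular)))
    sample = rng.sample(regular, k)
    # Scale the sample estimate to the full set; add the exact outlier contribution.
    return sum(outliers) + sum(sample) * len(regular) / k

data = [10, 12, 9, 11, 13, 10, 9, 12, 11, 10, 500, 480]  # two heavy outliers
print(outlier_aware_sum_estimate(data, sample_frac=0.5))
```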
152

A Sample Advisor for Approximate Query Processing

Rösch, Philipp, Lehner, Wolfgang 25 January 2023 (has links)
The rapid growth of current data warehouse systems makes random sampling a crucial component of modern data management systems. Although there is a large body of work on database sampling, the problem of automatic sample selection has remained (almost) unaddressed. In this paper, we tackle the problem with a sample advisor. We propose a cost model to evaluate a sample for a given query. Based on this, our sample advisor determines the optimal set of samples for a given set of queries specified by an expert. We further propose an extension that utilizes recorded workload information; in this case, the sample advisor takes the set of queries and a given memory bound into account when computing the sample advice. Additionally, we consider merging samples in the case of overlapping sample advice and present both an exact and a heuristic solution. In our evaluation, we analyze the properties of the cost model and compare the proposed algorithms. We further demonstrate the effectiveness and efficiency of the heuristic solutions with a variety of experiments.
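A hedged sketch of how a cost-based sample advisor might pick samples under a memory bound (a plain greedy benefit-per-byte heuristic with illustrative names, not the paper's exact cost model or algorithms):

```python
def greedy_sample_advice(candidates, memory_bound):
    """Pick candidate samples maximizing total benefit within a memory budget.

    `candidates` is a list of dicts like
    {"name": ..., "size": <memory footprint>, "benefit": <estimated cost saving>}.
    This is a classic greedy knapsack heuristic, used here only for illustration.
    """
    chosen, used = [], 0
    # Prefer samples with the best benefit per unit of memory.
    for cand in sorted(candidates, key=lambda c: c["benefit"] / c["size"], reverse=True):
        if used + cand["size"] <= memory_bound:
            chosen.append(cand["name"])
            used += cand["size"]
    return chosen

candidates = [
    {"name": "sample(orders)",    "size": 40, "benefit": 120},
    {"name": "sample(lineitem)",  "size": 70, "benefit": 150},
    {"name": "sample(customers)", "size": 20, "benefit": 45},
]
print(greedy_sample_advice(candidates, memory_bound=100))
```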
153

General dynamic Yannakakis: Conjunctive queries with theta joins under updates

Idris, Muhammad, Ugarte, Martín, Vansummeren, Stijn, Voigt, Hannes, Lehner, Wolfgang 17 July 2023 (has links)
The ability to efficiently analyze changing data is a key requirement of many real-time analytics applications. In prior work, we proposed general dynamic Yannakakis (GDYN), a general framework for dynamically processing acyclic conjunctive queries with θ-joins in the presence of data updates. Whereas traditional approaches face a trade-off between materialization of subresults (to avoid inefficient recomputation) and recomputation of subresults (to avoid the potentially large space overhead of materialization), GDYN avoids this trade-off: it intelligently maintains a succinct data structure that supports efficient maintenance under updates and from which the full query result can quickly be enumerated. In this paper, we consolidate and extend the development of GDYN. First, we give a full formal proof of GDYN's correctness and complexity. Second, we present a novel algorithm for computing GDYN query plans. Finally, we instantiate GDYN to the case where all θ-joins are inequalities and present an extended experimental comparison against state-of-the-art engines. Our approach consistently outperforms the competitor systems, with improvements of multiple orders of magnitude in both time and memory consumption.
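GDYN's succinct representation is beyond an abstract-sized sketch, but the trade-off it targets can be illustrated with ordinary delta maintenance of a single equi-join under inserts (a hedged toy in Python; unlike GDYN, this toy materializes the join result, which is precisely the overhead GDYN's enumeration-friendly structure avoids):

```python
from collections import defaultdict

class IncrementalEquiJoin:
    """Maintain R(a, x) join S(a, y) on attribute a under single-tuple inserts."""

    def __init__(self):
        self.r_index = defaultdict(list)  # a -> list of x values from R
        self.s_index = defaultdict(list)  # a -> list of y values from S
        self.result = []                  # materialized join tuples (a, x, y)

    def insert_r(self, a, x):
        # Only the delta joins against the S index; nothing is recomputed from scratch.
        self.r_index[a].append(x)
        self.result.extend((a, x, y) for y in self.s_index[a])

    def insert_s(self, a, y):
        self.s_index[a].append(y)
        self.result.extend((a, x, y) for x in self.r_index[a])

j = IncrementalEquiJoin()
j.insert_r(1, "x1")
j.insert_s(1, "y1")
j.insert_s(1, "y2")
print(j.result)  # [(1, 'x1', 'y1'), (1, 'x1', 'y2')]
```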
154

Model-based Integration of Past & Future in TimeTravel

Khalefa, Mohamed E., Fischer, Ulrike, Pedersen, Torben Bach, Lehner, Wolfgang 10 January 2023 (has links)
We demonstrate TimeTravel, an efficient DBMS for seamless, integrated querying of past and (forecasted) future values of time series, allowing the user to view past and future values as one joint time series. This functionality is important for advanced application domains like energy. The main idea is to compactly represent time series as models. By using models, the TimeTravel system answers queries approximately on past and future data with error guarantees (absolute error and confidence) one order of magnitude faster than when accessing the time series directly. In addition, it efficiently supports exact historical queries by accessing only the relevant portions of the time series. This is unlike existing approaches, which access the entire time series to answer the query exactly. To realize this system, we propose a novel hierarchical model index structure. As real-world time series usually exhibit seasonal behavior, models in this index incorporate seasonality. To construct a hierarchical model index, the user specifies the seasonality period, error guarantee levels, and a statistical forecast method. As time proceeds, the system incrementally updates the index and utilizes it to answer approximate and exact queries. TimeTravel is implemented in PostgreSQL, thus achieving complete user transparency at the query level. In the demo, we show how easily a hierarchical model index can be built for a real-world time series and the effect of varying the error guarantees on the speed-up of approximate and exact queries.
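A hedged sketch of the model-based idea (illustrative only, far simpler than TimeTravel's hierarchical model index): represent a seasonal series by per-slot means and answer an aggregate approximately from the model alone, for past and forecast horizons alike.

```python
import statistics

def fit_seasonal_means(series, period):
    """Represent a time series compactly by its per-slot seasonal means."""
    slots = [[] for _ in range(period)]
    for t, value in enumerate(series):
        slots[t % period].append(value)
    return [statistics.fmean(s) for s in slots]

def approx_range_sum(model, period, start, end):
    """Approximate sum(series[start:end]) using only the seasonal model;
    positions beyond the observed history act as a simple forecast."""
    return sum(model[t % period] for t in range(start, end))

series = [10, 20, 30, 11, 19, 31, 9, 21, 29]   # rough period of 3
model = fit_seasonal_means(series, period=3)
print(approx_range_sum(model, 3, start=0, end=9))    # approximate sum of history
print(approx_range_sum(model, 3, start=9, end=12))   # "forecast" of the next cycle
```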
155

Efficient techniques for large-scale Web data management

Camacho Rodriguez, Jesus 25 September 2014 (has links)
The recent development of commercial cloud computing environments has strongly impacted research and development in distributed software platforms. Cloud providers offer a distributed, shared-nothing infrastructure that may be used for data storage and processing. In parallel with the development of cloud platforms, programming models that seamlessly parallelize the execution of data-intensive tasks over large clusters of commodity machines have received significant attention, starting with the by-now well-known MapReduce model and continuing with other novel and more expressive frameworks. As these models are increasingly used to express analytical-style data processing tasks, the need arises for higher-level languages that ease the burden of writing complex queries for these systems. This thesis investigates the efficient management of Web data on large-scale infrastructures. In particular, we study the performance and cost of exploiting cloud services to build Web data warehouses, and the parallelization and optimization of query languages tailored towards querying Web data declaratively. First, we present AMADA, an architecture for warehousing large-scale Web data in commercial cloud platforms. AMADA operates in a Software as a Service (SaaS) approach, allowing users to upload, store, and query large volumes of Web data. Since cloud users bear monetary costs directly connected to their consumption of resources, our focus is not only on query performance from an execution-time perspective, but also on the monetary costs associated with this processing. In particular, we study the applicability of several content indexing strategies, and show that they lead not only to reduced query evaluation time, but also, importantly, to reduced monetary costs associated with the exploitation of the cloud-based warehouse. Second, we consider the efficient parallelization of the execution of complex queries over XML documents, implemented within our system PAXQuery. We provide novel algorithms showing how to translate such queries into plans expressed in the PArallelization ConTracts (PACT) programming model. These plans are then optimized and executed in parallel by the Stratosphere system. We demonstrate the efficiency and scalability of our approach through experiments on hundreds of GB of XML data. Finally, we present a novel approach for identifying and reusing common subexpressions occurring in Pig Latin scripts. In particular, we lay the foundation of our reuse-based algorithms by formalizing the semantics of the Pig Latin query language with an extended nested relational algebra for bags. Our algorithm, named PigReuse, operates on the algebraic representations of Pig Latin scripts, identifies subexpression merging opportunities, selects the best ones to execute based on a cost function, and merges other equivalent expressions to share their results. We bring several extensions to the algorithm to improve its performance. Our experimental results demonstrate the efficiency and effectiveness of our reuse-based algorithms and optimization strategies.
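The PigReuse part hinges on spotting common subexpressions across the algebraic representations of scripts; a hedged Python sketch of that detection step (canonical subtree keys over toy expression trees, illustrative only and not PigReuse's actual algebra) follows.

```python
from collections import defaultdict

def subtree_key(node):
    """Canonical key for an expression tree given as (operator, child, child, ...)
    or a leaf string such as a relation name."""
    if isinstance(node, str):
        return node
    return "(" + node[0] + " " + " ".join(subtree_key(c) for c in node[1:]) + ")"

def common_subexpressions(scripts):
    """Return subtree keys occurring in more than one script: merge candidates."""
    seen = defaultdict(set)
    def walk(node, script_id):
        seen[subtree_key(node)].add(script_id)
        if not isinstance(node, str):
            for child in node[1:]:
                walk(child, script_id)
    for i, root in enumerate(scripts):
        walk(root, i)
    return [k for k, owners in seen.items() if len(owners) > 1]

script_a = ("join", ("filter", "clicks"), "users")
script_b = ("group", ("join", ("filter", "clicks"), "users"))
print(common_subexpressions([script_a, script_b]))
```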
156

Real-time Business Intelligence through Compact and Efficient Query Processing Under Updates

Idris, Muhammad 05 March 2019 (has links) (PDF)
Responsive analytics is rapidly taking over from the post-fact approaches that dominate traditional data warehousing. Recent advancements in analytics demand placing analytical engines at the forefront of the system to react to updates occurring at high speed and to detect patterns, trends, and anomalies. Such solutions find applications in financial systems, industrial control systems, business intelligence, and online machine learning, among others. These applications are usually associated with Big Data and require the ability to react to constantly changing data in order to obtain timely insights and take proactive measures. Generally, these systems specify the analytical results or their basic elements in a query language, where the main task is then to efficiently maintain query results under frequent updates. The task of reacting to updates and analyzing changing data has been addressed in two ways in the literature: traditional business intelligence (BI) solutions focus on historical data analysis, where the data is refreshed periodically and in batches, while stream processing solutions process streams of data from transient sources as flows of data items. Both kinds of systems share the niche of reacting to updates (known as dynamic evaluation); however, they differ in architecture, query languages, and processing mechanisms. In this thesis, we investigate the possibility of a reactive and unified framework to model queries that appear in both kinds of systems. In traditional BI solutions, evaluating queries under updates has been studied under the umbrella of incremental query evaluation, which is based on the relational incremental view maintenance model and mostly focuses on queries that feature equi-joins. Streaming systems, in contrast, generally follow automaton-based models to evaluate queries under updates, and they generally process queries that mostly feature comparisons of temporal attributes (e.g., timestamp attributes) along with comparisons of non-temporal attributes over streams of bounded size. Temporal comparisons constitute inequality constraints, while non-temporal comparisons can be either equality or inequality constraints; hence these systems mostly process inequality joins. As a starting point for our research, we postulate the thesis that queries in streaming systems can also be evaluated efficiently based on the paradigm of incremental evaluation, just like in BI systems, in a main-memory model. The efficiency of such a model is measured in terms of runtime memory footprint and update processing cost. The existing approaches to dynamic evaluation in both kinds of systems present a trade-off between memory footprint and update processing cost: systems that avoid materialization of query (sub)results incur high update latency, and systems that materialize (sub)results incur a high memory footprint. We are interested in building a model that addresses this trade-off. In particular, we overcome the trade-off by devising a practical dynamic evaluation algorithm for queries that appear in both kinds of systems and by presenting a main-memory data representation that allows query (sub)results to be enumerated without materialization and can be maintained efficiently under updates.
We call this representation the Dynamic Constant Delay Linear Representation (DCLR). We devise DCLRs with the following properties: 1) they allow, without materialization, enumeration of query results with bounded delay (and with constant delay for a subclass of queries); 2) they allow tuple lookup in query results with logarithmic delay (and with constant delay for conjunctive queries with equi-joins only); 3) they take space linear in the size of the database; 4) they can be maintained efficiently under updates. We first study DCLRs with the above properties for the class of acyclic conjunctive queries featuring equi-joins with projections and present a dynamic evaluation algorithm called the Dynamic Yannakakis (DYN) algorithm. We then generalize the DYN algorithm to the class of acyclic queries featuring multi-way theta-joins with projections and call the result Generalized DYN (GDYN). The operation of DYN and GDYN over DCLRs is based on a particular variant of join trees, called Generalized Join Trees (GJTs), which guarantees the above properties of DCLRs. We define GJTs and present algorithms to test a conjunctive query featuring theta-joins for acyclicity and to generate GJTs for such queries. We extend the classical GYO algorithm from testing a conjunctive query with equalities for acyclicity to testing a conjunctive query featuring multi-way theta-joins with projections for acyclicity, and we further extend the GYO algorithm to generate GJTs for queries that are acyclic. GDYN is hence a unified framework based on DCLRs that enables processing of queries that appear in streaming systems as well as in BI systems in a unified main-memory model and addresses the space-time trade-off. We instantiate GDYN to the particular case where all theta-joins involve only equalities and inequalities and call this instantiation IEDYN. We implement DYN and IEDYN as query compilers that generate executable programs in the Scala programming language and provide all the necessary data structures together with their maintenance and enumeration methods in a continuous stream processing model. We evaluate DYN and IEDYN against state-of-the-art BI and streaming systems on both industrial and synthetically generated benchmarks, and show that DYN and IEDYN outperform the existing systems by over an order of magnitude in both memory footprint and update processing time. / Doctorat en Sciences de l'ingénieur et technologie
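The classical GYO reduction mentioned above has a compact form for plain conjunctive queries with equi-joins; a hedged sketch of that basic test (not the thesis's theta-join generalization or its GJT construction) is below.

```python
def is_acyclic(hyperedges):
    """GYO reduction: a conjunctive query (one hyperedge of variables per atom)
    is acyclic iff repeated ear removal empties the hypergraph."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed and len(edges) > 1:
        changed = False
        for i, e in enumerate(edges):
            others = edges[:i] + edges[i + 1:]
            exclusive = {v for v in e if not any(v in o for o in others)}
            shared = e - exclusive
            # e is an ear if its shared variables all fit inside one other edge.
            if any(shared <= o for o in others):
                edges.pop(i)
                changed = True
                break
    return len(edges) <= 1

# Acyclic: R(a,b), S(b,c), T(c,d) form a chain join.
print(is_acyclic([{"a", "b"}, {"b", "c"}, {"c", "d"}]))   # True
# Cyclic: the triangle R(a,b), S(b,c), T(c,a).
print(is_acyclic([{"a", "b"}, {"b", "c"}, {"c", "a"}]))   # False
```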
157

Complex query processing and estimation of distribution skewness in Internet-scale distributed networks

Pitoura, Theoni 12 January 2009 (has links)
Distributed, Internet-scale networks, and mainly peer-to-peer (p2p) networks, their most representative example, have recently attracted great interest from researchers and industry due to their outstanding properties, such as full decentralization, node autonomy, and scalability. Initially designed to support file-sharing applications with simple lookup operations, they soon developed into a new model of distributed systems, with many and increasing possibilities for Internet applications, supporting complex applications over structured and semantically rich data. Our research in this area takes two basic directions: (a) complex query processing and (b) estimation of skewness in the various distributions found in these networks (e.g., load distribution, distribution of the offer or consumption of a resource, data value distributions, etc.), which, among other uses, is an important tool for supporting complex query processing. Specifically, we address and solve three basic open problems. The first open problem is range query processing in p2p systems based on distributed hash tables (DHTs), with simultaneous guarantees of access load balancing and fault tolerance. We propose an overlay DHT architecture, coined Saturn. Saturn uses a novel order-preserving hash function that places consecutive data values on successive nodes to provide efficient range query processing, and replication to guarantee access load balancing (vertical, load-driven replication) and fault tolerance (horizontal replication).
With extensive experimentation, we evaluate and compare Saturn with two basic DHT networks (Chord and OP-Chord), and confirm its superiority in meeting the three requirements above, as well as its ability to tune the degree of replication to trade off replication cost against access load balancing. The second open problem we address concerns the lack of appropriate metrics to express the degree of skewness of various distributions (for example, the fairness of a load distribution) in p2p networks, and the inefficient, offline-only exploitation of skewness metrics, which prevents any cooperation with corrective algorithms (for example, load balancing algorithms). The problem is important because estimating distribution fairness contributes to system scalability and efficiency. First, after a comprehensive study and evaluation of popular skewness metrics, we select three of them (the Gini coefficient, the fairness index, and the coefficient of variation), and then we develop sampling techniques (three already known and three novel ones) to dynamically estimate these metrics. With extensive experimentation that comparatively evaluates both the proposed estimation algorithms and the three metrics, we show how these metrics, and especially the Gini coefficient, can easily be used online by higher-level algorithms, which can now know when to best intervene to correct unfairness. The third and last open problem concerns self-join size estimation of a relation whose tuples are distributed over the data nodes of an overlay network. Self-join size has been used extensively in centralized databases for query optimization purposes, and we argue that it can also be used in various other applications, especially in p2p networks (e.g., web clustering, web searching). Our contribution includes adaptations of five well-known centralized self-join size estimation techniques (sequential sampling, cross-sampling, adaptive and bifocal sampling, and sample-count) to the p2p environment, and a novel estimation technique based on the Gini coefficient. With mathematical analysis we show that estimates of the Gini coefficient can lead to estimates of the skewness of the underlying data distribution when the data follow a power law or Zipf's law, and these in turn lead to self-join size estimates for the data relations. With an extensive experimental study and comparison of all the above techniques, we show that the proposed technique is very efficient in terms of accuracy, precision, and estimation cost compared with the other five methods.
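The Gini coefficient singled out above has a simple closed form over a sorted distribution; a hedged sketch computing it exactly (not the sampling-based estimators developed in the thesis) follows.

```python
def gini(values):
    """Gini coefficient of a load/value distribution (0 = perfectly even, near 1 = highly skewed)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Closed form over the ascending-sorted values.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

print(gini([10, 10, 10, 10]))   # 0.0: load is perfectly balanced across nodes
print(gini([0, 0, 0, 100]))     # 0.75: almost all load sits on one node
```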
158

Combining checkpointing and other resilience mechanisms for exascale systems

Bentria, Dounia 10 December 2014 (has links)
In this thesis, we are interested in scheduling and optimization problems in probabilistic contexts. The contributions of this thesis come in two parts. The first part is dedicated to the optimization of different fault-tolerance mechanisms for very large-scale machines that are subject to a probability of failure, and the second part is devoted to the optimization of the expected sensor data acquisition cost when evaluating a query expressed as a tree of disjunctive Boolean operators applied to Boolean predicates. In the first chapter, we present the related work of the first part and then introduce some new general results that are useful for resilience on exascale systems. In the second chapter, we study a unified model for several well-known checkpoint/restart protocols.
The proposed model is generic enough to encompass both extremes of the checkpoint/restart space, from coordinated approaches to a variety of uncoordinated checkpoint strategies. We propose a detailed analysis of several scenarios, including some of the most powerful currently available HPC platforms, as well as anticipated exascale designs. In the third, fourth, and fifth chapters, we study the combination of different fault-tolerance mechanisms (replication, fault prediction, and detection of silent errors) with the traditional checkpoint/restart mechanism. We evaluate several models using simulations. Our results show that these models are useful for a range of application models in the context of future exascale systems. In the second part of the thesis, we study the problem of minimizing the expected sensor data acquisition cost when evaluating a query expressed as a tree of disjunctive Boolean operators applied to Boolean predicates. The problem is to determine the order in which predicates should be evaluated so as to short-circuit part of the query evaluation and minimize the expected cost. In the sixth chapter, we present the related work of the second part, and in the seventh chapter, we study the problem for queries expressed in disjunctive normal form. We consider the more general case where each data stream can appear in multiple predicates, and we consider two models: one where each predicate can access a single stream and one where each predicate can access multiple streams.
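One classical reference point for such checkpoint/restart analyses is the first-order Young/Daly formula for the optimal checkpoint period; the sketch below computes it and the resulting waste estimate (standard textbook formulas, not the unified model proposed in the thesis).

```python
import math

def young_daly_period(mtbf_seconds, checkpoint_cost_seconds):
    """First-order optimal checkpoint period: sqrt(2 * MTBF * C)."""
    return math.sqrt(2.0 * mtbf_seconds * checkpoint_cost_seconds)

def waste_fraction(period, mtbf, checkpoint_cost):
    """First-order fraction of time lost to checkpointing and re-executed work."""
    return checkpoint_cost / period + period / (2.0 * mtbf)

mtbf = 24 * 3600.0          # assume one failure per day on the full platform
cost = 600.0                # assume a 10-minute checkpoint
p = young_daly_period(mtbf, cost)
print(f"checkpoint every {p / 3600:.2f} h, waste ~ {waste_fraction(p, mtbf, cost):.1%}")
```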
159

Scalable algorithms for cloud-based Semantic Web data management

Zampetakis, Stamatis 21 September 2015 (has links)
In order to build smart systems, where machines are able to reason exactly like humans, data with semantics is a major requirement. This need led to the advent of the Semantic Web, which proposes standard ways of representing and querying data with semantics. RDF is the prevalent data model used to describe web resources, and SPARQL is the query language for expressing queries over RDF data. Being able to store and query data with semantics triggered the development of many RDF data management systems. The rapid evolution of the Semantic Web provoked a shift from centralized data management systems to distributed ones. The first systems to appear relied on P2P and client-server architectures, while recently the focus has moved to cloud computing. Cloud computing environments have strongly impacted research and development in distributed software platforms. Cloud providers offer distributed, shared-nothing infrastructures that may be used for data storage and processing. The main features of cloud computing are scalability, fault tolerance, and elastic allocation of computing and storage resources following the needs of the users. This thesis investigates the design and implementation of scalable algorithms and systems for cloud-based Semantic Web data management. In particular, we study the performance and cost of exploiting commercial cloud infrastructures to build Semantic Web data repositories, and the optimization of SPARQL queries for massively parallel frameworks. First, we introduce the basic concepts of the Semantic Web and the main components and frameworks interacting in massively parallel cloud-based systems. In addition, we provide an extended overview of existing RDF data management systems in centralized and distributed settings, emphasizing the critical concepts of storage, indexing, query optimization, and infrastructure. Second, we present AMADA, an architecture for RDF data management using public cloud infrastructures. We follow the Software as a Service (SaaS) model, where the complete platform runs in the cloud and appropriate APIs are provided to end-users for storing and retrieving RDF data. We explore various storage and querying strategies, revealing pros and cons with respect to performance and also to monetary cost, which is an important new dimension to consider in public cloud services. Finally, we present CliqueSquare, a distributed RDF data management system built on top of Hadoop, incorporating a novel optimization algorithm that is able to produce massively parallel plans for SPARQL queries. We present a family of optimization algorithms relying on n-ary (star) equality joins to build flat plans, and compare their ability to find the flattest plans possible. Inspired by existing partitioning and indexing techniques, we present a generic storage strategy suitable for storing RDF data in HDFS (Hadoop's Distributed File System). Our experimental results validate the efficiency and effectiveness of the optimization algorithm, and also demonstrate the overall performance of the system.
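CliqueSquare's plan construction starts from n-ary star equi-joins; a loose, hedged illustration of the grouping step (collecting SPARQL-style triple patterns that share a variable into a star, each evaluable as one n-ary join stage) is sketched below. The grouping heuristic and the names used are illustrative only, not CliqueSquare's actual algorithm.

```python
from collections import defaultdict

def group_into_stars(triple_patterns):
    """Group triple patterns (s, p, o) into star joins keyed on a shared variable
    (strings starting with '?'), so each star can be evaluated as one n-ary
    equi-join in a single parallel stage."""
    stars = defaultdict(list)
    for s, p, o in triple_patterns:
        for term in (s, p, o):
            if term.startswith("?"):
                stars[term].append((s, p, o))
                break   # assign each pattern to its first variable only
    return dict(stars)

bgp = [
    ("?person", "worksAt", "?org"),
    ("?person", "name", "?name"),
    ("?org", "locatedIn", "Berlin"),
]
for var, patterns in group_into_stars(bgp).items():
    print(var, "->", patterns)
```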
160

PLANT LEVEL IIOT BASED ENERGY MANAGEMENT FRAMEWORK

Liya Elizabeth Koshy (14700307) 31 May 2023 (has links)
The Energy Monitoring Framework, designed and developed by IAC, IUPUI, aims to provide a cloud-based solution that combines business analytics with sensors for real-time energy management at the plant level using wireless sensor network technology.

The project provides a platform where users can analyze the functioning of a plant using sensor data. The data also helps users explore energy usage trends and identify energy leaks caused by malfunctions or other environmental factors in their plant. Additionally, users can check the status of the machinery in their plant and control equipment remotely.

The main objectives of the project include the following:
- Set up a wireless network using sensors and smart implants with a base station/controller.
- Deploy and connect the smart implants and sensors to the equipment in the plant that needs to be analyzed or controlled, to improve its energy efficiency.
- Set up a generalized interface to collect and process the sensor data values and store the data in a database.
- Design and develop a generic database compatible with various companies, irrespective of type and size.
- Design and develop a web application with a generalized structure, so that the database can be deployed at multiple companies with minimal customization. The web app should provide users with a platform to interact with the data, analyze the sensor data, and issue commands to control the equipment.

The general structure of the project comprises the following components:
- A wireless sensor network with a base station.
- An edge PC that interfaces with the sensor network to collect the sensor data and send it to the cloud server, and that also sends command signals back through the sensor network to control the switches/actuators.
- A cloud that hosts a database and an API to collect and store information.
- A web application hosted in the cloud to provide an interactive platform for users to analyze the data.

The project was demonstrated in:
- Lecture Hall (https://iac-lecture-hall.engr.iupui.edu/LectureHallFlask/).
- Test Bed (https://iac-testbed.engr.iupui.edu/testbedflask/).
- A company in Indiana.

The above deployments used sensors such as current sensors, temperature sensors, carbon dioxide sensors, and pressure sensors to set up the sensor network. The equipment was controlled using compatible switch nodes with the chosen sensor network protocol. The energy consumption of each piece of equipment was measured over a few days. The data was validated, and the system worked as expected, allowing the user to monitor, analyze, and control the connected equipment remotely.
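To make the edge-to-cloud data path concrete, here is a hedged Python sketch of an edge process posting a batch of sensor readings to a cloud REST API; the endpoint URL, field names, and token are hypothetical placeholders, not the framework's actual interface.

```python
import json
import time
import urllib.request

# Hypothetical endpoint and payload layout -- not the framework's actual API.
CLOUD_API_URL = "https://example-cloud.invalid/api/readings"
API_TOKEN = "replace-me"

def post_readings(readings):
    """Send a batch of sensor readings from the edge PC to the cloud database API."""
    payload = json.dumps({"timestamp": time.time(), "readings": readings}).encode()
    request = urllib.request.Request(
        CLOUD_API_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status

# Example batch: current (A) and temperature (deg C) readings from two sensor nodes.
batch = [
    {"node": "compressor-1", "sensor": "current", "value": 12.4},
    {"node": "ahu-2", "sensor": "temperature", "value": 21.7},
]
# post_readings(batch)  # would return the HTTP status code on success
```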
