1

Empirical Evaluation of Edge Computing for Smart Building Streaming IoT Applications

Ghaffar, Talha 13 March 2019 (has links)
Smart buildings are one of the most important emerging applications of the Internet of Things (IoT). The astronomical growth in IoT devices, the data generated from these devices, and ubiquitous connectivity have given rise to a new computing paradigm, referred to as "Edge computing", which argues for data analysis to be performed at the "edge" of the IoT infrastructure, near the data source. The development of efficient Edge computing systems must be based on an advanced understanding of the performance benefits that Edge computing can offer. The goal of this work is to develop this understanding by examining the end-to-end latency and throughput characteristics of Smart building streaming IoT applications when deployed at the resource-constrained infrastructure Edge, and comparing them against the performance that can be achieved by utilizing the Cloud's data-center resources. This work also presents a real-time streaming application to detect and localize the footstep impacts generated by a building's occupant while walking. We characterize this application's performance for Edge and Cloud computing and utilize a hybrid scheme that (1) offers up to around 60% and 65% reduced latency compared to Edge and Cloud, respectively, for similar throughput performance, and (2) enables processing of higher ingestion rates by eliminating the network bottleneck. / Master of Science / Among the various emerging applications of the Internet of Things (IoT) are Smart buildings, which allow us to monitor and manipulate various operating parameters of a building by instrumenting it with sensor and actuator devices (Things). These devices operate continuously and generate unbounded streams of data that need to be processed at low latency. This data, until recently, has been processed by IoT applications deployed in the Cloud, at the cost of the high network latency of accessing the Cloud's resources. However, the increasing availability of IoT devices, ubiquitous connectivity, and exponential growth in the volume of IoT data have given rise to a new computing paradigm, referred to as "Edge computing". Edge computing argues that IoT data should be analyzed near its source (at the network's Edge) in order to eliminate the high latency of accessing the Cloud for data processing. In order to develop efficient Edge computing systems, an in-depth understanding of the trade-offs involved in the Edge and Cloud computing paradigms is required. In this work, we seek to understand these trade-offs and the potential benefits of Edge computing. We examine the end-to-end latency and throughput characteristics of Smart building streaming IoT applications by deploying them at the resource-constrained Edge and compare their performance against that of a Cloud deployment. We also present a real-time streaming application to detect and localize the footstep impacts generated by a building's occupant while walking. We characterize this application's performance for Edge and Cloud computing and utilize a hybrid scheme that (1) offers up to around 60% and 65% reduced latency compared to Edge and Cloud, respectively, for similar throughput performance, and (2) enables processing of higher ingestion rates by eliminating the network bottleneck.
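A minimal sketch of the hybrid routing idea described in this abstract is shown below. It is not code from the thesis: the class name, the fixed per-second Edge capacity, and the windowed rate counter are all invented for illustration, and a real system would measure latency and throughput rather than assume a static threshold.

```java
// Hypothetical sketch of a hybrid Edge/Cloud scheme: light processing stays
// at the resource-constrained Edge, while overflow beyond the Edge's
// sustainable ingestion rate is offloaded to the Cloud.
import java.util.concurrent.atomic.AtomicLong;

public class HybridDispatcher {
    private final long edgeCapacityPerSec;              // assumed max events/s the Edge sustains
    private final AtomicLong windowCount = new AtomicLong();
    private volatile long windowStartMs = System.currentTimeMillis();

    public HybridDispatcher(long edgeCapacityPerSec) {
        this.edgeCapacityPerSec = edgeCapacityPerSec;
    }

    /** Route one sensor event: Edge first, Cloud once the Edge saturates. */
    public String route(byte[] event) {                 // payload unused in this sketch
        long now = System.currentTimeMillis();
        if (now - windowStartMs >= 1000) {              // reset the 1-second rate window
            windowStartMs = now;
            windowCount.set(0);
        }
        return windowCount.incrementAndGet() <= edgeCapacityPerSec ? "EDGE" : "CLOUD";
    }
}
```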
2

Distributed computing system and big data real-time processing structure—based on YARN, Storm and Spark

Tseng, Po Wei Unknown Date (has links)
With the coming of the era of big data, real-time data computation faces many challenges. For example, in futures-market forecasting, the market state must be accurately predicted with a model built from large data sets (hundreds of GB to tens of TB) within tens of milliseconds. In this research, we introduce a real-time big data computing architecture that addresses three practical requirements: high-speed processing, massive data processing, and large-scale storage. In addition, several artificial-intelligence algorithms, such as SVM (Support Vector Machine) and LR (Logistic Regression), are implemented as a strategy-simulation subsystem under the parallel distributed computing system. The architecture involves three main cloud computing techniques:
1. Apache YARN is used for integrated resource management, so that cluster resources are applied more efficiently.
2. To satisfy the high-speed processing requirement, Apache Storm processes the massive real-time data stream and computes thousands of market-state values within tens of milliseconds for subsequent model building.
3. With Apache Spark, we establish a distributed computing architecture for model building; by using Spark RDDs (Resilient Distributed Datasets), the architecture shortens SVM and LR model-building time to within hundreds of milliseconds.
To meet these requirements, we design an n-tier distributed architecture that integrates the foregoing techniques. In this architecture, Apache Kafka serves as the messaging middleware, supporting asynchronous message-based communication among the subsystems.
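The role of Apache Kafka as the messaging middleware in this n-tier design can be illustrated with a minimal producer, sketched below using Kafka's standard Java client. The topic name, key, and tick payload are hypothetical, not taken from the thesis.

```java
// Minimal Kafka producer sketch: one subsystem publishes a market tick onto
// a topic that another subsystem (e.g., the Storm stage) consumes
// asynchronously. Topic, key, and payload are illustrative only.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TickPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // hypothetical futures-contract key and tick payload
            producer.send(new ProducerRecord<>("market-ticks", "TXF",
                    "bid=17210,ask=17212,ts=1712345678"));
        }
    }
}
```

In the architecture the abstract describes, a Storm topology would consume such a topic to compute market-state values, while a Spark job reads the accumulated history for SVM/LR model building; Kafka decouples the two so each subsystem proceeds at its own pace.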
3

An Autonomic Workflow Performance Manager for Weather Forecast and Research Modeling Workflows

Gu, Shuqing January 2016 (has links)
Parameter selection is a critical task in scientific workflows in order to maintain simulation accuracy in environments where physical conditions change dynamically, as in weather research and forecast simulations. Currently, Numerical Weather Prediction (NWP) is the premier method for weather prediction, used by the National Oceanic and Atmospheric Administration (NOAA). It takes current observations from observation sites as the input to numerical computer models and then produces the final prediction. Given the large number of simulation parameters, the size of the configuration search space becomes prohibitive for rapidly evaluating and identifying the parameter configuration that leads to the most accurate prediction. In this thesis, we develop an Autonomic Workflow Performance Manager (AWPM) for the Hurricane Integrated Modeling System (HIMS). AWPM is implemented on top of Apache Storm and ZooKeeper to handle multiple real-time data streams for weather forecasting, and it automatically manages the model initialization and execution workflow. In our experiments, AWPM achieves better performance and efficiency for the model initialization and execution processes by utilizing autonomic computing, distributed computing, and component-based development. Compared to serial workflow execution as typically performed by domain scientists, we reduced the time of the configuration search workflow by a factor of 10 using 20 threads with the full search method, and by a factor of 20 with the roofline method.
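The tenfold speedup from running the full configuration search on 20 threads amounts to evaluating parameter configurations in parallel. The sketch below illustrates only that idea, under invented parameter names and a stand-in scoring function; it is not AWPM code, and a real evaluation would launch a model initialization and forecast run per configuration.

```java
// Toy parallel configuration search over a small 2-parameter grid,
// using a fixed pool of 20 worker threads (as in the full search method).
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConfigSearch {
    record Config(double param1, double param2) {}      // hypothetical parameters

    static double evaluate(Config c) {
        // stand-in for running one model initialization + forecast and scoring it
        return Math.abs(Math.sin(c.param1()) * Math.cos(c.param2()));
    }

    public static void main(String[] args) throws Exception {
        List<Config> space = new ArrayList<>();
        for (double a = 0; a < 1.0; a += 0.1)
            for (double b = 0; b < 1.0; b += 0.1)
                space.add(new Config(a, b));

        ExecutorService pool = Executors.newFixedThreadPool(20);
        List<Future<double[]>> results = new ArrayList<>();
        for (int i = 0; i < space.size(); i++) {
            final int idx = i;
            results.add(pool.submit(() -> new double[]{idx, evaluate(space.get(idx))}));
        }
        double bestScore = -1; int bestIdx = -1;
        for (Future<double[]> f : results) {
            double[] r = f.get();                       // blocks until that config is scored
            if (r[1] > bestScore) { bestScore = r[1]; bestIdx = (int) r[0]; }
        }
        pool.shutdown();
        System.out.println("best config = " + space.get(bestIdx) + ", score = " + bestScore);
    }
}
```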
4

Job Scheduling for Streaming Applications in Heterogeneous Distributed Processing Systems

Al-Sinayyid, Ali 01 December 2020 (has links)
The colossal amounts of data generated daily are increasing at a never-before-seen pace. A variety of applications—including stock trading, banking systems, healthcare, the Internet of Things (IoT), and social media networks—have created an unprecedented volume of real-time stream data, estimated to reach billions of terabytes in the near future. As a result, we are currently living in the so-called Big Data era and witnessing a transition to the so-called IoT era. Enterprises and organizations are tackling the challenge of interpreting enormous amounts of raw data streams to achieve an improved understanding of their data and thus make efficient, well-informed (i.e., data-driven) decisions. Researchers have designed distributed data stream processing systems that can process data in near real-time. To extract valuable information from raw data streams, analysts create and implement data stream processing applications structured as directed acyclic graphs (DAGs).

The infrastructure of distributed data stream processing systems, as well as the various requirements of stream applications, imposes new challenges. Cluster heterogeneity in a distributed environment results in varying cluster resources for task execution and data transmission, which makes optimal scheduling an NP-complete problem. Scheduling streaming applications plays a key role in optimizing system performance, particularly in maximizing the frame rate, i.e., how many instances of data sets can be processed per unit of time. A scheduling algorithm must consider data locality, resource heterogeneity, and communication and computation latencies; the latency at the computation or transmission bottleneck must be minimized when the application is mapped onto heterogeneous, distributed cluster resources. Recent work on task scheduling for distributed data stream processing systems has a number of limitations: most current schedulers are not designed to manage heterogeneous clusters, they lack the ability to consider both task and machine characteristics in scheduling decisions, and current default schedulers do not allow the user to control data-locality aspects in application deployment.

In this thesis, we investigate the problem of scheduling streaming applications in a heterogeneous cluster environment and develop the maximum-throughput scheduler algorithm (MT-Scheduler) for streaming applications. The proposed algorithm uses dynamic programming to efficiently map the application topology onto a heterogeneous distributed system based on computing and data-transfer requirements, while also taking into account the capacity of the underlying cluster resources. The approach maximizes system throughput by identifying and minimizing the time incurred at the computing/transfer bottleneck. The MT-Scheduler supports scheduling applications structured as DAGs, such as Amazon Timestream, Google MillWheel, and Twitter Heron. We conducted experiments using three Storm microbenchmark topologies in both simulated and real Apache Storm environments. To evaluate performance, we compared the proposed MT-Scheduler with the simulated round-robin and the default Storm scheduler algorithms. The results indicate that MT-Scheduler outperforms the default round-robin approach in terms of both average system latency and throughput.
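For intuition about the bottleneck-minimizing objective, consider a toy dynamic program for a strictly linear pipeline (the thesis targets general DAGs). All task costs, machine speeds, and transfer costs below are invented; the point is that throughput is inversely proportional to the most expensive compute or transfer stage, so minimizing the maximum stage cost maximizes throughput.

```java
// Toy DP sketch of the bottleneck-minimizing idea behind MT-Scheduler,
// restricted to a linear task chain mapped onto heterogeneous machines.
public class BottleneckDP {
    public static void main(String[] args) {
        double[] work = {4, 8, 2};            // compute cost per task (invented)
        double[] speed = {1.0, 2.0};          // machine speed factors (invented)
        double[][] xfer = {{0, 3}, {3, 0}};   // transfer cost between machines
        int T = work.length, M = speed.length;

        // dp[t][m] = smallest achievable bottleneck (max stage cost) when
        // tasks 0..t are placed and task t runs on machine m
        double[][] dp = new double[T][M];
        for (int m = 0; m < M; m++) dp[0][m] = work[0] / speed[m];
        for (int t = 1; t < T; t++)
            for (int m = 0; m < M; m++) {
                dp[t][m] = Double.MAX_VALUE;
                for (int p = 0; p < M; p++) {
                    double stage = Math.max(dp[t - 1][p],
                            Math.max(xfer[p][m], work[t] / speed[m]));
                    dp[t][m] = Math.min(dp[t][m], stage);
                }
            }
        double best = Double.MAX_VALUE;
        for (int m = 0; m < M; m++) best = Math.min(best, dp[T - 1][m]);
        // throughput is inversely proportional to the bottleneck stage cost
        System.out.printf("min bottleneck = %.2f, max rate ~ %.3f per time unit%n",
                best, 1.0 / best);
    }
}
```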
5

Resource optimization of edge servers dealing with priority-based workloads by utilizing service level objective-aware virtual rebalancing

Shahid, Amna 08 August 2023 (has links) (PDF)
IoT enables communication between sensor/actuator devices and the Cloud, but slow networks prevent Edge data from reaching Cloud analytics in time, which hinders the adoption of real-time analytics. VRebalance addresses priority-based workload performance for stream processing at the Edge. It uses BO (Bayesian Optimization) to prioritize workloads and find optimal resource configurations for efficient resource management. The Apache Storm platform, together with the RIoTBench IoT benchmark suite for real-time stream processing, was used to evaluate VRebalance. The study shows that VRebalance is more effective than traditional methods, meeting SLO targets despite system changes. VRebalance decreased SLO violation rates by almost 30% for static priority-based workloads and by 52.2% for dynamic priority-based workloads compared to a hill-climbing algorithm, and by 66.1% compared to Apache Storm's default allocation.
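A hedged sketch of the control loop implied by this abstract appears below: measure latency against the SLO, let an optimizer propose a new parallelism, and apply it with Storm's built-in `rebalance` command. The metric source, the proposal step (a stand-in for the BO search), and the topology/component names are all assumptions, not VRebalance's actual implementation.

```java
// Illustrative SLO-aware tuning loop in the spirit of VRebalance.
import java.io.IOException;

public class SloRebalancer {
    static double measureP99LatencyMs() { return 120.0; } // stand-in for real metrics
    static int proposeParallelism(double latencyMs, double sloMs) {
        // stand-in for the Bayesian-optimization step: a real optimizer would
        // model the latency/configuration surface and propose the next point
        return latencyMs > sloMs ? 8 : 4;
    }
    public static void main(String[] args) throws IOException, InterruptedException {
        double sloMs = 100.0;
        double p99 = measureP99LatencyMs();
        if (p99 > sloMs) {
            int executors = proposeParallelism(p99, sloMs);
            // `storm rebalance` changes the parallelism of a running topology
            // without redeploying it; topology/component names here are invented
            new ProcessBuilder("storm", "rebalance", "riot-etl",
                    "-e", "senml-parse=" + executors)
                    .inheritIO().start().waitFor();
        }
    }
}
```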
6

Benchmarking and Scheduling Strategies for Distributed Stream Processing

Shukla, Anshu January 2017 (has links) (PDF)
The velocity dimension of Big Data refers to the need to rapidly process data that arrives continuously as streams of messages or events. Distributed Stream Processing Systems (DSPS) are distributed programming and runtime platforms that allow users to define a composition of dataflow logic that is executed on distributed resources over streams of incoming messages. A DSPS uses commodity clusters and Cloud Virtual Machines (VMs) for its execution. In order to meet the required performance for these applications, the DSPS needs to schedule these dataflows efficiently over the resources. Despite their growing use, resource scheduling for DSPSs tends to be done in an ad hoc manner, favoring empirical and reactive approaches rather than a model-driven and analytical approach. Such empirical strategies may arrive at an approximate schedule for the dataflow that needs further tuning to meet the quality of service. We propose a model-based scheduling approach that makes use of performance profiles and benchmarks developed for tasks in the dataflow to plan both the resource allocation and the resource mapping that together form the schedule planning process. We propose the Model Based Allocation (MBA) and the Slot Aware Mapping (SAM) approaches, which effectively utilize knowledge of the performance model of logic tasks to provide efficient and predictable scheduling behavior. We implemented and validated these algorithms using the popular open-source Apache Storm DSPS for several micro and application dataflows. The results show that our model-driven approach is able to reduce the amount of required resources (VMs) by 30%–50% relative to existing techniques. We also see that our strategies offer predictable behavior, ensuring that the expected and actual rates supported and resources used match closely. This can enable deterministic schedule planning even under dynamic conditions. Besides this static scheduling, we also examine the ability to dynamically consolidate tasks onto fewer VMs when the load on the dataflow decreases or the VMs become fragmented. We propose reliable task migration models for Apache Storm dataflows that are able to rapidly move the task assignment in the cluster and resume the dataflow execution without any message loss.
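The Model Based Allocation step can be pictured with a toy calculation: given each task's benchmarked peak rate, derive the instance count needed for a target input rate, then derive the VM count from slots per VM. The sketch below uses invented task names and numbers and illustrates only the allocation arithmetic, not the MBA/SAM algorithms themselves (SAM additionally decides which slots the instances map onto).

```java
// Toy illustration of model-based allocation: per-task benchmark rates
// drive the instance counts, which in turn drive the VM (slot) budget.
public class ModelBasedAllocation {
    public static void main(String[] args) {
        String[] task = {"parse", "filter", "aggregate"};   // hypothetical dataflow tasks
        double[] peakRate = {5_000, 12_000, 3_000};         // benchmarked msg/s per instance
        double targetRate = 20_000;                         // desired input msg/s
        int slotsPerVm = 4, totalInstances = 0;
        for (int i = 0; i < task.length; i++) {
            // each task needs enough parallel instances to sustain the target rate
            int n = (int) Math.ceil(targetRate / peakRate[i]);
            totalInstances += n;
            System.out.printf("%-9s x %d instances%n", task[i], n);
        }
        int vms = (int) Math.ceil(totalInstances / (double) slotsPerVm);
        System.out.println("VMs needed (slot-aware mapping packs these): " + vms);
    }
}
```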
