111 |
Efficient Mobile Computation Offloading with Hard Task Deadlines and Concurrent Local Execution
Teymoori, Peyvand January 2021 (has links)
Mobile computation offloading (MCO) can alleviate the hardware limitations of mobile devices by migrating heavy computational tasks from mobile devices to more powerful cloud servers. This can lead to better performance and energy savings for the mobile devices. This thesis considers MCO over stochastic wireless channels when task completion times are subject to hard deadline constraints. Hard deadlines, however, are difficult to meet in conventional computation offloading due to the randomness caused by the wireless channels. In the proposed offloading policies, concurrent local execution (CLE) is used to guarantee task execution time constraints. By sometimes allowing simultaneous local and remote execution, CLE ensures that job deadlines are always satisfied in the face of any unexpected wireless channel conditions. The thesis introduces online optimal algorithms that reduce the remote and local execution overlap so that energy wastage is minimized. Markov processes are used to model the communication channels.
MCO is addressed for three different job offloading schemes: continuous, multi-part, and preemptive. In the case of continuous offloading, referred to as 1-Part offloading, the mobile device uploads the entire job in one piece, without interruption, once the scheduler decides to do so. In multi-part computation offloading, the job is partitioned into a known number (K) of parts, and each part is uploaded separately. In this offloading mechanism, which is referred to as K-Part Offloading, the upload initiation times of each part must be determined dynamically during runtime, and there may be waiting time periods between consecutive upload parts. Preemptive offloading is a generalization of K-Part Offloading where the number of task upload parts is unknown. In this scheme, a decision to either continue offloading or to temporarily interrupt the offload is made at the start of each time slot. Compared to conventional continuous computation offloading, interrupted offloading mechanisms (i.e., K-Part and preemptive offloading) allow the system to adapt when channel conditions change and may therefore result in lower mobile device energy consumption. This energy reduction comes at the expense of higher computational complexity. In this thesis, for each offloading scheme, an online computation offloading algorithm is introduced by constructing a time-dilated absorbing Markov chain (TDAMC) and applying dynamic programming (DP). These algorithms are shown to be energy-optimal while ensuring that the hard task deadline constraints are always satisfied. The optimality of these algorithms is proved using Markovian decision process stopping theory.
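The core decision structure behind such DP-based offloading policies can be sketched as a finite-horizon recursion over a two-state Markov channel: at each slot, either start the upload now or burn some local (CLE) energy waiting for a better channel state. All numbers below (transition probabilities, energies, horizon) are illustrative assumptions, not values from the thesis, and the actual TDAMC formulation is considerably richer.

```python
# Hypothetical sketch: finite-horizon DP for choosing the offload start slot
# over a two-state (good/bad) Markov channel. Parameters are assumed.
P = {"good": {"good": 0.8, "bad": 0.2},   # channel transition probabilities
     "bad":  {"good": 0.3, "bad": 0.7}}
E_TX = {"good": 2.0, "bad": 5.0}          # upload energy if started in this state
E_WAIT = 0.5                              # local energy burned per waited slot

def min_energy(state, slots_left):
    """Minimum expected energy over the start-now / wait decision."""
    start_now = E_TX[state]
    if slots_left == 0:
        return start_now                  # the deadline forces the upload
    wait = E_WAIT + sum(p * min_energy(s, slots_left - 1)
                        for s, p in P[state].items())
    return min(start_now, wait)

print(round(min_energy("bad", 4), 4))
```

In this toy setting, waiting pays off only when the channel is bad: from the good state, starting immediately is always cheaper than spending another slot of local energy.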
Since the computational complexity of the proposed online algorithms, especially in the case of preemptive offloading, can be significant, three simpler and computationally efficient approximation methods are introduced: Markovian Compression (MC), Time Compression (TC), and Preemption Using Continuous Offloading (Preemption-CO). MC and TC reduce the state space of the offloading Markovian process by using a novel notion of geometric similarity or by running an optimal online offloading algorithm in periodic time steps. In Preemption-CO, while a task is offloaded preemptively, the offloading decision at every time-slot is based on non-preemptive calculations. These methods are used alone or in combination to construct practical offloading algorithms. A variety of results are presented that show the tradeoffs between complexity and mobile energy-saving performance for the different algorithms. / Thesis / Doctor of Philosophy (PhD)
|
112 |
Automated Hybrid Time Series Forecasting: Design, Benchmarking, and Use Cases / Automatisierte hybride Zeitreihenprognose: Design, Benchmarking und Anwendungsfälle
Bauer, André January 2021 (has links) (PDF)
These days, we are living in a digitalized world. Both our professional and private lives are pervaded by various IT services, which are typically operated using distributed computing systems (e.g., cloud environments). Due to the high level of digitalization, the operators of such systems are confronted with fast-paced and changing requirements. In particular, cloud environments have to cope with load fluctuations and respective rapid and unexpected changes in the computing resource demands. To face this challenge, so-called auto-scalers, such as the threshold-based mechanism in Amazon Web Services EC2, can be employed to enable elastic scaling of the computing resources. However, despite this opportunity, business-critical applications are still run with highly overprovisioned resources to guarantee a stable and reliable service operation. This strategy is pursued due to the lack of trust in auto-scalers and the concern that inaccurate or delayed adaptations may result in financial losses.
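A threshold-based auto-scaler of the kind referenced above can be sketched as a simple per-evaluation-period rule; the thresholds, step size, and instance floor here are assumed for illustration, not taken from any particular EC2 policy.

```python
# Minimal sketch of a threshold-based scaling rule (assumed parameters).
def scale_decision(cpu_utilization, instances,
                   upper=0.80, lower=0.30, min_instances=1):
    """Return the new instance count for one evaluation period."""
    if cpu_utilization > upper:
        return instances + 1                     # scale out
    if cpu_utilization < lower and instances > min_instances:
        return instances - 1                     # scale in
    return instances                             # stay

print(scale_decision(0.92, 4))  # 5
```

Rules like this are purely reactive: they respond only after utilization has already crossed a threshold, which is exactly the inherent delay the thesis targets with forecasting.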
To adapt the resource capacity in time, the future resource demands must be "foreseen", as reacting to changes once they are observed introduces an inherent delay. In other words, accurate forecasting methods are required to adapt systems proactively. A powerful approach in this context is time series forecasting, which is also applied in many other domains. The core idea is to examine past values and predict how these values will evolve as time progresses. According to the "No-Free-Lunch Theorem", there is no algorithm that performs best for all scenarios. Therefore, selecting a suitable forecasting method for a given use case is a crucial task. Simply put, each method has its benefits and drawbacks, depending on the specific use case. The choice of the forecasting method is usually based on expert knowledge, which cannot be fully automated, or on trial-and-error. In both cases, this is expensive and prone to error.
Although auto-scaling and time series forecasting are established research fields, existing approaches cannot fully address the mentioned challenges: (i) In our survey on time series forecasting, we found that publications on time series forecasting typically consider only a small set of (mostly related) methods and evaluate their performance on a small number of time series with only a few error measures while providing no information on the execution time of the studied methods. Therefore, such articles cannot be used to guide the choice of an appropriate method for a particular use case; (ii) Existing open-source hybrid forecasting methods that take advantage of at least two methods to tackle the "No-Free-Lunch Theorem" are computationally intensive, poorly automated, designed for a particular data set, or they lack a predictable time-to-result. Methods exhibiting a high variance in the time-to-result cannot be applied for time-critical scenarios (e.g., auto-scaling), while methods tailored to a specific data set introduce restrictions on the possible use cases (e.g., forecasting only annual time series); (iii) Auto-scalers typically scale an application either proactively or reactively. Even though some hybrid auto-scalers exist, they lack sophisticated solutions to combine reactive and proactive scaling. For instance, resources are only released proactively while resource allocation is entirely done in a reactive manner (inherently delayed); (iv) The majority of existing mechanisms do not take the provider's pricing scheme into account while scaling an application in a public cloud environment, which often results in excessive charged costs. Even though some cost-aware auto-scalers have been proposed, they only consider the current resource demands, neglecting their development over time. For example, resources are often shut down prematurely, even though they might be required again soon.
To address the mentioned challenges and the shortcomings of existing work, this thesis presents three contributions: (i) The first contribution, a forecasting benchmark, addresses the problem of limited comparability between existing forecasting methods; (ii) The second contribution, Telescope, provides an automated hybrid time series forecasting method addressing the challenge posed by the "No-Free-Lunch Theorem"; (iii) The third contribution, Chamulteon, provides a novel hybrid auto-scaler for coordinated scaling of applications comprising multiple services, leveraging Telescope to forecast the workload intensity as a basis for proactive resource provisioning. In the following, the three contributions of the thesis are summarized:
Contribution I - Forecasting Benchmark
To establish a level playing field for evaluating the performance of forecasting methods in a broad setting, we propose a novel benchmark that automatically evaluates and ranks forecasting methods based on their performance in a diverse set of evaluation scenarios. The benchmark comprises four different use cases, each covering 100 heterogeneous time series taken from different domains. The data set was assembled from publicly available time series and was designed to exhibit much higher diversity than existing forecasting competitions. Besides proposing a new data set, we introduce two new measures that describe different aspects of a forecast. We applied the developed benchmark to evaluate Telescope.
Contribution II - Telescope
To provide a generic forecasting method, we introduce a novel machine learning-based forecasting approach that automatically retrieves relevant information from a given time series. More precisely, Telescope automatically extracts intrinsic time series features and then decomposes the time series into components, building a forecasting model for each of them. Each component is forecast by applying a different method and then the final forecast is assembled from the forecast components by employing a regression-based machine learning algorithm. In more than 1300 hours of experiments benchmarking 15 competing methods (including approaches from Uber and Facebook) on 400 time series, Telescope outperformed all methods, exhibiting the best forecast accuracy coupled with a low and reliable time-to-result. Compared to the competing methods that exhibited, on average, a forecast error (more precisely, the symmetric mean absolute forecast error) of 29%, Telescope exhibited an error of 20% while being 2556 times faster. In particular, the methods from Uber and Facebook exhibited an error of 48% and 36%, and were 7334 and 19 times slower than Telescope, respectively.
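The decompose / forecast-per-component / recompose idea can be sketched as follows. The per-component methods here (a linear trend fit and seasonal averaging) are simple stand-ins, not Telescope's actual feature extraction or regression-based recombination.

```python
# Hedged sketch: split a series into trend + seasonality, forecast each
# component separately, and add the forecasts back together.
def decompose_forecast(y, period, horizon):
    n = len(y)
    # Trend component: least-squares linear fit.
    x_mean, y_mean = (n - 1) / 2, sum(y) / n
    slope = sum((x - x_mean) * (v - y_mean) for x, v in enumerate(y)) \
            / sum((x - x_mean) ** 2 for x in range(n))
    intercept = y_mean - slope * x_mean
    detrended = [v - (intercept + slope * x) for x, v in enumerate(y)]
    # Seasonal component: average the detrended values per seasonal position.
    season = [sum(detrended[i::period]) / len(detrended[i::period])
              for i in range(period)]
    # Recombine: extrapolated trend plus repeated seasonal pattern.
    return [intercept + slope * x + season[x % period]
            for x in range(n, n + horizon)]

# Noise-free check: a linear trend plus a period-4 cycle extrapolates cleanly.
y = [0.5 * x + [1.0, -1.0, -1.0, 1.0][x % 4] for x in range(24)]
print([round(v, 2) for v in decompose_forecast(y, period=4, horizon=4)])
```

The appeal of this structure is that each component is simple enough for a fast, specialized method, which is one way a hybrid approach can keep its time-to-result low and predictable.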
Contribution III - Chamulteon
To enable reliable auto-scaling, we present a hybrid auto-scaler that combines proactive and reactive techniques to scale distributed cloud applications comprising multiple services in a coordinated and cost-effective manner. More precisely, proactive adaptations are planned based on forecasts of Telescope, while reactive adaptations are triggered based on actual observations of the monitored load intensity. To solve occurring conflicts between reactive and proactive adaptations, a complex conflict resolution algorithm is implemented. Moreover, when deployed in public cloud environments, Chamulteon reviews adaptations with respect to the cloud provider's pricing scheme in order to minimize the charged costs. In more than 400 hours of experiments evaluating five competing auto-scaling mechanisms in scenarios covering five different workloads, four different applications, and three different cloud environments, Chamulteon exhibited the best auto-scaling performance and reliability while at the same time reducing the charged costs. The competing methods provided insufficient resources for (on average) 31% of the experimental time; in contrast, Chamulteon cut this time to 8% and the SLO (service level objective) violations from 18% to 6% while using up to 15% fewer resources and reducing the charged costs by up to 45%.
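One way to combine a reactive and a proactive recommendation is sketched below. This is an illustrative conflict-resolution rule, not Chamulteon's actual algorithm: any scale-out recommendation wins immediately (to protect SLOs), while scale-in is applied only when both sides agree (to avoid premature release of resources).

```python
# Illustrative conflict resolution between a reactive recommendation (from
# observed load) and a proactive one (from a load forecast). Assumed rule.
def resolve(reactive_rec, proactive_rec, current):
    if max(reactive_rec, proactive_rec) > current:
        return max(reactive_rec, proactive_rec)   # any scale-out wins
    if reactive_rec < current and proactive_rec < current:
        return max(reactive_rec, proactive_rec)   # scale in only on agreement
    return current                                # otherwise hold steady

print(resolve(reactive_rec=3, proactive_rec=6, current=4))  # forecast wins: 6
```

The asymmetry is deliberate: under-provisioning causes SLO violations immediately, whereas over-provisioning only costs money, so the rule errs toward keeping capacity.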
The contributions of this thesis can be seen as major milestones in the domain of time series forecasting and cloud resource management. (i) This thesis is the first to present a forecasting benchmark that covers a variety of different domains with a high diversity between the analyzed time series. Based on the provided data set and the automatic evaluation procedure, the proposed benchmark helps enhance the comparability of forecasting methods. The benchmarking results for different forecasting methods enable the selection of the most appropriate forecasting method for a given use case. (ii) Telescope provides the first generic and fully automated time series forecasting approach that delivers both accurate and reliable forecasts while making no assumptions about the analyzed time series. Hence, it eliminates the need for expensive, time-consuming, and error-prone procedures, such as trial-and-error searches or consulting an expert. This opens up new possibilities especially in time-critical scenarios, where Telescope can provide accurate forecasts with a short and reliable time-to-result.
Although Telescope was applied for this thesis in the field of cloud computing, there is no limitation regarding the applicability of Telescope in other domains, as demonstrated in the evaluation. Moreover, Telescope, which was made available on GitHub, is already used in a number of interdisciplinary data science projects, for instance, predictive maintenance in an Industry 4.0 context, heart failure prediction in medicine, or as a component of predictive models of beehive development. (iii) In the context of cloud resource management, Chamulteon is a major milestone for increasing the trust in cloud auto-scalers. The complex resolution algorithm enables reliable and accurate scaling behavior that reduces losses caused by excessive resource allocation or SLO violations. In other words, Chamulteon provides reliable online adaptations minimizing charged costs while at the same time maximizing user experience.
|
113 |
ARTS and CRAFTS: Predictive Scaling for Request-Based Services in the Cloud
Guenther, Andrew 01 June 2014 (has links) (PDF)
Modern web services can see well over a billion requests per day. Data and services at such scale require advanced software and large amounts of computational resources to process requests in reasonable time. Advancements in cloud computing now allow us to acquire additional resources faster than in traditional capacity planning scenarios. Companies can scale systems up and down as required, allowing them to meet the demand of their customers without having to purchase their own expensive hardware. Unfortunately, these now-routine scaling operations remain a primarily manual task. To solve this problem, we present CRAFTS (Cloud Resource Anticipation For Timing Scaling), a system that automatically identifies application throughput and predictively scales cloud computing resources based on historical data. We also present ARTS (Automated Request Trace Simulator), a request-based workload generation tool for constructing diverse and realistic request patterns for modern web applications. ARTS allows us to evaluate CRAFTS' algorithms on a wide range of scenarios. In this thesis, we outline the design and implementation of both ARTS and CRAFTS and evaluate the effectiveness of various prediction algorithms applied to real-world request data and artificial workloads generated by ARTS.
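Predictive scaling from historical request data can be sketched as: fit a trend to recent request rates, forecast the next interval, and size the fleet for it. This is a hedged stand-in for CRAFTS' actual algorithms, and the per-instance capacity is an assumed figure.

```python
import math

# Sketch: one-step-ahead least-squares forecast of the request rate,
# converted into an instance count (assumed per-instance capacity).
def predict_next(history):
    """Forecast the next value from a least-squares line through history."""
    n = len(history)
    x_mean, y_mean = (n - 1) / 2, sum(history) / n
    slope = (sum(x * y for x, y in zip(range(n), history)) - n * x_mean * y_mean) \
            / sum((x - x_mean) ** 2 for x in range(n))
    return y_mean + slope * (n - x_mean)

def instances_needed(history, capacity_per_instance=1000):
    return max(1, math.ceil(predict_next(history) / capacity_per_instance))

print(instances_needed([2000, 2500, 3000, 3500]))  # trend reaches 4000 → 4
```

Even this toy version shows the key difference from reactive scaling: the fourth instance is provisioned before the load actually reaches 4000 requests per interval.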
|
114 |
Energy Efficiency Comparison for Latency-Constraint Mobile Computation Offloading Mechanisms
Liang, Feng 23 January 2023 (has links)
In this thesis, we compare the energy efficiency of various mobile computation offloading strategies over stochastic transmission channels where the task completion time is subject to a latency constraint. In the proposed methods, finite-state Markov chains are used to model the wireless channels between the mobile devices and the remote servers. We analyze efficient mobile computation offloading mechanisms under both soft and hard latency constraints. Under a soft latency constraint, task completion may miss the deadline with a probability below a specified threshold. Under a hard deadline constraint, by contrast, the task execution result must be available at the mobile device before the deadline. To ensure that a task completes before its hard deadline, the hard-constraint approach requires concurrent local and cloud execution in certain circumstances.
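The soft-deadline criterion can be sketched by simulation: estimate the probability that an offloaded upload finishes within the deadline on a two-state Markov channel, and accept offloading if the miss probability stays below the threshold. The channel parameters, rates, and sizes below are illustrative assumptions only.

```python
import random

# Monte Carlo estimate of the deadline-miss probability over a two-state
# Markov channel (assumed parameters).
P_STAY = {"good": 0.9, "bad": 0.6}     # probability of remaining in the state
RATE = {"good": 3, "bad": 1}           # units uploaded per slot in each state

def miss_probability(task_size, deadline, trials=20000, seed=1):
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        state, sent = "good", 0
        for _ in range(deadline):
            sent += RATE[state]
            if rng.random() > P_STAY[state]:            # Markov transition
                state = "bad" if state == "good" else "good"
        misses += sent < task_size
    return misses / trials

print(round(miss_probability(task_size=20, deadline=10), 3))
```

Under a hard constraint, no nonzero miss probability is acceptable, which is why concurrent local execution is needed as a fallback whenever the channel alone cannot guarantee completion.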
Closed-form solutions are often obtained using general Markov processes. The GE (Gilbert-Elliott) model is a more efficient approach that demonstrates how the Markov chain's structure can be used to compute the best offload initiation (Hekmati et al., 2019a). The effectiveness of the algorithms is studied under various deadline constraints and offloading methods. In this thesis, six algorithms are assessed in various scenarios: 1) Single-user optimal (Zhang et al., 2013); 2) LARAC (Lagrangian Relaxation Based Aggregated Cost) (Zhang et al., 2014); 3) OnOpt (Online Optimal) (Hekmati et al., 2019a); 4) Water-Filling with Equilibrium (WF-Equ); 5) Water-Filling with Exponentiation (WF-Exp) (Teymoori et al., 2021); 6) MultiOPT (Multi-Decision Online Optimal). The experiments demonstrate that the studied algorithms perform dramatically differently in the same setting. The type of deadline constraint also affects how efficiently the algorithms use energy. The experiments also highlight the trade-off between computational complexity and mobile energy savings (Teymoori et al., 2021).
|
115 |
An Autonomic Framework Supporting Task Consolidation and Migration in the Cloud Environment
Zhu, Jiedan 13 September 2011 (has links)
No description available.
|
116 |
An Efficient Architecture For Networking Event-Based Fluorescent Imaging Analysis Processes
Bright, Mark D. 01 1900 (has links)
Complex user-end procedures for the execution of computationally expensive processes and tools on high performance computing platforms can hinder the scientific progress of researchers across many domains. In addition, such processes occasionally cannot be executed on user-end platforms either due to insufficient hardware resources or unacceptably long computing times. Such circumstances currently extend to highly sophisticated algorithms and tools utilized for analysis of fluorescent imaging data. Although an extensive collection of cloud-computing solutions exist allowing software developers to resolve these issues, such solutions often abstract both developers and integrators from the executing hardware particulars and can inadvertently incentivize non-ideal software design practices. The discussion herein consists of the theoretical design and real-world realization of an efficient architecture to enable direct multi-user parallel remote utilization of such research tools. Said networked scalable real-time architecture is multi-tier, extensible by design to a vast collection of application archetypes, and is not strictly limited to imaging analysis applications. Transport layer interfaces for packetized binary data transmission, asynchronous command issuance mechanisms, compression and decompression algorithm aggregation, and relational database management systems for inter-tier communication intermediation enable a robust, lightweight, and efficient architecture for networking and remotely interfacing with fluorescent imaging analysis processes. / M.S. / Collaboration amongst researchers within various technical domains who rely on information processing and analysis tools can be strengthened through the deployment of scientific computing infrastructure that enables their usage via a web interface. 
The architecture of such infrastructure is preferably efficient, lightweight, and simple while retaining potential future integration capabilities with additional research tools. This work presents the theoretical design and realization of an architecture for networking fluorescent imaging analysis processes so as to make them remotely usable within internal computer networks and across the world wide web.
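The "packetized binary data transmission" component above can be sketched as minimal length-prefixed framing, so a receiver on a stream transport can recover message boundaries. The 4-byte big-endian header is an assumed convention for illustration, not the thesis' actual wire format.

```python
import struct

# Length-prefixed framing sketch: each payload is preceded by a 4-byte
# big-endian length header (assumed convention).
def frame(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

def deframe(stream: bytes):
    """Split a concatenated byte stream back into the framed payloads."""
    payloads, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        payloads.append(stream[offset:offset + length])
        offset += length
    return payloads

wire = frame(b"cmd:analyze") + frame(b"img-chunk-0")
print(deframe(wire))  # [b'cmd:analyze', b'img-chunk-0']
```

Explicit framing like this is what lets the architecture multiplex commands and bulk imaging data over the same connection without ambiguity.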
|
117 |
Securing the Public Cloud: Host-Obscure Computing with Secure Enclaves
Cain, Chandler Lee 12 January 2021 (has links)
As the practice of renting remote computing resources from a cloud computing platform becomes increasingly popular, the security of such systems is a subject of continued scrutiny. This thesis explores the current state of cloud computing security along with critical components of the cloud computing model. It identifies the need to trust a third party with sensitive information as a substantial obstacle for cloud computing customers. It then proposes a new model, Host-Obscure Computing, for a cloud computing service using secure enclaves and encryption that allows a customer to execute code remotely without exposing sensitive information, including program flow control logic. It presents a proof of concept for a secure cloud computing service using confidential computing technology, cryptography, and an emulator that runs in a secure memory space. It then provides an analysis of its effectiveness at reducing data exposure and its performance impact. Finally, it analyzes this model's advantages and its potential impact on the cloud computing industry. / Master of Science / The use of public cloud computing services continues to rise as a solution to many of the problems associated with on-premises data centers. Customers who would otherwise move to the cloud have resisted this change for security reasons. This research investigates what these security barriers are. Then, it proposes a novel model for a cloud computing service, referred to as Host-Obscure Computing, that is designed to mitigate these issues. Specifically, it addresses the need of a customer to share their program code and working data with the cloud provider. It outlines the development of a prototype implementation of this model. It then presents an analysis of this new service model from both a performance and security perspective. Finally, it suggests how the adoption of a service model similar to Host-Obscure Computing could improve the state of the cloud computing industry.
|
118 |
Formation of the Cloud: History, Metaphor, and MaterialityCroker, Trevor D. 14 January 2020 (has links)
In this dissertation, I look at the history of cloud computing to demonstrate the entanglement of history, metaphor, and materiality. In telling this story, I argue that metaphors play a powerful role in how we imagine, construct, and maintain our technological futures. The cloud, as a metaphor in computing, works to simplify complexities in distributed networking infrastructures. The language and imagery of the cloud have been used as tools that help cloud providers shift public focus away from potentially important regulatory, environmental, and social questions while constructing a new computing marketplace. To address these topics, I contextualize the history of the cloud by looking back at the stories of utility computing (1960s-70s) and ubiquitous computing (1980s-90s). These visions provide an alternative narrative about the design and regulation of new technological systems.
Drawing upon these older metaphors of computing, I describe the early history of the cloud (1990-2008) in order to explore how this new vision of computing was imagined. I suggest that the metaphor of the cloud was not a historical inevitability. Rather, I argue that the social construction of metaphors in computing can play a significant role in how the public thinks about, develops, and uses new technologies. In this research, I explore how the metaphor of the cloud underplays the impact of emerging large-scale computing infrastructures while at the same time slowly transforming traditional ownership models in digital communications.
Throughout the dissertation, I focus on the role of materiality in shaping digital technologies. I look at how the development of the cloud is tied to the establishment of cloud data centers and the deployment of global submarine data cables. Furthermore, I look at the materiality of the cloud by examining its impact on a local community (Los Angeles, CA). Throughout this research, I argue that the metaphor of the cloud often hides deeper socio-technical complexities. Both the materials and metaphor of the cloud work to make the system invisible. By looking at the material impact of the cloud, I demonstrate how these larger economic, social, and political realities are entangled in the story and metaphor of the cloud. / Doctor of Philosophy / This dissertation tells the story of cloud computing by looking at the history of the cloud and then discussing the social and political implications of this history. I start by arguing that the cloud is connected to earlier visions of computing (specifically, utility computing and ubiquitous computing). By referencing these older histories, I argue that much of what we currently understand as cloud computing is actually connected to earlier debates and efforts to shape a computing future. Using the history of computing, I demonstrate the role that metaphor plays in the development of a technology.
Using these earlier histories, I explain how the term cloud computing was coined in the 1990s and how it eventually became a dominant vision of computing in the late 2000s. Much of the research addresses how the metaphor of the cloud is used, the initial reaction to the idea of the cloud, and how the creation of the cloud did (or did not) borrow from older visions of computing. This research looks at which people use the cloud, how the cloud is marketed to different groups, and the challenges of conceptualizing this new distributed computing network.
This dissertation gives particular weight to the materiality of the cloud. My research focuses on the cloud's impact on data centers and submarine communication data cables. Additionally, I look at the impact of the cloud on a local community (Los Angeles, CA). Throughout this research, I argue that the metaphor of the cloud often hides deeper complexities. By looking at the material impact of the cloud, I demonstrate how larger economic, social, and political realities are entangled in the story and metaphor of the cloud.
|
119 |
From e-government to cloud-government: challenges of Jordanian citizens’ acceptance for public servicesAlkhwaldi, Abeer F.A.H., Kamala, Mumtaz A., Qahwaji, Rami S.R. 10 May 2018 (has links)
Since the start of the third millennium, there has been much evidence that cloud technologies have become a strategic trend for many governments, not only in developed countries (e.g. the UK, Japan and the USA) but also in developing countries (e.g. Malaysia and countries in the Middle East region). These countries have launched cloud computing initiatives for greater standardization of IT resources, cost reduction and more efficient public services. Cloud-based e-government services are considered a high priority for government agencies in Jordan. Although they are evolving rapidly, government cloud services still face the adoption challenges of e-government initiatives (e.g. technological, human, social and financial aspects), which governments contemplating their implementation need to consider carefully. While e-government adoption from the citizens’ perspective has been extensively investigated using different theoretical models, these models have not paid adequate attention to security issues. This paper presents a pilot study investigating citizens’ perceptions of the extent to which these challenges inhibit the acceptance and use of cloud computing in the Jordanian public sector, and examines the effect of these challenges on citizens’ security perceptions. Based on the analysis of data collected through online surveys, some important challenges were identified. The results can help guide the successful acceptance of cloud-based e-government services in Jordan.
|
120 |
Design of robust, malleable arithmetic units using lookup tablesRaudies, Florian January 2014 (has links)
Thesis (M.Sc.Eng.) PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you. / Cloud computing demands reconfigurability on a sub-core basis to maximize performance per customer application and the overall utilization of hardware resources in a data center. We propose the design of arithmetic units (AUs) using look-up tables (LUTs), which can also function as cache units. We imagine such LUT-based implementations of AUs and caches as part of a malleable computing paradigm that allows reconfiguration of the architecture inside a core and across cores. Our envisioned malleable computing can configure an LUT to behave as an AU or a cache at run time, depending on the customers, their application requirements, and the computational demand in a data center.
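The central idea, an arithmetic unit realized entirely as a lookup table so the same storage could serve as cache, can be sketched in a few lines. This is an illustrative toy (4-bit operands, software rather than hardware); the table layout and names are assumptions for illustration, not the thesis's design.

```python
# Hypothetical sketch of a LUT-based adder: the full truth table for 4-bit
# addition is precomputed and stored, so an "add" becomes a single table
# read, the same kind of read a cache bank would perform.

WIDTH = 4
MASK = (1 << WIDTH) - 1

# Index = (a << WIDTH) | b; value = a + b including the carry-out bit.
# 2^(2*WIDTH) = 256 entries of 5 bits each.
ADD_LUT = [(idx >> WIDTH) + (idx & MASK) for idx in range(1 << (2 * WIDTH))]

def lut_add(a: int, b: int) -> tuple[int, int]:
    """Return (sum, carry_out) of two 4-bit operands via one table lookup."""
    s = ADD_LUT[(a << WIDTH) | b]
    return s & MASK, s >> WIDTH
```

At one byte per entry, this 256-entry table occupies 0.25 kB, the same order as the 0.125 kB to 5 kB adder exchange rates reported below; a direct multiplier table grows much faster with operand width, which is one reason wider LUT-based units are interesting to cost out explicitly.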
To evaluate the scope for reconfigurability of LUTs, we determined the exchange rate between caches and AUs. This exchange rate tells us the cost of designing a LUT-based AU in kilobytes of cache. In this thesis, we provide exchange rates for LUT-based adder and multiplier designs. For our analysis, we use CACTI 6.5 to estimate the access time, area, and power of caches varying in size, number of banks, and set associativity, fitting the results with multinomial models. The delay time of these LUT-based designs, scaled using logical effort theory, is comparable to that of logic-gate-based AU designs. For LUT-based AUs we obtain delay times of 0.5 ns to 1.5 ns (2 GHz to 667 MHz) using the 45 nm Nangate open cell library. The cost of an adder ranges from 0.125 kB to 5 kB of cache; the cost of a multiplier ranges from 2.7 kB to 2.8 kB of cache. The area of these LUT-based designs is smaller than or equal to that of logic-gate-based adder and multiplier designs. Using RRAM technology, the area can be reduced by two orders of magnitude at the cost of a one-order-of-magnitude slowdown in delay time.
We also compared the robustness of our LUT-based adder and multiplier designs to that of equivalent logic-gate-based adder and multiplier designs in the presence of soft errors, using analytical models and simulations. We show that LUT-based designs are more resilient to soft errors when comparing the output error rates of AUs. Our analytical models can help in designing robust AUs by quantifying design patterns in terms of their robustness. / 2999-01-01
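The robustness comparison just described can be illustrated as a small Monte Carlo experiment. The fault model below (exactly one bit flip per operation, placed uniformly at random) and all names are assumptions made for illustration, not the analytical models or fault model used in the thesis; the sketch only shows the methodology of comparing output error rates between a table-based and a logic-based adder.

```python
# Toy soft-error experiment: per operation, inject one bit flip either into a
# random entry of a 4-bit adder's lookup table, or into a random internal
# carry of a ripple-carry adder, then compare output error rates.
# Illustrative only: a real comparison must weight strike probability by the
# physical area (cross-section) of storage cells versus logic nodes.
import random

WIDTH = 4
MASK = (1 << WIDTH) - 1

def ripple_add(a: int, b: int, flip_carry_at: int) -> int:
    """Ripple-carry adder with the carry into stage `flip_carry_at` flipped."""
    carry, s = 0, 0
    for i in range(WIDTH):
        if i == flip_carry_at:
            carry ^= 1  # injected soft error on an internal node
        ai, bi = (a >> i) & 1, (b >> i) & 1
        s |= (ai ^ bi ^ carry) << i          # full-adder sum bit
        carry = (ai & bi) | (carry & (ai ^ bi))  # full-adder carry-out
    return s & MASK

def trial(rng: random.Random) -> tuple[bool, bool]:
    a, b = rng.randrange(1 << WIDTH), rng.randrange(1 << WIDTH)
    truth = (a + b) & MASK
    # LUT fault: the strike hits one of the 256 stored entries; the output is
    # only corrupted if the struck entry is the one actually read out.
    struck = rng.randrange(1 << (2 * WIDTH))
    if struck == ((a << WIDTH) | b):
        lut_out = truth ^ (1 << rng.randrange(WIDTH))
    else:
        lut_out = truth
    # Logic fault: a flipped internal carry always perturbs the sum.
    logic_out = ripple_add(a, b, rng.randrange(WIDTH))
    return lut_out != truth, logic_out != truth

rng = random.Random(0)
results = [trial(rng) for _ in range(10_000)]
lut_rate = sum(r[0] for r in results) / len(results)
logic_rate = sum(r[1] for r in results) / len(results)
```

Under this (deliberately simple) model the LUT masks most strikes because only the accessed entry matters, while a flipped carry always corrupts the sum, which gives a flavor of why output error rates can favor table-based designs.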
|