21

Performance Engineering of Serverless Applications and Platforms

Eismann, Simon January 2023 (has links) (PDF)
Serverless computing is an emerging cloud computing paradigm that offers a high-level application programming model with utilization-based billing. It enables the deployment of cloud applications without managing the underlying resources or worrying about other operational aspects. Function-as-a-Service (FaaS) platforms implement serverless computing by allowing developers to execute code on demand in response to events, with continuous scaling and sub-second metered billing for only the time used. Cloud providers have further introduced many fully managed services for databases, messaging buses, and storage that also implement a serverless computing model. Applications composed of these fully managed services and FaaS functions are quickly gaining popularity in both industry and academia. However, due to this rapid adoption, much information surrounding serverless computing is inconsistent and often outdated as the serverless paradigm evolves. This makes the performance engineering of serverless applications and platforms challenging, as there are many open questions, such as: What types of applications is serverless computing well suited for, and what are its limitations? How should serverless applications be designed, configured, and implemented? Which design decisions impact the performance properties of serverless platforms, and how can they be optimized? These and many other open questions can be traced back to an inconsistent understanding of serverless applications and platforms, which could present a major roadblock to the adoption of serverless computing.
In this thesis, we address the lack of performance knowledge surrounding serverless applications and platforms from multiple angles: we conduct empirical studies to further the understanding of serverless applications and platforms, we introduce automated optimization methods that simplify the operation of serverless applications, and we enable the analysis of design tradeoffs of serverless platforms by extending white-box performance modeling.
22

Channel and Server Scheduling for Energy-Fair Mobile Computation Offloading

Moscardini, Jonathan A. January 2016 (has links)
This thesis investigates energy fairness in an environment where multiple mobile cloud computing users attempt to use both a shared channel and a shared server to offload jobs to remote computation resources, a technique known as mobile computation offloading. Offloading is done to reduce energy consumption at the mobile device, and previous work has demonstrated it to be highly effective. However, due to constraints at the shared channel and server, insufficient resources are available for all mobile devices to offload all generated jobs. In addition, certain mobile devices are at a disadvantage relative to others in their achievable offloading rate. Hence, the shared resources are not necessarily shared fairly, and an explicit mechanism is needed to make them so. A method for improving offloading fairness in terms of total energy is derived, in which the state of the queue of jobs waiting for offloading is evaluated online, at each job arrival, to inform an offloading decision for that newest arrival; no prior state or future predictions are used to determine the optimal decision. The algorithm is evaluated by comparing it on several criteria to standard scheduling methods, as well as to an optimal offline (i.e., non-causal) schedule derived from the solution of a min-max energy integer linear program. Simulation results demonstrate the improvements in energy fairness achieved. / Thesis / Master of Applied Science (MASc)
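The per-arrival decision rule described above can be illustrated with a toy greedy min-max heuristic (a sketch only, not the thesis's queue-state algorithm or its ILP benchmark; all names and energy costs below are hypothetical):

```python
def decide(cum_energy, device, e_local, e_offload, slots_free):
    """Return True to offload the job that just arrived at `device`.

    cum_energy:  dict of total energy spent so far, per device
    e_local:     energy cost of running this job on the device
    e_offload:   energy cost of transmitting it (usually smaller)
    slots_free:  remaining shared channel/server capacity

    Greedy min-max rule: grant the scarce offload slot only when the
    arriving device would otherwise become the worst-off (maximum
    cumulative energy) device, which is the quantity a min-max
    fairness objective tries to keep small.
    """
    if slots_free <= 0 or e_offload >= e_local:
        return False
    projected = dict(cum_energy)
    projected[device] += e_local  # energy if forced to run locally
    return projected[device] == max(projected.values())

# Device "a" has already burned more energy than "b", so it gets
# priority for the single free offload slot.
cum = {"a": 5.0, "b": 2.0}
grant_a = decide(cum, "a", 1.0, 0.2, 1)  # True
grant_b = decide(cum, "b", 1.0, 0.2, 1)  # False
```

The real scheduler evaluates the whole waiting queue at each arrival; this sketch only captures the fairness intuition of favoring the most energy-burdened device.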
23

Data Parallel Application Development and Performance with Azure

Zhang, Dean 08 September 2011 (has links)
No description available.
24

Optimal Mobile Computation Offloading With Hard Task Deadlines

Hekmati, Arvin January 2019 (has links)
This thesis considers mobile computation offloading where task completion times are subject to hard deadline constraints. Hard deadlines are difficult to meet in conventional computation offloading due to the stochastic nature of the wireless channels involved. Rather than using binary offload decisions, we permit concurrent remote and local job execution when it is needed to ensure task completion deadlines. The thesis addresses this problem for homogeneous Markovian wireless channels. Two online energy-optimal computation offloading algorithms, OnOpt and MultiOpt, are proposed. OnOpt uploads the job to the server continuously and MultiOpt uploads the job in separate parts, each of which requires a separate offload initiation decision. The energy optimality of the algorithms is shown by constructing a time-dilated absorbing Markov process and applying dynamic programming. Closed form results are derived for general Markovian channels. The Gilbert-Elliott channel model is used to show how a particular Markov chain structure can be exploited to compute optimal offload initiation times more efficiently. The performance of the proposed algorithms is compared to three others, namely, Immediate Offloading, Channel Threshold, and Local Execution. Performance results show that the proposed algorithms can significantly improve mobile device energy consumption compared to the other approaches while guaranteeing hard task execution deadlines. / Thesis / Master of Applied Science (MASc)
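The Gilbert-Elliott channel mentioned above is a two-state Markov chain alternating between a good and a bad state. A minimal simulation sketch (transition probabilities are illustrative values, not parameters from the thesis):

```python
import random

def gilbert_elliott(p_gb, p_bg, steps, seed=0):
    """Simulate a two-state Gilbert-Elliott channel.

    State 'G' (good) moves to 'B' (bad) with probability p_gb;
    'B' returns to 'G' with probability p_bg.
    Returns the sequence of visited states.
    """
    rng = random.Random(seed)
    state, trace = "G", []
    for _ in range(steps):
        trace.append(state)
        if state == "G":
            state = "B" if rng.random() < p_gb else "G"
        else:
            state = "G" if rng.random() < p_bg else "B"
    return trace

def stationary(p_gb, p_bg):
    """Closed-form stationary distribution of the two-state chain."""
    pi_g = p_bg / (p_gb + p_bg)
    return pi_g, 1.0 - pi_g

pi_g, pi_b = stationary(0.1, 0.3)  # channel is good 75% of the time
```

Knowing the chain's structure in closed form is what lets offload initiation times be computed efficiently, as the abstract notes.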
25

Cloud computing governance and management from the consumer point of view

Karkošková, Soňa January 2017 (has links)
Cloud computing brings widely recognized benefits as well as new challenges and risks, resulting mainly from the fact that the cloud service provider is an external third party delivering public cloud services in a multi-tenant model. At present, widely accepted IT governance frameworks lack a focus on cloud computing governance and do not fully address the requirements of cloud computing from the cloud consumer's viewpoint. Given the absence of a comprehensive cloud computing governance and management framework, this dissertation focuses on specific aspects of cloud service governance and management from the consumer perspective. Its main aim is the design of a methodological framework for cloud computing governance and management from the consumer point of view. The cloud consumer is considered to be a medium- or large-sized enterprise that uses services in the public cloud computing model, offered and delivered by a cloud service provider. The theoretical part identifies the main concepts of IT governance, IT management, and cloud computing (chapter 2). The analytical part reviews the literature dealing with the specifics of cloud service utilization and its impact on IT governance and IT management, cloud computing governance, and cloud computing management (chapter 3). Further, existing IT governance and IT management frameworks (SOA Governance, COBIT, ITIL, and MBI) are analysed and evaluated with respect to the use of cloud services from the cloud consumer perspective (chapter 4). The research follows the Design Science Research Methodology, with the intention of designing and evaluating the methodological framework as an artifact. The main part of the dissertation proposes the Cloud computing governance and management framework, based on SOA Governance, COBIT 5, and ITIL 2011 (chapters 5, 6, and 7).
The proposed framework was verified using the case study method (chapter 8); the objective of the case study was to evaluate and verify the framework in a real business environment. The main contribution of this dissertation is twofold: it applies existing knowledge, approaches, and methodologies from IT governance and IT management to design the Cloud computing governance and management framework, and it extends the Management of Business Informatics (MBI) framework with a set of new tasks containing procedures and recommendations for the adoption and use of cloud computing services.
26

User Experience-Based Provisioning Services in Vehicular Clouds

Aloqaily, Moayad January 2016 (has links)
Today, the increasing number of Internet of Things applications, together with advances in wireless communication, information and communication technology, and mobile cloud computing, allows users to access a wide range of resources while mobile. Vehicular clouds are considered key elements of today's intelligent transportation systems. They are outfitted with equipment to enable applications and services for vehicle drivers, surrounding vehicles, pedestrians, and third parties. As vehicular cloud computing has become more popular, owing to its ability to improve driver and vehicle safety and to provide provisioning services and applications, researchers and industry have taken a growing interest in the design and development of vehicular networks for emerging applications. Though vehicle drivers can now access a variety of on-demand resources en route via vehicular network service providers, the development of vehicular cloud provisioning services faces many challenges. In this dissertation, we examine the most critical provisioning service challenges drivers face: cost, privacy, and latency. To date, very little research has addressed these issues from the driver's perspective. Privacy and service latency are emerging challenges for drivers, as are service costs, a relatively new financial consideration. Motivated by the Quality of Experience paradigm and the concept of the Trusted Third Party, we identify and investigate these challenges and examine the limitations and requirements of a vehicular environment. We found no research that addressed these challenges simultaneously or investigated their effect on one another. We have developed a Quality of Experience framework that provides scalability and reduces congestion overhead for users.
Furthermore, we propose two theory-based frameworks to manage on-demand service provision in vehicular clouds: Auction-driven Multi-objective Provisioning and a Multi-agent/Multi-objective Interaction Game System. We present different approaches to these and show, through analytical and simulation results, that the proposed schemes help drivers minimize costs and latency while maximizing privacy.
27

Security audit compliance for cloud computing

Doelitzscher, Frank January 2014 (has links)
Cloud computing has grown rapidly over the past three years and is widely popular across today's IT landscape. In a survey of 250 IT decision makers at UK companies, respondents reported that they already use cloud services for 61% of their systems. Cloud vendors promise "infinite scalability and resources" combined with on-demand access from everywhere. This lets cloud users quickly forget that there is still a real IT infrastructure behind a cloud. Due to virtualization and multi-tenancy, the complexity of these infrastructures is even higher than in traditional data centres, while it is hidden from users and outside their control. This makes the management of service provisioning, monitoring, backup, disaster recovery, and especially security more complicated. Because of this, and a number of severe security incidents at commercial providers in recent years, there is a growing lack of trust in cloud infrastructures. This thesis presents research on cloud security challenges and how they can be addressed by cloud security audits. Security requirements of an Infrastructure as a Service (IaaS) cloud are identified, and it is shown how they differ from those of traditional data centres. To address cloud-specific security challenges, a new cloud audit criteria catalogue is developed. Subsequently, a novel cloud security audit system is developed, providing a flexible audit architecture for frequently changing cloud infrastructures. It is based on lightweight software agents, which monitor key events in a cloud and trigger specific, targeted security audits on demand, from both the customer and the cloud provider perspective. To enable these concurrent cloud audits, a Cloud Audit Policy Language is developed and integrated into the audit architecture. Furthermore, to address advanced cloud-specific security challenges, an anomaly detection system based on machine learning is developed.
By creating cloud usage profiles, a continuous evaluation of events (customer-specific as well as spanning multiple customers) helps to detect anomalies within an IaaS cloud. The feasibility of the research is shown in a prototype, and its functionality is presented in three demonstrations. The results show that the developed cloud audit architecture is able to mitigate cloud-specific security challenges.
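The agent-based, event-triggered audit idea can be sketched as a simple mapping from cloud events to targeted audits (event and audit names here are hypothetical, and this is not the thesis's actual Cloud Audit Policy Language):

```python
# Hypothetical policy: which targeted audits each cloud event triggers.
POLICY = {
    "vm.start": ["check_image_integrity", "scan_open_ports"],
    "vm.migrate": ["verify_target_host_compliance"],
    "user.login_failed": ["audit_access_logs"],
}

def on_event(event, run_audit):
    """Called by a monitoring agent when a key cloud event occurs.

    Looks up the event in the policy and triggers each mapped audit,
    so audits run on demand instead of on a fixed schedule.
    """
    for audit in POLICY.get(event, []):
        run_audit(audit)

triggered = []
on_event("vm.start", triggered.append)
# triggered now holds the two audits mapped to "vm.start"
```

The point of the on-demand design is that audit effort follows infrastructure change events rather than a calendar, which suits frequently changing cloud deployments.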
28

Semi-supervised and Self-evolving Learning Algorithms with Application to Anomaly Detection in Cloud Computing

Pannu, Husanbir Singh 12 1900 (has links)
Semi-supervised learning (SSL) is among the most practical approaches to classification in machine learning. It resembles the way humans learn and thus has wide application in text/image classification, bioinformatics, artificial intelligence, robotics, etc. Labeled data is hard to obtain in real-life experiments and may require human experts with experimental equipment to mark the labels, which can be slow and expensive. Unlabeled data, however, is easily available in the form of web pages, data logs, images, audio, video files, and DNA/RNA sequences. SSL uses large amounts of unlabeled data together with a few labeled examples to build better classifying functions, achieving higher accuracy with less human effort. It is thus of great empirical and theoretical interest. We contribute two SSL algorithms, (i) adaptive anomaly detection (AAD) and (ii) hybrid anomaly detection (HAD), which are self-evolving and very efficient at detecting anomalies in large-scale, complex data distributions. Our algorithms are capable of modifying an existing classifier by both retiring old data and adding new data. This characteristic enables the proposed algorithms to handle massive, streaming datasets where other existing algorithms fail or run out of memory. As an application of semi-supervised anomaly detection, and for experimental illustration, we implemented prototypes of the AAD and HAD systems and conducted experiments in an on-campus cloud computing environment. Experimental results show that the detection accuracy of both algorithms improves as they evolve; they achieve 92.1% detection sensitivity and 83.8% detection specificity, which makes them well suited for anomaly detection in large, streaming datasets. We compared our algorithms with two popular SSL methods: (i) subspace regularization and (ii) an ensemble of Bayesian sub-models and decision tree classifiers.
Our algorithms are easy to implement and significantly better than these two methods in terms of space and time complexity and accuracy for semi-supervised anomaly detection.
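A minimal sketch of the self-evolving mechanism (retire old data, add new) using a sliding-window z-score detector; this illustrates the general idea only, not the AAD/HAD algorithms themselves:

```python
from collections import deque
import statistics

class EvolvingDetector:
    """Toy self-evolving anomaly detector (illustrative only).

    Keeps a sliding window of recent normal observations; a point more
    than `k` standard deviations from the window mean is flagged.
    Normal points are appended and, once the window is full, the
    oldest are retired, so the model adapts as the distribution drifts.
    """
    def __init__(self, window=100, k=3.0):
        self.data = deque(maxlen=window)  # maxlen retires old data
        self.k = k

    def observe(self, x):
        if len(self.data) < 10:           # warm-up on the first points
            self.data.append(x)
            return False
        mu = statistics.fmean(self.data)
        sigma = statistics.pstdev(self.data) or 1e-9
        anomalous = abs(x - mu) > self.k * sigma
        if not anomalous:                 # evolve: add new normal data
            self.data.append(x)
        return anomalous

det = EvolvingDetector()
for v in [10.0, 10.5, 9.8, 10.2, 10.1, 9.9, 10.3, 10.0, 9.7, 10.4]:
    det.observe(v)                        # warm-up window
```

After warm-up, an in-distribution value such as 10.1 passes while a far outlier such as 50.0 is flagged; the bounded window is what keeps memory constant on streaming data.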
29

An Anonymous and Distributed Approach to Improving Privacy in Cloud Computing: An Analysis of Privacy-Preserving Tools & Applications

Peters, Emmanuel Sean January 2017 (has links)
The seemingly limitless computing resources and power of the cloud have made it ubiquitous. However, despite its utility and widespread adoption in many everyday applications, the cloud still suffers from several trust and privacy concerns. Many of these concerns are validated by the endless reports of cyber-attacks that compromise the private information of large numbers of users. A review of the literature reveals the following challenges with privacy in cloud computing: (1) Although there is a wealth of approaches that attempt to prevent cyber-attacks, these approaches ignore the reality that system compromises are inevitable; every system can and will be compromised. (2) There are a handful of metrics for the security of systems; however, the current literature lacks privacy metrics that can be used to compare privacy across various systems. (3) One of the difficulties with addressing privacy in cloud computing is the inevitable trade-off between privacy and utility; many privacy-preserving techniques sacrifice more utility than needed in an attempt to achieve the unattainable: perfect privacy. In this dissertation we present our contributions to the aforementioned privacy challenges. We base our approach on the assumption that every system can and will be compromised, and we focus on mitigating the adverse effects of a cyber-attack by limiting the amount of information that is compromised during an attack. Our contribution is twofold and includes (1) a set of tools for designing privacy-preserving applications and measuring privacy and (2) two applications designed using those tools. We first describe the three tools used to design the two applications: (1) The processing graph and its collection of creation protocols.
The processing graph is the mechanism we use to partition data across multiple units of cloud-based storage and processing; it also manages the flow of processed information between components and is customizable to the specific needs of the user. (2) A privacy metric based on information theory, which we use to compare the amount of information compromised when centralized and distributed systems are attacked. (3) An extension of the double-locked box protocol to the cloud environment; the protocol facilitates anonymous communication between two entities via an intermediary. We then present two applications that use these tools to improve the privacy of storing and processing a user's data: (1) an anonymous tax preparation application and (2) a distributed insurance clearinghouse with a distributed electronic health record. We show how the creation protocols are used to establish processing graphs that privately complete a user's tax form and process a patient's insurance claim. We also highlight future work in medical research made possible by these contributions; our approach allows medical research to be conducted on data without risking the identity of patients. For each application we perform a privacy analysis using the privacy metric; in these analyses, we compare both applications to their centralized counterparts and show the reduction in the amount of information revealed during an attack. Based on our analysis, the anonymous tax preparation application reduces the amount of compromised information in the event of an attack by up to 64%. Similarly, the distributed insurance clearinghouse reduces the amount of patient data revealed during an attack by up to 79%.
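The flavor of an information-theoretic privacy metric can be sketched as follows (field names, cardinalities, the partition layout, and the uniform-distribution assumption are all hypothetical; the dissertation's actual metric may differ):

```python
import math

def field_entropy(cardinality):
    """Bits revealed by a field, assuming values are uniformly
    distributed over `cardinality` possibilities."""
    return math.log2(cardinality)

# Hypothetical record: field name -> number of possible values.
fields = {"name": 2**20, "ssn": 10**9, "zip": 10**5, "income": 10**7}

# Centralized storage: one compromised node reveals every field.
centralized = sum(field_entropy(c) for c in fields.values())

# Distributed layout: each node holds a disjoint subset of fields,
# so one compromised node reveals only its own share. We measure the
# worst case over nodes.
partitions = [["name"], ["ssn"], ["zip", "income"]]
worst_node = max(
    sum(field_entropy(fields[f]) for f in part) for part in partitions
)
reduction = 1 - worst_node / centralized  # roughly 0.56 here
```

Partitioning the record turns a total compromise into a partial one, which is exactly the kind of reduction (64% and 79% in the abstract) the metric is meant to quantify.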
30

Improving energy efficiency of virtualized datacenters

Nitu, Vlad-Tiberiu 28 September 2018 (has links) (PDF)
Nowadays, many organizations increasingly adopt the cloud computing approach. More specifically, as customers, these organizations outsource the management of their physical infrastructure to data centers (or cloud computing platforms). Energy consumption is a primary concern for datacenter (DC) management. Its cost represents about 80% of the total cost of ownership, and it is estimated that in 2020 US DCs alone will spend about $13 billion on energy bills. Generally, datacenter servers are built to achieve high energy efficiency at high utilization; thus, for a low cost per computation, all datacenter servers should push utilization as high as possible. To fight the historically low utilization, cloud computing adopted server virtualization, which allows a physical server to execute multiple virtual servers (called virtual machines) in an isolated way. With virtualization, the cloud provider can pack (consolidate) the entire set of virtual machines (VMs) onto a small set of physical servers and thereby reduce the number of active servers. Even so, datacenter servers rarely reach utilizations higher than 50%, which means that they operate with sets of long-term unused resources (called 'holes'). My first contribution is a cloud management system that dynamically splits and merges VMs so that they can better fill the holes. This solution is effective only for elastic applications, i.e., applications that can be executed and reconfigured over an arbitrary number of VMs. However, datacenter resource fragmentation stems from a more fundamental problem: over time, cloud applications demand more and more memory, while physical servers provide more and more CPU. In today's datacenters the two resources are strongly coupled, since they are bound to a physical server. My second contribution is a practical way to decouple the CPU-memory tuple that can simply be applied to a commodity server.
Thereby, the two resources can vary independently, depending on demand. My third and fourth contributions present practical systems that exploit the second. The underutilization observed on physical servers also holds for virtual machines: it has been shown that VMs consume only a small fraction of their allocated resources, because cloud customers are not able to correctly estimate the resource amounts their applications need. My third contribution is a system that estimates the memory consumption (i.e., the working set size) of a VM with low overhead and high accuracy. Thereby, we can consolidate VMs based on their working set size rather than their booked memory. The drawback of this approach is the risk of memory starvation: if one or more VMs sharply increase their memory demand, the physical server may run out of memory. This is undesirable, because the cloud platform is then unable to provide the client with the booked memory. My fourth contribution is a system that allows a VM to use remote memory provided by a different rack server, so that in the case of a peak memory demand the VM can allocate memory on a remote physical server.
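The gain from consolidating on working set size rather than booked memory can be illustrated with a simple first-fit-decreasing packing (all numbers are invented for illustration; the thesis's consolidation policy and starvation handling are more involved):

```python
def first_fit_decreasing(demands, capacity):
    """Pack VM memory demands (GiB) onto servers of the given capacity
    using first-fit decreasing; returns the per-server loads."""
    servers = []
    for d in sorted(demands, reverse=True):
        for i, load in enumerate(servers):
            if load + d <= capacity:   # fits on an existing server
                servers[i] += d
                break
        else:                          # no fit: open a new server
            servers.append(d)
    return servers

booked = [16, 16, 16, 16, 16, 16]      # memory reserved per VM
working_set = [4, 6, 3, 5, 2, 4]       # memory actually touched

by_booking = first_fit_decreasing(booked, 32)          # 3 servers
by_working_set = first_fit_decreasing(working_set, 32)  # 1 server
```

Packing by working set cuts the active-server count from three to one in this toy case, which is the consolidation win the abstract describes; the remote-memory mechanism (fourth contribution) is what absorbs the starvation risk this aggressive packing creates.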
