  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
201

Measurement, Modeling, and Emulation of Power Consumption of Distributed Systems / Messung, Modellierung und Emulation des Stromverbrauchs von verteilten Systemen

Schmitt, Norbert January 2022
Today’s cloud data centers consume an enormous amount of energy, and their energy consumption will continue to rise. An estimate from 2012 found that data centers draw about 30 billion watts of power, corresponding to roughly 263 TWh of energy per year, and this is projected to rise to 1,929 TWh by 2030. The projected rise in energy demand is fueled by the growing number of services deployed in the cloud: 50% of enterprise workloads have already been migrated to the cloud over the last decade. Additionally, an increasing number of devices use the cloud to provide their functionality, enabling data centers to grow further; estimates say more than 75 billion IoT devices will be in use by 2025. The growing energy demand also increases CO2 emissions. Assuming a rather optimistic CO2 intensity of 200 g CO2 per kWh yields close to 227 billion tons of CO2, more than the emissions of all energy-producing power plants in Germany in 2020. Data centers consume energy because they respond to service requests that are fulfilled through computing resources. Hence, it is not the users and devices that consume the energy in the data center but the software that controls the hardware. While the hardware physically consumes the energy, it is not always responsible for wasting it: the software itself plays a vital role in reducing the energy consumption and CO2 emissions of data centers. The scenario of our thesis is, therefore, focused on software development. Nevertheless, we must first show developers that software contributes to energy consumption by providing evidence of its influence. The second step is to provide methods to assess an application’s power consumption during different phases of the development process, supporting modern DevOps and agile development methods. We therefore need an automatic selection of system-level energy-consumption models that can accommodate rapid changes in the source code, and application-level models that allow developers to locate power-consuming software parts for constant improvement. Afterward, we need emulation to assess energy efficiency before the actual deployment.
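The headline figures in the abstract above can be sanity-checked with a few lines of arithmetic (our own illustration, not part of the thesis): a constant draw of 30 billion watts over one year does indeed come out at roughly 263 TWh.

```python
# Convert a constant power draw to annual energy consumption.
POWER_W = 30e9          # 30 billion watts, the 2012 estimate
HOURS_PER_YEAR = 8766   # average year, including leap years

energy_twh = POWER_W * HOURS_PER_YEAR / 1e12  # Wh -> TWh
print(f"{energy_twh:.0f} TWh per year")       # ~263 TWh
```

This matches the ~263 TWh/year cited in the abstract.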
202

Factors limiting adoption of new technology : a study of drawbacks affecting transition from on-premise systems to cloud computing / Begränsande faktorer vid införande av ny teknologi : en studie av aspekter som hindrar övergången från lokala system till molntjänster

KILSTRÖM, THERÉSE January 2016
Cloud computing has grown from a business concept into one of the fastest growing segments of the modern ICT industry. Cloud computing addresses many issues raised by globalization, such as the ever faster pace of growth, shorter product life cycles, increased complexity of systems, and higher investment needs. It is penetrating all sectors of business applications and has influenced the whole IT industry. The business model has grown into an alternative to traditional on-premise systems, where the traditional environment, applications, and additional IT infrastructure are maintained in-house within the organization. However, organizations are still reluctant to deploy their business in the cloud. There are many concerns regarding cloud computing services, and despite all its advantages, cloud adoption is still very low across the organizational landscape. Hence, this master thesis aims to investigate the drawbacks of a transition from an on-premise system to a cloud computing service and how these relate to factors that influence the adoption decision. Furthermore, this study investigates how cloud service providers can develop a proactive approach to managing the main drawbacks of cloud adoption. To fulfill the aim of the study, empirical research was carried out in the form of conducted interviews. The results of the study identified security and perceived loss of control as the main drawbacks in the transition from an on-premise system to a cloud computing service. Since these findings can be described as foremost technological and attitudinal, the thesis offers practitioners implications: communicating with and educating customers, and adhering to industry standards and certifications, are important factors to address. Lastly, this thesis identified a lack of understanding of cloud computing as a result of poor information, indicating a need for further research in this area.
203

Evaluation of “Serverless” Application Programming Model : How and when to start Serverless

Grumuldis, Algirdas January 2019
Serverless is a fascinating trend in modern software development which consists of pay-as-you-go, autoscaling services. The promised reduction in operational and development costs attracts not only startups but also enterprise clients, despite serverless being a relatively fresh field where new patterns and services continue to emerge. Serverless started as independent services solving specific problems (highly scalable storage and computing), and it has now become a paradigm shift in how systems are built. This thesis addressed the questions of when and how to start with serverless by reviewing the available literature, conducting interviews with IT professionals, analyzing available tools, identifying limitations of serverless architecture, and providing a checklist for when serverless is applicable. The focus was on the AWS serverless stack, but the main findings are generic and hold for all serverless providers: serverless delivers what it promises; however, the devil is in the detail. Providers are continuously working to resolve limitations or to build new services as solutions in order to make serverless the next phase of cloud evolution.
204

Cascading permissions policy model for token-based access control in the web of things

Amir, Mohammad, Pillai, Prashant, Hu, Yim Fun January 2014
The merger of the Internet of Things (IoT) with cloud computing has given birth to a Web of Things (WoT) which hosts heterogeneous and rapidly varying data. Traditional access control mechanisms such as role-based access schemes are no longer suitable for modelling access control on such a large and dynamic scale, as the actors may also change all the time. For such a dynamic mix of applications, data and actors, a more distributed and flexible model is required. Token-based access control is one such scheme which can easily model and comfortably handle interactions with big data in the cloud and enable provisioning of access at fine levels of granularity. However, simple token access models quickly become hard to manage in the face of a rapidly growing repository. This paper proposes a novel token access model based on a cascading permissions policy model which can easily control interactivity with big data without becoming a menace to manage and administer.
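The cascading idea described above can be sketched in a few lines (the class, paths, and data layout are our own illustration, not the paper's implementation): a token grants permissions at nodes of a resource hierarchy, and a grant on a parent path cascades to all descendants unless a more specific entry overrides it.

```python
class Token:
    def __init__(self, grants):
        # grants: dict mapping a resource path -> set of allowed actions
        self.grants = grants

    def allowed(self, path, action):
        # Walk from the most specific prefix to the least specific one;
        # the deepest matching grant wins (this is the "cascade").
        parts = path.strip("/").split("/")
        for depth in range(len(parts), 0, -1):
            prefix = "/" + "/".join(parts[:depth])
            if prefix in self.grants:
                return action in self.grants[prefix]
        return False

token = Token({
    "/sensors": {"read"},                     # read cascades to all sensors
    "/sensors/home/lock": {"read", "write"},  # deeper override adds write
})

print(token.allowed("/sensors/garden/temp", "read"))   # True (cascaded)
print(token.allowed("/sensors/garden/temp", "write"))  # False
print(token.allowed("/sensors/home/lock", "write"))    # True (override)
```

The appeal for a rapidly growing WoT repository is that one grant on a parent path covers an arbitrary number of descendant resources, so the policy store grows with the number of exceptions rather than the number of resources.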
205

Evaluating energy-efficient cloud radio access networks for 5G

Sigwele, Tshiamo, Alam, Atm S., Pillai, Prashant, Hu, Yim Fun 04 February 2016
Next-generation cellular networks such as fifth-generation (5G) networks will experience tremendous growth in traffic. To accommodate such traffic demand, the network capacity must be increased, which eventually requires the deployment of more base stations (BSs). Nevertheless, BSs are very expensive and consume a significant amount of energy. Meanwhile, the cloud radio access network (C-RAN) has been proposed as an energy-efficient architecture that leverages cloud computing technology, where baseband processing is performed in the cloud, i.e., the computing servers or baseband processing units (BBUs) are located in the cloud. With such an arrangement, more energy-saving gains can be achieved by reducing the number of BBUs used. This paper proposes a bin packing scheme with three variants, First-fit (FF), First-fit decreasing (FFD) and Next-fit (NF), for minimizing energy consumption in 5G C-RAN. The number of BBUs is reduced by matching the right amount of baseband computing load with the traffic load. In the proposed scheme, BS traffic items, which are mapped into processing requirements, are packed into computing servers, called bins, such that the number of bins used is minimized; idle servers can then be switched off to save energy. Simulation results demonstrate that the proposed bin packing scheme achieves enhanced energy performance compared to the existing distributed BS architecture.
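The three bin-packing variants named in the abstract are classic heuristics, and a minimal sketch makes the difference between them concrete (the capacities and loads below are invented; the paper's actual load model maps BS traffic to processing requirements): baseband loads are packed into servers ("bins") of fixed capacity so that fewer servers stay powered on.

```python
def first_fit(loads, capacity):
    bins = []  # each entry is the remaining capacity of one server
    for load in loads:
        for i, free in enumerate(bins):
            if load <= free:     # place in the first server that fits
                bins[i] -= load
                break
        else:
            bins.append(capacity - load)  # open a new server
    return len(bins)

def first_fit_decreasing(loads, capacity):
    # Packing large items first usually needs fewer bins.
    return first_fit(sorted(loads, reverse=True), capacity)

def next_fit(loads, capacity):
    bins, free = 1, capacity
    for load in loads:
        if load > free:          # only the current server is considered
            bins, free = bins + 1, capacity
        free -= load
    return bins

loads = [4, 8, 1, 4, 2, 1]       # made-up baseband processing demands
print(first_fit(loads, 10))             # 2 servers
print(first_fit_decreasing(loads, 10))  # 2 servers
print(next_fit(loads, 10))              # 3 servers
```

Next-fit never looks back at partially filled servers, which is why it needs an extra bin here; in the paper's setting, every bin saved is an idle server that can be switched off.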
206

Elastic Resource Management in Cloud Computing Platforms

Sharma, Upendra 01 May 2013
Large-scale enterprise applications are known to experience dynamic workloads; provisioning the correct capacity for these applications remains an important and challenging problem. Predicting highly variable fluctuations in workload, or the peak workload, is difficult; erroneous predictions often lead to under-utilized systems or, in some situations, cause temporary outages of an otherwise well-provisioned website. Consequently, rather than provisioning server capacity to handle infrequent peak workloads, an alternative approach of dynamically provisioning capacity on the fly in response to workload fluctuations has become popular. Cloud platforms are particularly suited for such applications due to their ability to provision capacity when needed and to charge for usage on a pay-per-use basis. Cloud environments enable elastic provisioning by providing a variety of hardware configurations as well as mechanisms to add or remove server capacity. The first part of this thesis presents Kingfisher, a cost-aware system that provides a generalized provisioning framework for supporting elasticity in the cloud by (i) leveraging multiple mechanisms to reduce the time to transition to new configurations, and (ii) optimizing the selection of a virtual server configuration to minimize cost. The majority of these enterprise applications, deployed as web applications, are distributed or replicated with a multi-tier architecture. SLAs for such applications are often expressed as a high percentile of a performance metric, e.g., the 99th percentile of end-to-end response time is less than 1 second. In the second part of this thesis I present a model-driven technique, targeted at cloud platforms, which provisions a multi-tier application for such an SLA. Enterprises critically depend on these applications and often own large IT infrastructures to support their regular operation. However, provisioning for a peak load or for a high percentile of response time can be prohibitively expensive. Thus there is a need for a hybrid cloud model, where the enterprise uses its own private resources for the majority of its computing but "bursts" into the cloud when local resources are insufficient. I discuss a new system, namely Seagull, which performs dynamic provisioning over a hybrid cloud model by enabling cloud bursting. Finally, I describe a methodology to model the configuration patterns (i.e., deployment topologies) of the different control-plane services of a cloud management system itself. I present a generic methodology, based on empirical profiling, which provides the initial deployment configuration of a control-plane service, and a mechanism which iteratively adjusts the configuration to avoid violating the control plane's Service Level Objective (SLO).
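The cost-aware selection step described above can be illustrated with a toy optimizer (the instance types, capacities, and prices are invented, not Kingfisher's actual model): given virtual server configurations with a capacity and an hourly price, find the cheapest mix that covers the expected workload.

```python
from itertools import product

CONFIGS = {               # name: (capacity in requests/s, $ per hour)
    "small":  (100, 0.05),
    "medium": (250, 0.11),
    "large":  (600, 0.24),
}

def cheapest_mix(required, max_each=10):
    """Brute-force the cheapest combination meeting `required` capacity."""
    best = None
    names = list(CONFIGS)
    for counts in product(range(max_each + 1), repeat=len(names)):
        cap = sum(n * CONFIGS[name][0] for n, name in zip(counts, names))
        if cap < required:
            continue  # this mix cannot carry the workload
        cost = sum(n * CONFIGS[name][1] for n, name in zip(counts, names))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, counts)))
    return best

cost, mix = cheapest_mix(1300)
print(round(cost, 2), {k: v for k, v in mix.items() if v})
# 0.53 {'small': 1, 'large': 2}
```

A real system would replace this brute-force search with an integer program or a greedy heuristic, and would also weigh the transition time to the new configuration, which is the other half of Kingfisher's contribution.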
207

Transformation of organizations through cloud technologies – challenges & benefits. A case study in Rwanda. / Transformation av organisationer med hjälp av molnteknologi - utmaningar och fördelar. En fallstudie i Rwanda.

Twagiramungu, Jean Robert January 2022
From start-ups and small businesses to large enterprises, cloud computing significantly impacts operations. Many studies and research projects on cloud computing have been carried out to assess its impact on how organizations deliver IT services. This study has been conducted in organizations in Rwanda, one of the fastest-growing African countries in the ICT sector, which is turning the nation into a knowledge-based economy. Cloud computing provides several advantages to organizations, and the future also seems promising. However, organizations face several risks and challenges in using this technology and must therefore be aware of them when migrating workloads to the cloud. This awareness may not only alleviate the challenges but also enable a smooth transition to the cloud. This study therefore measures perceptions of the challenges and benefits of using cloud computing technology in organizations in Rwanda. As the study shows, the major challenge organizations face is a shortage of cloud experts in the industry. The study also revealed that migration is complex and risky, that data security and privacy are concerns, and that a high-speed internet connection is required and costly. The benefits include minimizing IT costs with a pay-as-you-go model, better performance and speed, flexibility and scalability as needs change, and improved collaboration and communication in the organization. It was concluded that, as cloud computing technology is a new concept to these organizations, challenges in its implementation may occur. To help Rwanda's ICT sector grow, policymakers, organization managers, and executives should develop a comprehensive policy to address these challenges.
208

Energy Efficient Offloading for Competing Users on a Shared Communication Channel

Meskar, Erfan January 2016
In this thesis we consider a set of mobile users that employ cloud-based computation offloading. In computation offloading, user energy consumption can be decreased by uploading and executing jobs on a remote server, rather than processing the jobs locally. In order to execute jobs in the cloud, however, the user uploads must occur over a base station channel which is shared by all of the uploading users. Since the job completion times are subject to hard deadline constraints, this restricts the feasible set of jobs that can be remotely processed, and may constrain the users' ability to reduce energy usage. The system is modelled as a competitive game in which each user is interested in minimizing its own energy consumption. The game is subject to the real-time constraints imposed by the job execution deadlines, user-specific channel bit rates, and the competition over the shared communication channel. The thesis shows that for a variety of parameters, a game where each user independently sets its offloading decisions always has a pure Nash equilibrium, and a Gauss-Seidel method for determining this equilibrium is introduced. Results are presented which illustrate that the system always converges to a Nash equilibrium using the Gauss-Seidel method. Data is also presented which shows the number of Nash equilibria that are found, the number of iterations required, and the quality of the solutions. We find that the solutions perform well compared to a lower bound on total energy performance. / Thesis / Master of Applied Science (MASc)
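A Gauss-Seidel best-response iteration of the kind mentioned above can be sketched on a toy game (the energy model below is our own simplification, not the thesis's: it ignores deadlines and bit rates, and simply makes offloading energy grow with the number of users sharing the channel). Users update their decisions one at a time until no one wants to switch, which is by definition a pure Nash equilibrium.

```python
def best_response(i, decisions, local_cost, upload_cost):
    # Energy if user i offloads: per-sharer upload cost times the number
    # of users on the shared base-station channel (others plus user i).
    others = sum(d for j, d in enumerate(decisions) if j != i)
    e_offload = upload_cost[i] * (others + 1)
    return 1 if e_offload < local_cost[i] else 0

def gauss_seidel(local_cost, upload_cost, max_rounds=100):
    n = len(local_cost)
    decisions = [0] * n            # 0 = run locally, 1 = offload
    for _ in range(max_rounds):
        changed = False
        for i in range(n):         # users update sequentially (Gauss-Seidel)
            br = best_response(i, decisions, local_cost, upload_cost)
            if br != decisions[i]:
                decisions[i], changed = br, True
        if not changed:            # fixed point: a pure Nash equilibrium
            return decisions
    return decisions

local = [10, 6, 4]   # energy to execute each user's job locally
upload = [2, 3, 5]   # upload energy per channel sharer
print(gauss_seidel(local, upload))   # [1, 0, 0]
```

In this toy instance only the first user offloads: once it occupies the channel, the congestion term makes offloading unattractive for the other two, and the sequential updates reach a fixed point in two rounds.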
209

A Comparative Evaluation of Failover Mechanisms for Mission-critical Financial Applications in Public Clouds

Gustavsson, Albert January 2023
Computer systems can fail for a vast range of reasons, and handling failures is crucial to any critical computer system. Many modern computer systems are migrating to public clouds, which provide more flexible resource consumption and in many cases reduced costs, while the migration can also require system changes due to limitations in the provided cloud environment. This thesis evaluates a few methods of achieving failover when migrating a system to a public cloud, with the main goal of finding a replacement for failover mechanisms that can only be used in self-managed infrastructure. A few different failover methods are evaluated by looking into how each method would change an existing system. Two methods, using etcd and Apache ZooKeeper, are used for experimental evaluation, where failover time is measured in two simulated scenarios in which the primary process terminates and a standby process must be promoted to primary status. In one scenario, the primary process is not able to notify other processes in the system before terminating; in the other, the primary process can release the primary status to another instance before terminating. The etcd and ZooKeeper solutions are shown to behave quite similarly in the testing setup, while the ZooKeeper solution might achieve lower failover times in low-latency environments.
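The promotion logic common to both coordination-service approaches can be sketched without the real services (the in-memory "store" below stands in for etcd or ZooKeeper, and all names are our own illustration): processes compete to claim a primary key, and when the holder releases it — or, in the crash scenario, when its lease expires — a standby performs the same claim and takes over.

```python
class CoordinationStore:
    """Stand-in for etcd/ZooKeeper: one key with compare-and-set semantics.
    The real services make try_acquire atomic and expire the claim via a
    lease/session TTL when the holder crashes without releasing it."""
    def __init__(self):
        self.primary = None

    def try_acquire(self, node):
        if self.primary is None:
            self.primary = node
            return True
        return False               # someone else is primary; stay standby

    def release(self, node):
        if self.primary == node:   # graceful handover before terminating
            self.primary = None

store = CoordinationStore()

print(store.try_acquire("proc-a"))  # True  -> proc-a becomes primary
print(store.try_acquire("proc-b"))  # False -> proc-b waits as standby

store.release("proc-a")             # graceful scenario: handover
print(store.try_acquire("proc-b"))  # True  -> proc-b promoted
```

The failover time measured in the thesis is essentially the gap between the release (or lease expiry) and the standby's successful acquire, which is why the two scenarios — graceful handover versus unannounced termination — behave differently.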
210

Predictive Scaling for Microservices-Based Systems

Pettersson, Simon January 2023
This thesis explores the use of a predictive scaling algorithm to scale a microservices-based system according to a predicted system load. A scalable system and a predictive scaling algorithm are developed and tested by applying a periodic load to the system. The developed scaling algorithm is a combination of a reactive and a predictive algorithm, where the reactive algorithm is used to scale the system when no significant load changes are predicted. The results show that the periodic load is predicted by the algorithm, that the system can be scaled preemptively, and that the algorithm has room for improvement in terms of accuracy.
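The hybrid policy described above can be sketched as a single decision function (the thresholds, per-replica capacity, and significance test are invented for illustration, not taken from the thesis): act on the prediction when a significant load change is forecast, and otherwise fall back to a reactive rule that follows the observed load.

```python
def desired_replicas(observed_load, predicted_load,
                     per_replica_capacity=100, significant_change=0.2):
    # Predictive branch: scale preemptively when the forecast differs
    # from the current load by more than the significance threshold.
    if abs(predicted_load - observed_load) / max(observed_load, 1) > significant_change:
        target_load = predicted_load
    else:
        # Reactive branch: no significant change predicted, so size the
        # system for the load actually being observed.
        target_load = observed_load
    return max(1, -(-target_load // per_replica_capacity))  # ceiling division

print(desired_replicas(observed_load=180, predicted_load=190))  # 2 (reactive)
print(desired_replicas(observed_load=180, predicted_load=450))  # 5 (predictive)
```

With a periodic workload like the one used in the thesis, the predictive branch fires ahead of each peak, so new replicas are already running when the load arrives instead of being started reactively after it.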
