871 |
Hands-on Comparison of Cloud Computing Services for Developing Applications / Rollino, Sebastian January 2022 (has links)
When developing an application, developers face the challenge of selecting the technologies that best fit its requirements. New programming languages, frameworks, stacks, etc., have arisen in recent years, making the choice even harder. Cloud computing has gained popularity over the last two decades, providing computing resources to developers and companies. As with other technologies, there are many cloud service providers to choose from. In this thesis, the two biggest cloud service providers, Amazon Web Services and Microsoft Azure, are compared. Furthermore, after comparing the providers, a prototype of a customer relationship management system was deployed to the selected provider. From the data gathered, it could be seen that further research is needed to decide which provider might fit better for application development.
|
872 |
Comparative Analysis of ERP Emerging Technologies / Engebrethson, Ryan 01 June 2012 (has links) (PDF)
This Master's Thesis compares technologies used in the architecture of Enterprise Resource Planning (ERP) Systems to evaluate the benefits and advantages of emerging technologies. The emerging technologies, Cloud Computing, Software as a Service (SaaS) and Multi-Tenancy, could significantly alter the current ERP space and become a primary part of ERP Systems of the future. A survey was sent to industry professionals to obtain feedback on their company's ERP Systems and to collect their comments on these new technologies. The survey results and related analysis show that Emerging Cloud ERP Systems outperform Traditional Legacy ERP Systems in all important characteristics - Accessibility, Business Cost, Implementation Time, Mobility, Scalability, Upgradability, and Usability. Cloud Systems were also found to have a shorter implementation time and a larger proportion of Cloud Systems were on the most recent version of software. Furthermore, industry professionals identified Cloud Computing, SaaS and Mobility as the emerging technologies of the coming decade. This thesis demonstrates that there are significant benefits for companies to use ERP Systems that use the emerging technologies and that the shift to Cloud ERP Systems has begun.
|
873 |
Comparative Analysis of Load Balancing in Cloud Platforms for an Online Bookstore Web Application using Apache Benchmark / Pothuganti, Srilekha; Samanth, Malepiti January 2023 (has links)
Background: Cloud computing has transformed the landscape of application deployment, offering on-demand access to compute resources, databases, and services via the internet. This thesis explores the development of an innovative online bookstore web application, harnessing the power of cloud infrastructure across AWS, Azure, and GCP. The front end utilises HTML, CSS, and JavaScript to create responsive web pages with an intuitive user interface. The back end is constructed using Node.js and Express for high-performance server-side logic and routing, while MongoDB, a distributed NoSQL database, stores the data. This cloud-native architecture facilitates easy scaling and ensures high availability. Objectives: The main objectives of this thesis are to develop an intuitive online bookstore enabling users to add, exchange, and purchase books; deploy it across AWS, Azure, and GCP for scalability; implement load balancers for enhanced performance; and conduct load testing and benchmarking to compare the efficiency of these load balancers. The study aims to determine the best-performing cloud platform and load-balancing strategy, comparing load-balancer metrics across the platforms to ensure the best user experience for the online bookstore. Methods: The website is deployed on the three cloud platforms by creating instances separately on each platform, and a load balancer is then created for each of the services. Using the monitoring tools of each platform, graphs are obtained for the metrics. The load is then increased and decreased in the Apache Benchmark tool for specific tasks taken from the website, and the results are compared through aggregate graphs and summary reports. These are used to test the website's overall performance using metrics such as throughput, CPU utilisation, error percentage, and cost efficiency.
Results: The results are based on applying the Apache Benchmark load testing tool to the selected website on each cloud platform, visualised in aggregate graphs. The graphs indicate which service is best for users by showing which platform places the least load on the server and returns requested data in the shortest time. Runs of 10 and 50 requests were considered, and the resulting metrics of throughput, CPU utilisation, error percentage, and cost efficiency were compared to determine which cloud platform performs better. Conclusions: From the results of the 10- and 50-request runs, it can be concluded that GCP achieves higher throughput and CPU utilisation than AWS and Azure, which proved less flexible and efficient for users. Thus, GCP outperforms the others in terms of load balancing.
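The throughput and error-percentage comparison described above can be sketched in a few lines; the per-request latencies and the resulting platform ranking below are illustrative placeholders, not the thesis's measurements.

```python
# Hypothetical aggregation of Apache-Benchmark-style results for the three
# platforms. The latency samples are invented for illustration; throughput is
# computed as if the requests were issued sequentially.

def summarize(platform, latencies_ms, errors):
    """Compute throughput (req/s) and error percentage for one test run."""
    total_s = sum(latencies_ms) / 1000.0
    n = len(latencies_ms)
    return {
        "platform": platform,
        "requests": n,
        "throughput_rps": n / total_s,
        "error_pct": 100.0 * errors / n,
    }

runs = [
    summarize("AWS",   [52, 48, 55, 49, 51, 50, 53, 47, 54, 50], errors=0),
    summarize("Azure", [60, 58, 62, 59, 61, 57, 63, 60, 58, 61], errors=1),
    summarize("GCP",   [40, 42, 39, 41, 43, 38, 40, 42, 41, 39], errors=0),
]

# Rank platforms by throughput, mirroring the aggregate-graph comparison.
best = max(runs, key=lambda r: r["throughput_rps"])
for r in runs:
    print(f"{r['platform']}: {r['throughput_rps']:.1f} req/s, "
          f"{r['error_pct']:.0f}% errors")
print("highest throughput:", best["platform"])
```

In practice the raw numbers would come from `ab -n 10` / `ab -n 50` runs and each platform's monitoring console rather than hard-coded lists.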
|
874 |
Collaborative Computing Cloud: Architecture and Management Platform / Khalifa, Ahmed Abdelmonem Abuelfotooh Ali 11 March 2015 (has links)
We are witnessing exponential growth in the number of powerful, multiply-connected, energy-rich stationary and mobile nodes, which will make available a massive pool of computing and communication resources. We claim that cloud computing can provide resilient on-demand computing, and more effective and efficient utilization of potentially infinite array of resources. Current cloud computing systems are primarily built using stationary resources. Recently, principles of cloud computing have been extended to the mobile computing domain aiming to form local clouds using mobile devices sharing their computing resources to run cloud-based services.
However, current cloud computing systems by and large fail to provide true on-demand computing due to their lack of the following capabilities: 1) providing resilience and autonomous adaptation to the real-time variation of the underlying dynamic and scattered resources as they join or leave the formed cloud; 2) decoupling cloud management from resource management, and hiding the heterogeneous resource capabilities of participant nodes; and 3) ensuring reputable resource providers and preserving the privacy and security constraints of these providers while allowing multiple users to share their resources. Consequently, systems and consumers are hindered from effectively and efficiently utilizing the virtually infinite pool of computing resources.
We propose a platform for mobile cloud computing that integrates: 1) a dynamic real-time resource scheduling, tracking, and forecasting mechanism; 2) an autonomous resource management system; and 3) a cloud management capability for cloud services that hides the heterogeneity, dynamicity, and geographical diversity concerns from the cloud operation. We hypothesize that this would enable 'Collaborative Computing Cloud (C3)' for on-demand computing, which is a dynamically formed cloud of stationary and/or mobile resources to provide ubiquitous computing on-demand. The C3 would support a new resource-infinite computing paradigm to expand problem solving beyond the confines of walled-in resources and services by utilizing the massive pool of computing resources, in both stationary and mobile nodes.
In this dissertation, we present a C3 management platform, named PlanetCloud, for enabling both a new resource-infinite computing paradigm using cloud computing over stationary and mobile nodes, and a true ubiquitous on-demand cloud computing. This has the potential to liberate cloud users from being concerned about resource constraints and provides access to cloud anytime and anywhere.
PlanetCloud synergistically manages 1) resources, including resource harvesting, forecasting, and selection; and 2) cloud services, providing resilient cloud services including resource provider collaboration, isolation of application execution from resource layer concerns, seamless load migration, fault tolerance, and task deployment, migration, and revocation. Specifically, our main contributions in the context of PlanetCloud are as follows.
1. PlanetCloud Resource Management
• Global Resource Positioning System (GRPS):
• Global mobile and stationary resource discovery and monitoring. A novel distributed spatiotemporal resource calendaring mechanism with real-time synchronization is proposed to mitigate the effect of failures occurring due to unstable connectivity and availability in the dynamic mobile environment, as well as the poor utilization of resources. This mechanism provides a dynamic real-time scheduling and tracking of idle mobile and stationary resources. This would enhance resource discovery and status tracking to provide access to the right-sized cloud resources anytime and anywhere.
• Collaborative Autonomic Resource Management System (CARMS):
Efficient use of idle mobile resources. Our platform allows resource sharing among stationary and mobile devices, which enables cloud computing systems to offer much higher utilization, resulting in higher efficiency. CARMS provides system-managed cloud services such as configuration, adaptation, and resilience through collaborative autonomic management of dynamic cloud resources and membership. This helps overcome the limited self- and situation-awareness of idle mobile resources and their limited collaboration.
2. PlanetCloud Cloud Management
Architecture for resilient cloud operation on dynamic mobile resources to provide a stable cloud in a continuously changing operational environment. This is achieved by using a trustworthy, fine-grained virtualization and task management layer, which isolates the running application from the underlying physical resources, enabling seamless execution over heterogeneous stationary and mobile resources. This prevents service disruption due to variable resource availability. The virtualization and task management layer comprises a set of distributed powerful nodes that collaborate autonomously with resource providers to manage the virtualized application partitions. / Ph. D.
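The spatiotemporal resource calendaring behind GRPS might be pictured with a minimal sketch like the following; the interval representation, node names, and API are assumptions for illustration, not PlanetCloud's actual design.

```python
# Minimal sketch of a spatiotemporal resource calendar in the spirit of GRPS.
# Each node advertises idle intervals (start, end) in a common time unit; a
# query returns the nodes whose advertised idle time covers a requested
# interval. Node names and intervals are illustrative assumptions.

class ResourceCalendar:
    def __init__(self):
        self.idle = {}  # node -> list of (start, end) idle intervals

    def advertise(self, node, start, end):
        self.idle.setdefault(node, []).append((start, end))

    def revoke(self, node):
        # A node leaving the cloud (e.g. a mobile device losing
        # connectivity) simply drops out of the calendar.
        self.idle.pop(node, None)

    def available(self, start, end):
        """Nodes with an idle interval covering [start, end]."""
        return sorted(
            node for node, slots in self.idle.items()
            if any(s <= start and end <= e for s, e in slots)
        )

cal = ResourceCalendar()
cal.advertise("laptop-1", 0, 10)
cal.advertise("phone-7", 3, 6)
cal.advertise("server-2", 0, 100)
print(cal.available(4, 5))   # all three nodes cover [4, 5]
cal.revoke("phone-7")        # the phone leaves the dynamic environment
print(cal.available(4, 5))
```

The real mechanism adds real-time synchronization and forecasting across distributed calendar replicas, which this single-process sketch leaves out.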
|
875 |
An approach to failure prediction in a cloud based environment / Adamu, Hussaini; Bashir, Mohammed; Bukar, Ali M.; Cullen, Andrea J.; Awan, Irfan U. January 2017 (has links)
yes / Failure in a cloud system is defined as an event that occurs when the delivered service deviates from the correct intended behavior. As cloud computing systems continue to grow in scale and complexity, there is an urgent need for cloud service providers (CSPs) to guarantee reliable on-demand resources to their customers in the presence of faults, thereby fulfilling their service level agreements (SLAs). Component failures in cloud systems are a very familiar phenomenon. However, large cloud service providers' data centers should be designed to provide a certain level of availability to the business system. The Infrastructure-as-a-Service (IaaS) cloud delivery model presents computational resources (CPU and memory), storage resources, and networking capacity that ensure high availability in the presence of such failures. In-production fault data recorded over a two-year period at the National Energy Research Scientific Computing Center (NERSC) has been studied and analyzed. Using the real-time data collected from the Computer Failure Data Repository (CFDR), this paper presents the performance of two machine learning (ML) algorithms, a Linear Regression (LR) model and a Support Vector Machine (SVM) with a linear Gaussian kernel, for predicting hardware failures in a real-time cloud environment to improve system availability. The performance of the two algorithms has been rigorously evaluated using the k-fold cross-validation technique. Furthermore, steps and procedures for future studies have been presented. This research will aid computer hardware companies and cloud service providers (CSPs) in designing reliable fault-tolerant systems through better device selection, thereby improving system availability and minimizing unscheduled system downtime.
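The k-fold evaluation of the linear-regression predictor could be sketched as follows (the SVM half is omitted here); the failure-count series is synthetic, not the NERSC/CFDR data the paper analyzes.

```python
# Hedged sketch of the evaluation procedure: k-fold cross-validation of a
# one-dimensional linear-regression failure predictor, in pure Python.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b in one dimension."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def k_fold_mae(xs, ys, k=5):
    """Mean absolute error averaged over k train/test splits."""
    folds = [list(range(i, len(xs), k)) for i in range(k)]
    errs = []
    for test in folds:
        train = [i for i in range(len(xs)) if i not in test]
        a, b = fit_linear([xs[i] for i in train], [ys[i] for i in train])
        errs.extend(abs(ys[i] - (a * xs[i] + b)) for i in test)
    return sum(errs) / len(errs)

# Synthetic monthly hardware-failure counts following a roughly linear trend.
months = list(range(1, 25))
failures = [2.1 * m + 5 + (1 if m % 3 == 0 else -1) for m in months]
print(f"5-fold MAE: {k_fold_mae(months, failures):.3f}")
```

A low cross-validated error on held-out months is what would justify trusting such a predictor for proactive maintenance scheduling.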
|
876 |
Improving Data Center Resource Management, Deployment, and Availability with Virtualization / Wood, Timothy 01 September 2011 (links)
The increasing demand for storage and computation has driven the growth of large data centers--the massive server farms that run many of today's Internet and business applications. A data center can comprise many thousands of servers and can use as much energy as a small city. The massive amounts of computation power contained in these systems result in many interesting distributed systems and resource management problems. In this thesis we investigate challenges related to data centers, with a particular emphasis on how new virtualization technologies can be used to simplify deployment, improve resource efficiency, and reduce the cost of reliability, all in application agnostic ways. We first study problems that relate to the initial capacity planning required when deploying applications into a virtualized data center. We demonstrate how models of virtualization overheads can be utilized to accurately predict the resource needs of virtualized applications, allowing them to be smoothly transitioned into a data center. We next study how memory similarity can be used to guide placement when adding virtual machines to a data center, and demonstrate how memory sharing can be exploited to reduce the memory footprints of virtual machines. This allows for better server consolidation, reducing hardware and energy costs within the data center. We then discuss how virtualization can be used to improve the performance and efficiency of data centers through the use of "live" migration and dynamic resource allocation. We present automated, dynamic provisioning schemes that can effectively respond to the rapid fluctuations of Internet workloads without hurting application performance. We then extend these migration tools to support seamlessly moving applications across low bandwidth Internet links.
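The memory-similarity idea, detecting identical pages by content so they need be stored only once, can be illustrated with a small hashing sketch; the VM names and page contents are invented, and real systems hash fixed-size physical pages rather than arbitrary byte strings.

```python
# Illustrative sketch of content-based page sharing: pages with identical
# content are detected by hashing their bytes and counted once. The VM
# contents below are invented placeholders.
import hashlib

def shared_footprint(vms):
    """Return (total_pages, unique_pages) across a set of VMs."""
    seen = set()
    total = 0
    for pages in vms.values():
        for page in pages:
            total += 1
            seen.add(hashlib.sha256(page).hexdigest())
    return total, len(seen)

vms = {
    "vm-a": [b"kernel" * 100, b"libc" * 100, b"app-a" * 100],
    "vm-b": [b"kernel" * 100, b"libc" * 100, b"app-b" * 100],
    "vm-c": [b"kernel" * 100, b"app-c" * 100],
}

total, unique = shared_footprint(vms)
print(f"{total} pages, {unique} after sharing "
      f"({100 * (1 - unique / total):.0f}% saved)")
```

A placement policy of the kind studied in the thesis would co-locate VMs whose page hashes overlap most, so the saving materializes on a single host.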
Finally, we discuss the reliability challenges faced by data centers and present a new replication technique that allows cloud computing platforms to offer high-performance, no-data-loss disaster recovery services despite high network latencies.
|
877 |
Multi-Agent Reinforcement Learning for Cooperative Edge Cloud Computing / 協調的エッジクラウドコンピューティングのためのマルチエージェント強化学習 / Ding, Shiyao 26 September 2022 (has links)
Kyoto University / New-system course doctorate / Doctor of Informatics / 甲第24261号 / 情博第805号 / 新制||情||136 (University Library) / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Chief Examiner) Professor Takayuki Ito, Professor Masatoshi Yoshikawa, Professor Takayuki Kanda, Program-Specific Associate Professor LIN Donghui / Meets Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
878 |
The Impact of Cloud Computing Towards Early Stage Startups in Sweden : Case of Three Stockholm-Based Early Stage Startups / Setiawan, Abraham January 2015 (has links)
In the last decades, technology in the ICT sector has advanced significantly. Rapid improvement of Internet services and virtualization techniques has given rise to a handful of computing paradigms, including cloud computing. A number of major global cloud service providers offer various cloud services to individuals and companies. Consequently, an increasing number of companies are moving to the cloud, leading to the proliferation of the cloud computing market. This thesis explores the impact of cloud computing on early stage startups in terms of usage, benefits, competitive advantage, and dependency for sustainability, with a focus on a specific country: Sweden. Stockholm has become one of the top tech startup scenes in Europe and has given birth to a great number of startups, some internationally recognized, including Spotify, Klarna, and King, while others have the potential to catch up with them. To give insight into the impact of cloud computing on early stage startups, three Stockholm-based early stage startups from three different fields of business were interviewed. To ensure the anonymity of the startups, the companies are referred to as The Healthy Company, a startup that sells healthy food from a pop-up bicycle; The Invest Company, a startup that develops a mobile application to connect startups and investors; and The Learning Company, a startup that condenses business books that take eight hours to read into half-hour summaries. Based on the findings of this study, several characteristics are similar across all three startups regardless of their field of business.
|
879 |
Cooperative caching for object storage / Kaynar Terzioglu, Emine Ugur 29 October 2022
Data is increasingly stored in data lakes, vast immutable object stores that can be accessed from anywhere in the data center. By providing low-cost and scalable storage, today's immutable object-storage-based data lakes are used by a wide range of applications with diverse access patterns. Unfortunately, performance can suffer for applications that do not match the access patterns for which the data lake was designed. Moreover, in many of today's (non-hyperscale) data centers, limited bisection bandwidth will limit data lake performance. Today many computer clusters integrate caches both to address the mismatch between application performance requirements and the capabilities of the shared data lake, and to reduce the demand on the data center network. However, per-cluster caching:
i) means the expensive cache resources cannot be shifted between clusters based on demand,
ii) makes sharing expensive because data accessed by multiple clusters is independently cached by each of them,
and
iii) makes it difficult for clusters to grow and shrink if their servers are being used to cache storage.
In this dissertation, we present two novel data-center-wide cooperative cache architectures, Datacenter-Data-Delivery Network (D3N) and Directory-Based Datacenter-Data-Delivery Network (D4N), that are designed to be part of the data lake itself rather than part of the compute clusters that use it. D3N and D4N distribute caches across the data center to enable data sharing and elasticity of cache resources, with requests transparently directed to nearby cache nodes. They dynamically adapt to changes in access patterns and accelerate workloads while providing the same consistency, trust, availability, and resilience guarantees as the underlying data lake. We find that exploiting the immutability of object stores significantly reduces complexity and provides opportunities for cache management strategies that were not feasible for previous cooperative cache systems for file- or block-based storage.
D3N is a multi-layer cooperative cache that targets workloads with large read-only datasets, like big data analytics. It is designed to be easily integrated into existing data lakes, with only limited support for write caching of intermediate data, and avoids any global state by, for example, using consistent hashing to locate blocks and making all caching decisions purely on local information. Our prototype is performant enough to fully exploit the (5 GB/s read) SSDs and (40 Gbit/s) NICs in our system and improves the runtime of realistic workloads by up to 3x. The simplicity of D3N has enabled us, in collaboration with industry partners, to upstream the two-layer version of D3N into the existing code base of the Ceph object store as a new experimental feature, making it available to the many data lakes around the world based on Ceph.
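The consistent-hashing placement mentioned above can be sketched as a small hash ring; the node names and virtual-node count are assumptions for illustration, not D3N's actual implementation.

```python
# Minimal consistent-hashing ring in the spirit of D3N's block placement:
# each cache node is hashed onto a ring (with virtual nodes for balance) and
# a block maps to the first node clockwise from the block's hash.
import bisect
import hashlib

class Ring:
    def __init__(self, nodes, vnodes=64):
        self.ring = sorted(
            (self._h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _h(s):
        return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], "big")

    def locate(self, block_id):
        """Cache node responsible for a block, from purely local information."""
        i = bisect.bisect(self.keys, self._h(block_id)) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["cache-1", "cache-2", "cache-3"])
# Every node computes the same answer independently -- no global directory.
placement = {b: ring.locate(b) for b in ("obj1/blk0", "obj1/blk1", "obj2/blk0")}
print(placement)
```

Because the mapping is a pure function of the block identifier and the node list, any client can locate a cached block without consulting shared state, which is exactly what lets D3N avoid a directory.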
D4N is a directory-based cooperative cache that provides a reliable write tier and a distributed directory that maintains global state. It explores the use of global state to implement more sophisticated cache management policies and enables application-specific tuning of caching policies to support a wider range of applications than D3N. In contrast to previous cache systems that implement their own mechanism for maintaining dirty data redundantly, D4N re-uses the existing data lake (Ceph) software to implement a write tier and exploits the semantics of immutable objects to move aged objects to the shared data lake. This design greatly reduces the barrier to adoption and enables D4N to take advantage of sophisticated data lake features such as erasure coding. We demonstrate that D4N is performant enough to saturate the bandwidth of the SSDs, automatically adapts replication to the demands of the working set, and outperforms the state-of-the-art cluster cache Alluxio. While it will be substantially more complicated to integrate the D4N prototype into production-quality code that can be adopted by the community, these results are compelling enough that our partners are starting that effort.
D3N and D4N demonstrate that cooperative caching techniques, originally designed for file systems, can be employed to integrate caching into today's immutable object-based data lakes. We find that the properties of immutable object storage greatly simplify the adoption of these techniques and enable integration of caching in a fashion that allows re-use of existing battle-tested software, greatly reducing the barrier to adoption. By integrating caching into the data lake rather than the compute cluster, this research opens the door to efficient data-center-wide sharing of data and resources.
|
880 |
On energy minimization of heterogeneous cloud radio access networks / Sigwele, Tshiamo; Pillai, Prashant; Hu, Yim Fun January 2016 (links)
No / Next-generation 5G networks are the future of information networks and will experience tremendous growth in traffic. To meet such traffic demands, the network capacity must be increased, which requires the deployment of ultra-dense heterogeneous base stations (BSs). Nevertheless, BSs are very expensive and consume a significant amount of energy. Meanwhile, the cloud radio access network (C-RAN) has been proposed as an energy-efficient architecture that leverages cloud computing technology, with baseband processing performed in the cloud. In addition, BS sleeping is considered a promising approach to conserving network energy. This paper integrates the cloud technology and the BS sleeping approach. It proposes an energy-efficient scheme that reduces energy consumption by switching off remote radio heads (RRHs) and idle baseband units (BBUs) using greedy and first-fit-decreasing (FFD) bin-packing algorithms, respectively. The numbers of RRHs and BBUs are minimized by matching the right amount of baseband computing load with the traffic load. Simulation results demonstrate that the proposed scheme achieves enhanced energy performance compared to the existing distributed Long Term Evolution-Advanced (LTE-A) system.
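The first-fit-decreasing step can be sketched as follows; the load values and unit capacity are illustrative placeholders, not the paper's simulation parameters.

```python
# Hedged sketch of the FFD bin-packing step: baseband loads are sorted in
# decreasing order and each is placed in the first active BBU server with
# enough spare capacity, so the remaining servers can stay switched off.

def first_fit_decreasing(loads, capacity):
    """Pack loads into the fewest bins of the given capacity (FFD heuristic)."""
    bins = []  # each bin is the list of loads assigned to one active BBU
    for load in sorted(loads, reverse=True):
        for b in bins:
            if sum(b) + load <= capacity:
                b.append(load)
                break
        else:
            bins.append([load])  # power on one more BBU
    return bins

# Normalised per-RRH baseband loads; capacity 1.0 per active BBU server.
loads = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.3]
bins = first_fit_decreasing(loads, capacity=1.0)
print(f"{len(bins)} active BBUs instead of {len(loads)}:", bins)
```

FFD is a classic heuristic with a known worst-case bound close to the optimum, which is why it is a natural fit for consolidating baseband load onto as few powered servers as possible.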
|