851 |
Designing and Building Efficient HPC Cloud with Modern Networking Technologies on Heterogeneous HPC Clusters. Zhang, Jie. January 2018 (has links)
No description available.
|
852 |
A Zero-Trust-Based Identity Management Model for Volunteer Cloud Computing. Albuali, Abdullah. 01 December 2021 (has links) (PDF)
Non-conventional cloud computing models such as volunteer and mobile clouds have become increasingly popular in cloud computing research. Volunteer cloud computing is a more economical, greener alternative to the current model based on data centers, in which tens of thousands of dedicated servers facilitate cloud services. Volunteer clouds offer numerous benefits: no upfront investment to procure the many servers needed for traditional data center hosting; no maintenance costs, such as electricity for cooling and running servers; and physical closeness to edge computing resources, such as individually owned PCs. Despite these benefits, such systems introduce their own technical challenges due to the dynamics and heterogeneity of volunteer computers, which are shared not only among cloud users but also between cloud and local users. Key issues in cloud computing such as security, privacy, reliability, and availability thus need to be addressed even more critically in volunteer cloud computing.

Emerging paradigms such as volunteer cloud computing are plagued by security issues because trust among entities is nonexistent. This study therefore presents a zero-trust model that assigns no trust to any volunteer node (VN) and always verifies, using a server-client topology for all communications, whether internal or external (between VNs and the system). To ensure the model chooses only the most trusted VNs in the system, two sets of monitoring mechanisms are used. The first uses a series of reputation-based trust management mechanisms to filter VNs at critical points in their life-cycle. These mechanisms help the volunteer cloud management system detect malicious activities, violations, and failures among VNs through monitoring policies that lower the trust scores of less trusted VNs and reward the most trusted VNs over their life-cycle in the system. The second set of mechanisms uses adaptive behavior evaluation contexts in VN identity management: each node's challenge score and risk rate are calculated and used to compute and predict a trust score. Furthermore, the study resulted in a volunteer computing as a service (VCaaS) cloud system that uses undedicated hosts as resources. Both cuCloud and the open-source CloudSim platform are used to evaluate the proposed model.

The results show that zero-trust identity management for volunteer clouds can execute a range of applications securely, reliably, and efficiently. With the help of the proposed model, volunteer clouds can be a potential enabler for various edge computing applications. Edge computing could use volunteer cloud computing along with the proposed trust system and penalty module (ZTIMM and ZTIMM-P) to manage the identity of all VNs that are part of the volunteer edge computing architecture.
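As an illustration of the reputation-based idea only: the abstract does not give the actual formulas, so the fields and weights in this Go sketch are assumptions, not the thesis's method. It blends a node's latest challenge score into its trust history and then discounts by the predicted risk rate.

```go
package main

import "fmt"

// VolunteerNode holds a hypothetical trust state for one VN; the thesis
// does not publish its exact formulas, so these fields are illustrative.
type VolunteerNode struct {
	Trust     float64 // 0.0 (untrusted) .. 1.0 (most trusted)
	Challenge float64 // score from behaviour challenges, 0..1
	RiskRate  float64 // predicted risk of misbehaviour, 0..1
}

// UpdateTrust blends the latest challenge score into the node's trust
// history, then discounts by the risk rate. alpha is an assumed weight.
func (v *VolunteerNode) UpdateTrust() {
	const alpha = 0.7
	v.Trust = (alpha*v.Trust + (1-alpha)*v.Challenge) * (1 - v.RiskRate)
}

func main() {
	vn := VolunteerNode{Trust: 0.8, Challenge: 0.9, RiskRate: 0.1}
	vn.UpdateTrust()
	fmt.Printf("new trust score: %.3f\n", vn.Trust) // prints 0.747
}
```

A penalty module such as ZTIMM-P would, under the same reading, subtract from the trust score on detected violations rather than blending rewards in.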
|
853 |
Assessing Practices of Cloud Storage Security Among Users : A Study on Security Threats in Storage as a Service Environment. Joo Jonsson, Hugo; Karlsson, Vilgot. January 2023 (has links)
With the immense amount of data generated daily, relying solely on local physical storage is insufficient. Cloud services have therefore become a big part of day-to-day life, as they allow users to store data while relieving them of the burden of maintenance. However, this technology relies on the internet, which increases the potential security risks and threats. This survey-based study investigates users' security practices concerning Storage as a Service, along with a literature review of current security threats targeting users of these services. Additionally, a comparative analysis is conducted of the security features offered by cloud storage providers. The study shows that users are generally concerned about internet security and that service providers have implemented appropriate security features to protect users.
|
854 |
Cross region cloud redundancy : A comparison of a single-region and a multi-region approach. Lindén, Oskar. January 2023 (has links)
In order to increase the resiliency and redundancy of a distributed system, it is common to keep standby systems and backups of data in locations other than the primary site, separated by a meaningful distance in order to tolerate local outages. Nasdaq accomplishes this by maintaining primary-standby pairs or primary-standby-disaster triplets with at least one system residing in a different site. The team at Nasdaq is experimenting with a redundant deployment scheme in Kubernetes spanning three availability zones, located within a single geographical region, in Amazon Web Services. They want to move the disaster zone to another geographical region in order to improve the redundancy and resiliency of the system. The aim of this thesis is to investigate how this could be done and to compare the two approaches. To compare them, a simple observable model of the chain replication strategy is implemented. The model is deployed in an Elastic Kubernetes Service cluster on Amazon Web Services, using Helm; the supporting infrastructure is defined and created using Terraform. The model is evaluated through HTTP requests with different configurations and scenarios to measure latency and throughput. The first scenario is a single user making HTTP requests to the system; the second is multiple users making requests concurrently. The results show that throughput is lower and latency is higher with the multi-region approach: in the single-producer case, the relative difference in median throughput is -54.41% and the relative difference in median latency is 119.20%. In the multi-producer case, both relative differences shrink as the number of partitions in the system increases.
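Taking the single-region deployment as the baseline, the relative differences quoted above are presumably computed as

$$\Delta_{\text{rel}} = \frac{m_{\text{multi}} - m_{\text{single}}}{m_{\text{single}}} \times 100\%,$$

so the +119.20% figure means the multi-region median latency is roughly 2.2 times the single-region median, and -54.41% means the multi-region median throughput is a bit under half the baseline.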
|
855 |
Monitoring software usage and usage behaviour based on SaaS data: case Gemini Water portfolio. Bredberg, August. January 2023 (has links)
Software as a Service (SaaS) platforms paired with cloud-based storage are a common pattern among software providers across the globe. Such solutions usually accumulate vast amounts of usage and usage-behaviour data, and putting this data to work in monitoring solutions can generate great value for both software users and software providers. Swedish authorities, however, have taken a restrictive standpoint against incorporating cloud-based storage into software used in the public sector. This project aims to identify a practical and reusable way of utilizing cloud data and to demonstrate to Swedish authorities how cloud-based storage models can be beneficial. The case for the project is the SaaS solution Gemini Portal+, a water and sewage management system. The end result is a monitoring module for Gemini Portal+ in which users can view the digital maturity of their Gemini Portal+ usage. Digital maturity is conveyed in an easily digestible manner, with concrete and actionable information on how to increase it. The module passed its requirements, and stakeholders are satisfied with the outcome.
|
856 |
Cloud-Based Collaborative Local-First Software. Vallin, Tor. January 2023 (has links)
Local-first software has the potential to offer users a great experience by combining the best aspects of traditional applications with those of cloud-based applications. However, little has been documented about building backends for local-first software, particularly ones that scale while still supporting end-to-end encryption. This thesis presents a backend architecture that was implemented and evaluated. The implementation was shown to be scalable and maintained an estimated end-to-end latency of around 30-50 ms as the number of simulated clients increased. The architecture supports end-to-end encryption to protect user privacy and to ensure that neither cloud nor service providers can access user data. Furthermore, by occasionally performing snapshots, the encryption overhead was shown to be manageable compared to the raw data: around 18.2% in the best case and 118.9% when using data from automerge-perf, a standard benchmark. Lastly, processing was shown to be upwards of 50 times faster when using snapshots compared to handling individual changes.
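A conceptual Go sketch of why snapshotting bounds the per-change overhead; the compaction threshold, types, and the stand-in for real encryption are all assumptions, not the thesis implementation.

```go
package main

import "fmt"

// Doc models a local-first sync log. Every small change carries a fixed
// per-message encryption overhead, so pending changes are periodically
// compacted into one snapshot encrypted as a single larger blob.
type Doc struct {
	snapshot []byte   // last compacted (and, in reality, encrypted) state
	changes  [][]byte // changes accumulated since the last snapshot
}

const snapshotEvery = 100 // illustrative compaction threshold

func (d *Doc) Append(change []byte) {
	d.changes = append(d.changes, change)
	if len(d.changes) >= snapshotEvery {
		d.compact()
	}
}

// compact folds pending changes into one snapshot, amortising the
// per-change overhead across a single blob.
func (d *Doc) compact() {
	merged := append([]byte{}, d.snapshot...)
	for _, c := range d.changes {
		merged = append(merged, c...)
	}
	d.snapshot = merged // a real backend would re-encrypt merged here
	d.changes = nil
}

func main() {
	d := &Doc{}
	for i := 0; i < 250; i++ {
		d.Append([]byte("op"))
	}
	fmt.Println("pending changes after compaction:", len(d.changes)) // 50
}
```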
|
857 |
Evaluating the performance and usability of HTTP vs gRPC in communication between microservices. Hamo, Najem; Saberian, Simon. January 2023 (has links)
Microservices are an architectural style that has only grown more popular as the need for scalable, performant internet-based applications has increased. One of the characteristics of microservices is communication through lightweight protocols like HTTP. These protocols are usually provided through frameworks that offer an abstracted form of communication; when implementing services in the Go language, the most common choices are gRPC and net/http. The aim of this thesis is to evaluate and compare the performance and usability of the gRPC and HTTP frameworks in order to determine which is better suited for microservices, so that developers can make more informed technology choices. We investigated performance and usability through two experiments. For the first, we created two services implemented as identically as possible in Go, one communicating via the net/http framework and the other via gRPC. The services implemented methods returning small, medium, and large payloads and were load-tested at varying numbers of virtual users. The second experiment recruited a set of participants who completed two sets of coding tasks, once using gRPC and once using HTTP. Afterwards, the participants filled out a questionnaire measuring their experience with the frameworks, and their answers were turned into a score used to analyze the frameworks. The results from the performance experiment indicated that gRPC performed better in terms of throughput and latency while HTTP performed better in scalability, and the results from the usability experiment indicated that the participants found HTTP more usable.
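For reference, the net/http side of such a comparison can be as small as the sketch below; the route and payload are invented for illustration. The gRPC counterpart additionally requires a .proto service definition and generated stubs, which is part of the usability gap the study measured.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// A minimal net/http microservice of the kind compared in the experiment:
// one handler returning a small JSON payload, served on a local port.
func main() {
	http.HandleFunc("/payload/small", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprint(w, `{"data":"small payload"}`)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```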
|
858 |
Extending the Kubernetes operator Kubegres to handle database restoration from dump files. Bemm, Rickard. January 2023 (has links)
The use of cloud-native technologies has grown in popularity in recent years. With its ability to take advantage of the full benefits of cloud computing, cloud-native architecture has become a hot topic among developers and IT professionals. It refers to building and running applications using cloud services and architectures, including containerization, microservices, and automation tools such as Kubernetes, to enable fast and continuous delivery of software. In Kubernetes, the desired state of a resource is described declaratively, and the platform handles the details of how to get there. Databases are notoriously hard to deploy in such environments, and the Kubernetes operator pattern addresses this by extending Kubernetes with custom resources and with the logic, called the reconcile function, that drives them toward their desired state. Operators exist that manage PostgreSQL databases with backup and restore functionality, but some require a license. Kubegres is a free-to-use open-source operator that lacks restore functionality. This thesis aims to extend the Kubegres operator to support database restoration from dump files. It covers how to create the restore process in Kubernetes, what modifications must be made to the current architecture, and how to make the reconcile function robust and self-healing yet customizable enough to fit many different needs. The designs of other operators that already support database restoration were studied, and they inspired the design of the resource definition and the restoration process. A new resource definition was added to describe the desired state of a database restoration, along with a new reconcile function that defines how to act on it; the desired state is re-derived each time the reconcile function is triggered. During restoration a new database is always the target, and once the restoration completes, the helper resources are deleted, leaving only the PostgreSQL database. To evaluate the operator, the performance impact of the modified operator was measured against the original. The tests consisted of operations both versions support, including PostgreSQL database creation, cluster scaling, and changing resource limits. The two collected metrics, CPU and memory usage, increased by 0.058-0.4 milli-vCPU (12-33%) and 8.2 MB (29%), respectively. A qualitative evaluation against qualities such as robustness, self-healing, customizability, and correctness showed that the design fulfils most of them.
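A schematic of the reconcile pattern the thesis extends, sketched with the controller-runtime library that Kubernetes operators are commonly built on; the type name and the numbered steps are illustrative, not Kubegres's actual code.

```go
package main

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

// RestoreReconciler is a hypothetical reconciler for a database-restore
// resource; the names and steps are illustrative only.
type RestoreReconciler struct{}

// Reconcile is called whenever the watched resource changes. It compares
// the declared desired state with the actual cluster state and acts to
// converge them; this is the core of the operator pattern described above.
func (r *RestoreReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// 1. Fetch the restore resource identified by req.NamespacedName.
	// 2. If a restore is requested and no target database exists yet,
	//    create a fresh PostgreSQL cluster and a Job that replays the dump.
	// 3. When the restore Job completes, delete the helper resources so
	//    that only the restored PostgreSQL database remains.
	// 4. On transient errors, return an error so the request is requeued
	//    and retried; this retry loop is what makes the operator
	//    self-healing.
	return ctrl.Result{}, nil
}
```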
|
859 |
Network Performance Improvement for Cloud Computing using Jumbo Frames. Kanthla, Arjun Reddy. January 2014 (has links)
The surge in cloud computing is due to its cost-effective benefits and the rapid scalability of computing resources, and the crux of this is virtualization. Virtualization technology enables a single physical machine to be shared by multiple operating systems. This increases the efficiency of the hardware and hence decreases the cost of cloud computing. However, as the load in the guest operating systems increases, at some point the physical resources cannot support all the applications efficiently. Input and output services, especially network applications, must share the same total bandwidth, and this sharing can be negatively affected by virtualization overheads. Network packets may undergo additional processing and have to wait until the virtual machine is scheduled by the underlying hypervisor before reaching the final service application, such as a web server. In a virtualized environment it is not the load (due to the processing of user data) but the network overhead that is the major problem. Modern network interface cards have enhanced network virtualization by handling IP packets more intelligently through TCP segmentation offload, interrupt coalescence, and other virtualization-specific hardware. Jumbo frames have long been proposed for their advantages in traditional environments: they increase network throughput and decrease CPU utilization. Jumbo frames can better exploit Gigabit Ethernet and offer great enhancements to the virtualized environment by utilizing the bandwidth more effectively while lowering processor overhead. This thesis shows a network performance improvement of 4.7% in a Xen virtualized environment by using jumbo frames. Additionally, the thesis examines TCP's performance in Xen and compares Xen with the same operations running on a native Linux system.
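Jumbo frames raise the interface MTU from the Ethernet default of 1500 bytes to, typically, 9000 bytes, so each frame carries roughly six times more payload per header and per interrupt. A small Go check, where the interface name "eth0" is an assumption for your system:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

// Reports whether an interface is configured for jumbo frames by reading
// its MTU from the OS; an MTU of 9000 or more is the usual jumbo setting.
func main() {
	iface, err := net.InterfaceByName("eth0")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s MTU = %d, jumbo frames enabled: %v\n",
		iface.Name, iface.MTU, iface.MTU >= 9000)
}
```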
|
860 |
Efficient Social Network Data Query Processing on MapReduce. Liu, Liu. 01 January 2013 (has links) (PDF)
Social network data analysis is increasingly important today. To improve the integration and reuse of their data, many social networks have started to represent their data in RDF. Accordingly, one common approach to social network data analysis is to employ SPARQL to query the RDF data.
As the sizes of social networks expand rapidly, queries need to be executed in parallel, for example on the MapReduce framework. However, the state-of-the-art translation from SPARQL queries to MapReduce jobs, which mainly follows a two-layer scheme in which SPARQL is first translated to SQL joins, is not efficient. In this thesis, we introduce two primitives that enable automatic translation from SPARQL to MapReduce and efficient execution of the SPARQL queries: we substitute a multiple-join-with-filter for the traditional SQL multiple join when feasible, and we merge different stages in the MapReduce query workflow. Evaluation on social network benchmarks shows that these two primitives can achieve up to 2x speedup in query running time compared with the original two-layer scheme.
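A single-machine analogue of the multiple-join-with-filter idea, with invented names and data: a star-shaped pattern whose triple patterns share a subject can be answered in one map/reduce stage by grouping triples on the subject and filtering out subjects that lack a required predicate, instead of cascading binary joins.

```go
package main

import "fmt"

// Triple is one RDF statement: subject, predicate, object.
type Triple struct{ S, P, O string }

// mapPhase groups triples by subject, i.e. the shuffle key of the
// single MapReduce stage.
func mapPhase(triples []Triple) map[string][]Triple {
	groups := make(map[string][]Triple)
	for _, t := range triples {
		groups[t.S] = append(groups[t.S], t)
	}
	return groups
}

// reducePhase keeps only subjects that carry every required predicate,
// performing the multi-way join and the filter in one pass.
func reducePhase(groups map[string][]Triple, required []string) []string {
	var out []string
	for s, ts := range groups {
		seen := make(map[string]bool)
		for _, t := range ts {
			seen[t.P] = true
		}
		keep := true
		for _, p := range required {
			if !seen[p] {
				keep = false
				break
			}
		}
		if keep {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	data := []Triple{
		{"alice", "foaf:knows", "bob"},
		{"alice", "foaf:name", `"Alice"`},
		{"bob", "foaf:name", `"Bob"`},
	}
	// Subjects that both know someone and have a name: [alice]
	fmt.Println(reducePhase(mapPhase(data), []string{"foaf:knows", "foaf:name"}))
}
```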
|