31 |
Μελέτη και ανάπτυξη τεχνικών για την αποτελεσματική διαχείριση πόρων σε δίκτυα πλέγματος και υποδομές υπολογιστικών νεφών
Κρέτσης, Αριστοτέλης, 25 February 2014 (has links)
Οι τεχνολογίες κατανεμημένου υπολογισμού, όπως τα δίκτυα πλέγματος και οι υποδομές Νέφους, έχουν διαμορφώσει πλέον ένα καινούργιο περιβάλλον σχετικά με τον τρόπο που εκτελούνται οι εργασίες των χρηστών, αποθηκεύονται τα δεδομένα και γενικότερα χρησιμοποιούνται οι εφαρμογές. Τα δίκτυα πλέγματος αποτέλεσαν το επίκεντρο της σχετικής ερευνητικής δραστηριότητας για μεγάλο χρονικό διάστημα, με βασικό στόχο τη δημιουργία υποδομών για την εκτέλεση ερευνητικών εφαρμογών με πολύ υψηλές υπολογιστικές και αποθηκευτικές απαιτήσεις. Ωστόσο είναι πλέον προφανές ότι υπάρχει μια στροφή προς τις υποδομές Νέφους που προσφέρουν υπηρεσίες κατανεμημένου υπολογισμού και αποθήκευσης μέσω πλήρως διαχειρίσιμων πόρων. Η συγκεκριμένη μετάβαση έχει ως αποτέλεσμα μια μετατόπιση από το μοντέλο των πολλών και ισχυρών πόρων που βρίσκονται κατανεμημένοι σε διάφορες περιοχές του κόσμου (όπως στα δίκτυα πλέγματος) προς σχετικά λιγότερα αλλά πολύ μεγαλύτερα ως προς το μέγεθος κέντρα δεδομένων τα οποία αποτελούνται από χιλιάδες υπολογιστικούς πόρους οι οποίοι φιλοξενούν ακόμη περισσότερες εικονικές μηχανές.
Η έρευνα που διεξάγαμε ακολούθησε αυτή την αλλαγή, μελετώντας αλγοριθμικά θέματα για δίκτυα πλέγματος και υποδομές Νεφών και αναπτύσσοντας μια σειρά από εργαλεία και εφαρμογές που διαχειρίζονται, παρακολουθούν και αξιοποιούν τους πόρους που προσφέρουν οι συγκεκριμένες υποδομές.
Αρχικά, μελετούμε τα ζητήματα που προκύπτουν κατά την υλοποίηση αλγορίθμων χρονοπρογραμματισμού, που είχαν προηγουμένως μελετηθεί σε περιβάλλοντα προσομοίωσης, σε ένα πραγματικό σύστημα ενδιάμεσου λογισμικού για δίκτυα πλέγματος, και συγκεκριμένα το gLite. Το πρώτο ζήτημα που αντιμετωπίσαμε είναι το γεγονός ότι οι πληροφορίες που παρέχει το ενδιάμεσο λογισμικό gLite στους αλγορίθμους χρονοπρογραμματισμού δεν είναι πάντα έγκυρες, γεγονός που επηρεάζει την απόδοσή τους. Για την αντιμετώπιση του προβλήματος αναπτύξαμε έναν εσωτερικό, στο χρονοπρογραμματιστή, μηχανισμό που καταγράφει τις αποφάσεις του σχετικά με το ποιες εργασίες ανατέθηκαν σε ποιους υπολογιστικούς πόρους και λειτουργεί συμπληρωματικά με την υπηρεσία πληροφοριών του gLite. Επιπλέον, εξετάζουμε το ζήτημα του δίκαιου διαμοιρασμού της υπολογιστικής χωρητικότητας ενός πόρου στις εργασίες που έχουν ανατεθεί σε αυτόν. Για το σκοπό αυτό, επεκτείνουμε το ενδιάμεσο λογισμικό gLite ώστε να περιλαμβάνει έναν νέο μηχανισμό που μέσω της αξιοποίησης της τεχνολογίας εικονικοποίησης επιτρέπει τον ταυτόχρονο διαμοιρασμό της υπολογιστικής χωρητικότητας ενός κόμβου σε πολλές εργασίες.
Στη συνέχεια εξετάζουμε το πρόβλημα της συνδυασμένης μεταφοράς πολλαπλών εικονικών μηχανών σε σύγχρονες υπολογιστικές υποδομές. Πιο συγκεκριμένα, προτείνουμε μια μεθοδολογία που στοχεύει στην καλύτερη χρησιμοποίηση των διαθέσιμων υπολογιστικών και δικτυακών πόρων, λαμβάνοντας υπόψη στις αποφάσεις σχετικά με τη συνδυασμένη μεταφορά εικονικών μηχανών τις αλληλεξαρτήσεις που δημιουργούνται από την επικοινωνία τους. Η προτεινόμενη μεθοδολογία χρησιμοποιεί την προσέγγιση πολλαπλών κριτηρίων για την επιλογή των εικονικών μηχανών που θα μετακινηθούν, αναθέτοντας διαφορετικά βάρη στα διάφορα κριτήρια ενδιαφέροντος. Επιπλέον, επιλέγει τους υπολογιστικούς κόμβους όπου οι μετακινούμενες εικονικές μηχανές θα φιλοξενηθούν, λαμβάνοντας υπόψη τον τρόπο με τον οποίο οι μετακινήσεις επηρεάζουν τις λογικές (ή εικονικές) τοπολογίες που σχηματίζονται από την επικοινωνία τους και αντιμετωπίζοντας τη συγκεκριμένη επιλογή ως ένα πρόβλημα αναδιάρθρωσης λογικών τοπολογιών. Η αξιολόγηση επιβεβαίωσε τη δυνατότητα της μεθοδολογίας να επιλύει, μέσω των κατάλληλων μετακινήσεων, ένα σημαντικό αριθμό προβλημάτων που οφείλονται σε ελλείψεις υπολογιστικών ή επικοινωνιακών πόρων, ελαχιστοποιώντας παράλληλα τον αριθμό των μετακινήσεων και την προκαλούμενη επιβάρυνση του δικτύου.
Το επόμενο θέμα που εξετάζουμε αφορά το πρόβλημα της ανάλυσης δεδομένων επικοινωνίας μεταξύ εικονικών μηχανών οι οποίες φιλοξενούνται σε ένα κέντρο δεδομένων. Προτείνουμε και αξιολογούμε, μέσω της ανάλυσης δεδομένων από ένα πραγματικό κέντρο δεδομένων, την εφαρμογή μετρικών και τεχνικών από τη θεωρία ανάλυσης κοινωνικών δικτύων για τον προσδιορισμό σημαντικών εικονικών μηχανών, για παράδειγμα εικονικές μηχανές οι οποίες απαιτούν περισσότερο εύρος ζώνης σε σχέση με άλλες, και ομάδων εικονικών μηχανών που συσχετίζονται με κάποιο τρόπο μεταξύ τους. Μέσω της συγκεκριμένης προσέγγισης έχουμε τη δυνατότητα να εξάγουμε σημαντικές πληροφορίες οι οποίες μπορούν να αξιοποιηθούν για τη λήψη καλύτερων αποφάσεων σχετικά με τη διαχείριση του πολύ μεγάλου πλήθους των εικονικών μηχανών που φιλοξενούνται στα σύγχρονα κέντρα δεδομένων.
Στη συνέχεια προσδιορίζουμε τρόπους με τους οποίους οι πληροφορίες παρακολούθησης που συλλέγονται από τη λειτουργία μιας δημόσιας υποδομής Υπολογιστικού Νέφους, και ιδίως από την υπηρεσία Amazon Web Services (AWS), μπορούν να χρησιμοποιηθούν με έναν αποδοτικό τρόπο προκειμένου να εξάγουμε πολύτιμες πληροφορίες, που μπορούν να αξιοποιηθούν από τους τελικούς χρήστες για την αποτελεσματικότερη διαχείριση των εικονικών πόρων τους. Πιο συγκεκριμένα, παρουσιάζουμε το σχεδιασμό και την υλοποίηση ενός εργαλείου ανοιχτού κώδικα, του SuMo, στο οποίο έχουμε υλοποιήσει όλη την απαραίτητη λειτουργικότητα για τη συλλογή και ανάλυση δεδομένων παρακολούθησης από την υπηρεσία AWS. Επιπλέον, προτείνουμε έναν μηχανισμό για τη βελτιστοποίηση του κόστους και της αξιοποίησης (Cost and Utilization Optimization - CUO) των εικονικών υπολογιστικών πόρων της υπηρεσίας AWS. Ο μηχανισμός CUO χρησιμοποιεί πληροφορίες (πλήθος, ακριβή χαρακτηριστικά, ποσοστό αξιοποίησης) για τους διαθέσιμους εικονικούς πόρους ενός χρήστη και προτείνει ένα νέο (βέλτιστο) σύνολο πόρων που θα μπορούσαν να χρησιμοποιηθούν για την αποδοτικότερη εξυπηρέτηση του ίδιου φορτίου εργασίας με μειωμένο κόστος.
Τέλος, παρουσιάζουμε την υλοποίηση ενός ολοκληρωμένου εργαλείου, που ονομάζουμε Mantis, για το σχεδιασμό και τη λειτουργία των μελλοντικών ευέλικτων (flex-grid) οπτικών δικτύων που υποστηρίζει επιπλέον οπτικά δίκτυα σταθερού πλέγματος τόσο μοναδικού ρυθμού μετάδοσης όσο και πολλαπλών ρυθμών μετάδοσης. Οι χρήστες έχουν τη δυνατότητα να καθορίζουν δικτυακές τοπολογίες, απαιτήσεις κίνησης, παραμέτρους για το κόστος απόκτησης και λειτουργίας των δικτυακών συσκευών, ενώ επιπλέον έχουν πρόσβαση σε αρκετούς αλγορίθμους για το σχεδιασμό, λειτουργία και αξιολόγηση διαφόρων οπτικών δικτύων. Το εργαλείο έχει σχεδιαστεί ώστε να μπορεί να λειτουργεί είτε ως υπηρεσία (Software as a Service) είτε ως κλασσική εφαρμογή (Desktop Application). Λειτουργώντας ως υπηρεσία παρέχει κλιμάκωση με βάση τις απαιτήσεις των χρηστών, αξιοποιώντας τα πλεονεκτήματα των υποδομών Υπολογιστικού Νέφους, εκτελώντας γρήγορα και αποτελεσματικά τις εργασίες των χρηστών. Για τη λειτουργία αυτή, μπορεί να χρησιμοποιεί τόσο δημόσιες υποδομές Υπολογιστικού Νέφους όπως η υπηρεσία Amazon Web Services (AWS) και η υπηρεσία της ΕΔΕΤ (~okeanos), όσο και ιδιωτικές που βασίζονται στο OpenStack. Επιπλέον, η αρθρωτή αρχιτεκτονική και η υλοποίηση των διαφόρων λειτουργικών τμημάτων επιτρέπουν την εύκολη επέκταση του εργαλείου ώστε να υποστηρίζει μελλοντικά περισσότερες υποδομές Υπολογιστικού Νέφους. / Distributed computing technologies, like grids and clouds, shape today a new environment, regarding the way tasks are executed, data are stored and retrieved, and applications are used. Though grids and desktop grids have been the focus of the research community for a long time, a shift has become evident today towards cloud and virtualization related technologies in general, which are supported by large computing factories, namely the data centers. 
As a result there is also a shift from the model of several powerful resources distributed at various locations in the world (as in grids) towards fewer huge data centers consisting of thousands of “simple” computers that host Virtual Machines.
The research performed over the course of my PhD followed this shift, investigating algorithmic issues in the context of grids and then of clouds and developing a number of tools and applications that manage, monitor and utilize these kinds of resources.
Initially, we describe the steps followed, the difficulties encountered, and the solutions provided in developing and evaluating, in the gLite grid middleware, a scheduling policy initially implemented in a simulation environment. Our focus is on a scheduling algorithm that allocates the available resources fairly among the requesting users or jobs. During the actual implementation of this algorithm in gLite, we observed that the validity of the information used by the scheduler for its decisions greatly affects its performance. To improve the accuracy of this information, we developed an internal feedback mechanism that operates along with the scheduling algorithm. Also, a Grid computation resource cannot be shared concurrently between different users or jobs, making it difficult to provide actual fairness. For this reason we investigated the use of virtualization technology in the gLite middleware. We implement and evaluate our scheduling algorithm and the proposed mechanisms in a small gLite testbed.
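A minimal sketch of such a feedback mechanism, with all names hypothetical (the actual gLite implementation differs): the scheduler keeps its own record of assignments it has already made and adds it to the possibly stale load reported by the information service.

```python
class FeedbackScheduler:
    """Toy scheduler that supplements a stale information service
    with its own record of recent job-to-resource assignments."""

    def __init__(self, resources):
        # reported_load mimics the middleware's information service,
        # which may lag behind reality
        self.reported_load = {r: 0 for r in resources}
        # pending tracks assignments this scheduler has made that the
        # information service has not yet reflected
        self.pending = {r: 0 for r in resources}

    def estimated_load(self, r):
        return self.reported_load[r] + self.pending[r]

    def assign(self, job):
        # pick the resource with the lowest *estimated* load
        target = min(self.pending, key=self.estimated_load)
        self.pending[target] += 1
        return target

sched = FeedbackScheduler(["ce1", "ce2"])
placements = [sched.assign(j) for j in range(4)]
```

The `pending` counters stand in for the scheduler-internal log of which jobs went to which computing elements; in practice they would be reconciled as the information service catches up.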
Next, we present a methodology, called communication-aware virtual infrastructures (COMAVI), for the concurrent migration of multiple Virtual Machines (VMs) in computing infrastructures, which aims at the optimum use of the available computational and network resources, by capturing the interdependencies between the communicating VMs. This methodology uses multiple criteria for selecting the VMs that will migrate, with different weights assigned to each of them. COMAVI also selects the computing sites where the migrating VMs will be hosted, by accounting for the way migration affects the logical (or virtual) topologies formed by the communicating VMs and viewing this selection as a logical topology reconfiguration problem. We apply COMAVI to two basic computing infrastructures that exhibit different constraints/criteria and characteristics: a grid infrastructure operating over a wide area network (WAN) and a data center infrastructure operating over a local area network (LAN). Through the presented methodology different communication-aware VM migration algorithms can be tailored to the needs of the resource provider. The algorithms presented resolve the maximum possible number of VM violations (due to computing or communication resource shortages), while tending to minimize the number of migrations performed, the induced network overhead, the logical topology reconfigurations required, and the corresponding service interruptions. We evaluate the proposed methods through simulations in realistic computing environments, and we exhibit their performance benefits.
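The multi-criteria selection step can be illustrated with a toy weighted scoring function; the criterion names and weights below are invented for illustration and are not COMAVI's actual ones.

```python
def rank_vms_for_migration(vms, weights):
    """Score candidate VMs by a weighted sum of normalized criteria and
    return them ordered from most to least attractive to migrate."""
    def score(vm):
        # higher score = stronger case for migrating this VM
        return sum(weights[c] * vm[c] for c in weights)
    return sorted(vms, key=score, reverse=True)

# Hypothetical candidates: cpu_pressure and traffic are normalized to [0, 1]
vms = [
    {"name": "vm1", "cpu_pressure": 0.9, "traffic": 0.2},
    {"name": "vm2", "cpu_pressure": 0.4, "traffic": 0.8},
]
weights = {"cpu_pressure": 0.7, "traffic": 0.3}
ranked = rank_vms_for_migration(vms, weights)
```

Changing the weights shifts which violations the provider prioritizes, which is the tailoring knob the methodology exposes.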
We also consider the use of social network analysis methods on communication traces, collected from Virtual Machines (VMs) located in computing infrastructures, like a data center. Our aim is to identify important VMs, for example VMs that require more bandwidth than other VMs or VMs that communicate often with other VMs. We believe that this approach can handle the large number of VMs present in computing infrastructures and their interactions in the same way social interactions of millions of people are analyzed in today’s social networks. We are interested in identifying measures that can locate these important VMs or groups of interacting VMs, missed through other usual metrics and also capture the time-dynamicity of their interactions. In our work we use real traces and evaluate the applicability of the considered methods and measures.
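As a minimal illustration of one such measure, a weighted degree centrality over communication traces can be computed without any graph library (VM names and byte counts are invented):

```python
from collections import defaultdict

def weighted_degree(traffic):
    """Rank VMs by total bytes exchanged, a simple centrality measure.
    `traffic` is a list of (src, dst, bytes) trace records."""
    totals = defaultdict(int)
    for src, dst, nbytes in traffic:
        # an undirected view: both endpoints accumulate the volume
        totals[src] += nbytes
        totals[dst] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

traffic = [("vmA", "vmB", 500), ("vmA", "vmC", 300), ("vmB", "vmC", 100)]
ranking = weighted_degree(traffic)
```

The top of the ranking points at bandwidth-hungry VMs; richer social-network measures (betweenness, community detection) refine this basic picture.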
In addition, we consider the analysis and optimization of public clouds. For this reason, we identify important algorithmic operations that should be part of a cloud analysis and optimization tool, including resource profiling, performance spike detection and prediction, resource resizing, and others, and we investigate ways in which the collected monitoring information can be processed towards these purposes. The analyzed information is valuable since it can drive important virtual resource management decisions. We also present an open-source tool we developed, called SuMo, which contains the necessary functionalities for collecting monitoring data from Amazon Web Services (AWS), analyzing them and providing resource optimization suggestions. We also present a Cost and Utilization Optimization (CUO) mechanism for optimizing the cost and the utilization of a set of running Amazon EC2 instances, which is formulated as an Integer Linear Programming (ILP) problem. This CUO mechanism receives information regarding the current set of instances used (their number, type, utilization) and proposes a new set of instances for serving the same load, so as to minimize cost and maximize utilization and performance efficiency.
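The flavour of the CUO formulation can be sketched by brute force over a tiny hypothetical catalogue; the thesis solves the real problem as an ILP, and actual EC2 instance types and prices differ from the made-up numbers below.

```python
from itertools import product

# Hypothetical instance catalogue: name -> (capacity units, hourly cost).
CATALOGUE = {"small": (1, 0.05), "large": (4, 0.17)}

def cheapest_fleet(demand, max_count=10):
    """Enumerate instance counts and return the cheapest mix whose total
    capacity covers `demand` - the objective the ILP optimizes."""
    best = None
    names = list(CATALOGUE)
    for counts in product(range(max_count + 1), repeat=len(names)):
        cap = sum(n * CATALOGUE[t][0] for n, t in zip(counts, names))
        cost = sum(n * CATALOGUE[t][1] for n, t in zip(counts, names))
        if cap >= demand and (best is None or cost < best[0]):
            best = (cost, dict(zip(names, counts)))
    return best

cost, fleet = cheapest_fleet(demand=6)
```

An ILP solver replaces the enumeration once the catalogue and constraints (utilization floors, performance requirements) grow beyond toy size.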
Finally, we present a network planning and operation tool, called Mantis, for designing the next generation optical networks, supporting both flexible and mixed line rate WDM networks. Through Mantis, the user is able to define the network topology, current and forecasted traffic matrices, CAPEX/OPEX parameters, set up basic configuration parameters, and use a library of algorithms to plan, operate, or run what-if scenarios for an optical network of interest. Mantis is designed to be deployed either as a cloud service or as a desktop application. Using the cloud infrastructures features Mantis can scale according to the user demands, executing fast and efficiently the scenarios requested. Mantis supports different cloud platforms either public such as Amazon Elastic Compute Cloud (Amazon EC2) and ~okeanos the GRNET’s cloud service or private based on OpenStack, while its modular architecture allows other cloud infrastructures to be adopted in the future with minimum effort.
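One ingredient of flex-grid planning of the kind Mantis's algorithm library covers is spectrum assignment; a first-fit sketch follows (slot counts are illustrative and this is not Mantis's actual code):

```python
def first_fit_slots(occupied, demand_slots, total_slots=320):
    """Return the first contiguous block of free spectrum slots that can
    carry the demand, or None if no block fits. `occupied` is a set of
    taken slot indices; 320 slots of 12.5 GHz roughly spans a C-band
    flex-grid, but the sizes here are only illustrative."""
    run_start, run_len = None, 0
    for s in range(total_slots):
        if s in occupied:
            run_start, run_len = None, 0  # contiguity broken, restart
            continue
        if run_start is None:
            run_start = s
        run_len += 1
        if run_len == demand_slots:
            return list(range(run_start, run_start + demand_slots))
    return None

assignment = first_fit_slots(occupied={0, 1, 4}, demand_slots=2)
```

Real planning couples this with routing and modulation-format selection, which is where the algorithm library and what-if scenarios come in.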
|
32 |
Mobile Cloud Computing: Offloading Mobile Processing to the Cloud
Zambrano, Jesus, 01 January 2015 (has links)
The current proliferation of mobile systems, such as smartphones, PDAs, and tablets, has led to their adoption as the primary computing platforms for many users. This trend suggests that designers will continue to aim for the convergence of functionality on a single mobile device. However, this convergence penalizes the mobile system in computational resources such as processor speed, memory capacity, and disk capacity, as well as in weight, size, ergonomics, and the component users care about most: battery life. Therefore, the current trend aims at the efficient and effective use of the device's hardware and software components. Energy consumption and response time are major concerns when executing complex algorithms on mobile devices, because such algorithms require significant resources to solve intricate problems.
Current cloud computing environments for performing complex and data intensive computation remotely are likely to be an excellent solution for off-loading computation and data processing from mobile devices restricted by reduced resources. In cloud computing, virtualization enables a logical abstraction of physical components in a scalable manner that can overcome the physical constraint of resources. This optimizes IT infrastructure and makes cloud computing a worthy cost effective solution.
The intent of this thesis is to determine the types of applications that are better suited to be off-loaded to the cloud from mobile devices. To this end, this thesis quantitatively and
qualitatively compares the performance of executing two different kinds of workloads locally on two different mobile devices and remotely on two different cloud computing providers. The results of this thesis are expected to provide valuable insight to developers and architects of mobile applications by providing information on the applications that can be performed remotely in order to save energy and get better response times while remaining transparent to users.
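A crude back-of-the-envelope model of the offloading decision, not taken from this thesis and with every constant invented, captures why workload type matters: offloading pays when local compute energy exceeds the energy to ship the data plus the idle wait for the result.

```python
def should_offload(cycles, data_bytes, *,
                   local_joules_per_cycle=1e-9,
                   radio_joules_per_byte=5e-7,
                   idle_joules_per_s=0.02,
                   cloud_seconds=0.5):
    """Crude energy model for the offload decision. All constants are
    hypothetical; real devices require measurement."""
    local_energy = cycles * local_joules_per_cycle
    offload_energy = (data_bytes * radio_joules_per_byte
                      + idle_joules_per_s * cloud_seconds)
    return offload_energy < local_energy

# A heavy computation over little data favours the cloud...
heavy = should_offload(cycles=10**9, data_bytes=10_000)
# ...while a light computation over bulky data favours local execution.
light = should_offload(cycles=10**6, data_bytes=10**7)
```

The two workloads compared in the thesis probe exactly this compute-to-communication trade-off, with response time measured alongside energy.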
|
33 |
Simulating and prototyping software defined networking (SDN) using Mininet approach to optimise host communication in realistic programmable networking environment
Zulu, Lindinkosi Lethukuthula, 11 1900 (has links)
In this project, two tests were performed. On the first test, Mininet-WiFi was used to simulate a
Software Defined Network to demonstrate Mininet-WiFi's ability to be used as a Software
Defined Network emulator which can also be integrated into an existing network using a
Virtualized Network Function (VNF). A typical organization's computer network was simulated which
consisted of a website hosted on the LAMP (Linux, Apache, MySQL, PHP) virtual machine, and
an F5 application delivery controller (ADC) which provided load balancing of requests sent to the
web applications. A website page request was sent from the virtual stations inside Mininet-WiFi.
The request was received by the application delivery controller, which then used a round-robin
technique to send the request to one of the web servers on the LAMP virtual machine. The web
server then returned the requested website to the requesting virtual stations using the simulated
virtual network. The significance of these results is that it presents Mininet-WiFi as an emulator,
which can be integrated into a real programmable networking environment offering a portable,
cost effective and easily deployable testing network, which can be run on a single computer. These
results are also beneficial to modern network deployments, as live network devices can also
communicate with the testing environment of data center, cloud, and mobile providers.
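The round-robin dispatch performed by the ADC can be pictured in a few lines (backend names invented; the actual setup used an F5 appliance):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin dispatcher: each request goes to the next
    backend in a repeating ring."""

    def __init__(self, backends):
        self._ring = cycle(backends)

    def route(self, request):
        # the request content is ignored; only arrival order matters
        return next(self._ring)

lb = RoundRobinBalancer(["web1", "web2", "web3"])
targets = [lb.route(f"GET /page{i}") for i in range(4)]
```

Production balancers add health checks and weighting, but the rotation above is the core of the technique named in the abstract.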
On the second test, a Software Defined Network was created in Mininet using a Python script. An
external interface was added to enable communication with the network outside of Mininet. The
Amazon Web Services Elastic Compute Cloud (EC2) was used to host an OpenDaylight controller. This
controller is used as a control plane device for the virtual switch within Mininet. In order to test
the network, a webserver hosted on the Emulated Virtual Environment – Next Generation (EVE-NG)
software is connected to Mininet. EVE-NG is the Emulated Virtual Environment for
networking. It provides tools to be able to model virtual devices and interconnect them with other
virtual or physical devices. The OpenDaylight controller was able to create the flows to facilitate
communication between the hosts in Mininet and the webserver in the real-life network. / Electrical and Mining Engineering / M. Tech. (Electrical Engineering)
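The flows the controller installs can be pictured as match-action entries; a toy lookup follows, with match fields simplified far beyond OpenFlow's real structure and all addresses invented:

```python
def lookup(flow_table, packet):
    """Return the action of the first entry whose match fields all equal
    the packet's fields; default-drop otherwise. A real OpenFlow switch
    matches on many more fields and honours entry priorities."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"

# Entries a controller might push so an internal host and an external
# webserver can reach each other (addresses and ports invented).
flow_table = [
    ({"dst": "10.0.0.1"}, "output:1"),
    ({"dst": "192.168.1.10"}, "output:2"),
]
action = lookup(flow_table, {"src": "10.0.0.1", "dst": "192.168.1.10"})
```

Without a matching entry the packet is dropped (or, in a live deployment, punted to the controller), which is why flow installation is what makes the Mininet-to-webserver path work.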
|
34 |
DATA-CENTRIC DECISION SUPPORT SYSTEM FRAMEWORK FOR SELECTED APPLICATIONS
Xiang Gu (11090106), 15 December 2021 (has links)
Web and digital technologies have grown continuously over the past five years. The data generated by Internet of Things (IoT) devices are heterogeneous, which increases the difficulty of data storage and management. This thesis developed user-friendly data-management system frameworks in a local environment and on a cloud platform. The two frameworks were applied to two industrial applications: an agriculture informatics system and a personal healthcare management system. The systems are capable of information management and two-way communication through a user-friendly interface.
|
35 |
Unsupervised anomaly detection for structured data - Finding similarities between retail products
Fockstedt, Jonas; Krcic, Ema, January 2021 (has links)
Data is one of the most important factors in modern business operations. Bad data can therefore lead to tremendous losses, both financially and in customer experience. This thesis seeks to find anomalies in real-world, complex, structured data that cause an international enterprise to miss out on income and potentially lose customers. Using graph theory and similarity analysis, the findings suggest that certain countries contribute to the discrepancies more than others. This is believed to be an effect of countries customizing their products to match their market's needs. This thesis only scratches the surface of the analysis of the data, and the opportunities for future work are therefore many.
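One plausible similarity measure for comparing market-specific product variants, not necessarily the one used in the thesis, is Jaccard similarity over attribute sets (all attributes below are invented):

```python
def jaccard(a, b):
    """Similarity of two attribute sets: intersection size over union size."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical market-specific variants of the "same" retail product;
# a low score flags a candidate discrepancy worth inspecting.
product_se = {"colour:red", "size:M", "plug:EU"}
product_uk = {"colour:red", "size:M", "plug:UK"}
score = jaccard(product_se, product_uk)
```

Pairwise scores like this can then feed a similarity graph on which graph-theoretic anomaly detection operates.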
|
37 |
Cloud Computing and the GLAM sector: A case study of the new Digital Archive Project of Åland Maritime Museum
Faruqi, Ubaid Ali, January 2023 (has links)
This thesis examines the benefits and drawbacks of cloud computing technology within the GLAM (Galleries, Libraries, Archives, and Museums) sector of Sweden and Finland. It employs a case study of the recently developed and launched Digital Archive Project at Åland Maritime Museum, which leveraged the Amazon Web Services (AWS) technology stack to provide a cloud-based digital platform for the museum's archival materials. The primary objective of this study is to understand the interaction, usage, and suitability of cloud computing technologies and the impact of User Experience (UX) on digitalization efforts, the primary users being GLAM professionals. This study analyzes eight GLAM institutions in Sweden and Finland using semi-structured interviews and compares their trust in, and readiness to adopt, private cloud service providers. The findings reveal that Finland has a more 'aggressive' and experimental approach to newer technologies such as cloud computing tools compared to Sweden. In Sweden, there is an appreciation for pleasant UX and for methods that make heritage material more accessible, but also considerable hesitation due to data privacy regulations in the aftermath of the Schrems II judgment and the invalidation of the EU-U.S. Privacy Shield agreement. The study concludes that AWS as a cloud provider is more difficult to incorporate in public sector GLAM institutions than in the private sector. The study also provides practical recommendations for GLAM institutions and professionals and calls for further interdisciplinary research with digital humanists at the center of it.
|
38 |
A Qualitative Comparative Analysis of Data Breaches at Companies with Air-Gap Cloud Security and Multi-Cloud Environments
T Richard Stroupe Jr. (17420145), 20 November 2023 (has links)
The purpose of this qualitative case study was to describe how multi-cloud and cloud-based air-gapped system security breaches occurred, how organizations responded, what kinds of data were breached, and what security measures were implemented after the breach to prevent and repel future attacks. Qualitative research methods and secondary survey data were combined to answer the research questions. Because limited information is available on successful unauthorized breaches of multi-cloud and cloud-based air-gapped systems and the corresponding data, the study focused on the discovery of variables from several trustworthy sources of secondary data, including breach reports, press releases, public interviews, and news articles from the last five years, together with qualitative survey data. The sample included highly trained cloud professionals with air-gapped cloud experience from Amazon Web Services, Microsoft, Google, and Oracle. The study utilized unstructured interviews with open-ended questions and observations to record and document data and analyze results.

By describing instances of multi-cloud and cloud-based air-gapped system breaches in the last five years, this study could add to the body of literature on best practices for securing cloud-based data, preventing data breaches on such systems, and recovering from a breach once it has occurred. This study would have significance to companies aiming to protect sensitive data from cyber attackers, and to individuals who have provided their confidential data to companies that utilize such systems. In the primary data, 12 themes emerged: Air Gap Weaknesses Same as Other Systems; Misconfiguration of Cloud Settings; Insider Threat as Attack Vector; Phishing as Attack Vector; Software as Attack Vector; Physical Media as Attack Vector; Lack of Reaction to Breaches; Better Authentication to Prevent Breaches; Communications and Training in Response to Breach; Specific Responses to Specific Problems; Greater Separation of Risk from User End; and Greater Separation of Risk from Service End. In the secondary data, AWS had four themes, Microsoft Azure had two, and Google Cloud and Oracle had three each.
|