41 |
Utilization of Cloud Computing Applications in Commercial Companies
Jindra, Martin January 2012 (has links)
No description available.
|
42 |
Data intensive ATLAS workflows in the Cloud
Rzehorz, Gerhard Ferdinand 09 May 2018 (has links)
No description available.
|
43 |
Cloud computing in a South African Bank
Van der Merwe, Arno 30 June 2014 (has links)
This research looked at cloud computing in a South African bank. Interviews were
conducted in the information technology sector of a major bank in South Africa, as part
of a deductive research method, to establish how cloud computing should be
understood, what its specific benefits, obstacles and risks are, and whether the benefits
outweigh the obstacles and risks.
The research demonstrated that cloud computing is a fairly new concept in South
African banks, especially when it comes to the public cloud. Private clouds are currently
in existence, especially in the form of data centres and virtualised services. The
research also indicated that the benefits outweigh the obstacles and risks, with cost seen
as the most important benefit, and privacy and security as the most important obstacles
to consider.
It would be difficult for a bank in South Africa to move into the public cloud; the focus
would be to move non-core services into a public cloud and to keep the core services
within the bank.
It should be noted that the research sample was limited to only one of the major banks
in South Africa and that it would be inaccurate to present the results as a complete
view of banks in South Africa. / Dissertation (MBA)--University of Pretoria, 2013. / pagibs2014 / Gordon Institute of Business Science (GIBS) / MBA / Unrestricted
|
44 |
Towards a framework for enhancing user trust in cloud computing
Nyoni, Tamsanqa B January 2014 (has links)
Cloud computing is one of the latest appealing technological trends to emerge in the Information Technology (IT) industry. However, despite the surge in activity and interest, there are significant and persistent concerns about cloud computing, particularly with regard to trusting the platform in terms of confidentiality, integrity and availability of user data stored through these applications. These factors are significant in determining trust in cloud computing and thus provide the foundation for this study. The significant role that trust plays in the use of cloud computing was considered in relation to various trust models, theories and frameworks. Cloud computing is still considered to be a new technology in the business world; therefore, minimal work and academic research have been done on enhancing trust in cloud computing. Academic research which focuses on the adoption of cloud computing and, in particular, the building of user trust has been minimal. The trust models, frameworks and cloud computing adoption strategies that do exist mainly focus on cost reduction and the various benefits associated with migrating to a cloud computing platform. Available work on cloud computing does not provide clear guidelines for establishing user trust in a cloud computing application. The issue of establishing a reliable trust context for data and security within cloud computing is, up to this point, not well defined. This study investigates the impact that a lack of user trust has on the use of cloud computing. Strategies for enhancing user trust in cloud computing are required to overcome data security concerns. This study focused on establishing methods to enhance user trust in cloud computing applications through the theoretical contributions of the Proposed Trust Model by Mayer, Davis, and Schoorman (1995) and the Confidentiality, Integrity, Availability (CIA) Triad by Steichen (2010).
A questionnaire was used as a means of gathering data on trust-related perceptions of the use of cloud computing. The findings of this questionnaire administered to users and potential users of cloud computing applications are reported in this study. The questionnaire primarily investigates key concerns which result in self-moderation of cloud computing use and factors which would improve trust in cloud computing. Additionally, results relating to user awareness of potential confidentiality, integrity and availability risks are described. An initial cloud computing adoption model was proposed based on a content analysis of existing cloud computing literature. This initial model, empirically tested through the questionnaire, was an important foundation for the establishment of the Critical Success Factors (CSFs) and therefore the framework to enhance user trust in cloud computing applications. The framework proposed by this study aims to assist new cloud computing users to determine the appropriateness of a cloud computing service, thereby enhancing their trust in cloud computing applications.
|
45 |
Methods and Benchmarks for Auto-Scaling Mechanisms in Elastic Cloud Environments / Methoden und Messverfahren für Mechanismen des automatischen Skalierens in elastischen Cloudumgebungen
Herbst, Nikolas Roman January 2018 (has links) (PDF)
A key functionality of cloud systems is automated resource management at the infrastructure level. As part of this, elastic scaling of allocated resources is realized by so-called auto-scalers that are supposed to match the current demand so that performance remains stable while resources are used efficiently.
The process of rating cloud infrastructure offerings in terms of the quality of their achieved elastic scaling remains undefined. Clear guidance for the selection and configuration of an auto-scaler for a given context is not available. Thus, existing operating solutions are optimized in a highly application-specific way and usually kept undisclosed.
The common state of practice is the use of simplistic threshold-based approaches. Due to their reactive nature, they incur performance degradation during provisioning delays of several minutes. In the literature, a large number of auto-scalers have been proposed that try to overcome the limitations of reactive mechanisms by employing proactive prediction methods; these can be categorized into approaches from queueing theory, control theory, time-series analysis and machine learning.
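A simplistic threshold-based approach of the kind described above can be sketched as follows; this is a hypothetical illustration (the threshold values, step size, and function name are assumptions, not taken from the thesis):

```python
def threshold_autoscaler(utilization, instances,
                         upper=0.8, lower=0.3,
                         min_instances=1, max_instances=100):
    """Reactive rule: scale out by one instance when average utilization
    exceeds `upper`, scale in by one when it falls below `lower`."""
    if utilization > upper and instances < max_instances:
        return instances + 1
    if utilization < lower and instances > min_instances:
        return instances - 1
    return instances
```

Because such a rule only reacts after utilization has already crossed a threshold, the newly requested instance becomes usable only after the provisioning delay, which is exactly the performance gap the proactive mechanisms above try to close.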
In this thesis, we identify potentials in automated cloud system resource management and its evaluation methodology. Specifically, we make the following contributions:
We propose a descriptive load profile modeling framework together with automated model extraction from recorded traces to enable reproducible workload generation with realistic load intensity variations. The proposed Descartes Load Intensity Model (DLIM) with its Limbo framework provides key functionality to stress and benchmark resource management approaches in a representative and fair manner.
We propose a set of intuitive metrics for quantifying the timing, stability and accuracy aspects of elasticity. Based on these metrics, which have in the meantime been endorsed by the SPEC Research Group, we propose a novel approach for benchmarking the elasticity of Infrastructure-as-a-Service (IaaS) cloud platforms independent of the performance exhibited by the provisioned underlying resources.
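Accuracy metrics of this kind can be illustrated with a small sketch that compares discretized demand and supply curves; this is an illustrative simplification, not the thesis's exact metric definitions:

```python
def provisioning_accuracy(demand, supply, dt=1.0):
    """Average number of resource units by which supply falls short of
    demand (under-provisioning) or exceeds it (over-provisioning),
    averaged over the measurement interval."""
    T = len(demand) * dt
    under = sum(max(d - s, 0) * dt for d, s in zip(demand, supply)) / T
    over = sum(max(s - d, 0) * dt for d, s in zip(demand, supply)) / T
    return under, over
```

Keeping both values separate matters: a platform that over-provisions heavily can hide poor scaling behind good performance, which is why the benchmark rates elasticity independently of raw resource performance.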
We tackle the challenge of reducing the risk of relying on a single proactive auto-scaler by proposing a new self-aware auto-scaling mechanism, called Chameleon, which combines multiple proactive methods with a reactive fallback mechanism.
Chameleon employs on-demand, automated time-series-based forecasting methods to predict the arriving load intensity, in combination with run-time service demand estimation techniques to calculate the required resource consumption per work unit without the need for detailed application instrumentation. It can also leverage application knowledge by solving product-form queueing networks used to derive optimized scaling actions. The Chameleon approach is the first to resolve conflicts between reactive and proactive scaling decisions in an intelligent way.
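The core idea of combining a load forecast with an estimated service demand can be sketched via the utilization law from queueing theory; this is a simplified illustration (the target utilization and function name are assumptions, not Chameleon's actual logic):

```python
import math

def required_resources(predicted_arrival_rate, service_demand,
                       target_utilization=0.8):
    """Utilization law: offered load rho = lambda * S. The number of
    servers needed is the offered load divided by the per-server
    utilization one is willing to tolerate, rounded up."""
    offered_load = predicted_arrival_rate * service_demand
    return max(1, math.ceil(offered_load / target_utilization))
```

For example, a forecast of 50 requests/s with an estimated service demand of 0.04 s per request yields an offered load of 2.0 server-equivalents, so 3 servers are needed to stay below 80% utilization.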
We are confident that the contributions of this thesis will have a long-term impact on the way cloud resource management approaches are assessed. While this could result in an improved quality of autonomic management algorithms, we see and discuss arising challenges for future research in cloud resource management and its assessment methods: The adoption of containerization on top of virtual machine instances introduces another level of indirection. As a result, the nesting of virtual resources increases resource fragmentation and causes unreliable provisioning delays. Furthermore, virtualized compute resources tend to become more and more inhomogeneous associated with various priorities and trade-offs. Due to DevOps practices, cloud hosted service updates are released with a higher frequency which impacts the dynamics in user behavior.
|
46 |
Performance Engineering of Serverless Applications and Platforms / Performanz Engineering von Serverless Anwendungen und Plattformen
Eismann, Simon January 2023 (links) (PDF)
Serverless computing is an emerging cloud computing paradigm that offers a high-level application programming model with utilization-based billing. It enables the deployment of cloud applications without managing the underlying resources or worrying about other operational aspects. Function-as-a-Service (FaaS) platforms implement serverless computing by allowing developers to execute code on-demand in response to events with continuous scaling while having to pay only for the time used with sub-second metering. Cloud providers have further introduced many fully managed services for databases, messaging buses, and storage that also implement a serverless computing model. Applications composed of these fully managed services and FaaS functions are quickly gaining popularity in both industry and academia.
However, due to this rapid adoption, much information surrounding serverless computing is inconsistent and often outdated as the serverless paradigm evolves. This makes the performance engineering of serverless applications and platforms challenging, as there are many open questions, such as: What types of applications is serverless computing well suited for, and what are its limitations? How should serverless applications be designed, configured, and implemented? Which design decisions impact the performance properties of serverless platforms and how can they be optimized? These and many other open questions can be traced back to an inconsistent understanding of serverless applications and platforms, which could present a major roadblock in the adoption of serverless computing.
In this thesis, we address the lack of performance knowledge surrounding serverless applications and platforms from multiple angles: we conduct empirical studies to further the understanding of serverless applications and platforms, we introduce automated optimization methods that simplify the operation of serverless applications, and we enable the analysis of design tradeoffs of serverless platforms by
extending white-box performance modeling.
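The utilization-based billing model described in the abstract can be illustrated with a simple cost calculation; the prices below are placeholders, not any provider's actual rates:

```python
def faas_cost(invocations, avg_duration_s, memory_gb,
              price_per_invocation=2.0e-7, price_per_gb_second=1.6667e-5):
    """Cost of a function under utilization-based billing: a small
    per-invocation fee plus a fee per GB-second of compute time,
    metered at sub-second granularity."""
    gb_seconds = invocations * avg_duration_s * memory_gb
    return invocations * price_per_invocation + gb_seconds * price_per_gb_second
```

The structure of this formula is why configuration choices such as memory size and function duration dominate serverless cost-performance trade-offs: halving duration at the same memory setting roughly halves the compute portion of the bill.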
|
47 |
Channel and Server Scheduling for Energy-Fair Mobile Computation Offloading
Moscardini, Jonathan A. January 2016 (links)
This thesis investigates energy fairness in an environment where multiple mobile cloud computing users are attempting to utilize both a shared channel and a shared server to offload jobs to remote computation resources, a technique known as mobile computation offloading. This offloading is done in an effort to reduce energy consumption at the mobile device, which has been demonstrated to be highly effective in previous work. However, insufficient resources are available for all mobile devices to offload all generated jobs due to constraints at the shared channel and server. In addition to these constraints, certain mobile devices are at a disadvantage relative to others in their achievable offloading rate. Hence, the shared resources are not necessarily shared fairly, and an effort must be made to do so.
A method for improving offloading fairness in terms of total energy is derived, in which the state of the queue of jobs waiting for offloading is evaluated in an online fashion, at each job arrival, in order to inform an offloading decision for that newest arrival; no prior state or future predictions are used to determine the optimal decision. This algorithm is evaluated by comparing it on several criteria to standard scheduling methods, as well as to an optimal offline (i.e., non-causal) schedule derived from the solution of a min-max energy integer linear program. Various results derived by simulation demonstrate the improvements in energy fairness achieved. / Thesis / Master of Applied Science (MASc)
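The online min-max decision rule described above can be sketched as follows; this is a hypothetical illustration of the idea only (the function name, the per-job energy inputs, and the simple busy-channel rule are assumptions, not the thesis's actual algorithm):

```python
def energy_fair_decision(cum_energy, user, e_local, e_offload, channel_busy):
    """At a job arrival, choose the action (offload vs. local execution)
    for `user` that minimizes the maximum cumulative energy across all
    users. If the shared channel is busy, only local execution remains."""
    if channel_busy:
        return "local"
    local_max = max(v + (e_local if u == user else 0.0)
                    for u, v in cum_energy.items())
    offload_max = max(v + (e_offload if u == user else 0.0)
                      for u, v in cum_energy.items())
    return "offload" if offload_max <= local_max else "local"
```

Evaluating only the current queue state at each arrival, with no future predictions, matches the online character of the scheme, while the offline min-max integer linear program serves as the non-causal benchmark against which such decisions are compared.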
|
48 |
Data Parallel Application Development and Performance with Azure
Zhang, Dean 08 September 2011 (links)
No description available.
|
49 |
Optimal Mobile Computation Offloading With Hard Task Deadlines
Hekmati, Arvin January 2019 (links)
This thesis considers mobile computation offloading where task completion times are subject to hard deadline constraints. Hard deadlines are difficult to meet in conventional computation offloading due to the stochastic nature of the wireless channels involved. Rather than using binary offload decisions, we permit concurrent remote and local job execution when it is needed to ensure task completion deadlines. The thesis addresses this problem for homogeneous Markovian wireless channels. Two online energy-optimal computation offloading algorithms, OnOpt and MultiOpt, are proposed. OnOpt uploads the job to the server continuously and MultiOpt uploads the job in separate parts, each of which requires a separate offload initiation decision. The energy optimality of the algorithms is shown by constructing a time-dilated absorbing Markov process and applying dynamic programming. Closed form results are derived for general Markovian channels. The Gilbert-Elliott channel model is used to show how a particular Markov chain structure can be exploited to compute optimal offload initiation times more efficiently. The performance of the proposed algorithms is compared to three others, namely, Immediate Offloading, Channel Threshold, and Local Execution. Performance results show that the proposed algorithms can significantly improve mobile device energy consumption compared to the other approaches while guaranteeing hard task execution deadlines. / Thesis / Master of Applied Science (MASc)
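The Gilbert-Elliott model referenced above is a standard two-state Markov channel alternating between a good and a bad state; a minimal simulation sketch (the transition probabilities are illustrative values, not taken from the thesis):

```python
import random

def gilbert_elliott(steps, p_gb=0.1, p_bg=0.3, seed=0):
    """Simulate a two-state Gilbert-Elliott channel: 'good' <-> 'bad'
    with transition probabilities p_gb (good to bad) and p_bg (bad to
    good). Returns the state trace."""
    rng = random.Random(seed)
    state, trace = "good", []
    for _ in range(steps):
        trace.append(state)
        if state == "good" and rng.random() < p_gb:
            state = "bad"
        elif state == "bad" and rng.random() < p_bg:
            state = "good"
    return trace
```

The long-run fraction of time in the good state is p_bg / (p_gb + p_bg), here 0.75. It is this known chain structure that an offload-initiation policy can exploit to compute optimal timing more efficiently than for a general Markovian channel.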
|
50 |
Governance a management služeb cloud computingu z pohledu spotřebitele / Cloud computing governance a management from consumer point of view
Karkošková, Soňa January 2017 (links)
Cloud computing brings widely recognized benefits as well as new challenges and risks, resulting mainly from the fact that the cloud service provider is an external third party that provides public cloud services in a multi-tenancy model. At present, widely accepted IT governance frameworks lack focus on cloud computing governance and do not fully address the requirements of cloud computing from the cloud consumer's viewpoint. Given the absence of any comprehensive cloud computing governance and management framework, this doctoral thesis focuses on specific aspects of cloud service governance and management from the consumer perspective. Its main aim is the design of a methodological framework for cloud service governance and management (Cloud computing governance and management) from the consumer point of view. The cloud consumer is considered to be a medium- or large-sized enterprise that uses services in the public cloud computing model, offered and delivered by a cloud service provider. The theoretical part of the thesis identifies the main concepts of IT governance, IT management and cloud computing (chapter 2). The analytical part reviews the literature dealing with the specifics of cloud service utilization and their impact on IT governance and IT management, cloud computing governance and cloud computing management (chapter 3). Further, existing IT governance and IT management frameworks (SOA Governance, COBIT, ITIL and MBI) were analysed and evaluated in terms of the use of cloud services from the cloud consumer perspective (chapter 4). The research was based on the Design Science Research Methodology, with the intention of designing and evaluating an artifact: the methodological framework. The main part of the thesis proposes the methodological framework Cloud computing governance and management, based on SOA Governance, COBIT 5 and ITIL 2011 (chapters 5, 6 and 7).
Verification of the proposed methodological framework Cloud computing governance and management from the cloud consumer perspective was based on the scientific method of case study (chapter 8). The main objective of the case study was to evaluate and verify the proposed framework in a real business environment. The main contribution of this thesis is both the use of existing knowledge, approaches and methodologies in the area of IT governance and IT management to design the methodological framework Cloud computing governance and management, and the extension of the Management of Business Informatics (MBI) framework with a set of new tasks containing procedures and recommendations relating to the adoption and utilization of cloud computing services.
|