121 |
Inköp av cloud-tjänsten Software as a Service : En studie om hur beslutsprocessen gått till vid inköp av cloud-tjänsten på två små IT-företag
Pussinen, Kenny, Gustafsson, Emili January 2012 (has links)
The study aims to describe whether and how the factors core competence, uncertainty, benefit aspects, and control/flexibility influenced the decisions when purchasing the cloud service SaaS. The thesis is a qualitative study based on semi-structured interviews with key persons at two small IT companies, together with scientific articles and literature in the subject area. The study concludes that both companies were influenced by some or all of the factors when purchasing the cloud service. The factor benefit aspects influenced both companies' purchasing decisions to a high degree. The factors uncertainty and control/flexibility did not influence the purchasing decision at one of the companies at all, while they largely influenced the purchasing decision at the other. The factor core business influenced one company's purchasing decision to some extent and the other company's to a large extent.
|
122 |
Towards Systematic and Accurate Environment Selection for Emerging Cloud Applications
Li, Ang January 2012 (has links)
As cloud computing gains popularity, many application owners are migrating their applications into the cloud. However, because of the diversity of cloud environments and the complexity of modern applications, it is very challenging to determine which cloud environment best fits one's application.
In this dissertation, we design and build systems to help application owners select the most suitable cloud environments for their applications. The first part of this thesis focuses on how to compare the general fitness of cloud environments. We present CloudCmp, a novel comparator of public cloud providers. CloudCmp measures the elastic computing, persistent storage, and networking services offered by a cloud along metrics that directly reflect their impact on the performance of customer applications. CloudCmp strives to ensure fairness, representativeness, and compliance of these measurements while limiting measurement cost. Applying CloudCmp to four cloud providers that together account for most of the cloud customers today, we find that their offered services vary widely in performance and cost, underscoring the need for thoughtful cloud environment selection. Case studies on three representative cloud applications show that CloudCmp can guide customers in selecting the best-performing provider for their applications.
The second part focuses on how to let customers compare cloud environments in the context of their own applications. We describe CloudProphet, a novel system that can accurately estimate an application's performance inside a candidate cloud environment without the need for migration. CloudProphet generates highly portable shadow programs to mimic the behavior of a real application, and deploys them inside the cloud to estimate the application's performance. We use a trace-and-replay technique to automatically generate high-fidelity shadows, and leverage the popular dispatcher-worker pattern to accurately extract and enforce inter-component dependencies. Our evaluation on three popular cloud platforms shows that CloudProphet can help customers pick the best-performing cloud environment, and can also accurately estimate the performance of a variety of applications. / Dissertation
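As a flavor of the kind of measurement a comparator like CloudCmp performs, the sketch below times repeated GET requests against a storage endpoint to estimate a latency metric. The endpoint URL is a placeholder and this is an illustration, not CloudCmp's actual benchmark code.

```python
# Minimal sketch of one benchmarking step: timing round trips to a cloud
# storage HTTP endpoint to estimate persistent-storage latency.
# The URL below is a placeholder, not a real provider endpoint.
import statistics
import time
import urllib.request

ENDPOINT = "https://storage.example-cloud.com/bucket/1kb-object"  # placeholder

def time_get(url, trials=20):
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        samples.append(time.perf_counter() - start)
    return samples

samples = time_get(ENDPOINT)
print(f"median GET latency: {statistics.median(samples) * 1000:.1f} ms")
print(f"p95    GET latency: {sorted(samples)[int(0.95 * len(samples))] * 1000:.1f} ms")
```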
|
123 |
The competition strategy research of Taiwan cloud computing industry
Lin, Yi-Chun 16 August 2010 (has links)
Rather than calling cloud computing a brand-new technology or industry development trend, I would prefer to say that cloud computing is the result of a business-model revolution. Growth in the global information technology industry has been exhausted in recent years. In the PC era, the Wintel architecture of Intel and Microsoft held more than 80% of the worldwide market share, and with every year's new product launches all consumers had to pay the bill without exception. However, when Microsoft introduced the new Vista operating system, sales did not pan out as expected: consumers finally decided to penalize the steadily increasing selling prices. Meanwhile, Intel also took action, offering low-cost processor solutions to respond to market needs and rescue its declining market share.
As global network coverage matures, the era of the high-speed network is arriving and human life will change significantly, because business opportunities now arise from the Internet. Message propagation, interpersonal interaction, and even food and lifestyle are all hooked up to the network; this huge business opportunity is appetizing. In recent years, "service" has become the central idea of industry restructuring, and cloud computing in fact takes service as its starting point and the source of its value. "Cloud computing" still has no unified specification or definition, so this study works from a limited set of collected data and, using five-forces analysis, competitive analysis, and management theory, discusses how it may become a huge business opportunity for the industry and the feasible directions for Taiwan.
The conclusions of this study are summarized below:
(1) Cloud computing holds a large business opportunity for the future.
(2) Taiwan's cloud computing business opportunity has two parts: one is adding value to hardware, the other is product research.
(3) Taiwan has an advantage in working with China on the cloud computing market.
(4) Taiwan's government cloud computing policy can learn from Japan or Korea.
(5) Taiwan's government cloud computing policy can favor the local market.
|
124 |
Design and implementation of a Hadoop-based secure cloud computing architecture
Cheng, Sheng-Lun 31 January 2011 (has links)
The goal of this research is to design and implement a secure Hadoop cluster. Cloud computing is a type of network computing in which most data is transmitted over the network. To develop a secure cloud architecture, we need to authenticate users first and protect transmitted data against theft and falsification, so that even if someone steals the data, its content remains hard to read. We therefore focus on the following points:
I. Authorization: First, we investigate the user authorization problem in the Hadoop system and then propose two solutions: SOCKS Authorization and Service Level Authorization. SOCKS Authorization is an external authorization mechanism for the Hadoop system that identifies users by username/password. Service Level Authorization is a new authorization mechanism in Hadoop 0.20; it ensures that clients connecting to a particular Hadoop service have the necessary, pre-configured permissions and are authorized to access the given service.
II. Transmission Encryption: To keep important data, such as Block IDs, Job IDs, and usernames, from being exposed on untrusted networks, we examine Hadoop transmissions in practice and point out possible security problems. Subsequently, we use IPsec to implement transmission encryption and packet verification for Hadoop.
III. Architecture Design: Based on the implementation framework of Hadoop mentioned above, we propose a secure Hadoop cluster architecture to solve these security problems. In addition, we evaluate the performance of HDFS and MapReduce in this architecture.
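To illustrate the SOCKS Authorization idea (identifying a user by username/password at a SOCKS gateway before the connection reaches Hadoop), here is a minimal client-side sketch using the PySocks library; the hostnames, port, and credentials are placeholders, and this is not the implementation described in the thesis.

```python
# Sketch of SOCKS-authorized access, assuming a SOCKS5 proxy in front of the
# Hadoop NameNode that validates a username/password before forwarding.
# Hostnames, ports, and credentials are illustrative placeholders.
import socks  # pip install PySocks

sock = socks.socksocket()
sock.set_proxy(
    socks.SOCKS5,
    "socks-gateway.example.com", 1080,   # assumed authorization gateway
    username="alice",
    password="secret",
)
# Only after the proxy accepts the credentials does the TCP connection
# reach the NameNode's IPC port (8020 is a common default).
sock.connect(("namenode.example.com", 8020))
sock.close()
```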
|
125 |
Performance Analysis of Relational Database over Distributed File Systems
Tsai, Ching-Tang 08 July 2011 (has links)
With the growth of the Internet, people use the network frequently. Many PC applications have moved to network-based environments, such as text processing, calendars, and photo management, and users can even develop applications on the network. Google is a company providing web services; its popular services, the search engine and Gmail, attract people with short response times and large amounts of data storage, and it also charges businesses to place their advertisements. Another hot social network, Facebook, is also a popular website; it processes huge numbers of instant messages and the social relationships between users. The magic power behind all of this is the new technique of cloud computing.
Cloud computing can sustain high-performance processing and short response times, and its kernel components are distributed data storage and distributed data processing. Hadoop is a famous open-source project for building a cloud distributed file system and distributed data analysis. Hadoop is suitable for batch applications and write-once-read-many applications. Thus, currently only a few applications, such as pattern searching and log-file analysis, are implemented over Hadoop. However, almost all database applications still use relational databases. To port them to the cloud platform, it becomes necessary to let a relational database run over HDFS. So we test the FUSE-DFS solution, an interface that mounts HDFS into a system so it can be used like a local filesystem. If we can make FUSE-DFS performance satisfy users' applications, then we can more easily persuade people to port their applications to the cloud platform with the least overhead.
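As a rough example of the kind of comparison this entails, the sketch below times sequential writes and reads on a local path and on an assumed FUSE-DFS mount point; the mount point and file sizes are illustrative assumptions, not the thesis' actual benchmark.

```python
# Rough I/O timing sketch for a FUSE-mounted HDFS path, assuming HDFS has
# already been mounted (e.g., at /mnt/hdfs) via FUSE-DFS. The mount point and
# sizes are illustrative assumptions only.
import os
import time

def time_write_read(path, size_mb=64):
    data = os.urandom(1024 * 1024)          # one 1 MiB block of random bytes
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
    write_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_s = time.perf_counter() - t0
    os.remove(path)
    return write_s, read_s

for target in ("/tmp/bench.dat", "/mnt/hdfs/tmp/bench.dat"):
    w, r = time_write_read(target)
    print(f"{target}: write {w:.2f}s, read {r:.2f}s")
```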
|
126 |
IMPLEMENTATION OF A CLOUD SHELL FOR LIGHT-WEIGHT UNIX PROGRAMMABILITY SUPPORT IN A DISTRIBUTED CLOUD ENVIRONMENT
Wei, Tzu-Chieh 09 February 2012 (has links)
This thesis describes the implementation of a UNIX-styled shell environment for cloud systems. This new scripting language, the cloud shell (CLSH), uses a syntax based upon the familiar BASH shell of UNIX systems. This familiar syntax allows users to quickly learn the new environment. The difference, as compared to BASH, is that CLSH gives the user easy access to the parallelism of the cloud. Indeed, the user does not need to explicitly refer to the cloud at all; the cloud becomes simply a virtual file system and the user experience is quite similar to standard bash programming.
This cloud shell is built into Hadoop's HDFS file system. The difference, as compared to HDFS, is that CLSH offers a full range of UNIX-style commands, rather than a small subset of simple commands. Moreover, CLSH is a full-fledged scripting language that offers much more control over file management than does HDFS. To achieve comparable behavior within HDFS, the user must use either the Pig Latin tool or Java scripting. Not only are these alternatives harder to use than CLSH, but they also run more slowly and are incapable of performing certain tasks that CLSH can easily achieve. Moreover, the cloud shell environment simply provides the user with a better cloud interface; it does not preclude the use of Pig Latin or Java scripts.
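For context, the stock HDFS command-line interface that CLSH is contrasted with exposes only a small set of simple file commands; the sketch below drives that subset (hadoop fs -put/-ls/-cat) from Python for illustration. The paths are placeholders, and this is not CLSH itself.

```python
# Driving the plain HDFS command subset via the 'hadoop fs' CLI, to show the
# limited interface that a full scripting shell would go beyond.
# Paths are placeholders.
import subprocess

def hdfs(*args):
    """Run a single 'hadoop fs' command and return its stdout."""
    result = subprocess.run(
        ["hadoop", "fs", *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

hdfs("-put", "local_report.txt", "/user/alice/report.txt")   # upload a file
print(hdfs("-ls", "/user/alice"))                             # list a directory
print(hdfs("-cat", "/user/alice/report.txt"))                 # read it back
```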
|
127 |
The Design of Cloud-Economical Computing Services for Program Trading
Hsu, Chi-Shin 26 August 2012 (has links)
Program trading has become more popular in recent years. According to the statistics, it accounted for about 53.6% of daily trading volume in the United States and increased to 73% in 2009. With the spread of program trading, more people have begun to research it.
The purpose of this paper is to construct a development platform for program trading for research or development. In addition to the development platform, we provide the run-time environment and three main functions:
1. The job scheduler
2. High scalability
3. The development platform
In this paper, we use SLURM to implement an economical computing service for program trading. SLURM is resource management software for large clusters.
However, it lacks an easy interface for end users. We modify Xinetd as the external interface for SLURM and implement the program trading development platform for research or development.
According to the results, using our scheduler and the external interface modified from Xinetd can effectively control server resources and increase availability.
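As a hypothetical sketch of how an xinetd-launched front end could hand user requests to SLURM, the script below reads a request on stdin (the stream xinetd connects to the client) and submits it with sbatch; the strategy directory, partition name, and request protocol are assumptions, not the platform built in this paper.

```python
#!/usr/bin/env python3
# Hypothetical xinetd-launched front end for SLURM: xinetd attaches the client
# connection to stdin/stdout, and this script submits the requested
# trading-strategy script with sbatch. Directory, partition, and protocol are
# illustrative assumptions.
import shlex
import subprocess
import sys

ALLOWED_DIR = "/opt/trading/strategies"   # only pre-installed strategies may run

def main():
    request = sys.stdin.readline().strip()      # e.g. "momentum.sh 2012-08-01"
    if not request:
        print("ERROR empty request")
        return
    parts = shlex.split(request)
    script = f"{ALLOWED_DIR}/{parts[0]}"
    cmd = ["sbatch", "--partition=trading", script, *parts[1:]]
    result = subprocess.run(cmd, capture_output=True, text=True)
    # sbatch prints e.g. "Submitted batch job 1234" on success
    sys.stdout.write(result.stdout or result.stderr)

if __name__ == "__main__":
    main()
```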
|
128 |
Planning and Optimization During the Life-Cycle of Service Level Agreements for Cloud Computing
Lu, Kuan 16 February 2015 (has links)
A Service Level Agreement (SLA) is an electronic contract between the customer and the provider of a service. The parties involved clarify their expectations and obligations regarding the service and its quality. SLAs are already used to describe cloud computing services. The service provider ensures that the quality of service is fulfilled and matches the customer's requirements until the end of the agreed term. Executing SLAs requires considerable effort to achieve autonomy, cost-effectiveness, and efficiency. The current state of the art in SLA management faces challenges such as SLA representation for cloud services, business-driven SLA optimization, service outsourcing, and resource management; these areas constitute central and timely research topics. Managing SLAs in the different phases of their lifetime requires a methodology developed for this purpose, which simplifies the realization of cloud SLA management.
I present a broad model of SLA lifetime management that addresses the challenges named above. This approach enables automatic service modeling as well as negotiation, provisioning, and monitoring of SLAs. For the creation phase, I outline how the modeling structures can be improved and simplified. A further goal of my approach is to minimize implementation and outsourcing costs in favor of competitiveness. For the SLA monitoring phase, I develop strategies for the selection and allocation of virtual cloud resources during migration phases. Finally, through monitoring, I check for a larger collection of SLAs whether the agreed fault tolerances are maintained.
This work contributes to a design of the GWDG and its scientific communities. The research leading to this doctoral thesis was carried out as part of the SLA@SOI EU/FP7 integrated project (contract No. 216556).
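To make the idea of monitoring agreed fault tolerances concrete, here is a minimal sketch of an SLA term as a data structure with a violation check; the field names, metric, and thresholds are illustrative assumptions, not the SLA representation developed in this work.

```python
# Minimal illustration of an SLA term with an agreed fault tolerance and a
# monitoring check; field names and values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class SLATerm:
    metric: str            # e.g. "availability"
    guaranteed: float      # agreed value, e.g. 0.995 (99.5 %)
    tolerance: float       # allowed shortfall before the SLA counts as violated

    def violated(self, observed: float) -> bool:
        """True if the observed value falls below guarantee minus tolerance."""
        return observed < self.guaranteed - self.tolerance

term = SLATerm(metric="availability", guaranteed=0.995, tolerance=0.001)
print(term.violated(0.993))    # True: outside the agreed fault tolerance
print(term.violated(0.9945))   # False: within tolerance
```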
|
129 |
Network performance isolation for virtual machines
Cheng, Luwei, 程芦伟. January 2011 (has links)
Cloud computing is a new computing paradigm that aims to transform computing services into a utility, just as electricity is provided in a "pay-as-you-go" manner. Data centers are increasingly adopting virtualization technology for the purposes of server consolidation, flexible resource management, and better fault tolerance. Virtualization-based cloud services host networked applications in virtual machines (VMs), with each VM provided the desired amount of resources through resource isolation mechanisms.
Effective network performance isolation is fundamental to data centers, as it offers the significant benefit of performance predictability for applications. This research is application-driven: we study how network performance isolation can be achieved for latency-sensitive cloud applications. For media streaming applications, network performance isolation means both predictable network bandwidth and low-jittered network latency. Current resource sharing methods for VMs mainly focus on proportional resource shares, while ignoring the fact that I/O latency on VM-hosted platforms is mostly related to the resource provisioning rate. Resource isolation with only a quantitative promise does not sufficiently guarantee performance isolation. Even if a VM is allocated adequate resources, such as CPU time and network bandwidth, problems such as network jitter (variation in packet delays) can still occur if the resources are provisioned at inappropriate moments. So to achieve performance isolation, the problem is not only how much of each resource a VM gets, but more importantly whether the resources are provisioned in a timely manner. Guaranteeing both requirements in resource allocation is challenging.
This thesis systematically analyzes the causes of unpredictable network latency on VM-hosted platforms, with both technical discussion and experimental illustration. We identify that the varying network latency is caused jointly by the VMM CPU scheduler and the network traffic shaper, and then address the problem in these two parts. In our solutions, we consider the design goals of resource provisioning rate and resource proportionality as two orthogonal dimensions. In the hypervisor, a proportional-share CPU scheduler with soft real-time support is proposed to guarantee predictable scheduling delay; in the network traffic shaper, we introduce the concept of a smooth window to smooth packet delays and apply closed-loop feedback control to maintain network bandwidth consumption.
The solutions are implemented in Xen 4.1.0 and Linux 2.6.32.13, which were the latest versions when this research was conducted. Extensive experiments have been carried out using both real-life applications and low-level benchmarks. Testing results show that the proposed solutions can effectively guarantee network performance isolation, achieving both the predefined network bandwidth and low-jittered network latency. / published_or_final_version / Computer Science / Master / Master of Philosophy
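To illustrate the closed-loop feedback control idea used in the traffic shaper, here is a toy sketch in which a pacing rate is adjusted so that measured bandwidth tracks a VM's configured share; the gain, target rate, and measurement stub are assumptions for illustration, not the modified Xen/Linux shaper described in the thesis.

```python
# Toy closed-loop bandwidth controller: adjust the pacing rate so that the
# measured throughput tracks the VM's reserved share. Target, gain, and the
# measurement stub are illustrative assumptions.
import random

TARGET_MBPS = 100.0      # bandwidth reserved for this VM (assumed)
KP = 0.4                 # proportional gain (assumed)

def measure_throughput_mbps(pacing_rate):
    """Stand-in for a real measurement; real code would read interface counters."""
    return pacing_rate * random.uniform(0.85, 1.0)

pacing_rate = TARGET_MBPS
for step in range(10):
    observed = measure_throughput_mbps(pacing_rate)
    error = TARGET_MBPS - observed
    pacing_rate = max(1.0, pacing_rate + KP * error)   # feedback correction
    print(f"step {step}: observed {observed:6.1f} Mbps, pacing {pacing_rate:6.1f} Mbps")
```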
|
130 |
Cost-aware online VM purchasing for cloud-based application service providers with arbitrary demands
Shi, Shengkai, 石晟恺 January 2014 (has links)
Recent years have witnessed the proliferation of Infrastructure-as-a-Service (IaaS) cloud services, which provide on-demand resources (CPU, RAM, disk, etc.) in the form of virtual machines (VMs) for hosting third-party services. The way of enabling scalable and dynamic Internet applications has thus been remarkably revolutionized. More and more Application Service Providers (ASPs) are launching their applications in clouds, eliminating the need to construct and operate their own IT hardware and software. Given the state-of-the-art IaaS offerings, a problem of fundamental importance remains: how should ASPs rent VMs from the clouds to serve their application needs, in order to minimize cost while meeting their job demands over the long run? Cloud providers offer different pricing options to meet the computing requirements of a variety of applications. The commonly adopted cloud pricing schemes are (1) reserved instance pricing, (2) on-demand instance pricing, and (3) spot instance pricing. However, the challenge facing an ASP is how these pricing schemes can be blended to accommodate arbitrary demands at the optimal cost. In this thesis, we seek to integrate all available pricing options and design effective online algorithms for the long-term operation of ASPs. We formulate the long-term-averaged VM cost minimization problem of an ASP with time-varying and delay-tolerant workloads as a stochastic optimization model. An efficient online VM purchasing algorithm is designed to guide the VM purchasing decisions of the ASP based on the Lyapunov optimization technique. In stark contrast with existing studies, our online VM purchasing algorithm does not require any a priori knowledge of the workload or any future information. Moreover, it addresses possible job interruptions due to the uncertain availability of spot instances. Rigorous analysis shows that our algorithm achieves a time-averaged VM purchasing cost within a constant gap of its offline minimum. Trace-driven simulations further verify the efficacy of our algorithm. / published_or_final_version / Computer Science / Master / Master of Philosophy
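As a rough illustration of the Lyapunov (drift-plus-penalty) style of online decision making mentioned above, the sketch below decides each slot how many on-demand VMs to rent against a backlog of delay-tolerant jobs; the prices, per-VM capacity, arrival process, and trade-off parameter V are assumptions, and this is not the thesis' algorithm (which also blends reserved and spot instances).

```python
# Generic drift-plus-penalty sketch: each slot, choose the number of VMs n
# minimizing V*cost(n) - Q(t)*service(n), where Q(t) is the job backlog.
# All constants and the arrival process are illustrative assumptions.
import random

PRICE_PER_VM = 0.10      # $ per VM per slot (assumed)
JOBS_PER_VM = 5          # jobs one VM finishes per slot (assumed)
MAX_VMS = 20
V = 50.0                 # cost/delay trade-off knob of drift-plus-penalty

queue = 0.0              # backlog of unserved jobs
for t in range(24):
    arrivals = random.randint(0, 60)
    # Pick n minimizing  V * cost(n) - Q(t) * service(n)
    best_n = min(
        range(MAX_VMS + 1),
        key=lambda n: V * PRICE_PER_VM * n - queue * JOBS_PER_VM * n,
    )
    served = min(queue + arrivals, best_n * JOBS_PER_VM)
    queue = queue + arrivals - served
    print(f"slot {t:2d}: arrivals {arrivals:2d}, rent {best_n:2d} VMs, backlog {queue:5.1f}")
```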
|