711 |
Satellite Remote Sensing of Mid-level Clouds. Jin, Hongchun (1980-). 14 March 2013
This dissertation studies mid-level clouds using satellite observations. It consists of two major parts: the characteristics of mid-level clouds (including cloud top/base heights, cloud top pressure and temperature, and cloud thickness) and their thermodynamic phase. Each part addresses an issue of significant importance for satellite-based remote sensing of mid-level clouds.
The first part of this dissertation focuses on how three definitions of mid-level clouds, based on cloud top pressure, cloud top height, and cloud base height, affect the derived mid-level cloud characteristics. The impacts of multi-layer clouds on satellite-based global statistics of clouds at different levels, particularly for mid-level clouds, are demonstrated. Mid-level clouds are found to occur more frequently than underlying upper-level clouds. Comparisons of cloud amounts between a merged CALIPSO, CloudSat, CERES, and MODIS (CCCM) dataset and the International Satellite Cloud Climatology Project (ISCCP) climatology are made for the period July 2006 to December 2009. Mid-level cloud characteristics are shown to be sensitive to perturbations in the mid-level boundary pressures and heights.
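A minimal sketch of the boundary-sensitivity point above, assuming ISCCP-style 680/440 hPa level boundaries (the CCCM definitions studied in the dissertation differ) and a handful of hypothetical cloud-top pressure retrievals:

```python
# Illustrative sketch (not the dissertation's algorithm): classify cloud layers
# by cloud-top pressure and show how perturbing the level boundaries changes
# the mid-level cloud fraction.
from typing import Sequence

def classify_by_ctp(ctp_hpa: float, low_bound: float = 680.0, high_bound: float = 440.0) -> str:
    """Return 'low', 'middle', or 'high' for a cloud-top pressure in hPa.

    Defaults follow the ISCCP-style 680/440 hPa levels; the CCCM definitions
    discussed in the dissertation are different.
    """
    if ctp_hpa > low_bound:
        return "low"
    if ctp_hpa > high_bound:
        return "middle"
    return "high"

def mid_level_fraction(ctps: Sequence[float], low_bound: float, high_bound: float) -> float:
    labels = [classify_by_ctp(p, low_bound, high_bound) for p in ctps]
    return labels.count("middle") / len(labels)

if __name__ == "__main__":
    sample_ctps = [850.0, 700.0, 620.0, 500.0, 430.0, 300.0, 200.0]  # hypothetical retrievals
    for low_b, high_b in [(680.0, 440.0), (700.0, 400.0)]:
        frac = mid_level_fraction(sample_ctps, low_b, high_b)
        print(f"boundaries {low_b}/{high_b} hPa -> mid-level fraction {frac:.2f}")
```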
The second part focuses on the thermodynamic phase of mid-level clouds. A new algorithm to detect cloud phase using Atmospheric Infrared Sounder (AIRS) high-spectral-resolution measurements is introduced. The AIRS phase algorithm is based on the newly developed High-spectral-resolution cloudy-sky Radiative Transfer Model (HRTM). The AIRS phase algorithm is evaluated using the CALIPSO cloud phase products for single-layer, heterogeneous, and multi-layer scenes. It shows excellent performance (>90%) in detecting ice clouds relative to the CALIPSO ice cloud product, and it is capable of detecting optically thin ice clouds in the tropics and clouds in the mid-temperature range. The thermodynamic phase of mid-level clouds is investigated using spatially collocated AIRS phase and CALIPSO phase products between December 2007 and November 2008. Overall, the statistics show that ice, liquid water, and mixed-phase clouds account for approximately 20%, 40%, and 40% of mid-level clouds globally.
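The HRTM-based AIRS algorithm itself is not reproduced here; as a loose stand-in, the toy sketch below discriminates phase with threshold tests on infrared brightness temperatures (the 8.5 minus 11 micron difference tends to be positive for ice and negative for liquid water). The channel pairing and thresholds are illustrative assumptions, not the dissertation's values.

```python
# Toy illustration only: threshold tests on infrared brightness temperatures.
# This is NOT the HRTM-based AIRS phase algorithm described in the abstract;
# the channel choice and thresholds below are hypothetical.

def classify_cloud_phase(bt_11um: float, bt_8um: float) -> str:
    """Classify a cloudy pixel as 'ice', 'liquid', or 'uncertain'.

    Uses the 8.5-11 um brightness temperature difference, which tends to be
    positive for ice clouds and negative for liquid water clouds, plus a
    cold/warm cloud-top temperature check.
    """
    btd_8_11 = bt_8um - bt_11um
    if bt_11um < 233.0 or btd_8_11 > 0.5:     # very cold top or strong positive BTD
        return "ice"
    if bt_11um > 273.0 and btd_8_11 < -1.0:   # warm top and negative BTD
        return "liquid"
    return "uncertain"                        # e.g. mixed-phase or thin multi-layer scenes

if __name__ == "__main__":
    # Hypothetical pixels: (11 um, 8.5 um) brightness temperatures in K
    pixels = [(225.0, 226.5), (280.0, 278.0), (255.0, 255.2)]
    for bt11, bt8 in pixels:
        print(bt11, bt8, "->", classify_cloud_phase(bt11, bt8))
```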
|
712 |
Flexible Computing with Virtual Machines. Lagar Cavilla, Horacio Andres. 30 March 2011
This thesis is predicated upon a vision of the future of computing with a separation of functionality between core and edges, very similar to that governing the Internet itself. In this vision, the core of our computing infrastructure is made up of vast server farms with an abundance of storage and processing cycles. Centralization of computation in these farms, coupled with high-speed wired or wireless connectivity, allows for pervasive access to a highly-available and well-maintained repository for data, configurations, and applications. Computation in the edges is concerned with provisioning application state and user data to rich clients, notably mobile devices equipped with powerful displays and graphics processors.
We define flexible computing as systems support for applications that dynamically leverage the resources available in the core infrastructure, or cloud. The work in this thesis focuses on two instances of flexible computing that are crucial to the realization of the aforementioned vision. Location flexibility aims to, transparently and seamlessly, migrate applications between the edges and the core based on user demand. This enables performing the interactive tasks on rich edge clients and the computational tasks on powerful core servers. Scale flexibility is the ability of applications executing in cloud environments, such as parallel jobs or clustered servers, to swiftly grow and shrink their footprint according to execution demands.
This thesis shows how we can use system virtualization to implement systems that provide scale and location flexibility. To that effect we build and evaluate two system prototypes: Snowbird and SnowFlock. We present techniques for manipulating virtual machine state that turn running software into a malleable entity which is easily manageable, is decoupled from the underlying hardware, and is capable of dynamic relocation and scaling. This thesis demonstrates that virtualization technology is a powerful and suitable tool to enable solutions for location and scale flexibility.
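Snowbird and SnowFlock are not sketched here; as a generic illustration of location flexibility, the snippet below live-migrates a running virtual machine between a hypothetical edge host and core host using the libvirt Python bindings. Host URIs, the VM name, and the trigger condition are placeholders.

```python
# Generic illustration of location flexibility via live VM migration with
# libvirt. This is not Snowbird or SnowFlock; hosts, VM name, and the
# trigger condition are placeholders.
import libvirt

VM_NAME = "interactive-app"                 # hypothetical VM
EDGE_URI = "qemu+ssh://edge-host/system"    # hypothetical edge client
CORE_URI = "qemu+ssh://core-host/system"    # hypothetical core server

def migrate_vm(src_uri: str, dst_uri: str, name: str) -> None:
    """Live-migrate a running domain from src_uri to dst_uri."""
    src = libvirt.open(src_uri)
    dst = libvirt.open(dst_uri)
    try:
        dom = src.lookupByName(name)
        # VIR_MIGRATE_LIVE keeps the guest running while its state is copied.
        dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    finally:
        src.close()
        dst.close()

if __name__ == "__main__":
    crunching = True  # placeholder for "the user launched a compute-heavy task"
    if crunching:
        migrate_vm(EDGE_URI, CORE_URI, VM_NAME)   # push computation to the core
    else:
        migrate_vm(CORE_URI, EDGE_URI, VM_NAME)   # pull interaction back to the edge
```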
|
713 |
Looking through the Clouds: A Tale of Two Universities. Melin, Ulf; Sarkar, Pradip K.; Young, Leslie W. January 2012
Cloud computing has become a popular buzzword and a trend in the IT industry. With its characteristic features of scalable computing resources on demand and accessibility on a pay-per-use basis, it has been promoted as the harbinger of good tidings to its subscribers, such as the minimization of in-house IT infrastructure, substantial cost savings, and diminished administrative hurdles, thereby appearing as an appealing outsourcing proposition for non-IT enterprises such as universities. This paper presents a comparative case study of two universities, one in Australia (UniOz) and one in Sweden (UniSwed). The two universities illustrate how contemporary organisations interpret cloud computing, the drivers behind moving services into the cloud, and prevailing concerns. Similarities pertaining to drivers for cloud computing are identified in the two cases (seeking scalable computing resources, and the re-allocation of IT resources to focus on core enterprise operations, with an aim to trim costs), despite differences in the culture of the respective IT departments. Differences were also identified in terms of student- versus staff-driven sourcing of services (email), and early versus late adoption. The case study also illustrates interesting patterns in the organisational implications of cloud services over time that call for longitudinal studies. The implications of this paper are three-fold: the two cases are consistent with outsourcing theories; they point to a transformation of the status quo rather than an erosion of the role and influence of the internal IT department; and they reveal gaps in outsourcing theories, pointing to a possible future research direction in strengthening the relevant theoretical framework.
|
714 |
The contribution of cloud computing to SMEs' competitive advantage: A resource-based view. Ekström Winroth, Sten; Bettels, Franco. January 2012
The phenomenon analyzed in this thesis is the rise of cloud computing technologies and their potential impact on SMEs. Cloud computing is expected to enable new capabilities for SMEs through its key benefits of lower cost and ubiquitous accessibility. The research was grounded in the theoretical framework of the resource-based view and was conducted via semi-structured interviews along the themes of application history, financial impact, structural impact, strategic impact, risk considerations, and future outlook. The core research question was to understand the impact of cloud computing technologies, as new resources, on the competitive advantage of SMEs. To this end, 183 SMEs, selected from a commerce authority and an internet inquiry, were contacted via email, of which 6 agreed to an interview. The interview outcomes were analyzed through coding and an interpretation of the qualitative findings. Significant outcomes were that cloud computing provides SMEs with capabilities for collaboration and mobility. An impact on innovation could not be verified but was indicated. Moreover, the adoption of cloud computing has led SMEs to save resources in terms of time, IT budget, and specific IT knowledge. The provision of these new capabilities and the savings of resources have been shown to improve SMEs' overall performance by complementing, supplementing, and substituting existing resources. Nevertheless, no direct link to a competitive advantage could be identified, although suggestive indirect links were found. The findings may offer new understanding of the organizational impact of cloud computing on SMEs in terms of resource enhancement and its effect on business practices. Implications for future research include the need to investigate the internal charging of cloud services and, in particular, to narrow down the key advantages of cloud computing for SMEs and raise business awareness of them as a source of competitive advantage. / This paper has been a collective effort between Sten Ekström Winroth and Franco Bettels; the thesis was written in collaboration between the Management and Informatics departments at Jönköping International Business School. These two departments have different requirements for the thesis writing process, which means that the authors have had to meet different requirements. This is the reason two theses exist with the same name; they differ in details but are overall the same.
|
715 |
Towards Systematic and Accurate Environment Selection for Emerging Cloud Applications. Li, Ang. January 2012
As cloud computing is gaining popularity, many application owners are migrating their applications into the cloud. However, because of the diversity of the cloud environments and the complexity of the modern applications, it is very challenging to find out which cloud environment is best fitted for one's application.
In this dissertation, we design and build systems to help application owners select the most suitable cloud environments for their applications. The first part of this thesis focuses on how to compare the general fitness of the cloud environments. We present CloudCmp, a novel comparator of public cloud providers. CloudCmp measures the elastic computing, persistent storage, and networking services offered by a cloud along metrics that directly reflect their impact on the performance of customer applications. CloudCmp strives to ensure fairness, representativeness, and compliance of these measurements while limiting measurement cost. Applying CloudCmp to four cloud providers that together account for most of the cloud customers today, we find that their offered services vary widely in performance and costs, underscoring the need for thoughtful cloud environment selection. From case studies on three representative cloud applications, we show that CloudCmp can guide customers in selecting the best-performing provider for their applications.
The second part focuses on how to let customers compare cloud environments in the context of their own applications. We describe CloudProphet, a novel system that can accurately estimate an application's performance inside a candidate cloud environment without the need of migration. CloudProphet generates highly portable shadow programs to mimic the behavior of a real application, and deploys them inside the cloud to estimate the application's performance. We use the trace-and-replay technique to automatically generate high-fidelity shadows, and leverage the popular dispatcher-worker pattern to accurately extract and enforce the inter-component dependencies. Our evaluation in three popular cloud platforms shows that CloudProphet can help customers pick the best-performing cloud environment, and can also accurately estimate the performance of a variety of applications. / Dissertation
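CloudCmp's actual benchmark suite is not reproduced here; a minimal sketch of the underlying idea, timing the same operation against several environments and weighting the result by a per-operation price, might look like the following. Provider names, simulated latencies, and prices are placeholders rather than measured data.

```python
# Minimal sketch of a CloudCmp-style comparison: time the same workload in
# several environments and report cost-normalized performance. Provider
# names, workloads, and prices below are placeholders, not measured data.
import time
from statistics import median
from typing import Callable, Dict

def benchmark(op: Callable[[], None], trials: int = 5) -> float:
    """Return the median latency of op() in seconds over several trials."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        op()
        samples.append(time.perf_counter() - start)
    return median(samples)

def fake_table_get(delay_s: float) -> Callable[[], None]:
    """Stand-in for a provider's key-value 'get'; real code would call its SDK."""
    return lambda: time.sleep(delay_s)

if __name__ == "__main__":
    providers: Dict[str, Dict[str, float]] = {
        # latencies and prices are illustrative placeholders
        "provider-A": {"simulated_latency": 0.004, "price_per_1k_ops": 0.010},
        "provider-B": {"simulated_latency": 0.009, "price_per_1k_ops": 0.005},
    }
    for name, p in providers.items():
        latency = benchmark(fake_table_get(p["simulated_latency"]))
        cost_perf = latency * p["price_per_1k_ops"]   # lower is better on both axes
        print(f"{name}: median latency {latency*1e3:.1f} ms, latency x price {cost_perf:.2e}")
```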
|
716 |
The competition strategy research of Taiwan cloud computing industry. Lin, Yi-Chun. 16 August 2010
Rather than calling cloud computing a brand-new technology or an industry development trend, I would prefer to say that cloud computing is the result of a revolution in commercial business models. The growth of the global information technology industry has been exhausted in recent years. In the PC industry of the past, Intel and Microsoft's Wintel architecture held more than 80% of the worldwide market share, and with every year's new product launches, all consumers had to pay the bill without exception. However, when Microsoft introduced the new Vista operating system, sales did not pan out as expected; consumers finally decided to push back against the steadily increasing selling price. Meanwhile, Intel took action to provide low-cost processor solutions in response to market needs and to rescue its declining market share.
As global network coverage matures and the era of the high-speed network arrives, human life will change significantly because business opportunities arise from the Internet. Message propagation, interpersonal interaction, and even food and lifestyle all hook up with the network; this huge business opportunity is appetizing. In recent years, "service" has become the central idea of industry restructuring, and cloud computing in fact takes service as its starting point and the source of the resulting value. "Cloud computing" has no unified specification or definition right now, so this study attempts, with the limited data collected, to use five-forces analysis, competitive analysis, and management theory to discuss and explain how it could possibly become a huge business opportunity for the industry, and to assess feasible directions for Taiwan in light of this trend.
The conclusions of this study are summarized below:
(1) Cloud computing holds large business opportunities in the future.
(2) Taiwan's cloud computing business opportunities fall into two parts: one in hardware value-adding, the other in product research.
(3) Taiwan has an advantage in working with China on the cloud computing market.
(4) Taiwan's government cloud computing policy can learn from Japan and Korea.
(5) Taiwan's government cloud computing policy can favor the local market.
|
717 |
Design and implementation of a Hadoop-based secure cloud computing architecture. Cheng, Sheng-Lun. 31 January 2011
The goal of this research is to design and implement a secure Hadoop cluster. Cloud computing is a type of network computing in which most data is transmitted over the network. To develop a secure cloud architecture, we need to authenticate users first and protect transmitted data against theft and falsification, so that even if someone steals the data, the content remains hard to read. Therefore, we focus on the following points:
I. Authorization: First, we investigate the user authorization problem in the Hadoop system, and then propose two solutions: SOCKS Authorization and Service Level Authorization. SOCKS Authorization is an external authorization mechanism for the Hadoop system that uses a username/password pair to identify users (a generic sketch follows this abstract). Service Level Authorization is a new authorization mechanism in Hadoop 0.20 that ensures clients connecting to a particular Hadoop service have the necessary, pre-configured permissions and are authorized to access the given service.
II. Transmission Encryption: To keep important data, such as Block IDs, Job IDs, and usernames, from being exposed on untrusted networks, we examine Hadoop transmissions in practice and point out possible security problems. Subsequently, we use IPSec to implement transmission encryption and packet verification for Hadoop.
III. Architecture Design: Based on the implementation framework of Hadoop mentioned above, we propose a secure architecture for the Hadoop cluster to solve these security problems. In addition, we also evaluate the performance of HDFS and MapReduce in this architecture.
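The generic sketch referenced in point I above: it routes a client socket through a SOCKS5 proxy that enforces username/password authentication, using the third-party PySocks library. The proxy address, credentials, and target service are placeholders, and the thesis's Hadoop-side configuration is not reproduced.

```python
# Illustration of SOCKS username/password authorization for a client
# connection, using the third-party PySocks library (pip install PySocks).
# Proxy address, credentials, and the target service are placeholders.
import socks

PROXY_HOST = "socks-gateway.example"   # hypothetical SOCKS5 gateway
PROXY_PORT = 1080
SERVICE_HOST = "namenode.example"      # hypothetical Hadoop service endpoint
SERVICE_PORT = 9000

def open_authorized_connection(username: str, password: str) -> socks.socksocket:
    """Connect to the service through the proxy; the proxy rejects bad credentials."""
    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, PROXY_HOST, PROXY_PORT,
                username=username, password=password)
    s.settimeout(10)
    s.connect((SERVICE_HOST, SERVICE_PORT))
    return s

if __name__ == "__main__":
    try:
        conn = open_authorized_connection("alice", "s3cret")
        print("authorized and connected")
        conn.close()
    except socks.ProxyError as err:
        print("connection refused by proxy:", err)
```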
|
718 |
Performance Analysis of Relational Database over Distributed File Systems. Tsai, Ching-Tang. 08 July 2011
With the growth of the Internet, people use the network frequently. Many PC applications, such as text processing, calendars, and photo management, have moved to a network-based environment, and users can even develop applications on the network. Google is a company providing web services. Its popular services, the search engine and Gmail, attract people with short response times and large amounts of data storage; it also charges businesses to place their own advertisements. Another hot social network is Facebook, also a popular website, which processes huge volumes of instant messages and social relationships between users. The magic behind all of this is the new technique of cloud computing.
Cloud computing is able to sustain high-performance processing with short response times, and its kernel components are distributed data storage and distributed data processing. Hadoop is a famous open-source framework for building a cloud distributed file system and performing distributed data analysis. Hadoop is suitable for batch applications and write-once-read-many applications. Thus, currently only a few applications, such as pattern searching and log file analysis, are implemented over Hadoop. However, almost all database applications still use relational databases. To port them to a cloud platform, it becomes necessary to run a relational database over HDFS. So we test the FUSE-DFS solution, an interface that mounts HDFS into a system so that it can be used like a local filesystem. If FUSE-DFS performance can satisfy users' applications, it becomes easier to persuade people to port their applications to a cloud platform with minimal overhead.
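A rough sketch of the kind of comparison this implies, timing the same sequential write/read workload on a FUSE-DFS mount point versus a local directory; the mount path is a placeholder and assumes fuse-dfs has already mounted HDFS there.

```python
# Rough sketch: compare sequential write/read throughput on a FUSE-mounted
# HDFS path against a local path. The mount point below is a placeholder and
# assumes fuse-dfs has already mounted HDFS there.
import os
import time

FUSE_DFS_DIR = "/mnt/hdfs/benchmark"   # hypothetical fuse-dfs mount point
LOCAL_DIR = "/tmp/benchmark"
FILE_SIZE = 64 * 1024 * 1024           # 64 MiB test file
CHUNK = 1024 * 1024

def write_then_read(directory: str) -> tuple[float, float]:
    """Return (write_seconds, read_seconds) for one sequential pass."""
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, "testfile.bin")
    payload = b"\0" * CHUNK

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // CHUNK):
            f.write(payload)
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    read_s = time.perf_counter() - start

    os.remove(path)
    return write_s, read_s

if __name__ == "__main__":
    for label, directory in [("local", LOCAL_DIR), ("fuse-dfs", FUSE_DFS_DIR)]:
        w, r = write_then_read(directory)
        print(f"{label}: write {FILE_SIZE / w / 2**20:.1f} MiB/s, "
              f"read {FILE_SIZE / r / 2**20:.1f} MiB/s")
```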
|
719 |
A Storage QoS and Power Saving Distributed Storage System for Cloud Computing. Tai, Hsieh-Chang. 29 September 2011
In order to achieve storage QoS and power saving, we propose a fast data migration/transmission scheme and a power-saving algorithm for Datanode management. The fast data migration/transmission scheme consists of three mechanisms. First, it uses multicast to improve the network bandwidth and relieve the I/O and bandwidth bottlenecks. Second, network coding is used to increase the network throughput while retaining high fault tolerance. Third, it uses user/Datanode connection management to avoid missing important messages, combined with CPU- and I/O-bound scheduling to store data evenly across the system. Experimental results show that the proposed fast data migration/transmission scheme improves upload bandwidth by 56% and response time by 85%. The proposed power-saving algorithm first applies a Kalman filter and then adds pattern analysis to predict the system workload, dynamically adjusting the number of Datanodes in order to save power. Experimental results show that the proposed power-saving algorithm for Datanode management achieves more than 92.97% accuracy in workload prediction and improves power consumption by 52.25% with a 3.82% error rate.
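The thesis's combined Kalman-plus-pattern-analysis predictor is not reproduced here; a minimal one-dimensional Kalman filter that smooths a noisy workload series and maps each prediction to an active Datanode count might look like the following, with all constants chosen for illustration.

```python
# Minimal 1-D Kalman filter sketch: smooth a noisy workload signal and map
# the prediction to a number of active Datanodes. Constants and the workload
# trace below are illustrative, not the thesis's tuned predictor.
import math

def kalman_step(x_est: float, p_est: float, measurement: float,
                process_var: float = 1.0, measure_var: float = 4.0) -> tuple[float, float]:
    """One predict/update cycle of a scalar Kalman filter with a random-walk model."""
    # Predict: state unchanged, uncertainty grows by the process noise.
    x_pred = x_est
    p_pred = p_est + process_var
    # Update: blend prediction and measurement by the Kalman gain.
    gain = p_pred / (p_pred + measure_var)
    x_new = x_pred + gain * (measurement - x_pred)
    p_new = (1.0 - gain) * p_pred
    return x_new, p_new

def nodes_needed(predicted_load: float, per_node_capacity: float = 25.0,
                 min_nodes: int = 2) -> int:
    """Map a predicted workload (e.g. requests/s) to an active Datanode count."""
    return max(min_nodes, math.ceil(predicted_load / per_node_capacity))

if __name__ == "__main__":
    workload = [40, 55, 48, 90, 120, 110, 60, 30, 20, 75]   # noisy observations
    x, p = float(workload[0]), 1.0
    for observed in workload:
        x, p = kalman_step(x, p, observed)
        print(f"observed {observed:3d}  predicted {x:6.1f}  active Datanodes {nodes_needed(x)}")
```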
|
720 |
The Study of Marshalling in Android: Case Implementation of Data Retrieval from Cloud Database Service. Jhan, Bo-Chao. 18 November 2011
With smart handheld devices and the rapid development of network applications, data exchange between devices becomes the first problem. There are many ways information can be transmitted from one end to the other, but which is the best?
This paper examines several common data-packaging methods, compares their features, advantages, and disadvantages, and tests their effectiveness in terms of the size of the packaged data and the time needed for packaging.
To demonstrate the practicality of the packaging, a "file synchronization system" was designed, using Protocol Buffers as the data exchange format and implemented on the Android system.
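The Protocol Buffer schema of the file synchronization system is not shown here; a small harness of the kind used to compare packaged size and packaging time, with Python's built-in json and pickle serializers as stand-ins for the formats under test, might look like this.

```python
# Small harness sketch for comparing data-package formats by encoded size and
# encode/decode time. json and pickle are used as stand-ins here; the thesis's
# Protocol Buffer messages would need a compiled schema.
import json
import pickle
import time
from typing import Any, Callable, Tuple

RECORD = {"path": "/sdcard/notes/todo.txt", "size": 2048,
          "mtime": 1321593600, "checksum": "9f86d08" * 4}   # hypothetical sync record
PAYLOAD = [dict(RECORD, size=RECORD["size"] + i) for i in range(1000)]

def measure(encode: Callable[[Any], bytes], decode: Callable[[bytes], Any],
            data: Any, rounds: int = 20) -> Tuple[int, float, float]:
    """Return (encoded_size_bytes, avg_encode_seconds, avg_decode_seconds)."""
    blob = encode(data)
    start = time.perf_counter()
    for _ in range(rounds):
        encode(data)
    enc_s = (time.perf_counter() - start) / rounds
    start = time.perf_counter()
    for _ in range(rounds):
        decode(blob)
    dec_s = (time.perf_counter() - start) / rounds
    return len(blob), enc_s, dec_s

if __name__ == "__main__":
    formats = {
        "json": (lambda d: json.dumps(d).encode("utf-8"),
                 lambda b: json.loads(b.decode("utf-8"))),
        "pickle": (pickle.dumps, pickle.loads),
    }
    for name, (enc, dec) in formats.items():
        size, enc_s, dec_s = measure(enc, dec, PAYLOAD)
        print(f"{name:6s} size {size:7d} B  encode {enc_s*1e3:.2f} ms  decode {dec_s*1e3:.2f} ms")
```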
|