  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Winning Customers in the Era of Cloud Business Intelligence: Key Adoption Factors from a Small and Medium Enterprise Perspective

Agostini, Alessandro January 2013 (has links)
Due to the fast development of new technologies, the Business Intelligence market is changing rapidly, forcing vendors to adapt their offerings to customers' needs. As the amount of data available to companies has increased substantially in recent years, suitable software tools that perform the right analyses have become essential, even for small and medium-sized businesses. The previous literature, focused on large firms and traditional implementations of Business Intelligence solutions, highlighted the importance of understanding the key factors behind successful projects. In the past few years, a new delivery model for Business Intelligence software has emerged: cloud computing. To date, the key factors for adopting cloud Business Intelligence in small and medium-sized enterprises (SMEs) have not been systematically investigated; existing studies have rarely considered them, and a proven framework is lacking. This paper aims to fill that gap, and the structure of the article is subordinated to this objective. First, the thesis offers an overview of the subject and its terminology in order to make a rather complex topic easier to follow: it begins with a short historical overview of the Business Intelligence sector, defines the term Business Intelligence, and explains both the characteristics of Business Intelligence systems (cloud vs. on-premise) and the importance of a Business Intelligence solution for SMEs. Subsequently, the theoretical framework of the study is defined by combining prior theories with empirical data collected through interviews with four Business Intelligence vendors and customers. The existing Critical Success Factors (CSFs) of IT and BI projects proposed in the literature are reviewed first, after which evaluation criteria for cloud software are taken into consideration.
By integrating insights drawn from these studies and adding new factors that emerged from the interviews, a framework was created and used as the basis for the subsequent questionnaire. Pursuing both quantitative and qualitative approaches is intended to improve the study's reliability. The empirical data are mainly primary data, collected through a survey and four interviews, supported by secondary data such as company reports and market and trend analyses from trustworthy sources. Based on the findings, the author ranks the key aspects of cloud BI adoption in SMEs. The results reveal that the most important adoption factors SMEs evaluate when purchasing a cloud BI solution are the level of software functionality, ubiquitous access to data, responsive customer support, the ability to handle large amounts of data, and the implementation cost. Regarding managerial implications, the study's practical relevance lies in offering BI suppliers' managers, executives, and decision-makers useful areas of discussion for improving their knowledge of SMEs' needs. Moreover, the results can serve Business Intelligence newcomers as guidance for evaluating solutions available on the market.
62

Mobile cloud computing

Wang, Qian 15 March 2011 (has links)
As mobile network infrastructures continuously improve, mobile devices are becoming popular clients for consuming Web resources, especially Web Services (WS). However, connecting mobile devices to existing WS raises several problems. This thesis focuses on three of these challenges: loss of connection, bandwidth/latency, and limited resources. The research designs and implements a cross-platform architecture for connecting mobile devices to WS. The architecture includes a platform-independent design for a mobile service client and a middleware for enhancing the interaction between mobile clients and WS. The middleware also provides a personal service mashup platform for the mobile client. Finally, the middleware can be deployed on cloud platforms, such as Google App Engine and Amazon EC2, to enhance scalability and reliability. The experiments evaluate the optimization/adaptation, the overhead of the middleware, middleware pushing via email, and the performance of the cloud platforms.
63

Design and Implementation of Web-based Streaming Service in Cloud Computing Environments

Liu, Yu-wen 27 July 2010 (has links)
With the popularity of the Internet and wider bandwidth, more and more people watch streaming movies online. The larger the scale of a web site, the more load it has to handle. How to efficiently process users' queries, reduce network latency and packet loss, and improve data reliability at the same time are therefore top issues. In this thesis, cloud environments are used to solve these problems, and a cloud-based streaming system that enables users to query movie information and watch streaming movies online is designed and implemented to deliver a compelling user experience.
64

Reducing Communication Overhead and Computation Costs in a Cloud Network by Early Combination of Partial Results

Huang, Jun-neng 22 August 2011 (has links)
This thesis describes a method of reducing communication overheads within the MapReduce infrastructure of a cloud computing environment. MapReduce is a framework for parallelizing the processing of massive data sets stored across a distributed computer network. One of the benefits of MapReduce is that computation is usually performed on the computer (node) that holds the data file. Not only does this approach achieve parallelism, but it also benefits from a characteristic common to many applications: the answer derived from a computation is often smaller than the input file. Our new method also exploits this feature. We delay the transmission of individual answers out of a given node so that these answers can first be combined locally. This combination has two advantages: it further reduces the amount of data that must ultimately be transmitted, and it allows additional computation across files (such as a merge sort). There is a limit to the benefit of delaying transmission, however, because the reducer stage of MapReduce cannot begin its work until the nodes transmit their answers. We therefore provide a mechanism that lets the user adjust the amount of delay before data is transmitted out of each node.
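The early-combination idea this abstract describes resembles a MapReduce combiner. The following is a minimal sketch, not the thesis's actual implementation: the function names and the word-count workload are illustrative only. Each node merges its own map outputs locally before anything crosses the network, so the reducer receives far less data.

```python
# Sketch of local combination of partial results before transmission,
# in the spirit of a MapReduce combiner: each node merges its own
# map outputs first, so less data crosses the network to the reducer.
from collections import defaultdict

def map_phase(records):
    """Emit (word, 1) pairs, as in the canonical word-count example."""
    for line in records:
        for word in line.split():
            yield (word, 1)

def local_combine(pairs):
    """Combine partial results on the node that produced them."""
    combined = defaultdict(int)
    for key, value in pairs:
        combined[key] += value
    return dict(combined)

def reduce_phase(per_node_results):
    """Merge the already-combined outputs shipped from every node."""
    totals = defaultdict(int)
    for node_result in per_node_results:
        for key, value in node_result.items():
            totals[key] += value
    return dict(totals)

# Two nodes, each holding a shard of the input:
node_a = local_combine(map_phase(["the cloud the cloud"]))
node_b = local_combine(map_phase(["the network"]))
print(reduce_phase([node_a, node_b]))  # {'the': 3, 'cloud': 2, 'network': 1}
```

The trade-off the abstract identifies shows up here as well: the longer a node waits before calling its equivalent of `reduce_phase`, the more it can shrink its output, but the later the reducer can start.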
65

An Overview of Virtualization Technologies for Cloud Computing

Chen, Wei-Min 07 September 2012 (has links)
Cloud computing is a new concept that incorporates many existing technologies, among which virtualization is essential to its establishment. With virtualization, cloud computing can pool hardware resources into a huge resource pool for users to utilize. This thesis begins with an introduction to the widely used service model that classifies cloud computing into three layers: from the bottom up, IaaS, PaaS, and SaaS. Service providers are given as examples for each layer, such as Amazon Beanstalk and Google App Engine for PaaS, and Amazon CloudFormation and Microsoft mCloud for IaaS. Next, we turn our discussion to hypervisors and the technologies for virtualizing hardware resources such as CPUs, memory, and devices. Storage and network virtualization techniques are then discussed. Finally, conclusions are drawn and future directions of virtualization are outlined.
66

Investigation of the aerosol-cloud interaction using the WRF framework

Li, Guohui August 2008 (has links)
In this dissertation, a two-moment bulk microphysical scheme with aerosol effects is developed and implemented in the Weather Research and Forecasting (WRF) model to investigate the aerosol-cloud interaction. Sensitivities of cloud properties to the representation of aerosol size distributions are first evaluated using a simple box model and a cloud-resolving model with detailed spectral-bin microphysics; judged against the sectional approach, the three-moment method generally models cloud properties better than the two-moment method. A convective cloud event occurring on August 24, 2000 in Houston, Texas is investigated using the WRF model, and the simulation results agree qualitatively with the measurements. Simulations with various aerosol profiles demonstrate that the response of precipitation to increasing aerosol concentrations is non-monotonic. The maximal cloud cover, core updraft, and maximal vertical velocity exhibit responses similar to that of precipitation. The WRF model with the two-moment microphysical scheme successfully simulates the development of a squall line that occurred in the southern plains of the U.S. Model experiments varying aerosol concentrations from a clean background case to a polluted continental case show that aerosol concentrations have little influence on the rainfall pattern and distribution, but can remarkably alter the precipitation intensity. The WRF experiment with polluted aerosols predicts 12.8% more precipitation than that with clean aerosols, as well as locally more intense rainfall. Using the monthly mean cloudiness from the International Satellite Cloud Climatology Project (ISCCP), a trend of increasing deep convective clouds over the North Pacific in winter from 1984 to 2005 is detected. Additionally, analysis of the Global Precipitation Climatology Project (GPCP) version 2 results shows a trend of increasing wintertime precipitation over the North Pacific for the same period. Simulations with the WRF model reveal that the increased deep convective clouds and precipitation are reproduced when the aerosol effect of the increasing Asian pollution outflow is accounted for.
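The abstract does not reproduce the scheme's equations. For context, here is a hedged sketch of the standard moment formalism that two- and three-moment bulk schemes commonly rest on; the gamma size distribution is a conventional assumption, not something confirmed by this abstract.

```latex
% Assumed gamma drop-size distribution (standard in bulk schemes):
\[
  n(D) = N_0\, D^{\mu} e^{-\lambda D},
\qquad
  M_k = \int_0^{\infty} D^k\, n(D)\, dD
      = N_0\, \frac{\Gamma(\mu + k + 1)}{\lambda^{\mu + k + 1}} .
\]
% A two-moment scheme predicts, e.g., number concentration (M_0) and
% mass (proportional to M_3), fixing two of the three parameters
% (N_0, \mu, \lambda); a three-moment scheme additionally predicts
% radar reflectivity (proportional to M_6), closing all three.
```

This closure is why the choice between two and three predicted moments matters for how faithfully the bulk scheme tracks the sectional (spectral-bin) reference in the comparison described above.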
67

An investigation of ice production mechanisms using a 3-D cloud model with explicit microphysics /

Ovtchinnikov, Mikhail, January 1997 (has links)
Thesis (Ph. D.)--University of Oklahoma, 1997. / Includes bibliographical references (leaves 125-128).
68

Modeling of the aerosol-cloud interactions in marine stratocumulus /

Liu, Qingfu, January 1997 (has links)
Thesis (Ph. D.)--University of Oklahoma, 1997. / Includes bibliographical references (leaves 125-131).
69

Lightweight task mobility support for elastic cloud computing

Ma, Ka-kui., 馬家駒. January 2011 (has links)
Cloud computing has become popular nowadays, allowing applications to use the enormous resources in the clouds. Combined with mobile computing, it has evolved into mobile cloud computing, in which mobile applications can offload tasks to clouds in a client-server model. For cloud computing, migration is an important function for supporting elasticity: lightweight, portable task migration allows better resource utilization and data-access locality, both essential to the success of cloud computing. Various migration techniques are available, such as process migration, thread migration, and virtual machine live migration. For these existing techniques, however, migrations are too coarse-grained and costly, which offsets the benefits of migration; moreover, the migration path is monotonic, so mobile and cloud resources cannot be fully utilized. In this study, we propose a new computation migration technique called stack-on-demand (SOD), based on the stack structure of tasks. Computation migration is carried out by exporting parts of the execution state, achieving lightweight and flexible migration. Compared to traditional task migration techniques, SOD allows lightweight computation migration and dynamic execution flows in a multi-domain workflow style. Thanks to its lightweight nature, tasks of a large process can be migrated from clouds to small-capacity devices, such as an iPhone, in order to use the unique resources, such as photos, found on those devices. Several techniques have been introduced to support this lightweight feature. To allow efficient access to remote objects during task migration, we propose an object-faulting technique for efficient detection of remote objects, which avoids explicit checking of object status. To allow portable, lightweight application-level migration, an asynchronous migration technique and a twin-method-hierarchy instrumentation technique are proposed.
Together, these allow lightweight task migration from mobile devices to cloud nodes and vice versa. We implement the SOD concept as middleware in a mobile cloud environment to allow transparent execution migration of Java programs. Experiments show that SOD's migration cost is quite low compared to several existing migration mechanisms. We also conduct experiments with mobile devices to demonstrate the elasticity of SOD, in which server-side heavyweight processes can run adaptively on mobile devices to use the devices' unique resources, while mobile devices can seamlessly offload tasks to cloud nodes to use cloud resources. In addition, the system incorporates a restorable communication layer that allows parallel programs to communicate properly across SOD migrations. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
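To make the stack-on-demand idea concrete, here is a hedged sketch only: the `Frame` class and function names are hypothetical illustrations, not the thesis's actual middleware API. Only the topmost frames of a task's state are serialized and shipped; deeper frames stay at the source, to be fetched on demand if execution later needs them.

```python
# Illustrative sketch of stack-on-demand migration: serialize and ship
# only the top of a task's call stack, leaving deeper frames behind.
import pickle

class Frame:
    """A toy stand-in for one execution-stack frame."""
    def __init__(self, func_name, locals_):
        self.func_name = func_name
        self.locals_ = locals_

def export_top_frames(stack, depth):
    """Serialize only the top `depth` frames; the rest stay at the source."""
    top, rest = stack[-depth:], stack[:-depth]
    return pickle.dumps(top), rest

def resume_remote(blob):
    """On the destination node, restore the shipped frames and continue."""
    return pickle.loads(blob)

stack = [Frame("main", {}), Frame("render", {"n": 10}), Frame("filter", {"i": 3})]
blob, remaining = export_top_frames(stack, depth=1)
shipped = resume_remote(blob)
print([f.func_name for f in shipped], [f.func_name for f in remaining])
# ['filter'] ['main', 'render']
```

Because only one small frame crosses the network, migration stays cheap in either direction, which is the property that lets heavyweight server-side tasks visit small-capacity devices and mobile tasks offload to the cloud.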
70

Cloud-assisted multimedia content delivery

Wu, Yu, 吴宇 January 2013 (has links)
Cloud computing, among the trendiest computing paradigms of recent years, is believed to be highly suitable for supporting network-centric applications by providing elastic amounts of bandwidth for accessing a wide range of resources on the fly. In particular, geo-distributed cloud systems are now widely under construction. They span multiple data centers at different geographical locations, offering many advantages to large-scale multimedia applications thanks to the abundance of on-demand storage/bandwidth capacities and their geographical proximity to different groups of users. In this thesis, we investigate, from several perspectives, the fundamental challenges in efficiently leveraging cloud resources to facilitate multimedia content delivery in modern real-world applications. First, from the perspective of application providers, we propose tractable procedures for both model analysis and system design to support representative large-scale multimedia applications in a cloud system, namely VoD streaming applications and social media applications. We further verify the effectiveness of these algorithms and the feasibility of their deployment under dynamic, realistic settings in real-life cloud systems. Second, from the perspective of end users, we focus on mobile users. The rapidly increasing power of personal mobile devices, dwarfing even high-end devices, is providing much richer content and social interaction to users on the move, and many more challenging applications are on the horizon. We explore the tough challenges of effectively exploiting cloud resources to facilitate mobile services by introducing two cloud-assisted mobile systems (CloudMoV and vSkyConf) and explaining their design philosophies and implementation in detail.
Finally, from the perspective of cloud providers, our hands-on experience with public cloud systems shows that existing data center networks lack the flexibility to support many core services. One specific problem is bulk data transfer across geo-distributed datacenters. After formulating a novel, well-formed optimization model for the data migration problem, we design and implement a Delay Tolerant Migration (DTM) system based on the Beacon platform and standard OpenFlow APIs. The system realizes a reliable Datacenter-to-Datacenter (D2D) network by applying the software-defined networking (SDN) paradigm. Real-world experiments under realistic network traffic demonstrate the efficiency of the design. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
