581

Timed-Release Proxy Conditional Re-Encryption for Cloud Computing

Chen, Jun-Cheng 30 August 2011 (has links)
Mobile technology is developing rapidly, and it is now common for people to fetch or edit files over the Internet with mobile devices such as notebooks and smartphones. Because a user may own several devices, keeping a file synchronized among them can be inconvenient, and editing the same file from every device is not easy. Recently, cloud technology has become more and more popular, and new business models have been launched. One of them is the storage platform Dropbox, which synchronizes users' files across their devices and also allows users to share files with others. However, Dropbox has been shown not to protect the privacy of those files well. Many encryption schemes have been proposed in the literature, but most of them do not support secret file sharing when deployed in a cloud environment. Even the schemes that do support it only allow a file owner to share all of his files with others. In some situations, the file owner may want to ensure that the receiver cannot decrypt the ciphertext until a specified time arrives. The existing encryption schemes cannot achieve these goals simultaneously. Hence, to cope with these problems, we propose a timed-release proxy conditional re-encryption scheme for cloud computing. Not only are users' files stored safely, but each user can also freely share a desired file with another user. Furthermore, the receiver cannot obtain any information about the file until the chosen time arrives. Finally, we demonstrate the security of the proposed scheme via formal proofs.
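To make the timed-release idea concrete, here is a minimal Python sketch of the gating effect the abstract describes: a file key is wrapped so that it becomes recoverable only once a time server publishes a token for the chosen release time. The token construction (an HMAC over the release time) and all names are illustrative assumptions; the thesis itself achieves this property with proxy conditional re-encryption and formal proofs, which this toy does not reproduce.

```python
import hashlib, hmac, os

MASTER = os.urandom(32)          # master secret held only by the time server

def time_token(release_time: str) -> bytes:
    # The time server derives this token and publishes it only once
    # release_time has actually arrived (assumed behavior in this toy).
    return hmac.new(MASTER, release_time.encode(), hashlib.sha256).digest()

def wrap_key(file_key: bytes, release_time: str) -> bytes:
    # The sender binds the file key to the release time. (In the real
    # scheme this uses public parameters; the toy XOR just shows gating.)
    token = time_token(release_time)
    return bytes(x ^ y for x, y in zip(file_key, token))

def unwrap_key(wrapped: bytes, token: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(wrapped, token))

file_key = os.urandom(32)                    # key protecting the shared file
wrapped = wrap_key(file_key, "2011-12-31T00:00:00Z")
# ...once the time server publishes the token for that instant:
assert unwrap_key(wrapped, time_token("2011-12-31T00:00:00Z")) == file_key
```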
582

Dual Migration for Cloud Service

Chen, Ya-Yin 12 July 2012 (has links)
none
583

User's Risk Management for the Personal Data of the Cloud Computing Service Industries

Huang, Yen-Lin 06 August 2012 (has links)
With the rapid development of information technology, cloud computing is becoming increasingly popular in industry, since users can access a variety of data-processing services simply by connecting to third-party cloud service providers over the network. The powerful processing, elastic usage, and low cost of cloud computing have ushered in a new global technological trend. Although cloud computing provides a cloud that is larger in scale, more relevant, and more beneficial, most practical cloud patrons are aware that what matters is its security. Anyone who has ever used the Internet, whether an enterprise or an individual, inevitably runs the risk of having information recorded, copied, leaked, deleted inappropriately or accidentally, or even used for improper purposes by third parties. The private data and business secrets of an enterprise's stakeholders, including its customers, partners, employees, and suppliers, also suffer from this vulnerability. Therefore, for the cloud computing industry, what matters to governments, enterprises, and individuals is providing an information security shelter rather than a network environment in which personal data is highly exposed. Cavoukian (2010) argues that information security in cloud computing is an issue in the public domain. The data generated by cloud computing and the number of people involved are so large that every citizen is drawn to be concerned with the relevant government policies and laws (Lee, 2010). In this paper, we conduct a risk analysis for cloud computing and discuss risk management mechanisms for the cloud computing industry using Freeman's stakeholder theory.
584

Attribute-Based Proxy Re-Encryption

Chen, Chun-Hung 30 August 2012 (has links)
Cloud computing has developed rapidly in recent years and offers novel concepts and innovations in computer use. One application of cloud computing is that people can designate a proxy to execute a number of tasks on their behalf in certain situations rather than undertaking all tasks themselves. People benefit from the proxy; however, some information is revealed to it, such as their activities and private data. That is, the proxy learns the actions of people through the delegation process, and proxy re-encryption, a cryptographic primitive, has been proposed to solve this problem. In a proxy re-encryption system, when a user (e.g., Alice) wants to send a ciphertext that is encrypted under her secret key and stored in the cloud to another user (e.g., Bob), she can designate a proxy to transform the ciphertext into a different ciphertext that can be decrypted with Bob's private key. Based on attribute-based encryption and proxy re-encryption, we propose attribute-based proxy re-encryption with bilinear pairing. Furthermore, in the proposed scheme, third parties cannot decrypt the ciphertext if they do not have matching attributes, even with the proxy's help. Finally, we offer security proofs to demonstrate that the proposed scheme satisfies the essential requirements of attribute-based encryption and proxy re-encryption schemes.
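For flavor, here is a toy sketch of the basic proxy re-encryption flow the abstract describes (Alice's ciphertext is transformed by the proxy so that Bob can decrypt it), using a classic BBS98-style ElGamal variant over a small Schnorr group rather than the thesis's attribute-based, pairing-based construction. The parameters are deliberately tiny and insecure; everything here is an illustrative stand-in, not the proposed scheme.

```python
import secrets

# INSECURE toy parameters: p = 2q + 1 with q prime; g generates the
# order-q subgroup of quadratic residues mod p.
p, q, g = 467, 233, 4

def keygen():
    sk = secrets.randbelow(q - 1) + 1            # sk in [1, q-1]
    return sk, pow(g, sk, p)                     # (secret, public) key pair

def encrypt(pk, m):
    """Encrypt a subgroup element m under pk = g^sk."""
    r = secrets.randbelow(q - 1) + 1
    return (m * pow(g, r, p) % p, pow(pk, r, p))  # (m*g^r, g^{sk*r})

def rekey(sk_a, sk_b):
    """Re-encryption key rk = sk_b / sk_a mod q (created with Bob's help)."""
    return sk_b * pow(sk_a, -1, q) % q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))                  # turns g^{a*r} into g^{b*r}

def decrypt(sk, ct):
    c1, c2 = ct
    g_r = pow(c2, pow(sk, -1, q), p)             # recover g^r
    return c1 * pow(g_r, -1, p) % p

a, pk_a = keygen()                               # Alice
b, pk_b = keygen()                               # Bob
m = pow(g, 42, p)                                # message encoded in the subgroup
ct_alice = encrypt(pk_a, m)                      # ciphertext stored in the cloud
ct_bob = reencrypt(rekey(a, b), ct_alice)        # proxy transforms, never sees m
assert decrypt(b, ct_bob) == m
```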
585

Network products distribution channels co-opetition strategy analysis and research

Huang, Chen-hsin 03 September 2012 (has links)
Over nearly two decades of development, the agency business has experienced many industry ups and downs. Each time the market approaches saturation, a survival drama plays out along the industrial supply chain, where competition among OEMs is intense and all are fighting for a limited market. Because high-speed Internet applications change rapidly under massive global information-exchange requirements, revolutionary and innovative technologies and products appear frequently, so the network channel industry is still expected to have considerable room for growth. Beyond product competitiveness, how to leverage soft power (such as professional technical services) to seize the initiative in the market and maintain a competitive advantage remains the subject for the case company to study. Through this research, we examine the issues and challenges the case company will face amid the vigorous development of modern information applications, such as: (1) coordination and competition between agency and brand resources; (2) value-chain relationships and competitive strategies among upstream, downstream, and alliance companies; (3) branding and product conflicts between the agency and OEMs; and (4) demand-side changes, such as new technologies and applications, that may affect the industry supply chain, together with the corresponding coping strategies. Based on three dimensions (industry trends, supply-chain competition, and internal resource allocation), long-term observation of the case company's development, in-depth interviews with internal executives, and secondary data collection, this research explores the case company's strategic responses and analyzes the feasibility of applying business-management tools (five-forces analysis, value networks, game theory) to seek the best strategy for the case company in a rapidly changing competitive environment.
586

Multi-dimensional optimization for cloud based multi-tier applications

Jung, Gueyoung 09 November 2010 (has links)
Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By hosting many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these applications at a very fine granularity. Meanwhile, resource virtualization has recently gained considerable attention in the design of computer systems and has become a key ingredient of cloud computing. It provides significant improvement in aggregate power efficiency and high resource utilization by enabling resource consolidation. It also allows infrastructure providers to manage their resources in an agile way under highly dynamic conditions. However, these trends also raise significant challenges for researchers and practitioners seeking agile resource management in consolidated environments. First, they must deal with the very different responsiveness of different applications while handling dynamic changes in resource demands as applications' workloads change over time. Second, when provisioning resources, they must consider management costs such as power consumption and adaptation overheads (i.e., overheads incurred by dynamically reconfiguring resources). Dynamic provisioning of virtual resources entails an inherent performance-power tradeoff, and indiscriminate adaptations can impose significant overheads on power consumption and end-to-end performance. Hence, to achieve agile resource management, it is important to thoroughly investigate the performance characteristics of deployed applications, precisely account for the costs incurred by adaptations, and then balance benefits against costs. Fundamentally, the research question is how to dynamically provision available resources for all deployed applications to maximize overall utility under time-varying workloads while accounting for such management costs. Given the scope of the problem space, this dissertation aims to develop an optimization system that not only meets the performance requirements of deployed applications but also addresses tradeoffs among performance, power consumption, and adaptation overheads. To this end, this dissertation makes two distinct contributions. First, I show that adaptations applied to cloud infrastructures can cause significant overheads not only in end-to-end response time but also in server power consumption. Moreover, I show that such costs vary in intensity and time scale with workload, adaptation type, and the performance characteristics of hosted applications. Second, I address multi-dimensional optimization among server power consumption, performance benefit, and the transient costs incurred by various adaptations. Additionally, I incorporate the overhead of the optimization procedure itself into the problem formulation: system optimization approaches typically entail intensive computation and potentially long delays to cope with the huge search space of cloud computing infrastructures, so this cost cannot be ignored when adaptation plans are designed. In this multi-dimensional optimization work, a scalable optimization algorithm and a hierarchical adaptation architecture are developed to handle many applications, hosting servers, and adaptation types, and to support adaptation decisions at multiple time scales.
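As a sketch of the cost-benefit balancing described above, the toy Python below chooses a server count by trading a queueing-style latency penalty against power cost and an explicit adaptation (reconfiguration) overhead. The M/M/1-style latency estimate and every constant are hypothetical, chosen only to make the tradeoff visible; the dissertation's actual formulation is far richer.

```python
# Choose server count n to maximize utility = -(latency penalty
# + power cost + cost of adapting away from the current allocation).
def utility(n, lam=65.0, mu=10.0, power_cost=1.0,
            latency_weight=200.0, adapt_cost=2.5, n_now=8):
    if n * mu <= lam:                        # offered load exceeds capacity
        return float("-inf")
    latency = 1.0 / (n * mu - lam)           # M/M/1-style response-time estimate
    return -(latency_weight * latency        # performance penalty
             + power_cost * n                # power consumption
             + adapt_cost * abs(n - n_now))  # transient adaptation overhead

best = max(range(1, 40), key=utility)
print(best)   # -> 9: adding one server pays off; further scaling does not
```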
587

Monitoring-as-a-service in the cloud

Meng, Shicong 03 April 2012 (has links)
State monitoring is a fundamental building block for Cloud services. The demand for providing state monitoring as a service (MaaS) continues to grow, as evidenced by CloudWatch from Amazon EC2, which allows cloud consumers to pay for monitoring a selection of performance metrics with coarse-grained periodic sampling of runtime states. One of the key challenges for wide deployment of MaaS is to strike a better balance among a set of critical quality and performance parameters, such as accuracy, cost, scalability, and customizability. This dissertation research is dedicated to innovative research and development of an elastic framework for providing state monitoring as a service (MaaS). We analyze the limitations of existing techniques, systematically identify the needs and challenges at different layers of a Cloud monitoring service platform, and develop a suite of distributed monitoring techniques to support a flexible monitoring infrastructure, cost-effective state monitoring, and monitoring-enhanced Cloud management. At the monitoring infrastructure layer, we develop techniques that support multi-tenancy of monitoring services by exploiting cost sharing between monitoring tasks and safeguarding monitoring resource usage. To provide elasticity in monitoring, we propose techniques that allow the monitoring infrastructure to self-scale with monitoring demand. At the cost-effective state monitoring layer, we devise several new state monitoring functionalities to meet the unique functional requirements of Cloud monitoring. Violation-likelihood state monitoring explores the benefits of consolidating monitoring workloads by allowing utility-driven tuning of monitoring intensity on individual monitoring tasks and by identifying correlations between monitoring tasks. Window-based state monitoring leverages distributed windows for the best monitoring accuracy and communication efficiency. Reliable state monitoring is robust to both transient and long-lasting communication issues caused by component failures or cross-VM performance interference. At the monitoring-enhanced Cloud management layer, we devise a novel technique that learns the performance characteristics of both Cloud infrastructure and Cloud applications from cumulative performance monitoring data to increase cloud deployment efficiency.
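A minimal sketch of the window-based idea: rather than alarming on each instantaneous threshold crossing, report a violation only when the condition persists across a sliding window, filtering transient spikes at the cost of some detection delay. The window length and threshold below are assumed illustrative values, not parameters from the dissertation.

```python
from collections import deque

class WindowMonitor:
    """Report a violation only when every sample in the window exceeds
    the threshold (values are illustrative, not from the dissertation)."""
    def __init__(self, threshold, window):
        self.threshold = threshold
        self.samples = deque(maxlen=window)

    def observe(self, value):
        self.samples.append(value)
        full = len(self.samples) == self.samples.maxlen
        return full and all(v > self.threshold for v in self.samples)

mon = WindowMonitor(threshold=0.9, window=5)     # e.g., 90% CPU utilization
for v in [0.95, 0.97, 0.85, 0.92, 0.96, 0.93, 0.94, 0.95]:
    if mon.observe(v):
        print("sustained violation ending at sample", v)  # fires once, at 0.95
```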
588

An empirical approach to automated performance management for elastic n-tier applications in computing clouds

Malkowski, Simon J. 03 April 2012 (has links)
Achieving a high degree of efficiency is non-trivial when managing the performance of large web-facing applications such as e-commerce websites and social networks. While computing clouds have been touted as a good solution for elastic applications, many significant technological challenges still have to be addressed in order to leverage the full potential of this new computing paradigm. In this dissertation I argue that the automation of elastic n-tier application performance management in computing clouds presents novel challenges to classical system performance management methodology that can be successfully addressed through a systematic empirical approach. I present strong evidence in support of my thesis in a framework of three incremental building blocks: Experimental Analysis of Elastic System Scalability and Consolidation, Modeling and Detection of Non-trivial Performance Phenomena in Elastic Systems, and Automated Control and Configuration Planning of Elastic Systems. More concretely, I first provide a proof of concept for the feasibility of large-scale experimental database system performance analyses, and illustrate several complex performance phenomena based on the gathered scalability and consolidation data. Second, I extend these initial results to a proof of concept for automating bottleneck detection based on statistical analysis and an abstract definition of multi-bottlenecks. Third, I build a performance control system that manages elastic n-tier applications efficiently with respect to complex performance phenomena such as multi-bottlenecks. This control system provides a proof of concept for automated online performance management based on empirical data.
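As a toy rendering of the multi-bottleneck notion mentioned above, the sketch below classifies resources by whether they are saturated always, sometimes, or never across monitoring intervals; saturation that alternates among several resources, with no single resource persistently saturated, is flagged as an oscillatory multi-bottleneck. The threshold and the classification rule are illustrative assumptions, not the dissertation's statistical definition.

```python
# Classify bottleneck behavior from per-interval utilization traces.
def classify(util_by_resource, saturated=0.9):
    flags = {r: [u >= saturated for u in series]
             for r, series in util_by_resource.items()}
    always = [r for r, f in flags.items() if all(f)]
    sometimes = [r for r, f in flags.items() if any(f) and not all(f)]
    if always:
        return f"single bottleneck: {always}"
    if len(sometimes) > 1:
        return f"oscillatory multi-bottleneck: {sometimes}"
    return "no persistent bottleneck"

# Saturation alternates between the database and application tiers:
util = {"db_cpu":  [0.95, 0.50, 0.96, 0.40, 0.97],
        "app_cpu": [0.45, 0.93, 0.50, 0.95, 0.42]}
print(classify(util))  # -> oscillatory multi-bottleneck: ['db_cpu', 'app_cpu']
```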
589

Performance modeling and optimization solutions for networking systems

Zhao, Jian, 趙建 January 2014 (has links)
This thesis models and resolves practical problems, using mathematical tools, in two representative networking systems: peer-to-peer (P2P) video streaming systems and cloud computing systems. In the first part, we study how to mitigate the following tussle between content service providers and ISPs in P2P video streaming systems: network-agnostic P2P protocol designs generate large volumes of inter-ISP traffic and increase ISPs' traffic relay costs; in turn, ISPs throttle P2P packets, which significantly degrades P2P streaming performance. First, we investigate the problem in a mesh-based P2P live streaming system. We use end-to-end streaming delay as the performance measure and quantify the amount of inter-ISP traffic by the number of copies of the live stream imported into each ISP. Considering multiple ISPs at different bandwidth levels, we model the generic relationship between the volume of inter-ISP traffic and streaming performance, which provides useful insights for the design of effective locality-aware peer selection protocols and server deployment strategies across multiple ISPs. Next, we study a similar problem in a hybrid P2P-cloud CDN system for VoD streaming. We characterize the relationship between the costly bandwidth consumption from the cloud CDN and the inter-ISP traffic. We apply a loss network model to derive the bandwidth consumption under any given chunk distribution pattern among peer caches and any streaming request dispatching strategy among ISPs, and derive the optimal peer caching and request dispatching strategies that minimize the bandwidth demand from the cloud CDN. Based on the fundamental insights from our analytical results, we design a locality-aware, hybrid P2P-cloud CDN streaming protocol. In the second part, we study profit maximization and cost minimization problems in Infrastructure-as-a-Service (IaaS) cloud systems. The first problem is how a geo-distributed cloud system should price its datacenter resources at different locations so that its overall profit is maximized over long-term operation. We design an efficient online algorithm for dynamic pricing of VM resources across datacenters, together with job scheduling and server provisioning in each datacenter, to maximize the cloud's profit over the long run. Theoretical analysis shows that our algorithm schedules jobs within their respective deadlines while achieving a time-averaged overall profit closely approaching the offline maximum, which is computed assuming that perfect information on future job arrivals is freely available. The second problem is how federated clouds should trade computing resources among each other to reduce cost by exploiting the diversity of different clouds' workloads and operational costs. We formulate a global cost minimization problem among multiple clouds under the cooperative scenario where each individual cloud's workload and cost information is publicly available. Taking into consideration jobs of disparate lengths, we design a non-preemptive approximation algorithm for leftover job migration and new job scheduling. Given the selfishness of individual clouds, we further design a randomized double auction mechanism to elicit clouds' truthful bids for buying or selling virtual machines. The auction mechanism is proven truthful and guarantees the same approximation ratio as the cooperative approximation algorithm.
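For intuition about truthful double auctions in general (not the randomized mechanism this thesis designs), here is the classic McAfee trade-reduction double auction: the marginal trade is sacrificed so that no buyer or seller can gain by misreporting their valuation. This is a different, simpler mechanism than the thesis's, included only as a hedged illustration of the concept.

```python
# McAfee trade-reduction double auction for identical items (e.g., VM slots).
def mcafee(buy_bids, sell_bids):
    buys = sorted(buy_bids, reverse=True)    # highest willingness-to-pay first
    sells = sorted(sell_bids)                # lowest asking price first
    k = 0
    while k < min(len(buys), len(sells)) and buys[k] >= sells[k]:
        k += 1                               # k = number of efficient trades
    if k == 0:
        return []                            # no mutually beneficial trade
    # Candidate single clearing price from the first excluded bid pair.
    b_next = buys[k] if k < len(buys) else 0.0
    s_next = sells[k] if k < len(sells) else float("inf")
    p0 = (b_next + s_next) / 2
    if sells[k - 1] <= p0 <= buys[k - 1]:
        return [(p0, p0)] * k                # all k trades clear at p0
    # Trade reduction: drop the marginal pair; buyers pay buys[k-1],
    # sellers receive sells[k-1]; the gap is the mechanism's surplus.
    return [(buys[k - 1], sells[k - 1])] * (k - 1)

# Four buyers and four sellers bid for single VM instances:
print(mcafee([10, 8, 6, 2], [1, 3, 5, 7]))   # -> [(6, 5), (6, 5)]
```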
590

A study of transient bottlenecks: understanding and reducing latency long-tail problem in n-tier web applications

Wang, Qingyang 21 September 2015 (has links)
An essential requirement of cloud computing or data centers is to simultaneously achieve good performance and high utilization for cost efficiency. High utilization through virtualization and hardware resource sharing is critical for both cloud providers and cloud consumers to reduce management and infrastructure costs (e.g., energy cost, hardware cost) and to increase cost-efficiency. Unfortunately, achieving good performance (e.g., low latency) for web applications at high resource utilization remains an elusive goal. Both practitioners and researchers have experienced the latency long-tail problem in clouds during periods of even moderate utilization (e.g., 50%). In this dissertation, we show that transient bottlenecks are an important contributing factor to the latency long-tail problem. Transient bottlenecks are bottlenecks with a short lifespan, on the order of tens of milliseconds. Though short-lived, a transient bottleneck can cause a long-tail response time distribution spanning 2 to 3 orders of magnitude, from tens of milliseconds to tens of seconds, due to queuing-effect propagation and amplification caused by complex inter-tier resource dependencies in the system. Transient bottlenecks can arise from a wide range of factors at different system layers. For example, we have identified transient bottlenecks caused by CPU dynamic voltage and frequency scaling (DVFS) control at the CPU architecture layer, Java garbage collection (GC) at the system software layer, and virtual machine (VM) consolidation at the application layer. These factors interact with naturally bursty workloads from clients, often leading to transient bottlenecks that cause overall performance degradation even when all system resources are far from saturated (e.g., below 50% utilization). By combining fine-grained monitoring tools with a sophisticated analytical method to generate and analyze monitoring data, we are able to detect and study transient bottlenecks in a systematic way.
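To illustrate why fine-grained monitoring matters here, the sketch below scans high-resolution utilization samples for short saturation bursts that coarse per-second averages would smooth away. The 50 ms sampling interval, 95% saturation cutoff, and 500 ms transience bound are assumptions for illustration, not the dissertation's parameters.

```python
# Find short-lived saturation bursts in a fine-grained utilization trace.
def transient_bottlenecks(samples, interval_ms=50, saturated=0.95,
                          max_len_ms=500):
    bursts, start = [], None
    for i, u in enumerate(samples + [0.0]):   # sentinel closes a trailing run
        if u >= saturated and start is None:
            start = i
        elif u < saturated and start is not None:
            length = (i - start) * interval_ms
            if length <= max_len_ms:
                bursts.append((start * interval_ms, length))
            start = None
    return bursts   # list of (offset_ms, duration_ms) transient bursts

# One second of 50 ms samples: mean utilization is only ~53%, yet it hides
# a 150 ms saturation burst that can trigger long-tail response times.
trace = [0.4, 0.5, 0.3, 1.0, 1.0, 1.0, 0.4, 0.3, 0.5, 0.4,
         0.5, 0.4, 0.6, 0.5, 0.4, 0.5, 0.6, 0.4, 0.5, 0.4]
print(transient_bottlenecks(trace))   # -> [(150, 150)]
```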
