  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
581

Attribute-Based Proxy Re-Encryption

Chen, Chun-Hung 30 August 2012 (has links)
Cloud computing has developed rapidly in recent years and offers novel concepts and innovations in computer use. One application of cloud computing is that people can designate a proxy to execute a number of tasks on their behalf in certain situations instead of undertaking all tasks themselves. With this application, people can benefit from the proxy; however, some information is revealed to the proxy, such as their activities and private data. That is, the proxy becomes aware of people's actions through the delegation process. Proxy re-encryption, a cryptographic primitive, has been proposed to solve this problem. In a proxy re-encryption system, when a user (e.g., Alice) wants to send a ciphertext that is encrypted under her secret key and stored in the cloud to another user (e.g., Bob), she can designate a proxy to transform the ciphertext into a different ciphertext that can be decrypted with Bob's private key. Based on attribute-based encryption and proxy re-encryption, we propose attribute-based proxy re-encryption with bilinear pairing. Furthermore, in the proposed scheme, third parties cannot decrypt the ciphertext if they do not have matching attributes, even with the proxy's help. Finally, we offer security proofs to demonstrate that the proposed scheme satisfies the essential requirements of attribute-based encryption schemes and proxy re-encryption schemes.
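The delegation workflow the abstract describes can be sketched with a toy BBS-style (ElGamal-based) proxy re-encryption over a small Schnorr group. This is only an illustration of the transform step, assuming made-up group parameters; the thesis's actual scheme adds attributes and bilinear pairings, which are omitted here.

```python
import secrets

# Toy BBS-style proxy re-encryption over a Schnorr group (p = 2q + 1).
# Illustrative only: the thesis scheme uses attributes and bilinear pairings.
P, Q, G = 2039, 1019, 4  # g = 4 generates the order-q subgroup mod p

def keygen():
    sk = secrets.randbelow(Q - 1) + 1
    return sk, pow(G, sk, P)        # (secret key a, public key g^a)

def encrypt(pk, m):
    k = secrets.randbelow(Q - 1) + 1
    return (m * pow(G, k, P) % P, pow(pk, k, P))   # (m * g^k, g^(a*k))

def rekey(sk_a, sk_b):
    return sk_b * pow(sk_a, -1, Q) % Q             # rk = b / a mod q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, P))                    # g^(a*k) -> g^(b*k)

def decrypt(sk, ct):
    c1, c2 = ct
    gk = pow(c2, pow(sk, -1, Q), P)                # recover g^k
    return c1 * pow(gk, -1, P) % P                 # m = c1 / g^k
```

The point of the primitive: the proxy holding the re-encryption key can convert Alice's ciphertext into one Bob can decrypt, yet it never sees the plaintext or either party's secret key.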
582

Network products distribution channels co-opetition strategy analysis and research

Huang, Chen-hsin 03 September 2012 (has links)
With nearly two decades of development in the agency business, the market has experienced many industry ups and downs. Each time the market approaches saturation, a survival drama plays out along the industrial supply chain, where competition among OEMs is intense and all are fighting for a limited market. Because high-speed Internet applications change rapidly and global information exchange requirements are massive, revolutionary and innovative technologies and products are frequently developed, so the network channel industry is expected to still have considerable room for growth. Beyond product competitiveness, however, how to leverage soft power (such as professional and technical services) to seize the initiative in the market and maintain a competitive advantage remains the subject for the case company to study. Through this thesis research, we examine the issues and challenges the case company will face amid the vigorous development of modern information applications, such as:
1. Coordination and competition between agency and brand resources
2. Value chain relationships and competing strategies among upstream, downstream, and alliance companies
3. Branding and product conflicts between the agency and OEMs
4. Changes on the demand side, such as new technologies and applications, and their impact on the industry supply chain and the corresponding coping strategies
Based on three dimensions (industry trends, supply chain competition, and internal allocation of resources), together with long-term observation of the case company's development, in-depth interviews with internal executives, and secondary data collection, this research explores the case company's response strategies. It also analyzes the feasibility of applying business operation management tools (five forces analysis, value networks, and game theory) to seek the best strategy for the case company in response to rapidly changing business competition.
583

Multi-dimensional optimization for cloud based multi-tier applications

Jung, Gueyoung 09 November 2010 (has links)
Emerging trends toward cloud computing and virtualization have been opening new avenues to meet enormous demands of space, resource utilization, and energy efficiency in modern data centers. By hosting many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these applications at a very fine granularity. Meanwhile, resource virtualization has recently gained considerable attention in the design of computer systems and become a key ingredient for cloud computing. It provides significant improvement of aggregated power efficiency and high resource utilization by enabling resource consolidation. It also allows infrastructure providers to manage their resources in an agile way under highly dynamic conditions. However, these trends also raise significant challenges to researchers and practitioners to successfully achieve agile resource management in consolidated environments. First, they must deal with the very different responsiveness of different applications, while handling dynamic changes in resource demands as applications' workloads change over time. Second, when provisioning resources, they must consider management costs such as power consumption and adaptation overheads (i.e., overheads incurred by dynamically reconfiguring resources). Dynamic provisioning of virtual resources entails the inherent performance-power tradeoff. Moreover, indiscriminate adaptations can result in significant overheads on power consumption and end-to-end performance. Hence, to achieve agile resource management, it is important to thoroughly investigate various performance characteristics of deployed applications, precisely integrate costs caused by adaptations, and then balance benefits and costs.
Fundamentally, the research question is how to dynamically provision available resources for all deployed applications to maximize overall utility under time-varying workloads, while considering such management costs. Given the scope of the problem space, this dissertation aims to develop an optimization system that not only meets performance requirements of deployed applications, but also addresses tradeoffs between performance, power consumption, and adaptation overheads. To this end, this dissertation makes two distinct contributions. First, I show that adaptations applied to cloud infrastructures can cause significant overheads on not only end-to-end response time, but also server power consumption. Moreover, I show that such costs can vary in intensity and time scale against workload, adaptation types, and performance characteristics of hosted applications. Second, I address multi-dimensional optimization between server power consumption, performance benefit, and transient costs incurred by various adaptations. Additionally, I incorporate the overhead of the optimization procedure itself into the problem formulation. Typically, system optimization approaches entail intensive computations and potentially incur long delays when dealing with the huge search space of cloud computing infrastructures. Therefore, this type of cost cannot be ignored when adaptation plans are designed. In this multi-dimensional optimization work, a scalable optimization algorithm and a hierarchical adaptation architecture are developed to handle many applications, hosting servers, and various adaptations, supporting adaptation decisions at multiple time scales.
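The cost-benefit balance described above can be sketched as a tiny utility-maximizing search over candidate configurations. The configuration names, the flat reconfiguration penalty, and all numbers below are hypothetical placeholders, not the dissertation's model.

```python
# Hypothetical sketch: choose the resource configuration maximizing
# utility = performance benefit - power cost - transient adaptation cost.
# The flat penalty stands in for measured adaptation overheads.

ADAPTATION_PENALTY = 5.0  # illustrative cost of any reconfiguration

def choose_config(current, candidates):
    # candidates: {config name: (performance benefit, power cost)}
    best, best_utility = None, float("-inf")
    for name, (benefit, power) in candidates.items():
        adapt = 0.0 if name == current else ADAPTATION_PENALTY
        utility = benefit - power - adapt
        if utility > best_utility:
            best, best_utility = name, utility
    return best, best_utility
```

The penalty term captures the dissertation's key observation: a marginally better configuration may not be worth switching to once the transient cost of adapting is counted.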
584

Monitoring-as-a-service in the cloud

Meng, Shicong 03 April 2012 (has links)
State monitoring is a fundamental building block for Cloud services. The demand for providing state monitoring as a service (MaaS) continues to grow and is evidenced by CloudWatch from Amazon EC2, which allows cloud consumers to pay for monitoring a selection of performance metrics with coarse-grained periodic sampling of runtime states. One of the key challenges for wide deployment of MaaS is to provide a better balance among a set of critical quality and performance parameters, such as accuracy, cost, scalability and customizability. This dissertation research is dedicated to innovative research and development of an elastic framework for providing state monitoring as a service (MaaS). We analyze limitations of existing techniques, systematically identify the needs and challenges at different layers of a Cloud monitoring service platform, and develop a suite of distributed monitoring techniques to support a flexible monitoring infrastructure, cost-effective state monitoring, and monitoring-enhanced Cloud management. At the monitoring infrastructure layer, we develop techniques to support multi-tenancy of monitoring services by exploring cost sharing between monitoring tasks and safeguarding monitoring resource usage. To provide elasticity in monitoring, we propose techniques to allow the monitoring infrastructure to self-scale with monitoring demand. At the cost-effective state monitoring layer, we devise several new state monitoring functionalities to meet unique functional requirements in Cloud monitoring. Violation likelihood state monitoring explores the benefits of consolidating monitoring workloads by allowing utility-driven monitoring intensity tuning on individual monitoring tasks and identifying correlations between monitoring tasks. Window-based state monitoring leverages distributed windows for the best monitoring accuracy and communication efficiency.
Reliable state monitoring is robust to both transient and long-lasting communication issues caused by component failures or cross-VM performance interference. At the monitoring-enhanced Cloud management layer, we devise a novel technique to learn the performance characteristics of both Cloud infrastructure and Cloud applications from cumulative performance monitoring data to increase cloud deployment efficiency.
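The window-based idea described above can be sketched as follows: an alert fires only when the monitored metric violates its threshold in every sample of a sliding window, filtering out momentary spikes. The threshold and window length are made-up parameters; the dissertation's distributed-window design is more sophisticated.

```python
from collections import deque

# Minimal sketch of window-based state monitoring: report a violation only
# when the metric exceeds its threshold for an entire window of samples.

class WindowMonitor:
    def __init__(self, threshold, window):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # rolling pass/fail history

    def observe(self, value):
        self.samples.append(value > self.threshold)
        full = len(self.samples) == self.samples.maxlen
        return full and all(self.samples)    # sustained violation?
```

Compared with alerting on every threshold crossing, this trades a small detection delay for far fewer spurious alerts, which is the accuracy/communication tradeoff the abstract refers to.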
585

An empirical approach to automated performance management for elastic n-tier applications in computing clouds

Malkowski, Simon J. 03 April 2012 (has links)
Achieving a high degree of efficiency is non-trivial when managing the performance of large web-facing applications such as e-commerce websites and social networks. While computing clouds have been touted as a good solution for elastic applications, many significant technological challenges still have to be addressed in order to leverage the full potential of this new computing paradigm. In this dissertation I argue that the automation of elastic n-tier application performance management in computing clouds presents novel challenges to classical system performance management methodology that can be successfully addressed through a systematic empirical approach. I present strong evidence in support of my thesis in a framework of three incremental building blocks: Experimental Analysis of Elastic System Scalability and Consolidation, Modeling and Detection of Non-trivial Performance Phenomena in Elastic Systems, and Automated Control and Configuration Planning of Elastic Systems. More concretely, I first provide a proof of concept for the feasibility of large-scale experimental database system performance analyses, and illustrate several complex performance phenomena based on the gathered scalability and consolidation data. Second, I extend these initial results to a proof of concept for automating bottleneck detection based on statistical analysis and an abstract definition of multi-bottlenecks. Third, I build a performance control system that manages elastic n-tier applications efficiently with respect to complex performance phenomena such as multi-bottlenecks. This control system provides a proof of concept for automated online performance management based on empirical data.
586

Performance modeling and optimization solutions for networking systems

Zhao, Jian, 趙建 January 2014 (has links)
This thesis models and resolves practical problems, using mathematical tools, in two representative networking systems: the peer-to-peer (P2P) video streaming system and the cloud computing system. In the first part, we study how to mitigate the following tussle between content service providers and ISPs in P2P video streaming systems: network-agnostic P2P protocol designs generate substantial inter-ISP traffic and increase the traffic relay cost of ISPs; in turn, ISPs start to throttle P2P packets, which significantly deteriorates P2P streaming performance. First, we investigate the problem in a mesh-based P2P live streaming system. We use end-to-end streaming delays as the performance measure, and quantify the amount of inter-ISP traffic by the number of copies of the live streams imported into each ISP. Considering multiple ISPs at different bandwidth levels, we model the generic relationship between the volume of inter-ISP traffic and streaming performance, which provides useful insights on the design of effective locality-aware peer selection protocols and server deployment strategies across multiple ISPs. Next, we study a similar problem in a hybrid P2P-cloud CDN system for VoD streaming. We characterize the relationship between the costly bandwidth consumption from the cloud CDN and the inter-ISP traffic. We apply a loss network model to derive the bandwidth consumption under any given chunk distribution pattern among peer caches and any streaming request dispatching strategy among ISPs, and derive the optimal peer caching and request dispatching strategies which minimize the bandwidth demand from the cloud CDN. Based on the fundamental insights from our analytical results, we design a locality-aware, hybrid P2P-cloud CDN streaming protocol. In the second part, we study the profit maximization and cost minimization problems in Infrastructure-as-a-Service (IaaS) cloud systems.
The first problem is how a geo-distributed cloud system should price its datacenter resources at different locations, such that its overall profit is maximized over long-term operation. We design an efficient online algorithm for dynamic pricing of VM resources across datacenters, together with job scheduling and server provisioning in each datacenter, to maximize the cloud's profit over the long run. Theoretical analysis shows that our algorithm can schedule jobs within their respective deadlines, while achieving a time-averaged overall profit closely approaching the offline maximum, which is computed by assuming perfect information on future job arrivals is freely available. The second problem is how federated clouds should trade their computing resources among each other to reduce cost, by exploiting the diversity of different clouds' workloads and operational costs. We formulate a global cost minimization problem among multiple clouds under the cooperative scenario where each individual cloud's workload and cost information is publicly available. Taking jobs of disparate lengths into consideration, we design a non-preemptive approximation algorithm for leftover job migration and new job scheduling. Given the selfishness of individual clouds, we further design a randomized double auction mechanism to elicit clouds' truthful bidding for buying or selling virtual machines. The auction mechanism is proven to be truthful, and to guarantee the same approximation ratio as the cooperative approximation algorithm achieves. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
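The inter-cloud VM trading idea can be illustrated with the textbook double-auction matching rule: sort buy bids descending and sell asks ascending, then trade while the highest remaining bid still meets the lowest remaining ask. This is a simplified sketch; the midpoint clearing price is an illustrative assumption, and the thesis's randomized, truthfulness-guaranteeing mechanism is more involved.

```python
# Simplified double-auction matching between clouds buying and selling VMs.
# Not the thesis's mechanism: just the classic bid/ask matching skeleton.

def match(buy_bids, sell_asks):
    bids, asks = sorted(buy_bids, reverse=True), sorted(sell_asks)
    trades = []
    for bid, ask in zip(bids, asks):
        if bid < ask:
            break  # no further mutually beneficial trade exists
        trades.append((bid, ask, (bid + ask) / 2))  # clear at the midpoint
    return trades
```

Matching greedily in this order maximizes the number of mutually beneficial trades, which is why it is the usual starting point before layering on truthfulness guarantees.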
587

A study of transient bottlenecks: understanding and reducing latency long-tail problem in n-tier web applications

Wang, Qingyang 21 September 2015 (has links)
An essential requirement of cloud computing or data centers is to simultaneously achieve good performance and high utilization for cost efficiency. High utilization through virtualization and hardware resource sharing is critical for both cloud providers and cloud consumers to reduce management and infrastructure costs (e.g., energy cost, hardware cost) and to increase cost-efficiency. Unfortunately, achieving good performance (e.g., low latency) for web applications at high resource utilization remains an elusive goal. Both practitioners and researchers have experienced the latency long-tail problem in clouds during periods of even moderate utilization (e.g., 50%). In this dissertation, we show that transient bottlenecks are an important contributing factor to the latency long-tail problem. Transient bottlenecks are bottlenecks with a short lifespan on the order of tens of milliseconds. Though short-lived, transient bottlenecks can cause a long-tail response time distribution that spans a spectrum of 2 to 3 orders of magnitude, from tens of milliseconds to tens of seconds, due to the queuing effect propagation and amplification caused by complex inter-tier resource dependencies in the system. Transient bottlenecks can arise from a wide range of factors at different system layers. For example, we have identified transient bottlenecks caused by CPU dynamic voltage and frequency scaling (DVFS) control at the CPU architecture layer, Java garbage collection (GC) at the system software layer, and virtual machine (VM) consolidation at the application layer. These factors interact with naturally bursty workloads from clients, often leading to transient bottlenecks that cause overall performance degradation even if all the system resources are far from being saturated (e.g., less than 50%).
By combining fine-grained monitoring tools and a sophisticated analytical method to generate and analyze monitoring data, we are able to detect and study transient bottlenecks in a systematic way.
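The fine-grained monitoring idea can be sketched simply: with millisecond-scale utilization samples, transient bottlenecks show up as short runs of full saturation that coarse-grained per-second averages would hide. The sampling interval, saturation cutoff, and sample data below are made-up illustrations.

```python
# Illustrative sketch: find short saturated runs in fine-grained samples.
# Returns (start_ms, duration_ms) for each maximal run at full utilization.

def transient_bottlenecks(samples, interval_ms=50, saturated=0.99):
    runs, start = [], None
    for i, u in enumerate(samples):
        if u >= saturated and start is None:
            start = i                                   # run begins
        elif u < saturated and start is not None:
            runs.append((start * interval_ms, (i - start) * interval_ms))
            start = None                                # run ends
    if start is not None:                               # run reaches the end
        runs.append((start * interval_ms, (len(samples) - start) * interval_ms))
    return runs
```

Averaged over one second, the example below looks like roughly 60% utilization; at 50 ms granularity, two distinct saturation bursts are visible.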
588

Factors influencing cloud computing readiness in small and medium enterprises.

Sibanyoni, Jabu Lucky. January 2015 (has links)
M. Tech. Business Information Systems / Business innovation driven by technology is widely seen as a key driver in transforming enterprises, and in particular in the development of Small and Medium Enterprises (SMEs). Any organization eager to improve competitiveness, remain sustainable, and operate cost-effectively will require new and better technologies with greater capabilities. However, not all organizations are ready to adopt these innovative technologies, largely because new and rapidly changing technologies come with new and unique challenges. The cloud computing paradigm, which has emerged in recent years, is rapidly gaining momentum as an alternative to the traditional approach of providing or consuming Information Technology (IT) services and resources. It is a significant trend with the potential to increase agility and lower IT costs. Although embracing this paradigm promises several benefits, effective adoption and implementation of cloud computing requires an organization to understand several factors. The current literature has shown that there are inadequate guidelines to help SMEs in developing economies determine a company's degree of readiness to adopt technological innovations such as cloud computing to transform the operations of the organisation. The purpose of this study is to investigate factors influencing cloud computing readiness in South African small and medium enterprises.
589

Critical analysis of the key drivers for adopting cloud computing : a case study of an information technology user organisation in Durban

Modiba, Maimela Daniel. January 2013 (has links)
M. Tech. Business Administration / The aim of this research is to explore the factors that drive the adoption of cloud computing within a South African information technology user organisation. It also identifies the benefits and risks associated with adopting cloud computing within an information and communication technology (ICT) user organisation, from a South African company perspective.
590

Coding-Based System Primitives for Airborne Cloud Computing

Lin, Chit-Kwan January 2011 (has links)
The recent proliferation of sensors in inhospitable environments such as disaster or battle zones has not been matched by in situ data processing capabilities due to a lack of computing infrastructure in the field. We envision a solution based on small, low-altitude unmanned aerial vehicles (UAVs) that can deploy elastically-scalable computing infrastructure anywhere, at any time. This airborne compute cloud—essentially, micro-data centers hosted on UAVs—would communicate with terrestrial assets over a bandwidth-constrained wireless network with variable, unpredictable link qualities. Achieving high performance over this ground-to-air mobile radio channel thus requires making full and efficient use of every single transmission opportunity. To this end, this dissertation presents two system primitives that improve throughput and reduce network overhead by using recent distributed coding methods to exploit natural properties of the airborne environment (i.e., antenna beam diversity and anomaly sparsity). We first built and deployed a UAV wireless networking testbed and used it to characterize the ground-to-UAV wireless channel. Our flight experiments revealed that antenna beam diversity from using multiple SISO radios boosts reception range and aggregate throughput. This observation led us to develop our first primitive: ground-to-UAV bulk data transport. We designed and implemented FlowCode, a reliable link layer for uplink data transport that uses network coding to harness antenna beam diversity gains. Via flight experiments, we show that FlowCode can boost reception range and TCP throughput as much as 4.5-fold. Our second primitive permits low-overhead cloud status monitoring. We designed CloudSense, a network switch that compresses cloud status streams in-network via compressive sensing.
CloudSense is particularly useful for anomaly detection tasks requiring global relative comparisons (e.g., MapReduce straggler detection) and can achieve up to 16.3-fold compression as well as early detection of the worst anomalies. Our efforts have also shed light on the close relationship between network coding and compressive sensing. Thus, we offer FlowCode and CloudSense not only as first steps toward the airborne compute cloud, but also as exemplars of two classes of applications—approximation intolerant and tolerant—to which network coding and compressive sensing should be judiciously and selectively applied. / Engineering and Applied Sciences
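The network-coding idea underlying FlowCode can be illustrated with toy random linear network coding over GF(2): the sender transmits random XOR combinations of packets, and the receiver can recover the originals from any set of combinations whose coefficient vectors are linearly independent, which is why individual losses matter little. This sketch is a generic illustration, not FlowCode's actual design; packet contents are made up.

```python
import secrets

# Toy random linear network coding over GF(2). Coefficient vectors are
# bitmasks; decoding is Gaussian elimination over GF(2).

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    # One coded packet: XOR of a random nonzero subset, plus its bitmask.
    coeffs = 0
    while coeffs == 0:
        coeffs = secrets.randbits(len(packets))
    data = bytes(len(packets[0]))
    for i, p in enumerate(packets):
        if coeffs >> i & 1:
            data = xor(data, p)
    return coeffs, data

def decode(coded, n):
    pivots = {}  # lowest set bit -> (coeffs, data)
    for coeffs, data in coded:
        while coeffs:                                  # forward elimination
            low = (coeffs & -coeffs).bit_length() - 1
            if low not in pivots:
                pivots[low] = (coeffs, data)
                break
            pc, pd = pivots[low]
            coeffs, data = coeffs ^ pc, xor(data, pd)
    if len(pivots) < n:
        return None  # not enough independent combinations yet
    for bit in sorted(pivots, reverse=True):           # back-substitution
        c, d = pivots[bit]
        for other in list(pivots):
            oc, od = pivots[other]
            if other != bit and oc >> bit & 1:
                pivots[other] = (oc ^ c, xor(od, d))
    return [pivots[i][1] for i in range(n)]
```

Any n independent coded packets suffice, so the sender can keep emitting fresh combinations until the receiver acknowledges success, without tracking which specific packets were lost.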
