
Energy and performance aware resource management in heterogeneous cloud datacenters

Zakarya, Muhammad January 2017 (has links)
In cloud computing, datacenters are the principal consumers of electricity. In 2014, cloud datacenters reportedly accounted for some 70 billion kWh, the equivalent of 1.8% of the US' total energy consumption. With growth in on-line services, but increased computational power per unit of energy, consumption is projected to account for 73 billion kWh by 2020. Datacenters comprise large numbers of servers, as well as storage, that cloud customers can use in the amounts they require for as long as they are willing to pay. In infrastructure clouds, customers request the launch of Virtual Machines (VMs) which will consume server and storage resources. The provider decides which server is selected, and the customer decides how long to run the VM for. The unpredictability of customers of infrastructure clouds can result in datacenters having a number of servers either idle or running a minimal VM load at various times, and wasting energy as a consequence. Improvements to management techniques such as VM allocation and resource consolidation can help to improve energy and performance efficiency. However, for a particular VM the energy consumption and runtime may differ between servers due to: (i) the number of VMs the servers run; and (ii) the performance of the servers. Therefore, with respect to VM allocation, it might be more energy and performance efficient to place VMs on servers that consume less energy and can meet the VM performance goals. Moreover, consolidation brings two related problems: (i) consolidation involves migrating VMs across servers, which adds to energy consumption, and will only be more energy efficient if this cost can be recovered; and (ii) due to resource heterogeneity the performance of VMs varies with the underlying hardware, and with it, runtimes and energy usage, and hence costs.
In respect to (i), if the VM terminates during or just after the migration has finished, the migration effort is definitely wasted, which implies a cost recovery time objective after which further energy can be saved as the VM subsequently runs more efficiently. In respect to (ii), if the VM is migrated to a server with lower performance, increased runtime can decrease datacenter throughput and energy efficiency, and increase agreed (pay per use) customer cost. We explore how consolidation of VMs can help to decrease datacenter energy consumption whilst ensuring that migration costs are recoverable in the vast majority of cases, and also ensuring that workload performance is not negatively affected. Several algorithms for energy-performance efficient VM allocation and consolidation are proposed, implemented through extensions and modifications to the popular Cloud simulation environment, CloudSim, and evaluated in respect to a large dataset of workload information from a major cloud provider. Principal findings from these simulations are: (i) efficient VM allocation can be at least 1.72% (±0.02 error) more energy-efficient than consolidation; (ii) it is 3.52% (±0.05 error) more energy-efficient to migrate relatively long-running VMs; and (iii) for heterogeneous workloads and clouds, different scheduling and migration techniques demonstrate a diversity in energy efficiency and performance (hence cost) trade-off. An energy-performance efficient migration approach can be up to 3.66% (±0.05 error) more energy efficient, and 1.87% (±0.025 error) more performance efficient, than a no-migration strategy. This suggests a saving of approximately $0.72m annually, which compares favourably to a maximum projected usage cost of Google's cluster (12,583 hosts) of $1.58m/year.
Based on these results, cloud providers could both reduce their energy usage, reducing costs and either pass savings to customers, invest in more infrastructure, or increase profits; more broadly, such reductions in energy usage could reduce the impact of global warming.
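The migration cost-recovery condition described above can be expressed as a simple energy-balance check: migrating only pays off if the VM's expected remaining runtime exceeds the time needed to recoup the one-off migration energy. The function below is an illustrative sketch under a linear power model; its name and parameters are assumptions, not the thesis's implementation.

```python
def migration_saves_energy(power_src_w, power_dst_w,
                           migration_energy_j, expected_remaining_s):
    """Return True if moving a VM to a more efficient host recovers the
    one-off migration energy within the VM's expected remaining runtime.
    Illustrative sketch: assumes per-VM power draw is constant on each host.
    """
    saving_rate_w = power_src_w - power_dst_w  # watts saved after the move
    if saving_rate_w <= 0:
        return False  # destination host is no more efficient
    recovery_time_s = migration_energy_j / saving_rate_w  # J / W = s
    return expected_remaining_s > recovery_time_s
```

For example, a VM expected to run another hour that saves 20 W on the destination host recovers a 36 kJ migration cost in 1800 s, so migrating it is worthwhile; a VM about to terminate would fail the check, matching the "wasted effort" case above.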

The domain name system advisor : a model-based quality assurance framework

Radwan, Marwan Mohammed Mahmoud January 2017 (has links)
The Domain Name System (DNS) has a direct and strong impact on the performance of nearly all aspects of the Internet. DNS relies on a delegation-based architecture, where resolution of names to their IP addresses requires resolving the names of the servers responsible for those names. The recursive graphs of the inter-dependencies that exist between servers associated with each zone are called Dependency Graphs. We constructed a DNS Dependency Model as a unified representation of these Dependency Graphs. We utilized a set of Structural Metrics defined over this model as indicators of external quality attributes of the DNS. We applied machine learning to construct Prediction Models of the perceived quality attributes of the DNS from the structural metrics of the model, and evaluated the accuracy of these models. Operational Bad Smells are configuration and deployment decisions, made by zone administrators, that are not totally errant or technically incorrect and do not currently prevent the system from delivering its designated functionality. Instead, they indicate weaknesses that may impose additional overhead on DNS queries, increase system vulnerability to threats, or increase the risk of failures in the future. We proposed the ISDR (Identification, Specification, Detection and Refactoring) Method that enables DNS administrators to identify bad smells at a high level of abstraction using a consistent taxonomy and reusable vocabulary. We developed techniques for systematic detection and recommendation of reaction mechanisms in the form of graph-based refactoring rules. The ISDR Method, along with the DNS Quality Prediction Models, is used to build the DNS Quality Assurance Framework and the DNS Advisor Tool. Assessing the perceived quality attributes of the DNS at an early stage enables us to avoid the implications of defective and low-quality designs.
We identify configuration changes that improve the availability, security, stability and resiliency postures of the DNS.
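One way to picture an operational bad smell detector over a dependency representation is the check below: a zone whose nameservers all sit under a single parent zone shares a single point of failure. This is a deliberately simplified, hypothetical smell rule for illustration; the thesis's taxonomy and graph model are richer.

```python
def single_zone_dependency(ns_records):
    """Flag zones whose nameservers all lie in the same parent zone,
    a single-point-of-failure smell. `ns_records` maps a zone name to
    its nameserver hostnames. Illustrative simplification only: it
    inspects hostname suffixes rather than a full dependency graph.
    """
    smells = {}
    for zone, servers in ns_records.items():
        # parent zone of each nameserver, e.g. ns1.example.net -> example.net
        parents = {s.split('.', 1)[1] for s in servers if '.' in s}
        smells[zone] = len(parents) == 1
    return smells
```

A zone served only by `ns1.example.net` and `ns2.example.net` would be flagged, whereas one delegating to nameservers in two independent parent zones would not.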

On characterisation and decomposition of Internet traffic dynamics

Marnerides, Angelos January 2011 (has links)
The comprehension of backbone and edge network traffic dynamics provides the foundational element for crucial traffic engineering tasks employed by Internet Service Providers (ISPs). These tasks include anomaly diagnosis, network capacity planning, traffic classification and Quality of Service (QoS) provisioning. Due to the rapid development of emerging Internet technologies, the expansion of networked devices and evolving user behaviour, the task of statistically interpreting traffic characteristics poses great challenges. There have been numerous propositions mapping network traffic scenarios as statistical, information-theoretic and signal processing case studies. Nevertheless, by virtue of the different traffic volume characteristics exposed independently by each network, and the different intra- and inter-networking interactions conducted in each case, producing a generic modelling solution is a very difficult task. In parallel, the macroscopic, composite volume analysis performed by various methods in most cases offers low visibility with respect to protocol-specific behaviour. This thesis critically re-assesses well-known traffic profiling and modelling schemes as used in the past and introduces new approaches for capturing the fundamental properties of traffic characterisation. Through network volume aggregate analysis, as well as via the employment of a traffic decomposition approach where transport layer protocols are independently modelled, highly fluctuating and dynamic characteristics appear in backbone and edge network links. These characteristics are statistically interpreted and justified with one of the main contributions of this work, namely the employment of Time-Frequency (TF) representations and higher order spectra for validating de facto statistical assumptions on a microscopic, protocol-specific basis.
Validation of Gaussianity, linearity and stationarity constitutes a core pre-requisite within the modelling process on IP networks which has not been thoroughly investigated by the majority of studies in current and past literature. The thesis explicitly addresses this issue and indicates its importance. Furthermore, through the direct exploitation of higher order spectral capabilities, and particularly the bispectrum, traffic engineering tasks may be beneficially improved. Beyond the advantageous diagnostics offered in several traffic engineering applications such as anomaly diagnosis, the practical capabilities offered by the bispectrum are exhibited within a particular traffic peak analysis scenario, which provides a basic element within the traffic engineering process of network capacity planning. The validation of the stationarity hypothesis has identified the existence of highly non-stationary traffic characteristics on a volume aggregate and protocol-oriented basis. By virtue of this outcome, this work contributes to the applicability of energy TF distributions for the explicit traffic characterisation task of application-based traffic classification. The suitability of energy TF distributions for profiling non-stationary signals allows the employment of a novel signal-oriented classification scheme. In particular, the thesis illustrates the classification of application layer protocols based on the volume utilization initiated on the transport layer by TCP and UDP.
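As a much-simplified intuition for the stationarity question above, one can compare summary statistics of a traffic volume series across fixed windows: large spread between windows hints at non-stationarity. This informal check is an illustrative stand-in, not the thesis's TF-distribution or higher-order-spectra method.

```python
import statistics

def windowed_means(series, window):
    """Split a traffic volume series into consecutive fixed-size windows
    and return the per-window mean. Widely differing window means are an
    informal indicator of non-stationarity (illustrative check only;
    formal tests and TF representations are needed in practice).
    """
    return [statistics.fmean(series[i:i + window])
            for i in range(0, len(series) - window + 1, window)]
```

For a byte-count series that jumps from a low to a high regime, the window means diverge sharply, whereas a stationary series yields roughly constant means.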

Supporting network visualisation, control and management in distributed virtual worlds

Song, Terence Min Khian January 2004 (has links)
As the demand for greater observability and controllability increases, an intuitive user interface and the ability to visualise and interact with complex relational structures will be essential for the successful management of next generation networks and services. With object-oriented architectures, interfaces, and information models becoming the fundamental approach to advancing information networks, a three-dimensional virtual world, with its higher level of semantic interaction, is a natural choice to provide a corresponding paradigm shift in perception, interaction, and collaborative capabilities.

Supporting members of online communities through the use of visualisations

Mohamed, Rehman January 2007 (has links)
No description available.

Design and performance analysis of fail-signal based consensus protocols for Byzantine faults

Tariq, Qurat-ul-Ain Inayat January 2007 (has links)
Services offered by computing systems continue to play a crucial role in our everyday lives. This thesis examines and solves a challenging problem in making these services dependable using means that can be assured not to compromise service responsiveness, particularly when no failure occurs. Causes of undependability are faults, and faults of all known origins, including malicious attacks, are collectively referred to as Byzantine faults. Service or state machine replication is the only known technique for tolerating Byzantine faults. It becomes more effective when replicas are spaced out over a wide area network (WAN) such as the Internet - adding tolerance to localised disasters. It requires that replicas process the randomly arriving user requests in an identical order. Achieving this requirement together with deterministic termination guarantees is impossible in a fail-prone environment. This impossibility prevails because of the inability to accurately estimate a bound on inter-replica communication delays over a WAN. Canonical protocols in the literature are designed to delay termination until the WAN preserves convergence between actual delays and the estimate used. They thus risk performance degradation of the replicated service. We eliminate this risk by using Fail-Signal processes to circumvent the impossibility. A fail-signal (FS) process is made up of redundant, Byzantine-prone processes that continually check each other's performance. Consequently, it fails only by crashing and also signals its imminent failure. Using FS process constructs, a family of three order protocols has been developed: Protocol-0, Protocol-I and Protocol-II. Each protocol caters for a particular set of assumptions made in the FS process construction and the subsequent FS process behaviour. Protocol-I is extensively compared with a canonical protocol of Castro and Liskov which is widely acknowledged for its desirable performance.
The study comprehensively establishes the costs and benefits of our approach in a variety of both real and emulated network settings, by varying the number of replicas, system load and cryptographic techniques. The study shows that Protocol-I has superior performance when no failures occur.
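The fail-signal construct above can be caricatured in a few lines: redundant replicas run the same deterministic computation and cross-check results, so the construct only ever fails by a signalled crash. The class below is a toy single-process sketch with hypothetical names; real FS processes are independent, communicating processes.

```python
class FailSignalProcess:
    """Toy sketch of a fail-signal (FS) process: two redundant replicas
    execute the same deterministic function and cross-check outputs.
    On any disagreement the construct signals failure and stops, so it
    fails only by (signalled) crashing. Illustrative names and single-
    process form; real FS constructs use independent processes.
    """

    def __init__(self, fn):
        self.replicas = [fn, fn]  # same deterministic computation, twice
        self.failed = False

    def execute(self, request, faulty_output=None):
        if self.failed:
            return None                    # crashed: no further service
        outputs = [f(request) for f in self.replicas]
        if faulty_output is not None:      # inject a Byzantine fault
            outputs[1] = faulty_output
        if outputs[0] != outputs[1]:
            self.failed = True             # the fail signal
            return None
        return outputs[0]
```

In normal operation the replicas agree and the request is answered; an injected Byzantine output is caught by the cross-check and converted into a clean, signalled crash, which is what lets the order protocols assume crash-only failures.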

Optimisation analytics for bandwidth resource management in converged IP networks

Sheykhkanloo, Naghmeh Moradpoor January 2012 (has links)
The Internet Protocol (IP) based converged Next Generation Network (NGN) [130] has emerged to provide an efficient, cost-aware and reliable network infrastructure in support of emerging sophisticated and bandwidth-hungry applications and services [129]. According to the International Telecommunication Union - Telecommunication Standardisation Sector (ITU-T) [132], the NGN brings significant advantages to telecom companies as well as Subscriber Stations (SSs), such as support for End-to-End (ETE) Quality of Service (QoS), mobility features, converged services and applications, and a converged infrastructure between fixed and mobile networks. The ultimate goal of the NGN is to provide Internet applications and services wherever, whenever and in whatever format, with reasonable costs for both SSs and telecom companies as well as satisfactory coverage, capacity, speed and maintenance. Optical technology, as the best candidate for next generation fixed broadband access networks, is tied to a fixed infrastructure, but wherever it reaches it provides huge bandwidth at relatively lower cost for both SSs and telecom companies. On the other hand, wireless technology supports flexibility and mobility and is not tied to a fixed infrastructure, but it is highly restricted in capacity, transmission power and transmission range. Taking into consideration the converged infrastructure of the NGN [132], future broadband applications and services must leverage both fixed, particularly optical, and wireless technologies, which motivates the development of integrated fixed (optical) and wireless access networks.
However, in order to successfully integrate these two technologies, some technical concerns in terms of architectural aspects, physical layer issues and Media Access Control (MAC) related topics need to be addressed effectively and efficiently, so as to provide a smooth End-to-End (ETE) integrated structure and optimum or near-optimum utilisation of network resources. This thesis takes up the challenge of addressing these issues by providing a detailed converged framework with support for a distributed, real-time, dynamic, scalable and intelligent wavelength and bandwidth allocation algorithm for the converged scenario of the NGN. Conventional work on optical and wireless technology, where a traditional single-channel optical network is employed as a backhaul solution for the wireless counterpart, has some shortcomings in providing the level of capacity, scalability and intelligence required in the current NGN environment [131]. The integrated scenario between a multi-channel optical network and its wireless counterpart has gained popularity as the foundation for providing higher bandwidth and capacity, owing to the employment of multiple wavelengths over the same fibre infrastructure with great security and protocol transparency [24]. On the other hand, optimisation techniques [84] have attracted huge attention, particularly in the telecommunications field, for their computation speed, real-time support, low error levels, scalability, and modest CPU overhead and memory usage. Once appropriately coded, they can provide the selection of optimum or near-optimum elements from some set of available alternatives with relatively low error levels. Hence, the overall objective of this thesis is the design, development and evaluation of an intelligent and dynamic resource (wavelength/bandwidth) allocation algorithm for the integration of a multi-channel optical network with wireless technology, with the support of optimisation techniques.
In the pursuit of fulfilling the addressed objectives of this thesis, a Genetic Algorithm (GA) optimisation technique emerged as an efficient solution to the identified resource allocation problems.
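A minimal GA for a resource allocation problem of this flavour can be sketched as follows: a chromosome assigns one wavelength to each request, and fitness counts conflicting request pairs forced onto the same wavelength. The encoding, operators and parameters below are illustrative assumptions, not the thesis's algorithm.

```python
import random

def ga_assign(n_requests, n_wavelengths, conflicts,
              pop=30, generations=200, seed=1):
    """Minimal genetic algorithm sketch for wavelength assignment.
    `conflicts` lists request pairs that must not share a wavelength;
    cost counts violated pairs. Illustrative parameters and encoding.
    """
    rng = random.Random(seed)

    def cost(chrom):
        return sum(chrom[a] == chrom[b] for a, b in conflicts)

    # initial random population: one wavelength index per request
    population = [[rng.randrange(n_wavelengths) for _ in range(n_requests)]
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=cost)          # elitist selection
        survivors = population[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_requests)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                   # point mutation
                child[rng.randrange(n_requests)] = rng.randrange(n_wavelengths)
            children.append(child)
        population = survivors + children
    best = min(population, key=cost)
    return best, cost(best)
```

On a small instance such as four requests forming a conflict chain over two wavelengths, the GA quickly converges to a conflict-free assignment, which is the kind of near-optimum, low-error selection the paragraph above attributes to optimisation techniques.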

A novel architecture for secure database processing in cloud computing

Chen, Hung-Kwan January 2016 (has links)
Security, particularly data privacy, is one of the biggest barriers to the adoption of Database-as-a-Service (DBaaS) in Cloud Computing. Recent security breaches demonstrate that a more powerful protection mechanism is needed to protect data confidentiality from any honest-but-curious administrator. Typical prior efforts to address this security problem are either prohibitively slow or highly restrictive in operation. In this thesis, a novel cloud system architecture, CypherDB, which makes use of a secure processor, is proposed to protect the confidentiality of outsourced database processing. To achieve this, a framework is developed to use these secure processors in the cloud for secure database processing. This framework allows distributed and parallel processing of the encrypted data and exhibits virtualization features in Cloud Computing. The CypherDB architecture also relies on two major components to protect the privacy of an outsourced database against any honest-but-curious administrator while maintaining high performance. Firstly, a novel database encryption scheme is developed to protect the outsourced database, which can be executed under a CypherDB secure processor with high performance. Our proposed scheme makes use of custom instructions to hide the encryption latency from the program execution. This scheme is extensively validated through an integration with SQLite, a practical database application program. Secondly, a novel secure processor architecture is also developed to provide architectural support for our proposed database encryption scheme and an efficient protection mechanism to secure all intermediate data generated on-the-fly during query execution. The efficiency, robustness and cost of our novel processor architecture are validated and evaluated through extensive simulations and implementation on an FPGA platform.
A fully-functional Field-Programmable Gate Array (FPGA) implementation of our CypherDB secure processor and simulation studies demonstrate that our proposed architecture is cost-effective and of high performance. Our experiment of running the TPC-H database benchmark on SQLite demonstrates 10 to 14 percent performance overhead on average. The security components in CypherDB consume about 21K Logic Elements and 54 Block RAMs on the FPGA. The modification of SQLite only consists of 208 lines of code (LOC).

Diagnosing, predicting and managing application performance in virtualised multi-tenant clouds

Chen, Xi January 2016 (has links)
As the computing industry enters the cloud era, multicore architectures and virtualisation technologies are replacing traditional IT infrastructures for several reasons, including reduced infrastructure costs, lower energy consumption and ease of management. Cloud-based software systems are expected to deliver reliable performance under dynamic workloads while efficiently allocating resources. However, with the increasing diversity and sophistication of the environment, managing the performance of applications in such environments becomes difficult. The primary goal of this thesis is to gain insight into performance issues of applications running in clouds. This is achieved by a number of innovations with respect to the monitoring, modelling and managing of virtualised computing systems: (i) Monitoring - we develop a monitoring and resource control platform that, unlike early cloud benchmarking systems, enables service level objectives (SLOs) to be expressed graphically as Performance Trees; these draw on both live and historical data. (ii) Modelling - we develop stochastic models based on Queueing Networks and Markov chains for predicting the performance of applications in multicore virtualised computing systems. The key feature of our techniques is their ability to characterise performance bottlenecks effectively by modelling both the hypervisor and the hardware. (iii) Managing - through the integration of our benchmarking and modelling techniques with a novel interference-aware prediction model, adaptive on-line reconfiguration and resource control in virtualised environments become lightweight target-specific operations that do not require sophisticated pre-training or micro-benchmarking. The validation results show that our models are able to predict the expected scalability behaviour of CPU/network intensive applications running on virtualised multicore environments with relative errors of between 8 and 26%.
We also show that our performance interference prediction model can capture a broad range of workloads efficiently, achieving an average error of 9% across different applications and setups. We implement this model in a private cloud deployment in our department, and we evaluate it using both synthetic benchmarks and real user applications. We also explore the applicability of our model to both hypervisor reconfiguration and resource scheduling. The hypervisor reconfiguration can improve network throughput by up to 30% while the interference-aware scheduler improves application performance by up to 10% compared to the default CloudStack scheduler.
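The simplest member of the queueing-model family invoked above is the M/M/1 queue, whose mean response time illustrates how such models predict performance degradation as utilisation grows. This is a textbook formula offered for intuition only; the thesis's models additionally capture the hypervisor, multicore contention and interference.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: T = 1 / (mu - lambda).
    Textbook illustration of how queueing models predict response
    time blowing up as utilisation (lambda/mu) approaches 1.
    """
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable (utilisation >= 1)")
    return 1.0 / (service_rate - arrival_rate)
```

At 50% utilisation (50 req/s against a 100 req/s server) the predicted response time is 20 ms, but at 90% utilisation it grows to 100 ms: the non-linear degradation that makes model-based capacity prediction valuable.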

Enterprise adoption oriented cloud computing performance optimization

Noureddine, Moustafa January 2014 (has links)
Cloud computing in the Enterprise has emerged as a new paradigm that brings both business opportunities and software engineering challenges. In Cloud computing, business participants such as service providers, enterprise solutions, and marketplace applications are required to adopt a Cloud architecture engineered for security and performance. One of the major hurdles to formal adoption of Cloud solutions in the enterprise is performance. Enterprise applications (e.g., SAP, SharePoint, Yammer, Lync Server, and Exchange Server) require a mechanism to predict and manage performance expectations in a secure way. This research addresses two areas of performance challenges: capacity planning, to ensure resources are provisioned in a way that meets requirements while minimizing total cost of ownership; and optimizations to authentication protocols that enable enterprise applications to authenticate among each other and meet the performance requirements for enterprise servers, including third party marketplace applications. For the first set of optimizations, the theory was formulated using a stochastic process where multiple experiments were monitored and data collected over time. The results were then validated using a real-life enterprise product called Lync Server. The second set of optimizations was achieved by introducing provisioning steps to pre-establish trust among enterprise application servers, the associated authorisation server, and the clients interested in access to protected resources. In this architecture, trust is provisioned and synchronized as a pre-requisite step to authentication among all communicating entities in the authentication protocol, and referral tokens are used to establish trust federation for marketplace applications across organizations. Various case studies and validation on commercially available products were used throughout the research to illustrate the concepts.
Such performance optimizations have proved to help enterprise organizations meet their scalability requirements. Some of the work produced has been adopted by Microsoft and made available as a downloadable tool used by customers around the globe to assist with Cloud adoption.
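The pre-provisioned trust idea above can be sketched with a keyed MAC: because the authorisation server and an application server share a key in advance, a marketplace application can present a signed referral token without an online handshake on the critical path. The token layout and field names below are hypothetical illustrations, not the protocol described in the thesis.

```python
import hashlib
import hmac

def issue_referral_token(shared_key, app_id, org):
    """Build a toy referral token "<app_id>|<org>|<hmac>" signed with a
    key pre-provisioned between the authorisation server and the
    application server. Illustrative layout, not the thesis protocol.
    """
    payload = f"{app_id}|{org}".encode()
    tag = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "|" + tag

def verify_referral_token(shared_key, token):
    """Verify a token offline using the same pre-provisioned key."""
    app_id, org, tag = token.rsplit("|", 2)
    payload = f"{app_id}|{org}".encode()
    expected = hmac.new(shared_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

Verification needs no round trip to the authorisation server, which is the performance benefit of provisioning trust as a pre-requisite step rather than negotiating it at authentication time.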
