About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

GridRM : a resource monitoring framework

Smith, Garry Mark January 2004 (has links)
No description available.

Design and performance analysis of fail-signal based consensus protocols for Byzantine faults

Tariq, Qurat-ul-Ain Inayat January 2007 (has links)
Services offered by computing systems continue to play a crucial role in our everyday lives. This thesis examines and solves a challenging problem in making these services dependable using means that can be assured not to compromise service responsiveness, particularly when no failure occurs. Causes of undependability are faults, and faults of all known origins, including malicious attacks, are collectively referred to as Byzantine faults. Service or state machine replication is the only known technique for tolerating Byzantine faults. It becomes more effective when replicas are spaced out over a wide area network (WAN) such as the Internet, adding tolerance to localised disasters. It requires that replicas process the randomly arriving user requests in an identical order. Achieving this requirement together with deterministic termination guarantees is impossible in a fail-prone environment. This impossibility prevails because of the inability to accurately estimate a bound on inter-replica communication delays over a WAN. Canonical protocols in the literature are designed to delay termination until the WAN preserves convergence between actual delays and the estimate used. They thus risk performance degradation of the replicated service. We eliminate this risk by using Fail-Signal processes to circumvent the impossibility. A fail-signal (FS) process is made up of redundant, Byzantine-prone processes that continually check each other's performance. Consequently, it fails only by crashing and also signals its imminent failure. Using FS process constructs, a family of three order protocols has been developed: Protocol-0, Protocol-I and Protocol-II. Each protocol caters for a particular set of assumptions made in the FS process construction and the subsequent FS process behaviour. Protocol-I is extensively compared with a canonical protocol of Castro and Liskov which is widely acknowledged for its desirable performance.
The study comprehensively establishes the cost and benefits of our approach in a variety of both real and emulated network settings, by varying the number of replicas, system load and cryptographic techniques. The study shows that Protocol-I has superior performance when no failures occur.
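The fail-signal idea described above can be illustrated with a minimal sketch. This is a hypothetical toy model, not the thesis's protocol: two redundant, possibly Byzantine replicas cross-check each other's output, so the composite process fails only by crashing and explicitly signals that failure. The class and replica names are illustrative assumptions.

```python
# Toy sketch of a fail-signal (FS) process: two redundant, possibly
# Byzantine replicas check each other. On disagreement the FS process
# signals failure and crashes; otherwise it behaves like a correct process.

class FailSignalled(Exception):
    """Raised when the FS process detects internal disagreement."""

class FSProcess:
    def __init__(self, replica_a, replica_b):
        self.replica_a = replica_a
        self.replica_b = replica_b
        self.failed = False

    def process(self, request):
        if self.failed:
            raise FailSignalled("process has crashed")
        out_a = self.replica_a(request)
        out_b = self.replica_b(request)
        if out_a != out_b:           # one replica misbehaved
            self.failed = True       # fail only by crashing ...
            raise FailSignalled("disagreement detected")  # ... and signal it
        return out_a

# Illustrative replicas: one honest, one that misbehaves on large inputs.
honest = lambda r: r * 2
byzantine = lambda r: r * 2 if r < 10 else -1

fs = FSProcess(honest, byzantine)
print(fs.process(3))    # replicas agree -> 6
try:
    fs.process(10)      # replicas disagree -> failure is signalled
except FailSignalled as e:
    print("fail-signal:", e)
```

Because the composite can only crash (and announces it), an order protocol built on top need not reason about arbitrary Byzantine behaviour of the composite, which is what lets the thesis's protocols sidestep the termination impossibility.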

Supporting network visualisation, control and management in distributed virtual worlds

Song, Terence Min Khian January 2004 (has links)
As the demand for greater observability and controllability increases, an intuitive user interface and the ability to visualise and interact with complex relational structures will be essential for the successful management of next-generation networks and services. With object-oriented architectures, interfaces, and information models becoming the fundamental approach to advance information networks, a three-dimensional virtual world, with its higher level of semantic interaction, is a natural choice to provide a corresponding paradigm shift in perception, interaction, and collaborative capabilities.

Supporting members of online communities through the use of visualisations

Mohamed, Rehman January 2007 (has links)
No description available.

Reducing uncertainty in environmental assessments of internet services

Schien, Daniel January 2015 (has links)
The continuing growth of internet services such as streaming videos, browsing websites or generally exchanging data has drawn the attention of academic researchers, industry, and the general public towards their environmental impact. Past assessments of this impact come to differing results due to the complexity of information and communication technology systems, including networks, data centres and user devices. Assuming a life-cycle perspective, this thesis reduces some of this uncertainty and thus works towards more robust assessments and ultimately decision-making. The first part of this thesis consists of modelling the energy consumption of routers and fibre-optical equipment that comprise the networks. As a result, new estimates of the energy intensity of networks are made that can be used to derive the energy consumption of data transfer through the network. In the second part, the energy consumption by data centres and user devices is included, which combined give a comprehensive assessment of the system end-to-end. One chapter is dedicated to the detailed analysis of the varying environmental footprint between different user devices and types of media. A separate chapter then develops and showcases a more integrated assessment for a complete digital service over one year and demonstrates several new approaches to reducing uncertainty around user device and access network energy consumption. The methods and models presented in this thesis are applicable to a wide range of services and contribute to more robust estimates of the energy consumption. The aim is to enable sustainability practitioners to carry out environmental assessments of digital services.
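The bottom-up structure of such an assessment, energy intensity of the network and data centre per unit of data plus device power over time, can be sketched as follows. All numbers below are assumed placeholders for illustration only, not estimates from the thesis.

```python
# Illustrative end-to-end energy model for a digital service:
# network + data centre energy scale with data volume (kWh/GB intensities),
# user-device energy scales with usage time (W over hours).
# All intensity values here are assumptions, not results from the thesis.

def service_energy_kwh(data_gb, network_kwh_per_gb, dc_kwh_per_gb,
                       device_w, hours):
    """End-to-end energy: network transfer + data centre + user device."""
    network = data_gb * network_kwh_per_gb
    data_centre = data_gb * dc_kwh_per_gb
    device = device_w * hours / 1000.0   # W x h -> kWh
    return network + data_centre + device

# Example: one hour of video at 3 GB, with assumed intensities.
total = service_energy_kwh(data_gb=3.0,
                           network_kwh_per_gb=0.05,
                           dc_kwh_per_gb=0.01,
                           device_w=20.0, hours=1.0)
print(round(total, 3))  # network 0.15 + data centre 0.03 + device 0.02 = 0.2 kWh
```

The thesis's contribution can be read as narrowing the uncertainty bands around parameters such as `network_kwh_per_gb`, which dominate the variance between past assessments.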

Scientific workflow execution reproducibility using cloud-aware provenance

Ahmad, M. K. H. January 2016 (has links)
Scientific experiments and projects such as CMS and neuGRIDforYou (N4U) are annually producing data of the order of petabytes. They adopt scientific workflows to analyse this large amount of data in order to extract meaningful information. These workflows are executed over distributed resources, both compute and storage in nature, provided by the Grid and recently by the Cloud. The Cloud is becoming the playing field for scientists as it provides scalability and on-demand resource provisioning. Reproducing a workflow execution to verify results is vital for scientists and has proven to be a challenge. As per a study (Belhajjame et al. 2012), around 80% of workflows cannot be reproduced, and 12% of them are due to the lack of information about the execution environment. The dynamic and on-demand provisioning capability of the Cloud makes this more challenging. To overcome these challenges, this research aims to investigate how to capture the execution provenance of a scientific workflow along with the resources used to execute the workflow in a Cloud infrastructure. This information will then enable a scientist to reproduce workflow-based scientific experiments on the Cloud infrastructure by re-provisioning similar resources on the Cloud. Provenance has been recognised as information that helps in debugging, verifying and reproducing a scientific workflow execution. Recent adoption of Cloud-based scientific workflows presents an opportunity to investigate the suitability of existing approaches or to propose new approaches to collect provenance information from the Cloud and to utilize it for workflow reproducibility on the Cloud. From literature analysis, it was found that the existing approaches for the Grid or Cloud do not provide detailed resource information and also do not present an automatic provenance capturing approach for the Cloud environment.
To mitigate the challenges and fulfil the knowledge gap, a provenance-based approach, ReCAP, has been proposed in this thesis. In ReCAP, workflow execution reproducibility is achieved by (a) capturing the Cloud-aware provenance (CAP), (b) re-provisioning similar resources on the Cloud and re-executing the workflow on them, and (c) comparing the provenance graph structure, including the Cloud resource information, and the outputs of workflows. ReCAP captures the Cloud resource information and links it with the workflow provenance to generate Cloud-aware provenance. The Cloud-aware provenance consists of configuration parameters relating to hardware and software describing a resource on the Cloud. This information, once captured, aids in re-provisioning the same execution infrastructure on the Cloud for workflow re-execution. Since resources on the Cloud can be used in a static or dynamic (i.e. destroyed when a task is finished) manner, this presents a challenge for the devised provenance capturing approach. In order to deal with these scenarios, different capturing and mapping approaches have been presented in this thesis. These mapping approaches work outside the virtual machine and collect resource information from the Cloud middleware, thus they do not affect job performance. The impact of the collected Cloud resource information on the job as well as on the workflow execution has been evaluated through various experiments in this thesis. In ReCAP, the workflow reproducibility is verified by comparing the provenance graph structure, infrastructure details and the output produced by the workflows. To compare the provenance graphs, the captured provenance information including infrastructure details is translated to a graph model. These graphs of the original execution and the reproduced execution are then compared in order to analyse their similarity.
In this regard, two comparison approaches have been presented that can produce a qualitative analysis as well as quantitative analysis about the graph structure. The ReCAP framework and its constituent components are evaluated using different scientific workflows such as ReconAll and Montage from the domains of neuroscience (i.e. N4U) and astronomy respectively. The results have shown that ReCAP has been able to capture the Cloud-aware provenance and demonstrate the workflow execution reproducibility by re-provisioning the same resources on the Cloud. The results have also demonstrated that the provenance comparison approaches can determine the similarity between the two given provenance graphs. The results of workflow output comparison have shown that this approach is suitable to compare the outputs of scientific workflows, especially for deterministic workflows.
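A quantitative graph comparison of the kind described can be sketched minimally as follows. This is a hedged illustration, not ReCAP's actual algorithm: provenance graphs are reduced to edge sets (e.g. job-to-resource and job-to-output links) and similarity is measured with the Jaccard index. All node names are hypothetical.

```python
# Sketch of a quantitative provenance-graph comparison: each graph is a
# set of directed edges (node -> node); similarity is the Jaccard index
# of the two edge sets. The real ReCAP framework additionally compares
# Cloud resource details and workflow outputs.

def graph_similarity(edges_a, edges_b):
    """Jaccard similarity of two provenance graphs given as edge lists."""
    a, b = set(edges_a), set(edges_b)
    if not a and not b:
        return 1.0          # two empty graphs are trivially identical
    return len(a & b) / len(a | b)

# Hypothetical original vs. reproduced execution: the reproduced run
# placed job2 on a different (smaller) VM flavour.
original   = [("job1", "vm-small"), ("job1", "out.dat"), ("job2", "vm-large")]
reproduced = [("job1", "vm-small"), ("job1", "out.dat"), ("job2", "vm-small")]
print(graph_similarity(original, reproduced))  # 2 shared of 4 distinct edges -> 0.5
```

A score of 1.0 would indicate structurally identical executions on matching infrastructure; lower scores localise which jobs ran on divergent resources.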

Cloud computing in the large scale organisation : potential benefits and overcoming barriers to deployment

Bellamy, Martin Clifford January 2013 (has links)
There are three focal questions addressed in this thesis:
• Firstly, whether large organisations, particularly public sector or governmental ones, can realise benefits by transitioning from the ICT delivery models prevalent in the late 2000s to Cloud computing services?
• Secondly, in what circumstances can the benefits best be realised, and how and when can the associated risk-reward trade-off be managed effectively?
• Thirdly, what steps can be taken to ensure maximum benefit is gained from using Cloud computing? This includes a consideration of the technical and organisational obstacles that need to be overcome to realise these benefits in large organisations.
The potential benefits for organisations using Cloud computing services include cost reductions, faster innovation, delivery of modern information-based services that meet consumers' expectations, and improved choice and affordability of specialist services. There are many examples of successful Cloud computing deployments in large organisations that are saving time and money, although in larger organisations these are generally in areas that do not involve use of sensitive information. Despite the benefits, by 2013 cloud computing services account for less than 5% of most large organisations' ICT budgets. The key inhibitor to wider deployment is that use of Cloud computing services exposes organisations to new risks that can be costly to address. However, the level of cost reduction that can be attained means that progressive deployment of Cloud computing services seems inevitable. The challenge therefore is how best to manage the associated risks in an effective and efficient manner. This thesis considers the origin and benefits of Cloud computing, identifies the barriers to take-up and explores how these can be overcome, and considers how cloud service brokerages can potentially develop further to close the gap by building new capabilities to accelerate take-up and benefits realisation.

Hypergraph partitioning in the cloud

Lotfifar, Foad January 2016 (has links)
The thesis investigates the partitioning and load balancing problem which has many applications in High Performance Computing (HPC). The application to be partitioned is described with a graph or hypergraph. The latter is of greater interest as hypergraphs, compared to graphs, have a more general structure and can be used to model more complex relationships between groups of objects such as non-symmetric dependencies. Optimal graph and hypergraph partitioning is known to be NP-Hard but good polynomial time heuristic algorithms have been proposed. In this thesis, we propose two multi-level hypergraph partitioning algorithms. The algorithms are based on rough set clustering techniques. The first algorithm, which is a serial algorithm, obtains high quality partitionings and improves the partitioning cut by up to 71% compared to the state-of-the-art serial hypergraph partitioning algorithms. Furthermore, the capacity of serial algorithms is limited due to the rapid growth of problem sizes of distributed applications. Consequently, we also propose a parallel hypergraph partitioning algorithm. Considering the generality of the hypergraph model, designing a parallel algorithm is difficult and the available parallel hypergraph algorithms offer less scalability compared to their graph counterparts. The issue is twofold: the parallel algorithm and the complexity of the hypergraph structure. Our parallel algorithm provides a trade-off between global and local vertex clustering decisions. By employing novel techniques and approaches, our algorithm achieves better scalability than the state-of-the-art parallel hypergraph partitioner in the Zoltan tool on a set of benchmarks, especially ones with irregular structure. Furthermore, recent advances in cloud computing and the services they provide have led to a trend in moving HPC and large scale distributed applications into the cloud.
Despite its advantages, some aspects of the cloud, such as limited network resources, present a challenge to running communication-intensive applications and make them non-scalable in the cloud. While hypergraph partitioning is proposed as a solution for decreasing the communication overhead within parallel distributed applications, it can also offer advantages for running these applications in the cloud. The partitioning is usually done as a pre-processing step before running the parallel application. As parallel hypergraph partitioning itself is a communication-intensive operation, running it in the cloud is hard and suffers from poor scalability. The thesis also investigates the scalability of parallel hypergraph partitioning algorithms in the cloud, the challenges they present, and proposes solutions to improve the cost/performance ratio for running the partitioning problem in the cloud. Our algorithms are implemented as a new hypergraph partitioning package within Zoltan. It is an open source Linux-based toolkit for parallel partitioning, load balancing and data-management designed at Sandia National Labs. The algorithms are known as FEHG and PFEHG algorithms.
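The "partitioning cut" objective mentioned above can be made concrete with a short sketch. This is a generic illustration of the hyperedge-cut metric that hypergraph partitioners minimise, not the FEHG algorithm itself; the example hypergraph and partition are assumptions.

```python
# Minimal sketch of the hyperedge-cut objective minimised by hypergraph
# partitioners: a hyperedge is "cut" if its vertices span more than one
# part. (Practical tools often use the related "connectivity - 1" metric,
# which weights an edge by how many parts it touches.)

def hyperedge_cut(hyperedges, part_of):
    """Count hyperedges whose vertices are spread over more than one part."""
    cut = 0
    for edge in hyperedges:
        parts = {part_of[v] for v in edge}   # distinct parts this edge touches
        if len(parts) > 1:
            cut += 1
    return cut

# Hypothetical hypergraph: 4 vertices, 3 hyperedges, 2-way partition.
hyperedges = [("a", "b", "c"), ("c", "d"), ("a", "d")]
part_of = {"a": 0, "b": 0, "c": 0, "d": 1}
print(hyperedge_cut(hyperedges, part_of))  # ("c","d") and ("a","d") are cut -> 2
```

In a parallel application, each cut hyperedge corresponds to communication between processors, which is why lowering the cut (and the cost of computing it) matters both for HPC clusters and for bandwidth-constrained cloud deployments.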

Activity recognition in event driven IoT-service architectures

Meissner, Stefan January 2016 (has links)
With the advent of the Internet of Things, many more sensor-generated data streams have become available from which researchers want to exploit context. Many researchers have worked on context recognition for rather unimodal data in pervasive systems, but recent work on object virtualisation in the Internet-of-Things domain enables context exploitation based on processing multi-modal information collected from pervasive systems. In addition to the sensed data, there is formalised knowledge about the real-world objects emitted by IoT services, as contributed by the author in [1], [2] and [3]. In this work an approach for context recognition is proposed that takes knowledge about virtual objects and their relationships into account in order to improve context recognition. The approach will only recognise context that has been predefined manually beforehand; no new context information can be exploited with the work proposed here. This work's scope is recognising the activity that a user is most likely involved in by observing the evolving context of a user of a pervasive system. As an assumption for this work, the activities have to be modelled as graphs in which the nodes are situations observable by a pervasive system. The pervasive system to be utilised has to be built compliant with the Architectural Reference Model for the IoT (ARM), to which the author has contributed in [4] and [5]. The hybrid context model proposed in this thesis is made of an ontology-based part as well as a probability-based part. Ontologies assist in adapting the probability distributions for the Hidden Markov Model-based recognition technique according to the current context. It could be demonstrated in this work that the context-aware adaptation of the recognition model increased the detection rate of the activity recognition system.
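The HMM-based recognition step can be sketched with a standard Viterbi decoder. This is a hedged, generic illustration, not the thesis's implementation: hidden states are activities, observations are situations detected by the pervasive system, and all probability values are assumed. The ontology-driven adaptation described above would adjust these tables at run time; here they are fixed.

```python
# Sketch of HMM-based activity recognition via Viterbi decoding.
# States = activities, observations = detected situations.
# All probabilities below are illustrative assumptions.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely activity sequence for an observation sequence."""
    v = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        v.append({})
        new_path = {}
        for s in states:
            # Best predecessor state for reaching s while emitting o.
            prob, prev = max((v[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            v[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: v[-1][s])
    return path[best]

states = ("cooking", "sleeping")
start_p = {"cooking": 0.5, "sleeping": 0.5}
trans_p = {"cooking": {"cooking": 0.8, "sleeping": 0.2},
           "sleeping": {"cooking": 0.2, "sleeping": 0.8}}
emit_p = {"cooking":  {"stove_on": 0.7,  "lights_off": 0.1, "motion": 0.2},
          "sleeping": {"stove_on": 0.05, "lights_off": 0.7, "motion": 0.25}}

result = viterbi(("stove_on", "motion", "lights_off"),
                 states, start_p, trans_p, emit_p)
print(result)  # -> ['cooking', 'sleeping', 'sleeping']
```

Context-aware adaptation, in this framing, amounts to replacing `trans_p` and `emit_p` with distributions selected via the ontology for the current situation, which is what the thesis reports as improving the detection rate.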

Insider threat : memory confidentiality and integrity in the cloud

Rocha, Francisco January 2015 (has links)
The advantages of always available services, such as remote device backup or data storage, have helped the widespread adoption of cloud computing. However, cloud computing services challenge the traditional boundary between trusted inside and untrusted outside. A consumer’s data and applications are no longer on premises, fundamentally changing the scope of an insider threat. This thesis looks at the security risks associated with an insider threat. Specifically, we look into the critical challenge of assuring data confidentiality and integrity for the execution of arbitrary software in a consumer’s virtual machine. The problem arises from having multiple virtual machines sharing hardware resources in the same physical host, while an administrator is granted elevated privileges over such a host. We used an empirical approach to collect evidence of the existence of this security problem and implemented a prototype of a novel prevention mechanism for such a problem. Finally, we propose a trustworthy cloud architecture which uses the security properties our prevention mechanism guarantees as a building block. To collect the evidence required to demonstrate how an insider threat can become a security problem to a cloud computing infrastructure, we performed a set of attacks targeting the three most commonly used virtualization software solutions. These attacks attempt to compromise data confidentiality and integrity of cloud consumers’ data. The prototype to evaluate our novel prevention mechanism was implemented in the Xen hypervisor and tested against known attacks. The prototype we implemented focuses on applying restrictions to the permissive memory access model currently in use in the most relevant virtualization software solutions. We envision the use of a mandatory memory access control model in the virtualization software.
This model enforces the principle of least privilege for memory access, which means cloud administrators are assigned only enough privileges to successfully perform their administrative tasks. Although the changes we suggest to the virtualization layer make it more restrictive, our solution is versatile enough to port all the functionality available in current virtualization solutions. Therefore, our trustworthy cloud architecture guarantees data confidentiality and integrity and achieves a more transparent trustworthy cloud ecosystem while preserving functionality. Our results show that a malicious insider can compromise security-sensitive data in the three most important commercial virtualization software solutions. These virtualization solutions are publicly available and the number of cloud servers using these solutions accounts for the majority of the virtualization market. The prevention mechanism prototype we designed and implemented guarantees data confidentiality and integrity against such attacks and reduces the trusted computing base of the virtualization layer. These results indicate how current virtualization solutions need to reconsider their view on insider threats.
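The least-privilege memory access model can be illustrated with a toy default-deny policy. This is a conceptual sketch only, with hypothetical names; it does not reflect the Xen prototype's actual mechanism or policy language.

```python
# Toy model of mandatory memory access control under least privilege:
# every access is denied unless explicitly granted, so even a privileged
# administrative domain cannot read guest memory by default.
# Subject/owner names and the example grant are illustrative assumptions.

READ, WRITE = "read", "write"

class MemoryAccessPolicy:
    def __init__(self):
        # (subject, page_owner) -> set of explicitly granted operations
        self.rules = {}

    def grant(self, subject, page_owner, op):
        self.rules.setdefault((subject, page_owner), set()).add(op)

    def check(self, subject, page_owner, op):
        # Default deny: least privilege for every subject, including
        # the administrative domain.
        return op in self.rules.get((subject, page_owner), set())

policy = MemoryAccessPolicy()
# Hypothetical grant: the admin domain may write guest pages during VM
# setup, but has no standing right to read the guest's memory.
policy.grant("dom0", "guest-vm", WRITE)

print(policy.check("dom0", "guest-vm", WRITE))  # True
print(policy.check("dom0", "guest-vm", READ))   # False: guest data stays confidential
```

The essential contrast with the permissive model criticised in the thesis is the default: there, an administrator's domain can map any guest page, whereas here every mapping must be justified by an explicit, task-scoped grant.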
