101

Contribution to agents-based negotiation and monitoring of cloud computing service level agreement

Alsrheed, F. January 2014 (has links)
Cloud Computing environments are dynamic and open systems, where cloud providers and consumers frequently join and leave the cloud marketplaces. Due to the increasing number of cloud consumers and providers, it is becoming almost impossible to facilitate face-to-face meetings to negotiate and establish a Service Level Agreement (SLA); thus automated negotiation is needed to establish SLAs between service providers and consumers with no previous knowledge of each other. In this thesis, I propose an Automated Cloud Service Level Agreement framework (ACSLA). ACSLA is made up of five stages, with corresponding software agent components: Gathering, Filtering, Negotiation, SLA Agreement and Monitoring. In the Gathering stage, all the information about the providers and what they can offer is gathered. In the Filtering stage, the customer's agent sends the request to ACSLA, which filters all the providers in order to recommend the best-matched candidates. In the Negotiation stage, the customer's agent negotiates separately with each candidate provider using different negotiation algorithms, which are evaluated and for which recommendations and guidelines are provided; the best outcome from the customer's perspective is then selected and becomes the agreed value for each parameter in the SLA. In the SLA Agreement stage, the provider's agent and the customer's agent are informed about the agreement, which is specified in measurable terms; the output of this stage is a list of metrics that can be monitored in the Monitoring stage. The customer's agent and the provider's agent also negotiate and agree on the penalties and actions to be taken in case the SLA is violated or unfulfilled. A variety of actions can be taken, such as informing both sides, recommending solutions, self-healing and hot-swapping. ACSLA is evaluated using case studies which show its flexibility and effectiveness. ACSLA offers a novel approach to tackling many challenging issues in the current, and likely future, cloud computing market, and it is the first complete automated framework for cloud SLAs. Many automated negotiation algorithms and protocols have been developed over the years in other research areas, but establishing functional solutions applicable to the cloud computing environment is not an easy task. Rubinstein's Alternating Offers Protocol, also known as the Rubinstein bargaining model, has been investigated for application in automated cloud SLA negotiation, and it offers a satisfactory technical solution to this challenging problem. The purpose of this research was also to apply the state of the art in automated negotiation algorithms/agents within the described Cloud Computing SLA framework, to develop new algorithms, and to evaluate and recommend the most appropriate negotiation approach based on multiple criteria.
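As a rough illustration of the alternating-offers style of negotiation investigated here, the minimal Python sketch below has a consumer agent and a provider agent concede toward agreement on a single SLA parameter (availability). The value ranges, linear concession strategy and acceptance rule are hypothetical assumptions, not the algorithms developed or evaluated in the thesis.

```python
# Illustrative sketch of alternating-offers negotiation over one SLA parameter
# (availability %). The ranges, linear concession strategy and acceptance rule
# are hypothetical and are not the thesis's algorithms.

def negotiate(consumer_range, provider_range, rounds=20):
    c_open, c_reserve = consumer_range   # consumer demands high availability
    p_open, p_reserve = provider_range   # provider starts low, concedes upward
    for t in range(rounds):
        frac = t / max(rounds - 1, 1)
        demand = c_open - frac * (c_open - c_reserve)      # consumer concedes down
        concession = p_open + frac * (p_reserve - p_open)  # provider concedes up
        if concession >= demand:            # offers have crossed: accept
            agreed = demand if t % 2 == 0 else concession  # current offerer's value
            return round(agreed, 3), t
    return None, rounds                     # deadline reached, no agreement

if __name__ == "__main__":
    value, rounds_used = negotiate(consumer_range=(99.99, 99.0),
                                   provider_range=(99.0, 99.9))
    print(f"agreed availability: {value}% after {rounds_used} rounds")
```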
102

Enhancing security in public IaaS cloud systems through VM monitoring : a consumer's perspective

Al Said, Taimur January 2016 (has links)
Cloud computing is attractive for both consumers and providers, who benefit from potential economies of scale in reducing the cost of use (for consumers) and of operating the infrastructure (for providers). In the IaaS service deployment model of the cloud, consumers can launch their own virtual machines (VMs) on an infrastructure made available by a cloud provider, enabling a number of different applications to be hosted within the VM. The cloud provider generally has full control of and access to the VM, giving the provider the potential to access both VM configuration parameters and the hosted data. Trust between the consumer and the provider is key in this context, and is generally assumed to exist. However, relying on this assumption alone can be limiting. We argue that the VM owner must have greater access to the operations that are being carried out on their VM by the provider and greater visibility of how the VM and its data are stored and processed in the cloud. Cases where VMs are migrated by the provider to another region without notifying the owner raise particular privacy concerns. Therefore, mechanisms must be in place to ensure that violations of confidentiality, integrity and the SLA do not happen. In this thesis, we present a number of contributions in the field of cloud security which aim at supporting trustworthy cloud computing. We propose monitoring of security-related VM events as a solution to some of the cloud security challenges, and present a system design and architecture to monitor security-related VM events in public IaaS cloud systems. To enable the system to achieve focused monitoring, we propose a taxonomy of security-related VM events. The architecture was supported by a prototype implementation of the monitoring tool, called VMInformant, which keeps the user informed and alerted about various events that have taken place on their VM. The tool was evaluated to learn about the performance and storage overheads associated with monitoring such events using CPU- and I/O-intensive benchmarks. Since events in multiple VMs belonging to the same owner may be related, we suggest the architecture of a system, called Inspector Station, to aggregate and analyse events from multiple VMs. This system enables the consumer: (1) to learn about the overall security status of multiple VMs; (2) to find patterns in the events; and (3) to make informed decisions related to security. To ensure that VMs are not migrated to another region without notifying the owner, we propose a hybrid approach which combines multiple metrics to estimate the likelihood of a migration event. The technical aspects in this thesis are backed up by practical experiments that evaluate the approaches in real public IaaS cloud systems, e.g. Amazon AWS and Google Cloud Platform. We argue that having this level of transparency is essential to improve the trust between a cloud consumer and provider, especially in the context of a public cloud system.
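The hybrid migration-detection idea can be pictured as a weighted combination of observable signals, as in the hedged sketch below; the signal names, weights and alert threshold are hypothetical stand-ins, not the metrics the thesis combines.

```python
# Minimal sketch of the hybrid idea: combine several observable signals into a
# single likelihood that a VM was migrated to another region. Signal names,
# weights and the threshold are hypothetical, not the thesis's exact metrics.

def migration_likelihood(signals, weights=None):
    """signals: dict of normalised indicators in [0, 1]."""
    weights = weights or {"latency_shift": 0.4,
                          "ip_geolocation_change": 0.4,
                          "clock_drift": 0.2}
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

observed = {"latency_shift": 0.7,          # RTT to fixed landmarks changed markedly
            "ip_geolocation_change": 1.0,  # external IP now geolocates elsewhere
            "clock_drift": 0.1}            # little change in timing behaviour
score = migration_likelihood(observed)
print(f"migration likelihood: {score:.2f}",
      "-> alert owner" if score > 0.6 else "-> no action")
```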
103

Pervasive service discovery in low-power and lossy networks

Djamaa, B. January 2016 (has links)
Pervasive Service Discovery (SD) in Low-power and Lossy Networks (LLNs) is expected to play a major role in realising the Internet of Things (IoT) vision. Such a vision aims to expand the current Internet to interconnect billions of miniature smart objects that sense and act on our surroundings in a way that will revolutionise the future. The pervasiveness and heterogeneity of such low-power devices require robust, automatic, interoperable and scalable deployment and operability solutions. At the same time, the limitations of such constrained devices impose strict challenges regarding complexity, energy consumption, time-efficiency and mobility. This research contributes new lightweight solutions to facilitate automatic deployment and operability of LLNs. It mainly tackles the aforementioned challenges through the proposition of novel component-based, automatic and efficient SD solutions that ensure extensibility and adaptability to various LLN environments. Building upon such an architecture, a first fully-distributed, hybrid push-pull SD solution dubbed EADP (Extensible Adaptable Discovery Protocol) is proposed based on the well-known Trickle algorithm. Motivated by EADP's achievements, new methods to optimise Trickle are introduced. Such methods allow Trickle to encompass a wide range of algorithms and extend its usage to new application domains. One of the new applications is concretised in the TrickleSD protocol, which aims to provide automatic, reliable, scalable and time-efficient SD. To optimise the energy efficiency of TrickleSD, two mechanisms improving broadcast communication in LLNs are proposed. Finally, interoperable standards-based SD in the IoT is demonstrated, and methods combining zero-configuration operations with infrastructure-based solutions are proposed. Experimental evaluations of the above contributions reveal that it is possible to achieve automatic, cost-effective, time-efficient, lightweight, and interoperable SD in LLNs. These achievements open novel perspectives for zero-configuration capabilities in the IoT and promise to bring the 'things' to all people everywhere.
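For readers unfamiliar with Trickle (RFC 6206), on which EADP and TrickleSD build, the sketch below shows the core timer behaviour: intervals double while the network is consistent and collapse to the minimum on inconsistency. It is a textbook rendering of the standard algorithm, not the optimisations proposed in the thesis.

```python
import random

class TrickleTimer:
    """Compact sketch of the Trickle timer (RFC 6206). For brevity the transmit
    decision is evaluated at the end of each interval; RFC 6206 transmits at
    the random point t if fewer than k consistent messages were heard by then."""

    def __init__(self, imin=1.0, doublings=8, k=1):
        self.imin, self.k = imin, k
        self.imax = imin * (2 ** doublings)
        self.i = imin                    # current interval length (seconds)
        self._start_interval()

    def _start_interval(self):
        self.c = 0                                   # consistent-message counter
        self.t = random.uniform(self.i / 2, self.i)  # random transmission point

    def hear_consistent(self):
        self.c += 1                      # a neighbour advertised the same state

    def hear_inconsistent(self):
        if self.i > self.imin:           # new or conflicting info: react quickly
            self.i = self.imin
            self._start_interval()

    def interval_expired(self):
        should_transmit = self.c < self.k            # suppression rule
        self.i = min(self.i * 2, self.imax)          # back off when quiet
        self._start_interval()
        return should_transmit

# Example: in a quiet network the interval keeps doubling, saving energy.
timer = TrickleTimer()
for _ in range(5):
    print(f"interval={timer.i:5.1f}s transmit={timer.interval_expired()}")
```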
104

An adaptive security framework for evaluating and assessing security implementations in PaaS cloud models

Akinbi, Olushola Alexander January 2015 (has links)
The security risks of cloud computing and the ambiguity of the security mechanisms implemented on an on-demand cloud service such as Platform-as-a-Service (PaaS) continue to raise concerns among cloud consumers. These concerns hinder the adoption of the potential offered by provisioning computing resources at this scale, and indicate that much still needs to be done to improve the security controls implemented on cloud computing services as a whole. There is a need to understand and evaluate the security mechanisms and controls implemented to preserve the confidentiality, integrity and availability of data stored, processed and accessed in the cloud, and to ensure these mechanisms meet security standards and requirements so as to mitigate any security risks. Although most organisations and cloud service providers (CSPs) have various information security management systems that they use to evaluate their computer security, and CSPs try to obtain security certifications based on industry standards, cloud customers are not sure of the security mechanisms implemented on cloud services or how these mechanisms are integrated to provide adequate security for their data and the applications developed and deployed in the cloud. This research study presents a systematic and comprehensive approach developed by the researcher to understand, in detail, the security architecture of PaaS clouds. This approach involves the development of a security framework which is used as a tool to identify and evaluate the security mechanisms implemented on each PaaS component. The primary findings and preliminary analysis of the evaluation enabled the researcher to determine the security provisions, capabilities and limitations of the security features implemented on this type of cloud delivery model.
105

Ad hoc cloud computing

McGilvary, Gary Andrew January 2014 (has links)
Commercial and private cloud providers offer virtualized resources via a set of co-located and dedicated hosts that are exclusively reserved for the purpose of offering a cloud service. While both cloud models appeal to the mass market, there are many cases where outsourcing to a remote platform or procuring an in-house infrastructure may not be ideal or even possible. To offer an attractive alternative, we introduce and develop an ad hoc cloud computing platform to transform spare resource capacity from an infrastructure owner's locally available, but non-exclusive and unreliable, infrastructure into an overlay cloud platform. The foundation of the ad hoc cloud relies on transferring and instantiating lightweight virtual machines on demand upon near-optimal hosts, while virtual machine checkpoints are distributed in a P2P fashion to other members of the ad hoc cloud. Virtual machines found to be non-operational are restored elsewhere, ensuring the continuity of cloud jobs. In this thesis we investigate the feasibility, reliability and performance of ad hoc cloud computing infrastructures. We first show that the combination of volunteer computing and virtualization is the backbone of the ad hoc cloud. We outline the process of virtualizing the volunteer system BOINC to create V-BOINC. V-BOINC distributes virtual machines to volunteer hosts, allowing volunteer applications to be executed in a sandbox environment and addressing many of the shortcomings of BOINC; this also provides the basis for an ad hoc cloud computing platform to be developed. We detail the challenges of transforming V-BOINC into an ad hoc cloud and outline the transformational process and integrated extensions. These include a BOINC job submission system, cloud job and virtual machine restoration schedulers and a periodic P2P checkpoint distribution component. Furthermore, as current monitoring tools are unable to cope with the dynamic nature of ad hoc clouds, a dynamic infrastructure monitoring and management tool called the Cloudlet Control Monitoring System is developed and presented. We evaluate each of our individual contributions as well as the reliability, performance and overheads associated with an ad hoc cloud deployed on a realistically simulated unreliable infrastructure. We conclude that the ad hoc cloud is not only a feasible concept but also a viable computational alternative that offers high levels of reliability and can at least offer reasonable performance, which at times may exceed the performance of a commercial cloud infrastructure.
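The restoration principle, re-instantiating a VM elsewhere from the most recent P2P-replicated checkpoint when its host disappears, can be sketched as follows; the host model, reliability score and checkpoint index are simplified assumptions rather than the thesis's actual schedulers.

```python
# Sketch of the restoration idea: when an unreliable host goes down, restart
# its VMs elsewhere from P2P-replicated checkpoints. Structures and the
# reliability score are hypothetical simplifications.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    online: bool = True
    reliability: float = 0.9          # fraction of time the host stays available
    running: set = field(default_factory=set)

def restore_failed_vms(hosts, checkpoints):
    """checkpoints: {vm_id: set of host names holding a checkpoint replica}."""
    for host in hosts:
        if host.online:
            continue
        for vm in list(host.running):
            replicas = checkpoints.get(vm, set())
            candidates = [h for h in hosts
                          if h.online and h is not host and h.name in replicas]
            if not candidates:
                continue                     # no replica reachable; retry later
            target = max(candidates, key=lambda h: h.reliability)
            host.running.discard(vm)
            target.running.add(vm)
            print(f"restored {vm} on {target.name} from checkpoint")

hosts = [Host("a", online=False, running={"vm1"}),
         Host("b", reliability=0.95), Host("c", reliability=0.7)]
restore_failed_vms(hosts, {"vm1": {"b", "c"}})   # -> restored vm1 on b
```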
106

The inter-cloud meta-scheduling

Sotiriadis, Stelios January 2013 (has links)
Inter-cloud is a recently emerging approach that expands cloud elasticity. By facilitating an adaptable setting, it aims at the realization of scalable resource provisioning that enables a diversity of cloud user requirements to be handled efficiently. This study's contribution is the inter-cloud performance optimization of job executions using meta-scheduling concepts. This includes the development of the inter-cloud meta-scheduling (ICMS) framework, the ICMS optimal schemes and the SimIC toolkit. The ICMS model is an architectural strategy for managing and scheduling user services in virtualized, dynamically inter-linked clouds. This is achieved by the development of a model that includes a set of algorithms, namely the Service-Request, Service-Distribution, Service-Availability and Service-Allocation algorithms. These, along with resource management optimal schemes, offer the novel functionalities of the ICMS, where message exchanging implements the job distribution method, the VM deployment offers the VM management features and the local resource management system details the management of the local cloud schedulers. The resulting system offers great flexibility by facilitating a lightweight resource management methodology while at the same time handling the heterogeneity of different clouds through advanced service level agreement coordination. Experimental results are promising, as the proposed ICMS model enhances the performance of service distribution for a variety of criteria such as service execution times, makespan, turnaround times, utilization levels and energy consumption rates for various inter-cloud entities, e.g. users, hosts and VMs. For example, ICMS optimizes the performance of a non-meta-brokering inter-cloud by 3%, while ICMS with full optimal schemes achieves 9% optimization for the same configurations. The whole experimental platform is implemented in the inter-cloud Simulation toolkit (SimIC) developed by the author, which is a discrete event simulation framework.
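The flavour of the meta-brokering decision can be conveyed by a toy ranking step like the one below, which scores candidate clouds on estimated execution time, utilisation and energy. The criteria and weights are hypothetical and only hint at the roles of the Service-Distribution and Service-Allocation algorithms; they are not the ICMS algorithms themselves.

```python
# Toy meta-broker selection step: rank candidate clouds for a request on a
# weighted combination of criteria. Weights and fields are hypothetical.

def rank_clouds(request, clouds, weights=(0.5, 0.3, 0.2)):
    w_time, w_util, w_energy = weights
    scored = []
    for cloud in clouds:
        est_time = request["length"] / cloud["mips"]   # rough execution time (s)
        score = (w_time * est_time
                 + w_util * cloud["utilisation"]
                 + w_energy * cloud["energy_per_job"])
        scored.append((round(score, 3), cloud["name"]))
    return sorted(scored)          # lowest combined cost first

clouds = [{"name": "cloud-A", "mips": 4000, "utilisation": 0.7, "energy_per_job": 1.2},
          {"name": "cloud-B", "mips": 2500, "utilisation": 0.3, "energy_per_job": 0.8}]
print(rank_clouds({"length": 10_000}, clouds))   # cloud-A wins on execution time
```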
107

Cloud BI : a multi-party authentication framework for securing business intelligence on the Cloud

Al-Aqrabi, Hussain January 2016 (has links)
Business intelligence (BI) has emerged as a key technology to be hosted on Cloud computing. BI offers a method to analyse data, thereby enabling informed decision making to improve business performance and profitability. However, within the shared domains of Cloud computing, BI is exposed to increased security and privacy threats because an unauthorised user may be able to gain access to highly sensitive, consolidated business information. The business process contains collaborating services and users from multiple Cloud systems in different security realms which need to be engaged dynamically at runtime. If the heterogeneous Cloud systems located in different security realms do not have direct authentication relationships, then it is technically difficult to enable a secure collaboration. In order to address these security challenges, a new authentication framework is required to establish certain trust relationships among these BI service instances and users by distributing a common session secret to all participants of a session. The author addresses this challenge by designing and implementing a multi-party authentication framework for dynamic secure interactions when members of different security realms want to access services. The framework takes advantage of the trust relationships between session members in different security realms to enable a user to obtain security credentials to access Cloud resources in a remote realm. This mechanism helps Cloud session users authenticate their session membership, improving the authentication processes within multi-party sessions. The correctness of the proposed framework has been verified using BAN logic. The performance and the overhead have been evaluated via simulation in a dynamic environment. A prototype authentication system has been designed, implemented and tested based on the proposed framework. The research concludes that the proposed framework and its supporting protocols are an effective functional basis for practical implementation testing, as the framework achieves good scalability and imposes only minimal performance overhead, which is comparable with other state-of-the-art methods.
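The core idea of a common session secret can be sketched as follows: a coordinator admits members and later verifies their session membership via an HMAC over a challenge. This stdlib-only Python sketch omits cross-realm credential exchange and secure key delivery, and it is not the thesis's protocol, whose correctness is argued with BAN logic; all names are hypothetical.

```python
import hmac, hashlib, secrets

# Sketch of the session-membership idea only: members admitted to a session
# share a common secret and later prove membership via an HMAC over a
# challenge. Cross-realm credential exchange is omitted.

class SessionCoordinator:
    def __init__(self):
        self.secret = secrets.token_bytes(32)   # common session secret
        self.members = set()

    def admit(self, member_id):
        self.members.add(member_id)
        return self.secret          # assume delivery over a pre-secured channel

    def verify(self, member_id, challenge, tag):
        expected = hmac.new(self.secret, member_id.encode() + challenge,
                            hashlib.sha256).digest()
        return member_id in self.members and hmac.compare_digest(expected, tag)

coord = SessionCoordinator()
key = coord.admit("bi-service@realmA")
challenge = secrets.token_bytes(16)
tag = hmac.new(key, b"bi-service@realmA" + challenge, hashlib.sha256).digest()
print(coord.verify("bi-service@realmA", challenge, tag))   # True
```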
108

A linear logic approach to RESTful web service modelling and composition

Zhao, Xia January 2013 (has links)
RESTful Web Services are gaining increasing attention from both the service and the Web communities. The rising number of services being implemented and made available on the Web is creating a demand for modelling techniques that can abstract REST design from the implementation in order to better specify, analyse and implement large-scale RESTful Web systems. Such techniques can also help by providing suitable RESTful Web Service composition methods, which can reduce costs by efficiently re-using the large number of services that are already available and by exploiting existing services for complex business purposes. This research considers RESTful Web Services as state transition systems and proposes a novel Linear Logic based approach, the first of its kind, for both the modelling and the composition of RESTful Web Services. The thesis demonstrates the capabilities of resource-sensitive Linear Logic for modelling five key REST constraints and proposes a two-stage approach to service composition involving Linear Logic theorem proving and proof-as-process based on the π-calculus. Whereas previous approaches have focused on individual aspects of the composition of RESTful Web Services (e.g. execution or high-level modelling), this work bridges the gap between abstract formal modelling and application-level execution in an efficient and effective way. The approach not only ensures the completeness and correctness of the resulting composed services but also produces their process models naturally, providing the possibility of translating them into executable business languages. Furthermore, the research encodes the proposed modelling and composition method into the Coq proof assistant, which enables both the Linear Logic theorem proving and the π-calculus extraction to be conducted semi-automatically. The feasibility and versatility studies performed in two disparate user scenarios (shopping and biomedical service composition) show that the proposed method provides a good level of scalability when the numbers of services and resources grow.
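The resource-sensitive intuition behind the Linear Logic view (inputs are consumed rather than copied when a service fires) can be illustrated with a naive planner over consumable resources. The services and resources below are hypothetical, and the thesis instead uses Linear Logic proof search in Coq with π-calculus extraction, not this breadth-first search.

```python
# Toy, resource-sensitive composition search: each service consumes its input
# resources and produces its outputs, mirroring the linear (non-copyable)
# treatment of resources. Names are hypothetical.

from collections import deque

def compose(services, start, goal):
    """services: {name: (consumes frozenset, produces frozenset)}."""
    queue = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while queue:
        state, plan = queue.popleft()
        if goal <= state:
            return plan
        for name, (needs, gives) in services.items():
            if needs <= state:
                nxt = (state - needs) | gives      # consume inputs, add outputs
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None

services = {"create_order": (frozenset({"cart"}), frozenset({"order"})),
            "pay":          (frozenset({"order", "card"}), frozenset({"receipt"}))}
print(compose(services, {"cart", "card"}, {"receipt"}))  # ['create_order', 'pay']
```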
109

Contributions to big geospatial data rendering and visualisations

Tully, D. A. January 2017 (has links)
Current geographical information systems lack features and components which are commonly found within rendering and game engines. When combined with computer game technologies, a modern geographical information system capable of advanced rendering and data visualisation is achievable. We have investigated the combination of big geospatial data and computer game engines for the creation of a modern geographical information system framework capable of visualising densely populated real-world scenes using advanced rendering algorithms. The pipeline imports raw geospatial data in the form of Ordnance Survey data provided by the UK government, LiDAR data provided by a private company, and the global open mapping project OpenStreetMap (OSM). The data sets are combined so that, where data is missing from the high-resolution LiDAR source, interpolated Ordnance Survey data is used to produce the missing terrain. Once a high-resolution terrain data set with complete coverage has been generated, sub-datasets can be extracted from the LiDAR using OSM boundary data as a perimeter. The boundaries in OSM represent buildings or assets, so data such as building heights can be extracted and used to update the OSM database. Using a novel adjacency matrix extraction technique, 3D model mesh objects can be generated from both LiDAR and OSM information. The generation of model mesh objects from OSM data uses procedural content generation techniques, enabling the generation of GIS-based 3D real-world scenes. Although LiDAR and Ordnance Survey data are available only for the UK, restricting that part of the generation to UK borders, the system is able, using OSM alone, to procedurally generate any place in the world covered by OSM. In this research, to manage the large amounts of data, a novel scenegraph structure has been developed to spatially separate OSM data according to OS coordinates, splitting the UK into 1-kilometre-square tiles and categorising OSM assets such as buildings, highways and amenities. Once spatially organised and categorised as assets of importance, the novel scenegraph allows for data dispersal through an entire scene in real time. The 3D real-world scenes visualised within the runtime simulator can be manipulated in four main ways:
• viewing at any angle or location through the use of a 3D and 2D camera system;
• modifying the effects or effect parameters applied to the 3D model mesh objects to visualise user-defined data, by use of our novel algorithms and unique lighting data-structure effect file with accompanying material interface;
• procedurally generating animations which can be applied to the spatial parameters or the visual properties of objects;
• applying the Indexed Array Shader Function and taking advantage of the novel big geospatial scenegraph structure to exploit better rendering techniques in the context of a modern geographical information system, which, to the best of our knowledge, has not been done before.
Combined with the novel scenegraph structure layout, the user can view and manipulate real-world procedurally generated worlds with additional user-generated content in a number of ways unavailable in current geographical information system implementations. We evaluate multiple functionalities and aspects of the framework.
We evaluate the performance of the system by stress testing, measuring frame rates on maps of multiple sizes, and we evaluate the benefits of the novel scenegraph structure for categorising, separating, manoeuvring, and data dispersal. The experiments uniformly scale, by a factor of n², the number of scenegraph nodes containing no model mesh data, procedurally generated model data, and user-generated model data, and compare runtime parameters and memory consumption. We have compared the technical features of the framework against those of related real-world commercial projects: Google Maps, OSM2World, OSM-3D, OSM-Buildings, OpenStreetMap, ArcGIS, Sustainability Assessment Visualisation and Enhancement (SAVE), and Autonomous Learning Agents for Decentralised Data and Information (ALLADIN). We conclude that, when compared to related research, the framework produces data-sets suitable for visualising geospatial assets from the combination of real-world data-sets, capable of being used by a multitude of external game engines, applications, and geographical information systems. The ability to manipulate the production of said data-sets at pre-compile time, provided by the pre-processor, aids processing speed for runtime simulation. The added benefit is to allow users to manipulate the spatial and visual parameters in a number of varying ways with minimal domain knowledge. The features for creating procedural animations attached to each of the spatial parameters and visual shading parameters allow users to view and encode their own representations of scenes, which is unavailable in any of the products stated. Each of the alternative projects has similar features, but none allows full animation of all parameters of an asset, spatially, visually, or both. We also evaluated the framework on the implemented features, implementing the needed algorithms and novelties of the framework as problems arose during development. An example is the algorithm for combining the multiple terrain data-sets we have (Ordnance Survey terrain data and Light Detection and Ranging Digital Surface Model and Digital Terrain Model data) in a justifiable way to produce maps with no missing data values for further analysis and visualisation. The majority of visualisations are rendered using an Indexed Array Shader Function effect file, structured as a novel design to encapsulate common rendering effects found in commercial computer games and apply them to the rendering of real-world assets for a modern geographical information system. Maps of various sizes, in both dimensions, polygonal density, asset counts, and memory consumption prove successful in relation to real-time rendering parameters, i.e. the visualisation of maps does not create a bottleneck for processing. The visualised scenes allow users to view large, dense environments which include terrain models with procedural and user-generated buildings, highways, amenities, and boundaries. The use of a novel scenegraph structure allows for fast iteration and search driven by user-defined dynamic queries. Interaction with the framework is provided through a novel Interactive Visualisation Interface. Utilising the interface, a user can apply procedurally generated animations to both the spatial and visual properties of any node or model mesh within the scene. We conclude that the framework has been a success.
We have completed what we set out to develop and create: we have combined multiple data-sets to create improved terrain data-sets for further research and development, and we have created a framework which combines the real-world data of Ordnance Survey, LiDAR, and OpenStreetMap and implements algorithms to create procedural assets of buildings, highways, terrain, amenities, model meshes, and boundaries for visualisation, with features which allow users to search and manipulate a city's worth of data on a per-object basis or in user-defined combinations. The successful framework has been built with the cross-domain specialism needed for such a project, combining the areas of computer games technology, engine and framework development, procedural generation techniques and algorithms, use of real-world data-sets, geographical information system development, data parsing, big-data algorithmic reduction techniques, and visualisation using shader techniques.
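A minimal sketch of the tile-keyed scenegraph idea follows: assets are bucketed per 1 km Ordnance Survey grid square and per category so that spatial queries touch only the relevant tiles. The class and field names are hypothetical simplifications of the structure described above, not the thesis's implementation.

```python
# Sketch of a tile-keyed scenegraph: bucket assets by 1 km OS grid square and
# by category so a spatial query visits only the tiles it overlaps.

from collections import defaultdict

TILE = 1000   # metres; one scenegraph node per 1 km x 1 km square

class TileScenegraph:
    def __init__(self):
        # (tile_x, tile_y) -> category -> list of assets
        self.tiles = defaultdict(lambda: defaultdict(list))

    def insert(self, easting, northing, category, asset):
        key = (int(easting) // TILE, int(northing) // TILE)
        self.tiles[key][category].append(asset)

    def query(self, easting, northing, radius, category):
        """Return assets of one category in all tiles overlapping the radius."""
        lo_x, hi_x = int(easting - radius) // TILE, int(easting + radius) // TILE
        lo_y, hi_y = int(northing - radius) // TILE, int(northing + radius) // TILE
        hits = []
        for tx in range(lo_x, hi_x + 1):
            for ty in range(lo_y, hi_y + 1):
                hits.extend(self.tiles.get((tx, ty), {}).get(category, []))
        return hits

sg = TileScenegraph()
sg.insert(335120, 389660, "building", {"osm_id": 1, "height": 12.5})
print(sg.query(335000, 389500, radius=500, category="building"))
```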
110

Evidence-based accountability audits for cloud computing

Rübsamen, Thomas January 2016 (has links)
Cloud computing is known for its on-demand service provisioning and has now become mainstream. Many businesses as well as individuals are using cloud services on a daily basis. There is a big variety of services, ranging from the provision of computing resources to services such as productivity suites and social networks. The nature of these services varies heavily in terms of what kind of information is being outsourced to the cloud provider. Often, that data is sensitive, for instance when personally identifiable information (PII) is shared by an individual. Also, businesses that move (parts of) their processes to the cloud are actively participating in a major paradigm shift from having data on-premise to transferring data to a third-party provider. However, many new challenges come along with this trend, and they are closely tied to the loss of control over data. When moving to the cloud, direct control over the geographical storage location, over who has access to the data and over how it is shared and processed is given up. Because of this loss of control, cloud customers have to trust cloud providers to treat their data in an appropriate and responsible way. Cloud audits can be used to check how data has been processed in the cloud (i.e., by whom, for what purpose) and whether or not this happened in compliance with what has been defined in agreed-upon privacy and data storage, usage and maintenance (i.e., data handling) policies. This way, a cloud customer can regain some of the control he has given up by moving to the cloud. In this thesis, accountability audits are presented as a way to strengthen trust in cloud computing by providing assurance about the processing of data in the cloud according to data handling and privacy policies. In cloud accountability audits, various distributed evidence sources need to be considered. The research presented in this thesis discusses the use of various heterogeneous evidence sources on all cloud layers. This way, a complete picture of the actual data handling practices, based on hard facts, can be presented to the cloud consumer. Furthermore, this strengthens the transparency of data processing in the cloud, which can lead to improved trust in cloud providers, if they choose to adopt these mechanisms in order to assure their customers that their data is being handled according to their expectations. The system presented in this thesis enables continuous auditing of a cloud provider's adherence to data handling policies in an automated way that shortens audit intervals and that is based on evidence produced by cloud subsystems. An important aspect of many cloud offerings is the combination of multiple distinct cloud services that are offered by independent providers. Data is thereby frequently exchanged between the cloud providers. This also includes trans-border flows of data, where one provider may be required to adhere to stricter data protection requirements than the others. The system presented in this thesis addresses such scenarios by enabling the collection of evidence at the providers and evaluating it during audits. Securing evidence quickly becomes a challenge in the system design when information that is needed for the audit is deemed sensitive or confidential. This means that securing the evidence at rest as well as in transit is of utmost importance, in order not to introduce a new liability by building an insecure data heap.
This research presents the identification of security and privacy protection requirements alongside proposed solutions that enable the development of an architecture for secure, automated, policy-driven and evidence-based accountability audits.
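The audit-evaluation step can be pictured as checking collected evidence records against a data-handling policy and reporting violations, as in the hedged sketch below; the record fields and policy format are assumptions, and evidence collection, secure storage and the full policy language are omitted.

```python
# Sketch of the audit step only: evaluate evidence records against a
# data-handling policy. Fields and the policy format are hypothetical.

from datetime import datetime, timezone

policy = {"allowed_regions": {"eu-west-1", "eu-central-1"},
          "allowed_processors": {"provider-A", "analytics-partner-B"}}

evidence = [
    {"time": datetime(2016, 3, 1, tzinfo=timezone.utc), "event": "store",
     "region": "eu-west-1", "actor": "provider-A"},
    {"time": datetime(2016, 3, 4, tzinfo=timezone.utc), "event": "process",
     "region": "us-east-1", "actor": "unknown-subcontractor"},
]

def audit(records, policy):
    findings = []
    for r in records:
        if r["region"] not in policy["allowed_regions"]:
            findings.append((r["time"], "data left permitted regions", r["region"]))
        if r["actor"] not in policy["allowed_processors"]:
            findings.append((r["time"], "unauthorised processor", r["actor"]))
    return findings

for when, issue, detail in audit(evidence, policy):
    print(f"{when:%Y-%m-%d}: {issue} ({detail})")
```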
