About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Efficient and scalable replication of services over wide-area networks

Abouzamazem, Abdallah January 2013 (has links)
Service replication ensures reliability and availability, but accomplishing it requires solving the total-order problem of guaranteeing that all replicas receive service requests in the same order. The problem, however, cannot be solved under a specific combination of three factors, namely, when (i) message transmission delays cannot be reliably bounded, as is often the case over wide-area networks such as the Internet, (ii) replicas can fail, e.g., by crashing, the very events that have to be tolerated through replication, and (iii) the solution has to be deterministic, as distributed algorithms generally are. Therefore, total-order protocols are developed by avoiding one or more of these three factors through realistic assumptions based on system contexts. Nevertheless, they tend to be complex in structure and impose a time overhead with the potential to slow down the performance of the replicated services themselves. This thesis develops an efficient total-order protocol by leveraging the emergence of cluster computing. It assumes that a server replica is not a stand-alone computer but part of a cluster, from which it can enlist the cooperation of some of its peers to solve the total-order problem locally. The local solution is then globalised across replicas spread over a wide-area network. This two-stage solution is highly scalable and is experimentally demonstrated to have a smaller performance overhead than a single-stage solution applied directly over a wide-area network. The local solution is derived from an existing multi-coordinator protocol, Mencius, which is known for its strong performance. Through a careful analysis, the derivation modifies some aspects of Mencius for further performance improvements while retaining its best aspects.
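The two-stage idea described in the abstract can be illustrated with a minimal sketch (hypothetical names; not the thesis's actual protocol): requests are first totally ordered within each cluster by a local sequencer, and the per-cluster sequences are then merged deterministically so that every wide-area replica derives the same global order.

```python
from itertools import count

class LocalSequencer:
    """Stage 1: a cluster-local sequencer assigns a total order
    to requests originating from replicas in the same cluster."""
    def __init__(self, cluster_id):
        self.cluster_id = cluster_id
        self._seq = count()

    def order(self, request):
        # (local_seq, cluster_id) is globally unique, so it can act
        # as a deterministic tie-breaking key across clusters.
        return (next(self._seq), self.cluster_id, request)

def merge_global(ordered_batches):
    """Stage 2: globalise the per-cluster orders. Sorting by
    (local_seq, cluster_id) yields the same total order at every
    replica, regardless of the arrival order of the batches."""
    merged = [msg for batch in ordered_batches for msg in batch]
    return sorted(merged, key=lambda m: (m[0], m[1]))
```

The key property is determinism: `merge_global` produces identical output for any permutation of its input batches, which is what lets the local solution be "globalised" without per-request wide-area coordination.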

Optimized self-service resource containers for next generation cloud delivery

Musa, Ibrahim Kabiru January 2014 (has links)
Next generation cloud computing is envisioned to allow any logical combination of information technology (IT) and network resources to deliver cloud services in the most cost-effective and efficient manner. Advanced self-service, infrastructure convergence, and high flexibility are crucial in such an environment. To achieve the vision of such a next generation paradigm, suitable means of handling the complex interactions between various virtual resources are needed. This thesis proposes a novel service framework, virtual-cells-as-a-service (vCAAS), to address the need for advanced interaction of cloud services. vCAAS enables converged cloud services in which virtual machines, network and storage resources are delivered in a self-service virtual infrastructure container. The approach views cloud resources in a holistic manner, where components interact and complement each other to complete a task without manual intervention. The thesis begins with a statement of the problem addressed and the objectives of the research. The methodology adopted for the research is described subsequently and the outline of the thesis is presented. These are followed by a brief introduction highlighting current developments in cloud computing and the enabling technologies for the new paradigm. Next, the thesis presents a framework for the proposed vCAAS. Various components and enabling functionalities required to realise the framework are described. Multi-objective optimization strategies suitable for the problems in vCAAS are presented. A case for hybrid optical and electrical switching in intra-cloud datacenters to enable cloud services is also made. Novel algorithms for traffic management in the hybrid cloud data center are proposed and demonstrated in a simulation experiment. Finally, the thesis presents a practical application of the novel concept of vCAAS in solving a real-world scientific data analysis problem.

An approach to compliance conformance for cloud-based business applications leveraging service level agreements and continuous auditing

Sinclair, J. G. January 2014 (has links)
Organisations increasingly use flexible, adaptable and scalable IT infrastructures, such as cloud computing resources, for hosting business applications and storing customer data. To prevent the misuse of personal data, auditors can assess businesses for legal compliance conformance. For data privacy compliance there are many applicable pieces of legislation as well as regulations and standards. Businesses operate globally and typically have systems that are dynamic and mobile; in contrast current data privacy laws often have geographical jurisdictions and so conflicts can arise between the law and the technological framework of cloud computing. Traditional auditing approaches are unsuitable for cloud-based environments because of the complexity of potentially short-lived, migratory and scalable real-time virtual systems. My research goal is to address the problem of auditing cloud-based services for data privacy compliance by devising an appropriate machine-readable Service Level Agreement (SLA) framework for specifying applicable legal conditions. This allows the development of a scalable Continuous Compliance Auditing Service (CCAS) for monitoring data privacy in cloud-based environments. The CCAS architecture utilises agreed SLA conditions to process service events for compliance conformance. The CCAS architecture has been implemented and customised for a real world Electronic Health Record (EHR) scenario in order to demonstrate geo-location compliance monitoring using data privacy restrictions. Finally, the automated audit process of CCAS has been compared and evaluated against traditional auditing approaches and found to have the potential for providing audit capabilities in complex IT environments.
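The abstract's idea of checking service events against machine-readable SLA conditions can be sketched as follows. This is an illustrative reconstruction in the spirit of the CCAS approach, not the thesis's implementation; the field names (`data_class`, `region`, `allowed_regions`) are hypothetical.

```python
def check_event(event, sla):
    """Return a list of violations for one service event.
    `sla` holds machine-readable conditions, e.g. the regions in
    which a class of data may lawfully be stored or processed."""
    violations = []
    allowed = sla["allowed_regions"].get(event["data_class"], set())
    if event["region"] not in allowed:
        violations.append(
            f"{event['data_class']} data handled in {event['region']}; "
            f"permitted regions: {sorted(allowed)}"
        )
    return violations

# Illustrative EHR-style scenario: health records restricted to EU regions.
sla = {"allowed_regions": {"health-record": {"eu-west", "eu-central"}}}
ok_event = {"data_class": "health-record", "region": "eu-west"}
bad_event = {"data_class": "health-record", "region": "us-east"}
```

Because each event is checked as it occurs, this style of rule evaluation supports continuous auditing rather than the periodic snapshots of traditional audits.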

The use of high-level requirements ontologies for discovering resources in a multi-provider cloud environment

Sun, Y. L. January 2014 (has links)
This thesis proposes the use of high-level requirement ontologies for discovering resources in a multi-provider cloud environment. A high-level framework for deploying cloud-oriented applications, which harnesses existing cloud technologies, is developed. The framework provides an abstract multi-layered ontological model for specifying cloud application requirements. Domain-specific ontologies are used to specify high-level application requirements. These are translated into infrastructure ontologies which are agnostic to the underlying providers and low-level resources. Resource and cost ontologies are used for specifying the capabilities and cost of infrastructure resources. The proposed model provides an abstract, application-centric mechanism for specifying an application's requirements and searching for a set of suitable resources in a multi-provider cloud environment. A two-phase resource discovery approach for selecting cloud resources is developed. In the first phase, a set of possible resources which meet an application's mandatory requirements is identified. In the second phase, a suitable heuristic is used to filter the initial resource set by taking other requirements into consideration. This approach enables the selection of appropriate resources based on the needs of the application at the time it is deployed. Furthermore, a meta-programming model is developed to facilitate a unified approach to the management of cloud resources offered by different providers. The proposed framework allows cloud users to specify application requirements without being overly concerned about the complexity of the underlying provider frameworks and resources. The framework provides an effective mechanism for searching for a set of suitable resources that satisfy the application's requirements, specified at design time, while having the capability to adapt to requirement changes at runtime.
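The two-phase discovery approach can be sketched in a few lines (illustrative only; the resource attributes and the cost heuristic are assumptions, not the thesis's model): phase one filters on mandatory requirements, phase two ranks the survivors with a heuristic.

```python
def discover(resources, mandatory, score):
    """Two-phase selection: (1) keep only resources meeting every
    mandatory requirement; (2) rank the survivors with a heuristic."""
    candidates = [r for r in resources
                  if all(r.get(k, 0) >= v for k, v in mandatory.items())]
    return sorted(candidates, key=score)

# Hypothetical offerings from different providers.
resources = [
    {"name": "vm-a", "cpu": 4, "ram_gb": 8,  "cost": 0.20},
    {"name": "vm-b", "cpu": 2, "ram_gb": 16, "cost": 0.15},
    {"name": "vm-c", "cpu": 8, "ram_gb": 32, "cost": 0.60},
]
# Phase 1 enforces hard requirements; phase 2 prefers lower cost.
ranked = discover(resources, {"cpu": 4, "ram_gb": 8}, lambda r: r["cost"])
```

Separating the hard filter from the heuristic mirrors the framework's split between mandatory requirements and softer preferences, and makes the heuristic easy to swap at deployment time.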
Cloud resources can thus be utilised effectively to maximise the performance of an application and minimise its deployment cost in a multi-provider cloud environment.

GridRM: a resource monitoring framework

Smith, Garry Mark January 2004 (has links)
No description available.

Information infrastructure development in Sub-Saharan Africa

Yahaya, Lateef Folarin January 2000 (has links)
Since the post-war period, researchers have been pointing to a shift towards a new techno-economic paradigm. Whilst the macroeconomic impact of this powerful wave of technology has yet to be determined, it is sensed intuitively as being more important than generally suspected and as having major multiplier effects on national development. The convergence of information technology and modern communications has raised renewed hopes for enhancing national development in developing countries. At the same time, there are legitimate fears of increased marginalisation for those countries that fail to keep pace in the technological race. Grappling with the complexity involved in constructing an infrastructure that can improve their ability to achieve development objectives, and may lay the foundations for their future competitive advantage, few Sub-Saharan African countries have constructed a coordinated policy response to the complexities involved in creating an effective information infrastructure. Economically and politically fragile, and with only the promise of technological potentialities, the vast majority of African policy-makers are adopting a cautious approach. In the face of such a policy vacuum, external actors, such as multilateral development agencies, have taken it upon themselves to design, implement and fund initiatives with the idea of information infrastructure at their core. Such initiatives, whilst bringing much-needed infrastructure to the region, are often short-termist in outlook and do not necessarily dovetail with local development objectives. If less developed countries and regions are to implement telecommunication networks and information services that will serve their interests, they must prioritise objectives that rest firmly in their particular economic, political, cultural and social context. Within a broad, multi-dimensional research schema, the research examines the main actors in the field of information infrastructure development in Africa.
These are identified as development agencies, indigenous government and the foreign private sector. By articulating the respective roles of these actors and their spheres of influence, the research provides a coherent understanding of information infrastructure development activities within Sub-Saharan Africa. The research outlines a policy framework which argues, at both the conceptual and practical levels, that government plays the critical role in articulating national strategies for the coordination of disparate actors and scarce resources. The main contribution of the research is a practical policy framework that pinpoints priority areas for information infrastructure development within the Sub-Saharan Africa region.

Quantitative analysis of distributed systems

Zeng, Wen January 2014 (has links)
Computing Science addresses the security of real-life systems by using various security-oriented technologies (e.g., access control solutions and resource allocation strategies). These security technologies significantly increase the operational costs of the organizations in which systems are deployed, due to the highly dynamic, mobile and resource-constrained environments. As a result, the problem of designing user-friendly, secure and highly efficient information systems in such complex environments has become a major challenge for developers. In this thesis, firstly, new formal models are proposed to analyse secure information flow in cloud computing systems. Then, the opacity of workflows in cloud computing systems is investigated, a threat model is built for cloud computing systems, and the information leakage in such systems is analysed. This study can help cloud service providers and cloud subscribers to analyse the risks they take with the security of their assets and to make security-related decisions. Secondly, a procedure is established to quantitatively evaluate the costs and benefits of implementing information security technologies. In this study, a formal system model for data resources in a dynamic environment is proposed, which focuses on the location of different classes of data resources as well as the users. Using such a model, the concurrent and probabilistic behaviour of the system can be analysed. Furthermore, efficient solutions are provided for the implementation of information security systems based on queueing theory and stochastic Petri nets. This part of the research can help information security officers to make well-judged information security investment decisions.
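The kind of queueing-theoretic cost/benefit evaluation mentioned above can be illustrated with the simplest standard model, an M/M/1 queue. This is a generic textbook sketch, not the thesis's procedure; the rates are made up for illustration.

```python
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time W = 1/(mu - lambda) of an M/M/1 queue
    (rates in requests per second)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# A security control (e.g. an access-control check on every request)
# lowers the effective service rate; the model quantifies the latency
# cost that the control's security benefit must be weighed against.
baseline = mm1_response_time(arrival_rate=80, service_rate=100)
with_control = mm1_response_time(arrival_rate=80, service_rate=90)
overhead = with_control - baseline  # extra mean latency per request
```

A decision-maker can compare `overhead` (and the corresponding throughput headroom) against the estimated loss prevented by the control, which is the essence of quantitative security investment analysis.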

Systematic support for accountability in the cloud

Wongthai, Winai January 2014 (has links)
Cloud computing offers computational resources such as processing, networking, and storage to customers. Infrastructure as a Service (IaaS) consists of a cloud-based infrastructure offering consumers raw computational resources such as storage and networking. These resources are billed using a pay-per-use cost model. However, IaaS is far from being a secure cloud infrastructure, as the seven main security threats defined by the Cloud Security Alliance (CSA) indicate. Logging systems can provide evidence to support accountability in an IaaS cloud, and accountability in turn helps in mitigating known threats. However, previous accountability solutions based on logging systems were developed without systematic approaches. These solutions usually serve either the cloud customer side or the cloud provider side, not both. Moreover, the solutions lack descriptions of logging systems in the context of a design pattern for the systems' components. Such a design pattern facilitates analysis of logging systems in terms of their quality, and offers a number of further benefits: it promotes reusability in the design and development of logging systems; it makes design knowledge more accessible to designers; it steers designers towards approaches that make a logging system reusable rather than approaches that disregard reusability; and it enhances the documentation and maintenance of existing logging systems. Thus, the aim of this thesis is to provide support for accountability in the cloud through systematic approaches that assist in mitigating the risks associated with real-world CSA threats, to the benefit of both customers and providers. We research the extent to which such logging systems help us to mitigate risks associated with the threats identified by the CSA. The thesis also presents a way of identifying the reference components of logging systems and how they may be arranged to satisfy logging requirements.
'Generic logging components' for logging systems are proposed. These components encompass all possible instantiations of logging solutions for an IaaS cloud, and can be used to map existing logging systems for the purpose of analysing their security. Based on the generic components, the thesis identifies design patterns in the context of logging in an IaaS cloud. We believe that these patterns facilitate analysis of logging systems in terms of their quality. We also argue that they could increase the reusability of the design and development of logging systems, make design knowledge more accessible to designers, steer a designer towards approaches that make a logging system reusable rather than approaches that disregard reusability, and enhance the documentation and maintenance of existing logging systems. We identify a logging solution based on the generic logging components to mitigate the risks associated with CSA threat number one; an example of this threat is malicious activity, such as spamming, performed from consumers' virtual machines (VMs). We argue that the generic logging components we suggest could be used to perform a systematic analysis of logging systems in terms of security before deploying them in production systems. To assist in mitigating the risks associated with this threat to the benefit of both customers and providers, we investigate how CSA threat number one can affect the security of both consumers and providers. We then propose logging solutions based on the generic logging components and the identified patterns. We systematically design and implement a prototype of the proposed logging solutions in an IaaS to record the history of a customer's files. This prototype can also be modified to record log files of VMs' process behaviour.
The prototype can record these log files while having a smaller trusted computing base than previous work. Additionally, it can be seen as a possible solution to the difficult problem of logging file and process activities in an IaaS. Thus, the proposed logging solutions can assist in mitigating the risks associated with the CSA threats to the benefit of both consumers and providers, promoting systematic support for accountability in the cloud.
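The idea of generic, reusable logging components arranged to satisfy a logging requirement can be sketched abstractly (all class and event names here are hypothetical, not the thesis's reference components): a source observes activity, a filter keeps entries relevant to a requirement, and a store persists them for later audit.

```python
class LogSource:
    """Observes activity (e.g. in a VM or at the hypervisor) and emits entries."""
    def __init__(self, events):
        self.events = events

    def emit(self):
        yield from self.events

class LogFilter:
    """Keeps only entries relevant to a given logging requirement."""
    def __init__(self, predicate):
        self.predicate = predicate

    def apply(self, entries):
        return (e for e in entries if self.predicate(e))

class LogStore:
    """Persists entries, ideally outside the monitored system's trust domain."""
    def __init__(self):
        self.records = []

    def append(self, entries):
        self.records.extend(entries)

# Requirement: capture evidence of spam-like activity from consumer VMs.
source = LogSource([("vm1", "file_write"), ("vm1", "smtp_burst"), ("vm2", "file_read")])
suspicious = LogFilter(lambda e: e[1] == "smtp_burst")
store = LogStore()
store.append(suspicious.apply(source.emit()))
```

Because each component has a single role, an existing logging system can be mapped onto this shape component by component, which is what makes pattern-based quality and security analysis tractable.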

Exploring traffic and QoS management mechanisms to support mobile cloud computing using service localisation in heterogeneous environments

Sardis, Fragkiskos January 2015 (has links)
In recent years, mobile devices have evolved to support an amalgam of multimedia applications and content. However, the small size of these devices places a limit on the amount of local computing resources. The emergence of Cloud technology has set the ground for an era of task offloading for mobile devices, and we are now seeing the deployment of applications that make more extensive use of Cloud processing as a means of augmenting the capabilities of mobiles. Mobile Cloud Computing is the term used to describe the convergence of these technologies towards applications and mechanisms that offload tasks from mobile devices to the Cloud. In order for mobile devices to access Cloud resources and successfully offload tasks there, a solution for constant and reliable connectivity is required. The proliferation of wireless technology ensures that networks are available almost everywhere in an urban environment and mobile devices can stay connected to a network at all times. However, user mobility is often the cause of intermittent connectivity that affects the performance of applications and ultimately degrades the user experience. 5th Generation networks are introducing mechanisms that enable constant and reliable connectivity through seamless handovers between networks and provide the foundation for a tighter coupling between Cloud resources and mobiles. This convergence of technologies creates new challenges in the areas of traffic management and QoS provisioning. The constant connectivity of mobile devices and their reliance on Cloud resources have the potential to create large traffic flows between networks. Furthermore, depending on the type of application generating the traffic flow, very strict QoS may be required from the networks, as suboptimal performance may severely degrade an application's functionality.
In this thesis, I propose a new service delivery framework, centred on the convergence of Mobile Cloud Computing and 5G networks for the purpose of optimising service delivery in a mobile environment. The framework is used as a guideline for identifying different aspects of service delivery in a mobile environment and for providing a path for future research in this field. The focus of the thesis is placed on the service delivery mechanisms that are responsible for optimising the QoS and managing network traffic. I present a solution for managing traffic through dynamic service localisation according to user mobility and device connectivity. I implement a prototype of the solution in a virtualised environment as a proof of concept and demonstrate the functionality and results gathered from experimentation. Finally, I present a new approach to modelling network performance by taking into account user mobility. The model considers the overall performance of a persistent connection as the mobile node switches between different networks. Results from the model can be used to determine which networks will negatively affect application performance and what impact they will have for the duration of the user's movement. The proposed model is evaluated using an analytical approach.
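The dynamic service localisation idea can be sketched with a toy placement heuristic (the network names, sites, and latency figures are invented for illustration; the thesis's mechanism is more elaborate): as the user's network attachment changes, the service is re-placed at whichever site currently offers the lowest latency.

```python
def localise(service_sites, user_network, latency):
    """Pick the service site that minimises latency from the
    user's current network attachment (illustrative heuristic)."""
    return min(service_sites, key=lambda site: latency[(user_network, site)])

# Hypothetical latency matrix (ms) between access networks and edge sites.
latency = {("net-a", "edge-1"): 5,  ("net-a", "edge-2"): 40,
           ("net-b", "edge-1"): 35, ("net-b", "edge-2"): 8}

# As the user hands over from net-a to net-b, the service relocates.
before = localise(["edge-1", "edge-2"], "net-a", latency)
after = localise(["edge-1", "edge-2"], "net-b", latency)
```

Keeping the service close to the user's point of attachment turns a long inter-network flow into a short local one, which is the traffic-management benefit the thesis evaluates.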

Reducing uncertainty in environmental assessments of internet services

Schien, Daniel January 2015 (has links)
The continuing growth of internet services such as streaming video, browsing websites or generally exchanging data has drawn the attention of academic researchers, industry, and the general public towards their environmental impact. Past assessments of this impact come to differing results due to the complexity of information and communication technology systems, including networks, data centres and user devices. Assuming a life-cycle perspective, this thesis reduces some of this uncertainty and thus works towards more robust assessments and, ultimately, decision-making. The first part of this thesis consists of modelling the energy consumption of the routers and fibre-optical equipment that comprise the networks. As a result, new estimates of the energy intensity of networks are made, which can be used to derive the energy consumption of data transfer through the network. In the second part, the energy consumption of data centres and user devices is included, which combined give a comprehensive end-to-end assessment of the system. One chapter is dedicated to the detailed analysis of the varying environmental footprint between different user devices and types of media. A separate chapter then develops and showcases a more integrated assessment of a complete digital service over one year and demonstrates several new approaches to reducing uncertainty around user-device and access-network energy consumption. The methods and models presented in this thesis are applicable to a wide range of services and contribute to more robust estimates of energy consumption. The aim is to enable sustainability practitioners to carry out environmental assessments of digital services.
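The role of an energy-intensity figure in such an assessment can be shown with a minimal sketch (the numeric values below are placeholders for illustration, not results from the thesis): network energy scales with data volume via the intensity (kWh/GB), and the data-centre and user-device shares are added for an end-to-end total.

```python
def transfer_energy_kwh(data_gb, intensity_kwh_per_gb):
    """Energy attributed to moving data through the network,
    using an energy-intensity figure in kWh per GB."""
    return data_gb * intensity_kwh_per_gb

def service_energy_kwh(data_gb, intensity, datacentre_kwh, device_kwh):
    """End-to-end view: network transfer plus the data-centre
    and user-device shares of one service session."""
    return transfer_energy_kwh(data_gb, intensity) + datacentre_kwh + device_kwh

# Example: a 2 GB video stream with illustrative per-session shares.
total = service_energy_kwh(data_gb=2.0, intensity=0.05,
                           datacentre_kwh=0.01, device_kwh=0.02)
```

Because the network term is the product of two uncertain factors, tighter estimates of the intensity (the first part of the thesis) directly narrow the uncertainty of the end-to-end total.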
