31

Optimized self-service resource containers for next generation cloud delivery

Musa, Ibrahim Kabiru January 2014 (has links)
Next generation cloud computing is envisioned to allow any logical combination of information technology (IT) and network resources to deliver cloud services in the most cost-effective and efficient manner. Advanced self-service, infrastructure convergence, and high flexibility are crucial in such an environment. To achieve the vision of such a next generation paradigm, suitable means of managing the complex interactions between various virtual resources are needed. This thesis proposes a novel service framework, virtual-cells-as-a-service (vCAAS), to address the need for advanced interaction of cloud services. vCAAS enables converged cloud services in which virtual machine, network and storage resources are delivered in a self-service virtual infrastructure container. The approach views cloud resources holistically: components interact and complement each other to complete a task without manual intervention. The thesis begins with a statement of the problem addressed and the objectives of the research. The methodology adopted for the research is then described and the outline of the thesis presented. These are followed by a brief introduction highlighting current developments in cloud computing and the enabling technologies for the new paradigm. Next, the thesis presents a framework for the proposed vCAAS. The components and enabling functionalities required to realise the framework are described. Multi-objective optimization strategies suitable for the problems in vCAAS are presented. A case is also made for hybrid optical and electrical switching in intra-cloud data centers to enable cloud services. Novel algorithms for traffic management in the hybrid cloud data center are proposed and demonstrated in a simulation experiment. Finally, the thesis presents a practical application of the novel vCAAS concept to a real-world scientific data analysis problem.
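The abstract does not detail the multi-objective optimization strategies it mentions; as a loose illustration of the general idea, the sketch below scalarises two competing placement objectives (cost and latency) into one score when deciding where to host a virtual cell. All names and figures are invented for illustration, not taken from the thesis.

```python
# Toy multi-objective placement decision of the kind a vCAAS-style
# scheduler might make; a weighted sum is the simplest scalarisation.
from dataclasses import dataclass

@dataclass
class Placement:
    host: str
    cost: float        # monetary cost of hosting the virtual cell
    latency_ms: float  # expected intra-container communication latency

def best_placement(candidates, w_cost=0.5, w_latency=0.5):
    """Pick a placement by combining two normalised objectives."""
    max_cost = max(p.cost for p in candidates)
    max_lat = max(p.latency_ms for p in candidates)
    def score(p):
        # Normalise each objective to [0, 1] so the weights are comparable.
        return w_cost * p.cost / max_cost + w_latency * p.latency_ms / max_lat
    return min(candidates, key=score)

candidates = [
    Placement("host-a", cost=2.0, latency_ms=30.0),
    Placement("host-b", cost=3.5, latency_ms=10.0),
    Placement("host-c", cost=1.5, latency_ms=80.0),
]
print(best_placement(candidates))  # host-a balances cost and latency best here
```

Weighted sums are only the simplest strategy; Pareto-front or evolutionary methods are common alternatives when objectives conflict strongly.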
32

An approach to compliance conformance for cloud-based business applications leveraging service level agreements and continuous auditing

Sinclair, J. G. January 2014 (has links)
Organisations increasingly use flexible, adaptable and scalable IT infrastructures, such as cloud computing resources, for hosting business applications and storing customer data. To prevent the misuse of personal data, auditors can assess businesses for legal compliance conformance. For data privacy compliance there are many applicable pieces of legislation, as well as regulations and standards. Businesses operate globally and typically have systems that are dynamic and mobile; in contrast, current data privacy laws often have geographical jurisdictions, so conflicts can arise between the law and the technological framework of cloud computing. Traditional auditing approaches are unsuitable for cloud-based environments because of the complexity of potentially short-lived, migratory and scalable real-time virtual systems. My research goal is to address the problem of auditing cloud-based services for data privacy compliance by devising an appropriate machine-readable Service Level Agreement (SLA) framework for specifying applicable legal conditions. This allows the development of a scalable Continuous Compliance Auditing Service (CCAS) for monitoring data privacy in cloud-based environments. The CCAS architecture uses agreed SLA conditions to process service events for compliance conformance. The architecture has been implemented and customised for a real-world Electronic Health Record (EHR) scenario in order to demonstrate geo-location compliance monitoring using data privacy restrictions. Finally, the automated audit process of CCAS has been compared and evaluated against traditional auditing approaches and found to have the potential to provide audit capabilities in complex IT environments.
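As a hedged illustration of the kind of check a continuous compliance auditing service might run, the sketch below evaluates service events against a machine-readable geo-location clause, in the spirit of the EHR scenario described. The clause, event fields and jurisdictions are assumptions of mine, not the thesis's actual SLA schema.

```python
# Illustrative compliance check: flag service events whose data location
# falls outside the jurisdictions permitted by an (assumed) SLA clause.
ALLOWED_JURISDICTIONS = {"EU", "UK"}  # assumed clause: EHR data must stay here

def audit_event(event: dict) -> bool:
    """Return True if the event complies with the geo-location clause."""
    return event["data_location"] in ALLOWED_JURISDICTIONS

events = [
    {"id": 1, "action": "store_ehr", "data_location": "EU"},
    {"id": 2, "action": "replicate_ehr", "data_location": "US"},
]
violations = [e for e in events if not audit_event(e)]
for v in violations:
    print(f"SLA violation: event {v['id']} placed data in {v['data_location']}")
```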
33

The use of high-level requirements ontologies for discovering resources in a multi-provider cloud environment

Sun, Y. L. January 2014 (has links)
This thesis proposes the use of high-level requirement ontologies for discovering resources in a multi-provider cloud environment. A high-level framework for deploying cloud-oriented applications, which harnesses existing cloud technologies, is developed. The framework provides an abstract multi-layered ontological model for specifying cloud application requirements. Domain-specific ontologies are used to specify high-level application requirements. These are translated into infrastructure ontologies which are agnostic to the underlying providers and low-level resources. Resource and cost ontologies are used to specify the capabilities and cost of infrastructure resources. The proposed model provides an abstract, application-centric mechanism for specifying an application's requirements and searching for a set of suitable resources in a multi-provider cloud environment. A two-phase resource discovery approach for selecting cloud resources is developed. In the first phase, a set of possible resources which meet an application's mandatory requirements is identified. In the second phase, a suitable heuristic is used to filter the initial resource set by taking other requirements into consideration. This approach enables the selection of appropriate resources based on the needs of the application at the time it is deployed. Furthermore, a metaprogramming model is developed to facilitate a unified approach to the management of cloud resources offered by different providers. The proposed framework allows cloud users to specify application requirements without being overly concerned with the complexity of underlying provider frameworks and resources. The framework provides an effective mechanism for finding a set of suitable resources that satisfy the application's requirements, specified at design time, while having the capability to adapt to requirement changes at runtime. Cloud resources can thus be utilised effectively to maximise the performance of an application and minimise its deployment cost in a multi-provider cloud environment.
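A minimal sketch of the two-phase discovery idea described above: phase one keeps only resources satisfying the mandatory requirements, and phase two ranks the survivors with a simple heuristic (cheapest-first here). Field names, values and the chosen heuristic are illustrative assumptions, not the thesis's ontology.

```python
# Two-phase resource discovery over a toy multi-provider resource catalogue.
resources = [
    {"provider": "A", "cpu": 4, "ram_gb": 8,  "price": 0.20},
    {"provider": "B", "cpu": 2, "ram_gb": 16, "price": 0.15},
    {"provider": "C", "cpu": 8, "ram_gb": 32, "price": 0.45},
]

mandatory = {"cpu": 4, "ram_gb": 8}  # hard requirements from the application

# Phase 1: feasibility filter over the mandatory requirements.
feasible = [r for r in resources
            if r["cpu"] >= mandatory["cpu"] and r["ram_gb"] >= mandatory["ram_gb"]]

# Phase 2: heuristic ranking of the survivors on a soft criterion (cost).
ranked = sorted(feasible, key=lambda r: r["price"])
print(ranked[0])  # provider A: cheapest resource meeting the hard constraints
```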
34

An artistic perspective on distributed computer networks : creativity in human-machine systems

Gapsevicius, Mindaugas January 2016 (has links)
This thesis is written from an artistic perspective as a reflection on currently significant discussions in media theory, with a focus on the impact of technology on society. In mapping the boundaries of contemporary art, post-digital art is considered the most apt frame for describing current discourses in media theory in the context of this research. Bringing into the discussion artworks by Martin Howse & Jonathan Kemp (2001-2008), Maurizio Bolognini (1988-present), and myself (mi_ga 2006), among many others, this research defines post-digital art as a complexity of interactions between elements of different natures, such as the living and non-living, human and machine, art and science. Within the analysis of P2P networks, I highlight Milgram's (1967) idea of six degrees of separation, which, at least from a speculative point of view, is interesting for the implementation of human-machine concepts in future technological developments. From this perspective, I argue that computer networks could, in the future, have more potential for merging with society if developed similarly to the routing scheme implemented in the Freenet distributed information storage and retrieval system. The thesis then describes my own artwork, 0.30402944246776265, including two newly developed plugins for the Freenet storage system; the first plugin is constructed to fulfil the idea of interacting elements of different natures (in this case, the WWW and Freenet), while the other attempts to visualize data flow within the Freenet storage and retrieval system. Altogether, this thesis proposes that reconsidering distributed and self-organized information systems through an artistic and philosophical lens can open up space for rethinking the current integration of society and technology.
35

Cloud-computing strategies for sustainable ICT utilization : a decision-making framework for non-expert Smart Building managers

Mualla, Karmin Jamil January 2016 (has links)
Virtualization of processing power, storage, and networking applications via cloud-computing allows Smart Buildings to operate heavy-demand computing resources off-premises. While this approach reduces in-house costs and energy use, recent case studies have highlighted complexities in the decision-making processes associated with implementing cloud-computing. This complexity is due to the rapid evolution of these technologies without a standardized approach among the organizations offering cloud-computing provision as a commercial concern. This study defines the term Smart Building as an ICT environment where a degree of system integration is accomplished. Non-expert managers are highlighted as key users of the outcomes from this project, given the diverse nature of Smart Buildings’ operational objectives. This research evaluates different ICT management methods to effectively support decisions made by non-expert clients to deploy different models of cloud-computing services in their Smart Buildings’ ICT environments. The objective of this study is to reduce the need for costly third-party ICT consultancy providers, so that non-experts can focus on their Smart Buildings’ core competencies rather than the complex, expensive, and energy-consuming processes of ICT management. The gap identified by this research leaves non-expert managers poorly placed to make effective decisions regarding cloud-computing cost estimation, deployment assessment, associated power consumption, and management flexibility in their Smart Buildings’ ICT environments. The project analyses cloud-computing decision-making concepts with reference to different Smart Building ICT attributes. In particular, it follows a structured programme of data collection through semi-structured interviews, cost simulations and risk-analysis surveys. The main output is a theoretical management framework for non-expert decision-makers across variously operated Smart Buildings. Furthermore, a decision-support tool is designed to enable non-expert managers to identify the extent of virtualization potential by evaluating different implementation options, correlated with contract limitations, security challenges, system integration levels, sustainability, and long-term costs. These requirements are explored against cloud demand changes observed across specified periods. Dependencies were found to vary greatly with organizational aspects such as performance, size, and workload. The study argues that constructing long-term, sustainable, and cost-efficient strategies for any cloud deployment depends on thorough identification of the services required off- and on-premises. It points out that most of today’s heavily burdened Smart Buildings outsource these services to costly independent suppliers, which causes unnecessary management complexities, additional cost, and system incompatibility. The main conclusions argue that cloud-computing costs differ depending on Smart Building attributes and ICT requirements, and that although cloud services are usually more convenient and cost-effective in the early stages of deployment and migration, they can become costly later if not planned carefully using cost-estimation service patterns. The results of the study can be exploited to enhance core competencies within Smart Buildings in order to maximize growth and attract new business opportunities.
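To make the cost-estimation argument concrete, here is a hypothetical break-even comparison of the kind a decision-support tool for non-expert managers might present: cumulative pay-per-use cloud cost against on-premises ownership. All figures are invented for illustration.

```python
# Break-even sketch: cloud pay-per-use vs. on-premises capex + opex.
def cumulative_cloud_cost(months, monthly_fee):
    return months * monthly_fee

def cumulative_onprem_cost(months, capex, monthly_opex):
    return capex + months * monthly_opex

for months in (6, 12, 24, 36):
    cloud = cumulative_cloud_cost(months, monthly_fee=900.0)
    onprem = cumulative_onprem_cost(months, capex=15000.0, monthly_opex=300.0)
    cheaper = "cloud" if cloud < onprem else "on-premises"
    print(f"{months:>2} months: cloud {cloud:,.0f} vs on-prem {onprem:,.0f} -> {cheaper}")

# With these invented figures, cloud is cheaper early on but is overtaken
# around month 25, echoing the abstract's warning about unplanned
# long-term cloud spend.
```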
36

Provenance-driven diagnostic framework for task evictions mitigating strategy in cloud computing

Albatli, Abdulaziz Mohammed N. January 2017 (has links)
Cloud computing is an evolving paradigm. It delivers virtualized, scalable and elastic resources (e.g. CPU, memory) over a network (e.g. the Internet) from data centres to users (e.g. individuals, enterprises, governments). Applications, platforms, and infrastructures are Cloud services that users can access. Clouds enable users to run highly complex operations to satisfy computation needs through resource virtualization. Virtualization is a method of running a number of virtual machines (VMs) on a single physical server, although VMs are not a necessity in Clouds. Cloud providers tend to overcommit resources, aiming to leverage unused capacity and maximize profits. This over-commitment can overload the actual physical machine, which lowers performance or causes tasks to fail due to a lack of resources (i.e. CPU or RAM), and consequently leads to SLA violations. There are a number of strategies to mitigate overload, one of which is VM task eviction. The ambition of this research is to adapt a provenance model, PROV, to help understand the historical usage of a Cloud system and the components that contributed to an overload, so that the causes of task eviction can be identified for future prevention. A novel provenance-driven diagnostic framework is proposed. Through a study of Google’s 29-day Cloud dataset, the PROV model was extended to PROV-TE, which underpins a number of diagnostic algorithms for identifying tasks evicted due to specific causes. The framework was implemented and tested against the Google dataset. To further evaluate the framework, a simulation tool, SEED, was used to replicate task eviction behaviour under the specifications of Google Cloud and Amazon EC2. The framework, specifically the diagnostic algorithms, was then applied to audit the causes and identify the relevant evicted tasks. The results were analysed using precision and recall measures. The average precision and recall of the diagnostic algorithms are 83% and 90%, respectively.
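The reported evaluation uses standard precision and recall over the sets of diagnosed versus genuinely evicted tasks. The sketch below shows that computation on a toy run; the task IDs and set sizes are invented so the toy numbers land near the reported averages.

```python
# Precision/recall over a hypothetical diagnostic run: the algorithm flags
# a set of task IDs as "evicted due to cause X" and we compare against
# ground truth from the trace.
def precision_recall(flagged: set, actual: set):
    true_pos = len(flagged & actual)
    precision = true_pos / len(flagged) if flagged else 0.0
    recall = true_pos / len(actual) if actual else 0.0
    return precision, recall

flagged = set(range(1, 11)) | {20, 21}   # 12 diagnosed tasks, 10 correct
actual  = set(range(1, 11)) | {30}       # 11 genuinely evicted tasks

p, r = precision_recall(flagged, actual)
print(f"precision={p:.0%} recall={r:.0%}")  # 83% / 91%, near the reported 83%/90%
```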
37

Advanced design and traffic management methods for multi-service networks

Pasias, Vasilios January 2007 (has links)
This PhD thesis considers some of the more emerging problems in network modelling, namely the design of survivable hierarchical networks, Traffic Engineering (TE) and, more generally, traffic management in survivable multi-service networks with Quality of Service (QoS) prerequisites, and the planning of wireless access networks. In the context of the research work presented in this thesis, novel survivable hierarchical network design, wireless access network planning and traffic management techniques were developed. These techniques involve optimisation methods based on Linear Programming (LP) and Integer Linear Programming (ILP), as well as heuristic methods based on graph theory and computational intelligence (genetic optimisation and simulated annealing). A unified framework for off-line TE, on-line/dynamic routing and path restoration (facility restoration) that can be used in survivable multi-service QoS networks was also developed. Existing traffic management techniques were improved so as to support advanced QoS and survivability characteristics. First, the objectives of this project are presented, followed by a brief analysis of the problems encountered in the network design process. Next, the new methods for designing survivable hierarchical networks are described analytically, followed by the developed wireless access network design techniques. After that, the novel traffic management methods and the aforementioned framework developed in the context of this thesis are presented. Test results are provided together with most of the developed methods. These results indicate that the developed methods can efficiently solve small, medium and even large problems, that all developed methods are computationally tractable, and that the performance of the developed heuristic methods is very close to that of the corresponding LP and ILP optimisation methods. The new heuristic methods solve in a fraction of the time (less than 30%) required by the equivalent optimisation methods. Note that the specially developed design and simulation software tool NetLab was used to test and evaluate the new design and traffic management methods. Finally, a summary of the work carried out and the results achieved is presented, followed by conclusions and suggestions for further work.
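The abstract compares heuristics such as simulated annealing against LP/ILP optimisation. As a loose, self-contained illustration of the heuristic side only, the sketch below anneals a toy traffic-engineering assignment: routing demands over two candidate paths to minimise the maximum link load. The problem and parameters are invented and far simpler than the thesis's formulations.

```python
# Simulated annealing over a toy two-path load-balancing problem.
import math
import random

random.seed(1)
demands = [5, 3, 8, 2, 7, 4]           # traffic volumes to route

def max_load(assignment):
    # Load on each of the two shared bottleneck links.
    load = [0, 0]
    for demand, path in zip(demands, assignment):
        load[path] += demand
    return max(load)

state = [0] * len(demands)             # start with everything on path 0
best, best_cost = state[:], max_load(state)
temp = 10.0
while temp > 0.01:
    neighbour = state[:]
    i = random.randrange(len(demands))
    neighbour[i] ^= 1                  # move one demand to the other path
    delta = max_load(neighbour) - max_load(state)
    # Accept improvements always, worsenings with temperature-scaled odds.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        state = neighbour
        cost = max_load(state)
        if cost < best_cost:
            best, best_cost = state[:], cost
    temp *= 0.95                       # geometric cooling schedule
print(best, best_cost)                 # the optimum max load here is 15
```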
38

Integrated framework for mobile low power IoT devices

Al-Nidawi, Yaarob Mahjoob Nafel January 2016 (has links)
Ubiquitous object networking has sparked the concept of the Internet of Things (IoT), which defines a new era in the world of networking. The IoT can be regarded as one of the important strategic technologies that will positively influence human life. All the gadgets, appliances and sensors around the world will be connected together to form a smart environment, where all the entities connected to the Internet can seamlessly share data and resources. The IoT vision allows embedded devices, e.g. sensor nodes, to be IP-enabled nodes that interconnect with the Internet. The aim of such a technique is to make these embedded nodes act as IP-based devices that communicate directly with other IP networks without unnecessary overhead, feasibly utilizing the existing infrastructure built for the Internet. In addition, these nodes can be controlled and monitored by exploiting the tools already developed for the Internet. Exchanging sensory measurements through the Internet with end points around the world facilitates achieving the concept of a smart environment. Realization of the IoT concept depends on standardization efforts that will shape the infrastructure of these networks; this has been pursued through the IEEE 802.15.4, 6LoWPAN and IPv6 standards. This new technology also faces several challenges, since the IoT introduces a new class of security issues: each node within the network is a point of vulnerability that an attacker can exploit to inject malicious code, either by accessing the node through the Internet or by compromising it. On the other hand, several IoT applications comprise mobile nodes, which in turn brings new challenges to the research community due to the effect of node mobility on network management and performance. Another factor that degrades network performance is the initialization stage, which follows node deployment and organizes the nodes into the network. The recent IEEE 802.15.4 standard has several structural drawbacks that need to be optimized in order to efficiently fulfil the requirements of low power mobile IoT devices. This thesis addresses these three issues: network initialization, node mobility and security management. The related literature is examined to define the set of current issues and, based upon this, the set of objectives. The first contribution is a new strategy to initialize nodes into the network based on the IEEE 802.15.4 standard. A novel mesh-under cluster-based approach is proposed and implemented that efficiently initializes the nodes into clusters and achieves three objectives: low initialization cost, shortest paths to the sink node, and low operational (data forwarding) cost. The second contribution investigates the mobility issue within the IoT media access control (MAC) infrastructure and determines the related problems and requirements. Based on this, a novel mobility scheme is presented that facilitates node movement inside the network under the IEEE 802.15.4e time slotted channel hopping (TSCH) mode. The proposed model mitigates the frequency channel hopping and slotframe issues in the TSCH mode. The next contribution determines the impact of mobility on the low latency deterministic network (LLDN) mode. Significant effects of mobility include increased latency and a degraded packet delivery ratio (PDR). Accordingly, a novel mobility protocol is presented to tackle the mobility issue in LLDN mode, improve network performance and lessen the impact of node movement. The final contribution is a new key bootstrapping scheme that fits both the IEEE 802.15.4 and 6LoWPAN neighbour discovery architectures. The proposed scheme permits a group of nodes to establish the required link keys without excessive communication/computational overhead. Additionally, the scheme supports the mobile node association process by ensuring secure access control to the network and validating mobile node authenticity in order to eliminate any malicious node association. The proposed key management scheme facilitates the replacement of outdated master network keys and releases the required master key in a secure manner. Finally, a modified IEEE 802.15.4 link-layer security structure is presented. The modified architecture minimizes both the energy consumption and the latency incurred in providing authentication/confidentiality services via IEEE 802.15.4.
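As a generic illustration of link-key bootstrapping from a shared master key (not the thesis's actual scheme), the sketch below derives symmetric pairwise keys by keyed hashing over sorted node identifiers, truncated to the 128-bit key length used by IEEE 802.15.4 AES security.

```python
# Generic HMAC-based pairwise link-key derivation for illustration only.
import hashlib
import hmac

MASTER_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # pre-loaded secret

def link_key(node_a: bytes, node_b: bytes) -> bytes:
    """Derive a pairwise link key; sorting the IDs makes it symmetric."""
    first, second = sorted((node_a, node_b))
    return hmac.new(MASTER_KEY, b"link-key|" + first + b"|" + second,
                    hashlib.sha256).digest()[:16]  # 128-bit key for 802.15.4 AES

k_ab = link_key(b"node-01", b"node-02")
k_ba = link_key(b"node-02", b"node-01")
assert k_ab == k_ba                    # both ends derive the same key
print(k_ab.hex())
```

A real scheme would additionally handle master-key rotation and authenticated association of mobile nodes, as the abstract describes.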
39

Energy-aware profiling and prediction modelling of virtual machines in cloud computing environments

Alzamil, Ibrahim Ali M. January 2017 (has links)
Cloud Computing has changed the way in which individuals and businesses use IT resources. Instead of buying their own IT resources, they can use the Cloud services offered by Cloud providers at reasonable costs based on a “pay-per-use” model. With the wide adoption of Cloud Computing, the cost of maintaining the Cloud infrastructure has become a vital issue for providers, especially given the large amount of energy consumed to operate these resources. Hence, excessive energy consumption in Cloud infrastructures has become one of the major cost factors for Cloud providers. In order to reduce energy consumption and enhance the energy efficiency of Cloud resources, proactive and reactive management tools are used with consideration of the physical resources’ energy consumption. However, these tools need to be supported with energy-awareness not only at the physical machine (PM) level but also at the virtual machine (VM) level in order to make enhanced energy-aware decisions. As VMs have no physical interface, their energy consumption cannot be measured directly and is difficult to identify. This thesis introduces an energy-aware Cloud system architecture that aims to enable energy-awareness at the deployment and operational levels of a Cloud environment. At the operational level, an energy-aware profiling model is introduced to identify the energy consumption of heterogeneous and homogeneous VMs running on the same PM, based on the size and CPU utilisation of each VM. At the deployment level, an energy-aware prediction framework is introduced to forecast future VM energy consumption. This framework first predicts the VM workload based on historical workload patterns, particularly static and periodic ones, using an Autoregressive Integrated Moving Average (ARIMA) model. The predicted VM workload is then correlated with the physical resources within this framework in order to obtain the predicted VM energy consumption. Evaluation of the proposed work on a real Cloud testbed reveals that the energy-aware profiling model is capable of fairly attributing physical energy consumption to homogeneous and heterogeneous VMs, thereby enabling energy-awareness at the VM level. Compared with actual results obtained on this testbed, the predicted results show that the energy-aware prediction framework is capable of forecasting VM energy consumption with good accuracy for static and periodic Cloud application workload patterns. The proposed work provides energy-awareness that can be used and incorporated by other reactive and proactive management tools to make enhanced energy-aware decisions and manage Cloud resources efficiently. This can lead to a reduction in energy consumption, lowering operational expenditure (OPEX) for Cloud providers and reducing the impact on the environment.
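The profiling model attributes measured physical-machine energy to co-located VMs using their size and CPU utilisation. A minimal sketch of that attribution idea follows; the exact weighting in the thesis may differ, and the field names and figures are assumptions.

```python
# Attribute a PM's measured energy to its VMs in proportion to each VM's
# share of weighted CPU activity (vCPU count x utilisation).
pm_energy_wh = 120.0                     # measured PM energy over the interval

vms = [
    {"name": "vm1", "vcpus": 2, "cpu_util": 0.60},
    {"name": "vm2", "vcpus": 4, "cpu_util": 0.30},
    {"name": "vm3", "vcpus": 2, "cpu_util": 0.10},
]

weights = [v["vcpus"] * v["cpu_util"] for v in vms]
total = sum(weights)
for vm, w in zip(vms, weights):
    print(f'{vm["name"]}: {pm_energy_wh * w / total:.1f} Wh')
```

In the framework described, the prediction side would feed ARIMA workload forecasts into an attribution step like this to obtain per-VM energy estimates ahead of deployment.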
40

Provenance-based data traceability model and policy enforcement framework for cloud services

Ali, Mufajjul January 2016 (has links)
In the context of software, provenance holds the key to retaining a reproducible record of a service's lifetime, one that can be replayed from the beginning. Such a record captures the invocations that took place, how and where the data were created, modified and updated, and the user's engagement with the service. With the emergence of the cloud and the benefits it brings, there has been a rapid proliferation of services developed and adopted by commercial businesses. However, these services expose very little of their internal workings to customers and offer insufficient means of verifying correct operation. This can cause transparency and compliance issues: in the event of a fault or violation, customers and providers are left pointing fingers at each other. Provenance-based traceability addresses part of this problem by capturing and querying events that occurred in the past in order to understand how and why they took place. On top of that, provenance-based policies are required to facilitate the validation and enforcement of business-level requirements to end-users' satisfaction. This dissertation makes four contributions to the state of the art: i) defining and implementing an enhanced provenance-based cloud traceability model (cProv) that extends the standardized PROV model to support characteristics of cloud services, enabling the model to conceptualize the traceability of a running cloud service; ii) creating a provenance-based policy language (cProvl) to facilitate the declaration and enforcement of business-level requirements; iii) developing a traceability framework that provides client- and server-side stacks for integrating service-level traceability and policy-based enforcement of business rules; and iv) implementing and evaluating the framework, which leverages standardized industry solutions. The framework is then applied to the commercial service ‘ConfidenShare’ as a proof of concept.
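Since cProv extends the W3C PROV model, a small example of recording a cloud-service event as PROV may help. This sketch assumes the open-source Python `prov` package (pip install prov); the service-specific names are invented placeholders, not the thesis's actual cProv vocabulary.

```python
# Record a cloud-service event (a user updating a customer record) as
# W3C PROV entities, activities and agents for later traceability queries.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace("svc", "http://example.org/cloud-service/")

# A customer record, the service operation that modified it, and the user.
doc.entity("svc:customer-record-42")
doc.activity("svc:update-operation-7")
doc.agent("svc:user-alice")

doc.wasGeneratedBy("svc:customer-record-42", "svc:update-operation-7")
doc.wasAssociatedWith("svc:update-operation-7", "svc:user-alice")

print(doc.get_provn())  # serialise in PROV-N notation
```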
