31

Cloud-computing strategies for sustainable ICT utilization : a decision-making framework for non-expert Smart Building managers

Mualla, Karmin Jamil January 2016 (has links)
Virtualization of processing power, storage, and networking applications via cloud-computing allows Smart Buildings to operate heavy-demand computing resources off-premises. While this approach reduces in-house costs and energy use, recent case studies have highlighted complexities in the decision-making processes associated with implementing the concept of cloud-computing. This complexity is due to the rapid evolution of these technologies without standardization of approach by the organizations offering cloud-computing provision as a commercial concern. This study defines the term Smart Building as an ICT environment where a degree of system integration is accomplished. Non-expert managers are highlighted as key users of the outcomes from this project, given the diverse nature of Smart Buildings’ operational objectives. This research evaluates different ICT management methods to effectively support decisions made by non-expert clients to deploy different models of cloud-computing services in their Smart Buildings’ ICT environments. The objective of this study is to reduce the need for costly third-party ICT consultancy providers, so that non-experts can focus on their Smart Buildings’ core competencies rather than the complex, expensive, and energy-consuming processes of ICT management. The gap identified by this research leaves non-expert managers vulnerable when making decisions regarding cloud-computing cost estimation, deployment assessment, associated power consumption, and management flexibility in their Smart Buildings’ ICT environments. The project analyses cloud-computing decision-making concepts with reference to different Smart Building ICT attributes. In particular, it follows a structured programme of data collection comprising semi-structured interviews, cost simulations, and risk-analysis surveys. The main output is a theoretical management framework for non-expert decision-makers across variously-operated Smart Buildings. Furthermore, a decision-support tool is designed to enable non-expert managers to identify the extent of virtualization potential by evaluating different implementation options. This is presented in correlation with contract limitations, security challenges, system integration levels, sustainability, and long-term costs. These requirements are explored in contrast to cloud demand changes observed across specified periods. Dependencies were identified to vary greatly depending on numerous organizational aspects such as performance, size, and workload. The study argues that constructing long-term, sustainable, and cost-efficient strategies for any cloud deployment depends on the thorough identification of the services required off- and on-premises. It points out that most of today’s heavily burdened Smart Buildings outsource these services to costly independent suppliers, which causes unnecessary management complexity, additional cost, and system incompatibility. The main conclusions argue that cloud-computing costs differ depending on the Smart Building’s attributes and ICT requirements, and although in most cases cloud services are more convenient and cost-effective at the early stages of the deployment and migration process, they can become costly in the future if not planned carefully using cost-estimation service patterns. The results of the study can be exploited to enhance core competencies within Smart Buildings in order to maximize growth and attract new business opportunities.
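
To make the abstract's closing point about cost-estimation patterns concrete, the following is a minimal sketch, in Python, of a cumulative cost comparison between on-premises and cloud provision; every figure and parameter name is an illustrative assumption rather than a value from the thesis:

```python
# Hypothetical sketch: cumulative ICT cost for a Smart Building,
# on-premises vs. cloud. All figures are illustrative assumptions.

def cumulative_costs(months, onprem_capex=50_000.0, onprem_opex=1_200.0,
                     cloud_monthly=2_000.0, cloud_growth=0.02):
    """Return (on-premises, cloud) cumulative cost lists per month.

    cloud_growth models rising demand: cloud spend grows by this
    fraction each month, capturing the thesis's point that cloud can
    be cheaper early in a deployment but costlier later if unplanned.
    """
    onprem, cloud = [], []
    cloud_month = cloud_monthly
    for m in range(1, months + 1):
        onprem.append(onprem_capex + onprem_opex * m)
        cloud_month *= (1 + cloud_growth)
        cloud.append((cloud[-1] if cloud else 0.0) + cloud_month)
    return onprem, cloud

onprem, cloud = cumulative_costs(60)
crossover = next((m for m, (o, c) in enumerate(zip(onprem, cloud), 1)
                  if c > o), None)
print(f"Cloud overtakes on-premises cost at month: {crossover}")
```

Under these assumed figures the cloud option is far cheaper in year one but overtakes the on-premises total around month 33, which is exactly the kind of long-horizon pattern a non-expert decision-maker would need surfaced.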
32

Provenance-based data traceability model and policy enforcement framework for cloud services

Ali, Mufajjul January 2016 (has links)
In the context of software, provenance holds the key to retaining a reproducible record of a service's execution, one that can be replayed from the beginning. This covers the nature of the invocations that took place, how and where the data were created, modified and updated, and the user's engagement with the service. With the emergence of the cloud and the benefits it encompasses, there has been a rapid proliferation of services being developed and adopted by commercial businesses. However, these services expose very little of their internal workings to their customers, and offer insufficient means of verifying correct operation. This can cause transparency and compliance issues: in the event of a fault or violation, customers and providers are left pointing fingers at each other. Provenance-based traceability addresses part of this problem by capturing and querying events that occurred in the past to understand how and why they took place. On top of that, provenance-based policies are required to facilitate the validation and enforcement of business-level requirements for end-user satisfaction. This dissertation makes four contributions to the state of the art: i) it defines and implements an enhanced provenance-based cloud traceability model (cProv) that extends the standardized PROV model to support characteristics related to cloud services, enabling the traceability of a running cloud service to be conceptualized; ii) it creates a provenance-based policy language (cProvl) to facilitate the declaration and enforcement of business-level requirements; iii) it develops a traceability framework that provides client- and server-side stacks for integrating service-level traceability and policy-based enforcement of business rules; iv) finally, it implements and evaluates the framework, leveraging standardized industry solutions. The framework is then applied to the commercial service `ConfidenShare' as a proof of concept.
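
As an illustration of the provenance modelling the dissertation builds on, here is a minimal sketch that records a cloud-service invocation in the standardized W3C PROV model (which cProv extends), using the `prov` Python package; the service and entity names are hypothetical, and none of the cProv-specific cloud extensions are shown:

```python
# Minimal sketch of recording a cloud-service invocation in the W3C
# PROV model, which cProv extends. Uses the `prov` package
# (pip install prov). The names below are illustrative assumptions,
# not the actual cProv vocabulary.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/cloud/')

user = doc.agent('ex:alice')
service = doc.activity('ex:documentShareInvocation')
input_doc = doc.entity('ex:report-v1')
output_doc = doc.entity('ex:report-v2')

doc.wasAssociatedWith(service, user)       # who invoked the service
doc.used(service, input_doc)               # what data it read
doc.wasGeneratedBy(output_doc, service)    # what data it produced
doc.wasDerivedFrom(output_doc, input_doc)  # how the data was modified

print(doc.get_provn())  # human-readable PROV-N trace, queryable later
```

A trace built this way is what makes the "replay" idea workable: each invocation, input and output is an addressable node that a policy language such as cProvl could later query and enforce against.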
33

The Web of community trust : amateur fiction online : a case study in community focused design for the Semantic Web

Lawrence, K. Faith January 2007 (has links)
This thesis describes a case study community: online amateur authors. Taking this case study community as a base, it considers how the concept of community is applied within the Semantic Web domain. Considering the community structures that can be demonstrated through the case study, the thesis makes the case for the recognition of a specific type of social network structure, one that fulfils the traditional definitions of ‘community’. We argue that this sub-type occupies an important position within social networks and our understanding of them, owing to the structures required for such networks to be so defined, and that assumptions and inferences can be made about nodes within this type of community group but not others. Having detailed our case study community and the type of network it represents, the thesis goes on to consider how the community could be supported beyond the mailing lists and journalling sites upon which it currently relies. Through our investigation of the community’s issues and requirements, we focus on identity and explore this concept within the context of community membership. Further, we analyse the community practice of metadata annotation, in comparison to other metadata systems such as tagging, and as it relates to the development of the community. We propose a number of ontological models which we argue could assist the community and, finally, consider ways in which these models could be made available to the community in keeping with current practice and the level of technical knowledge evidenced by the community.
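
As a rough illustration of what such ontological models can look like in practice (this sketch is not the thesis's ontology; the `fic:` terms are invented stand-ins), community membership and story annotation can be expressed as RDF triples with rdflib and the standard FOAF vocabulary:

```python
# Sketch: an amateur author's community membership and story
# annotations as RDF triples. Uses the standard FOAF vocabulary;
# the fic: terms are hypothetical stand-ins, not the thesis ontology.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import FOAF, RDF

FIC = Namespace('http://example.org/fiction#')
g = Graph()
g.bind('foaf', FOAF)
g.bind('fic', FIC)

author = URIRef('http://example.org/people/writer42')
community = URIRef('http://example.org/groups/amateur-authors')
story = URIRef('http://example.org/works/story-1')

g.add((author, RDF.type, FOAF.Person))
g.add((community, RDF.type, FOAF.Group))
g.add((community, FOAF.member, author))           # community membership
g.add((story, FIC.author, author))                # authorship link
g.add((story, FIC.annotation, Literal('angst')))  # community metadata tag

print(g.serialize(format='turtle'))
```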
34

Delegated private set intersection on outsourced private datasets

Abadi, Aydin Kheirbakhsh January 2017 (has links)
The significance of cloud computing is increasing and the cloud is receiving growing attention from individuals and companies. The cloud enables ubiquitous and on-demand access to a pool of configurable computing resources that can be scaled up easily. However, the cloud is vulnerable to data security breaches such as exposure of confidential data, data tampering, and denial of service. Thus, it cannot be fully trusted, and it is crucial for the clients who use the cloud to protect the security of their own data. In this thesis, we design cryptographic protocols that allow clients to outsource their private data to the cloud and securely delegate certain computation to it. We focus on the computation of set intersection, which has a broad range of applications such as privacy-preserving data mining and homeland security. Traditionally, the goal of Private Set Intersection (PSI) protocols has been to enable two parties to jointly compute the intersection without revealing their own set to the other party. Many such protocols have been designed, but where data and computation are outsourced to the cloud, the setting and trust assumptions change considerably. The traditional PSI protocols cannot be used directly to solve these security problems without sacrificing the advantages the cloud offers. The contribution of this thesis is a set of delegated PSI protocols that meet a variety of security and functional requirements in the cloud environment. For most clients, the most critical security concern when outsourcing data and computation to the cloud is data privacy. We start from here and design O-PSI, a novel protocol in which clients encrypt their data before outsourcing it to the cloud. The cloud uses the encrypted data to compute the intersection when requested. The outsourced data remain private against the cloud at all times, since the data stored in the cloud are encrypted and the computation process leaks no information. O-PSI ensures that the computation can be performed only with the clients’ consent. The protocol also takes into account several functional requirements in order to take full advantage of the cloud. For example, clients can independently prepare and upload their data to the cloud, and they can delegate the computation to the cloud an unlimited number of times without needing to locally re-prepare the data. We then extend O-PSI in several ways to provide additional properties:
- EO-PSI is a more efficient version of O-PSI that does not require public-key operations.
- UEO-PSI extends EO-PSI with efficient update operations, making it possible to handle dynamic data efficiently.
- VD-PSI extends O-PSI with verifiability, i.e. the clients can efficiently verify the integrity of the computation result.
For each protocol, we provide a formal simulation-based security analysis. We also compare the protocols against the state of the art. In addition, we have implemented the O-PSI and EO-PSI protocols and provide an evaluation of their performance based on our implementation.
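
The following sketch illustrates the polynomial set representation that underlies O-PSI-style protocols: a set is encoded as the polynomial whose roots are its elements, so an element belongs to the set exactly when the polynomial vanishes there. The real protocols blind or encrypt these polynomials so the cloud computes obliviously; that machinery is omitted here, so this plaintext sketch is illustrative only and offers no privacy:

```python
# Plaintext sketch of the polynomial set representation: a set is
# encoded as prod over its elements e of (x - e). NOT private; the
# actual protocols add blinding/encryption on top of this encoding.

def set_to_poly(elements):
    """Coefficients (lowest degree first) of prod_{e}(x - e)."""
    coeffs = [1]
    for e in elements:
        shifted = [0] + coeffs                   # poly * x
        scaled = [-e * c for c in coeffs] + [0]  # poly * (-e)
        coeffs = [a + b for a, b in zip(shifted, scaled)]
    return coeffs

def evaluate(coeffs, x):
    """Horner evaluation of the polynomial at x."""
    result = 0
    for c in reversed(coeffs):
        result = result * x + c
    return result

client_a = {3, 7, 11, 19}
client_b = {5, 7, 19, 23}
poly_b = set_to_poly(client_b)

# An element of A is in the intersection iff B's polynomial vanishes
# there; the cloud would perform this test on blinded values.
intersection = {e for e in client_a if evaluate(poly_b, e) == 0}
print(intersection)  # {7, 19}
```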
35

Advanced design and traffic management methods for multi-service networks

Pasias, Vasilios January 2007 (has links)
This PhD thesis considers some of the emerging problems in network modelling, namely the design of survivable hierarchical networks, Traffic Engineering (TE) and, more generally, traffic management in survivable multi-service networks with Quality of Service (QoS) prerequisites, and the planning of wireless access networks. In the context of the research work presented in this thesis:
- Novel survivable hierarchical network design, wireless access network planning and traffic management techniques were developed. These techniques involve optimisation methods based on Linear Programming (LP) and Integer Linear Programming (ILP), as well as heuristic methods based on graph theory and computational intelligence (genetic optimisation and simulated annealing).
- A unified framework for off-line TE, on-line/dynamic routing and path restoration (facility restoration) that can be used in survivable multi-service QoS networks was also developed.
- Existing traffic management techniques were improved to support advanced QoS and survivability characteristics.
First, the objectives of this project are presented, followed by a brief analysis of the problems encountered in the network design process. Next, the new methods for designing survivable hierarchical networks are described analytically, followed by the developed wireless access network design techniques. After that, the novel traffic management methods and the aforementioned framework, developed in the context of this thesis, are presented. Test results are provided together with most of the developed methods. The test results indicate that the developed methods can efficiently solve small, medium or even large problems, that all developed methods are computationally tractable, and that the performance of the developed heuristic methods is very close to that of the corresponding LP and ILP optimisation methods. The new heuristic methods are solved in a fraction of the time (less than 30%) required by the equivalent optimisation methods. Note that the specially developed design and simulation software tool NetLab was used to test and evaluate the new design and traffic management methods. Finally, a summary of the work carried out and the results achieved is presented, followed by the conclusions and suggestions for further work.
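
As a toy illustration of the LP-based traffic-engineering formulations the thesis solves at much larger scale, the following sketch splits a single demand across two candidate paths using scipy; the costs, capacities and demand are assumptions invented for the example:

```python
# Toy traffic-engineering LP: split a 10-unit demand across two
# candidate paths to minimise cost, respecting per-path capacity.
# Costs, capacities and demand are illustrative assumptions.
from scipy.optimize import linprog

path_costs = [2.0, 3.0]            # cost per unit of flow on each path
capacities = [(0, 6.0), (0, 8.0)]  # flow bounds per path
demand = 10.0

res = linprog(c=path_costs,
              A_eq=[[1.0, 1.0]], b_eq=[demand],  # flows sum to demand
              bounds=capacities, method='highs')

print(res.x)    # optimal split [6. 4.]: fill the cheaper path first
print(res.fun)  # total cost: 6*2 + 4*3 = 24
```

Real survivable-network formulations add variables and constraints per link, per restoration path and per service class, which is why the thesis pairs exact LP/ILP solvers with faster heuristics.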
36

Integrated framework for mobile low power IoT devices

Al-Nidawi, Yaarob Mahjoob Nafel January 2016 (has links)
Ubiquitous object networking has sparked the concept of the Internet of Things (IoT), which defines a new era in the world of networking. The IoT can be regarded as one of the important strategic technologies that will positively influence human life. Gadgets, appliances and sensors around the world will be connected together to form a smart environment, where all the entities connected to the Internet can seamlessly share data and resources. The IoT vision allows embedded devices, e.g. sensor nodes, to be IP-enabled nodes that interconnect with the Internet. The aim is to make these embedded nodes act as IP-based devices that communicate directly with other IP networks without unnecessary overhead, to feasibly utilize the existing infrastructure built for the Internet, and to make controlling and monitoring these nodes maintainable by exploiting tools that have already been developed for the Internet. Exchanging sensory measurements through the Internet with end points across the world facilitates the concept of a smart environment. Realization of the IoT concept needs to be supported by standardization efforts that will shape the infrastructure of the networks; this has been achieved through the IEEE 802.15.4, 6LoWPAN and IPv6 standards. The bright side of this new technology is offset by several implications, since the IoT introduces a new class of security issues: each node within the network is a point of vulnerability that an attacker can exploit to inject malicious code, either by accessing the node through the Internet or by compromising it. On the other hand, several IoT applications comprise mobile nodes, which in turn brings new challenges to the research community owing to the effect of node mobility on network management and performance. Another factor that degrades network performance is the initialization stage that follows node deployment, by which the nodes are organized into the network. The recent IEEE 802.15.4 standard has several structural drawbacks that need to be optimized in order to efficiently fulfil the requirements of low-power mobile IoT devices. This thesis addresses these three issues: network initialization, node mobility and security management. In addition, the related literature is examined to identify the current issues and to define a set of objectives based upon them. The first contribution is a new strategy for initializing the nodes into the network based on the IEEE 802.15.4 standard: a novel mesh-under cluster-based approach is proposed and implemented that efficiently initializes the nodes into clusters and achieves three objectives: low initialization cost, shortest path to the sink node, and low operational (data forwarding) cost. The second contribution is an investigation of the mobility issue within the IoT media access control (MAC) infrastructure, determining the related problems and requirements. Based on this, a novel mobility scheme is presented that facilitates node movement inside the network under the IEEE 802.15.4e time slotted channel hopping (TSCH) mode. The proposed model mitigates the frequency-channel-hopping and slotframe issues in the TSCH mode. The next contribution in this thesis is determining the impact of mobility on low latency deterministic network (LLDN) mode; significant effects of mobility here are increased latency and a degraded packet delivery ratio (PDR).
Accordingly, a novel mobility protocol is presented to tackle the mobility issue in LLDN mode, improve network performance and lessen the impact of node movement. The final contribution in this thesis is a new key bootstrapping scheme that fits both the IEEE 802.15.4 and 6LoWPAN neighbour discovery architectures. The proposed scheme permits a group of nodes to establish the required link keys without excessive communication/computational overhead. Additionally, the scheme supports the mobile node association process by ensuring secure access control to the network and validates mobile node authenticity in order to eliminate any malicious node association. The proposed key management scheme facilitates the replacement of outdated master network keys and releases the required master key in a secure manner. Finally, a modified IEEE 802.15.4 link-layer security structure is presented. The modified architecture minimizes both the energy consumption and the latency incurred in providing authentication/confidentiality services via IEEE 802.15.4.
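
For context on the TSCH slotframe and channel-hopping issue the mobility scheme targets, the sketch below implements the standard IEEE 802.15.4e TSCH channel-derivation rule; the hopping sequence shown is the common set of 2.4 GHz channels, and the offsets are example values:

```python
# The TSCH channel-hopping rule at the heart of the slotframe/hopping
# problem the mobility scheme addresses: the physical channel used in
# a timeslot is derived from the globally shared Absolute Slot Number
# (ASN) and the cell's channel offset. The 16-channel sequence below
# is the usual IEEE 802.15.4 2.4 GHz set; offsets are example values.
HOPPING_SEQUENCE = list(range(11, 27))  # channels 11..26

def tsch_channel(asn, channel_offset):
    """IEEE 802.15.4e TSCH: channel = F[(ASN + offset) mod |sequence|]."""
    return HOPPING_SEQUENCE[(asn + channel_offset) % len(HOPPING_SEQUENCE)]

# A mobile node that re-associates mid-slotframe must still land on
# the right channel: the mapping depends only on the shared ASN.
for asn in range(5):
    print(asn, tsch_channel(asn, channel_offset=3))
```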
37

Energy-aware profiling and prediction modelling of virtual machines in cloud computing environments

Alzamil, Ibrahim Ali M. January 2017 (has links)
Cloud Computing has changed the way in which individuals and businesses use IT resources. Instead of buying their own IT resources, they can use the Cloud services offered by Cloud providers at reasonable cost based on a “pay-per-use” model. With the wide adoption of Cloud Computing, the cost of maintaining the Cloud infrastructure has become a vital issue for providers, especially with the large amount of energy consumed to operate these resources. Hence, excessive energy consumption in Cloud infrastructures has become one of the major cost factors for Cloud providers. In order to reduce energy consumption and enhance the energy efficiency of Cloud resources, proactive and reactive management tools are used with consideration of the physical resources’ energy consumption. However, these tools need to be supported with energy-awareness not only at the physical machine (PM) level but also at the virtual machine (VM) level in order to make enhanced energy-aware decisions. As VMs do not have a physical interface, energy consumption at the VM level is difficult to identify and cannot be measured directly. This thesis introduces an energy-aware Cloud system architecture that aims to enable energy-awareness at the deployment and operational levels of a Cloud environment. At the operational level, an energy-aware profiling model is introduced to identify the energy consumption of heterogeneous and homogeneous VMs running on the same PM, based on the size and CPU utilisation of each VM. At the deployment level, an energy-aware prediction framework is introduced to forecast future VMs’ energy consumption. This framework first predicts the VMs’ workload based on historical workload patterns, particularly static and periodic ones, using the Auto-regressive Integrated Moving Average (ARIMA) model. The predicted VM workload is then correlated to the physical resources within this framework in order to obtain the predicted VM energy consumption. The evaluation of the proposed work on a real Cloud testbed reveals that the proposed energy-aware profiling model is capable of fairly attributing the physical energy consumption to homogeneous and heterogeneous VMs, thereby enabling energy-awareness at the VM level. Compared with actual results obtained on this testbed, the predicted results show that the proposed energy-aware prediction framework is capable of forecasting the energy consumption of the VMs with good prediction accuracy for static and periodic Cloud application workload patterns. The proposed work provides energy-awareness that can be used and incorporated by other reactive and proactive management tools to make enhanced energy-aware decisions and efficiently manage Cloud resources. This can lead to a reduction in energy consumption, thereby lowering operational expenditure (OPEX) for Cloud providers and reducing the impact on the environment.
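
A minimal sketch of the deployment-level pipeline described above: forecast a VM's CPU utilisation with an ARIMA model (via statsmodels), then map the forecast to power through a linear host power model. The synthetic workload, power coefficients and VM share are illustrative assumptions, not testbed measurements:

```python
# Sketch: forecast a VM's CPU utilisation with ARIMA, then map it to
# energy via a linear power model. The periodic workload and power
# coefficients are illustrative assumptions, not thesis measurements.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic periodic CPU utilisation history (%), one sample/interval.
history = (50 + 30 * np.sin(np.arange(96) * 2 * np.pi / 24)
           + np.random.normal(0, 2, 96))

fit = ARIMA(history, order=(2, 0, 1)).fit()
util_forecast = fit.forecast(steps=24)   # next 24 intervals

# Linear host power model, attributed to the VM by utilisation share.
P_IDLE, P_MAX = 70.0, 250.0              # watts, assumed host values
vm_power = P_IDLE * 0.25 + (P_MAX - P_IDLE) * (util_forecast / 100.0)
# 0.25 = assumed static share of idle power for a VM sized at 1/4 host

print(vm_power[:5])  # predicted VM power draw (W), first intervals
```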
38

Assessing the evidential value of artefacts recovered from the cloud

Mustafa, Zareefa S. January 2017 (has links)
Cloud computing offers users low-cost access to computing resources that are scalable and flexible. However, it is not without its challenges, especially in relation to security. Cloud resources can be leveraged for criminal activities, and the architecture of the ecosystem makes digital investigation difficult in terms of evidence identification, acquisition and examination. However, these same resources can be leveraged for the purposes of digital forensics, providing facilities for evidence acquisition, analysis and storage. Alternatively, existing forensic capabilities can be used in the Cloud as a step towards achieving forensic readiness: tools can be added to the Cloud which can recover artefacts of evidential value. This research investigates whether artefacts recovered from the Xen Cloud Platform (XCP) using existing tools have evidential value. To determine this, the investigation is broken into three distinct areas: adding existing tools to a Cloud ecosystem, recovering artefacts from that system using those tools, and then determining the evidential value of the recovered artefacts. From these experiments, three key steps for adding existing tools to the Cloud were determined: identification of the specific Cloud technology being used, identification of existing tools, and the building of a testbed. Stemming from this, three key components of artefact recovery are identified: the user, the audit log and the Virtual Machine (VM), along with two methodologies for artefact recovery in XCP. In terms of evidential value, this research proposes a set of criteria for the evaluation of digital evidence, stating that it should be authentic, accurate, reliable and complete. In conclusion, this research demonstrates the use of these criteria in the context of digital investigations in the Cloud and how each is met, and shows that it is possible to recover artefacts of evidential value from XCP.
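
As a small illustration of how the 'authentic' criterion can be operationalised (this is a generic pattern, not a procedure prescribed by the thesis; the file name is hypothetical), a recovered artefact can be hashed at acquisition time and re-verified before examination:

```python
# Sketch of the 'authentic' criterion applied to a recovered artefact:
# hash it at acquisition time and re-verify before analysis, so any
# tampering in between is detectable. File path is an assumption.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

acquisition_hash = sha256_of('recovered_vm_disk.img')  # log this value
# ... artefact stored, transferred, later retrieved for examination ...
assert sha256_of('recovered_vm_disk.img') == acquisition_hash, \
    'artefact altered since acquisition'
```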
39

Policy-driven governance in cloud service ecosystems

Kourtesis, Dimitrios January 2016 (has links)
Cloud application development platforms facilitate new models of software co-development and forge environments best characterised as cloud service ecosystems. The value of those ecosystems increases exponentially with the addition of more users and third-party services. Growth however breeds complexity and puts reliability at risk, requiring all stakeholders to exercise control over changes in the ecosystem that may affect them. This is a challenge of governance. From the viewpoint of the ecosystem coordinator, governance is about preventing negative ripple effects from new software added to the platform. From the viewpoint of third-party developers and end-users, governance is about ensuring that the cloud services they consume or deliver comply with requirements on a continuous basis. To facilitate different forms of governance in a cloud service ecosystem we need governance support systems that achieve separation of concerns between the roles of policy provider, governed resource provider and policy evaluator. This calls for better modularisation of the governance support system architecture, decoupling governance policies from policy evaluation engines and governed resources. It also calls for an improved approach to policy engineering with increased automation and efficient exchange of governance policies and related data between ecosystem partners. The thesis supported by this research is that governance support systems that satisfy such requirements are both feasible and useful to develop through a framework that integrates Semantic Web technologies and Linked Data principles. The PROBE framework presented in this dissertation comprises four components: (1) a governance ontology serving as shared ecosystem vocabulary for policies and resources; (2) a method for the definition of governance policies; (3) a method for sharing descriptions of governed resources between ecosystem partners; (4) a method for evaluating governance policies against descriptions of governed ecosystem resources. The feasibility and usefulness of PROBE are demonstrated with the help of an industrial case study on cloud service ecosystem governance.
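
To make the policy-evaluation idea concrete, here is a minimal sketch in the spirit of PROBE: a governance policy expressed as a SPARQL ASK query, evaluated with rdflib against a Linked Data description of a governed resource. The `gov:` vocabulary is a hypothetical stand-in for the thesis's governance ontology:

```python
# Sketch of PROBE-style policy evaluation: a governance policy as a
# SPARQL ASK query run against a Linked Data description of a governed
# resource. The gov: vocabulary is a hypothetical stand-in for the
# thesis's governance ontology.
from rdflib import Graph

resource_description = """
@prefix gov: <http://example.org/governance#> .
@prefix ex:  <http://example.org/services#> .
ex:billing-api a gov:CloudService ;
    gov:hasVersion "2.1" ;
    gov:passedConformanceTest true .
"""

policy = """
PREFIX gov: <http://example.org/governance#>
ASK { ?s a gov:CloudService ; gov:passedConformanceTest true . }
"""

g = Graph().parse(data=resource_description, format='turtle')
compliant = bool(g.query(policy).askAnswer)
print('policy satisfied:', compliant)  # True: service may be admitted
```

Note how the separation of concerns the thesis argues for shows up even in this toy: the policy, the resource description and the evaluator are three independent artefacts that different ecosystem partners could own.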
40

An integrated security protocol communication scheme for Internet of Things using the Locator/ID Separation Protocol network

Raheem, Ali Hussein January 2017 (has links)
Internet of Things communication is mainly based on a machine-to-machine pattern, where devices are globally addressed and identified. However, as the number of connected devices increases, so do the burdens on the network infrastructure. The major challenges are the size of the routing tables and the efficiency of the current routing protocols in the Internet backbone. To address these problems, an Internet Engineering Task Force (IETF) working group, along with a research group at Cisco, are working on the Locator/ID Separation Protocol as a routing architecture that can provide new semantics for IP addressing, to simplify routing operations and improve scalability in the future Internet, including the Internet of Things. Nonetheless, the Locator/ID Separation Protocol is still at an early stage of implementation, and its security protocols, e.g. Internet Protocol Security (IPSec), in particular are still in their infancy. Based on this, three scenarios were considered. Firstly, in the initial stage, each Locator/ID Separation Protocol-capable router needs to register with a Map-Server; this is known as the Registration Stage. This stage is vulnerable to masquerading and content-poisoning attacks. Secondly, in the address resolving stage, the Map Server (MS) accepts Map-Requests from Ingress Tunnel Routers and Egress Tunnel Routers; these routers in turn look up the database and return the requested mapping to the endpoint user. However, this stage lacks data confidentiality and mutual authentication. Furthermore, the Locator/ID Separation Protocol limits the efficiency of security protocols that work against redirecting the data or acting as fake routers. Thirdly, as a result of the vast increase in different Internet of Things devices, the interconnected links between these devices increase vastly as well; thus, the communication between the devices can be easily exposed to attacks such as Man-in-the-Middle (MitM) and Denial of Service (DoS). This research provides a comprehensive study of communication and mobility in the Internet of Things as well as a taxonomy of the different security protocols. It goes on to investigate the security threats and vulnerabilities of the Locator/ID Separation Protocol using the X.805 framework standard. Three security protocols are then provided to secure the exchanged communications in the Locator/ID Separation Protocol. The first security protocol was implemented to secure the Registration stage of Locator/ID separation using an ID-based cryptography method. The second security protocol was implemented to address the Resolving stage in the Locator/ID Separation Protocol, between the Ingress Tunnel Router and Egress Tunnel Router, using a challenge-response authentication and key agreement technique. The third security protocol was proposed, analysed and evaluated for Internet of Things communication devices; this protocol is based on authentication and group key agreement using the ElGamal concept. The developed protocols interface between each phase to achieve a security refinement architecture for the Internet of Things based on the Locator/ID Separation Protocol.
These protocols were verified using Automated Validation of Internet Security Protocols and Applications (AVISPA), a push-button tool for the automated validation of security protocols; the results demonstrate that they do not have any security flaws. Finally, a performance analysis and evaluation of the security refinement protocols were conducted using the Contiki and Cooja simulation tools. The results of the performance analysis showed that the security refinement is highly scalable and memory-efficient, needing only 72 bytes of memory to store the keys on the Wireless Sensor Network (WSN) device.
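
As a simplified illustration of the challenge-response pattern used in the second protocol, the sketch below substitutes a pre-shared HMAC key for the thesis's key-agreement construction, so it shows only the authentication round trip:

```python
# Sketch of the challenge-response pattern from the resolving-stage
# protocol (ITR <-> ETR). A pre-shared key with HMAC stands in for the
# thesis's key-agreement construction, which is omitted here.
import hmac, hashlib, secrets

shared_key = secrets.token_bytes(32)  # assumed pre-established key

# Verifier (e.g. the Map Server) issues a fresh nonce as the challenge.
challenge = secrets.token_bytes(16)

# Prover (e.g. an Ingress Tunnel Router) answers with a MAC over it.
response = hmac.new(shared_key, challenge, hashlib.sha256).digest()

# Verifier recomputes and compares in constant time; the fresh nonce
# prevents replay of an earlier response.
expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
print('authenticated:', hmac.compare_digest(response, expected))
```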
