
Internet traffic volumes characterization and forecasting

Vlachos, Nikolaos. January 2016.
Internet usage increases every year, and the need to estimate the growth of the generated traffic has become a major topic. Forecasting actual figures in advance is essential for bandwidth allocation, network design and investment planning. In this thesis, novel mathematical equations are presented to model and to predict long-term Internet traffic in terms of total aggregate volume, both globally and more locally. Historical traffic data from consecutive years reveal hidden numerical patterns as the values progress year over year, and this trend can be well represented with appropriate mathematical relations. The proposed formulae have excellent fitting properties over long-history measurements and can indicate forthcoming traffic for the next years with an exceptionally low prediction error. In cases where previously pending traffic data have since become available, the suggested equations provide more accurate results than the corresponding projections from leading worldwide research. The studies also imply that future traffic strongly depends on past activity and on the growth of Internet users, provided that a large and representative sample of pertinent data exists from large geographical areas. To the best of my knowledge, this work is the first to introduce effective prediction methods that rely exclusively on the static attributes and the progression properties of historical values.
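The abstract does not reproduce the thesis's equations, but the core idea, fitting a growth model to consecutive yearly totals and extrapolating, can be sketched. The sketch below fits a simple exponential growth model y = a(1+r)^t by log-linear least squares to hypothetical yearly traffic volumes; both the model form and the figures are illustrative assumptions, not the thesis's actual formulae.

```python
import math

def fit_growth(volumes):
    """Fit y = a * (1 + r)^t to yearly totals via log-linear least squares."""
    n = len(volumes)
    ts = list(range(n))
    logs = [math.log(v) for v in volumes]
    t_mean = sum(ts) / n
    y_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, logs))
             / sum((t - t_mean) ** 2 for t in ts))
    intercept = y_mean - slope * t_mean
    return math.exp(intercept), math.exp(slope) - 1  # a, annual growth rate r

def forecast(a, r, t):
    """Projected volume t years after the first observation."""
    return a * (1 + r) ** t

# Hypothetical yearly traffic volumes (exabytes/month), roughly 25% annual growth
history = [26.1, 32.6, 40.8, 51.0, 63.7]
a, r = fit_growth(history)
next_year = forecast(a, r, len(history))
```

The log-linear transform turns the multiplicative year-over-year pattern into a straight line, which is one simple way a "hidden numerical pattern" in consecutive yearly values can be captured and extrapolated.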

An integrated security protocol communication scheme for Internet of Things using the Locator/ID Separation Protocol network

Raheem, Ali Hussein. January 2017.
Internet of Things communication is mainly based on a machine-to-machine pattern, where devices are globally addressed and identified. However, as the number of connected devices increases, so do the burdens on the network infrastructure. The major challenges are the size of the routing tables and the efficiency of the current routing protocols in the Internet backbone. To address these problems, an Internet Engineering Task Force (IETF) working group, along with a research group at Cisco, is working on the Locator/ID Separation Protocol, a routing architecture that provides new semantics for IP addressing in order to simplify routing operations and improve scalability in the future Internet, including the Internet of Things. Nonetheless, the Locator/ID Separation Protocol is still at an early stage of implementation, and its security protocols, e.g. Internet Protocol Security (IPsec), in particular are still in their infancy. On this basis, three scenarios were considered. Firstly, in the initial stage, each Locator/ID Separation Protocol-capable router needs to register with a Map-Server; this is known as the registration stage. This stage is vulnerable to masquerading and content-poisoning attacks. Secondly, in the address resolving stage, the Map-Server (MS) accepts Map-Requests from Ingress Tunnel Routers and Egress Tunnel Routers; these routers in turn look up the database and return the requested mapping to the endpoint user. However, this stage lacks data confidentiality and mutual authentication. Furthermore, the Locator/ID Separation Protocol limits the efficiency of security protocols that work against redirecting data or acting as fake routers. Thirdly, as a result of the vast increase in different Internet of Things devices, the interconnected links between these devices increase vastly as well.
Thus, the communication between devices can easily be exposed to attacks such as Man-in-the-Middle (MitM) and Denial of Service (DoS). This research provides a comprehensive study of communication and mobility in the Internet of Things as well as a taxonomy of different security protocols. It goes on to investigate the security threats and vulnerabilities of the Locator/ID Separation Protocol using the X.805 framework standard. Three security protocols are then provided to secure the exchanged communication transactions in the Locator/ID Separation Protocol. The first security protocol was implemented to secure the registration stage of Locator/ID separation using an ID-based cryptography method. The second security protocol was implemented to address the resolving stage between the Ingress Tunnel Router and Egress Tunnel Router using a challenge-response authentication and key agreement technique. The third security protocol was proposed, analysed and evaluated for Internet of Things communication devices; this protocol is based on authentication and group key agreement using the ElGamal concept. The developed protocols set an interface between each level of the phases to achieve a security refinement architecture for the Internet of Things based on the Locator/ID Separation Protocol. These protocols were verified using the Automated Validation of Internet Security Protocols and Applications (AVISPA), a push-button tool for the automated validation of security protocols; the results demonstrate that they do not have any security flaws. Finally, a performance analysis and evaluation of the security refinement protocols were conducted using the Contiki and Cooja simulation tools.
The results of the performance analysis showed that the security refinement was highly scalable and the memory was quite efficient as it needed only 72 bytes of memory to store the keys in the Wireless Sensor Network (WSN) device.
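The second of the three protocols relies on challenge-response authentication between tunnel routers. As a rough illustration of the challenge-response idea only (the thesis's actual protocol uses ID-based cryptography and key agreement, not the pre-shared key assumed here), an HMAC-based exchange might look like:

```python
import hashlib
import hmac
import os

# Hypothetical pre-shared key; the thesis derives keys via ID-based methods instead
SHARED_KEY = b"demo-pre-shared-key"

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the key by MACing the fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(respond(key, challenge), response)

# A Map-Server challenges an Ingress Tunnel Router before serving its Map-Request
challenge = os.urandom(16)               # fresh nonce defeats replay
response = respond(SHARED_KEY, challenge)
accepted = verify(SHARED_KEY, challenge, response)
```

Because the challenge is a fresh random nonce, a captured response cannot be replayed later, which is the property that makes challenge-response suitable for mutual authentication at the resolving stage.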

Relationship and cloud factors affecting government confidence in the public cloud

Alghanim, Waleed. January 2017.
Despite the advantages of the public cloud, governments are still reluctant to deploy sensitive data and critical systems into it. The advantages of scalability and cost are attractive for governments, and the current trend is for governments to consider placing more of their data and systems in the public cloud, towards a more comprehensive government cloud solution. However, there are major concerns related to the public cloud that are especially significant to governments and that cause reluctance over public cloud adoption. Such concerns include security and privacy, governance, compliance, and performance. If these concerns are addressed, governments will perceive less risk and be more confident in deploying to the public cloud. Besides the obvious technical solutions, which include improving security, another solution is an effective cloud service provider (CSP)-government relationship. Towards the development of such a solution, the study contributes a novel approach to researching the CSP-government relationship in order to reveal, in depth and comprehensively, the relevant relationship and associated cloud issues often neglected in previous research. Specifically, the research design was realised through a mixed-methods approach using a questionnaire and semi-structured interviews with senior IT professionals in various government ministries and departments in Saudi Arabia. The findings not only offer a comprehensive and in-depth understanding of the relationship, but also reveal specific relationship and cloud issues as problems to address in developing a solution to increase government confidence in the public cloud. Specifically, it was found that governments were more concerned about areas of the cloud that are more relevant to government, and there was often an associated lack of trust or perception of risk for these areas.
Moreover, it was found that in relation to more specific areas of the cloud there was increasing concern in terms of trust and risk, the ability to negotiate and collaborate, and the perception of reputation. Based on these findings, which also revealed the various interplays between relationship factors as a novel contribution, the study offers recommendations to CSPs on how they may improve their relationship with the government. This is to be achieved through resolving relationship issues and associated cloud concerns within the relationship context towards improving government confidence in the public cloud. The findings also have implications for other parties which include governments considering the public cloud and those engaged in academic research in the area of government reluctance to use the public cloud.

Assessing the evidential value of artefacts recovered from the cloud

Mustafa, Zareefa S. January 2017.
Cloud computing offers users low-cost access to computing resources that are scalable and flexible. However, it is not without its challenges, especially in relation to security. Cloud resources can be leveraged for criminal activities and the architecture of the ecosystem makes digital investigation difficult in terms of evidence identification, acquisition and examination. However, these same resources can be leveraged for the purposes of digital forensics, providing facilities for evidence acquisition, analysis and storage. Alternatively, existing forensic capabilities can be used in the Cloud as a step towards achieving forensic readiness. Tools can be added to the Cloud which can recover artefacts of evidential value. This research investigates whether artefacts that have been recovered from the Xen Cloud Platform (XCP) using existing tools have evidential value. To determine this, it is broken into three distinct areas: adding existing tools to a Cloud ecosystem, recovering artefacts from that system using those tools and then determining the evidential value of the recovered artefacts. From these experiments, three key steps for adding existing tools to the Cloud were determined: the identification of the specific Cloud technology being used, identification of existing tools and the building of a testbed. Stemming from this, three key components of artefact recovery are identified: the user, the audit log and the Virtual Machine (VM), along with two methodologies for artefact recovery in XCP. In terms of evidential value, this research proposes a set of criteria for the evaluation of digital evidence, stating that it should be authentic, accurate, reliable and complete. In conclusion, this research demonstrates the use of these criteria in the context of digital investigations in the Cloud and how each is met. This research shows that it is possible to recover artefacts of evidential value from XCP.

Policy-driven governance in cloud service ecosystems

Kourtesis, Dimitrios. January 2016.
Cloud application development platforms facilitate new models of software co-development and forge environments best characterised as cloud service ecosystems. The value of those ecosystems increases exponentially with the addition of more users and third-party services. Growth however breeds complexity and puts reliability at risk, requiring all stakeholders to exercise control over changes in the ecosystem that may affect them. This is a challenge of governance. From the viewpoint of the ecosystem coordinator, governance is about preventing negative ripple effects from new software added to the platform. From the viewpoint of third-party developers and end-users, governance is about ensuring that the cloud services they consume or deliver comply with requirements on a continuous basis. To facilitate different forms of governance in a cloud service ecosystem we need governance support systems that achieve separation of concerns between the roles of policy provider, governed resource provider and policy evaluator. This calls for better modularisation of the governance support system architecture, decoupling governance policies from policy evaluation engines and governed resources. It also calls for an improved approach to policy engineering with increased automation and efficient exchange of governance policies and related data between ecosystem partners. The thesis supported by this research is that governance support systems that satisfy such requirements are both feasible and useful to develop through a framework that integrates Semantic Web technologies and Linked Data principles. 
The PROBE framework presented in this dissertation comprises four components: (1) a governance ontology serving as shared ecosystem vocabulary for policies and resources; (2) a method for the definition of governance policies; (3) a method for sharing descriptions of governed resources between ecosystem partners; (4) a method for evaluating governance policies against descriptions of governed ecosystem resources. The feasibility and usefulness of PROBE are demonstrated with the help of an industrial case study on cloud service ecosystem governance.

Delegated private set intersection on outsourced private datasets

Abadi, Aydin Kheirbakhsh. January 2017.
The significance of cloud computing is increasing and the cloud is receiving growing attention from individuals and companies. The cloud enables ubiquitous and on-demand access to a pool of configurable computing resources that can be scaled up easily. However, the cloud is vulnerable to data security breaches such as exposure of confidential data, data tampering, and denial of service. Thus, it cannot be fully trusted, and it is crucial for clients who use the cloud to protect the security of their own data. In this thesis, we design cryptographic protocols that allow clients to outsource their private data to the cloud and delegate certain computation to the cloud securely. We focus on the computation of set intersection, which has a broad range of applications such as privacy-preserving data mining and homeland security. Traditionally, the goal of Private Set Intersection (PSI) protocols has been to enable two parties to jointly compute the intersection without revealing their own set to the other party. Many such protocols have been designed. But in the cases where data and computation are outsourced to the cloud, the setting and trust assumptions change considerably, and the traditional PSI protocols cannot be used directly to solve the security problems without sacrificing the advantages the cloud offers. The contribution of this thesis is a set of delegated PSI protocols that meet a variety of security and functional requirements in the cloud environment. For most clients, the most critical security concern when outsourcing data and computation to the cloud is data privacy. We start from here and design O-PSI, a novel protocol in which clients encrypt their data before outsourcing it to the cloud. The cloud uses the encrypted data to compute the intersection when requested. The outsourced data remain private against the cloud at all times, since the data stored in the cloud is encrypted and the computation process leaks no information.
O-PSI ensures that the computation can be performed only with the clients' consent. The protocol also takes into account several functional requirements in order to take full advantage of the cloud. For example, clients can independently prepare and upload their data to the cloud, and the clients are able to delegate the computation to the cloud an unlimited number of times, without the need to locally re-prepare the data. We then extend O-PSI in several ways to provide additional properties:

* EO-PSI is a more efficient version of O-PSI that does not require public key operations.
* UEO-PSI extends EO-PSI with efficient update operations, making it possible to efficiently handle dynamic data.
* VD-PSI extends O-PSI with verifiability, i.e. the clients can efficiently verify the integrity of the computation result.

For each protocol, we provide a formal simulation-based security analysis. We also compare the protocols against the state of the art. In addition, we have implemented the O-PSI and EO-PSI protocols and provide an evaluation of their performance based on our implementation.
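For readers unfamiliar with PSI, the functional goal, computing a set intersection without exchanging raw elements, can be illustrated with a naive hashed-comparison sketch. This is emphatically not O-PSI: it has none of the delegation, encryption or verifiability properties described above, and salted hashing alone is insecure for low-entropy elements; it only shows what the protocols compute.

```python
import hashlib

def blind(items, salt=b"session-salt"):
    """Map each element to a salted digest; parties exchange digests, not values."""
    return {hashlib.sha256(salt + s.encode()).hexdigest(): s for s in items}

def psi(client_items, server_items):
    """Intersection recovered by matching digests (illustration of the goal only)."""
    c, s = blind(client_items), blind(server_items)
    return sorted(c[h] for h in c.keys() & s.keys())

common = psi({"alice", "bob", "carol"}, {"bob", "carol", "dave"})
# common == ["bob", "carol"]
```

In the delegated setting of the thesis, the matching step above is instead carried out by the cloud over encrypted representations, so that neither the elements nor the intersection leak to the cloud.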

Factors that impact the cloud portability of legacy Web applications

Costa Silva, Gabriel. January 2016.
The technological dependency of products or services provided by a particular cloud platform or provider (i.e. cloud vendor lock-in) leaves cloud users unprotected against service failures and providers going out of business, and unable to modernise their software applications by exploiting new technologies and cheaper services from alternative clouds. High portability is key to ensure a smooth migration of software applications between clouds, reducing the risk of vendor lock-in. This research identifies and models key factors that impact the portability of legacy web applications in cloud computing. Unlike existing cloud portability studies, we use a combination of techniques from empirical software engineering, software quality and areas related to cloud, including service-oriented computing and distributed systems, to carry out a rigorous experimental study of four factors impacting on cloud application portability. In addition, we exploit established methods for software effort prediction to build regression models for predicting the effort required to increase cloud application portability. Our results show that software coupling, authentication technology, cloud platform and service are statistically significant and scientifically relevant factors for cloud application portability in the experiments undertaken. Furthermore, the experimental data enabled the development of fair (mean magnitude of relative error, MMRE, between 0.493 and 0.875), good (MMRE between 0.386 and 0.493) and excellent (MMRE not exceeding 0.368) regression models for predicting the effort of increasing the portability of legacy cloud applications. By providing empirical evidence of factors that impact cloud application portability and building effort prediction models, our research contributes to improving decision making when migrating legacy applications between clouds, and to mitigating the risks associated with cloud vendor lock-in.
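The MMRE metric used to grade the regression models above is standard in software effort prediction: the mean of |actual - predicted| / actual over the evaluation set. A minimal sketch with hypothetical effort figures (not the thesis's data):

```python
def mmre(actual, predicted):
    """Mean magnitude of relative error: mean of |actual - predicted| / actual."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical migration-effort figures (person-hours)
actual = [40.0, 55.0, 30.0, 80.0]
predicted = [46.0, 50.0, 33.0, 70.0]
score = mmre(actual, predicted)
# Under the thesis's bands, a score not exceeding 0.368 would count as "excellent"
```

Because each error is normalised by the actual value, MMRE is scale-free, which makes it a common yardstick for comparing effort prediction models across projects of different sizes.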

Application partitioning and offloading in mobile cloud computing

Javied, Asad. January 2017.
With the emergence of high-quality and rich multimedia content, end-user demands for content processing and delivery are increasing rapidly. In view of increasing user demands and quality of service (QoS) requirements, cloud computing offers a huge amount of online processing and storage resources which can be exploited on demand. Moreover, the current high-speed 4G mobile network, i.e. Long Term Evolution (LTE), enables leveraging of the cloud resources. Mobile Cloud Computing (MCC) is an emerging paradigm comprising three heterogeneous domains: mobile computing, cloud computing, and wireless networks. MCC aims to enhance the computational capabilities of resource-constrained mobile devices towards a rich user experience. Decreasing cloud cost and latency are attracting the research community to exploit cloud computing resources to offload and process multimedia content in the cloud. The high bandwidth and low latency of LTE make it a suitable candidate for delivering rich multimedia cloud content back to the user. The convergence of cloud and LTE gives rise to an end-to-end communication framework which opens up the possibility of new applications and services. In addition to the cloud and the network, the end user and the application constitute the other entities of the end-to-end communication framework. End-user quality of service and the particular application profile dictate resource allocation in the cloud and the wireless network. This research formulates the different building blocks of the end-to-end communication and introduces a new paradigm to exploit the network and cloud resources for the end user. In this way, we employ a multi-objective optimization strategy to propose and simulate an end-to-end communication framework which optimizes the behavior of MCC-based end-to-end communication to deliver appropriate quality of service (QoS) with minimum utilization of cloud and network resources.
We then apply application partitioning and offloading schemes to offload certain parts of an application to the cloud to improve energy efficiency and response time. As deliverables of this research, the behavior of the different entities (cloud, LTE-based mobile network, user and application context) has been modeled. In addition, a comprehensive application partitioning and offloading framework has been proposed in order to minimize the cloud and network resources needed to achieve the user's required QoS. Keywords: Long Term Evolution (LTE), Cloud computing, Application partitioning and offloading, Image Retrieval.
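The offloading decision at the heart of such a framework trades local computation time and energy against transfer time and transmission energy. The sketch below uses a simple two-criterion rule with illustrative device and LTE parameters; all figures and the decision rule are assumptions for illustration, not the thesis's actual model.

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float      # CPU cycles the task requires
    data_bits: float   # input data that must be shipped to the cloud

# Illustrative device/network parameters (assumed, not from the thesis)
LOCAL_HZ = 1e9             # device CPU speed
CLOUD_HZ = 10e9            # cloud CPU speed
UPLINK_BPS = 20e6          # LTE uplink throughput
LOCAL_J_PER_CYCLE = 1e-9   # device energy per computed cycle
TX_J_PER_BIT = 1e-7        # device energy per transmitted bit

def should_offload(t: Task) -> bool:
    local_time = t.cycles / LOCAL_HZ
    remote_time = t.data_bits / UPLINK_BPS + t.cycles / CLOUD_HZ
    local_energy = t.cycles * LOCAL_J_PER_CYCLE
    remote_energy = t.data_bits * TX_J_PER_BIT
    # Offload only when both response time and device energy improve
    return remote_time < local_time and remote_energy < local_energy

heavy = Task(cycles=5e9, data_bits=1e6)    # compute-heavy, small input
chatty = Task(cycles=1e8, data_bits=1e8)   # light compute, large input
```

Under these parameters the compute-heavy task benefits from offloading while the data-heavy one does not, which is the basic intuition a multi-objective partitioning scheme generalises across many application components at once.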

Enterprise adoption oriented cloud computing performance optimization

Noureddine, Moustafa. January 2014.
Cloud computing in the enterprise has emerged as a new paradigm that brings both business opportunities and software engineering challenges. In Cloud computing, business participants such as service providers, enterprise solutions, and marketplace applications are required to adopt a Cloud architecture engineered for security and performance. One of the major hurdles to formal adoption of Cloud solutions in the enterprise is performance. Enterprise applications (e.g., SAP, SharePoint, Yammer, Lync Server, and Exchange Server) require a mechanism to predict and manage performance expectations in a secure way. This research addresses two areas of performance challenges: capacity planning, to ensure resources are provisioned in a way that meets requirements while minimizing total cost of ownership; and optimizations to the authentication protocols that enable enterprise applications to authenticate among each other and meet the performance requirements for enterprise servers, including third-party marketplace applications. For the first set of optimizations, the theory was formulated using a stochastic process, where multiple experiments were monitored and data collected over time. The results were then validated using a real-life enterprise product called Lync Server. The second set of optimizations was achieved by introducing provisioning steps to pre-establish trust among enterprise application servers, the associated authorisation server, and the clients interested in access to protected resources. In this architecture, trust is provisioned and synchronized as a prerequisite step to authentication among all communicating entities in the authentication protocol, and referral tokens are used to establish trust federation for marketplace applications across organizations. Various case studies and validation on commercially available products were used throughout the research to illustrate the concepts.
Such performance optimizations have proved to help enterprise organizations meet their scalability requirements. Some of the work produced has been adopted by Microsoft and made available as a downloadable tool that was used by customers around the globe assisting them with Cloud adoption.
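The abstract does not give the capacity planning model itself, but one conventional stochastic approach to sizing a server pool for a responsiveness target is the M/M/c queue and the Erlang-C formula. The sketch below works under that assumption, which is not necessarily the thesis's actual formulation:

```python
import math

def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arriving request must queue in an M/M/c system."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers                        # per-server utilisation
    if rho >= 1:
        return 1.0                           # unstable: every request waits
    summ = sum(a ** k / math.factorial(k) for k in range(servers))
    tail = a ** servers / math.factorial(servers) / (1 - rho)
    return tail / (summ + tail)

def servers_needed(arrival_rate, service_rate, max_wait_prob):
    """Smallest pool size keeping the queueing probability under the target."""
    c = 1
    while erlang_c(arrival_rate, service_rate, c) > max_wait_prob:
        c += 1
    return c

# Hypothetical sizing: 100 req/s, 12 req/s per server, P(wait) <= 5%
pool = servers_needed(100, 12, 0.05)
```

The appeal of such a model for capacity planning is that it turns monitored arrival and service rates into a provisioning number directly, letting planners trade total cost of ownership against a quantified queueing risk.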

Workflow framework for cloud-based distributed simulation

Chaudhry, Nauman Riaz. January 2016.
Although distributed simulation (DS) using parallel computing has received considerable research and development in a number of compute-intensive fields, it has still to be significantly adopted by the wider simulation community. According to scientific literature, major reasons for low adoption of cloud-based services for DS execution are the perceived complexities of understanding and managing the underlying architecture and software for deploying DS models, as well as the remaining challenges in performance and interoperability of cloud-based DS. The focus of this study, therefore, has been to design and test the feasibility of a well-integrated, generic, workflow structured framework that is universal in character and transparent in implementation. The choice of a workflow framework for implementing cloud-based DS was influenced by the ability of scientific workflow management systems to define, execute, and actively manage computing workflows. As a result of this study, a hybrid workflow framework, combined with four cloud-based implementation services, has been used to develop an integrated potential standard for workflow implementation of cloud-based DS, which has been named the WORLDS framework (Workflow Framework for Cloud-based Distributed Simulation). The main contribution of this research study is the WORLDS framework itself, which identifies five services (including a Parametric Study Service) that can potentially be provided through the use of workflow technologies to deliver effective cloud-based distributed simulation that is transparently provisioned for the user. This takes DS a significant step closer to its provision as a viable cloud-based service (DSaaS). In addition, the study introduces a simple workflow solution to applying parametric studies to distributed simulations. 
Further research to confirm the generic nature of the workflow framework, to apply and test modified HLA standards, and to introduce a simulation analytics function by modifying the workflow is anticipated.
