51

Trolling in computer-mediated communication : impoliteness, deception and manipulation online

Hardaker, Claire January 2012 (has links)
Computer-mediated communication (CMC), or the communication that humans engage in via networked devices such as computers (December 1997; Ferris 1997; Herring 2003: 612), provides a rich area for the study of im/politeness and face-threat. Whilst CMC has many benefits, such as allowing quick and easy communication by those spatially and temporally separated (Herring, Job-Sluder, Scheckler & Barab 2002: 371), it is also predisposed towards higher levels of aggression than forms of interaction such as face-to-face communication (FtF). CMC can offer a degree of anonymity that may encourage deception, aggression, and manipulation due to a sense of impunity and a loss of empathy with the non-present recipient, an effect known as deindividuation (Kiesler, Siegel & McGuire 1984; Siegel, Dubrovsky, Kiesler & McGuire 1986; Sproull & Kiesler 1986). Using two Usenet corpora with a combined wordcount of 86,412,727 words, I primarily investigate a negatively marked online behaviour (NMOB) known as trolling, which involves deliberately attempting to provoke online conflict. I secondarily investigate related NMOBs such as flaming (a reaction or over-reaction to perceived provocation), cyberbullying, cyberharassment, and cyberstalking. The analysis establishes that academia and legislation use these terms in vague, contradictory, or widely overlapping ways. This thesis aims to answer three research questions. The first (what is trolling?) formulates a definition of trolling, including its interrelationships with other NMOBs, using a quantitative and qualitative corpus linguistic approach. The second (how is trolling carried out?) outlines the major trolling strategies found in the dataset, along with the user responses to those strategies, and the troller defences to those user responses. The third (how is trolling co-constructed?), which is closely related to the second, qualitatively investigates one extended example of trolling to see how this NMOB is co-constructed by the group via impoliteness, identity construction, and deception. Wordcount excluding front- and back-matter: 89,823
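As a rough illustration of the kind of frequency query a quantitative corpus-linguistic approach of this sort relies on, the sketch below counts candidate NMOB-related terms and normalises them per million words. The sample text, term list and tokenisation are illustrative placeholders, not the thesis's Usenet data or actual query strategy.

```python
# Illustrative sketch only: raw counts and per-million-word rates for candidate
# NMOB-related terms. The sample text and terms are placeholders, not the
# thesis's corpora or method.
import re
from collections import Counter

def term_frequencies(text, terms):
    """Return (raw count, rate per million words) for each search term."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1
    return {t: (counts[t], counts[t] * 1_000_000 / total) for t in terms}

sample = "You are just a troll. Stop trolling this group and go flame someone else."
for term, (raw, rate) in term_frequencies(sample, ["troll", "trolling", "flame"]).items():
    print(f"{term}: {raw} hit(s), {rate:.0f} per million words")
```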
52

A model to support the decision process for migration to cloud computing

Alkhalil, Adel January 2016 (has links)
Cloud computing is an emerging paradigm for provisioning computing and IT services. Migration from a traditional systems set-up to cloud computing is a strategic organisational decision that can affect organisations' performance, productivity, and growth as well as their competitiveness. Organisations wishing to migrate their legacy systems to the cloud often need to go through a difficult and complicated decision-making process. This can be due to multiple factors, including the restructuring of IT resources, the still-evolving nature of the cloud environment, and the continuous expansion of cloud services, configurations and providers. This research explores the factors that influence decision making for migration to the cloud, its impact on IT management, and the main tasks that organisations should consider to ensure successful migration projects. A sequential exploratory strategy is followed, implemented through a two-stage survey for collecting the primary data. The analysis of the two-stage survey, together with the literature, identified eleven determinants that increase the complexity of decisions to migrate to the cloud. Some of these determinants have been recognised in the literature, and accordingly many methods have been proposed for supporting migration to the cloud. However, no systematic decision-making process exists that clearly identifies the main steps and explicitly describes the tasks to be performed within each step. This research aims to fill this gap by proposing a model to support the decision process for migrating to the cloud. The model provides a structure that covers the whole process of migration decisions and guides decision makers through a step-by-step approach, aiding organisations with their decision making. The model was evaluated by exploring the views of a group of cloud practitioners. The analysis of their views demonstrated a high level of acceptance by the practitioners with regard to the structure, tasks, and issues addressed by the model. The model offers an encouraging preliminary structure for developing a cloud Knowledge-Based Decision Support System.
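To make the idea of a determinant-driven decision step concrete, the sketch below shows one way a weighted-scoring stage of such a decision process might look. The determinant names, weights and scores are hypothetical examples, not the eleven determinants or the model proposed in the thesis.

```python
# Illustrative sketch: a weighted-scoring step in a migration decision process.
# Determinants, weights and scores are hypothetical, not the model's content.
determinants = {
    # determinant: (weight, score for migrating, score for staying in-house)
    "cost_reduction":        (0.30, 8, 4),
    "data_sensitivity_risk": (0.25, 3, 8),
    "vendor_lock_in":        (0.15, 4, 7),
    "scalability":           (0.30, 9, 5),
}

def weighted_score(option_index):
    """Sum weight * score for one option (0 = migrate, 1 = stay in-house)."""
    return sum(w * scores[option_index] for w, *scores in determinants.values())

migrate, stay = weighted_score(0), weighted_score(1)
print(f"migrate: {migrate:.2f}, stay in-house: {stay:.2f}")
print("recommendation:", "migrate" if migrate > stay else "stay in-house")
```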
53

The role of transparency and trust in the selection of cloud service providers

Almanea, Mohammed Ibrahim M. January 2015 (has links)
Potential customers have started to adopt cloud computing because of its promised benefits, such as resource flexibility and, most importantly, cost reduction. In spite of the benefits that could flow from its adoption, cloud computing brings new challenges associated with a potential lack of transparency and trust and a loss of control. In the shadow of these challenges, the number of cloud service providers in the marketplace is growing, making the comparison and selection process very difficult for potential customers and requiring methods for selecting trustworthy and transparent providers. This thesis discusses the existing tools, methods and frameworks that promote the adoption of cloud computing models and the selection of trustworthy cloud service providers. A set of customer assurance requirements is proposed as a basis for comparative evaluation and applied to several popular tools: the Cloud Security Alliance Security, Trust, and Assurance Registry (CSA STAR), the CloudTrust Protocol (CTP), the Complete, Auditable, and Reportable Approach (C.A.RE) and the Cloud Provider Transparency Scorecard (CPTS). In addition, a questionnaire-based survey was developed and launched, in which respondents evaluated the extent to which these tools have been used and assessed their usefulness. The majority of respondents agreed on the importance of using such tools to assist migration to the cloud and, although most respondents had not used the tools, those who had used them reported them to be helpful. There appears to be a relationship between a tool's compliance with the proposed requirements and the extent to which it is used, and these results should encourage cloud providers to address customers' assurance requirements. Some previous studies have focused on comparing cloud providers based on trustworthiness measurement, while others have focused only on transparency measurement. In this thesis, a framework (called CloudAdvisor) is proposed that couples both of these features. CloudAdvisor aims to provide potential cloud customers with a way to assess trustworthiness based on the history of the cloud provider and to measure transparency based on the Cloud Controls Matrix (CCM) framework. CCM was chosen because it aims to promote transparency in cloud computing by adopting the best industry standards. The selection process is based on a set of assurance requirements that, if met by the cloud provider or addressed by a tool, could bring assurance and confidence to cloud customers. Two possible approaches (questionnaire-based and simulation-based) are proposed for evaluating the CloudAdvisor framework.
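In the spirit of the CloudAdvisor idea of coupling history-based trust with CCM-based transparency, the sketch below ranks providers by a combined score. The provider records, the incident-based trust formula and the equal weighting are assumptions for demonstration only, not the framework's actual metrics.

```python
# Illustrative sketch: rank providers by a combined trust/transparency score.
# Provider data, the trust formula and the 50/50 weighting are assumed values.
providers = [
    # name, service-months observed, recorded incidents, CCM-style controls disclosed (of 100)
    ("ProviderA", 48, 2, 85),
    ("ProviderB", 36, 0, 60),
    ("ProviderC", 60, 5, 95),
]

def combined_score(months, incidents, controls_disclosed, total_controls=100):
    trust = months / (months + incidents * 12)        # fewer incidents -> higher trust
    transparency = controls_disclosed / total_controls
    return 0.5 * trust + 0.5 * transparency            # equal weighting assumed

for name, months, incidents, disclosed in sorted(
        providers, key=lambda p: combined_score(*p[1:]), reverse=True):
    print(f"{name}: {combined_score(months, incidents, disclosed):.3f}")
```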
54

A methodology for developing Second Life environments using case-based reasoning techniques

Shubati, Ahmad January 2010 (has links)
Launched in 2003, Second Life is a computer-based pseudo-environment accessed via the Internet. Although a number of individuals and companies have developed a presence (lands) in Second Life, no appropriate methodology has been put in place for undertaking such developments. Users have instead adapted existing methods to their individual needs, so this research project explores the development of a methodology for building lands specifically within Second Life. After researching and examining a variety of different software methods and techniques, it was decided to base the project's methodology on Case-Based Reasoning (CBR) techniques, which share a number of synergies with Second Life itself. With some modifications, a web-based system was designed, based on CBR, to work in accordance with Second Life. Collecting and analysing the feedback on the first version of the web-based system identified the adjustments and improvements needed. By tracking its progress against previous specifications and planned future activity, an updated version of the CBR web-based system covering the latest changes and improvements to the tool was introduced. In addition, new functionality was added in the improved version in order to refine and develop the original prototype into a highly effective Second Life development tool. New feedback platforms were provided to facilitate the use of the system and to obtain results more closely related to users' recommendations. Through the feedback process, the tool is becoming ever more useful to developers of Second Life systems. This research project discusses the use of Case-Based Reasoning techniques and evaluates their application to the development of space within Second Life.
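For readers unfamiliar with CBR, the sketch below illustrates its "retrieve" step: given a new development requirement, find the most similar previously stored case. The case base and feature encoding are hypothetical, not taken from the thesis's system.

```python
# Illustrative sketch of the CBR "retrieve" step: nearest-neighbour lookup over
# stored development cases. Case IDs and features are hypothetical examples.
case_base = [
    # (case id, feature vector: [land size, interactivity level, budget level])
    ("campus_build",   [0.9, 0.4, 0.7]),
    ("shop_front",     [0.2, 0.8, 0.3]),
    ("training_space", [0.5, 0.9, 0.6]),
]

def similarity(a, b):
    """Simple inverse-distance similarity between two feature vectors."""
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1 / (1 + dist)

def retrieve(query):
    """Return the stored case most similar to the new requirement."""
    return max(case_base, key=lambda case: similarity(case[1], query))

best_id, _ = retrieve([0.6, 0.8, 0.5])
print("most similar past case:", best_id)
```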
55

Insider threat : memory confidentiality and integrity in the cloud

Rocha, Francisco January 2015 (has links)
The advantages of always-available services, such as remote device backup or data storage, have helped the widespread adoption of cloud computing. However, cloud computing services challenge the traditional boundary between trusted inside and untrusted outside. A consumer's data and applications are no longer on premises, fundamentally changing the scope of an insider threat. This thesis looks at the security risks associated with an insider threat. Specifically, we look into the critical challenge of assuring data confidentiality and integrity for the execution of arbitrary software in a consumer's virtual machine. The problem arises from having multiple virtual machines sharing hardware resources in the same physical host, while an administrator is granted elevated privileges over that host. We used an empirical approach to collect evidence of the existence of this security problem and implemented a prototype of a novel prevention mechanism for it. Finally, we propose a trustworthy cloud architecture which uses the security properties our prevention mechanism guarantees as a building block. To collect the evidence required to demonstrate how an insider threat can become a security problem for a cloud computing infrastructure, we performed a set of attacks targeting the three most commonly used virtualization software solutions. These attacks attempt to compromise the confidentiality and integrity of cloud consumers' data. The prototype to evaluate our novel prevention mechanism was implemented in the Xen hypervisor and tested against known attacks. The prototype focuses on applying restrictions to the permissive memory access model currently in use in the most relevant virtualization software solutions. We envision the use of a mandatory memory access control model in the virtualization software. This model enforces the principle of least privilege for memory access, which means cloud administrators are assigned only enough privileges to successfully perform their administrative tasks. Although the changes we suggest make the virtualization layer more restrictive, our solution is versatile enough to support all the functionality available in current virtualization solutions. Therefore, our trustworthy cloud architecture guarantees data confidentiality and integrity and achieves a more transparent, trustworthy cloud ecosystem while preserving functionality. Our results show that a malicious insider can compromise security-sensitive data in the three most important commercial virtualization software solutions. These virtualization solutions are publicly available, and the number of cloud servers using them accounts for the majority of the virtualization market. The prevention mechanism prototype we designed and implemented guarantees data confidentiality and integrity against such attacks and reduces the trusted computing base of the virtualization layer. These results indicate that current virtualization solutions need to reconsider their view on insider threats.
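The core least-privilege idea can be pictured as a deny-by-default policy check on administrative memory operations. The sketch below is a conceptual illustration only; the policy format and operation names are assumptions, not Xen's actual interface or the thesis's implementation.

```python
# Illustrative sketch of a mandatory memory-access check enforcing least
# privilege: an administrative domain may map a guest's memory only if the
# policy explicitly grants that operation. Deny by default otherwise.
POLICY = {
    # (subject domain, target domain): set of permitted operations
    ("dom0", "guest_vm_7"): {"suspend", "resume"},                 # no "map_memory" granted
    ("dom0", "guest_vm_9"): {"suspend", "resume", "map_memory"},
}

def may_access(subject, target, operation):
    """Allow only operations the policy explicitly grants; deny everything else."""
    return operation in POLICY.get((subject, target), set())

print(may_access("dom0", "guest_vm_7", "map_memory"))  # False: guest memory stays confidential
print(may_access("dom0", "guest_vm_9", "map_memory"))  # True: explicitly granted
```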
56

Security strategies in wireless sensor networks

Harbin, James R. January 2011 (has links)
This thesis explores security issues in wireless sensor networks (WSNs), and network-layer countermeasures to threats involving routing metrics. Before WSNs can be integrated into daily infrastructure, it is vital that the sensor network technologies involved become sufficiently mature and robust against malicious attack to be trustworthy. Although cryptographic approaches and dedicated security modules are vital, it is important to employ defence in depth via a suite of approaches. A productive approach is to integrate security awareness into the network-layer delivery mechanisms, such as multihop routing or longer-range physical-layer approaches. An ideal approach would be workable under realistic channel conditions, would require no additional control or sentry packets, and would be fully distributed and scalable. A novel routing protocol is presented (disturbance-based routing) which attempts to avoid wormholes via their static and dynamic topology properties. Simulation results demonstrate its avoidance performance advantages in a variety of topologies. A reputation-based routing approach is introduced, drawing insights from reinforcement learning, which retains routing decisions from an earlier stabilisation phase. Results again demonstrate favourable avoidance properties at a reduced energy cost. Distributed beamforming is explored at the system level, with an architecture provided that allows it to support data delivery in a predominantly multihop routing topology. The vulnerability of beamforming data transmission to jamming attacks is considered analytically and via simulation, and contrasted with multihop routing. A cross-layer approach (physical reputation-based routing), which feeds physical-layer information into the reputation-based routing algorithm, is presented, permitting candidate routes that make use of the best beamforming relays to be discovered. Finally, consideration is given to further work on how cognitive security can save energy by allowing nodes to develop a more efficient awareness of their threat environment.
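To illustrate the flavour of a reputation-based routing scheme informed by reinforcement learning, the sketch below maintains an exponentially weighted estimate of each neighbour's forwarding reliability and excludes unreliable next hops. The learning rate, threshold and node names are assumed values, not the thesis's parameters or protocol.

```python
# Illustrative sketch: exponentially weighted reputation updates for neighbour
# nodes, of the kind a reputation-based routing scheme might use. All values
# here are assumptions for demonstration.
ALPHA = 0.2            # learning rate for the moving average (assumed)
TRUST_THRESHOLD = 0.5  # minimum reputation to remain a candidate next hop (assumed)

reputations = {"node_b": 0.9, "node_c": 0.9}   # start optimistic

def update_reputation(neighbour, delivered):
    """Blend the latest observation (1 = forwarded, 0 = dropped) into the estimate."""
    reputations[neighbour] = (1 - ALPHA) * reputations[neighbour] + ALPHA * (1.0 if delivered else 0.0)

def trusted_next_hops():
    return [n for n, r in reputations.items() if r >= TRUST_THRESHOLD]

for outcome in [False, False, False]:          # node_c repeatedly drops packets
    update_reputation("node_c", outcome)

print(reputations)
print("candidate next hops:", trusted_next_hops())
```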
57

A framework enabling the cross-platform development of service-based cloud applications

Gonidis, Fotios January 2015 (has links)
Among all the different kinds of service offering available in the cloud, ranging from compute, storage and networking infrastructure to integrated platforms and software services, one of the more interesting is the cloud application platform, a kind of platform as a service (PaaS) which integrates cloud applications with a collection of platform basic services. This kind of platform is neither so open that it requires every application to be developed from scratch, nor so closed that it only offers services from a pre-designed toolbox. Instead, it supports the creation of novel service-based applications, consisting of micro-services supplied by multiple third-party providers. Software service development at this granularity has the greatest prospect for bringing about the future software service ecosystem envisaged for the cloud. Cloud application developers face several challenges when seeking to integrate the different micro-service offerings from third-party providers. There are many alternative offerings for each kind of service, such as mail, payment or image processing services, and each assumes a slightly different business model. We characterise these differences in terms of (i) workflow, (ii) exposed APIs and (iii) configuration settings. Furthermore, developers need to access the platform basic services in a consistent way. To address this, we present a novel design methodology for creating service-based applications. The methodology is exemplified in a Java framework, which (i) integrates platform basic services in a seamless way and (ii) alleviates the heterogeneity of third-party services. The benefit is that designers of complete service-based applications are no longer locked into the vendor-specific vagaries of third-party micro-services and may design applications in a vendor-agnostic way, leaving open the possibility of future micro-service substitution. The framework architecture is presented in three phases. The first describes the abstraction of platform basic services and third-party micro-service workflows. The second describes the method for extending the framework for each alternative micro-service implementation, with examples. The third describes how the framework executes each workflow and generates suitable client adaptors for the web APIs of each micro-service.
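The vendor-agnostic abstraction plus per-provider adaptor idea can be sketched as follows. Although the thesis exemplifies its methodology in Java, this sketch uses Python for consistency with the other examples on this page; the interface, provider names and methods are hypothetical, not the framework's actual API.

```python
# Illustrative sketch of abstraction plus adaptors: the application codes
# against a generic mail-service interface, and per-provider adaptors absorb
# workflow and API differences. Providers and methods are hypothetical.
from abc import ABC, abstractmethod

class MailService(ABC):
    """Vendor-agnostic interface a service-based application depends on."""
    @abstractmethod
    def send(self, to: str, subject: str, body: str) -> None: ...

class ProviderAMail(MailService):
    def send(self, to, subject, body):
        # A real adaptor would call Provider A's web API here.
        print(f"[ProviderA] {to}: {subject}")

class ProviderBMail(MailService):
    def send(self, to, subject, body):
        # Provider B expects a different workflow; the adaptor hides it.
        print(f"[ProviderB] {to}: {subject}")

def notify(mail: MailService, user: str):
    mail.send(user, "Welcome", "Your account is ready.")

# Swapping micro-service providers requires no change to application code.
notify(ProviderAMail(), "user@example.com")
notify(ProviderBMail(), "user@example.com")
```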
58

A quality-aware cloud selection service for computational modellers

Nizamani, Shahzad Ahmed January 2012 (has links)
This research sets out to help computational modellers select the most cost-effective Cloud service provider when they opt to use Cloud computing in preference to in-house High Performance Computing (HPC) facilities. A novel Quality-aware computational Cloud Selection (QAComPS) service is proposed and evaluated. This selects the best (cheapest) Cloud provider's service and, after selection, automatically sets up and runs the selected service. QAComPS includes an integrated ontology that makes use of OWL 2 features. The ontology provides a standard specification and a common vocabulary for describing different Cloud providers' services. The semantic descriptions are processed by the QAComPS Information Management service. These provider descriptions are then used by a filter and the MatchMaker to automatically select the highest-ranked service that meets the user's requirements. A SAWSDL interface is used to transfer semantic information to and from the QAComPS Information Management service and the non-semantic selection and run services. The QAComPS selection service has been quantitatively evaluated for accuracy and efficiency against the Quality Matchmaking Process (QMP) and the Analytical Hierarchy Process (AHP). The service was also evaluated qualitatively by a group of computational modellers. The results of the evaluation were very promising and demonstrated QAComPS's potential to make Cloud computing more accessible and cost-effective for computational modellers.
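The filter-then-rank selection step the abstract describes can be pictured as below: discard offers that miss the modeller's requirements, then pick the cheapest of the rest. The offer attributes and values are hypothetical, not drawn from the QAComPS ontology or MatchMaker.

```python
# Illustrative sketch of filter-then-rank selection over provider descriptions.
# Attributes and offers are hypothetical examples.
offers = [
    {"provider": "CloudX", "cores": 64, "ram_gb": 256, "price_per_hour": 4.10},
    {"provider": "CloudY", "cores": 32, "ram_gb": 128, "price_per_hour": 2.50},
    {"provider": "CloudZ", "cores": 64, "ram_gb": 512, "price_per_hour": 3.80},
]

def select(offers, min_cores, min_ram_gb):
    """Filter out offers that fail the requirements, then return the cheapest."""
    candidates = [o for o in offers
                  if o["cores"] >= min_cores and o["ram_gb"] >= min_ram_gb]
    return min(candidates, key=lambda o: o["price_per_hour"]) if candidates else None

best = select(offers, min_cores=64, min_ram_gb=256)
print("selected:", best["provider"] if best else "no offer meets the requirements")
```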
59

Scientific workflow execution reproducibility using cloud-aware provenance

Ahmad, M. K. H. January 2016 (has links)
Scientific experiments and projects such as CMS and neuGRIDforYou (N4U) are annually producing data of the order of petabytes. They adopt scientific workflows to analyse this large amount of data in order to extract meaningful information. These workflows are executed over distributed resources, both compute and storage in nature, provided by the Grid and more recently by the Cloud. The Cloud is becoming the playing field for scientists as it provides scalability and on-demand resource provisioning. Reproducing a workflow execution to verify results is vital for scientists and has proven to be a challenge. According to one study (Belhajjame et al. 2012), around 80% of workflows cannot be reproduced, and 12% of these failures are due to the lack of information about the execution environment. The dynamic and on-demand provisioning capability of the Cloud makes this more challenging. To overcome these challenges, this research aims to investigate how to capture the execution provenance of a scientific workflow along with the resources used to execute the workflow in a Cloud infrastructure. This information will then enable a scientist to reproduce workflow-based scientific experiments on the Cloud infrastructure by re-provisioning similar resources on the Cloud. Provenance has been recognised as information that helps in debugging, verifying and reproducing a scientific workflow execution. The recent adoption of Cloud-based scientific workflows presents an opportunity to investigate the suitability of existing approaches, or to propose new approaches, for collecting provenance information from the Cloud and utilising it for workflow reproducibility on the Cloud. From the literature analysis, it was found that existing approaches for the Grid or Cloud do not provide detailed resource information and do not present an automatic provenance capturing approach for the Cloud environment. To mitigate these challenges and fill the knowledge gap, a provenance-based approach, ReCAP, is proposed in this thesis. In ReCAP, workflow execution reproducibility is achieved by (a) capturing the Cloud-aware provenance (CAP), (b) re-provisioning similar resources on the Cloud and re-executing the workflow on them, and (c) comparing the provenance graph structure, including the Cloud resource information, and the outputs of the workflows. ReCAP captures the Cloud resource information and links it with the workflow provenance to generate Cloud-aware provenance. The Cloud-aware provenance consists of configuration parameters relating to hardware and software describing a resource on the Cloud. Once captured, this information aids in re-provisioning the same execution infrastructure on the Cloud for workflow re-execution. Since resources on the Cloud can be used in a static or dynamic manner (i.e. destroyed when a task is finished), this presents a challenge for the devised provenance capturing approach. In order to deal with these scenarios, different capturing and mapping approaches are presented in this thesis. These mapping approaches work outside the virtual machine and collect resource information from the Cloud middleware; thus they do not affect job performance. The impact of the collected Cloud resource information on the job as well as on the workflow execution has been evaluated through various experiments in this thesis. In ReCAP, workflow reproducibility is verified by comparing the provenance graph structure, the infrastructure details and the output produced by the workflows.
To compare the provenance graphs, the captured provenance information, including infrastructure details, is translated into a graph model. The graphs of the original execution and the reproduced execution are then compared in order to analyse their similarity. In this regard, two comparison approaches are presented that can produce a qualitative as well as a quantitative analysis of the graph structure. The ReCAP framework and its constituent components are evaluated using different scientific workflows, such as ReconAll and Montage, from the domains of neuroscience (i.e. N4U) and astronomy respectively. The results have shown that ReCAP is able to capture the Cloud-aware provenance and demonstrate workflow execution reproducibility by re-provisioning the same resources on the Cloud. The results have also demonstrated that the provenance comparison approaches can determine the similarity between two given provenance graphs. The results of the workflow output comparison have shown that this approach is suitable for comparing the outputs of scientific workflows, especially deterministic workflows.
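One simple way to picture a quantitative provenance-graph comparison is to encode each execution as a set of labelled edges and measure their overlap, as in the sketch below. The node names, edge encoding and Jaccard measure are assumptions for illustration, not ReCAP's actual graph model or comparison algorithm.

```python
# Illustrative sketch: compare two provenance graphs, encoded as sets of
# labelled edges, using Jaccard similarity. All names here are hypothetical.
original_run = {
    ("workflow", "ran_on", "vm:4cpu-8gb"),
    ("task_recon", "used", "input.nii"),
    ("task_recon", "generated", "surface.obj"),
}
reproduced_run = {
    ("workflow", "ran_on", "vm:4cpu-8gb"),
    ("task_recon", "used", "input.nii"),
    ("task_recon", "generated", "surface_v2.obj"),   # differing output edge
}

def jaccard_similarity(g1, g2):
    """|intersection| / |union| of the two edge sets."""
    return len(g1 & g2) / len(g1 | g2)

print(f"structural similarity: {jaccard_similarity(original_run, reproduced_run):.2f}")
```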
60

Cloud computing in the large scale organisation : potential benefits and overcoming barriers to deployment

Bellamy, Martin Clifford January 2013 (has links)
There are three focal questions addressed in this thesis:
• Firstly, can large organisations, particularly public-sector or governmental ones, realise benefits by transitioning from the ICT delivery models prevalent in the late 2000s to Cloud computing services?
• Secondly, in what circumstances can the benefits best be realised, and how and when can the associated risk-reward trade-off be managed effectively?
• Thirdly, what steps can be taken to ensure maximum benefit is gained from using Cloud computing? This includes a consideration of the technical and organisational obstacles that need to be overcome to realise these benefits in large organisations.
The potential benefits for organisations using Cloud computing services include cost reductions, faster innovation, delivery of modern information-based services that meet consumers' expectations, and improved choice and affordability of specialist services. There are many examples of successful Cloud computing deployments in large organisations that are saving time and money, although in larger organisations these are generally in areas that do not involve the use of sensitive information. Despite the benefits, as of 2013 Cloud computing services account for less than 5% of most large organisations' ICT budgets. The key inhibitor to wider deployment is that the use of Cloud computing services exposes organisations to new risks that can be costly to address. However, the level of cost reduction that can be attained means that progressive deployment of Cloud computing services seems inevitable. The challenge, therefore, is how best to manage the associated risks in an effective and efficient manner. This thesis considers the origin and benefits of Cloud computing, identifies the barriers to take-up and explores how these can be overcome, and considers how cloud service brokerages could develop further to close the gap by building new capabilities that accelerate take-up and benefits realisation.
