41

Detecting perceptual breakthrough in RSVP with applications in deception detection: methodological, behavioural and electrophysiological explorations

Zoumpoulaki, Alexia January 2016 (has links)
This thesis explores perceptual breakthrough in rapid serial visual presentation (RSVP) for deception detection applications. In RSVP, visual stimuli are presented in rapid succession, pushing the perceptual processing system to its limit and allowing only a limited number of stimuli to be processed and encoded. In this thesis we investigate what type of stimuli capture attention in RSVP, taking advantage of both physiological and behavioural measurements. The main focus of the studies presented here follows up on work showing that perceptual breakthrough in RSVP can be used as a marker of concealed knowledge in deception detection tests (Fringe P300). The thesis is divided into two research contribution parts. Firstly, we develop methods for analysing Event Related Potential (ERP) data in order to facilitate assessment of perceptual breakthrough in the experiments presented later in this thesis. We focus on reducing false positives while at the same time successfully measuring the underlying effects. We present and evaluate methods for measuring latencies and selecting Regions of Interest (ROIs) through simulations and experimental data. Secondly, we explore perceptual breakthrough in RSVP with applications in deception detection. For that purpose, we conducted two studies. The first study explores incidentally acquired information by recording the P300 ERP component from participants after they acted out a mock crime scenario. The main hypothesis was that concealed information is salient to a guilty person, and thus associated stimuli will be involuntarily perceived. The second study explores the type of stimuli that capture attention in RSVP by addressing issues related to encoding and emotional arousal, and whether attention can be directed through contextual priming independent of the main task. These studies increase our understanding of how stimuli are processed in RSVP and can provide useful suggestions for designing more successful ERP- and RSVP-based deception detection applications, both in terms of stimulus presentation and data analysis.
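To make the latency-measurement theme concrete, the sketch below computes a fractional-area latency, a standard ERP latency measure of the kind such simulation studies evaluate. It is an illustration on simulated data, not the specific method developed in the thesis, and all waveform parameters are invented.

```python
import numpy as np

def fractional_area_latency(erp, times, t_start, t_end, fraction=0.5):
    """Estimate component latency as the time at which a given fraction of
    the area under the rectified waveform within a search window has been
    accumulated. A standard ERP latency measure, used here for illustration."""
    mask = (times >= t_start) & (times <= t_end)
    window = np.abs(erp[mask])          # rectify to handle negative deflections
    cum = np.cumsum(window)
    if cum[-1] == 0:
        return np.nan                   # flat signal: latency undefined
    idx = np.searchsorted(cum, fraction * cum[-1])
    return times[mask][idx]

# Simulated P300-like waveform: Gaussian deflection peaking at 400 ms plus noise.
rng = np.random.default_rng(0)
times = np.arange(-100, 800)            # ms, 1 kHz sampling
erp = 5 * np.exp(-0.5 * ((times - 400) / 60) ** 2) + rng.normal(0, 0.3, times.size)
print(f"50% fractional-area latency: {fractional_area_latency(erp, times, 250, 600):.0f} ms")
```

Area-based latency measures like this one are typically more robust to noise than simple peak picking, which is one reason they feature in methodological comparisons of this kind.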
42

Relationship and cloud factors affecting government confidence in the public cloud

Alghanim, Waleed January 2017 (has links)
Despite the advantages of the public cloud, governments are still reluctant to deploy sensitive data and critical systems into it. The advantages of scalability and cost are attractive to governments, and the current trend is for governments to consider placing more of their data and systems in the public cloud, towards a more comprehensive government cloud solution. However, there are major concerns related to the public cloud that are especially significant to governments and that cause reluctance in terms of public cloud adoption. Such concerns include security and privacy, governance, compliance, and performance. If these concerns are addressed, governments will perceive less risk and be more confident about deploying to the public cloud. Besides the obvious technical solutions, which include improving security, another solution is an effective cloud service provider (CSP)-government relationship. Towards the development of such a solution, the study contributes a novel approach to researching the CSP-government relationship in order to reveal, in depth and comprehensively, the relevant relationship and associated cloud issues, often neglected in previous research. Specifically, the research design was realised through a mixed methods approach using a questionnaire and semi-structured interviews with senior IT professionals in various government ministries and departments in Saudi Arabia. The findings not only offer a comprehensive and in-depth understanding of the relationship, but also reveal specific relationship and cloud issues as problems to address in developing a solution that increases government confidence in the public cloud. Specifically, it was found that governments were more concerned about areas of the cloud that are more relevant to government, and there was often an associated lack of trust or perception of risk in these areas. Moreover, it was found that for more specific areas of the cloud there was increasing concern in terms of trust and risk, the ability to negotiate and collaborate, and the perception of reputation. Based on these findings, which also revealed the various interplays between relationship factors as a novel contribution, the study offers recommendations to CSPs on how they may improve their relationship with government. This is to be achieved by resolving relationship issues and associated cloud concerns within the relationship context, towards improving government confidence in the public cloud. The findings also have implications for other parties, including governments considering the public cloud and those engaged in academic research on government reluctance to use the public cloud.
43

Partitioning workflow applications over federated clouds to meet non-functional requirements

Wen, Zhenyu January 2016 (has links)
With cloud computing, users can acquire computer resources when they need them on a pay-as-you-go business model. Because of this, many applications are now being deployed in the cloud, and there are many different cloud providers worldwide. Importantly, all these infrastructure providers offer services with different levels of quality. For example, cloud data centres are governed by the privacy and security policies of the country where the centre is located, while many organisations have created their own internal "private cloud" to meet security needs. With all these varieties and uncertainties, application developers who decide to host their system in the cloud face the issue of which cloud to choose to get the best operational conditions in terms of price, reliability and security. And the decision becomes even more complicated if their application consists of a number of distributed components, each with slightly different requirements. Rather than trying to identify the single best cloud for an application, this thesis considers an alternative approach, that is, combining different clouds to meet users' non-functional requirements. Cloud federation offers the ability to distribute a single application across two or more clouds, so that the application can benefit from the advantages of each one of them. The key challenge for this approach is how to find the distribution (or deployment) of application components which can yield the greatest benefits. In this thesis, we tackle this problem and propose a set of algorithms, and a framework, to partition a workflow-based application over federated clouds in order to exploit the strengths of each cloud. The specific goal is to split a distributed application structured as a workflow such that the security and reliability requirements of each component are met, whilst the overall cost of execution is minimised. To achieve this, we propose and evaluate a cloud broker for partitioning a workflow application over federated clouds. The broker integrates with the e-Science Central cloud platform to automatically deploy a workflow over public and private clouds. We developed a deployment planning algorithm to partition a large workflow application across federated clouds so as to meet security requirements and minimise the monetary cost. A more generic framework is then proposed to model, quantify and guide the partitioning and deployment of workflows over federated clouds. This framework considers the situation where changes in cloud availability (including cloud failure) arise during workflow execution.
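As an illustration of the partitioning problem the broker solves, the following sketch exhaustively assigns the tasks of a small workflow to clouds so that each task's security requirement is met while compute cost plus inter-cloud transfer cost is minimised. The clouds, tasks, prices and security levels are all hypothetical, and the exhaustive search stands in for the thesis's deployment planning algorithm.

```python
from itertools import product

# Hypothetical clouds: security level offered and relative compute price per work unit.
clouds = {
    "private":  {"security": 3, "price": 5.0},
    "public_a": {"security": 2, "price": 1.0},
    "public_b": {"security": 1, "price": 0.6},
}
# Hypothetical workflow: task -> (work units, minimum security level);
# edges carry data volumes between tasks.
tasks = {"ingest": (2, 1), "anonymise": (1, 3), "analyse": (4, 1), "report": (1, 2)}
edges = {("ingest", "anonymise"): 10, ("anonymise", "analyse"): 8, ("analyse", "report"): 1}
TRANSFER_COST = 0.2   # cost per unit of data moved between *different* clouds

def cost(assign):
    compute = sum(work * clouds[assign[t]]["price"] for t, (work, _) in tasks.items())
    transfer = sum(TRANSFER_COST * vol for (u, v), vol in edges.items()
                   if assign[u] != assign[v])
    return compute + transfer

names = list(tasks)
best = None
for combo in product(clouds, repeat=len(names)):   # exhaustive: fine for tiny workflows
    assign = dict(zip(names, combo))
    if all(clouds[assign[t]]["security"] >= sec for t, (_, sec) in tasks.items()):
        if best is None or cost(assign) < cost(best):
            best = assign
print(best, f"cost={cost(best):.1f}")
```

The interplay the sketch exposes is the essence of the problem: a high-security task forces an expensive cloud, and heavy data flows then pull its neighbours onto the same cloud even when a cheaper one would satisfy their own requirements.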
44

Factors that impact the cloud portability of legacy Web applications

Costa Silva, Gabriel January 2016 (has links)
The technological dependency of products or services provided by a particular cloud platform or provider (i.e. cloud vendor lock-in) leaves cloud users unprotected against service failures and providers going out of business, and unable to modernise their software applications by exploiting new technologies and cheaper services from alternative clouds. High portability is key to ensure a smooth migration of software applications between clouds, reducing the risk of vendor lock-in. This research identifies and models key factors that impact the portability of legacy web applications in cloud computing. Unlike existing cloud portability studies, we use a combination of techniques from empirical software engineering, software quality and areas related to cloud, including service-oriented computing and distributed systems, to carry out a rigorous experimental study of four factors impacting on cloud application portability. In addition, we exploit established methods for software effort prediction to build regression models for predicting the effort required to increase cloud application portability. Our results show that software coupling, authentication technology, cloud platform and service are statistically significant and scientifically relevant factors for cloud application portability in the experiments undertaken. Furthermore, the experimental data enabled the development of fair (mean magnitude of relative error, MMRE, between 0.493 and 0.875), good (MMRE between 0.386 and 0.493) and excellent (MMRE not exceeding 0.368) regression models for predicting the effort of increasing the portability of legacy cloud applications. By providing empirical evidence of factors that impact cloud application portability and building effort prediction models, our research contributes to improving decision making when migrating legacy applications between clouds, and to mitigating the risks associated with cloud vendor lock-in.
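The effort models above are judged by MMRE, which is simple to state and compute; a minimal sketch, with hypothetical effort figures, follows.

```python
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean(|actual - predicted| / actual).
    Per the abstract's bands: <=0.368 excellent, 0.386-0.493 good,
    0.493-0.875 fair."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted) / actual)

# Hypothetical example: actual vs. predicted porting effort (person-hours).
actual    = [12.0, 30.0, 8.0, 45.0, 20.0]
predicted = [10.0, 34.0, 9.0, 40.0, 25.0]
print(f"MMRE = {mmre(actual, predicted):.3f}")
```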
45

Application partitioning and offloading in mobile cloud computing

Javied, Asad January 2017 (has links)
With the emergence of high quality and rich multimedia content, end user demands for content processing and delivery are increasing rapidly. In view of increasing user demands and quality of service (QoS) requirements, cloud computing offers a huge amount of online processing and storage resources which can be exploited on demand. Moreover, the current high speed 4G mobile network, i.e. Long Term Evolution (LTE), enables leveraging of cloud resources. Mobile Cloud Computing (MCC) is an emerging paradigm comprising three heterogeneous domains: mobile computing, cloud computing, and wireless networks. MCC aims to enhance the computational capabilities of resource-constrained mobile devices towards a rich user experience. Decreasing cloud cost and latency are attracting the research community to exploit cloud computing resources to offload and process multimedia content in the cloud. The high bandwidth and low latency of LTE make it a suitable candidate for delivering rich multimedia cloud content back to the user. The convergence of cloud and LTE gives rise to an end-to-end communication framework which opens up the possibility for new applications and services. In addition to the cloud and the network, the end user and the application constitute the other entities of the end-to-end communication framework. End user quality of service and the particular application profile dictate resource allocation in the cloud and the wireless network. This research formulates the different building blocks of end-to-end communication and introduces a new paradigm to exploit network and cloud resources for the end user. We employ a multi-objective optimization strategy to propose and simulate an end-to-end communication framework which promises to optimize the behavior of MCC based end-to-end communication, delivering appropriate quality of service (QoS) with minimum cloud and network resources. We then apply application partitioning and offloading schemes to offload certain parts of an application to the cloud to improve energy efficiency and response time. As deliverables of this research, the behavior of the different entities (cloud, LTE based mobile network, user and application context) has been modeled. In addition, a comprehensive application partitioning and offloading framework has been proposed in order to minimize the cloud and network resources needed to achieve the user's required QoS. Keywords: Long Term Evolution (LTE), Cloud computing, Application partitioning and offloading, Image Retrieval.
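The offloading decision at the heart of such schemes can be illustrated with a textbook device-energy model: offload when transmitting the input data and idling during remote execution costs the device less energy than computing locally. The sketch below uses this model with invented parameter values; it is not the thesis's exact formulation.

```python
def should_offload(cycles, data_bits, f_local_hz, f_cloud_hz,
                   uplink_bps, p_compute_w, p_tx_w, p_idle_w):
    """Compare executing a task locally vs. offloading it over the network,
    from the device's point of view. Returns (offload?, local_J, offload_J).
    A textbook energy model, used here for illustration only."""
    t_local = cycles / f_local_hz
    e_local = p_compute_w * t_local

    t_tx = data_bits / uplink_bps          # time to ship the input data
    t_cloud = cycles / f_cloud_hz          # remote execution time
    e_offload = p_tx_w * t_tx + p_idle_w * t_cloud   # device transmits, then idles
    return e_offload < e_local, e_local, e_offload

# Hypothetical values: 1 Gcycle task, 2 MB input, LTE-class uplink of 10 Mbit/s.
offload, e_l, e_o = should_offload(1e9, 2 * 8e6, 1e9, 10e9, 10e6, 0.9, 1.3, 0.3)
print(f"offload={offload}  local={e_l:.2f} J  offloaded={e_o:.2f} J")
```

With these numbers offloading loses because shipping 2 MB over a 10 Mbit/s uplink dominates; shrink the input or raise the uplink rate and the decision flips, which is exactly the sensitivity a partitioning framework must reason about per application component.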
46

Internet traffic volumes characterization and forecasting

Vlachos, Nikolaos January 2016 (has links)
Internet usage increases every year, and the need to estimate the growth of the generated traffic has become a major topic. Forecasting actual figures in advance is essential for bandwidth allocation, networking design and investment planning. In this thesis, novel mathematical equations are presented to model and predict long-term Internet traffic in terms of total aggregate volume, both globally and more locally. Historical traffic data from consecutive years have revealed hidden numerical patterns as the values progress year over year, and this trend can be well represented with appropriate mathematical relations. The proposed formulae have excellent fitting properties over long-history measurements and can indicate forthcoming traffic for the next years with an exceptionally low prediction error. In cases where the awaited traffic data have since become available, the suggested equations provide more successful results than the respective projections from leading worldwide research. The studies also imply that future traffic strongly depends on past activity and on the growth of Internet users, provided that a large and representative sample of pertinent data exists from large geographical areas. To the best of my knowledge, this work is the first to introduce effective prediction methods that rely exclusively on the static attributes and progression properties of historical values.
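As a sketch of the general approach (fit a growth model to historical yearly volumes, then extrapolate), the example below fits a simple exponential trend with SciPy. The traffic figures are invented, and the thesis's actual equations are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical yearly aggregate traffic volumes (exabytes/month).
years = np.arange(2010, 2017)
traffic = np.array([20.2, 27.5, 37.1, 50.3, 67.8, 91.4, 122.0])

def growth(t, a, r):
    return a * np.exp(r * (t - years[0]))   # simple exponential trend

(a, r), _ = curve_fit(growth, years, traffic, p0=(traffic[0], 0.3))
for y in (2017, 2018):
    print(f"{y}: predicted {growth(y, a, r):.1f} EB/month")
# In-sample relative error indicates goodness of fit.
print("max relative error:",
      f"{np.max(np.abs(growth(years, a, r) - traffic) / traffic):.2%}")
```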
47

Supporting device mobility and state distribution through indirection, topological isomorphism and evolutionary algorithms

Attwood, Andrew January 2014 (has links)
The Internet of Things will result in the deployment of many billions of wireless embedded systems, creating interactive pervasive environments. These pervasive networks will provide seamless access to sensors and actuators, enabling organisations and individuals to control and monitor their environment. The majority of devices attached to the Internet of Things will be static. However, it is anticipated that with the advent of body and vehicular networks we will see many mobile Internet of Things devices. During emergency situations, the flow of data across the Internet of Things may be disrupted, giving rise to a requirement for machine-to-machine interaction within the remaining environment. Current approaches to routing on the Internet and in wireless sensor networks fail to address the requirements of mobility and isolated operation during failure, or to deal with the imbalance caused by either initial or failing topologies when applying geographic coordinate-based peer-to-peer storage mechanisms. The use of global and local DHT mechanisms to facilitate improved reachability and data redundancy is explored in this thesis, resulting in the development of an architecture to support the global reachability of static and mobile Internet of Things devices. This is achieved through the development of a global indirection mechanism supporting position-relative wireless environments. To support the distribution and preservation of device state within the wireless domain, a new geospatial keying mechanism is presented; this enables a device to persist state within an overlay with certain guarantees as to its survival. The guarantees relating to geospatial storage rely on the balanced allocation of distributed information. This thesis details a mechanism to balance the address space utilising evolutionary techniques. Following the generation of an initial balanced topology, we present a protocol that applies topological isomorphism to provide continued balancing and reachability of data following partial network failure. This dissertation details the analysis of the proposed protocols and their evaluation through simulation. The results show that our proposed architecture operates within the capabilities of the devices that operate in this space. The evaluation of geospatial keying within the wireless domain showed that the mechanism presented provides better device state preservation than the random placement exhibited by the storage of state in overlay DHT schemes. Experiments confirm device storage imbalance when using geographic routing; however, the results provided in this thesis show that the use of genetic algorithms can provide an improved identity assignment through the application of alternating fitness between reachability and ideal key displacement. This topology, as is common in geographical routing, was susceptible to imbalance following device failure. The use of topological isomorphism provided an improvement over existing geographical routing protocols in counteracting the loss of reachability and the imbalance caused by failure.
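The evolutionary balancing idea can be sketched as follows: evolve node identifier assignments on a ring so that each node's share of the key space approaches the ideal. This toy (mu + lambda)-style loop optimises balance only; the thesis additionally alternates the fitness with a reachability term, omitted here. All sizes are arbitrary.

```python
import random

random.seed(1)
N, SPACE = 16, 2**16          # 16 devices in a 16-bit identifier space
IDEAL = SPACE / N             # perfectly balanced arc length per node

def imbalance(ids):
    """Sum of deviations of each node's arc (distance to its successor on
    the identifier ring) from the ideal arc length; 0 means perfect balance."""
    s = sorted(ids)
    arcs = [(s[(i + 1) % N] - s[i]) % SPACE for i in range(N)]
    return sum(abs(a - IDEAL) for a in arcs)

def mutate(ids):
    child = ids[:]
    child[random.randrange(N)] = random.randrange(SPACE)   # reassign one identity
    return child

# Evolutionary loop: breed mutants, keep the 20 best-balanced assignments.
pop = [[random.randrange(SPACE) for _ in range(N)] for _ in range(20)]
for gen in range(500):
    pop += [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(pop, key=imbalance)[:20]
print("best imbalance:", imbalance(pop[0]))
```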
48

Workflow framework for cloud-based distributed simulation

Chaudhry, Nauman Riaz January 2016 (has links)
Although distributed simulation (DS) using parallel computing has received considerable research and development in a number of compute-intensive fields, it has yet to be significantly adopted by the wider simulation community. According to the scientific literature, major reasons for the low adoption of cloud-based services for DS execution are the perceived complexity of understanding and managing the underlying architecture and software for deploying DS models, as well as the remaining challenges in the performance and interoperability of cloud-based DS. The focus of this study, therefore, has been to design and test the feasibility of a well-integrated, generic, workflow-structured framework that is universal in character and transparent in implementation. The choice of a workflow framework for implementing cloud-based DS was influenced by the ability of scientific workflow management systems to define, execute, and actively manage computing workflows. As a result of this study, a hybrid workflow framework, combined with four cloud-based implementation services, has been used to develop an integrated potential standard for workflow implementation of cloud-based DS, named the WORLDS framework (Workflow Framework for Cloud-based Distributed Simulation). The main contribution of this research study is the WORLDS framework itself, which identifies five services (including a Parametric Study Service) that can potentially be provided through the use of workflow technologies to deliver effective cloud-based distributed simulation that is transparently provisioned for the user. This takes DS a significant step closer to its provision as a viable cloud-based service (DSaaS). In addition, the study introduces a simple workflow solution for applying parametric studies to distributed simulations. Further research to confirm the generic nature of the workflow framework, to apply and test modified HLA standards, and to introduce a simulation analytics function by modifying the workflow is anticipated.
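A parametric study maps naturally onto a workflow: the cross-product of parameter values becomes a set of independent simulation tasks executed in parallel. The sketch below shows that fan-out shape with a local process pool standing in for cloud workers; the simulation itself is a toy stand-in, not part of the WORLDS framework.

```python
from itertools import product
from concurrent.futures import ProcessPoolExecutor

def run_simulation(params):
    """Stand-in for launching one distributed-simulation run; a real
    workflow service would submit this to cloud workers instead."""
    arrival_rate, servers = params
    # Toy utilisation metric in place of an actual simulation run.
    return params, arrival_rate / servers

# Parametric study: one independent workflow task per parameter combination.
sweep = list(product([0.5, 1.0, 1.5], [1, 2, 4]))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for params, utilisation in pool.map(run_simulation, sweep):
            print(params, f"utilisation={utilisation:.2f}")
```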
49

On the elastic optimisation of cloud IaaS environments

Chatziprimou, Kleopatra January 2016 (has links)
Elasticity refers to the auto-scaling ability of clouds towards optimally matching their resources to actual demand conditions. An important problem facing infrastructure and service providers is how to optimise their resource configurations online, to elastically serve time-varying demands. Most scaling methodologies provide resource reconfiguration decisions to maintain quality properties under environment changes. However, issues related to the timeliness of such reconfiguration decisions are often neglected. A trade-off between the optimality of the reconfiguration solutions and the time cost to obtain these solutions is evident in the current literature. Highly accurate algorithms require a lot of data and time to execute, while more simplistic models may be fast to converge but provide poor quality solutions. In this thesis, we present a methodology for online optimisation of cloud configurations. Our motive is to balance the optimality versus timeliness trade-off in dynamic configuration management. We first employ a search-based approach to extract near-optimal configurations considering mutually conflicting performance and business quality attributes. To reduce the burden of time-consuming fitness evaluations of the configurations' quality during search-based optimisation, we develop surrogate models that predict a configuration's quality from historical observations. We evaluate our technique using CloudSim-based cloud simulation. Our experimental results show that the proposed methodology can produce high quality configurations with a lead time of seconds and a prediction error within 6%.
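The surrogate idea can be sketched as: evaluate a small history of configurations with the expensive simulator, train a regressor on it, then screen many candidate configurations cheaply and simulate only the most promising. The sketch below uses a random forest and a toy fitness function in place of CloudSim; all configuration parameters are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def expensive_fitness(cfg):
    """Stand-in for a slow simulation-based evaluation (e.g. CloudSim):
    cfg = (num_vms, vm_cpu_share); lower is better (cost + SLA penalty)."""
    vms, share = cfg
    return vms * share * 0.1 + 50.0 / (vms * share) + rng.normal(0, 0.1)

# Phase 1: evaluate a small history of configurations with the real fitness.
history = rng.uniform([1, 0.1], [50, 1.0], size=(30, 2))
scores = np.array([expensive_fitness(c) for c in history])

# Phase 2: train a surrogate on the history, then screen many candidates
# cheaply and only simulate the single most promising one.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(history, scores)
candidates = rng.uniform([1, 0.1], [50, 1.0], size=(2000, 2))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("surrogate pick:", best, "-> real fitness:", expensive_fitness(best))
```

The timeliness gain comes from phase 2: two thousand surrogate predictions cost milliseconds, whereas two thousand simulator runs would dominate the reconfiguration window.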
50

Infrastructures for virtual computing: computing utilities and software services in the next generation Internet

Cohen, Jeremy Hugh January 2009 (has links)
No description available.
