81

Industry attitudes and behaviour towards web accessibility in general and age-related change in particular and the validation of a virtual third-age simulator for web accessibility training for students and professionals

Gilbertson, Terri January 2014 (has links)
While the need for web accessibility for people with disabilities is widely accepted, the same visibility does not extend to the accessibility needs of older adults. This research initially explored developer behaviour: how companies presented accessibility on their own websites, whether they published accessibility statements, whether accessibility was mentioned as a selling point to potential clients, and how accessible company homepages were. From this starting point the research focused on web accessibility for ageing in particular. A questionnaire was developed to explore the differences between developer views of general accessibility and accessibility for older people. The questionnaire findings indicated that ageing is not seen as an accessibility issue by a majority of developers. Awareness of ageing-related accessibility documentation was also very low, highlighting the need to raise awareness of accessibility practices for ageing. Current age-related documentation developed by the Web Accessibility Initiative was then examined and critiqued. The findings show a tension between the machine-centric Web Content Accessibility Guidelines 2.0 (WCAG 2.0) and the needs of older people. Comparison of the guidelines with research-derived findings reveals that the Assistive Technology (AT) centric structure of the documentation does not present accessibility practices in a context that matches the observed behaviour of older people. The documentation also fails to address the psycho-social ramifications of how older people choose to interact with technology, and of how they identify themselves in relation to any conditions they have which may be considered disabling. The need for a novel, engaging and awareness-raising tool resulted in the development of what is essentially a "virtual third-age simulator". This ageing simulator is the first to combine multiple impairments in an active simulation, and it uses eye-tracking technology to increase the fidelity of conditions resulting in partial sightedness. It also allows developers to view their own web content in addition to the lessons provided with the simulations in the software. The simulator was then validated in terms of its ability to raise awareness and to affect web industry professionals' intentions towards accessible practices that benefit older people.
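The abstract gives no implementation detail for the simulator; purely as an illustration of the kind of gaze-contingent rendering such a third-age simulator might use, the hedged sketch below blurs a page screenshot everywhere except a region around an assumed gaze point. The file name, gaze coordinates, radii and the Pillow-based approach are all assumptions, not the thesis's implementation.

```python
from PIL import Image, ImageDraw, ImageFilter

def simulate_partial_sight(screenshot_path, gaze_xy, clear_radius=120, blur_radius=12):
    """Blur a web-page screenshot except around the current gaze point,
    crudely mimicking partial sightedness / tunnel vision."""
    page = Image.open(screenshot_path).convert("RGB")
    blurred = page.filter(ImageFilter.GaussianBlur(blur_radius))

    # White-on-black mask: the clear (unblurred) disc follows the gaze point.
    mask = Image.new("L", page.size, 0)
    x, y = gaze_xy
    ImageDraw.Draw(mask).ellipse(
        (x - clear_radius, y - clear_radius, x + clear_radius, y + clear_radius),
        fill=255,
    )

    # Composite: sharp pixels inside the disc, blurred pixels elsewhere.
    return Image.composite(page, blurred, mask)

# Hypothetical usage, with coordinates that would come from an eye tracker.
if __name__ == "__main__":
    frame = simulate_partial_sight("homepage.png", gaze_xy=(640, 360))
    frame.save("homepage_simulated.png")
```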
82

User-centric quality of service provisioning in IP networks

Culverhouse, Mark January 2012 (has links)
The Internet has become the preferred transport medium for almost every type of communication, continuing to grow both in terms of the number of users and the services delivered. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and subsequently an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of giving priority to specific traffic types over coexisting services, whether through explicit resource reservation or traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (DiffServ). This use of static resource allocation and traffic-shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic types; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to enable a QoS-optimised experience for every Internet user, not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real ISP network infrastructures. The results show that, for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with best-effort Internet, traditional DiffServ and Weighted RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, through the avoidance of static resource allocation, can adapt with the Internet user as their use of services changes.
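CAPS itself is not specified in the abstract; as a rough sketch of the underlying idea only, the toy scheduler below recomputes per-flow bandwidth shares from the user's currently active services rather than from fixed traffic-class priorities, and throttles an unresponsive flow's share when congestion is observed. The class, field names and the 0.5 penalty factor are hypothetical, not CAPS parameters.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str            # e.g. "voip", "video", "p2p"
    responsive: bool     # does the flow back off under loss (TCP-like)?
    demand_kbps: float   # current offered load

def allocate_shares(flows, link_kbps, congested):
    """Split the user's aggregate share of the link across their active flows.

    Unlike a static DiffServ-style policy, the split is recomputed from the
    flows that are actually active; unresponsive flows are capped first when
    the link is congested so they cannot starve coexisting services."""
    weights = {}
    for f in flows:
        w = f.demand_kbps
        if congested and not f.responsive:
            w *= 0.5          # illustrative penalty, not a CAPS constant
        weights[f.name] = w
    total = sum(weights.values()) or 1.0
    return {name: link_kbps * w / total for name, w in weights.items()}

# Example: one user running VoIP, streaming video and a P2P transfer.
user_flows = [Flow("voip", True, 100), Flow("video", True, 3000), Flow("p2p", False, 8000)]
print(allocate_shares(user_flows, link_kbps=6000, congested=True))
```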
83

Self-aware and self-adaptive autoscaling for cloud based services

Chen, Tao January 2016 (has links)
Modern Internet services increasingly leverage cloud computing for flexible, elastic and on-demand provision. Typically, the Quality of Service (QoS) of cloud-based services can be tuned through different underlying cloud configurations and resources, e.g., the number of threads, CPU and memory, which are shared, leased and priced as utilities. This benefit is fundamentally grounded in autoscaling: an automatic and elastic process that adapts cloud configurations on demand according to time-varying workloads. This thesis proposes a holistic cloud autoscaling framework to effectively and seamlessly address existing challenges in the different logical aspects of autoscaling, including architecting the autoscaling system, modelling the QoS of the cloud-based service, determining the granularity of control, and making trade-off autoscaling decisions. The framework takes advantage of the principles of self-awareness and the related algorithms to adaptively handle the dynamics, uncertainties, QoS interference and trade-offs between objectives exhibited in the cloud. The major benefit is that, by leveraging the framework, cloud autoscaling can be achieved effectively without heavy human analysis or design-time knowledge. Through experiments using the RUBiS benchmark and realistic workloads in a real cloud setting, this thesis evaluates the effectiveness of the framework against various quality indicators and compares it with other state-of-the-art approaches.
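The framework's internals are not given in the abstract; the sketch below is only a generic reactive autoscaling loop of the kind the thesis builds upon, scaling the number of instances from an observed latency percentile against a target. The thresholds, metric name and the `observe_p95_ms`/`provision` callables are assumptions, and the thesis's self-aware approach notably aims to avoid exactly this sort of hand-tuned rule.

```python
import random
import time

def autoscale_loop(observe_p95_ms, provision, target_ms=200.0,
                   min_instances=1, max_instances=20, interval_s=30, rounds=None):
    """Minimal reactive autoscaler: grow when the observed 95th-percentile
    latency exceeds the target, shrink when there is ample headroom."""
    instances, step = min_instances, 0
    while rounds is None or step < rounds:
        latency = observe_p95_ms()
        if latency > target_ms and instances < max_instances:
            instances += 1                     # scale out
        elif latency < 0.5 * target_ms and instances > min_instances:
            instances -= 1                     # scale in
        provision(instances)
        time.sleep(interval_s)
        step += 1

# Hypothetical stand-ins for a monitoring hook and a cloud provisioning call.
autoscale_loop(observe_p95_ms=lambda: random.uniform(50, 400),
               provision=lambda n: print(f"provisioning {n} instances"),
               interval_s=0, rounds=5)
```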
84

Automated management cloud-platforms based on energy policies

Alansari, Marwah January 2016 (has links)
Delivering environmentally friendly services has become an important issue in cloud computing due to the awareness raised by governments and environmental conservation organisations about the impact of electricity usage on carbon footprints. Cloud providers and cloud consumers (organisations/enterprises) have their own defined green policies to control energy consumption at their data centres. At the service management level, green policies can be mapped to energy management policies or management policies. Focusing on the cloud consumer's side, management policies are described by business managers and can change regularly; these continual changes are driven by the nature of the technical environment, changes in regulation, and business requirements. There is therefore a gap between the level at which management policies are described and the level at which they are implemented in the cloud environment. This thesis provides a method to bridge that gap by (a) defining a specification for formulating management policies into an executable form for an infrastructure-as-a-service (IaaS) cloud model; (b) designing a framework to execute the described management policies automatically; and (c) proposing a modelling and analysis method to identify the potential energy management policy that would save energy cost. Each aspect covered in the thesis is evaluated with the help of an energy management case study for a private cloud scenario.
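The policy specification itself is not reproduced in the abstract; as an illustrative guess at what a management policy rendered in executable form might look like for an IaaS cloud, the sketch below encodes a manager-level rule as data and evaluates it against monitored readings. All rule names, thresholds and the `consolidate_vms` action are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class EnergyPolicy:
    """A manager-described rule in executable form: when `metric`
    crosses `threshold`, trigger `action`."""
    name: str
    metric: str
    threshold: float
    action: Callable[[], None]

    def evaluate(self, readings: Dict[str, float]) -> bool:
        if readings.get(self.metric, 0.0) > self.threshold:
            self.action()
            return True
        return False

def consolidate_vms():
    # Placeholder for an IaaS API call that packs VMs onto fewer hosts.
    print("consolidating VMs onto fewer physical hosts")

idle_power_policy = EnergyPolicy(
    name="cap-idle-power",
    metric="idle_power_kw",
    threshold=12.0,          # illustrative limit set by the business manager
    action=consolidate_vms,
)

idle_power_policy.evaluate({"idle_power_kw": 14.3})
```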
85

Enhancing programmability for adaptive resource management in next generation data centre networks

Jouet, Simon January 2017 (has links)
Recently, Data Centre (DC) infrastructures have been growing rapidly to support a wide range of emerging services and to provide the underlying connectivity and compute resources that facilitate the "*-as-a-Service" model. This has led to a multitude of services being multiplexed over a few very large-scale centralised infrastructures. In order to cope with the ebb and flow of users, services and traffic, infrastructures have been provisioned for peak demand, resulting in low average utilisation of resources. This over-provisioning has been further motivated by the complexity of predicting traffic demands over diverse timescales and the severe economic impact of outages. At the same time, the emergence of Software Defined Networking (SDN) offers new means to monitor and manage the network infrastructure to address this under-utilisation. This dissertation aims to show how measurement-based resource management can improve performance and resource utilisation by adaptively tuning the infrastructure to the changing operating conditions. To achieve this dynamicity, the infrastructure must be able to centrally monitor, notify and react based on the current operating state, from per-packet dynamics to long-standing traffic trends and topological changes. However, the management and orchestration abilities of current SDN realisations are too limited and must evolve for next-generation networks. The focus so far has been on logically centralising routing and forwarding decisions; in order to achieve the necessary fine-grained insight, however, the data plane of each individual device must be programmable so that it can collect and disseminate the metrics of interest. The results of this work demonstrate that a logically centralised controller can dynamically collect and measure network operating metrics, and subsequently compute and disseminate fine-tuned, environment-specific settings. They show how this approach can prevent TCP incast throughput collapse and improve TCP performance by an order of magnitude for partition-aggregate traffic patterns. Furthermore, the paradigm is generalised to show its benefits for other services widely used in DCs, such as routing, telemetry and security.
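As a rough illustration of the measurement-to-configuration loop described above, and not the dissertation's actual mechanism, the sketch below has a controller derive a TCP retransmission-timeout floor from RTT samples gathered in the data plane and push it towards end hosts; the RTT source, the tuning formula and the dissemination hook are all assumptions.

```python
import statistics

def compute_min_rto_us(rtt_samples_us, safety_factor=4.0, floor_us=1000):
    """Derive an environment-specific TCP minimum RTO from measured RTTs.

    In a low-latency DC fabric the usual default (hundreds of milliseconds)
    dwarfs actual RTTs and aggravates incast collapse; a value a few
    deviations above the measured mean reacts to real loss much sooner."""
    mean = statistics.mean(rtt_samples_us)
    dev = statistics.pstdev(rtt_samples_us)
    return max(int(mean + safety_factor * dev), floor_us)

def controller_cycle(collect_rtts, push_setting):
    samples = collect_rtts()                  # e.g. gathered by programmable switches
    push_setting("tcp_min_rto_us", compute_min_rto_us(samples))

# Hypothetical usage with canned measurements (microseconds).
controller_cycle(collect_rtts=lambda: [120, 150, 180, 140, 900, 160],
                 push_setting=lambda k, v: print(f"disseminating {k}={v}"))
```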
86

Mobile network and cloud based privacy-preserving data aggregation and processing

Baharon, M. R. January 2017 (has links)
The emerging technologies of mobile devices and cloud computing have brought a new and efficient way for data to be collected, processed and stored by mobile users. With the improved specifications of mobile devices and the various mobile applications provided by cloud servers, mobile users can enjoy tremendous advantages in managing their daily lives through those applications instantaneously, conveniently and productively. However, using such applications may expose user data to unauthorised access when the data is outsourced for processing and storage, raising privacy and security concerns for mobile users. As a result, mobile users would be reluctant to accept those applications without any guarantee of the safety of their data. The recent breakthrough of Fully Homomorphic Encryption (FHE) has brought a new solution for processing data securely, and several variants of and improvements on existing methods have been developed to address efficiency problems. Experience of such problems has led us to explore two areas of study: Mobile Sensing Systems (MSS) and Mobile Cloud Computing (MCC). In MSS, the functionality of smartphones is extended to sense and aggregate surrounding data for processing by an Aggregation Server (AS) that may be operated by a Cloud Service Provider (CSP). MCC, on the other hand, allows resource-constrained devices like smartphones to fully leverage the services provided by the powerful, massive servers of CSPs for data processing. To support these two application scenarios, this thesis proposes two novel schemes: an Accountable Privacy-preserving Data Aggregation (APDA) scheme and a Lightweight Homomorphic Encryption (LHE) scheme. MSS is a kind of wireless sensor network that implements a data aggregation approach to save the battery lifetime of mobile devices. Such an approach can also improve the security of the outsourced data by mixing the data prior to its transmission to the AS, so as to prevent collusion between mobile users and the AS (or its CSP). The exposure of users' data to other mobile users leads to a privacy breach, and existing methods for preserving users' privacy only provide an integrity check on the aggregated data, without being able to identify misbehaving nodes once the integrity check has failed. To overcome these problems, our first scheme, APDA, is proposed to efficiently preserve privacy and support accountability of mobile users during data aggregation. APDA is designed in three versions that provide balanced solutions, in terms of misbehaving-node detection and data aggregation efficiency, for different application scenarios. In addition, the successfully aggregated data needs to be accompanied by summary information based on the necessary additive and non-additive functions. To preserve the privacy of mobile users, such summaries could be computed using existing privacy-preserving data aggregation techniques; however, those techniques have limitations in applicability, efficiency and functionality. Our APDA scheme has therefore been extended to allow maximal-value finding to be computed on ciphertext data, preserving user privacy with good efficiency. Such a solution can also be developed for other comparative operations such as Average, Percentile and Histogram.
Three versions of maximal-value finding (Max) are introduced and analysed in order to compare their efficiency and their capability to determine the maximum value in a privacy-preserving manner. Moreover, the formal security proof and extensive performance evaluation of our proposed schemes demonstrate that APDA and its extended version achieve stronger security with an optimised efficiency advantage over the state of the art in terms of both computational and communication overheads. In the MCC environment, the new LHE scheme is proposed with a significant difference: it allows arbitrary functions to be executed on ciphertext data. Such a scheme enables the rich mobile applications provided by CSPs to be leveraged by resource-constrained devices in a privacy-preserving manner. The scheme works as long as the noise (a random number attached to the plaintext for security reasons) is less than the encryption key, which makes it flexible. The flexibility of the key size enables the scheme to accommodate any computation function and still produce an accurate result. In addition, the scheme encrypts integers rather than individual bits, improving its efficiency. With a proposed process that allows three or more parties to communicate securely, the scheme is suited to the MCC environment owing to its lightweight nature and strong security. The efficacy and efficiency of the scheme are thoroughly evaluated and compared with other schemes; the results show that it achieves stronger security at a reasonable cost.
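The LHE construction itself is not given in the abstract; the toy below is a textbook DGHV-style symmetric scheme over the integers, included only to illustrate the general "correct while the noise stays below the key" idea the abstract alludes to. The parameter sizes are illustrative and nowhere near a secure choice, and this is not the thesis's scheme.

```python
import secrets

B = 10**6          # message space: integers modulo B (illustrative size)

def keygen(key_bits=256):
    """Secret key: a large odd integer p; correctness needs noise < p."""
    return secrets.randbits(key_bits) | 1 | (1 << (key_bits - 1))

def encrypt(m, p, noise_bits=32, mult_bits=128):
    r = secrets.randbits(noise_bits)          # small noise term
    q = secrets.randbits(mult_bits)           # large multiple of the key
    return m + B * r + p * q

def decrypt(c, p):
    return (c % p) % B

p = keygen()
c1, c2 = encrypt(3, p), encrypt(39, p)

# Addition and multiplication carry over to the ciphertexts as long as the
# accumulated noise (the B*r terms) remains smaller than the key p.
assert decrypt(c1 + c2, p) == 42
assert decrypt(c1 * c2, p) == 117
```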
87

Adaptively improving performance stability of cloud based application using the modern portfolio theory

Alrebeish, Faisal January 2016 (has links)
The increasing number of Software-as-a-Service (SaaS) offerings available in the cloud market makes them plausible and attractive building blocks for cloud-based applications. However, performance instability is common in the cloud environment owing to changes in the supply of and demand for shared computational infrastructure and resources, and candidate services are vulnerable to such instability. Current service selection and composition approaches do not explicitly address performance fluctuations when building cloud-based applications. This thesis proposes a novel approach to improving performance stability by leveraging the principles of design diversity and portfolio-based thinking when selecting and composing cloud-based services. The objective is to minimise the risks that stem from selecting and composing cloud-based services that are vulnerable to performance instability. Two scenarios are used to illustrate the applicability and effectiveness of the approach. As scalability is of paramount importance for efficient dynamic and adaptive selection and composition, the thesis adapts a systematic method to identify the scalability dimensions that can affect the working of the approach and evaluates the sensitivity of the approach to the identified dimensions. The thesis concludes with possible directions for future work.
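The abstract does not detail how portfolio-based thinking is applied; as a loose illustration of the idea only, the sketch below spreads request traffic over functionally equivalent services in inverse proportion to the variance of their observed response times, so that no single unstable service dominates the composition. The service names, latency histories and the inverse-variance weighting (the minimum-variance portfolio under an assumption of uncorrelated fluctuations) are illustrative choices, not the thesis's method.

```python
import statistics

def inverse_variance_weights(latency_history):
    """Weight functionally equivalent services by the inverse of their
    response-time variance, so stable services carry more traffic."""
    inv = {svc: 1.0 / max(statistics.pvariance(samples), 1e-9)
           for svc, samples in latency_history.items()}
    total = sum(inv.values())
    return {svc: w / total for svc, w in inv.items()}

# Hypothetical response-time samples (ms) for three equivalent SaaS endpoints.
history = {
    "svc-a": [110, 115, 108, 112, 111],     # stable
    "svc-b": [90, 240, 100, 310, 95],       # fast on average but erratic
    "svc-c": [130, 135, 128, 140, 133],     # stable, slightly slower
}
print(inverse_variance_weights(history))     # most weight goes to svc-a and svc-c
```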
88

Using interaction data for improving the offline and online evaluation of search engines

Kharitonov, Evgeny January 2016 (has links)
This thesis investigates how web search evaluation can be improved using historical interaction data. Modern search engines combine offline and online evaluation approaches in a sequence of steps that a tested change needs to pass through before it is accepted as an improvement and subsequently deployed. We refer to such a sequence of steps as an evaluation pipeline, and consider it to contain three sequential steps: an offline evaluation step, an online evaluation scheduling step, and an online evaluation step. We show that historical user interaction data can aid in improving the accuracy or efficiency of each of these steps and that, as a result, the overall efficiency of the entire evaluation pipeline is increased. Firstly, we investigate how user interaction data can be used to build accurate offline evaluation methods for query auto-completion mechanisms. We propose a family of offline evaluation metrics for query auto-completion that represent the effort the user has to spend in order to submit their query. The parameters of our proposed metrics are trained against a set of user interactions recorded in the search engine's query logs. From our experimental study, we observe that our proposed metrics are significantly more correlated with an online user satisfaction indicator than the metrics proposed in the existing literature. Hence, fewer changes will pass the offline evaluation step only to be rejected after the online evaluation step, which increases the efficiency of the entire evaluation pipeline. Secondly, we state the problem of the optimised scheduling of online experiments. We tackle this problem by considering a greedy scheduler that prioritises the evaluation queue according to the predicted likelihood of success of a particular experiment. This predictor is trained on a set of online experiments and uses a diverse set of features to represent an online experiment. Our study demonstrates that a higher number of successful experiments per unit of time can be achieved by deploying such a scheduler at the second step of the evaluation pipeline, again increasing the efficiency of the pipeline. Next, to improve the efficiency of the online evaluation step, we propose the Generalised Team Draft interleaving framework. Generalised Team Draft considers both the interleaving policy (how often a particular combination of results is shown) and click scoring (how important each click is) as parameters in a data-driven optimisation of the interleaving sensitivity. Further, Generalised Team Draft is applicable beyond domains with a list-based representation of results, i.e. in domains with a grid-based representation, such as image search. Our study, using datasets of interleaving experiments performed in both document and image search domains, demonstrates that Generalised Team Draft achieves the highest sensitivity. A higher sensitivity means that interleaving experiments can be deployed for a shorter period of time or on a smaller sample of users. Importantly, Generalised Team Draft optimises the interleaving parameters w.r.t. historical interaction data recorded in the interleaving experiments. Finally, we propose to apply sequential testing methods to reduce the mean deployment time of interleaving experiments, adapting two sequential tests for interleaving experimentation.
We demonstrate that a significant decrease in experiment duration can be achieved by using such sequential testing methods, with the highest efficiency achieved by the sequential tests that adjust their stopping thresholds using historical interaction data recorded in diagnostic experiments. A further experimental study demonstrates that cumulative gains in online experimentation efficiency can be achieved by combining the interleaving sensitivity optimisation approaches, including Generalised Team Draft, with the sequential testing approaches. Overall, the central contributions of this thesis are the proposed approaches for improving the accuracy or efficiency of the steps of the evaluation pipeline: the offline evaluation frameworks for query auto-completion, an approach for the optimised scheduling of online experiments, a general framework for efficient online interleaving evaluation, and a sequential testing approach for online search evaluation. The experiments in this thesis are based on massive real-life datasets obtained from Yandex, a leading commercial search engine, and demonstrate the potential of the proposed approaches to improve the efficiency of the evaluation pipeline.
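Generalised Team Draft itself is data-driven; the sketch below shows only the classic Team Draft interleaving it generalises (a standard textbook formulation, not the thesis's optimised variant): the two rankers alternately "draft" their best not-yet-placed result, and clicks are credited to the ranker whose pick was clicked.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, length=10):
    """Classic Team Draft interleaving of two ranked result lists."""
    interleaved, team_of = [], {}
    picks = {"A": 0, "B": 0}
    rankings = {"A": ranking_a, "B": ranking_b}
    while len(interleaved) < length:
        # The team with fewer picks drafts next; ties are broken by a coin flip.
        if picks["A"] != picks["B"]:
            order = sorted(picks, key=picks.get)
        else:
            order = random.sample(["A", "B"], 2)
        doc = None
        for team in order:
            doc = next((d for d in rankings[team] if d not in team_of), None)
            if doc is not None:
                break
        if doc is None:
            break                      # both rankers exhausted
        interleaved.append(doc)
        team_of[doc] = team
        picks[team] += 1
    return interleaved, team_of

def credit_clicks(clicked_docs, team_of):
    """Attribute each click to the ranker that contributed the clicked result."""
    wins = {"A": 0, "B": 0}
    for doc in clicked_docs:
        if doc in team_of:
            wins[team_of[doc]] += 1
    return wins

shown, teams = team_draft_interleave(["d1", "d2", "d3", "d4"], ["d3", "d5", "d1", "d6"])
print(shown, credit_clicks(["d3", "d5"], teams))
```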
89

Scaleable audio for collaborative environments

Radenkovic, Milena January 2002 (has links)
This thesis is concerned with supporting natural audio communication in collaborative environments across the Internet. Recent experience with Collaborative Virtual Environments, for example in supporting large on-line communities and highly interactive social events, suggests that in the future there will be applications in which many users speak at the same time. Such applications will generate large and dynamically changing volumes of audio traffic that can cause congestion, and hence packet loss, in the network, seriously impairing audio quality. This thesis reveals that no current approach to audio distribution can combine support for large numbers of simultaneous speakers with TCP-fair responsiveness to congestion. A model for audio distribution called Distributed Partial Mixing (DPM) is proposed that dynamically adapts both to varying numbers of active audio streams in collaborative environments and to congestion in the network. Each DPM component adaptively mixes subsets of its input audio streams into one or more mixed streams, which it then forwards to the other components along with any unmixed streams. DPM minimises the amount of mixing performed so that end users receive as many separate audio streams as possible within the prevailing network resource constraints. This is important in order to allow maximum flexibility of audio presentation (especially spatialisation) at the end user. A distributed partial mixing prototype is realised as part of the audio service in MASSIVE-3. A series of experiments over a single network link demonstrates that DPM gracefully manages the trade-off between preserving stable audio quality and responding to congestion, while achieving fairness towards competing TCP traffic. The problem of large-scale deployment of DPM over heterogeneous networks is also addressed. The thesis proposes that a shared tree of DPM servers and clients, in which the nodes of the tree can perform distributed partial mixing, is an effective basis for wide-area deployment. Two models for realising this in contrasting situations are then explored in more detail: a static, centralised, subscription-based DPM service suitable for fully managed networks, and a fully distributed, self-organising DPM service suitable for unmanaged networks (such as the current Internet).
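The abstract describes partial mixing only at a high level; as a rough sketch of the core decision, and not the MASSIVE-3 implementation, the function below forwards as many of the most active streams as the available bandwidth allows and folds the remainder into a single mixed stream. The activity scores, bandwidth budget and per-stream cost are hypothetical.

```python
def partially_mix(streams, budget_kbps, per_stream_kbps=64):
    """Decide which audio streams to forward unmixed and which to fold into
    one mixed stream, given the bandwidth currently available downstream.

    `streams` maps a speaker id to an activity score (e.g. recent speech
    energy); the most active speakers keep their own stream so the receiver
    can still spatialise them individually."""
    capacity = max(int(budget_kbps // per_stream_kbps), 1)
    ranked = sorted(streams, key=streams.get, reverse=True)
    if len(ranked) <= capacity:
        return ranked, []                     # enough bandwidth: mix nothing
    # Reserve one slot for the mixed stream that carries everyone else.
    unmixed = ranked[:capacity - 1]
    mixed = ranked[capacity - 1:]
    return unmixed, mixed

activity = {"alice": 0.9, "bob": 0.7, "carol": 0.2, "dave": 0.1, "eve": 0.05}
print(partially_mix(activity, budget_kbps=200))   # 3 slots: 2 unmixed + 1 mixed
```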
90

A bio-inspired cache management policy for cloud computing environments using the artificial bee colony algorithm

Idachaba, Unekwu Solomon January 2015 (has links)
Caching has become an important technology in the development of cloud computing-based high-performance web services. Caches reduce the request-response latency experienced by users and reduce the workload on backend databases. To be fit for purpose, a cache needs a high cache-hit rate, which depends on the cache management policy used. Existing cache management policies do not prevent cache pollution or cache monopoly, and this impacts negatively on cache-hit rates. This work presents a Bio-inspired Community-based Caching (BCC) approach that addresses these two problems by drawing intelligence from users' access behaviour, using the Quantity and Quality Aware Artificial Bee Colony (Q2-ABC) clustering algorithm, to achieve high cache-hit rates. Q2-ABC, also presented in this work, is a redesigned Artificial Bee Colony (ABC) algorithm that improves the quality of the clusters produced by addressing the repeated metric-space searches, probability-based effort distribution, and limit-of-abandonment problems inherent in ABC. To evaluate the performance of BCC, two sets of experiments were performed. In the first set, the quality of the clusters identified by Q2-ABC was between 15% and 63% better than those of ABC. The performance of Q2-ABC comes at a cost: additional storage (a maximum of 300 bytes in this experiment) to store indexes of the searched metric space. In the second set of experiments, the cache-hit rate achieved by BCC was between 0.7% and 55% better than the alternatives across most of the test data used. The cost associated with BCC's performance includes an additional memory requirement (a total of 1.7 Mb in this experiment) for storing the generated intelligence, and processor-cycle overhead for generating it. The implication of these results is that better-quality clusters are produced by avoiding repeated searches within a metric space, and that high cache-hit rates can be achieved by managing caches intelligently, as an alternative to expanding them as is conventional for cloud computing-based services.
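BCC's internals are not specified in the abstract; as a loose illustration of community-based cache management only, the sketch below gives each user community its own LRU segment so that one community's traffic cannot monopolise the whole cache. The partitioning rule and community labels are hypothetical, and the clustering step that BCC performs with Q2-ABC is assumed to have already assigned users to communities.

```python
from collections import OrderedDict

class CommunityCache:
    """Cache whose capacity is partitioned across user communities,
    each community served by its own LRU segment."""

    def __init__(self, total_capacity, communities):
        self.capacity = max(total_capacity // len(communities), 1)
        self.segments = {c: OrderedDict() for c in communities}

    def get(self, community, key):
        seg = self.segments[community]
        if key in seg:
            seg.move_to_end(key)        # mark as recently used
            return seg[key]
        return None

    def put(self, community, key, value):
        seg = self.segments[community]
        seg[key] = value
        seg.move_to_end(key)
        if len(seg) > self.capacity:    # evict only within this community
            seg.popitem(last=False)

# Community labels would come from clustering users' access behaviour.
cache = CommunityCache(total_capacity=4, communities=["news-readers", "shoppers"])
cache.put("news-readers", "/headline", "<html>...</html>")
print(cache.get("news-readers", "/headline") is not None)   # True
print(cache.get("shoppers", "/headline"))                    # None: separate segment
```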
