371. Towards lightweight secure user-transparent and privacy-preserving web metering
Alarifi, Fahad Abdulkareem, January 2015
Privacy is a growing issue as more people actively connect to and participate in the Internet, and problems arise when this concern is coupled with the security requirements of online applications. The web metering problem is the problem of counting the number of visits made by users to a webserver, additionally capturing data about these visits. There are trade-offs between designing secure web metering solutions and preserving users' privacy, and a further tension between privacy-preserving solutions and the accuracy of their results. The problem becomes more difficult when the main interacting party, the user, is not inherently interested in participating and operations need to be carried out transparently. This thesis addresses the web metering problem in a hostile environment and proposes different web metering solutions. The solutions operate in an environment where webservers or attackers are capable of invading users' privacy or modifying the web metering result. Threats in such an environment are identified using a well-established threat model under certain assumptions, and are then used to derive privacy, security and functional requirements. Those requirements are used to show shortcomings in previous web metering schemes, which are then addressed by our proposed solutions. The central theme of this thesis is users' privacy achieved through user-transparent solutions. Preserving users' privacy and designing secure web metering solutions that operate transparently to the user are the two main goals of this research. Achieving the two goals can conflict with other requirements, an exploration missing from earlier solutions in the literature. Privacy issues in this problem result from the dilemma of convincing interested parties of web metering results, with sufficient detail and non-repudiation evidence, while still preserving users' privacy. Relevant privacy guidelines are used to discuss and analyse privacy concerns in the context of the problem, and consequently privacy-preserving solutions are proposed. Also, improving usability by "securely" redesigning already-deployed solutions will help towards wider acceptance and universal deployment of the new solutions. Consequently, secure and privacy-preserving web metering solutions are proposed that operate transparently to the visitor. This thesis describes existing web metering solutions and analyses them with respect to different requirements and desiderata. It also describes and analyses new solutions which use existing security and authentication protocols, hardware devices and analytic codes. The proposed solutions provide a reasonable trade-off among privacy, security, accuracy and transparency. The first proposed solution, transparently to the user, reuses Identity Management Systems and hash functions for web metering purposes. The second, hardware-based solution securely and transparently uses hardware devices and existing protocols in a privacy-preserving manner. The third proposed solution transparently collects different "unique" user data and analyses fingerprints using privacy-preserving codes.
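
To make the flavour of the hash-based approach concrete, the sketch below shows how a one-way hash can decouple visit counting from user identity. It is a hypothetical illustration only, not the scheme developed in the thesis; the per-epoch salt, token derivation and session identifiers are all invented for the example.

```python
import hashlib

def visit_token(session_id: str, epoch_salt: str) -> str:
    """Derive an unlinkable per-epoch token from a session identifier.

    The one-way hash lets an auditor count distinct visits per epoch
    without learning who made them (illustrative sketch only)."""
    return hashlib.sha256(f"{epoch_salt}:{session_id}".encode()).hexdigest()

# The metering evidence is the set of distinct tokens in an epoch:
sessions = ["alice-s1", "bob-s7", "alice-s1"]   # hypothetical visit log
tokens = {visit_token(s, "salt-2015-06") for s in sessions}
print(len(tokens))  # 2 distinct visits, no identities revealed
```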

372. Specification and analysis of service oriented architectures within the calculus of communicating sequential processes (CSP)
Al-Homaimeedi, Abiar Suliman, January 2016
Software architecture has evolved from the monolithic paradigm to the Service-Oriented Computing (SOC) paradigm. IT systems in the SOC paradigm are based on service compositions: a service composition is an aggregate of loosely coupled, autonomous, heterogeneous services which are collectively composed to implement a particular task. Internet standards are the dominant modelling methods for SOC systems. However, they raise fundamental issues: the standards lack formalism, and they fall short when applied independently. The former issue has been solved, and rigorous semantics have been developed for the different standards. The latter issue, however, has only been partially solved, by developing new formal modelling languages that adopt the concepts rather than the notations of the internet standards. In principle, the main concepts that should be hosted in SOC modelling languages are asynchronicity, mobility, multiparty sessions, and compensations, yet not all of these concepts are supported in the modelling languages developed so far. This thesis addresses this problem and proposes a new formal modelling language for SOC systems which is adequately expressive to model all of these concepts. Additionally, the thesis provides an implementation of the new modelling language in a model checker, to facilitate automated formal reasoning about system properties such as good/bad traces, deadlock-freedom, and livelock-freedom.
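
As a toy illustration of one of the properties mentioned (deadlock-freedom), the sketch below explores a small labelled transition system and reports reachable states with no outgoing transitions. Real CSP tooling performs far more sophisticated refinement checking; the state names and transitions here are invented for the example.

```python
from collections import deque

def deadlock_states(init, transitions):
    """Breadth-first search of a labelled transition system, returning
    reachable states with no outgoing transitions; an empty result
    witnesses deadlock-freedom (toy checker, not a CSP refinement check)."""
    seen, frontier, stuck = {init}, deque([init]), []
    while frontier:
        state = frontier.popleft()
        successors = transitions.get(state, [])
        if not successors:
            stuck.append(state)
        for _action, target in successors:
            if target not in seen:
                seen.add(target)
                frontier.append(target)
    return stuck

# Hypothetical client/service composition: 'wait' has no way out.
lts = {"idle": [("invoke", "busy")],
       "busy": [("reply", "idle"), ("fail", "wait")]}
print(deadlock_states("idle", lts))  # ['wait'] -> composition can deadlock
```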

373. Estimating varying illuminant colours in images
Lynch, Stuart Ellis, January 2014
Colour Constancy is the ability to perceive colours independently of varying illumination colour. A human can tell that a white t-shirt is indeed white, even under blue or red illumination, although these illuminant colours would actually make the reflected colour of the t-shirt bluish or reddish. Humans can, to a good extent, see colours constantly. Getting a computer to achieve the same goal with a high level of accuracy has proven problematic, particularly if we want to use colour as a main cue in object recognition: if we trained a system on object colours under one illuminant and then tried to recognise the objects under another illuminant, the system would likely fail.

Early colour constancy algorithms assumed that an image contains a single uniform illuminant, and attempted to estimate the colour of that illuminant in order to apply a single correction to the entire image. It is not hard to imagine a scenario where a scene is lit by more than one illuminant. In an outdoor scene on a typical summer's day, we would see objects brightly lit by sunlight and others in shadow, and the ambient light in shadow is known to be a different colour to that of direct sunlight (bluish and yellowish respectively). This means there are at least two illuminant colours to be recovered in such a scene. This thesis focuses on the harder case of recovering the illuminant colours when more than one is present in a scene.

Early work on this subject made the empirical observation that illuminant colours are much more predictable than surface colours: real-world illuminants tend not to be greens or purples, but rather blues, yellows and reds. We can think of an illuminant mapping as the function which takes a scene from some unknown illuminant to a known illuminant, and we model this mapping as a simple multiplication of the Red, Green and Blue channels of a pixel. It turns out that the set of realistic mappings approximately lies on a line segment in chromaticity space. We propose an algorithm that uses this knowledge and requires only two pixels of the same surface under two illuminants as input; from these we recover an estimate of the surface reflectance colour, and subsequently the two illuminants.

Additionally, we propose a more robust algorithm that can use varying surface reflectance data in a scene. One of the most successful colour constancy algorithms, known as Gamut Mapping, was developed by Forsyth (1990). He argued that the illuminant colour of a scene naturally constrains the surface colours it is possible to perceive: we could not perceive a very chromatic red under a deep blue illuminant. We introduce our multiple-illuminant constraint in a Gamut Mapping context and are able to further improve its performance.

The final piece of work proposes a method for detecting shadow edges, so that we can automatically recover estimates of the illuminant colours in and out of shadow. We also formulate our illuminant estimation algorithm as a voting scheme that probabilistically chooses an illuminant estimate on each side of the shadow edge. We test the performance of all our algorithms experimentally on well-known datasets, as well as on our newly proposed shadow datasets.
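
The two-pixel idea can be sketched numerically. Assuming the diagonal (per-channel multiplication) model and a made-up line of plausible illuminants, a brute-force search over the line recovers the pair of illuminants whose divided-out reflectances agree; the endpoint colours and grid search below are illustrative stand-ins for the thesis's actual estimator.

```python
import numpy as np

# Assumed endpoints of the illuminant line (RGB gains): bluish shadow
# light to yellowish sunlight. Purely illustrative values.
E_BLUE, E_YELLOW = np.array([0.8, 1.0, 1.3]), np.array([1.3, 1.0, 0.7])

def illuminant(t):
    """Point on the illuminant line segment, t in [0, 1]."""
    return (1 - t) * E_BLUE + t * E_YELLOW

def chroma(rgb):
    return rgb / rgb.sum()

def recover(p1, p2, steps=101):
    """Two pixels of one surface under two unknown illuminants: pick the
    (t1, t2) whose divided-out reflectance chromaticities agree best."""
    best_err, best = np.inf, None
    for t1 in np.linspace(0, 1, steps):
        r1 = chroma(p1 / illuminant(t1))
        for t2 in np.linspace(0, 1, steps):
            err = np.abs(r1 - chroma(p2 / illuminant(t2))).sum()
            if err < best_err:
                best_err, best = err, (illuminant(t1), illuminant(t2))
    return best

surface = np.array([0.6, 0.5, 0.4])                  # unknown reflectance
p_sun, p_shadow = surface * illuminant(0.9), surface * illuminant(0.1)
e_sun, e_shadow = recover(p_sun, p_shadow)
print(e_sun.round(2), e_shadow.round(2))             # close to the truth
```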

374. Augmenting user interfaces with haptic feedback
Asque, Christopher, January 2014
Computer assistive technologies have developed considerably over the past decades. Advances in computer software and hardware have given motion-impaired operators much greater access to computer interfaces. For people with motion impairments, the main difficulty in the communication process is the input of data into the system: the use of a mouse or a keyboard, for example, demands a high level of dexterity and accuracy. Traditional input devices are designed for able-bodied users and often do not meet the needs of someone with disabilities, and since the key feature of most graphical user interfaces (GUIs) is point-and-click interaction with a cursor, this can make a computer inaccessible to many people. Human-computer interaction (HCI) is an important area of research that aims to improve communication between humans and machines. Previous studies have identified haptics as a useful method for improving computer access; however, traditional haptic techniques suffer from a number of shortcomings that have hindered their inclusion in real-world software. The focus of this thesis is to develop haptic rendering algorithms that permit motion-impaired operators to use haptic assistance with existing graphical user interfaces. The main goal is to improve interaction by reducing error rates and improving targeting times. A number of novel haptic assistive techniques are presented that utilise the three degrees-of-freedom (3DOF) capabilities of modern haptic devices to produce assistance designed specifically for motion-impaired computer users. To evaluate the effectiveness of the new techniques, a series of point-and-click experiments were undertaken in parallel with cursor analysis to compare levels of performance. The task required the operator to produce a predefined sentence on the densely populated Windows on-screen keyboard (OSK). The results of the study show that higher performance levels can be achieved using techniques that are less constricting than traditional assistance.
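
One classic building block of haptic targeting assistance is the "gravity well", a force that draws the device's cursor toward a target once it enters a basin of attraction. The sketch below is a generic illustration of that idea under assumed gain and radius values, not one of the thesis's specific 3DOF techniques.

```python
import numpy as np

def gravity_well_force(cursor, target, radius=40.0, gain=0.002):
    """Spring-like pull toward the target centre, active only inside
    the well's radius and growing as the cursor approaches the centre.
    Units and constants are illustrative assumptions."""
    offset = np.asarray(target, float) - np.asarray(cursor, float)
    dist = np.linalg.norm(offset)
    if dist == 0.0 or dist > radius:
        return np.zeros_like(offset)
    return gain * (radius - dist) * offset / dist

print(gravity_well_force([100, 100], [110, 100]))  # pulled right: [0.06 0.]
```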

375. Multi-objective evolutionary algorithms for data clustering
Kirkland, Oliver, January 2014
In this work we investigate the use of multi-objective metaheuristics for the data-mining task of clustering. We first investigate methods of evaluating the quality of clustering solutions, then propose a new multi-objective clustering algorithm driven by multiple measures of cluster quality, and finally investigate the performance of different multi-objective clustering algorithms.

In the context of clustering, a robust measure for evaluating clustering solutions is an important component of an algorithm. These Cluster Quality Measures (CQMs) should rely solely on the structure of the clustering solution. A robust CQM should have three properties: it should reward a "good" clustering solution; it should decrease in value monotonically as the solution quality deteriorates; and it should be able to evaluate clustering solutions with varying numbers of clusters. We review existing CQMs and present an experimental evaluation of their robustness, finding that measures based on connectivity are more robust than other measures for cluster evaluation.

We then introduce a new Multi-Objective Clustering Algorithm (MOCA). The use of multi-objective optimisation in clustering is desirable because it permits the incorporation of multiple measures of cluster quality; since the definition of what constitutes a good clustering is far from clear, it is beneficial to develop algorithms that accommodate multiple CQMs. The selection of the cluster quality measures used as objectives for MOCA is informed by our earlier work on internal evaluation measures. We explain the implementation details and perform experimental work to establish its worth. We compare MOCA with k-means and find some promising results: MOCA can generate a pool of clustering solutions that is more likely to contain the optimal clustering solution than the pool generated by k-means.

We also investigate the performance of different implementations of multi-objective evolutionary algorithms for clustering. We find that representations of clustering based around centroids and medoids produce more desirable clustering solutions and Pareto fronts. We also find that mutation operators that greatly disrupt the clustering solutions lead to better exploration of the Pareto front, whereas mutation operators that modify the clustering solutions more moderately lead to higher-quality clustering solutions. We then investigate mutation operators more specifically, focussing on operators that promote clustering solution quality, operators that promote exploration of the Pareto front, and a hybrid combination of the two, using a number of techniques to assess performance as the algorithms execute. We confirm that a disruptive mutation operator leads to better exploration of the Pareto front and that moderate mutation operators lead to the discovery of higher-quality clustering solutions. Our implementation of a hybrid mutation operator does not improve on the other mutation operators, but does show promise for future work.
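
The core multi-objective machinery can be sketched in a few lines: score each candidate clustering with two CQMs (here a compactness measure and a connectivity-style measure, chosen for illustration) and keep the Pareto non-dominated candidates. The measures and data below are stand-ins, not MOCA's exact objectives.

```python
import numpy as np

def compactness(X, labels):
    """Mean distance from points to their cluster centroid (minimise)."""
    return float(np.mean([np.linalg.norm(X[labels == c] - X[labels == c].mean(0),
                                         axis=1).mean()
                          for c in np.unique(labels)]))

def connectivity(X, labels, L=5):
    """Penalty when a point's L nearest neighbours sit in other clusters
    (minimise); a connectedness-style CQM."""
    order = np.argsort(((X[:, None] - X[None]) ** 2).sum(-1), axis=1)
    return sum(1.0 / (rank + 1)
               for i, nbrs in enumerate(order[:, 1:L + 1])
               for rank, n in enumerate(nbrs) if labels[n] != labels[i])

def non_dominated(scores):
    """Indices of candidates no other candidate beats on both objectives."""
    return [i for i, (a1, a2) in enumerate(scores)
            if not any(b1 <= a1 and b2 <= a2 and (b1, b2) != (a1, a2)
                       for j, (b1, b2) in enumerate(scores) if j != i)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
candidates = [np.repeat([0, 1], 20), rng.integers(0, 2, 40)]  # good vs random
scores = [(compactness(X, lab), connectivity(X, lab)) for lab in candidates]
print(non_dominated(scores))  # the well-separated labelling survives
```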

376. Computational analysis of small RNAs and the RNA degradome with application to plant water stress
Folkes, Leighton, January 2014
Water shortage is one of the most important environmental stress factors affecting plants, limiting crop yield in large areas worldwide. Plants can survive water stress by regulating gene expression at several levels. One recently discovered regulatory mechanism involves small RNAs (sRNAs), which can regulate gene expression by targeting messenger RNAs (mRNAs) and directing endonucleolytic cleavage, resulting in mRNA degradation. A snapshot of an mRNA degradation profile (degradome) can be captured through a high-throughput technique called Parallel Analysis of RNA Ends (PARE), using next-generation sequencing technologies. In this thesis we describe a new, user-friendly degradome analysis software tool called PAREsnip, which we have used for the rapid genome-wide discovery of sRNA/target interactions evidenced through the degradome. Building on PAREsnip's speed, we also present a new software tool for the construction, analysis and visualisation of sRNA regulatory interaction networks. The two new tools were used to analyse PARE datasets obtained from Medicago truncatula and Arabidopsis thaliana. In particular, we used PAREsnip for the high-throughput analysis of PARE data obtained from Medicago subjected to dehydration and found several sRNA/mRNA interactions that are potentially responsive to water stress. We also show how we used our new network visualisation and analysis tool with PARE datasets obtained from Arabidopsis, discovering several novel sRNA regulatory interaction networks. In building these tools and using them for this kind of analysis, we gain a better understanding of the processes and mechanisms involved in sRNA-mediated gene regulation and of how plants respond to water stress, which could lead to new strategies for improving stress tolerance.
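
The core matching step of degradome-guided target discovery can be illustrated with a toy example: slide the reverse complement of an sRNA along an mRNA and report sites where the predicted cleavage position (between the nucleotides paired to sRNA positions 10 and 11, the canonical plant cleavage site) coincides with the 5' end of sequenced degradome reads. PAREsnip itself uses far more efficient data structures and a fuller scoring rule set; the sequences below are invented.

```python
def revcomp(rna: str) -> str:
    """Reverse complement of an RNA sequence."""
    return rna.translate(str.maketrans("ACGU", "UGCA"))[::-1]

def cleavage_candidates(srna, mrna, degradome_5p, max_mismatches=2):
    """Report (site, cleavage position, mismatches, read count) where a
    degradome 5' end supports sRNA-guided cleavage (toy illustration)."""
    site = revcomp(srna)
    hits = []
    for i in range(len(mrna) - len(site) + 1):
        mm = sum(a != b for a, b in zip(site, mrna[i:i + len(site)]))
        if mm <= max_mismatches:
            cleave = i + len(site) - 10        # opposite sRNA pos 10/11
            count = degradome_5p.get(cleave, 0)
            if count > 0:
                hits.append((i, cleave, mm, count))
    return hits

srna = "UGGAGCUCCCUUCAUUCCAAU"                  # hypothetical 21-nt sRNA
mrna = "AAA" + revcomp(srna) + "GGG"            # mRNA carrying a target site
degradome_5p = {3 + 21 - 10: 42}                # 42 reads start at the cut
print(cleavage_candidates(srna, mrna, degradome_5p))  # [(3, 14, 0, 42)]
```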

377. Non-metric multi-dimensional scaling for distance-based privacy-preserving data mining
Alotaibi, Khaled, January 2014
Recent advances in the field of data mining have led to major concerns about privacy. Sharing data with external parties for analysis puts private information at risk. The original data are therefore often perturbed before external release to protect private information; however, perturbation can decrease the utility of the output, so a good perturbation technique requires a balance between privacy and utility. This study proposes a new method for data perturbation in the context of distance-based data mining. We propose the use of non-metric multi-dimensional scaling (MDS) as a suitable technique to perturb data that are intended for distance-based data mining. The basic premise of this approach is to transform the original data into a lower-dimensional space and generate new data that protect private details while maintaining good utility for distance-based data mining analysis. We investigate the extent to which the perturbed data preserve useful statistics for distance-based analysis and provide protection against malicious attacks. We demonstrate that our method provides an adequate alternative to data randomisation approaches and other dimensionality reduction approaches. Testing is conducted on a wide range of benchmark datasets and against existing perturbation methods. The results confirm that our method has very good overall performance, is competitive with other techniques, and produces clustering and classification results at least as good as, and in some cases better than, the results obtained from the original data.
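
A minimal sketch of the basic premise, using scikit-learn's non-metric MDS on precomputed distances: the released embedding approximately preserves the rank order of pairwise distances (what distance-based mining needs) while the original attribute values are not directly exposed. This assumes scikit-learn and SciPy are available and is not the thesis's full perturbation framework or privacy analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr
from sklearn.manifold import MDS

def perturb_for_release(X, n_components=2, seed=0):
    """Embed the pairwise-distance matrix of X with non-metric MDS and
    release the low-dimensional coordinates instead of the raw records."""
    D = squareform(pdist(X))
    mds = MDS(n_components=n_components, metric=False,
              dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(D)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))             # private 8-attribute records
X_release = perturb_for_release(X)       # surrogate data for sharing
rho, _ = spearmanr(pdist(X), pdist(X_release))
print(round(rho, 2))                     # rank order of distances survives
```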

378. Real-time rendering and simulation of trees and snow
Reynolds, Daniel Tobias, January 2014
Tree models created by an industry-standard package are exported and their structure extracted in order to procedurally regenerate the geometric mesh, addressing the limitations of the application's standard output. The extracted structure is used to generate a high-quality skeleton for the tree, individually representing each section of every branch to give the greatest achievable freedom of deformation and animation. Around the generated skeleton, a new geometric mesh is wrapped using a single continuous surface, eliminating intersection-based rendering artefacts. Surface smoothing and enhanced detail are added to the model dynamically using the GPU's tessellation engine. A real-time snow accumulation system is developed to generate snow cover on a dynamic, animated scene. Occlusion techniques are used to identify snow-accumulating faces and to map exposed areas into accumulation maps held as dynamic textures. Accumulation maps are fixed to their surfaces, allowing moving objects to maintain accumulated snow cover. Mesh generation is performed dynamically during the rendering pass, using surface offsetting and tessellation to add detail where required.
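
The per-face accumulation rule at the heart of such a system can be sketched simply: upward-facing, unoccluded faces gain snow over time while all faces slowly lose it. The constants and the precomputed exposure values below are illustrative assumptions, not the thesis's occlusion-projection pipeline.

```python
import numpy as np

def accumulate_snow(normals, exposure, dt, rate=1.0, melt=0.05, cover=None):
    """Advance per-face snow cover by one time step: gain proportional
    to upward-facing area times occlusion exposure, minus a melt term.
    'exposure' in [0, 1] would come from an occlusion test."""
    up = np.array([0.0, 1.0, 0.0])
    facing = np.clip(normals @ up, 0.0, 1.0)   # how much each face looks up
    if cover is None:
        cover = np.zeros(len(normals))
    cover = cover + dt * (rate * facing * exposure - melt)
    return np.clip(cover, 0.0, 1.0)            # store in a dynamic texture

normals = np.array([[0, 1, 0], [0.7, 0.7, 0], [1, 0, 0]], float)
exposure = np.array([1.0, 0.5, 1.0])           # middle face partly occluded
print(accumulate_snow(normals, exposure, dt=2.0).round(2))  # [1. 0.6 0.]
```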

379. From data to knowledge in secondary health care databases
Bettencourt-Silva, Joao, January 2014
The advent of big data in health care is a topic receiving increasing attention worldwide. In the UK over the last decade, the National Health Service (NHS) programme for Information Technology has boosted big data by introducing electronic infrastructures in hospitals and GP practices across the country. This ever-growing amount of data promises to expand our understanding of services, processes and research, with potential benefits including reduced costs, optimisation of services, knowledge discovery and patient-centred predictive modelling. This thesis explores the above by studying over ten years' worth of electronic data and systems in a hospital treating over 750,000 patients a year. The hospital's information systems store routinely collected data, used primarily by health practitioners to support and improve patient care; this raw data is recorded on several different systems but rarely linked or analysed. This thesis explores the secondary uses of such data through two case studies, one on prostate cancer and another on stroke. In each study, the journey from data to knowledge traverses critical steps: data retrieval, linkage, integration, preparation, mining and analysis. Throughout, novel methods and computational techniques are introduced and the value of routinely collected data is assessed. In particular, this thesis discusses in detail the methodological aspects of developing clinical data warehouses from routine heterogeneous data, and introduces methods to model, visualise and analyse the journeys that patients take through care. This work has provided lessons in hospital IT provision, and in the integration, visualisation and analytics of complex electronic patient records and databases, and has enabled the use of raw routine data for management decision-making and clinical research in both case studies.

380. Crowd-sourced data and its applications for new algorithms in photographic imaging
Harris, Michael, January 2015
This thesis comprises two main themes. The first is concerned primarily with the validity and utility of data acquired from web-based psychophysical experiments. In recent years web-based experiments, and the crowd-sourced data they can deliver, have risen in popularity among the research community for several key reasons, primarily ease of administration and access to a large population of diverse participants. However, the level of control with which traditional experiments are performed, and the severe lack of control we have over web-based alternatives, may lead us to believe that these benefits come at the cost of reliable data. Indeed, the results reported early in this thesis support this assumption. However, we proceed to show that it is entirely possible to crowd-source data that is comparable with lab-based results. The second theme explores the possibilities presented by the use of crowd-sourced data, taking a popular colour naming experiment as an example. After using the crowd-sourced data to construct a model for computational colour naming, we consider the value of colour names as image descriptors, with particular relevance to illuminant estimation and object indexing. We discover that colour names represent a particularly useful quantisation of colour space, allowing us to construct compact image descriptors for object indexing. We show that these descriptors are somewhat tolerant of errors in illuminant estimation and that their perceptual relevance offers further utility. We go on to develop a novel algorithm which delivers perceptually relevant, illumination-invariant image descriptors based on colour names.
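
The descriptor idea can be sketched as follows: quantise every pixel to its nearest colour-name category and histogram the result. The eleven basic colour terms are standard, but the centroid colours below are invented placeholders; a real colour-naming model (such as one fitted to crowd-sourced naming data) would define the pixel-to-name mapping.

```python
import numpy as np

NAMES = ["black", "white", "red", "green", "blue", "yellow",
         "orange", "pink", "purple", "brown", "grey"]
# Hypothetical RGB centroids for the eleven basic colour terms.
CENTROIDS = np.array([[0, 0, 0], [255, 255, 255], [200, 30, 30],
                      [40, 160, 60], [40, 70, 200], [230, 220, 50],
                      [240, 150, 40], [240, 150, 170], [140, 60, 160],
                      [120, 80, 40], [128, 128, 128]], float)

def colour_name_descriptor(image):
    """Normalised 11-bin histogram of nearest colour names per pixel:
    a compact, perceptually meaningful descriptor for object indexing."""
    pixels = image.reshape(-1, 3).astype(float)
    dists = ((pixels[:, None, :] - CENTROIDS[None]) ** 2).sum(-1)
    hist = np.bincount(dists.argmin(1), minlength=len(NAMES))
    return hist / hist.sum()

img = np.zeros((4, 4, 3), np.uint8)
img[:2] = [210, 40, 40]                          # top half reddish
desc = colour_name_descriptor(img)
print({n: round(v, 2) for n, v in zip(NAMES, desc) if v})  # black/red split
```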