261
Bandit algorithms for searching large spaces. Dorard, L. R. M. January 2012
Bandit games consist of single-state environments in which an agent must sequentially choose actions to take, for which rewards are given. Since the objective is to maximise the cumulative reward, the agent naturally seeks to build a model of the relationship between actions and rewards. The agent must both choose uncertain actions in order to improve its model (exploration), and actions that are believed to yield high rewards according to the model (exploitation). The choice of an action to take is called a play of an arm of the bandit, and the total number of plays may or may not be known in advance. Algorithms designed to handle the exploration-exploitation dilemma were initially motivated by problems with rather small numbers of actions, but the ideas they were based on have been extended to cases where the number of actions to choose from is much larger than the maximum possible number of plays. Several problems fall into this setting, such as information retrieval with relevance feedback, where the system must learn what a user is looking for while serving relevant documents often enough, but also global optimisation, where the search for an optimum is done by selecting where to acquire potentially expensive samples of a target function. All have in common the search of large spaces. In this thesis, we focus on an algorithm that combines the Gaussian Process probabilistic model, often used in Bayesian optimisation, with the Upper Confidence Bound action-selection heuristic that is popular in bandit algorithms. In addition to demonstrating the advantages of this GP-UCB algorithm on an image retrieval problem, we show how it can be adapted in order to search tree-structured spaces. We provide an efficient implementation, theoretical guarantees on the algorithm's performance, and empirical evidence on synthetic trees that it handles large branching factors better than previous bandit-based algorithms.
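To make the action-selection rule concrete, the following is a minimal sketch of GP-UCB (an illustration only, not the thesis implementation): a Gaussian Process is fitted to the observed action-reward pairs, and the next arm played is the candidate that maximises an upper confidence bound. The kernel, the beta value and the candidate grid are assumptions made for the example.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_ucb_select(X_obs, y_obs, candidates, beta=2.0):
    """Pick the candidate action with the highest upper confidence bound."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
    gp.fit(X_obs, y_obs)                      # model of the action-reward relationship
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + np.sqrt(beta) * std          # exploitation term + exploration bonus
    return candidates[np.argmax(ucb)]

# Toy usage: arms are points in [0, 1] and rewards come from a hidden function.
rng = np.random.default_rng(0)
hidden = lambda x: np.sin(6 * x)              # stand-in for the unknown reward function
X_obs = rng.uniform(0.0, 1.0, (5, 1))
y_obs = hidden(X_obs).ravel() + 0.05 * rng.standard_normal(5)
grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
next_arm = gp_ucb_select(X_obs, y_obs, grid)  # arm to play next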
262
Numerical approaches for solving the combined reconstruction and registration of digital breast tomosynthesis. Yang, G. January 2012
The last three decades have seen heavy demand for the development of medical imaging modalities for breast cancer detection, in an attempt to reduce the mortality associated with the disease. Recently, Digital Breast Tomosynthesis (DBT) has shown promise for early diagnosis, when lesions are still small. In particular, it offers potential benefits over X-ray mammography - the current modality of choice for breast screening - of increased sensitivity and specificity for comparable X-ray dose, speed, and cost. An important feature of DBT is that it provides a pseudo-3D image of the breast. This is of particular relevance for the heterogeneous dense breasts of young women, which can inhibit detection of cancer using conventional mammography. In the same way that it is difficult to see a bird from the edge of the forest, detecting cancer in a conventional 2D mammogram is a challenging task. Three-dimensional DBT, however, enables us to step through the forest, i.e., the breast, reducing the confounding effect of superimposed tissue and so (potentially) increasing the sensitivity and specificity of cancer detection. The workflow in which DBT would be used clinically involves two key tasks: reconstruction, to generate a 3D image of the breast, and registration, to enable images from different visits to be compared, as is routinely performed by radiologists working with conventional mammograms. Conventional approaches proposed in the literature separate these steps, solving each task independently. This can be effective when reconstructing from a complete set of data. However, for ill-posed limited-angle problems such as DBT, estimating the deformation is difficult because of the significant artefacts associated with DBT reconstructions, leading to severe inaccuracies in the registration. The aim of my work is to find and evaluate methods that couple these two tasks, so that each process enhances the performance of the other. Consequently, I show that the processes of reconstruction and registration in DBT are not independent but mutually reinforcing. This thesis proposes innovative numerical approaches that combine the reconstruction of a pair of temporal DBT acquisitions with their registration, iteratively and simultaneously. To evaluate the performance of my methods I use synthetic images, breast MRI, and DBT simulations with in-vivo breast compressions. I show that, compared to the conventional sequential method, jointly estimating image intensities and transformation parameters gives superior results with respect to both reconstruction fidelity and registration accuracy.
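To illustrate the general shape of such a combined formulation (an illustrative form only; the exact functional and regularisation used in the thesis differ), the two reconstructions f_1, f_2 and the deformation T can be estimated together by minimising a single coupled objective:

\[
\min_{f_1,\; f_2,\; T} \;\; \|A f_1 - g_1\|^2 \;+\; \|A f_2 - g_2\|^2 \;+\; \lambda \,\| f_1 - f_2 \circ T \|^2 \;+\; \mu \,\mathcal{R}(T),
\]

where A is the limited-angle projection operator, g_1 and g_2 are the two temporal acquisitions, and R(T) regularises the deformation; alternating updates of the images and of T realise the iterative and simultaneous estimation described above.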
263
Optimising information security decision making. Beautement, A. January 2013
The aim of the thesis is to investigate the relationship between human behaviour and effective security in order to develop tools and methods for supporting decision makers in the field of information security. A review of the literature of information security, Human Computer Interaction (HCI), and the economics of security reveals that the role of users in delivering effective security has largely been neglected. Security designers working without an understanding of the limitations of human cognition implement systems that, by their nature, offer perverse incentives to the user. The result is the adoption of insecure behaviour by users in order to cope with the burdens placed upon them. Despite HCI identifying the need for increased usability in security, much of the research in the field of HCI Security (HCISec) still focuses on improving the usability of the interface to security systems, rather than the underlying system itself. In addition, while the impact of user non-compliance on the effectiveness of security has been demonstrated, most security design methods still rely on technical measures and controls to achieve their security aims. In recent years the need to incorporate human factors into security decision making has been recognised, but this process is not supported by appropriate tools or methodologies. The traditional CIA (confidentiality, integrity, availability) framework used to express security goals lacks the flexibility and granularity to support the analysis of the trade-offs that are taking place. The research gap is therefore not so much one of knowledge (for much of the required information does exist in the fields of security and HCI) but rather how to combine this knowledge to form an effective decision making framework. This gap is addressed by combining the fields of security and HCI with economics in order to provide a utility-based approach that allows the effective balancing and management of human factors alongside more technical measures and controls. The need to consider human effort as a limited resource is shown by highlighting the negative consequences of neglecting this axis of resource measurement. This need is expressed through the Compliance Budget model, which treats users as perceptive actors conducting a cost/benefit analysis when faced with compliance decisions. Using the qualitative data analysis methodology Grounded Theory, a set of semi-structured interviews was analysed to provide the basis for this model. Passwords form a running example throughout the thesis. The need to provide decision makers with empirical data grounded in the real world is recognised and addressed through a combination of data gathering techniques. A laboratory study and a field trial were conducted to gather performance data under two password policies. In order to make optimal use of this data, a unified approach to decision making is necessary. Alongside this, the usefulness of systems models as tools for simulation and analysis is recognised. An economically motivated framework is therefore presented that organises and expresses security goals together with the methods required to fulfil them. The role of the user is fully represented in this framework, which is structured in such a way as to allow a smooth transition from data gathering to systems modelling.
This unified approach to optimising security decision making provides key insights into the requirements for making more effective real-world decisions in the field of information security and is a useful foundation for improving current practices in this area.
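As a toy illustration of the cost/benefit reasoning behind the Compliance Budget model (a sketch only; the task costs, the budget value and the all-or-nothing decision rule are invented for the example), a user can be modelled as complying with each security task only while the cumulative effort demanded stays within a finite budget:

def simulate_compliance(task_costs, budget):
    """Return a compliance decision for each task, in order of arrival."""
    spent, decisions = 0.0, []
    for cost in task_costs:
        complies = spent + cost <= budget   # perceived cost weighed against remaining budget
        if complies:
            spent += cost
        decisions.append(complies)
    return decisions

# Ten password-related tasks of varying effort against a fixed effort budget:
# once the budget is exhausted, further demands are met with non-compliance.
print(simulate_compliance([1, 2, 1, 3, 2, 4, 1, 2, 3, 1], budget=12))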
264
New algorithms for evolving robust genetic programming solutions in dynamic environments with a real world case study in hedge fund stock selection. Yan, W. January 2012
This thesis presents three new genetic programming (GP) algorithms designed to enhance the robustness of solutions evolved in highly dynamic environments, and investigates the application of the new algorithms to financial time series analysis. The research is motivated by the following thesis question: what are viable strategies to enhance the robustness of GP individuals when the environment of a task being optimized or learned by a GP system is characterized by large, rapid, frequent and low-predictability changes? The vast majority of existing techniques aim to track the dynamics of optima in very simple dynamic environments, but improving robustness in dynamic environments characterized by large, frequent and unpredictable changes is an area that has not yet been widely explored. The three new algorithms were designed specifically to evolve robust solutions in these environments. The first algorithm, 'behavioural diversity preservation', is a novel diversity preservation technique. The algorithm evolves more robust solutions by preserving population phenotypic diversity through the reduction of behavioural intercorrelation and the promotion of individuals with unique behaviour. The second algorithm, 'multiple-scenario training', is a novel population training and evaluation technique. The algorithm evolves more robust solutions by training a population simultaneously across a set of pre-constructed environment scenarios and by using a 'consistency-adjusted' fitness measure to favour individuals performing well across the entire range of environment scenarios. The third algorithm, 'committee voting', is a novel 'final solution' selection technique. The algorithm enhances robustness by breaking away from the 'best-of-run' tradition, creating a solution based on a majority-voting committee structure consisting of individuals evolved in a range of diverse environmental dynamics. The thesis introduces a comprehensive real-world case application for the evaluation experiments. The case is a hedge fund stock selection application for a typical long-short market-neutral equity strategy in the Malaysian stock market. The underlying technology of the stock selection system is GP, which assists in selecting stocks by exploiting the underlying nonlinear relationships between a diverse range of influencing factors. The three proposed algorithms are all applied to this case study during evaluation. The results of experiments based on the case study demonstrate that all three new algorithms overwhelmingly outperform canonical GP on two aspects of the robustness criteria, leading to the conclusion that they are viable strategies for improving the robustness of GP individuals when the environment of a task being optimized or learned by a GP system is characterized by large, sudden, frequent and unpredictable changes.
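To illustrate the committee-voting idea (a simplified sketch: the members below are dummy stand-ins for evolved GP individuals, and the stocks and votes are invented), the final trading signal for each stock is taken as the majority vote of committee members evolved under different environmental dynamics:

from collections import Counter

def committee_vote(member_signals):
    """member_signals: list of dicts mapping stock -> +1 (long) or -1 (short)."""
    decisions = {}
    for stock in member_signals[0]:
        votes = [member[stock] for member in member_signals]
        decisions[stock] = Counter(votes).most_common(1)[0][0]   # majority decision
    return decisions

# Three committee members, each a stand-in for a GP individual evolved under a
# different market scenario; the committee output replaces the best-of-run individual.
members = [
    {"AAA": +1, "BBB": -1, "CCC": +1},
    {"AAA": +1, "BBB": +1, "CCC": -1},
    {"AAA": -1, "BBB": -1, "CCC": +1},
]
print(committee_vote(members))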
265
Methods for the automatic alignment of colour histograms. Senanayake, C. R. January 2009
Colour provides important information in many image processing tasks such as object identification and tracking. Different images of the same object frequently yield different colour values due to undesired variations in lighting and the camera. In practice, controlling the source of these fluctuations is difficult, uneconomical or even impossible in a particular imaging environment. This thesis is concerned with the question of how best to align the corresponding clusters of colour histograms to reduce or remove the effect of these undesired variations. We introduce feature based histogram alignment (FBHA) algorithms that enable flexible alignment transformations to be applied. The FBHA approach has three steps: 1) feature detection in the colour histograms, 2) feature association and 3) feature alignment. We investigate the choices for these three steps on two colour databases: 1) a structured and labelled database of RGB imagery acquired under controlled camera, lighting and object variation and 2) grey-level video streams from an industrial inspection application. The design and acquisition of the RGB image and grey-level video databases are a key contribution of the thesis. The databases are used to quantitatively compare the FBHA approach against existing methodologies and show it to be effective. FBHA is intended to provide a generic method for aligning colour histograms: it uses only information from the histograms and therefore ignores spatial information in the image. Spatial information and other context-sensitive cues are deliberately avoided to maintain the generic nature of the algorithm; by ignoring some of this important information we gain useful insights into the performance limits of a colour alignment algorithm that works from the colour histogram alone, and hence into the limits of a generic approach to colour alignment.
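A rough sketch of the three FBHA steps is given below (illustrative only: the peak detector, nearest-peak association rule and piecewise-linear alignment are assumptions for the example, not necessarily the choices evaluated in the thesis).

import numpy as np
from scipy.signal import find_peaks

def align_histograms(h_src, h_tgt):
    """h_src, h_tgt: 1-D numpy arrays of histogram counts.

    Returns a mapping of source intensity levels onto target levels via matched peaks;
    assumes each histogram has at least one detectable peak."""
    # 1) Feature detection: peaks in each colour histogram.
    p_src, _ = find_peaks(h_src, prominence=0.01 * h_src.max())
    p_tgt, _ = find_peaks(h_tgt, prominence=0.01 * h_tgt.max())
    # 2) Feature association: each source peak pairs with its nearest target peak.
    matches = sorted((s, p_tgt[np.argmin(np.abs(p_tgt - s))]) for s in p_src)
    src_knots, tgt_knots = zip(*matches)
    # 3) Feature alignment: piecewise-linear warp of intensity levels between matched peaks.
    levels = np.arange(len(h_src))
    return np.interp(levels, src_knots, tgt_knots)

# mapping[i] gives the target intensity that source intensity i is warped to.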
266
Sparse machine learning methods with applications in multivariate signal processing. Diethe, T. R. January 2010
This thesis details theoretical and empirical work that draws from two main subject areas: Machine Learning (ML) and Digital Signal Processing (DSP). A unified general framework is given for the application of sparse machine learning methods to multivariate signal processing. In particular, methods that enforce sparsity are employed for reasons of computational efficiency, regularisation, and compressibility. The methods presented can be seen as modular building blocks that can be applied to a variety of applications. Application-specific prior knowledge can be used in various ways, resulting in a flexible and powerful set of tools. The motivation for the methods is to be able to learn and generalise from a set of multivariate signals. In addition to testing on benchmark datasets, a series of empirical evaluations on real-world datasets was carried out. These included: the classification of musical genre from polyphonic audio files; a study of how the sampling rate in a digital radar can be reduced through the use of Compressed Sensing (CS); analysis of human perception of different modulations of musical key from Electroencephalography (EEG) recordings; and classification of the genre of musical pieces to which a listener is attending from Magnetoencephalography (MEG) brain recordings. These applications demonstrate the efficacy of the framework and highlight interesting directions for future research.
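The compressed-sensing ingredient can be illustrated with a small sparse-recovery sketch (a toy example with invented sizes and regularisation weight; the radar study in the thesis is considerably more involved): a sparse signal observed through far fewer random measurements than its length is recovered by L1-regularised least squares.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                            # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix
y = A @ x_true                                  # m compressed measurements of the length-n signal

x_hat = Lasso(alpha=0.01, max_iter=10000).fit(A, y).coef_   # basis-pursuit-style recovery
print("estimated nonzeros:", int(np.sum(np.abs(x_hat) > 0.1)),
      "relative error: %.3f" % (np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))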
267
Self-organised multi agent system for search and rescue operations. Saeedi, P. January 2010
Autonomous multi-agent systems perform inadequately in time-critical missions because they tend to exhaustively explore every location of the field in a single phase, without selecting a pertinent strategy. This research aims to solve this problem by introducing a hierarchy of exploration strategies. Agents explore an unknown search terrain with complex topology in multiple predefined stages, performing the pertinent strategy depending on their previous observations. Exploration inside unknown, cluttered, and confined environments is one of the main challenges for search and rescue robots inside collapsed buildings. In this regard we introduce a novel exploration algorithm for multi-agent systems that is able to perform a fast, fair, and thorough search, as well as resolving multi-agent traffic congestion. Our simulations have been performed on different test environments in which the complexity of the search field is defined by the fractal dimension of Brownian movements. The exploration stages are modelled on the arenas defined by the National Institute of Standards and Technology (NIST), which introduced three scenarios of progressive difficulty: yellow, orange, and red. The main focus of this research is the red arena, which has the least structure and the terrain most challenging to robot manoeuvrability.
268
Estimating uncertainty in multiple fibre reconstructions. Seunarine, K. K. January 2011
Diffusion magnetic resonance imaging (MRI) is a technique that allows us to probe the microstructure of materials. The standard technique in diffusion MRI is diffusion tensor imaging (DTI). However, DTI can only model a single fibre orientation and fails in regions of complex microstructure. Multiple-fibre algorithms aim to overcome this limitation of DTI, but there remain many questions about which multiple-fibre algorithms are most promising and how best to exploit them in tractography. This work focuses on exploring the potential of multiple-fibre reconstructions and preparing them for transfer to the clinical arena. We provide a standardised framework for comparing multiple-fibre algorithms and use it for a robust comparison of standard algorithms, such as persistent angular structure (PAS) MRI, spherical deconvolution (SD), maximum entropy SD (MESD), constrained SD (CSD) and QBall. An output of this framework is the parameter settings of the algorithms that maximise the consistency of reconstructions. We show that non-linear algorithms, and CSD in particular, provide the most consistent reconstructions. Next, we investigate features of the reconstructions that can be exploited to improve tractography. We show that the peak shapes of multiple-fibre reconstructions can be used to predict anisotropy in the uncertainty of fibre-orientation estimates. We design an experiment that exploits this information in the probabilistic index of connectivity (PICo) tractography algorithm. We then compare PICo tractography results that use both peak shape and sharpness to estimate uncertainty against results that use peak sharpness alone, and show structured differences between them. The final contribution of this work is a robust algorithm for calibrating PICo that overcomes some of the limitations of the original algorithm. We finish with some early exploratory work that aims to estimate the distribution of fibre orientations in a voxel using features of the reconstruction.
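For context, the single-tensor signal model that DTI fits in each voxel (the standard formulation whose single-orientation limitation motivates the multiple-fibre algorithms compared here) is

\[
S(\mathbf{g}, b) \;=\; S_0 \, \exp\!\left( -b \, \mathbf{g}^{\mathsf{T}} \mathbf{D} \, \mathbf{g} \right),
\]

where S_0 is the unweighted signal, b the diffusion weighting, g the gradient direction and D a symmetric positive-definite 3x3 tensor whose principal eigenvector gives the single estimated fibre orientation; multiple-fibre reconstructions such as PAS-MRI, SD and its variants replace this single-tensor assumption with richer orientation distributions.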
269
Structured discussion and early failure prediction in feature requests. Fitzgerald, C. E. B. January 2012
Feature request management systems are popular tools for gathering and negotiating stakeholders' change requests during system evolution. While these frameworks encourage stakeholder participation in distributed software development, their lack of structure also raises challenges. We present a study of requirements defects and failures in large-scale feature request management systems, which we build upon to propose and evaluate two distinct solutions for key challenges in feature requests. The discussion forums on which feature request management systems are based make it difficult for developers to understand stakeholders' real needs. We propose a tool-supported argumentation framework, DoArgue, which integrates into feature request management systems, allowing stakeholders to annotate comments on whether a suggested feature should be implemented. DoArgue aims to help stakeholders provide input into requirements activity that is more effective and understandable to developers. A case study evaluation suggests that DoArgue encapsulates the key discussion concepts on implementing a feature, and requires little additional effort to use. Therefore it could be adopted to clarify the complexities of requirements discussions in distributed settings. Deciding how much upfront requirements analysis to perform on feature requests is another important challenge: too little may result in inadequate functionalities being developed, costly changes, and wasted development effort; too much is a waste of time and resources. We propose an automated tool-supported framework for predicting failures early in a feature request's life-cycle, when a decision is made on whether to implement it. A cost-benefit model assesses the value of conducting additional requirements analysis on a body of feature requests predicted to fail. An evaluation on six large-scale projects shows that prediction models provide more value than the best baseline predictors for many failure types. This suggests that failure prediction during requirements elicitation is a promising approach for localising, guiding, and deciding how much requirements analysis to conduct.
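A toy sketch of the kind of cost-benefit triage such a model supports is given below (the probabilities, costs and decision rule are invented for the example; the thesis model is richer): additional analysis on a feature request predicted to fail is worthwhile when its expected saving exceeds the cost of the analysis.

def worth_analysing(p_fail, cost_of_failure, p_prevent, analysis_cost):
    """True if the expected saving from extra requirements analysis exceeds its cost."""
    expected_saving = p_fail * p_prevent * cost_of_failure
    return expected_saving > analysis_cost

# Two feature requests scored by a failure predictor; all figures are invented.
requests = [
    {"id": "FR-1", "p_fail": 0.60, "cost_of_failure": 40.0},
    {"id": "FR-2", "p_fail": 0.10, "cost_of_failure": 40.0},
]
for fr in requests:
    flag = worth_analysing(fr["p_fail"], fr["cost_of_failure"],
                           p_prevent=0.5, analysis_cost=8.0)
    print(fr["id"], "analyse further" if flag else "proceed without extra analysis")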
270
Blur perception: an evaluation of focus measures. Shilston, R. T. January 2012
Since the middle of the 20th century, the technological development of conventional photographic cameras has taken advantage of the advances in electronics and signal processing. One specific area that has benefited from these developments is auto-focus: the ability for a camera's optical arrangement to be altered so as to ensure the subject of the scene is in focus. However, whilst the precise focus point can be known for a single point in a scene, the method for selecting a best focus for the entire scene is an unsolved problem. Many focus algorithms have been proposed and compared, though no overall comparison between all algorithms has been made, nor have the results been compared with human observers. This work describes a methodology that was developed to benchmark focus algorithms against human results. Experiments that capture quantitative metrics about human observers were developed and conducted with a large set of observers on a diverse range of equipment. From these experiments, it was found that humans were highly consensual in their experimental responses. The human results were then used as a benchmark, against which equivalent experiments were performed by each of the candidate focus algorithms. A second set of experiments, conducted in a controlled environment, captured the underlying human psychophysical blur discrimination thresholds in natural scenes. The resultant thresholds were then characterised and compared against equivalent discrimination thresholds obtained by using the candidate focus algorithms as automated observers. The results of this comparison and how this should guide the selection of an auto-focus algorithm are discussed, with comment being passed on how focus algorithms may need to change to cope with future imaging techniques.
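As an example of the kind of focus measure being benchmarked (one common gradient-based measure, offered for illustration and not necessarily among the thesis's candidates), the variance of the image Laplacian scores sharper images higher:

import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def laplacian_variance(image):
    """image: 2-D array of grey-level intensities; a higher score indicates sharper focus."""
    return float(np.var(laplace(image.astype(float))))

# Crude check: a sharp random texture scores higher than a blurred copy of itself,
# because blurring removes the high-frequency content the Laplacian responds to.
sharp = np.random.default_rng(0).random((128, 128))
blurred = gaussian_filter(sharp, sigma=2.0)
assert laplacian_variance(sharp) > laplacian_variance(blurred)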