111

Information security awareness : improving current research and practice

Ashenden, D. M. January 2015
Large-scale data losses experienced across both public and private sector organisations have led to expectations that organisations will develop a culture that supports information security aims and objectives. Although many organisations now run awareness, education and training programmes for their employees, information security incidents caused by employee misuse of information keep occurring, which suggests that these programmes are not working. The research presented in this thesis examines ways to better understand employees’ attitudes towards information security with a view to improving current organisational practice. The research explores whether Chief Information Security Officers are delivering organisational change for information security, before moving on to better understand employees’ attitudes and how these are translated into behaviours. The research takes a mixed-methods approach that is not often used in information security research, combining qualitative and quantitative analytical methods grounded in social psychology theory. Case studies are carried out with Chief Information Security Officers as well as at the Office of Fair Trading and Prudential plc. The research delivers a survey tool that can be used in organisations to better understand how to frame information security messages so that they achieve their aims; an expert panel of users evaluated the survey. The research concluded that end users fall into two groups – the ‘I Can Handle It Group’ and the ‘It’s Out of My Control Group’ – and these substantive findings have been validated by a field experiment. By mirroring the attributions of the dominant group, the field experiment demonstrates that it is possible to influence employees’ behaviour.
112

Real-coded genetic algorithm particle filters for high-dimensional state spaces

Hussain, M. S. January 2014
This thesis successfully addresses the issues faced by particle filters in high-dimensional state spaces by comparing them with genetic algorithms and then using genetic algorithm theory to address these issues. Sequential Monte Carlo methods are a class of online posterior density estimation algorithms that are suitable for non-Gaussian and nonlinear environments; however, they are known to suffer from particle degeneracy, where the sample of particles becomes too sparse to approximate the posterior accurately. Various techniques have been proposed to address this issue, but these techniques fail in high dimensions. In this thesis, after a careful comparison between genetic algorithms and particle filters, we posit that genetic algorithm theoretic arguments can be used to explain the working of particle filters. Analysing the working of a particle filter, we note that it is designed similarly to a genetic algorithm but does not include recombination. We argue, based on the building-block hypothesis, that the addition of a recombination operator would address the sample impoverishment phenomenon in higher dimensions. We propose a novel real-coded genetic algorithm particle filter (RGAPF) based on these observations and test our hypothesis on the stochastic volatility estimation of financial stocks. The RGAPF successfully scales to higher dimensions. To further test whether building-block-hypothesis-like effects are due to the recombination operator, we compare the RGAPF with a mutation-only particle filter whose adjustable mutation rate is set to equal the population-to-population variance of the RGAPF. The RGAPF significantly and consistently performs better, indicating that recombination has a subtle but significant effect that may be explained theoretically by genetic algorithm theory. After two successful attempts at validating our hypothesis, we compare the performance of the RGAPF using different real-recombination operators. Observing the behaviour of the RGAPF under these operators, we propose a mean-centric recombination operator specifically for high-dimensional particle filtering. This recombination operator is successfully tested and compared with benchmark particle filters and a hybrid CMA-ES particle filter using simulated data and, finally, real end-of-day data for the securities making up the FTSE-100 index. Each experiment is discussed in detail, and we conclude with a brief description of future directions of research.
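To make the recombination idea concrete, the following is a minimal sketch of how a real-coded crossover step could sit between the resampling and re-weighting stages of a bootstrap particle filter. The arithmetic-crossover operator, function names and the toy transition/likelihood models are illustrative assumptions, not the RGAPF as implemented in the thesis.

    import numpy as np

    def arithmetic_crossover(parents, rng):
        """Blend randomly paired parents: child = a*p1 + (1-a)*p2 (an illustrative
        real-coded operator, not necessarily the one used in the thesis)."""
        n = len(parents)
        idx = rng.permutation(n)
        half = n // 2
        p1, p2 = parents[idx[:half]], parents[idx[half:2 * half]]
        a = rng.uniform(size=(half, 1))
        children = np.vstack([a * p1 + (1 - a) * p2, (1 - a) * p1 + a * p2])
        if n % 2:                                 # keep the population size constant
            children = np.vstack([children, parents[idx[-1:]]])
        return children

    def ga_particle_filter_step(particles, weights, observation, transition, likelihood, rng):
        """One step of a GA-flavoured particle filter (hypothetical sketch)."""
        # 1. Resample according to the current weights (standard SIR step).
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        resampled = particles[idx]
        # 2. Recombination: mix resampled particles to counter sample impoverishment.
        recombined = arithmetic_crossover(resampled, rng)
        # 3. "Mutation": propagate through the stochastic state-transition model.
        proposed = transition(recombined, rng)
        # 4. Selection pressure: re-weight by the observation likelihood and normalise.
        new_weights = likelihood(observation, proposed)
        return proposed, new_weights / new_weights.sum()

    # Toy usage on a 50-dimensional state with made-up dynamics and likelihood.
    rng = np.random.default_rng(0)
    particles = rng.normal(size=(1000, 50))
    weights = np.full(1000, 1.0 / 1000)
    transition = lambda x, r: x + 0.1 * r.normal(size=x.shape)
    likelihood = lambda obs, x: np.exp(-0.5 * np.sum((x - obs) ** 2, axis=1) / 10.0)
    particles, weights = ga_particle_filter_step(particles, weights, np.zeros(50),
                                                 transition, likelihood, rng)
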
113

Supervised algorithm selection for flow and other computer vision problems

Mac Aodha, O. January 2014
Motion estimation is one of the core problems of computer vision. Given two or more frames from a video sequence, the goal is to find the temporal correspondence for one or more points from the sequence. For dense motion estimation, or optical flow, a dense correspondence field is sought between a pair of frames. A standard approach to optical flow involves constructing an energy function and then using some optimization scheme to find its minimum. These energy functions are hand-designed to work well in general, with the intention that the global minimum corresponds to the ground-truth temporal correspondence. As an alternative to these heuristic energy functions, we aim to assess the quality of existing algorithms directly from training data. We show that the addition of an offline training phase can improve the quality of motion estimation. For optical flow, decisions such as which algorithm to use and when to trust its accuracy can be learned from training data. Generating ground-truth optical flow data is a difficult and time-consuming process. We propose the use of synthetic data for training and present a new dataset for optical flow evaluation, along with a tool for generating an unlimited quantity of ground-truth correspondence data. We use the same approach to synthesize depth images for the problem of depth-image super-resolution and show that the synthetic data are superior to real data for training. We present results for optical flow confidence estimation with improved performance on a standard benchmark dataset. Using a similar feature representation, we extend this work to occlusion region detection and present state-of-the-art results for challenging real scenes. Finally, given a set of different algorithms, we treat optical flow estimation as the problem of choosing the best algorithm from this set for a given pixel. However, posing algorithm selection as a standard classification problem assumes that class labels are disjoint: for each training example it is assumed that there is only one class label that correctly describes it, and that all other labels are equally bad. To overcome this, we propose a novel example-dependent cost-sensitive learning algorithm based on decision trees, where each label is instead a vector representing a data point's affinity for each of the algorithms. We show that this new algorithm has improved accuracy compared to other classification baselines on several computer vision problems.
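As a rough illustration of algorithm selection with soft, per-example labels, the sketch below trains a multi-output regression tree on affinity vectors (one score per candidate flow algorithm) and picks the best-scoring algorithm for each example. The multi-output tree is a simple stand-in for the thesis's example-dependent cost-sensitive trees; the data, feature dimensions and scoring are made up for illustration.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # Made-up data: per-pixel feature vectors and an affinity score per candidate
    # flow algorithm (e.g. negative end-point error); higher means better.
    rng = np.random.default_rng(0)
    X = rng.random((500, 16))
    affinities = rng.random((500, 3))

    # A multi-output regression tree is trained on the whole affinity vector
    # rather than a single hard class label.
    selector = DecisionTreeRegressor(max_depth=8, random_state=0).fit(X, affinities)

    def choose_algorithm(features):
        """Pick, per example, the algorithm with the highest predicted affinity."""
        return np.argmax(selector.predict(np.atleast_2d(features)), axis=1)

    print(choose_algorithm(X[:5]))
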
114

A proactive approach to application performance analysis, forecast and fine-tuning

Kargupta, S. January 2014
A major challenge currently faced by the IT industry is the cost, time and resources associated with repetitive performance testing when existing applications undergo evolution. IT organisations are under pressure to reduce the cost of testing, especially given its high percentage of the overall cost of application portfolio management. Previously, to analyse application performance, researchers have proposed techniques requiring complex performance models, non-standard modelling formalisms, process algebras or complex mathematical analysis. In Continuous Performance Management (CPM), automated load testing is invoked during the Continuous Integration (CI) process after a build. CPM is reactive and raises alarms when performance metrics are violated; the CI process is repeated until performance is acceptable. Previous and current work has yet to provide an approach that allows software developers to proactively target a specified performance level while modifying existing applications, instead of reacting to performance test results after code modification and build. There is thus a strong need for an approach which does not require repetitive performance testing, resource-intensive application profilers, complex software performance models or additional quality assurance experts. We propose to fill this gap with an innovative relational model associating an operation's Performance with two novel concepts – the operation's Admittance and Load Potential. To address changes to a single type or multiple types of processing activities of an application operation, we present two bi-directional methods, both of which use the relational model. From annotations of Delay Points within the code, the methods allow software developers either to fine-tune the operation's algorithm to target a specified performance level in a bottom-up way, or to predict the operation's performance due to code changes in a top-down way under a given workload. The methods do not need complex performance models or expensive performance testing of the whole application. We validate our model on a realistic experimentation framework. Our results indicate that it is possible to characterize an application's Performance as a function of its Admittance and Load Potential, and that the application's Admittance can be characterized as a function of the latency of its Delay Points. Applying this method to complex large-scale systems has the potential to significantly reduce the cost of performance testing during system maintenance and evolution.
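The sketch below illustrates the general shape of such a relational model: predicting an operation's performance from its annotated Delay Point latencies and an applied load (top-down), and backing out a latency budget for a target performance level (bottom-up). The inverse/linear forms used here are placeholders for illustration only; they are not the thesis's actual definitions of Admittance or Load Potential.

    # Minimal sketch of a relational performance model in the spirit described
    # above; the functional forms are assumptions, not the thesis's definitions.

    def admittance(delay_point_latencies_ms):
        """Assume the operation's Admittance falls as its annotated Delay Point
        latencies grow (placeholder definition)."""
        return 1.0 / (1.0 + sum(delay_point_latencies_ms) / 1000.0)

    def predicted_response_time_ms(delay_point_latencies_ms, load_potential):
        """Top-down use: predict performance from code-level delays and workload."""
        return load_potential / admittance(delay_point_latencies_ms)

    def delay_budget_ms(target_response_ms, load_potential):
        """Bottom-up use: the total Delay Point latency a developer can 'spend'
        while still hitting a target performance level under a given load."""
        return max(0.0, (target_response_ms / load_potential - 1.0) * 1000.0)

    print(predicted_response_time_ms([30, 45, 10], load_potential=120.0))
    print(delay_budget_ms(target_response_ms=150.0, load_potential=120.0))
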
115

Experimental computational simulation environments for algorithmic trading

Galas, M. January 2014
This thesis investigates experimental Computational Simulation Environments for Computational Finance, focusing on Algorithmic Trading (AT) models and their risk. Within Computational Finance, AT combines different analytical techniques from statistics, machine learning and economics to create algorithms capable of taking, executing and administering investment decisions with optimal levels of profit and risk. Computational Simulation Environments are crucial for Big Data Analytics and are increasingly being used by major financial institutions for researching algorithm models, evaluating their stability, and estimating their optimal parameters and expected risk and performance profiles. These large-scale Environments are predominantly designed for testing, optimisation and monitoring of algorithms running in virtual or real trading mode. The state-of-the-art Computational Simulation Environment described in this thesis is believed to be the first available for academic research in Computational Finance, specifically Financial Economics and AT. Consequently, the aims of the thesis were: 1) to set the operational expectations of the environment, and 2) to holistically evaluate the prototype software architecture of the system by providing access to it to the academic community via a series of trading competitions. Three key studies have been conducted as part of this thesis: a) an experiment investigating the design of Electronic Market Simulation Models; b) an experiment investigating the design of a Computational Simulation Environment for researching Algorithmic Trading; c) an experiment investigating algorithms and the design of a Portfolio Selection System, a key component of AT systems. Electronic Market Simulation Models (Experiment 1): this study investigates methods of simulating Electronic Markets (EMs) to enable computational finance experiments in trading. EMs are central hubs for the bilateral exchange of securities in a well-defined, contracted and controlled manner. Such modern markets rely on electronic networks and are designed to replace Open Outcry Exchanges, offering increased speed, reduced transaction costs and programmatic access. The study of EM simulation models is important for testing trading paradigms, as it allows users to tailor the simulation to the needs of a particular paradigm. It is common practice amongst investment institutions to use simulated EMs to fine-tune their algorithms before allowing the algorithms to trade with real funds. Simulations of EMs provide users with the ability to investigate market micro-structure, participate in a market, receive live data feeds and monitor their behaviour without bearing any of the risks associated with real-time market trading. Simulated EMs are used by risk managers to test risk characteristics and by quant developers to build and test quantitative financial systems against market behaviour. Computational Simulation Environments (Experiment 2): this study investigates the design, implementation and testing of an experimental Environment for Algorithmic Trading able to support a variety of AT strategies. The Environment consists of a set of distributed, multi-threaded, event-driven, real-time, Linux services communicating with each other via an asynchronous messaging system. The Environment allows multi-user real and virtual trading.
It provides a proprietary application programming interface (API) to support research into algorithmic trading models and strategies. It supports advanced trading-signal generation and analysis in near real-time, using statistical and technical analysis as well as data mining methods. It provides data aggregation functionalities to process and store market data feeds. Portfolio Selection System (Experiment 3): this study investigates a key component of Computational Finance systems for discovering exploitable relationships between financial time-series, applicable amongst others to algorithmic trading, where the challenge lies in identifying similarities and dissimilarities in the behaviour of elements within variable-size portfolios of tradable and non-tradable securities. Recognition of sets of securities characterized by very similar or dissimilar behaviour over time is beneficial from the perspective of risk management and the recognition of statistical arbitrage and hedge opportunities, and can also be beneficial from the point of view of portfolio diversification. Consequently, a large-scale search algorithm enabling discovery of sets of securities with AT domain-specific similarity characteristics can be utilized in the creation of better portfolio-based strategies, pairs-trading strategies, statistical arbitrage strategies, hedging and mean-reversion strategies. This thesis makes the following contributions to science: Electronic Markets Simulation - identifies key features, modes of operation and software architecture of an electronic financial exchange for simulated (virtual) trading, and identifies key exchange simulation models. These simulation models are crucial in the evaluation of trading algorithms and systemic risk, and the majority of the proposed models are believed to be unique in academia. Computational Simulation Environment - the design, implementation and testing of a prototype experimental Computational Simulation Environment for Computational Finance research, currently supporting the design of trading algorithms and the analysis of their associated risk; this is believed to be unique in academia. Portfolio Selection System - defines what is believed to be a unique software system for portfolio selection, containing a combinatorial framework for the discovery of subsets of internally cointegrated time-series of financial securities and a graph-guided search algorithm for the combinatorial selection of such time-series subsets.
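As a rough sketch of the graph-guided idea behind the Portfolio Selection System, the code below links pairs of securities that test as cointegrated (Engle-Granger, via statsmodels) and shortlists densely connected groups as candidate baskets. The p-value threshold, the clique-based shortlist and the synthetic price series are assumptions for illustration, not the thesis's combinatorial framework.

    import itertools
    import networkx as nx
    import numpy as np
    from statsmodels.tsa.stattools import coint

    def cointegration_graph(prices, p_threshold=0.05):
        """Edges link pairs of securities whose Engle-Granger test rejects
        no-cointegration at an assumed p-value threshold."""
        g = nx.Graph()
        g.add_nodes_from(prices.keys())
        for a, b in itertools.combinations(prices.keys(), 2):
            _, p_value, _ = coint(prices[a], prices[b])
            if p_value < p_threshold:
                g.add_edge(a, b, p=p_value)
        return g

    def candidate_baskets(g, min_size=3):
        """Densely connected groups (maximal cliques) as candidate mean-reversion
        or statistical-arbitrage baskets - a simple stand-in for the thesis's
        combinatorial search."""
        return [c for c in nx.find_cliques(g) if len(c) >= min_size]

    # Synthetic example: four series share a common random-walk component,
    # one ("IDIO") is independent.
    rng = np.random.default_rng(0)
    common = np.cumsum(rng.normal(size=500))
    prices = {f"S{i}": common + rng.normal(scale=0.5, size=500) for i in range(4)}
    prices["IDIO"] = np.cumsum(rng.normal(size=500))
    print(candidate_baskets(cointegration_graph(prices)))
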
116

Compartment models and model selection for in-vivo diffusion-MRI of human brain white matter

Ferizi, U. January 2014
Diffusion MRI microstructure imaging provides a unique noninvasive probe into tissue microstructure. The technique relies on mathematical models relating microscopic tissue features to the MR signal. The assumption of Gaussian diffusion oversimplifies the behaviour of water in complex media. Multi-compartment models fit the signal better and enable the estimation of more specific indices, such as axon diameter and density. A previous model comparison framework used data from fixed rat brains to show that three-compartment models, designed for intra/extra-axonal diffusion, best explain multi-b-value datasets. The purpose of this PhD work is to translate this analysis to in-vivo human brain white matter. It updates the framework methodology by enriching the acquisition protocol, extending the model base and improving the model fitting. In the first part of this thesis, the original fixed-rat study is taken in-vivo using a live human subject on a clinical scanner. A preliminary analysis cannot differentiate the models well. The acquisition protocol is then extended to include a richer angular resolution of diffusion-sampling gradient directions. Compared with the ex-vivo data, simpler three-compartment models emerge; changes in diffusion behaviour and acquisition protocol are likely to have influenced the results. The second part considers models that explicitly seek to explain fibre dispersion, another potentially specific biomarker of neurological diseases. This study finds that models that capture fibre dispersion are preferred, showing the importance of modelling dispersion even in apparently coherent fibres. In the third part, we improve the methodology. First, during the data pre-processing we narrow the region of interest. Second, the model fitting takes into account the varying echo time and compartmental tissue relaxation; we also test the benefit to model performance of different compartmental diffusivities. Next, we evaluate the inter- and intra-subject reproducibility of the ranking. In the fourth part, high-gradient Connectom-Skyra data are used to assess the generalisability of earlier results derived from a standard Achieva scanner. The results show a reproducibility of the major trends in the model ranking. In particular, dispersion models explain low gradient strength data best, but cannot capture the Connectom signal that remains at very high b-values. The fifth part uses cross-validation and bootstrapping as complementary means of model ranking. Both methods support the previous ranking; however, the leave-one-shell-out cross-validation suggests smaller differences between the models than bootstrapping does.
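For readers unfamiliar with compartment models, the snippet below evaluates a generic two-compartment "ball and stick" signal: a restricted stick compartment along the fibre direction plus isotropic ("ball") diffusion, mixed by a volume fraction. It is a textbook example of the model class being ranked, not one of the specific models compared in the thesis; the parameter values are arbitrary.

    import numpy as np

    def ball_and_stick_signal(b_values, grad_dirs, fibre_dir, f_stick, d_par, d_iso):
        """Normalised signal S/S0 for a generic two-compartment model:
        f_stick * exp(-b*d_par*(g.n)^2) + (1-f_stick) * exp(-b*d_iso)."""
        fibre_dir = fibre_dir / np.linalg.norm(fibre_dir)
        cos2 = (grad_dirs @ fibre_dir) ** 2          # alignment of gradient and fibre
        stick = np.exp(-b_values * d_par * cos2)     # restricted intra-axonal part
        ball = np.exp(-b_values * d_iso)             # isotropic extra-cellular part
        return f_stick * stick + (1.0 - f_stick) * ball

    b = np.array([1000.0, 1000.0, 3000.0])           # b-values in s/mm^2
    g = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]], float)
    print(ball_and_stick_signal(b, g, fibre_dir=np.array([1.0, 0.0, 0.0]),
                                f_stick=0.7, d_par=1.7e-3, d_iso=3.0e-3))
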
117

Modelling empirical features and liquidity resilience in the Limit Order Book

Panayi, E. January 2015
The contribution of this body of work is in developing new methods for modelling interactions in modern financial markets and understanding the origins of pervasive features of trading data. The advent of electronic trading and the improvement in trading technology have brought about vast changes in individual trading behaviours, and thus in the overall dynamics of trading interactions. The increased sophistication of market venues has led to the diminishing role of specialists in making markets, more direct interaction between trading parties and the emergence of the Limit Order Book (LOB) as the pre-eminent trading system. However, this has also been accompanied by increased fluctuation in the liquidity available for immediate execution, as market makers try to balance the provision of liquidity against the probability of an adverse price move, while liquidity traders, increasingly aware of this, search for the optimal placement strategy to reduce execution costs. The varying intra-day liquidity levels in the LOB are one of the main issues examined here. The thesis proposes a new measure for the resilience of liquidity, based on the duration of intra-day liquidity droughts. The flexible survival regression framework employed can accommodate any liquidity measure and any threshold liquidity level of choice to model these durations, and relate them to covariates summarising the state of the LOB. Of these covariates, the frequency of the droughts and the value of the liquidity measure are found to have substantial power in explaining the variation in the new resilience metric. We show that the model also has substantial predictive power for the duration of these liquidity droughts, and could thus be of use in estimating the time between subsequent tranches of a large order in an optimal execution setting. A number of recent studies have uncovered a commonality in liquidity that extends across markets and across countries. Using synthetic examples, we outline the implications of the PCA regression approaches that have been employed in recent studies, and demonstrate that such an approach can be misleading about the level of liquidity commonality when applied to the study of European stocks. We also propose a method for measuring commonality in liquidity resilience, using an extension of the resilience metric identified earlier. This involves the first use of functional data analysis in this setting, both as a way of summarising resilience data and as a way of measuring commonality via functional principal components analysis regression. Trading interactions are considered using a form of agent-based modelling of the LOB, where activity is assumed to arise from the interaction of liquidity providers, liquidity demanders and noise traders. The highly detailed nature of the model means that one can quantify the dependence between order arrival rates at different prices, as well as market orders and cancellations. In this context, we demonstrate the value of indirect inference and simulation-based estimation methods (multi-objective optimisation in particular) for models for which direct estimation through maximum likelihood is difficult (for example, when the likelihood cannot be obtained in closed form). Besides making a novel contribution to the area of agent-based modelling, we demonstrate how the model can be used in a regulation setting, to quantify the effect of the introduction of new financial regulation.
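To illustrate the quantity underlying the proposed resilience metric, the sketch below extracts the durations of spells during which a chosen liquidity measure (for example, depth at the best quotes) stays below a threshold. The data layout, threshold and the simple treatment of an unfinished spell at the end of the day are assumptions; in the thesis these durations feed a flexible survival regression on LOB covariates.

    import numpy as np

    def drought_durations(timestamps, liquidity, threshold):
        """Durations of intra-day liquidity droughts: spells during which the
        chosen liquidity measure stays below a threshold level."""
        below = liquidity < threshold
        durations, start = [], None
        for t, flag in zip(timestamps, below):
            if flag and start is None:
                start = t                          # drought begins
            elif not flag and start is not None:
                durations.append(t - start)        # drought ends
                start = None
        if start is not None:                      # spell still open at day's end
            durations.append(timestamps[-1] - start)
        return np.array(durations)

    # Toy snapshot series: timestamps in seconds, depth at the best quotes.
    ts = np.arange(0.0, 10.0, 0.5)
    depth = np.array([5, 4, 1, 1, 2, 6, 7, 1, 1, 1, 5, 6, 2, 1, 5, 6, 7, 8, 1, 1], float)
    print(drought_durations(ts, depth, threshold=3))
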
118

Surrogate-driven motion models from cone-beam CT for motion management in radiotherapy treatments

Martin, J. L. January 2015
This thesis details a variety of methods for building a surrogate-driven motion model from a cone-beam CT (CBCT) scan. The methods are intended to form a key constituent of a tracked radiotherapy (RT) treatment system, by providing a markerless means of tracking tumour and organ-at-risk (OAR) positions in real time. The beam can then be adjusted to account for the respiratory motion of the tumour, whilst ensuring no adverse effects on the OAR from the adjustment of the beam. First, an iterative method for markerless tracking of the lung tumour region is presented. A motion model of the tumour region is built using the CBCT projections, which then gives tumour position information during treatment. For simulated data, the motion model was able to reduce the mean L2-norm error from 4.1 to 1.0 mm, relative to the mean position. The model was used to account for the motion of an object placed within a respiratory phantom. When used to perform a motion compensated reconstruction (MCR), measured dimensions of this object agreed to within the voxel size (1 mm cube) used for the reconstruction. The method was applied to 6 clinical datasets. Improvements in edge contrast of the tumour were seen, and compared to clinically-derived positions for the tumour centres, the mean absolute error in the superior-inferior direction was reduced to under 2.5 mm. The model is then extended to monitor both tumour and OAR regions during treatment. This extended approach uses both the planning 4DCT and CBCT scans, drawing on the strengths of each respective dataset. Results are presented on three simulated and three clinical datasets. For the simulated data, maximal L2-norm errors were reduced from 14.8 to 4.86 mm. Improvements in edge contrast in the diaphragm and lung regions were seen in the MCR for the clinical data. A final approach, building a model of the entire patient, is then presented, utilising only the CBCT data. An optical-flow-based approach is taken, adapted to the particular nature of the CBCT data. Results on a simulated case are presented, showing increased edge contrast in the MCR using the fitted motion model. Mean L2-norm errors in the tumour region were reduced from 4.2 to 2.6 mm. Future work is discussed, with a variety of extensions to the methods proposed. With further development, it is hoped that some of the ideas detailed could be translated into the clinic and have a direct impact on patient treatment.
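As a generic illustration of a surrogate-driven motion model (not the CBCT-projection-based fitting used in the thesis), the sketch below fits a per-point linear model relating a respiratory surrogate signal to point displacements by least squares, then predicts displacements for a new surrogate value. The linear form and the synthetic data are assumptions.

    import numpy as np

    def fit_linear_motion_model(surrogate, displacements):
        """Fit displacement(t) ~ slope * s(t) + offset for every point.
        surrogate: (T,) signal; displacements: (T, N, 3) point motions."""
        design = np.column_stack([surrogate, np.ones(len(surrogate))])   # (T, 2)
        flat = displacements.reshape(len(surrogate), -1)                 # (T, N*3)
        coeffs, *_ = np.linalg.lstsq(design, flat, rcond=None)           # (2, N*3)
        return coeffs

    def predict_displacements(coeffs, surrogate_value, n_points):
        """Evaluate the fitted model at a new surrogate value."""
        return (np.array([surrogate_value, 1.0]) @ coeffs).reshape(n_points, 3)

    # Synthetic breathing trace driving 10 points with random amplitudes.
    rng = np.random.default_rng(1)
    s = np.sin(np.linspace(0, 4 * np.pi, 40))
    true_amp = rng.normal(size=(10, 3))
    disp = s[:, None, None] * true_amp + 0.05 * rng.normal(size=(40, 10, 3))
    coeffs = fit_linear_motion_model(s, disp)
    print(np.abs(predict_displacements(coeffs, 1.0, 10) - true_amp).max())
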
119

Human balance behaviour in immersive virtual environments

Antley, A. January 2014
Presence is defined as the illusion of being in the place depicted by an immersive virtual reality (IVR) system. A consequence of this illusion is that participants respond to places and events in an IVR as if they were real. Currently, there is no objective measure of presence that applies across all systems and applications. In this thesis we examine a particular type of response-as-if-real, human balance behaviour (HBB) – the actions that prevent the body’s centre of gravity from moving outside the base of support – as a way to measure presence in an IVR. Our first experiment was designed to investigate whether HBB can detect presence in IVRs. We used surface EMG to measure muscle activations and found an increase when subjects walked on a virtual raised platform compared to a virtual floor registered to the laboratory floor. A similar increase was found when subjects walked on a real raised platform. This provides evidence of real HBB induced by an IVR. In a second experiment, HBB was used to compare partial-body and full-body tracking configurations. When participants viewed a lateral lean imposed on the torso of a synchronous virtual body (SVB), their stance angle changed in a compensatory direction. The negative correlation indicating compensatory leaning was weaker in the partial-body tracking condition, suggesting that partial-body tracking may dampen the full-body illusion in IVRs. We carried out a case study to show the relevance of HBB for IVRs used in movement rehabilitation. Hemiparetic stroke patients observed an SVB that was colocated with their own body. When an animation caused their virtual arms to rise up, we found evidence of counterbalancing in centre-of-pressure data that was not apparent when the subjects were told simply to imagine the movement. Here HBB directly indicates the effectiveness of an IVR application.
120

Multitask and transfer learning for multi-aspect data

Romera Paredes, B. January 2014
Supervised learning aims to learn functional relationships between inputs and outputs. Multitask learning tackles supervised learning tasks by performing them simultaneously to exploit commonalities between them. In this thesis, we focus on the problem of eliminating negative transfer in order to achieve better performance in multitask learning. We start by considering a general scenario in which the relationship between tasks is unknown. We then narrow our analysis to the case where data are characterised by a combination of underlying aspects, e.g., a dataset of images of faces, where each face is determined by a person's facial structure, the emotion being expressed, and the lighting conditions. In machine learning there have been numerous efforts based on multilinear models to decouple these aspects, but these have primarily used techniques from the field of unsupervised learning. In this thesis we take inspiration from these approaches and hypothesize that supervised learning methods can also benefit from exploiting these aspects. The contributions of this thesis are as follows: 1. A multitask learning and transfer learning method that avoids negative transfer when there is no prescribed information about the relationships between tasks. 2. A multitask learning approach that takes advantage of a lack of overlapping features between known groups of tasks associated with different aspects. 3. A framework which extends multitask learning using multilinear algebra, with the aim of learning tasks associated with a combination of elements from different aspects. 4. A novel convex relaxation approach that can be applied both to the suggested framework and, more generally, to any tensor recovery problem. Through theoretical validation and experiments on both synthetic and real-world datasets, we show that the proposed approaches allow fast and reliable inferences. Furthermore, when performing learning tasks on an aspect of interest, accounting for secondary aspects leads to significantly more accurate results than using traditional approaches.
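A common way to couple related linear tasks, loosely in the spirit of the convex relaxation in contribution 4, is to penalise the trace norm of the stacked weight matrix so that tasks share a low-rank structure. The proximal-gradient sketch below illustrates that generic idea on synthetic data; it is not the thesis's multilinear formulation, and the step size, regularisation weight and data are assumptions.

    import numpy as np

    def svd_shrink(W, tau):
        """Singular-value soft-thresholding: proximal operator of the trace norm."""
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def multitask_trace_norm(Xs, ys, lam=0.1, lr=0.3, iters=300):
        """Proximal gradient for T linear regression tasks with a shared low-rank
        structure on the d x T weight matrix (a generic sketch)."""
        d, T = Xs[0].shape[1], len(Xs)
        W = np.zeros((d, T))
        for _ in range(iters):
            grad = np.zeros_like(W)
            for t in range(T):
                grad[:, t] = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / len(ys[t])
            W = svd_shrink(W - lr * grad, lr * lam)   # gradient step + trace-norm prox
        return W

    # Synthetic tasks sharing a rank-2 structure; singular values of the learned
    # weight matrix should show two dominant components.
    rng = np.random.default_rng(2)
    shared = rng.normal(size=(8, 2))
    Xs = [rng.normal(size=(60, 8)) for _ in range(5)]
    ys = [X @ (shared @ rng.normal(size=2)) + 0.1 * rng.normal(size=60) for X in Xs]
    W = multitask_trace_norm(Xs, ys)
    print(np.round(np.linalg.svd(W, compute_uv=False), 3))
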
