1. Simple and adaptive particle swarms. Bratton, Daniel, January 2010.
The substantial advances that have been made to both the theoretical and practical aspects of particle swarm optimization over the past 10 years have taken it far beyond its original intent as a biological swarm simulation. This thesis details and explains these advances in the context of what has been achieved to this point, as well as what has yet to be understood or solidified within the research community. Taking into account the state of the modern field, a standardized PSO algorithm is defined for benchmarking and comparative purposes, both within the work and for the community as a whole. This standard is refined and simplified over several iterations into a form that does away with potentially undesirable properties of the standard algorithm while retaining equivalent or superior performance on the common set of benchmarks. This refinement, referred to as a discrete recombinant swarm (PSO-DRS), requires only a single user-defined parameter in the positional update equation, and uses minimal additive stochasticity rather than the multiplicative stochasticity inherent in the standard PSO. After a mathematical analysis of the PSO-DRS algorithm, an adaptive framework is developed and rigorously tested, demonstrating the effects of the tunable particle- and swarm-level parameters. This adaptability shows practical benefit by broadening the range of problems which the PSO-DRS algorithm is well-suited to optimize.
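The standard PSO update mentioned in the abstract can be illustrated with a minimal sketch. The inertia and acceleration constants, swarm size, iteration count, and the sphere benchmark below are assumptions chosen for illustration; this is the canonical multiplicative-stochasticity form, not the thesis's PSO-DRS variant.

```python
import random

random.seed(1)

def sphere(x):
    # Simple benchmark objective: sum of squares, minimum 0 at the origin.
    return sum(xi * xi for xi in x)

DIM, SWARM, ITERS = 2, 20, 200
W, C1, C2 = 0.72, 1.49, 1.49  # typical "constricted" constants (assumed here)

# Initialise particle positions, velocities and personal bests.
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sphere)[:]

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Canonical update: inertia + cognitive pull + social pull,
            # with the multiplicative stochasticity (r1, r2) the thesis
            # replaces with minimal additive stochasticity in PSO-DRS.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
            if sphere(pbest[i]) < sphere(gbest):
                gbest = pbest[i][:]

best_value = sphere(gbest)
```

On a low-dimensional unimodal benchmark such as this, the swarm collapses quickly onto the optimum, which is why standardized benchmark suites use a range of harder multimodal functions.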

2. Spectral results for perturbed periodic Jacobi operators. Judge, Edmund William, January 2017.
In this text we explore various techniques to embed eigenvalues into the bands of essential spectrum of Hermitian periodic Jacobi operators.

3. Improving automated layout techniques for the production of schematic diagrams. Chivers, Daniel, January 2014.
This thesis explores techniques for the automated production of schematic diagrams, in particular those in the style of metro maps. Metro map style schematics are used across the world, typically to depict public transport networks, and therefore benefit from an innate level of user familiarity not found with most other data visualisation styles. Currently, this style of schematic is used infrequently due to the difficulties involved in creating an effective layout – there are no software tools to aid with the positioning of nodes and other features, resulting in schematics being produced by hand at great expense of time and effort. Automated schematic layout has been an active area of research for the past decade, and part of our work extends an effective current technique – multi-criteria hill climbing. We have implemented additional layout criteria and clustering techniques, as well as performance optimisations to improve the final results. Additionally, we ran a series of layouts whilst varying algorithm parameters in an attempt to identify patterns specific to map characteristics. This layout algorithm has been implemented in a custom-written piece of software running on the Android operating system. The software is targeted at tablet devices, using their touch-sensitive screens with a gesture recognition system to allow users to construct complex schematics using sequences of simple gestures. Following on from this, we present our work on a modified force-directed layout method capable of producing fast, high-quality, angular schematic layouts. Our method produces superior results to the previous octilinear force-directed layout method, and is capable of producing results comparable to many of the much slower current approaches.
Using our force-directed layout method we then implemented a novel mental map preservation technique which aims to preserve node proximity relations during optimisation; we believe this approach provides a number of benefits over the more common method of preserving absolute node positions. Finally, we performed a user study on our method to test the effect of varying levels of mental map preservation on diagram comprehension.
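The multi-criteria hill climbing idea can be sketched as follows: nodes are nudged to neighbouring grid positions and a move is kept only when a weighted cost falls. The toy network, the two criteria (octilinearity and a target edge length) and their weights are assumptions for illustration, not the thesis's criteria set.

```python
import math
import random

random.seed(7)

# A toy network: node -> grid position, plus edges. Illustrative only.
positions = {"a": (0, 0), "b": (3, 1), "c": (5, 5), "d": (1, 4)}
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]

def edge_cost(p, q):
    dx, dy = q[0] - p[0], q[1] - p[1]
    length = math.hypot(dx, dy)
    # Octilinearity criterion: penalise deviation of the edge angle
    # from the nearest multiple of 45 degrees.
    angle = math.atan2(dy, dx)
    oct_pen = abs(angle % (math.pi / 4))
    oct_pen = min(oct_pen, math.pi / 4 - oct_pen)
    # Edge-length criterion: prefer a uniform target length (assumed 2.0).
    len_pen = abs(length - 2.0)
    return 3.0 * oct_pen + 1.0 * len_pen  # assumed criteria weights

def layout_cost(pos):
    return sum(edge_cost(pos[u], pos[v]) for u, v in edges)

initial_cost = layout_cost(positions)
cost = initial_cost
for _ in range(2000):
    n = random.choice(list(positions))
    old = positions[n]
    # Candidate move: one step to a neighbouring grid intersection.
    positions[n] = (old[0] + random.choice([-1, 0, 1]),
                    old[1] + random.choice([-1, 0, 1]))
    new_cost = layout_cost(positions)
    if new_cost < cost:
        cost = new_cost          # accept the improving move
    else:
        positions[n] = old       # reject: hill climbing never worsens

final_cost = cost
```

Real metro-map layout adds many more criteria (label placement, line straightness, topology preservation), which is what makes the multi-criteria weighting and its tuning the hard part.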

4. Detecting salient information using RSVP and the P3: computational and EEG explorations. Alsufyani, Abdulmajeed, January 2015.
This thesis investigates the efficacy of employing the Rapid Serial Visual Presentation (RSVP) technique for stimulus presentation in brain activity-based deception detection tests. One reason for using RSVP is to present stimuli on the fringe of awareness (e.g. 10 per second), making it more difficult for a guilty person to confound the test by use of countermeasures. It is hypothesized that such a rapid presentation rate prevents the vast majority of RSVP stimuli from being perceived at a level sufficient for encoding into working memory, but that salient stimuli will break through into consciousness and be encoded. Such ‘breakthrough’ perceptual events are correlated with a P300 Event Related Potential (ERP) component that can be used as an index of perceiving/processing a salient stimulus (e.g. crime-relevant information). On this basis, a method is proposed for detecting salience based on RSVP and the P300, which will be referred to as the Fringe-P3 method. The thesis then demonstrates how the Fringe-P3 method can be specialized for application to deception detection. Specifically, the proposed method was tested in an identity deception study, in which participants were instructed to lie about (i.e. conceal) their own name. As will be shown, experimental findings demonstrated a very high hit rate in terms of detecting deceivers and a low false alarm rate in misdetecting non-deceivers. Most significantly, a review of these findings confirms that the Fringe-P3 identity detector is resilient against countermeasures. The effectiveness of the Fringe-P3 method in detecting stimuli of lower salience (i.e. famous names) than own-name stimuli was then evaluated. In addition, the question of whether faces can be used in an ERP-based RSVP paradigm to infer recognition of familiar faces was also investigated.
The experimental results showed that the method is effective in distinguishing broadly familiar stimuli as salient, resulting in the generation of a detectable P300 component on a per-individual basis. These findings support the applicability of the proposed method to forensic science (e.g. detecting knowledge of criminal colleagues). Finally, an ERP assessment method is proposed for performing per-individual statistical inferences in deception detection tests. By analogy with functional localizers in fMRI, this method can be viewed as a form of functional profiling. The method was evaluated on EEG data sets obtained by use of the Fringe-P3 technique. Additionally, simulated data were used to explore how the method’s performance varies with parametric manipulation of the signal-to-noise ratio (SNR). As will be demonstrated, experimental findings confirm that the proposed method is effective for detecting the P300, even in ERPs with low SNR.
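The core logic of averaging RSVP epochs and testing a P300 window can be sketched on simulated data: trial averaging cancels noise while the event-related component survives, and a window-mean comparison flags the salient condition. Epoch length, window, amplitudes, noise level and the decision threshold are all invented for this simulation; real per-individual ERP inference requires proper statistics, as the thesis's functional-profiling method develops.

```python
import random

random.seed(42)

N_TRIALS, EPOCH = 60, 100   # trials per condition, samples per epoch (assumed)
P3_WIN = range(40, 70)      # assumed post-stimulus window for the P300

def epoch(has_p300):
    # One simulated trial: Gaussian noise plus, for salient stimuli,
    # a positive deflection inside the P300 window.
    return [random.gauss(0.0, 2.0) + (1.5 if has_p300 and t in P3_WIN else 0.0)
            for t in range(EPOCH)]

def erp(trials):
    # Average over trials: noise shrinks by 1/sqrt(N), the ERP remains.
    return [sum(tr[t] for tr in trials) / len(trials) for t in range(EPOCH)]

salient = erp([epoch(True) for _ in range(N_TRIALS)])
control = erp([epoch(False) for _ in range(N_TRIALS)])

def window_mean(e):
    return sum(e[t] for t in P3_WIN) / len(P3_WIN)

# A crude per-individual decision: flag the probe as salient when its
# window mean exceeds the control mean by a margin (assumed threshold).
detected = window_mean(salient) - window_mean(control) > 0.5
```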

5. Application of dynamic factor modelling to financial contagion. Sakaria, Dhirendra Kumar, January 2016.
Contagion has been described as the spread of idiosyncratic shocks from one market to another in times of financial turmoil. In this work, contagion has been modelled using a global factor to capture the general market movements, and idiosyncratic shocks are used to capture co-movements and volatility spillover between markets. Many previous studies have used pre-specified turmoil and calm periods to understand when contagion occurs. We introduce time-varying parameters which model the volatility spillover from one country to another. This approach avoids the need to pre-specify particular types of periods using external information. Efficient Bayesian inference can be made using the Kalman filter in a forward filtering and backward sampling algorithm. The model is applied to market indices for Greece and Spain to understand the effect of contagion during the European sovereign debt crisis of 2007-2013 (Euro crisis) and examine the volatility spillover between Greece and Spain. Similarly, the volatility spillover from Hong Kong to Singapore during the Asian financial crisis of 1997-1998 has also been studied. After a review of the research work in the financial contagion area and of the definitions used, we specify a model based on the work by Dungey et al. (2005) and include a world factor. Time-varying parameters are introduced, and Bayesian inference and MCMC simulations are used to estimate the parameters. This is followed by work using the Normal Mixture model based on the paper by Kim et al. (1998), where we found that the estimated volatility parameters depended on the value of the ‘mixture offset’ parameter. We propose a method to overcome the problem of setting this parameter's value. In the final chapter, a stochastic volatility model with heavy tails for the innovations in the volatility spillover is used, and results from simulated cases and the market data for the Asian financial crisis and Euro crisis are summarised.
Briefly, the Asian financial crisis periods are identified clearly and agree with results in other published work. For the Euro crisis, the periods of volatility spillover (or financial contagion) are identified too, but for smaller periods of time. We conclude with a summary and outline of further work.
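The forward-filtering half of the forward filtering and backward sampling algorithm can be sketched for a scalar local-level state, standing in for a time-varying spillover parameter. The random-walk state model and noise variances here are simplifying assumptions for illustration, not the thesis's dynamic factor model.

```python
import random

random.seed(3)

# Simulate a random-walk state observed with noise:
#   x_t = x_{t-1} + w_t,  w_t ~ N(0, Q)
#   y_t = x_t + v_t,      v_t ~ N(0, R)
Q, R, T = 0.01, 0.5, 300
x, xs, ys = 0.0, [], []
for _ in range(T):
    x += random.gauss(0.0, Q ** 0.5)
    xs.append(x)
    ys.append(x + random.gauss(0.0, R ** 0.5))

# Kalman forward filter: predict, then update with the Kalman gain.
m, P = 0.0, 1.0            # prior mean and variance (assumed, fairly diffuse)
means, variances = [], []
for y in ys:
    P_pred = P + Q                      # predict step
    K = P_pred / (P_pred + R)           # Kalman gain
    m = m + K * (y - m)                 # update mean with the innovation
    P = (1.0 - K) * P_pred              # update variance
    means.append(m)
    variances.append(P)

# The filtered estimate should track the state far better than raw data.
filter_mse = sum((a - b) ** 2 for a, b in zip(means, xs)) / T
obs_mse = sum((a - b) ** 2 for a, b in zip(ys, xs)) / T
```

Backward sampling then draws the whole state path from its joint posterior by running back through these stored filtering moments, which is what makes the Gibbs updates for the time-varying parameters efficient.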

6. Provenance-aware CXXR. Silles, Christopher Anthony, January 2014.
A provenance-aware computer system is one that records information about the operations it performs on data, to enable it to provide an account of the process that led to a particular item of data. These systems allow users to ask questions of data, such as “What was the sequence of steps involved in its creation?”, “What other items of data were used to create it?”, or “What items of data used it during their creation?”. This work presents a study of how, and the extent to which, the CXXR statistical programming software can be made aware of the provenance of the data on which it operates. CXXR is a variant of the R programming language and environment, which is an open source implementation of S. Interestingly, S became an early pioneer of provenance-aware computing in 1988. Examples of adapting software such as CXXR for provenance-awareness are few and far between, and the idiosyncrasies of an interpreter such as CXXR, and indeed of the R language itself, present interesting challenges to provenance-awareness, such as receiving input from a variety of sources and complex evaluation mechanisms. Herein presented are designs for capturing and querying provenance information in such an environment, along with serialisation facilities to preserve data together with its provenance so that they may be distributed and/or subsequently restored to a CXXR session. Also presented is a method for enabling this serialised provenance information to be interoperable with other provenance-aware software. This work also looks at the movement towards making research reproducible, and considers that provenance-aware systems, and provenance-aware CXXR in particular, are well positioned to further the goal of making computational research reproducible.
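The kinds of query listed above can be sketched with a toy provenance store that records, for each binding, the operation and the parent items it was derived from. The function names and record layout are invented for illustration; they are not CXXR's internals.

```python
# Each binding records the operation and the parent items it was derived from.
provenance = {}

def record(name, value, op, parents=()):
    provenance[name] = {"op": op, "parents": tuple(parents)}
    return value

def ancestors(name):
    # Walk parent links: "what other items of data were used to create it?"
    seen, stack = set(), [name]
    while stack:
        n = stack.pop()
        for p in provenance.get(n, {}).get("parents", ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def descendants(name):
    # The converse query: "what items of data used it during their creation?"
    return {n for n in provenance if name in ancestors(n)}

# A small session in the spirit of an interpreter recording its operations.
raw = record("raw", [3, 1, 2], "input")
clean = record("clean", sorted(raw), "sort", parents=["raw"])
total = record("total", sum(clean), "sum", parents=["clean"])
```

A real implementation must do this capture transparently inside the evaluator, and serialise the graph alongside the data so a restored session can still answer these questions.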

7. Essays on the modelling of quantiles for forecasting and risk estimation. Mitrodima, Evangelia, January 2015.
This thesis examines the use of quantile methods to better estimate the time-varying conditional asset return distribution. The motivation for this is to contribute to improvements in time series forecasting by taking into account some features of financial returns. We first consider a single quantile model with a long memory component in order to estimate the Value at Risk (VaR). We find that the model provides us with improved estimates and forecasts, and has valuable economic interpretation for the firm’s capital allocation. We also present improvements in the economic performance of existing models through the use of past aggregate return information in VaR estimation. Additionally, we attempt to make a contribution by examining some of the empirical properties of quantile models, such as the types of issues that arise in their estimation. A limitation of quantile models of this type is the lack of monotonicity in the estimation of conditional quantile functions. Thus, there is a need for a model that considers the correct quantile ordering. In addition, there is still a need for more accurate forecasts that may be of practical use for various financial applications. We speculate that this can be done by decomposing the conditional distribution in a natural way into its shape and scale dynamics. Motivated by these considerations, we extend the single quantile model to incorporate more than one probability level and the dynamics of the scale. We find that by accounting for the scale, we are able to explain the time-varying patterns between the individual quantiles. Apart from being able to address the monotonicity of quantile functions, this setting offers valuable information for the conditional distribution of returns. We are able to study the dynamics of the scale and shape over time separately and obtain satisfactory VaR forecasts. We deliver estimates for this model in a frequentist and a Bayesian framework.
The latter is able to deliver more robust estimates than the classical approach. Bayesian inference is motivated by the estimation issues that we identify in both the single and the multiple quantile settings. In particular, we find that the Bayesian methodology is useful for addressing the multi-modality of the objective function and estimating the uncertainty of the model parameters.
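The quantile-crossing problem mentioned above can be made concrete with a toy example: quantile paths "estimated" independently at several probability levels may cross, and sorting the estimates at each time point restores the correct ordering. This post-hoc rearrangement only illustrates the problem; it is not the joint multi-quantile model developed in the thesis. All numbers below are invented for the simulation.

```python
import random

random.seed(5)

levels = [0.05, 0.5, 0.95]
T = 50

# Independently "estimated" quantile paths: true levels plus estimation
# noise large enough to cause occasional crossing between adjacent curves.
base = {0.05: -1.6, 0.5: 0.0, 0.95: 1.6}
paths = {a: [base[a] + random.gauss(0, 0.9) for _ in range(T)] for a in levels}

def crossings(paths):
    # Count time points where the quantile curves are out of order.
    return sum(1 for t in range(T)
               if not (paths[0.05][t] <= paths[0.5][t] <= paths[0.95][t]))

before = crossings(paths)

# Rearrangement: sort the estimated quantiles at each time point so the
# conditional quantile function is monotone in the probability level.
for t in range(T):
    vals = sorted(paths[a][t] for a in levels)
    for a, v in zip(levels, vals):
        paths[a][t] = v

after = crossings(paths)
```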

8. The design and implementation of a notional machine for teaching introductory programming. Berry, Michael, January 2015.
Comprehension of both programming and programs is a difficult task for novices to master, with many university courses that feature a programming component demonstrating significant failure and drop-out rates. Many theories exist that attempt to explain why this is the case. One such theory, originally postulated by du Boulay, is that students do not understand the properties of the machine; they do not understand what it is or how they are controlling it by writing code. This idea motivated the development of the notional machine, which exists solely as an abstraction of the physical machine to aid with its understanding and comprehension. This work contributes a design for a new notional machine and a graphical notation for its representation. The notional machine is designed to work with object-oriented languages (in particular Java). It provides several novel contributions over pre-existing models -- while existing similar models are generally constrained to line-by-line operation, the notional machine presented here can scale effectively across many program sizes, from a few objects and lines to many. In addition, it can be used in a variety of formats (in both electronic and unplugged form). It also melds together three traditionally separate diagrams that had to be understood simultaneously (the stack trace, the class diagram and the object heap). Novis, an implemented version of the notional machine, is also presented and evaluated. It is able to create automatic and animated versions of notional machine diagrams, and has been integrated into BlueJ's main interface. Novis can present static notional machine diagrams at selected stages of program execution, or animate ongoing execution in real time. The evaluation of Novis is presented in two parts.
It is first tested alongside a selection of methodically chosen textbook examples to ensure it can visualise a range of useful programs, and it then undergoes usability testing with a group of first year computer science students.
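One way to picture melding the stack trace and object heap into a single model is a combined snapshot structure from which a diagram can be drawn: the frames supply the roots, and following heap references yields exactly the objects the picture needs. The layout and names below are invented for illustration and do not reflect Novis's actual implementation.

```python
# A toy data model for a combined notional-machine diagram: the call stack
# and the object heap (with class information) in one structure.
snapshot = {
    "stack": [
        {"method": "main", "locals": {"list": "obj1"}},
        {"method": "add", "locals": {"this": "obj1", "item": "obj2"}},
    ],
    "heap": {
        "obj1": {"class": "LinkedList", "fields": {"head": "obj2"}},
        "obj2": {"class": "Node", "fields": {"next": None}},
    },
}

def reachable(snapshot):
    # Objects reachable from the stack, following heap references --
    # the subgraph an animated diagram would actually draw.
    seen, stack = set(), []
    for frame in snapshot["stack"]:
        stack.extend(v for v in frame["locals"].values()
                     if v in snapshot["heap"])
    while stack:
        o = stack.pop()
        if o in seen:
            continue
        seen.add(o)
        for ref in snapshot["heap"][o]["fields"].values():
            if ref in snapshot["heap"]:
                stack.append(ref)
    return seen
```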

9. A study of thread-local garbage collection for multi-core systems. Mole, Matthew Robert, January 2015.
With multi-processor systems in widespread use, and programmers increasingly writing programs that exploit multiple processors, scalability of application performance has become a pressing issue. Increasing the number of processors available to an application by some factor does not necessarily boost that application's performance by that factor; more processors can actually harm performance. One cause of poor scalability is memory bandwidth becoming saturated as processors contend with each other for use of the memory bus. Many multi-core systems have a non-uniform memory architecture, and the placement of threads and the data they use is important in tackling this problem. Garbage collection is a memory load- and store-intensive activity, and whilst well-known techniques such as concurrent and parallel garbage collection aim to increase performance on multi-core systems, they do not address the memory bottleneck problem. One garbage collection technique that can address this problem is thread-local heap garbage collection. Smaller, more frequent garbage collection cycles are performed so that intensive memory activity is distributed. This thesis evaluates a novel thread-local heap garbage collector for Java, designed to improve the effectiveness of this thread-independent garbage collection.
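The idea of a thread-local collection can be sketched with a toy heap in which each object is tagged with its allocating thread: a local cycle traces only that thread's roots and reclaims only that thread's unreachable objects, while objects that have "escaped" into another thread's structures are left for a global cycle. Real collectors track escape incrementally with write barriers; this simulation recomputes it naively and is purely illustrative.

```python
# Toy heap: obj id -> {"thread": owning thread, "refs": referenced obj ids}.
heap = {}
roots = {}  # tid -> list of root obj ids for that thread

def alloc(obj, tid, refs=()):
    heap[obj] = {"thread": tid, "refs": list(refs)}

def local_collect(tid):
    # Mark phase restricted to this thread's roots only.
    marked, stack = set(), list(roots.get(tid, []))
    while stack:
        o = stack.pop()
        if o in marked or o not in heap:
            continue
        marked.add(o)
        stack.extend(heap[o]["refs"])
    # Objects referenced from another thread's heap have escaped and must
    # survive a purely local cycle; a global collection handles them later.
    escaped = {r for o, rec in heap.items() if rec["thread"] != tid
               for r in rec["refs"]}
    # Sweep: reclaim only this thread's unmarked, unescaped objects.
    for o in [o for o, rec in heap.items()
              if rec["thread"] == tid and o not in marked and o not in escaped]:
        del heap[o]

alloc("a", 1)
alloc("b", 1, refs=["c"])      # "b" and its reference to "c" belong to thread 1
alloc("c", 1)
alloc("x", 2, refs=["c"])      # thread 2 holds a reference into thread 1's heap
roots[1] = ["a"]               # "b" is no longer reachable from thread 1
roots[2] = ["x"]

local_collect(1)               # reclaims "b"; keeps escaped "c" alive
```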

10. Comparing computational approaches to the analysis of high-frequency trading data using Bayesian methods. Cremaschi, Andrea, January 2017.
Financial prices are usually modelled as continuous, often involving geometric Brownian motion with drift, leverage, and possibly jump components. An alternative modelling approach allows financial observations to take discrete values when they are interpreted as integer multiples of a fixed quantity, the tick size: the monetary value associated with a single change in the asset evolution. These samples are usually collected at very high frequency, often with several trading operations per second. In this context, the observables are modelled in two different ways: on one hand, via the Skellam process, defined as the difference between two independent Poisson processes; on the other, using a stochastic process whose conditional law is that of a mixture of Geometric distributions. The parameters of the two stochastic processes are modelled as functions of a stochastic volatility process, which is in turn described by a discretised Gaussian Ornstein-Uhlenbeck AR(1) process. The work first presents a parametric model for independent and identically distributed data, in order to motivate the algorithmic choices used as a basis for the subsequent chapters. These include adaptive Metropolis-Hastings algorithms and an interweaving strategy. The central chapters of the work are devoted to the illustration of Particle Filtering methods for MCMC posterior computations (PMCMC methods). The discussion starts by presenting the existing Particle Gibbs and Particle Marginal Metropolis-Hastings samplers. Additionally, we propose two extensions to the existing methods. Posterior inference and out-of-sample prediction obtained with the different methodologies are discussed and compared to the methodologies existing in the literature. To allow for more flexibility in the modelling choices, the work continues with a presentation of a semi-parametric version of the original model. Comparative inference obtained via the previously discussed methodologies is presented.
The work concludes with a summary and an account of topics for further research.
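The Skellam view of tick-level prices, with each increment being the difference of two independent Poisson counts, can be sketched directly, so that every price change is an integer multiple of the tick size. The tick size and the two intensities are assumed for illustration, and the stochastic-volatility process driving them in the thesis is omitted (intensities are held constant here).

```python
import math
import random

random.seed(11)

def poisson(lam):
    # Knuth's method: multiply uniforms until the product drops below e^-lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

TICK, T = 0.5, 1000          # tick size and number of increments (assumed)
lam_up, lam_down = 3.0, 3.0  # up/down intensities (constant here, assumed)

# Skellam increments: the difference of two independent Poisson counts,
# so each price change is an integer number of ticks.
increments = [poisson(lam_up) - poisson(lam_down) for _ in range(T)]
prices = [100.0]
for d in increments:
    prices.append(prices[-1] + TICK * d)

mean_inc = sum(increments) / T   # should be near lam_up - lam_down = 0
```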