
Essays on the modelling of quantiles for forecasting and risk estimation

Mitrodima, Evangelia. January 2015.
This thesis examines the use of quantile methods to better estimate the time-varying conditional asset return distribution. The motivation is to contribute to improvements in time series forecasting by taking into account some features of financial returns. We first consider a single quantile model with a long memory component in order to estimate the Value at Risk (VaR). We find that the model provides improved estimates and forecasts, and has a valuable economic interpretation for the firm’s capital allocation. We also present improvements in the economic performance of existing models through the use of past aggregate return information in VaR estimation. Additionally, we examine some of the empirical properties of quantile models, such as the types of issues that arise in their estimation. A limitation of quantile models of this type is the lack of monotonicity in the estimation of conditional quantile functions, so there is a need for a model that enforces the correct quantile ordering. There is also still a need for more accurate forecasts that may be of practical use for various financial applications. We suggest that this can be done by decomposing the conditional distribution in a natural way into its shape and scale dynamics. Motivated by these considerations, we extend the single quantile model to incorporate more than one probability level and the dynamics of the scale. We find that by accounting for the scale, we are able to explain the time-varying patterns between the individual quantiles. Apart from addressing the monotonicity of quantile functions, this setting offers valuable information about the conditional distribution of returns. We are able to study the dynamics of the scale and shape over time separately and obtain satisfactory VaR forecasts. We deliver estimates for this model in both a frequentist and a Bayesian framework; the latter delivers more robust estimates than the classical approach. Bayesian inference is motivated by the estimation issues that we identify in both the single and the multiple quantile settings. In particular, we find that the Bayesian methodology is useful for addressing the multi-modality of the objective function and for estimating the uncertainty of the model parameters.
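The abstract does not specify the model's functional form. As a rough illustration of the kind of dynamic quantile recursion involved, the sketch below fits a CAViaR-style symmetric absolute value model (a standard specification from this literature, not necessarily the one used in the thesis) by minimising the pinball loss; the function names, starting values and data are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def caviar_quantiles(params, returns, tau):
    """Symmetric absolute value CAViaR recursion:
    q_t = b0 + b1 * q_{t-1} + b2 * |r_{t-1}|."""
    b0, b1, b2 = params
    q = np.empty_like(returns)
    q[0] = np.quantile(returns[:50], tau)   # initialise from an early subsample
    for t in range(1, len(returns)):
        q[t] = b0 + b1 * q[t - 1] + b2 * abs(returns[t - 1])
    return q

def pinball_loss(params, returns, tau):
    """Tick (pinball) loss, minimised in expectation by the true tau-quantile."""
    u = returns - caviar_quantiles(params, returns, tau)
    return np.mean(u * (tau - (u < 0)))

rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=1000) * 0.01          # toy return series
res = minimize(pinball_loss, x0=[-0.01, 0.8, -0.1],
               args=(r, 0.05), method="Nelder-Mead")
q_path = caviar_quantiles(res.x, r, 0.05)           # conditional 5% quantile
# Under the usual sign convention, the 5% VaR path is -q_path.
```

A long memory component of the kind the abstract mentions could, for example, add slowly decaying weights on many past lags; the exact specification is the thesis's own.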

The design and implementation of a notional machine for teaching introductory programming

Berry, Michael. January 2015.
Comprehension of both programming and programs is a difficult task for novices to master, and many university courses that feature a programming component demonstrate significant failure and drop-out rates. Many theories attempt to explain why this is the case. One such theory, originally postulated by du Boulay, is that students do not understand the properties of the machine: they do not understand what it is, or how their code controls it. This idea led to the development of the notional machine, which exists solely as an abstraction of the physical machine to aid its understanding and comprehension. This work contributes a design for a new notional machine and a graphical notation for its representation. The notional machine is designed to work with object-oriented languages (in particular Java). It provides several novel contributions over pre-existing models: while similar existing models are generally constrained to line-by-line operation, the notional machine presented here scales effectively across program sizes, from a few objects and lines to many. In addition, it can be used in a variety of formats (both electronic and unplugged). It also melds together three traditionally separate diagrams that previously had to be understood simultaneously: the stack trace, the class diagram and the object heap. Novis, an implemented version of the notional machine, is also presented and evaluated. It creates automatic, animated versions of notional machine diagrams, and has been integrated into BlueJ's main interface. Novis can present static notional machine diagrams at selected stages of program execution, or animate ongoing execution in real time. The evaluation of Novis is presented in two parts. It is first tested against a selection of methodically chosen textbook examples to ensure it can visualise a range of useful programs, and it then undergoes usability testing with a group of first-year computer science students.

A study of thread-local garbage collection for multi-core systems

Mole, Matthew Robert. January 2015.
With multi-processor systems in widespread use, and programmers increasingly writing programs that exploit multiple processors, scalability of application performance is an increasingly important issue. Increasing the number of processors available to an application by some factor does not necessarily boost that application's performance by the same factor; more processors can actually harm performance. One cause of poor scalability is memory bandwidth becoming saturated as processors contend with each other for use of the memory bus. Many multi-core systems have a non-uniform memory architecture, and the placement of threads and the data they use is important in tackling this problem. Garbage collection is a memory load and store intensive activity, and whilst well-known techniques such as concurrent and parallel garbage collection aim to increase performance on multi-core systems, they do not address the memory bottleneck problem. One garbage collection technique that can address this problem is thread-local heap garbage collection: smaller, more frequent collection cycles are performed so that intensive memory activity is distributed. This thesis evaluates a novel thread-local heap garbage collector for Java that is designed to improve the effectiveness of this thread-independent garbage collection.

Comparing computational approaches to the analysis of high-frequency trading data using Bayesian methods

Cremaschi, Andrea. January 2017.
Financial prices are usually modelled as continuous processes, often involving geometric Brownian motion with drift, leverage and possibly jump components. An alternative modelling approach allows financial observations to take discrete values, interpreted as integer multiples of a fixed quantity, the tick size: the monetary value associated with a single change in the asset price. Such samples are usually collected at very high frequency, with many trading operations per second. In this context, the observables are modelled in two different ways: on one hand, via the Skellam process, defined as the difference between two independent Poisson processes; on the other, via a stochastic process whose conditional law is that of a mixture of Geometric distributions. The parameters of the two stochastic processes are modelled as functions of a stochastic volatility process, which is in turn described by a discretised Gaussian Ornstein-Uhlenbeck AR(1) process. The work first presents a parametric model for independent and identically distributed data, in order to motivate the algorithmic choices used as a basis for the subsequent chapters; these include adaptive Metropolis-Hastings algorithms and an interweaving strategy. The central chapters of the work are devoted to the illustration of particle filtering methods for MCMC posterior computation (PMCMC methods). The discussion starts by presenting the existing Particle Gibbs and Particle Marginal Metropolis-Hastings samplers. Additionally, we propose two extensions to the existing methods. Posterior inference and out-of-sample prediction obtained with the different methodologies are discussed and compared with the methodologies existing in the literature. To allow for more flexibility in the modelling choices, the work continues with a presentation of a semi-parametric version of the original model, and comparative inference obtained via the previously discussed methodologies is presented. The work concludes with a summary and an account of topics for further research.
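As a minimal illustration of the first observation model described above (the Skellam process driven by stochastic volatility), the following sketch simulates tick-level price moves; the AR(1) parameter values and the tick size are illustrative assumptions, not the thesis's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
T, mu, phi, sigma = 500, -1.0, 0.97, 0.15   # illustrative SV parameters

# Discretised Gaussian OU / AR(1) log-intensity (stochastic volatility) process
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma * rng.standard_normal()
lam = np.exp(h)

# Skellam increments: difference of two independent Poisson counts
up, down = rng.poisson(lam), rng.poisson(lam)
ticks = up - down                          # signed price moves in tick units
price = 100.0 + 0.01 * np.cumsum(ticks)    # tick size assumed to be 0.01
```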

On the theory of dissipative extensions

Fischbacher, Christoph Stefan. January 2017.
We consider the problem of constructing dissipative extensions of given dissipative operators. Firstly, we discuss the dissipative extensions of symmetric operators and give a sufficient condition for when these extensions are completely non-selfadjoint. Moreover, given a closed and densely defined operator A, we construct its closed extensions, which we parametrize by suitable subspaces of D(A^*). Then, we consider operators A and \widetilde{A} that form a dual pair, which means that A\subset \widetilde{A}^*, respectively \widetilde{A}\subset A^*. Assuming that A and (-\widetilde{A}) are dissipative, we present a method of determining the proper dissipative extensions \widehat{A} of this dual pair, i.e. we determine all dissipative operators \widehat{A} such that A\subset\widehat{A}\subset\widetilde{A}^*, provided that D(A)\cap D(\widetilde{A}) is dense in H. We discuss applications to symmetric operators, symmetric operators perturbed by a relatively bounded dissipative operator, and more singular differential operators. Also, we investigate the stability of the numerical ranges of the various proper dissipative extensions of the dual pair (A,\widetilde{A}). Assuming that zero is in the field of regularity of a given dissipative operator A, we then construct its Krein-von Neumann extension A_K, which we show to be maximally dissipative. If there exists a dissipative operator (-\widetilde{A}) such that A and \widetilde{A} form a dual pair, we discuss when A_K is a proper extension of the dual pair (A,\widetilde{A}), and if this is not the case, we propose a construction of a dual pair (A_0,\widetilde{A}_0), where A_0\subset A and \widetilde{A}_0\subset\widetilde{A}, such that A_K is a proper extension of (A_0,\widetilde{A}_0). After this, we consider dual pairs (A, \widetilde{A}) of sectorial operators and construct proper sectorial extensions that satisfy certain conditions on their numerical range. We apply this result to positive symmetric operators, where we recover the theory of non-negative selfadjoint and sectorial extensions of positive symmetric operators as described by Birman, Krein, Vishik and Grubb. Moreover, for the case of proper extensions of a dual pair (A_0,\widetilde{A}_0) of sectorial operators, we develop a theory along the lines of the Birman-Krein-Vishik theory and define an order on the imaginary parts of the various proper dissipative extensions of (A,\widetilde{A}). We finish with a discussion of non-proper extensions: given a dual pair (A,\widetilde{A}) that satisfies certain assumptions, we construct all dissipative extensions of A that have domain contained in D(\widetilde{A}^*). Applying this result, we recover Crandall and Phillips's description of all dissipative extensions of a symmetric operator perturbed by a bounded dissipative operator. Lastly, given a dissipative operator A whose imaginary part induces a strictly positive closable quadratic form, we find a criterion for an arbitrary extension of A to be dissipative.
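For orientation, the two notions the abstract relies on can be stated compactly; these are the standard definitions (sign conventions for dissipativity vary between authors, and the thesis's own conventions may differ in detail):

\[
A \text{ is dissipative} \iff \operatorname{Im}\langle Au, u \rangle \ge 0 \ \text{ for all } u \in D(A),
\]

and \widehat{A} is a proper extension of the dual pair (A,\widetilde{A}) when A\subset\widehat{A}\subset\widetilde{A}^*.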

On the structure of Foulkes modules for the symmetric group

de Boeck, Melanie. January 2015.
This thesis concerns the structure of Foulkes modules for the symmetric group. We study `ordinary' Foulkes modules $H^{(m^n)}$, where $m$ and $n$ are natural numbers, which are permutation modules arising from the action on cosets of $\mathfrak{S}_m\wr\mathfrak{S}_n\leq \mathfrak{S}_{mn}$. We also study a generalisation of these modules $H^{(m^n)}_\nu$, labelled by a partition $\nu$ of $n$, which we call generalised Foulkes modules. Working over a field of characteristic zero, we investigate the module structure using semistandard homomorphisms. We identify several new relationships between irreducible constituents of $H^{(m^n)}$ and $H^{(m^{n+q})}$, where $q$ is a natural number, and also apply the theory to twisted Foulkes modules, which are labelled by $\nu=(1^n)$, obtaining analogous results. We make extensive use of character-theoretic techniques to study $\varphi^{(m^n)}_\nu$, the ordinary character afforded by the Foulkes module $H^{(m^n)}_\nu$, and we draw conclusions about near-minimal constituents of $\varphi^{(m^n)}_{(n)}$ in the case where $m$ is even. Further, we prove a recursive formula for computing character multiplicities of any generalised Foulkes character $\varphi^{(m^n)}_\nu$, and we decompose completely the character $\varphi^{(2^n)}_\nu$ in the cases where $\nu$ has either two rows or two columns, or is a hook partition. Finally, we examine the structure of twisted Foulkes modules in the modular setting. In particular, we answer questions about the structure of $H^{(2^n)}_{(1^n)}$ over fields of prime characteristic.
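For orientation, the permutation module referred to above can equivalently be written as an induced module, a standard identity:

\[
H^{(m^n)} \;\cong\; \operatorname{Ind}_{\mathfrak{S}_m \wr \mathfrak{S}_n}^{\mathfrak{S}_{mn}} \mathbf{1},
\]

so that, over a field of characteristic zero, its decomposition is governed by the character inner products $\langle \varphi^{(m^n)}_{(n)}, \chi^\lambda \rangle$ for partitions $\lambda$ of $mn$.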

Painlevé equations and orthogonal polynomials

Smith, James. January 2016.
In this thesis we classify all of the special function solutions to the Painlevé equations and all their associated equations produced using their Hamiltonian structures. We then use these special solutions to highlight the connection between the Painlevé equations and the coefficients of some three-term recurrence relations for some specific orthogonal polynomials. The key idea of this newly developed method is the recognition of certain orthogonal polynomial moments as a particular special function. This means we can compare the matrix of moments with the Wronskian solutions for which the Painlevé equations are famous. Once this connection is found, we can simply read off the all-important recurrence coefficients in closed form. In certain cases we can improve upon this further, as some of the weights allow a simplification of the recurrence coefficients to polynomials, and with it the new sequences of orthogonal polynomials are simplified too.
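For reference, the three-term recurrence in question is the standard one satisfied by any sequence of monic orthogonal polynomials:

\[
p_{n+1}(x) = (x - \alpha_n)\,p_n(x) - \beta_n\,p_{n-1}(x), \qquad p_{-1}(x) = 0, \quad p_0(x) = 1,
\]

where the recurrence coefficients $\alpha_n$ and $\beta_n$ are determined by the moments of the orthogonality weight; it is these coefficients that the method reads off in closed form.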

Development of statistical methods for monitoring insect abundance

Dennis, Emily Beth. January 2015.
During a time of habitat loss, climate change and loss of biodiversity, efficient analytical tools are vital for population monitoring. This thesis concerns the modelling of butterflies, whose populations are undergoing various changes in abundance, range, phenology and voltinism. In particular, three-quarters of UK butterfly species have shown declines in their distribution, abundance, or both over a ten-year period. As the most comprehensively monitored insect taxon, known to respond rapidly and sensitively to change, butterflies are particularly valuable, but devising methods that can be fitted to large data sets is challenging, and such methods can be computationally intensive. We use occupancy models to formulate occupancy maps and novel regional indices, which will allow for improved reporting of changes in butterfly distributions. The remainder of the thesis focuses on models for count data. We show that the popular N-mixture model can sometimes produce infinite estimates of abundance, and describe the equivalence of multivariate Poisson and negative-binomial models. We then present a variety of approaches for modelling butterfly abundance, where the complicating features are the seasonal nature of the counts and variation among species. A generalised abundance index is very efficient compared to the generalised additive models currently used for annual reporting, and new parametric descriptions of seasonal variation produce novel and meaningful parameters relating to phenology and survival. We also develop dynamic models which explicitly model dependence between broods and years. These new models will improve our understanding of the complex processes and drivers underlying changes in butterfly populations.
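The abstract does not give the form of the generalised abundance index; as an indicative sketch of the kind of seasonal count model involved, one common formulation treats the expected count at site i in week t as N_i f(t) for a normalised flight curve f. The toy simulation below uses a Gaussian flight curve; all names and parameter values are illustrative.

```python
import numpy as np

def gaussian_flight_curve(weeks, peak, spread):
    """Normalised within-season emergence curve f(t) with sum_t f(t) = 1."""
    f = np.exp(-0.5 * ((weeks - peak) / spread) ** 2)
    return f / f.sum()

weeks = np.arange(1, 27)                       # 26 recording weeks
f = gaussian_flight_curve(weeks, peak=14.0, spread=3.0)
N = np.array([40.0, 120.0, 15.0])              # illustrative site abundances

mu = np.outer(N, f)                            # E[count at site i, week t]
rng = np.random.default_rng(2)
counts = rng.poisson(mu)                       # simulated weekly counts

# Because f sums to one, each site's season total estimates its abundance N_i.
N_hat = counts.sum(axis=1)
```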

A bio-inspired cache management policy for cloud computing environments using the artificial bee colony algorithm

Idachaba, Unekwu Solomon. January 2015.
Caching has become an important technology in the development of cloud computing-based high-performance web services. Caches reduce the request-response latency experienced by users and reduce the workload on backend databases. Caches need a high cache-hit rate to be fit for purpose, and this rate depends on the cache management policy used. Existing cache management policies do not prevent cache pollution and cache monopoly, and this failure impacts negatively on cache-hit rates. This work presents a Bio-inspired Community-based Caching (BCC) approach that addresses these two problems by drawing intelligence from users' access behaviour, using the Quantity and Quality Aware Artificial Bee Colony (Q2-ABC) clustering algorithm to achieve high cache-hit rates. Q2-ABC, also presented in this work, is a redesigned Artificial Bee Colony (ABC) algorithm that improves the quality of the clusters produced by addressing three problems inherent in ABC: repetition in metric-space searches, probability-based effort distribution, and the limit of abandonment. To evaluate the performance of BCC, two sets of experiments were performed. In the first, the quality of clusters identified by Q2-ABC was between 15% and 63% better than those identified by ABC. This performance comes at a cost: additional storage (a maximum of 300 bytes in this experiment) to hold indexes of the searched metric space. In the second set of experiments, the cache-hit rate achieved by BCC was between 0.7% and 55% better than that of the other policies across most of the test data used. The cost associated with BCC's performance includes an additional memory requirement (a total of 1.7 MB in this experiment) for storing generated intelligence, and processor-cycle overhead for generating that intelligence. The implications of these results are that better-quality clusters are produced by avoiding repeated searches within a metric space, and that high cache-hit rates can be achieved by managing caches intelligently, as an alternative to expanding them as is conventional for cloud computing-based services.
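Q2-ABC itself is not reproduced here; for orientation, the sketch below implements the textbook ABC search loop (employed, onlooker and scout phases) applied to clustering, i.e. the baseline algorithm that Q2-ABC redesigns. All parameter values and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def sse(centroids, data):
    """Sum of squared distances from each point to its nearest centroid."""
    d = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

def abc_cluster(data, k=3, n_bees=20, limit=10, iters=100):
    """Textbook ABC over flattened centroid vectors (lower SSE is better)."""
    dim = k * data.shape[1]
    lo, hi = data.min(), data.max()
    food = rng.uniform(lo, hi, size=(n_bees, dim))    # candidate solutions
    fit = np.array([sse(f.reshape(k, -1), data) for f in food])
    trials = np.zeros(n_bees, dtype=int)

    def local_search(i):
        # Perturb one coordinate towards a random partner; keep if better.
        j, partner = rng.integers(dim), rng.integers(n_bees)
        cand = food[i].copy()
        cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[partner, j])
        c_fit = sse(cand.reshape(k, -1), data)
        if c_fit < fit[i]:
            food[i], fit[i], trials[i] = cand, c_fit, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_bees):                       # employed bee phase
            local_search(i)
        p = 1.0 / (1.0 + fit)                         # onlooker phase:
        p /= p.sum()                                  # fitness-proportional
        for i in rng.choice(n_bees, size=n_bees, p=p):
            local_search(i)
        for i in np.where(trials > limit)[0]:         # scout phase: abandon
            food[i] = rng.uniform(lo, hi, size=dim)   # stale food sources
            fit[i] = sse(food[i].reshape(k, -1), data)
            trials[i] = 0

    best = int(fit.argmin())
    return food[best].reshape(k, -1), fit[best]

data = rng.normal(size=(150, 2)) + rng.choice([-3.0, 0.0, 3.0], size=(150, 1))
centroids, best_sse = abc_cluster(data)
```

The "limit of abandonment" the abstract refers to corresponds to the `limit` parameter above: a food source that fails to improve for more than `limit` trials is abandoned by a scout.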

Inference with time-varying parameter models using Bayesian shrinkage

Wang, Su. January 2016.
In macroeconomics, predicting future realisations of economic variables is a central issue for policymakers in central banks as well as for investors. Time series data sets now commonly contain a large number of economic variables, and selecting the important variables is essential for achieving quality forecasts. Our research interest therefore lies in the variable selection problem within time series analysis. One of the key factors determining the quality of a forecast is the model used for inference. The Time-Varying Parameter (TVP) model is an important means of understanding the effect of predictor variables on a given response, as it allows these effects to vary with time. However, the effects can be difficult to estimate and interpret if the number of predictors is large. To perform variable selection in Bayesian inference, we apply continuous shrinkage priors to the TVP model in order to select the important variables over time. In particular, we are interested in three shrinkage priors: the Bayesian Lasso, Normal-Gamma and Dirichlet-Laplace. These continuous shrinkage priors have Bayesian hierarchical representations as scale mixtures of Normals, which encourage shrinkage under the assumption of sparsity. Markov chain Monte Carlo (MCMC) algorithms are used to estimate the TVP models, as the model can be expressed in state-space form. This is particularly useful when estimating the time-varying regression coefficients using the Kalman filter forward-filtering and backward-sampling steps. In addition, the dynamic variance components are sampled by fitting a stochastic volatility model within the MCMC framework. To further improve the estimation of the time-varying parameters, namely the time-varying regression coefficients and the stochastic volatilities, we then move to the Particle Gibbs (PG) and Particle Gibbs with Ancestor Sampling (PGAS) algorithms, as both allow joint updates of the two time-varying parameters using conditional particle filters. However, when the time-varying regression coefficients and the stochastic volatilities are highly correlated, both PG and PGAS produce poor estimates of these parameters. We make an improvement within the PG framework by marginalising over the time-varying regression coefficients when updating the stochastic volatilities. The marginalised PG updates both parameters in two steps: first the stochastic volatilities are updated using PG with the Kalman filter forward step, and then the time-varying regression coefficients are updated using the Kalman filter backward step. To compare shrinkage strength among the priors for the TVP model, the aforementioned sampling methods (MCMC, marginalised PG and PGAS) are applied to equity premium data. We find that both the Normal-Gamma and Dirichlet-Laplace priors shrink more strongly, selecting fewer variables than the Bayesian Lasso. The posterior estimates obtained by marginalised PGAS tend to mix better and converge to stationarity faster than those from MCMC and marginalised PG. Finally, we compare the forecasting power of the TVP model with stochastic volatility and the TVP model with constant variance under all shrinkage priors. Our findings suggest that the TVP model with constant variance outperforms the TVP model with stochastic volatility in forecasting under all shrinkage prior set-ups. Both Normal-Gamma and Dirichlet-Laplace produce smaller values of the root-mean-square error and log predictive score when making one-step-ahead out-of-sample predictions.
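As an indication of the state-space machinery described above, the sketch below runs the forward Kalman filter for a TVP regression with random-walk coefficients, assuming known, constant variances for simplicity (the backward sampling step and the stochastic volatility layer are omitted; all names and values are illustrative).

```python
import numpy as np

def kalman_filter_tvp(y, X, sigma2_eps, Q):
    """Forward pass for y_t = x_t' beta_t + eps_t,  eps_t ~ N(0, sigma2_eps),
    with random-walk states beta_t = beta_{t-1} + eta_t,  eta_t ~ N(0, Q)."""
    T, p = X.shape
    beta = np.zeros(p)               # filtered mean E[beta_t | y_{1:t}]
    P = 10.0 * np.eye(p)             # filtered covariance (vague prior)
    means = []
    for t in range(T):
        P_pred = P + Q               # predict step (random walk)
        x = X[t]
        S = x @ P_pred @ x + sigma2_eps        # one-step forecast variance
        K = P_pred @ x / S                     # Kalman gain
        beta = beta + K * (y[t] - x @ beta)    # update with forecast error
        P = P_pred - np.outer(K, x) @ P_pred
        means.append(beta.copy())
    return np.array(means)

rng = np.random.default_rng(4)
T, p = 200, 3
X = rng.standard_normal((T, p))
true_beta = np.cumsum(0.05 * rng.standard_normal((T, p)), axis=0)
y = (X * true_beta).sum(axis=1) + 0.5 * rng.standard_normal(T)
filtered = kalman_filter_tvp(y, X, sigma2_eps=0.25, Q=0.05**2 * np.eye(p))
```

In the samplers discussed above, a backward sampling pass through these filtered moments would draw the coefficient path, and the variance components would be sampled in separate steps.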
