11

Stochastic dynamics of financial markets

Zhitlukhin, Mikhail Valentinovich January 2014 (has links)
This thesis studies stochastic models of financial markets related to problems of asset pricing and hedging, optimal portfolio management and statistical changepoint detection in trends of asset prices. Chapter 1 develops a general model of a system of interconnected stochastic markets associated with a directed acyclic graph. The main result of the chapter provides sufficient conditions for the hedgeability of contracts in the model. These conditions are expressed in terms of consistent price systems, which generalise the notion of equivalent martingale measures. Using the general results obtained, a particular model of an asset market with transaction costs and portfolio constraints is studied. In the second chapter the problem of multi-period utility maximisation in the general market model is considered. The aim of the chapter is to establish the existence of systems of supporting prices, which play the role of Lagrange multipliers and allow a multi-period constrained utility maximisation problem to be decomposed into a family of single-period, unconstrained problems. Their existence is proved under conditions similar to those of Chapter 1. The last chapter is devoted to applications of statistical sequential methods for detecting trend changes in asset prices. A model where prices are driven by a geometric Gaussian random walk with changing mean and variance is proposed, and the problem of choosing the optimal time to sell an asset is studied. The main theorem of the chapter describes the structure of the optimal selling moments in terms of the Shiryaev–Roberts statistic and the posterior probability process.
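As a rough illustration of the detection machinery named in this abstract (not code from the thesis), the sketch below runs the Shiryaev–Roberts recursion on simulated log-returns with known pre- and post-change drift; the drift values, noise level and alarm threshold are all assumptions chosen for the example.

```python
import numpy as np

def shiryaev_roberts(x, mu0, mu1, sigma, threshold):
    """Run the Shiryaev-Roberts recursion R_n = (1 + R_{n-1}) * L_n on a
    sequence of returns x, where L_n is the likelihood ratio of x[n] under
    the post-change law N(mu1, sigma^2) vs the pre-change law N(mu0, sigma^2).
    Returns the full statistic path and the first alarm index (or None)."""
    r, path, alarm = 0.0, [], None
    for n, xn in enumerate(x):
        lr = np.exp((mu1 - mu0) / sigma**2 * (xn - (mu0 + mu1) / 2))
        r = (1.0 + r) * lr
        path.append(r)
        if alarm is None and r >= threshold:
            alarm = n
    return np.array(path), alarm

# Simulated log-returns: drift switches from +0.05% to -0.2% at t = 150.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0005, 0.01, 150), rng.normal(-0.002, 0.01, 100)])
path, alarm = shiryaev_roberts(x, mu0=0.0005, mu1=-0.002, sigma=0.01, threshold=50.0)
print("first alarm at index:", alarm)
```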
12

Decision making using Thompson Sampling

Mellor, Joseph Charles January 2014 (has links)
The ability to make decisions is crucial for many autonomous systems. In many scenarios the consequence of a decision is unknown and often stochastic; the same decision may lead to a different outcome every time it is taken. An agent that can learn to make decisions based purely on its past experience needs less tuning and is likely to be more robust. An agent must often balance between learning the payoff of actions by exploring, and exploiting the knowledge it currently has. The multi-armed bandit problem exhibits such an exploration-exploitation dilemma. Thompson Sampling is a strategy for this problem, first proposed in 1933. In the last several years there has been renewed interest in it, with the emergence of strong empirical and theoretical justification for its use. This thesis seeks to take advantage of the benefits of Thompson Sampling while applying it to other decision-making models, and proposes different algorithms for these scenarios. Firstly we explore a switching multi-armed bandit problem: in real applications the most appropriate decision to take often changes over time, and we show that an agent that assumes switching is often robust to many types of changing environment. Secondly we consider the best arm identification problem. Unlike the multi-armed bandit problem, where an agent wants to increase reward over the entire period of decision making, best arm identification is concerned with maximising the reward gained by a single final decision. This thesis argues that both problems can be tackled effectively using Thompson Sampling based approaches and provides empirical evidence to support this claim.
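For concreteness, here is a minimal Thompson Sampling loop for the Bernoulli bandit, the textbook setting behind the strategy described above. The arm probabilities and horizon are made up for the demonstration; the thesis's switching and best-arm variants build on this basic scheme.

```python
import numpy as np

def thompson_sampling(true_probs, horizon, seed=0):
    """Bernoulli bandit with independent Beta(1,1) priors on each arm.
    At each round, sample a mean from each posterior, play the argmax,
    and update the chosen arm's Beta posterior with the observed reward."""
    rng = np.random.default_rng(seed)
    k = len(true_probs)
    alpha, beta = np.ones(k), np.ones(k)   # posterior Beta parameters
    total = 0
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)      # one posterior draw per arm
        arm = int(np.argmax(theta))
        reward = rng.random() < true_probs[arm]
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total += reward
    return total, alpha, beta

total, a, b = thompson_sampling([0.3, 0.5, 0.7], horizon=5000)
print("total reward:", total, "posterior means:", (a / (a + b)).round(3))
```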
13

Bayesian Hierarchical Methods and the Use of Ecological Thresholds and Changepoints for Habitat Selection Models

Pooler, Penelope S. 03 January 2006 (has links)
Modeling the complex relationships between habitat characteristics and a species' habitat preferences poses many difficult problems for ecological researchers. These problems are complicated further when information is collected over a range of time or space. Additionally, the variety of factors affecting these choices is difficult to understand and even more difficult to collect accurate information about. In light of these concerns, we evaluate the performance of current standard habitat preference models that are based on Bayesian methods and then present some extensions and supplements to those methods that prove to be very useful. More specifically, we demonstrate the value of extending the standard Bayesian hierarchical model using finite mixture model methods. Additionally, we demonstrate that an extension of the Bayesian hierarchical changepoint model that allows for estimating multiple changepoints simultaneously can be very informative when applied to data about multiple habitat locations or species. These models allow the researcher to compare sites or species with respect to a very specific ecological question and consequently provide definitive answers that are often not available with more commonly used models containing many explanatory factors. Throughout our work we use a complex data set containing information about horseshoe crab spawning habitat preferences in the Delaware Bay over a five-year period. These data epitomize some of the difficult issues inherent to studying habitat preferences: they are collected over time at many sites, have missing observations, and include explanatory variables that, at best, only provide surrogate information for what researchers feel is important in explaining spawning preferences throughout the bay. We also examined a smaller data set of freshwater mussel habitat selection preferences in relation to bridge construction on the Kennerdell River in western Pennsylvania. Together, these two data sets provided us with insight into developing and refining the methods we present. They also help illustrate the strengths and weaknesses of the methods we discuss by assessing their performance in real situations where data are inevitably complex and relationships are difficult to discern.
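To make the Bayesian changepoint idea concrete, the sketch below computes the exact posterior over a single changepoint location in a Gaussian mean-change model with a conjugate prior on the segment means, integrating the means out analytically. It is a deliberately minimal toy (known variance, one changepoint, flat prior over locations), not the thesis's hierarchical multi-site, multiple-changepoint model.

```python
import numpy as np
from scipy.special import logsumexp

def segment_logml(y, sigma, s0):
    """Log marginal likelihood of one segment y_i ~ N(mu, sigma^2) with a
    conjugate prior mu ~ N(0, s0^2), integrating mu out analytically."""
    m, s, ss = len(y), y.sum(), (y**2).sum()
    return (-0.5 * m * np.log(2 * np.pi * sigma**2)
            - 0.5 * np.log1p(m * s0**2 / sigma**2)
            - 0.5 / sigma**2 * (ss - s0**2 * s**2 / (sigma**2 + m * s0**2)))

def changepoint_posterior(y, sigma=1.0, s0=10.0):
    """Posterior over the single changepoint location tau (uniform prior):
    data before tau and from tau onwards get independent segment means."""
    n = len(y)
    logpost = np.array([segment_logml(y[:t], sigma, s0) +
                        segment_logml(y[t:], sigma, s0) for t in range(1, n)])
    return np.exp(logpost - logsumexp(logpost))

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(1.5, 1, 40)])
post = changepoint_posterior(y)
print("posterior mode at tau =", 1 + int(np.argmax(post)))   # true change at 60
```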
14

Bayesian Gaussian processes for sequential prediction, optimisation and quadrature

Osborne, Michael A. January 2010 (has links)
We develop a family of Bayesian algorithms built around Gaussian processes for various problems posed by sensor networks. We firstly introduce an iterative Gaussian process for multi-sensor inference problems, and show how our algorithm is able to cope with data that may be noisy, missing, delayed and/or correlated. Our algorithm can also effectively manage data that features changepoints, such as sensor faults. Extensions to our algorithm allow us to tackle some of the decision problems faced in sensor networks, including observation scheduling. Along these lines, we also propose a general method of global optimisation, Gaussian process global optimisation (GPGO), and demonstrate how it may be used for sensor placement. Our algorithms operate within a complete Bayesian probabilistic framework. As such, we show how the hyperparameters of our system can be marginalised by use of Bayesian quadrature, a principled method of approximate integration. Similar techniques also allow us to produce full posterior distributions for any hyperparameters of interest, such as the location of changepoints. We frame the selection of the positions of the hyperparameter samples required by Bayesian quadrature as a decision problem, with the aim of minimising the uncertainty we possess about the values of the integrals we are approximating. Taking this approach, we have developed sampling for Bayesian quadrature (SBQ), a principled competitor to Monte Carlo methods. We conclude by testing our proposals on real weather sensor networks. We further benchmark GPGO on a wide range of canonical test problems, over which it achieves a significant improvement on its competitors. Finally, the efficacy of SBQ is demonstrated in the context of both prediction and optimisation.
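As background for the building block this abstract relies on, the sketch below performs exact Gaussian process regression with a squared-exponential kernel via a Cholesky factorisation, returning the posterior mean and variance at test inputs. Hyperparameters are fixed by hand here; the thesis's contribution includes marginalising them with Bayesian quadrature, which this toy omits.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance k(a, b) = sf^2 exp(-|a-b|^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell)**2)

def gp_posterior(x, y, xs, noise=0.1, ell=1.0, sf=1.0):
    """Exact GP regression: posterior mean and (latent) variance at test
    inputs xs given noisy observations y at inputs x (zero prior mean)."""
    K = rbf(x, x, ell, sf) + noise**2 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(x, xs, ell, sf)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = sf**2 - np.sum(v**2, axis=0)   # prior variance minus explained part
    return mean, var

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 30))
y = np.sin(x) + rng.normal(0, 0.1, x.size)
xs = np.linspace(0, 10, 5)
mean, var = gp_posterior(x, y, xs)
print("posterior mean:", np.round(mean, 2), "posterior sd:", np.round(np.sqrt(var), 2))
```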
15

Image Analysis Applications of the Maximum Mean Discrepancy Distance Measure

Diu, Michael January 2013 (has links)
The need to quantify distance between two groups of objects is prevalent throughout the signal processing world. The difference of group means computed using the Euclidean, or L2, distance is one of the predominant distance measures used to compare feature vectors and groups of vectors, but many problems arise with it when data dimensionality is high. Maximum mean discrepancy (MMD) is a recent unsupervised kernel-based pattern recognition method which may improve differentiation between two distinct populations over many commonly used methods such as the difference of means, when paired with the proper feature representations and kernels. MMD-based distance computation combines powerful concepts from the machine learning literature, such as similarity measures that leverage the data distribution, and kernel methods. Due to this heritage, we posit that dissimilarity-based classification and changepoint detection using MMD can lead to enhanced separation between different populations. To test this hypothesis, we conduct studies comparing MMD and the difference of means in two subareas of image analysis and understanding: first, to detect scene changes in video in an unsupervised manner, and second, in the biomedical imaging field, using clinical ultrasound to assess tumor response to treatment. We leverage effective computer vision data descriptors, such as the bag-of-visual-words and sparse combinations of SIFT descriptors, and choose from an assessment of several similarity kernels (e.g. histogram intersection, radial basis function) in order to engineer useful systems using MMD. Promising improvements over the difference of means, measured primarily using precision/recall for scene change detection and k-nearest-neighbour classification accuracy for tumor response assessment, are obtained in both applications.
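The quantity at the heart of this abstract has a simple closed-form estimator. The sketch below computes the standard unbiased estimate of squared MMD between two samples under an RBF kernel; the kernel bandwidth and the toy Gaussian data are assumptions for the demonstration, not choices made in the thesis.

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimate of squared maximum mean discrepancy between samples
    X (m x d) and Y (n x d) under the RBF kernel k(a,b) = exp(-gamma|a-b|^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-gamma * d2)
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    # Drop diagonal terms for the unbiased within-sample averages.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

rng = np.random.default_rng(3)
X = rng.normal(0, 1, (200, 5))
Y = rng.normal(0.5, 1, (200, 5))
print("MMD^2 (shifted):", round(mmd2_unbiased(X, Y), 4))
print("MMD^2 (same law):", round(mmd2_unbiased(X, rng.normal(0, 1, (200, 5))), 4))
```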
16

Spectral methods and computational trade-offs in high-dimensional statistical inference

Wang, Tengyao January 2016 (has links)
Spectral methods have become increasingly popular in designing fast algorithms for modern high-dimensional datasets. This thesis looks at several problems in which spectral methods play a central role. In some cases, we also show that such procedures have essentially the best performance among all randomised polynomial-time algorithms by exhibiting statistical and computational trade-offs in those problems. In the first chapter, we prove a useful variant of the well-known Davis–Kahan theorem, a spectral perturbation result that allows us to bound the distance between population eigenspaces and their sample versions. We then propose a semi-definite programming algorithm for the sparse principal component analysis (PCA) problem, and analyse its theoretical performance using the perturbation bounds we derived earlier. It turns out that the parameter regime in which our estimator is consistent is strictly smaller than the consistency regime of a minimax optimal (yet computationally intractable) estimator. We show through a reduction from a well-known hard problem in computational complexity theory that the difference in consistency regimes is unavoidable for any randomised polynomial-time estimator, hence revealing subtle statistical and computational trade-offs in this problem. Such computational trade-offs also exist in the problem of restricted isometry certification. Certifiers for restricted isometry properties can be used to construct design matrices for sparse linear regression problems. Similarly to the sparse PCA problem, we show that there is an intrinsic gap between the class of matrices certifiable using unrestricted algorithms and using polynomial-time algorithms. Finally, we consider the problem of high-dimensional changepoint estimation, where we estimate the time of change in the mean of a high-dimensional time series with piecewise constant mean structure. Motivated by real-world applications, we assume that changes occur only in a sparse subset of all coordinates. We apply a variant of the semi-definite programming algorithm from sparse PCA to aggregate the signals across different coordinates in a near-optimal way so as to estimate the changepoint location as accurately as possible. Our statistical procedure shows superior performance compared to existing methods for this problem.
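In the spirit of the final chapter, the sketch below estimates a sparse high-dimensional mean change by taking the leading singular vector of the coordinatewise CUSUM matrix, hard-thresholding it to a few coordinates, and locating the change on the projected series. It substitutes simple hard thresholding for the thesis's semi-definite programming step, so it illustrates the projection idea rather than the actual procedure.

```python
import numpy as np

def cusum_matrix(X):
    """Coordinatewise CUSUM transform of an n x p data matrix: entry (t, j)
    measures evidence for a mean change in coordinate j after time t."""
    n = X.shape[0]
    csum = np.cumsum(X, axis=0)
    t = np.arange(1, n)[:, None]
    return np.sqrt(t * (n - t) / n) * (csum[:-1] / t - (csum[-1] - csum[:-1]) / (n - t))

def sparse_projection_changepoint(X, sparsity=5):
    """Estimate a single sparse mean-change location: take the leading right
    singular vector of the CUSUM matrix, keep only its largest entries, then
    locate the change on the projected one-dimensional series."""
    T = cusum_matrix(X)
    _, _, Vt = np.linalg.svd(T, full_matrices=False)
    v = Vt[0]
    keep = np.argsort(np.abs(v))[-sparsity:]     # hard-threshold to top coords
    v_sparse = np.zeros_like(v)
    v_sparse[keep] = v[keep]
    v_sparse /= np.linalg.norm(v_sparse)
    proj = X @ v_sparse
    return 1 + int(np.argmax(np.abs(cusum_matrix(proj[:, None])[:, 0])))

rng = np.random.default_rng(4)
n, p, z = 200, 100, 120
X = rng.normal(size=(n, p))
X[z:, :5] += 1.0                                 # change in 5 of 100 coordinates
print("estimated changepoint:", sparse_projection_changepoint(X), "(true:", z, ")")
```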
17

Kvantilové křivky / Quantile curves

Michl, Marek January 2017 (has links)
Modeling quantile curves is a common problem across many applied fields. The topic of this thesis is estimating quantile curves in the case of a two-sample gradual change: the relationship between two continuous variables is of interest in two samples, where the relationship is the same for both samples up to a certain value of the explanatory variable, beyond which it may differ. The result of this thesis is a procedure for estimating quantile curves that conform to this setting.
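One way to realise a two-sample gradual change is a hinge term that is zero up to the changepoint and linear afterwards. The sketch below fits such median curves by minimising pinball loss, with the changepoint found by grid search; the model form, optimiser and simulated data are all assumptions, since the abstract does not describe the thesis's actual procedure.

```python
import numpy as np
from scipy.optimize import minimize

def pinball(u, tau):
    """Check (pinball) loss whose minimiser is the tau-quantile."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def fit_gradual_change(x1, y1, x2, y2, tau=0.5, grid=None):
    """Fit tau-quantile curves q1(x) = a + b*x and q2(x) = a + b*x + c*(x - x0)+
    that coincide up to x0 and diverge gradually afterwards. The changepoint
    x0 is found by grid search, the coefficients by pinball-loss minimisation."""
    if grid is None:
        grid = np.linspace(min(x1.min(), x2.min()), max(x1.max(), x2.max()), 40)
    def loss(theta, x0):
        a, b, c = theta
        r1 = y1 - (a + b * x1)
        r2 = y2 - (a + b * x2 + c * np.maximum(x2 - x0, 0))
        return pinball(r1, tau).sum() + pinball(r2, tau).sum()
    best = min(((minimize(loss, [0, 0, 0], args=(x0,), method="Nelder-Mead"), x0)
                for x0 in grid), key=lambda rx: rx[0].fun)
    return best[0].x, best[1]   # (a, b, c), estimated x0

rng = np.random.default_rng(5)
x1, x2 = rng.uniform(0, 10, 300), rng.uniform(0, 10, 300)
y1 = 1 + 0.5 * x1 + rng.normal(0, 1, 300)
y2 = 1 + 0.5 * x2 + 0.8 * np.maximum(x2 - 6, 0) + rng.normal(0, 1, 300)
coef, x0 = fit_gradual_change(x1, y1, x2, y2, tau=0.5)
print("coefficients:", coef.round(2), "changepoint:", round(x0, 2))
```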
18

Inférence dans les modèles à changement de pente aléatoire : application au déclin cognitif pré-démence / Inference for random changepoint models : application to pre-dementia cognitive decline

Segalas, Corentin 03 December 2019 (has links)
The aim of this work was to propose inferential methods to describe the natural history of the pre-diagnosis phase of dementia. During this phase, which can last around fifteen years, cognitive decline trajectories are nonlinear and heterogeneous between subjects. Because of this heterogeneity and nonlinearity, we chose a random changepoint mixed model to describe these trajectories. The first part of this work proposes a testing procedure to assess the existence of a random changepoint. Indeed, in some subpopulations the cognitive decline appears smooth, and the question of the existence of a changepoint itself arises. This question is methodologically challenging because of identifiability issues affecting some parameters under the null hypothesis, which render standard tests useless. We propose a supremum score test to answer this question. The second part of this work compares the temporal order of the changepoints of different markers. Dementia is a multidimensional disease in which several dimensions of cognition are affected. Hypothetical cascade models exist for describing this natural history but have not been evaluated on real data. Comparing the times of change of markers measuring different cognitive functions gives valuable insight into these hypotheses. In this spirit, we propose a bivariate random changepoint model allowing proper comparison of the changepoint times of two cognitive markers, which may be non-Gaussian. The proposed methods were evaluated in simulation studies and applied to real data from two French cohorts. Finally, we discuss the limitations of the two models, which focus on the late acceleration of cognitive decline before dementia diagnosis, and propose an alternative model that instead estimates the time at which cases and non-cases begin to diverge.
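Random changepoint trajectory models of this kind are often written as a broken stick with a smooth transition around the changepoint. The sketch below fits the fixed-effects part of such a trajectory to one simulated series by nonlinear least squares; the specific parametrisation (a square-root smoothing of the hinge) is one common choice in this literature and an assumption here, as are all the numbers.

```python
import numpy as np
from scipy.optimize import curve_fit

def smooth_changepoint(t, b0, b1, b2, tau, gamma=0.1):
    """Mean trajectory b0 + b1*(t - tau) + b2*sqrt((t - tau)^2 + gamma):
    a broken stick whose slope moves from b1 - b2 to b1 + b2 around tau,
    with gamma controlling the smoothness of the transition."""
    return b0 + b1 * (t - tau) + b2 * np.sqrt((t - tau)**2 + gamma)

rng = np.random.default_rng(6)
t = np.linspace(0, 15, 60)
# Simulated cognitive score: flat until tau = 10, then declining.
y = smooth_changepoint(t, 30.0, -0.3, -0.3, 10.0) + rng.normal(0, 0.5, t.size)
popt, _ = curve_fit(smooth_changepoint, t, y, p0=[25, 0, -1, 7])
print("estimated (b0, b1, b2, tau):", np.round(popt, 2))
```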
19

Detection of the Change Point and Optimal Stopping Time by Using Control Charts on Energy Derivatives

AL, Cihan; Koroglu, Kubra January 2011 (has links)
No description available.
20

Modelling Financial and Social Networks

Klochkov, Yegor 04 October 2019 (has links)
In this work we explore ways of studying financial and social networks, a topic that has recently received a tremendous amount of attention in the econometric literature. Chapter 2 studies the risk spillover effect via the multivariate conditional autoregressive value-at-risk model introduced in White et al. (2015). We are particularly interested in applications to non-stationary time series and develop a sequential test procedure that chooses the largest available interval of homogeneity. Our approach is based on changepoint test statistics, and we use a novel multiplier bootstrap approach for the evaluation of critical values. In Chapter 3 we turn to social networks. We model interactions between users through a vector autoregressive model, following Zhu et al. (2017). To cope with high dimensionality we consider a network that is driven by influencers on one side and communities on the other, which helps us to estimate the autoregressive operator even when the number of active parameters is smaller than the sample size. Chapter 4 is devoted to technical tools for covariance and cross-covariance estimation. We derive uniform versions of the Hanson-Wright inequality for a random vector with independent subgaussian components. The core technique is based on the entropy method combined with truncations of both the gradients of the functions of interest and the coordinates themselves. We provide several applications of our techniques: we establish a version of the standard Hanson-Wright inequality that is tighter in some regimes, and, extending our results, we show a version of the dimension-free matrix Bernstein inequality that holds for random matrices with subexponential spectral norm. We apply the derived inequality to the problem of covariance estimation with missing observations and prove an improved high-probability version of the recent result of Lounici (2014).
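As a toy version of the final application, the sketch below implements a Lounici-type covariance estimator for mean-zero data whose entries are missing completely at random with known observation probability: the masked second-moment matrix is rescaled entrywise to undo the attenuation. Dimensions, the observation probability and the test covariance are made-up values for the demonstration.

```python
import numpy as np

def covariance_missing(Y, delta):
    """Unbiased covariance estimate when each entry of the mean-zero sample X
    is observed independently with probability delta; Y = X * mask with zeros
    at unobserved entries. Off-diagonal second moments are attenuated by
    delta^2 and diagonal ones by delta, so the plug-in estimator is rescaled
    accordingly (a Lounici-type correction)."""
    n = Y.shape[0]
    G = Y.T @ Y / n                           # second moment of the masked data
    S = G / delta**2                          # undo delta^2 attenuation off-diagonal
    np.fill_diagonal(S, np.diag(G) / delta)   # diagonal only attenuated by delta
    return S

rng = np.random.default_rng(7)
p, n, delta = 20, 5000, 0.7
A = rng.normal(size=(p, p)) / np.sqrt(p)
Sigma = A @ A.T + np.eye(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
mask = rng.random((n, p)) < delta
S = covariance_missing(X * mask, delta)
print("relative spectral error:",
      round(np.linalg.norm(S - Sigma, 2) / np.linalg.norm(Sigma, 2), 3))
```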
