About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Exact and Monte-Carlo algorithms for combinatorial games / Exakta och Monte-Carlo algoritmer för kombinatoriska spel

Leino, Anders January 2014 (has links)
This thesis concerns combinatorial games and algorithms that can be used to play them. Basic definitions and results about combinatorial games are covered, and an implementation of the minimax algorithm with alpha-beta pruning is presented. Following this, we give a description and implementation of the common UCT (Upper Confidence bounds applied to Trees) variant of MCTS (Monte-Carlo tree search). Then, a framework for testing the behavior of UCT as first player, at various numbers of iterations (namely 2, 7, ... 27), versus minimax as second player, is described. Finally, we present the results obtained by applying this framework to the 2.2 million smallest non-trivial positional games having winning sets of size either 2 or 3. It is seen that on almost all different classifications of the games studied, UCT converges quickly to near-perfect play.
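The minimax-with-alpha-beta-pruning component described in this abstract can be sketched as follows. This is an illustrative toy implementation over nested-list game trees, not the thesis's actual code; the tree shape and payoffs are made up.

```python
# Minimax with alpha-beta pruning over a toy game tree.
# Trees are nested lists; integer leaves are terminal payoffs
# for the maximising player.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximising=True):
    if isinstance(node, int):              # terminal position
        return node
    if maximising:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # beta cut-off: pruned subtree
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:              # alpha cut-off
                break
        return value

tree = [[3, 5], [2, [9, 1]], [4]]
print(alphabeta(tree))
```

On this example the second child's subtree `[9, 1]` is never visited: after seeing the leaf `2` the minimising node can already be cut off.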
2

Markov chains for sampling matchings

Matthews, James January 2008 (has links)
Markov Chain Monte Carlo algorithms are often used to sample combinatorial structures such as matchings and independent sets in graphs. A Markov chain is defined whose state space includes the desired sample space and which has an appropriate stationary distribution. By simulating the chain for a sufficiently large number of steps, we can sample from a distribution arbitrarily close to the stationary distribution. The number of steps required to do this is known as the mixing time of the Markov chain. In this thesis, we consider a number of Markov chains for sampling matchings, both in general graphs and in more restricted classes, and also for sampling independent sets in claw-free graphs. We apply techniques for showing rapid mixing based on two main approaches: coupling and conductance. We consider chains using single-site moves, and also chains using large block moves. Perfect matchings of bipartite graphs are of particular interest in our community. We investigate the mixing time of a Markov chain for sampling perfect matchings in a restricted class of bipartite graphs, and show that its mixing time is exponential in some instances. For a further restricted class of graphs, however, we can show subexponential mixing time. One of the techniques for showing rapid mixing is coupling. The bound on the mixing time depends on a contraction ratio b. Ideally b < 1, but in the case b = 1 it is still possible to obtain a bound on the mixing time, provided there is a sufficiently large probability of contraction for all pairs of states. We develop a lemma which, when b = 1 and the probability of a change in distance is proportional to the distance between the two states, obtains better bounds on the mixing time than existing theorems. We apply this lemma to the Dyer-Greenhill chain for sampling independent sets, and to a Markov chain for sampling 2Δ-colourings.
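A minimal single-site chain on matchings, in the spirit of the chains discussed above (a sketch, not the thesis's exact chains): propose a uniformly random edge, remove it if present, add it if both endpoints are free, otherwise stay put. Proposals are symmetric, so the stationary distribution is uniform over all matchings of the graph.

```python
import random

def step(matching, edges, rng):
    """One single-site move: add/remove a random edge if legal, else hold."""
    u, v = rng.choice(edges)
    if (u, v) in matching:
        matching = matching - {(u, v)}        # remove an existing edge
    elif all(u not in e and v not in e for e in matching):
        matching = matching | {(u, v)}        # add: both endpoints unmatched
    return matching

def sample_matching(edges, steps=10_000, seed=0):
    rng = random.Random(seed)
    m = frozenset()                           # start from the empty matching
    for _ in range(steps):
        m = frozenset(step(set(m), edges, rng))
    return m

triangle = [(0, 1), (1, 2), (0, 2)]
print(sample_matching(triangle))
```

For the triangle there are four matchings (the empty one and the three single edges), so a long run visits each with probability close to 1/4.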
3

Modelling operational risk using skew t-copulas and Bayesian inference

Garzon Rozo, Betty Johanna January 2016 (has links)
Operational risk losses are heavy tailed and are likely to be asymmetric and extremely dependent among business lines/event types. The analysis of dependence via copula models has mainly focussed on the bivariate case, and in the vast majority of instances symmetric elliptical copulas are employed to model dependence for severities. This thesis proposes a new methodology to assess, in a multivariate way, the asymmetry and extreme dependence between severities, and to calculate the capital for operational risk. This methodology simultaneously uses (i) several parametric distributions and an alternative mixture distribution (the Lognormal for the body of losses and the generalised Pareto Distribution for the tail), using a technique from extreme value theory, (ii) the multivariate skew t-copula, applied for the first time across severities, and (iii) Bayesian theory. The first is used to model severities: I simultaneously test several parametric distributions and the mixture distribution for each business line, which enables me to evaluate multiple combinations of severity distributions and to find the one that fits most closely. The second effectively models asymmetry and extreme dependence in high dimensions. The third is used to estimate the copula model: given the high multivariate component (i.e. eight business lines and seven event types) and the incorporation of mixture distributions, maximum likelihood is highly difficult to implement. Therefore, I use a Bayesian inference framework and Markov chain Monte Carlo simulation to evaluate the posterior distribution and to estimate and make inferences about the parameters of the skew t-copula model. The research analyses an updated operational loss data set, SAS® Operational Risk Global Data (SAS OpRisk Global Data), to model operational risk at international financial institutions.
I then evaluate the impact of this multivariate, asymmetric and extreme dependence on the estimated total regulatory capital, comparing it against other established multivariate copulas. My empirical findings are consistent with other studies reporting thin and medium-tailed loss distributions. My approach substantially outperforms symmetric elliptical copulas, demonstrating that modelling dependence via the skew t-copula provides a more efficient allocation of capital charges, up to 56% smaller than that indicated by the standard Basel model.
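The body/tail severity mixture described above (Lognormal body, generalised Pareto tail) can be sketched as a spliced sampler. All parameter values here are invented, and the fixed-threshold splice with a fixed tail probability is an illustrative simplification of the extreme-value-theory construction, not the thesis's fitted model.

```python
import math
import random

MU, SIGMA = 8.0, 1.5        # Lognormal body parameters (assumed)
U = 50_000.0                # body/tail threshold (assumed)
XI, BETA = 0.4, 20_000.0    # GPD shape and scale (assumed)
P_TAIL = 0.05               # probability a loss exceeds the threshold (assumed)

def sample_severity(rng):
    """Draw one loss from the spliced Lognormal-body / GPD-tail model."""
    if rng.random() < P_TAIL:
        # inverse-CDF draw from the GPD for the exceedance over U:
        # F(y) = 1 - (1 + XI * y / BETA) ** (-1 / XI)
        w = rng.random()
        return U + BETA / XI * ((1.0 - w) ** (-XI) - 1.0)
    # draw from the Lognormal body, truncated to [0, U] by rejection
    while True:
        x = rng.lognormvariate(MU, SIGMA)
        if x <= U:
            return x

rng = random.Random(1)
losses = [sample_severity(rng) for _ in range(1000)]
```

Simulated severities like these would then be combined across business lines through the dependence model (here, the skew t-copula) before aggregating to a capital figure.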
4

Auxiliary variable Markov chain Monte Carlo methods

Graham, Matthew McKenzie January 2018 (has links)
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis–Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show 'sticking' behaviour, where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune, and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state, it is possible to use new transition operators, such as those based on slice-sampling algorithms, within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process, with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables, by making the approximation that generated observed variables are 'close' rather than exactly equal to observed data.
Although this makes the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify, and standard algorithms tend to perform poorly when conditioning on high-dimensional observations. This often requires a further approximation: reducing the observations to lower-dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in the Markov chain state allows the use of more efficient and robust MCMC methods, such as slice sampling and Hamiltonian Monte Carlo (HMC), within an ABC framework. In some cases this allows inference conditioned on the full set of observed values where standard ABC methods require reduction to lower-dimensional summaries for tractability. Further, we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models, which allows the generated observed variables to be conditioned arbitrarily close to observed data while maintaining computational tractability. As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and to allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values. The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within an HMC method.
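The pseudo-marginal idea at the start of this abstract can be sketched in a few lines. This toy version (not from the thesis) targets a standard normal density known only through a noisy but unbiased estimator; note how the current density estimate is recycled between iterations, which is exactly what produces the 'sticking' behaviour when the estimator happens to overestimate.

```python
import math
import random

def noisy_density(theta, rng, noise=0.3):
    """Unbiased but noisy estimate of an (unnormalised) N(0,1) density."""
    exact = math.exp(-0.5 * theta * theta)
    # multiplicative lognormal noise with mean exactly 1 keeps it unbiased
    return exact * rng.lognormvariate(-0.5 * noise**2, noise)

def pseudo_marginal_mh(n_iters=5000, step=1.0, seed=0):
    rng = random.Random(seed)
    theta, dens = 0.0, noisy_density(0.0, rng)
    samples = []
    for _ in range(n_iters):
        prop = theta + rng.gauss(0.0, step)       # random-walk proposal
        prop_dens = noisy_density(prop, rng)
        if rng.random() < prop_dens / dens:       # MH accept with estimates
            theta, dens = prop, prop_dens         # accept: keep new estimate
        samples.append(theta)
    return samples

samples = pseudo_marginal_mh()
```

The auxiliary pseudo-marginal approach in the thesis goes further by putting the estimator's own randomness into the chain state, so that moves like slice sampling can also update it; that refinement is not shown here.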
5

3 essays on credit risk modeling and the macroeconomic environment

Papanastasiou, Dimitrios January 2015 (has links)
In the aftermath of the recent financial crisis, the way credit risk is affected by and affects the macroeconomic environment has been the focus of academics, risk practitioners and central bankers alike. In this thesis I approach three distinct questions that aim to provide valuable insight into how corporate defaults, recoveries and credit ratings interact with conditions in the wider economy. The first question focuses on how well the macroeconomic environment forecasts corporate bond defaults. I approach the question from a macroeconomic perspective and make full use of the multitude of lengthy macroeconomic time series available. Following the recent literature on data-rich environment modelling, I summarise a large panel of 103 macroeconomic time series into a small set of 6 dynamic factors; the factors capture business cycle, yield curve, credit premia and equity market conditions. Prior studies on dynamic factors use identification schemes based on principal components or recursive short-run restrictions. The main contribution to the existing literature is that I provide a novel and more robust identification scheme for the 6 macro-financial stochastic factors, based on a set of over-identifying restrictions. This allows for a more straightforward interpretation of the extracted factors and a more meaningful decomposition of the corporate default dynamics. Furthermore, I use a novel Bayesian estimation scheme based on a Markov chain Monte Carlo algorithm that has not been used before in a credit risk context. I argue that the proposed algorithm provides an efficient and flexible alternative to the simulation-based estimation approaches used in the existing literature. The sampling scheme is used to estimate a state-of-the-art dynamic econometric specification that is able to separate macroeconomic fluctuations from unobserved default clustering.
Finally, I provide evidence that the macroeconomic factors can lead to significant improvements in default probability forecasting performance, although the gains become less pronounced the longer the forecasting horizon. The second question explores the sensitivity of corporate bond defaults and recoveries to monetary policy and macro-financial shocks. To address it, I follow a more structural approach to extract theory-based economic shocks and quantify the magnitude of their impact on the two main credit risk drivers. This is the first study that approaches the decomposition of movements in credit risk metrics from a structural perspective. I introduce a VAR model with a novel semi-structural identification scheme to isolate the various shocks at the macro level. The dynamic econometric specification for defaults and recoveries is similar to the one used for the first question. The specification is flexible enough to separate macroeconomic movements from credit-risk-specific unobserved correlation and, therefore, to isolate the different shock transmission mechanisms. I report that the corporate default likelihood is strongly affected by balance sheet and real economy shocks for the cyclical industry sectors, while the effects of monetary policy shocks typically take up to one year to materialise. In contrast, recovery rates tend to be more sensitive to asset price shocks, while real economy shocks mainly affect secured debt recovery values. The third question shifts the focus to credit ratings and addresses the Through-the-Cycle dynamics of the serial dependence in rating migrations. The existing literature treats the so-called rating momentum as constant through time. I show that the rating momentum is far from constant: it changes with the business cycle, and its magnitude exhibits a non-linear dependence on the time spent in a given rating grade.
Furthermore, I provide robust evidence that the time-varying rating momentum substantially increases actual and mark-to-market losses in periods of stress. The impact on regulatory capital for financial institutions is less clear; nevertheless, capital requirements for high credit quality portfolios can be significantly underestimated during economic downturns.
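The idea of a common macro-financial factor shifting sector-level default rates can be illustrated with a standard one-factor (Vasicek-style) conditional default probability. This is a textbook illustration of the factor mechanism, not the thesis's dynamic factor model, and the parameter values are invented.

```python
import math
from statistics import NormalDist

N = NormalDist()

def conditional_pd(pd_uncond, rho, z):
    """Default probability conditional on a systematic factor z.

    pd_uncond: unconditional default probability (assumed value)
    rho:       asset correlation with the common factor (assumed value)
    z:         realisation of the common macro-financial factor
    """
    num = N.inv_cdf(pd_uncond) - math.sqrt(rho) * z
    return N.cdf(num / math.sqrt(1.0 - rho))

# a downturn (z = -2) raises the conditional default rate sharply,
# a boom (z = +2) suppresses it
print(conditional_pd(0.02, 0.2, -2.0), conditional_pd(0.02, 0.2, 2.0))
```

In the thesis the factor dynamics are estimated from 103 series and identified via over-identifying restrictions; here z is simply given.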
6

Mécanismes de formation de coalitions d’agents dans les processus de planification / On coalition formation methods in multi-agents systems

Arib, Souhila 10 September 2015 (has links)
The work we present in this thesis focuses on the problem of coalition formation among self-interested agents that plan their activities in multi-agent systems (MAS). As a first step, we proposed a mechanism based on the analysis of the actions in the agents' plans and on reasoning about the plans of others, notably via the computation of a degree of belief over actions. We then addressed the problem of coalition formation with dynamic constraints and preferences that agents reveal and communicate to one another during their negotiations. Finally, we refined our coalition formation mechanism to allow a search for coalitions guided by the construction of a tree of constraints and a tree of coalitions, each of which is then explored by means of a Monte-Carlo algorithm.
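A flat Monte-Carlo search over coalitions, in the spirit of the mechanism described above (the thesis guides the search with constraint and coalition trees; this toy sketch just samples coalitions at random). The agents and the characteristic value function are invented for illustration.

```python
import random

AGENTS = ["a1", "a2", "a3", "a4"]           # hypothetical agents

def value(coalition):
    """Assumed toy valuation: quadratic synergy minus a coordination cost."""
    k = len(coalition)
    return k * k - 0.5 * k

def monte_carlo_best(n_samples=200, seed=0):
    """Estimate the best coalition by sampling random subsets of agents."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(n_samples):
        k = rng.randint(1, len(AGENTS))
        coalition = tuple(sorted(rng.sample(AGENTS, k)))
        if value(coalition) > best_val:
            best, best_val = coalition, value(coalition)
    return best, best_val

print(monte_carlo_best())
```

With a superadditive valuation like this one the grand coalition wins; tree-guided exploration becomes valuable when constraints and preferences rule out many coalitions.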
7

Some Contributions to Distribution Theory and Applications

Selvitella, Alessandro 11 1900 (has links)
In this thesis, we present some new results in distribution theory for both discrete and continuous random variables, together with their motivating applications. We start with some results about the Multivariate Gaussian Distribution and its characterization as a maximizer of the Strichartz Estimates. Then, we present some characterizations of discrete and continuous distributions through ideas coming from optimal transportation. After this, we pass to the Simpson's Paradox and see that it is ubiquitous and it appears in Quantum Mechanics as well. We conclude with a group of results about discrete and continuous distributions invariant under symmetries, in particular invariant under the groups $A_1$, an elliptical version of $O(n)$ and $\mathbb{T}^n$. As mentioned, all the results proved in this thesis are motivated by their applications in different research areas. The applications will be thoroughly discussed. We have tried to keep each chapter self-contained and recalled results from other chapters when needed. The following is a more precise summary of the results discussed in each chapter. In chapter \ref{chapter 2}, we discuss a variational characterization of the Multivariate Normal distribution (MVN) as a maximizer of the Strichartz Estimates. Strichartz Estimates appear as a fundamental tool in the proof of wellposedness results for dispersive PDEs. With respect to the characterization of the MVN distribution as a maximizer of the entropy functional, the characterization as a maximizer of the Strichartz Estimate does not require the constraint of fixed variance. In this chapter, we compute the precise optimal constant for the whole range of Strichartz admissible exponents, discuss the connection of this problem to Restriction Theorems in Fourier analysis and give some statistical properties of the family of Gaussian Distributions which maximize the Strichartz estimates, such as Fisher Information, Index of Dispersion and Stochastic Ordering. 
We conclude this chapter by presenting an optimization algorithm to compute the maximizers numerically. Chapter \ref{chapter 3} is devoted to the characterization of distributions by means of techniques from Optimal Transportation and the Monge-Amp\`{e}re equation. We give emphasis to methods for doing statistical inference for distributions that do not possess good regularity, decay or integrability properties, for example distributions which do not admit a finite expected value, such as the Cauchy distribution. The main tool used here is a modified version of the characteristic function (a particular case of the Fourier Transform). An important motivation to develop these tools comes from Big Data analysis and in particular the Consensus Monte Carlo Algorithm. In chapter \ref{chapter 4}, we study \emph{Simpson's Paradox}: the phenomenon that appears in some datasets where subgroups with a common trend (say, all negative trends) show the reverse trend when they are aggregated (say, a positive trend). Even though this issue has an elementary mathematical explanation, the statistical implications are deep. Basic examples appear in arithmetic, geometry, linear algebra, statistics, game theory and sociology (e.g. gender bias in the graduate school admission process). In our new results, we prove the occurrence of \emph{Simpson's Paradox} in Quantum Mechanics. In particular, we prove that \emph{Simpson's Paradox} occurs for solutions of the \emph{Quantum Harmonic Oscillator}, both in the stationary case and in the non-stationary case. We prove that the phenomenon is not isolated and that it appears (asymptotically) in the context of the \emph{Nonlinear Schr\"{o}dinger Equation} as well. The likelihood of \emph{Simpson's Paradox} in Quantum Mechanics and the physical implications are also discussed. Chapter \ref{chapter 5} contains some new results about distributions with symmetries.
We first discuss a result on symmetric order statistics: we prove that the symmetry of any of the order statistics is equivalent to the symmetry of the underlying distribution. Then, we characterize elliptical distributions through group invariance and give some of their properties. Finally, we study geometric probability distributions on the torus with applications to molecular biology. In particular, we introduce a new family of distributions generated through stereographic projection, give several of their properties, and compare them with the von Mises distribution and its multivariate extensions. / Thesis / Doctor of Philosophy (PhD)
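Simpson's Paradox, as described in this abstract, is easy to exhibit numerically. The following uses classic illustrative treatment data (not data from the thesis): treatment A wins inside every subgroup yet loses once the subgroups are pooled.

```python
# (successes, trials) per treatment arm in each severity subgroup;
# the numbers are the classic kidney-stone illustration, not thesis data
groups = {
    "mild":   {"A": (81, 87),   "B": (234, 270)},
    "severe": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# A beats B within each subgroup...
for arms in groups.values():
    assert rate(*arms["A"]) > rate(*arms["B"])

# ...yet B beats A on the pooled data, because the arms face
# very different mixes of mild and severe cases
tot_a = tuple(map(sum, zip(*(arms["A"] for arms in groups.values()))))
tot_b = tuple(map(sum, zip(*(arms["B"] for arms in groups.values()))))
print(rate(*tot_a), rate(*tot_b))
```

Here A's pooled success rate is 273/350 ≈ 0.78 against B's 289/350 ≈ 0.83: the subgroup trend reverses under aggregation, exactly the phenomenon the thesis then traces inside quantum-mechanical systems.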
8

Non-convex Bayesian Learning via Stochastic Gradient Markov Chain Monte Carlo

Wei Deng (11804435) 18 December 2021 (has links)
<div>The rise of artificial intelligence (AI) hinges on the efficient training of modern deep neural networks (DNNs) for non-convex optimization and uncertainty quantification, which boils down to a non-convex Bayesian learning problem. A standard tool for this problem is Langevin Monte Carlo, which approximates the posterior distribution with theoretical guarantees. However, non-convex Bayesian learning in real big-data applications can be arbitrarily slow and often fails to capture the uncertainty or the informative modes within a limited time. As a result, advanced techniques are still required.</div><div><br></div><div>In this thesis, we start with replica exchange Langevin Monte Carlo (also known as parallel tempering), a Markov jump process that proposes appropriate swaps between exploration and exploitation to achieve acceleration. However, the naïve extension of swaps to big-data problems leads to a large bias, and bias-corrected swaps are required. Such a mechanism leads to few effective swaps and insignificant acceleration. To alleviate this issue, we first propose a control-variates method to reduce the variance of the noisy energy estimators and show its potential to accelerate the exponential convergence. We also present population-chain replica exchange and propose a generalized deterministic even-odd scheme to track the non-reversibility and obtain an optimal round-trip rate. Further approximations are conducted based on stochastic gradient descent, which yields a user-friendly nature for large-scale uncertainty approximation tasks without much tuning cost.</div><div><br></div><div>In the second part of the thesis, we study scalable dynamic importance sampling algorithms based on stochastic approximation. Traditional dynamic importance sampling algorithms have achieved successes in bioinformatics and statistical physics; however, their lack of scalability has greatly limited their extension to big-data applications.
To handle this scalability issue, we resolve the vanishing-gradient problem and propose two dynamic importance sampling algorithms based on stochastic gradient Langevin dynamics. Theoretically, we establish the stability condition for the underlying ordinary differential equation (ODE) system and guarantee the asymptotic convergence of the latent variable to the desired fixed point. Interestingly, such a result still holds for non-convex energy landscapes. In addition, we propose a pleasingly parallel version of these algorithms with interacting latent variables. We show that the interacting algorithm can be theoretically more efficient than the single-chain alternative with an equivalent computational budget.</div>
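The exploration/exploitation swap mechanism behind replica exchange Langevin Monte Carlo can be sketched on a one-dimensional double-well energy. This toy uses full gradients and the standard Metropolis-style swap test, so it illustrates the plain (not the bias-corrected stochastic-gradient) scheme; the step size and temperatures are arbitrary illustrative choices.

```python
import math
import random

def u(x):                       # double-well energy U(x) = (x^2 - 1)^2
    return (x * x - 1.0) ** 2

def grad_u(x):
    return 4.0 * x * (x * x - 1.0)

def replica_exchange(n_steps=20_000, eta=0.01, temps=(0.1, 1.0), seed=0):
    rng = random.Random(seed)
    xs = [1.0, 1.0]             # one Langevin replica per temperature
    samples = []
    for _ in range(n_steps):
        # overdamped Langevin update for each replica at its temperature
        for i, t in enumerate(temps):
            noise = math.sqrt(2.0 * eta * t) * rng.gauss(0.0, 1.0)
            xs[i] = xs[i] - eta * grad_u(xs[i]) + noise
        # propose swapping the cold and hot replicas
        log_acc = (1.0 / temps[0] - 1.0 / temps[1]) * (u(xs[0]) - u(xs[1]))
        if math.log(rng.random() + 1e-300) < log_acc:
            xs[0], xs[1] = xs[1], xs[0]
        samples.append(xs[0])   # track the low-temperature chain
    return samples

samples = replica_exchange()
```

The hot replica crosses the barrier easily and hands well-explored states to the cold replica through swaps, so the low-temperature chain visits both modes far sooner than a single cold Langevin chain would.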
