21

Calcul parallèle pour les problèmes linéaires, non-linéaires et linéaires inverses en finance / Parallel computing for linear, nonlinear and linear inverse problems in finance

Abbas-Turki, Lokman 21 September 2012 (has links)
Handling multidimensional parabolic linear, nonlinear and linear inverse problems is the main objective of this work. The multidimensional setting makes the use of simulation methods based on Monte Carlo virtually inevitable, and it also makes parallel architectures necessary: problems dealing with a large number of assets are major resource consumers, and only parallelization can reduce their execution times. Consequently, the first goal of our work is to propose random number generators appropriate for parallel and massively parallel architectures implemented on CPU/GPU clusters. We quantify the speedup and the energy consumption of the parallel execution of a European option pricing. The second objective is to reformulate the nonlinear problem of pricing American options so as to obtain the same parallelization gains as those achieved for linear problems. In addition to its suitability for parallelization, the proposed method, based on Malliavin calculus, has other practical advantages. Continuing with parallel algorithms, the last part of this work is dedicated to the uniqueness of the solution of some linear inverse problems in finance. This theoretical study enables the use of simple methods based on Monte Carlo.
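To illustrate the linear (European) pricing case discussed in this abstract, here is a minimal single-threaded Monte Carlo sketch under Black-Scholes dynamics; the thesis's actual GPU generators and models are not reproduced, and all parameter values are illustrative:

```python
import numpy as np

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=0):
    """Plain Monte Carlo price of a European call under Black-Scholes.

    Each path needs a single normal draw, so the estimator is
    embarrassingly parallel given independent RNG streams per worker."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.exp(-r * t) * np.maximum(st - k, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

price, std_err = mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
# price lands close to the closed-form Black-Scholes value (about 10.45 here)
```

Because each path is independent, distributing the loop over workers only requires statistically independent generator streams — the "appropriate" generators the abstract refers to.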
22

Oceňování derivátů pomocí Monte Carlo simulací / Derivative Pricing Using Monte Carlo Simulations

Burešová, Jana January 2009 (has links)
Pricing of more complex derivatives is very often based on Monte Carlo simulations. The estimates given by these simulations are derived from thousands of scenarios for the development of the underlying asset price. These estimates can be made more precise with a higher number of scenarios or with the modifications of the simulation discussed in this master thesis. The first part of the thesis gives a theoretical description of variance reduction techniques; the second part implements all of the techniques for pricing a barrier option and compares them. We conclude the thesis with two statements. The first is that the usefulness of each technique depends on the specifics of the simulation; the second recommends using MC simulations even when a closed-form formula has been derived.
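One of the standard variance reduction techniques of the kind this thesis compares is antithetic variates. A sketch on a vanilla call for simplicity (the thesis prices a barrier option; parameters here are illustrative):

```python
import numpy as np

def discounted_call_payoff(z, s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0):
    """Discounted Black-Scholes call payoff as a function of the normal draw z."""
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    return np.exp(-r * t) * np.maximum(st - k, 0.0)

rng = np.random.default_rng(1)
n = 100_000

plain = discounted_call_payoff(rng.standard_normal(2 * n))   # 2n independent draws
z = rng.standard_normal(n)
anti = 0.5 * (discounted_call_payoff(z)
              + discounted_call_payoff(-z))                  # n antithetic pairs

# The payoff is monotone in z, so pairing z with -z induces negative
# correlation and lowers the variance of the estimator of the mean.
var_plain = plain.var(ddof=1) / (2 * n)
var_anti = anti.var(ddof=1) / n
```

Both estimators consume the same number of payoff evaluations, but the paired version has a strictly smaller variance whenever the payoff is monotone in the driving noise.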
23

SEQUENTIAL A/B TESTING USING PRE-EXPERIMENT DATA

Stenberg, Erik January 2019 (has links)
This thesis bridges the gap between two popular methods for achieving more efficient online experiments: sequential tests and variance reduction with pre-experiment data. Through simulations, it is shown that there is efficiency to be gained by using control variates sequentially along with the popular mixture Sequential Probability Ratio Test. More efficient tests lead to faster decisions and smaller required sample sizes. The proposed technique is also tested on empirical data on users of the music streaming service Spotify. An R package that includes the main tests applied in this thesis is also presented.
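The control-variate adjustment with pre-experiment data that underlies this thesis can be sketched as follows; the data-generating process and coefficients below are purely illustrative, not Spotify data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Hypothetical data: x is a pre-experiment metric, y the in-experiment metric.
x = rng.normal(10.0, 3.0, n)
y = 2.0 + 0.8 * x + rng.normal(0.0, 1.0, n)

# Control-variate adjustment: subtract the part of y explained by x.
theta = np.cov(x, y)[0, 1] / x.var(ddof=1)
y_cv = y - theta * (x - x.mean())

# The adjustment leaves the mean unchanged but shrinks the variance by
# roughly a factor of (1 - corr(x, y)**2).
```

A lower-variance metric shrinks the sequential test's required sample size, which is exactly the combined gain the thesis studies.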
24

Variance Reduction in Wind Farm Layout Optimization

Gagakuma, Bertelsen 01 December 2019 (has links)
As demand for wind power continues to grow, it is becoming increasingly important to minimize the risk, characterized by the variance, associated with long-term power forecasts. This thesis investigated variance reduction in power forecasts from wind farm layout optimization. The problem was formulated as a multi-objective optimization problem of maximizing mean plant power and minimizing its variance. The ε-constraint method was used to solve the bi-objective problem in a two-step framework in which two sequential optimizations are performed: the first maximizes mean wind farm power alone, and the second minimizes variance with a constraint on the mean power at the value from the first optimization. The results show that the variance in power estimates can be reduced by up to 30% without sacrificing mean plant power for the different farm sizes and wind conditions studied. This reduction is attributed to the multi-modality of the design space, which allows for distinct solutions of high mean plant power at different power variances. Thus, wind farms can be designed to maximize power capture with greater confidence.
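The two-step ε-constraint procedure described above can be sketched on a toy one-dimensional problem; the quadratic stand-ins below are hypothetical, not the thesis's wake-model objectives:

```python
import numpy as np

# Hypothetical quadratic stand-ins for mean plant power and power variance
# over a single layout parameter, evaluated on a grid.
x = np.linspace(0.0, 3.0, 3001)
mean_power = 10.0 - (x - 1.0) ** 2
power_var = 1.0 + (x - 1.5) ** 2

# Step 1: maximize mean power alone.
p_star = mean_power.max()
x_step1 = x[mean_power.argmax()]

# Step 2 (epsilon-constraint): minimize variance among layouts retaining
# at least 99% of the step-1 mean power.
feasible = mean_power >= 0.99 * p_star
x_step2 = x[feasible][power_var[feasible].argmin()]
# x_step2 trades a small amount of mean power for a lower variance.
```

The constraint level (99% of the step-1 optimum here) is the ε knob: sweeping it traces out the mean-variance trade-off front.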
25

Mathematical modelling and simulation for tumour growth and angiogenesis / Matematisk modellering och simulering för tumörtillväxt och angiogenes

Luna, René Edgardo January 2021 (has links)
Cancer is a complex illness that affects millions of people every year. Amongst the most frequently encountered variants of this illness are solid tumours. The growth of solid tumours depends on a large number of factors such as oxygen concentration, cell reproduction, cell movement, cell death, and the vascular environment. The aim of this thesis is to provide further insight into the interconnections between these factors by means of numerical simulations. We present a multiscale model for tumour growth by coupling a microscopic, agent-based model for normal and tumour cells with macroscopic mean-field models for oxygen and extracellular concentrations. We assume cell movement to be dominated by Brownian motion. The temporal and spatial evolution of the oxygen concentration is governed by a reaction-diffusion equation that mimics a balance law. To complement this macroscopic oxygen evolution with microscopic information, we propose a lattice-free approach that extends the vascular distribution of oxygen. We employ a Markov chain to estimate the sprout probability of new vessels. The extension of the new vessels is modeled by enhancing the agent-based cell model with chemotactic sensitivity. Our results include finite-volume discretizations of the resulting partial differential equations and suitable approaches to approximate the stochastic differential equations governing the agent-based motion. We provide a simulation framework that evaluates the effect of the various parameters on, for instance, the spread of oxygen. We also show results of numerical experiments in which we allow new vessels to sprout, i.e., we explore angiogenesis. In the case of a static vasculature, we simulate the full multiscale model using a coupled stochastic/deterministic discretization approach that is able to reduce variance, at least for a chosen computable indicator, leading to improved efficiency and potentially increased reliability of models of this type.
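The Brownian cell movement assumed in the agent-based model can be sketched with a simple Euler-Maruyama scheme; the parameter values are illustrative, and the chemotactic drift, oxygen coupling, and vessel sprouting of the full model are omitted:

```python
import numpy as np

rng = np.random.default_rng(7)
n_cells, n_steps, dt, diff_coef = 5_000, 200, 0.01, 1.0  # illustrative values

# Euler-Maruyama for dX_t = sqrt(2 D) dW_t: pure Brownian motion of cells
# in two dimensions (drift terms such as chemotaxis are omitted here).
pos = np.zeros((n_cells, 2))
step_scale = np.sqrt(2.0 * diff_coef * dt)
for _ in range(n_steps):
    pos += step_scale * rng.standard_normal((n_cells, 2))

# Sanity check: mean squared displacement approaches 4 * D * t in 2-D.
msd = (pos ** 2).sum(axis=1).mean()
```

Adding a drift term proportional to a local chemoattractant gradient inside the loop is the natural place where the chemotactic sensitivity of the full model would enter.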
26

Asymptotic approaches in financial risk management / Approches asymptotiques en gestion des risques financiers

Genin, Adrien 21 September 2018 (has links)
This thesis focuses on three problems from the area of financial risk management, using various asymptotic approaches. The first part presents an importance sampling algorithm for Monte Carlo pricing of exotic options in exponential Lévy models. The optimal importance sampling measure is computed using techniques from the theory of large deviations. The second part uses the Laplace method to study the tail behavior of the sum of n dependent positive random variables following a log-normal mixture distribution, with applications to portfolio risk management. Finally, the last part employs the notion of multivariate regular variation to analyze the tail behavior of a random vector with heavy-tailed components whose dependence structure is modeled by a Gaussian copula. As an application, we consider the tail behavior of a portfolio of options in the Black-Scholes model.
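The importance sampling idea from the first part can be sketched, in the simpler Black-Scholes setting rather than an exponential Lévy model, as an exponential tilting of the driving normal variable toward the payoff region (all parameters are illustrative):

```python
import numpy as np

def mc_otm_call(theta, s0=100.0, k=150.0, r=0.05, sigma=0.2, t=1.0,
                n=100_000, seed=3):
    """Monte Carlo price of an out-of-the-money call, sampling the driving
    normal as Z ~ N(theta, 1) and reweighting by the likelihood ratio."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n) + theta
    lr = np.exp(-theta * z + 0.5 * theta ** 2)  # dN(0,1)/dN(theta,1) at z
    st = s0 * np.exp((r - 0.5 * sigma ** 2) * t + sigma * np.sqrt(t) * z)
    est = np.exp(-r * t) * np.maximum(st - k, 0.0) * lr
    return est.mean(), est.std(ddof=1) / np.sqrt(n)

plain_price, plain_err = mc_otm_call(theta=0.0)   # no tilting
is_price, is_err = mc_otm_call(theta=2.0)         # drift toward the payoff region
```

Choosing the drift theta optimally is where large deviations theory enters in the thesis; here theta = 2.0 is simply a reasonable hand-picked value for this strike.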
27

Asynchronous optimization for machine learning / Optimisation asynchrone pour l'apprentissage statistique

Leblond, Rémi 15 November 2018 (has links)
The impressive breakthroughs of the last two decades in the field of machine learning can be in large part attributed to the explosion of computing power and available data. These two limiting factors have been replaced by a new bottleneck: algorithms. The focus of this thesis is thus on introducing novel methods that can take advantage of high data quantity and computing power. We present two independent contributions. First, we develop and analyze novel fast optimization algorithms which take advantage of the advances in parallel computing architecture and can handle vast amounts of data. We introduce a new framework of analysis for asynchronous parallel incremental algorithms, which enables correct and simple proofs. We then demonstrate its usefulness by performing the convergence analysis for several methods, including two novel algorithms. Asaga is a sparse asynchronous parallel variant of the variance-reduced algorithm Saga which enjoys fast linear convergence rates on smooth and strongly convex objectives. We prove that it can be linearly faster than its sequential counterpart, even without sparsity assumptions. ProxAsaga is an extension of Asaga to the more general setting where the regularizer can be non-smooth. We prove that it can also achieve a linear speedup. We provide extensive experiments comparing our new algorithms to the current state of the art. Second, we introduce new methods for complex structured prediction tasks. We focus on recurrent neural networks (RNNs), whose traditional training algorithm, based on maximum likelihood estimation (MLE), suffers from several issues. The associated surrogate training loss notably ignores the information contained in structured losses and introduces discrepancies between train and test times that may hurt performance. To alleviate these problems, we propose SeaRNN, a novel training algorithm for RNNs inspired by the "learning to search" approach to structured prediction. SeaRNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error than the MLE objective. We demonstrate improved performance over MLE on three challenging tasks, and provide several subsampling strategies to enable SeaRNN to scale to large-scale tasks, such as machine translation. Finally, after contrasting the behavior of SeaRNN models with MLE models, we conduct an in-depth comparison of our new approach to the related work.
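A minimal sequential Saga sketch on a least-squares toy problem illustrates the variance-reduced update that Asaga parallelizes; this is the textbook sequential algorithm, not the asynchronous variant analyzed in the thesis:

```python
import numpy as np

def saga(a, b, step, n_epochs, seed=0):
    """Sequential SAGA for least squares f(x) = (1/2n) sum_i (a_i.x - b_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = a.shape
    x = np.zeros(d)
    grads = np.zeros((n, d))     # last gradient seen for each sample
    avg = grads.mean(axis=0)     # running average of the gradient table
    for _ in range(n_epochs * n):
        i = rng.integers(n)
        g_new = (a[i] @ x - b[i]) * a[i]
        # Variance-reduced step: fresh gradient minus stale one plus average.
        x -= step * (g_new - grads[i] + avg)
        avg += (g_new - grads[i]) / n
        grads[i] = g_new
    return x

rng = np.random.default_rng(1)
a = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = a @ x_true                   # noiseless, so x_true is the minimizer
x_hat = saga(a, b, step=0.01, n_epochs=50)
```

The asynchronous variant lets multiple workers execute this inner loop concurrently on shared memory; proving that the resulting inconsistent reads and writes still converge linearly is the thesis's contribution.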
28

Non-convex Stochastic Optimization With Biased Gradient Estimators

Sokolov, Igor 03 1900 (has links)
Non-convex optimization problems appear in various applications of machine learning. Because of their practical importance, these problems have gained a lot of attention in recent years, leading to the rapid development of new efficient stochastic gradient-type methods. In the quest to improve the generalization performance of modern deep learning models, practitioners resort to larger and larger datasets in the training process, naturally distributed across a number of edge devices. However, as the amount of training data increases, the computational costs of gradient-type methods increase significantly. In addition, distributed methods almost invariably suffer from the so-called communication bottleneck: the cost of communicating the information the workers need to jointly solve the problem is often very high, and can be orders of magnitude higher than the cost of computation. This thesis provides a study of first-order stochastic methods addressing these issues. In particular, we structure the study around certain classes of methods, which allowed us to identify current theoretical gaps and fill them by providing new efficient algorithms.
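A common biased gradient estimator in this literature is top-k sparsification, which attacks the communication bottleneck by transmitting only the largest coordinates of the gradient. A sketch (a standard compressor from the field, not necessarily one studied in this particular thesis):

```python
import numpy as np

def top_k(g, k):
    """Top-k sparsification: keep only the k largest-magnitude coordinates.
    The estimator is biased, but its error is a contraction of the input:
    ||top_k(g) - g||^2 <= (1 - k/len(g)) * ||g||^2."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)          # stand-in for a worker's gradient
c = top_k(g, 100)                      # only 10% of coordinates transmitted

compression_error = np.linalg.norm(c - g) / np.linalg.norm(g)
```

Because the compressor is biased, plain averaging of compressed gradients need not converge; error-feedback mechanisms that accumulate the discarded residual are the standard fix analyzed in this line of work.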
29

Benchmark estimation for Markov Chain Monte Carlo samplers

Guha, Subharup 18 June 2004 (has links)
No description available.
30

Risk Management of Cascading Failure in Composite Reliability of a Deregulated Power System with Microgrids

Chen, Quan 27 December 2013 (has links)
Due to power system deregulation, transmission expansion not keeping up with load growth, and a higher frequency of natural hazards resulting from climate change, major blackouts are becoming more frequent and are spreading over larger regions, entailing higher losses and costs to the economy and society of many countries in the world. Large-scale blackouts typically result from cascading failure originating from a local event, as typified by the 2003 U.S.-Canada blackout. Their mitigation in power system planning calls for the development of methods and algorithms that assess the risk of cascading failures due to relay over-tripping, short circuits induced by overgrown vegetation, voltage sags, line and transformer overloading, transient instabilities, and voltage collapse, to name a few. How to control the economic losses of blackouts is gaining a lot of attention among power researchers. In this research work, we develop new Monte Carlo methods and algorithms that assess and manage the risk of cascading failure in composite reliability of deregulated power systems. To reduce the large computational burden involved by the simulations, we make use of importance sampling techniques utilizing the Weibull distribution when modeling power generator outages. A further reduction in computing time is achieved by applying importance sampling together with antithetic variates. It is shown that both methods noticeably reduce the number of samples that need to be investigated while maintaining the accuracy of the results at a desirable level. With the advent of microgrids, the assessment of their benefits in power systems is becoming a prominent research topic. In this research work, we investigate their potential positive impact on power system reliability while performing an optimal coordination among three energy sources within microgrids, namely renewable energy conversion, energy storage and micro-turbine generation.
This coordination is modeled when applying sequential Monte Carlo simulations, which seek the placement and sizing of microgrids in the composite reliability of a deregulated power system that minimize the risk of cascading failure leading to blackouts, subject to a fixed investment budget. The performance of the approach is evaluated on the Roy Billinton Test System (RBTS) and the IEEE Reliability Test System (RTS). Simulation results show that in both power systems, microgrids contribute to improved system reliability and a decreased risk of cascading failure. / Ph. D.
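The combination of inverse-CDF Weibull outage sampling with antithetic variates can be sketched as follows; the failure model and all parameters are illustrative stand-ins, not those of the RBTS/RTS studies:

```python
import numpy as np

def weibull_ttf(u, shape=2.0, scale=1000.0):
    """Inverse-CDF sample of a Weibull time to failure (illustrative parameters)."""
    return scale * (-np.log(1.0 - u)) ** (1.0 / shape)

horizon = 400.0  # hours; a generator "fails" if its time to failure is shorter

def fail_indicator(u):
    return (weibull_ttf(u) < horizon).astype(float)

rng = np.random.default_rng(5)
n = 100_000

plain = fail_indicator(rng.random(2 * n))                   # 2n independent draws
u = rng.random(n)
anti = 0.5 * (fail_indicator(u) + fail_indicator(1.0 - u))  # n antithetic pairs

# The indicator is monotone in u, so antithetic pairing reduces the
# variance of the failure-probability estimate at equal sampling cost.
var_plain = plain.var(ddof=1) / (2 * n)
var_anti = anti.var(ddof=1) / n
```

In a composite reliability study the same idea applies per component state draw, which is where the sample-count savings reported in the abstract come from.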
