21

Mixing Processes for Ground Improvement by Deep Mixing

Larsson, Stefan January 2003 (has links)
The thesis deals with mixing processes with application to ground improvement by deep mixing. Its main objectives are to contribute to knowledge of the basic mechanisms of mixing binding agents into soil and to improve understanding of the factors that influence the uniformity of stabilised soil. A great part of the work consists of a literature survey with particular emphasis on the process industries. This review forms the basis for a thorough description and discussion of the mixing process and of the factors affecting it in connection with deep mixing methods. The thesis presents a simple field test method for studying influential factors in the mixing process. A number of factors in the installation process of lime-cement columns were studied in two field tests using statistical multifactor experiment design. The effects of retrieval rate, number of mixing blades, rotation speed, air pressure in the storage tank, and diameter of the binder outlet on the stabilisation effect and on the coefficient of variation, determined by hand-operated penetrometer tests on excavated lime-cement columns, were studied. The literature review, the description of the mixing process, and the results from the field tests provide a more balanced picture of the mixing process and are expected to be useful in ground improvement projects and in the development of mixing equipment. The concept of sufficient mixture quality, i.e. the interaction between the mixing process and the mechanical system, is discussed in the last section. By means of geostatistical methods, the analysis considers the volume-variability relationship with reference to strength properties. According to the analysis, the design values for strength properties depend on the mechanical system, the scale of scrutiny, the spatial correlation structure, and the concept of safety; i.e., the concept of sufficient mixture quality is problem specific. 
Key words: Deep Mixing, Lime-cement columns, Mixing mechanisms, Mixture quality, Field test, ANOVA, Variance reduction.
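The multifactor field tests described above use a two-level factorial design whose effects can then be screened with ANOVA. As a rough illustration (not from the thesis; the factor names and response values below are invented), main effects in a two-level full factorial design can be estimated like this:

```python
import itertools

def main_effects(factors, response):
    """Estimate main effects in a two-level full factorial design.

    factors: list of factor names.
    response: dict mapping a tuple of -1/+1 levels (one per factor)
              to the measured response (e.g. column strength).
              Every run of the full factorial must be present.
    """
    runs = list(itertools.product([-1, 1], repeat=len(factors)))
    effects = {}
    for i, name in enumerate(factors):
        # Main effect = mean response at the high level minus
        # mean response at the low level of this factor.
        high = [response[r] for r in runs if r[i] == 1]
        low = [response[r] for r in runs if r[i] == -1]
        effects[name] = sum(high) / len(high) - sum(low) / len(low)
    return effects
```

In a real field test the responses would be penetrometer readings averaged per column, and the effect estimates would feed the ANOVA.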
22

Bias and Variance Reduction in Assessing Solution Quality for Stochastic Programs

Stockbridge, Rebecca January 2013 (has links)
Stochastic programming combines ideas from deterministic optimization with probability and statistics to produce more accurate models of optimization problems involving uncertainty. However, due to their size, stochastic programming problems can be extremely difficult to solve, so approximate solutions are used instead. There is therefore a need for methods that can accurately identify optimal or near-optimal solutions. In this dissertation, we focus on improving Monte Carlo sampling-based methods that assess the quality of potential solutions to stochastic programs by estimating optimality gaps. In particular, we aim to reduce the bias and/or variance of these estimators. We first propose a technique to reduce the bias of optimality gap estimators which is based on probability metrics and stability results in stochastic programming. This method, which requires the solution of a minimum-weight perfect matching problem, runs in time polynomial in the sample size. We establish asymptotic properties and present computational results. We then investigate the use of sampling schemes to reduce the variance of optimality gap estimators, focusing in particular on antithetic variates and Latin hypercube sampling. We also combine these methods with the bias reduction technique discussed above. Asymptotic properties of the resulting estimators are presented, and computational results on a range of test problems are discussed. Finally, we apply methods of assessing solution quality using antithetic variates and Latin hypercube sampling to a sequential sampling procedure for solving stochastic programs. In this setting, we use Latin hypercube sampling when generating the sequence of candidate solutions that is input to the procedure. We prove that these procedures produce a high-quality solution with high probability, asymptotically, and terminate in a finite number of iterations. Computational results are presented.
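Latin hypercube sampling, one of the variance reduction schemes investigated, splits each coordinate into n equal-probability bins and places exactly one point per bin, with bins shuffled independently per dimension. A generic sketch (not the dissertation's implementation) for standard-normal inputs:

```python
import random
from statistics import NormalDist

def latin_hypercube_normal(n, dim, rng):
    """Latin hypercube sample of n standard-normal points in `dim` dimensions.

    Each axis is stratified into n equal-probability bins; bin k receives
    the quantile of a uniform draw from (k/n, (k+1)/n), and the bin order
    is permuted independently for every dimension.
    """
    nd = NormalDist()
    cols = []
    for _ in range(dim):
        perm = list(range(n))
        rng.shuffle(perm)
        # One uniform draw per bin, mapped through the normal quantile.
        cols.append([nd.inv_cdf((perm[i] + rng.random()) / n) for i in range(n)])
    return [tuple(col[i] for col in cols) for i in range(n)]
```

For smooth integrands this stratification typically yields far lower variance than i.i.d. sampling at the same n.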
24

Calcul parallèle pour les problèmes linéaires, non-linéaires et linéaires inverses en finance / Parallel computing for linear, nonlinear and linear inverse problems in finance

Abbas-Turki, Lokman 21 September 2012 (has links)
Handling multidimensional parabolic linear, nonlinear and linear inverse problems is the main objective of this work. It is the word "multidimensional" that makes the use of simulation methods based on Monte Carlo virtually inevitable, and that also makes parallel architectures necessary: problems dealing with a large number of assets are major resource consumers, and only parallelization can reduce their execution times. Consequently, the first goal of our work is to propose random number generators appropriate for parallel and massively parallel architectures implemented on CPU/GPU clusters. We quantify the speedup and the energy consumption of the parallel execution of European option pricing. The second objective is to reformulate the nonlinear problem of pricing American options in order to obtain the same parallelization gains as those achieved for linear problems. In addition to its suitability for parallelization, the proposed method, based on Malliavin calculus, has other practical advantages. Continuing with parallel algorithms, the last part of this work is dedicated to the uniqueness of the solution of some linear inverse problems in finance. This theoretical study enables the use of simple methods based on Monte Carlo.
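The linear European pricing case used as the benchmark above is the standard Monte Carlo estimator under a Black-Scholes model. A minimal single-threaded sketch (the thesis targets CPU/GPU clusters; the parameters here are purely illustrative):

```python
import math
import random

def mc_european_call(s0, strike, r, sigma, maturity, n_paths, seed=0):
    """Price a European call by Monte Carlo under Black-Scholes.

    The terminal price is log-normal, so each path needs only a single
    standard-normal draw; no time-stepping is required.
    """
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * maturity
    vol = sigma * math.sqrt(maturity)
    disc = math.exp(-r * maturity)
    total = 0.0
    for _ in range(n_paths):
        st = s0 * math.exp(drift + vol * rng.gauss(0, 1))
        total += max(st - strike, 0.0)
    return disc * total / n_paths
```

On parallel hardware each path is independent, which is why per-stream random number generator quality is the critical ingredient.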
25

Oceňování derivátů pomocí Monte Carlo simulací / Derivative Pricing Using Monte Carlo Simulations

Burešová, Jana January 2009 (has links)
Pricing of more complex derivatives is very often based on Monte Carlo simulations. The estimates given by these simulations are derived from thousands of scenarios for the development of the underlying asset price. They can be made more precise by increasing the number of scenarios or by the simulation modifications described in this master thesis. The first part of the thesis gives a theoretical description of variance reduction techniques; the second part implements all of the techniques in pricing a barrier option and compares them. We conclude the thesis with two statements: the usefulness of each technique depends on the specifics of the simulation, and MC simulations are worth using even when a closed-form formula has been derived.
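Barrier option payoffs are path-dependent, so each scenario simulates a whole price path. A sketch of one classic modification compared in such studies, antithetic variates, for a discretely monitored up-and-out call (parameters illustrative; not the thesis's exact implementation):

```python
import math
import random

def barrier_payoffs(s0, strike, barrier, r, sigma, t, steps, n_paths,
                    antithetic, seed=0):
    """Discounted payoffs of a discretely monitored up-and-out call.

    With antithetic=True each Gaussian path is reused with flipped signs
    and the pair of payoffs is averaged, a standard variance reduction
    for Monte Carlo pricing.
    """
    rng = random.Random(seed)
    dt = t / steps
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    disc = math.exp(-r * t)
    payoffs = []
    for _ in range(n_paths):
        zs = [rng.gauss(0, 1) for _ in range(steps)]
        signs = (1, -1) if antithetic else (1,)
        pair = []
        for sgn in signs:
            s, knocked = s0, False
            for z in zs:
                s *= math.exp(drift + vol * sgn * z)
                if s >= barrier:      # knocked out: payoff is zero
                    knocked = True
                    break
            pair.append(0.0 if knocked else disc * max(s - strike, 0.0))
        payoffs.append(sum(pair) / len(pair))
    return payoffs
```

Comparing the sample variance of the two payoff lists at equal cost is exactly the kind of experiment the thesis runs across techniques.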
26

Sequential A/B Testing Using Pre-Experiment Data

Stenberg, Erik January 2019 (has links)
This thesis bridges the gap between two popular methods of achieving more efficient online experiments: sequential tests and variance reduction with pre-experiment data. Through simulations, it is shown that there is efficiency to be gained by using control variates sequentially along with the popular mixture Sequential Probability Ratio Test. More efficient tests lead to faster decisions and smaller required sample sizes. The proposed technique is also tested using empirical data on users of the music streaming service Spotify. An R package that includes the main tests applied in this thesis is also presented.
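Using pre-experiment data as a control variate (often called CUPED) subtracts the predictable part of the metric before testing, shrinking its variance without biasing the mean. A minimal sketch, assuming one pre-experiment covariate per user (not the thesis's R implementation):

```python
import statistics

def cuped_adjust(post, pre):
    """Variance reduction with pre-experiment data (CUPED-style).

    adjusted_i = post_i - theta * (pre_i - mean(pre)),
    with theta = cov(pre, post) / var(pre), the choice that minimizes
    the variance of the adjusted metric. The mean is unchanged.
    """
    mean_pre = statistics.fmean(pre)
    mean_post = statistics.fmean(post)
    cov = statistics.fmean((x - mean_pre) * (y - mean_post)
                           for x, y in zip(pre, post))
    var = statistics.fmean((x - mean_pre) ** 2 for x in pre)
    theta = cov / var
    return [y - theta * (x - mean_pre) for x, y in zip(pre, post)]
```

The adjusted values can then be fed into a sequential test in place of the raw metric, which is the combination the thesis studies.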
27

Variance Reduction in Wind Farm Layout Optimization

Gagakuma, Bertelsen 01 December 2019 (has links)
As demand for wind power continues to grow, it is becoming increasingly important to minimize the risk, characterized by the variance, associated with long-term power forecasts. This thesis investigated variance reduction in power forecasts through wind farm layout optimization. The problem was formulated as a multi-objective optimization problem of maximizing mean plant power and minimizing variance. The ε-constraint method was used to solve the bi-objective problem in a two-step framework in which two sequential optimizations are performed: the first maximizes mean wind farm power alone, and the second minimizes variance with a constraint on mean power equal to the value from the first optimization. The results show that the variance in power estimates can be reduced by up to 30% without sacrificing mean plant power for the different farm sizes and wind conditions studied. This reduction is attributed to the multi-modality of the design space, which admits distinct solutions of high mean plant power at different power variances. Thus, wind farms can be designed to maximize power capture with greater confidence.
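The two-step ε-constraint framework can be illustrated on a toy discrete design space (the actual work optimizes continuous turbine layouts; the design names and objective values below are invented):

```python
def two_step_epsilon_constraint(designs, mean_power, power_variance,
                                tolerance=0.0):
    """Two-step bi-objective optimization via the epsilon-constraint method.

    Step 1: maximize mean power over all candidate designs.
    Step 2: among designs whose mean power is within `tolerance` of that
    maximum (the epsilon constraint), return the one with the smallest
    power variance.
    """
    best_mean = max(mean_power(d) for d in designs)
    feasible = [d for d in designs if mean_power(d) >= best_mean - tolerance]
    return min(feasible, key=power_variance)
```

With tolerance zero this keeps full mean power; relaxing it trades a little mean power for lower variance, tracing out the Pareto front.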
28

Mathematical modelling and simulation for tumour growth and angiogenesis / Matematisk modellering och simulering för tumörtillväxt och angiogenes

Luna, René Edgardo January 2021 (has links)
Cancer is a complex illness that affects millions of people every year. Amongst the most frequently encountered variants of this illness are solid tumours. The growth of solid tumours depends on a large number of factors such as oxygen concentration, cell reproduction, cell movement, cell death, and the vascular environment. The aim of this thesis is to provide further insight into the interconnections between these factors by means of numerical simulations. We present a multiscale model for tumour growth by coupling a microscopic, agent-based model for normal and tumour cells with macroscopic mean-field models for oxygen and extracellular concentrations. We assume cell movement to be dominated by Brownian motion. The temporal and spatial evolution of the oxygen concentration is governed by a reaction-diffusion equation that mimics a balance law. To complement this macroscopic oxygen evolution with microscopic information, we propose a lattice-free approach that extends the vascular distribution of oxygen. We employ a Markov chain to estimate the sprouting probability of new vessels. The extension of the new vessels is modelled by enhancing the agent-based cell model with chemotactic sensitivity. Our results include finite-volume discretizations of the resulting partial differential equations and suitable approaches to approximating the stochastic differential equations governing the agent-based motion. We provide a simulation framework that evaluates the effect of the various parameters on, for instance, the spread of oxygen. We also show results of numerical experiments in which we allow new vessels to sprout, i.e. we explore angiogenesis. In the case of a static vasculature, we simulate the full multiscale model using a coupled stochastic/deterministic discretization approach that is able to reduce variance, at least for a chosen computable indicator, leading to improved efficiency and potentially increased reliability of models of this type.
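Agent-based cell motion dominated by Brownian motion is typically discretized with an Euler-Maruyama scheme. A generic sketch (not the thesis code; the diffusion coefficient and step sizes are illustrative):

```python
import math
import random

def brownian_cells(n_cells, steps, dt, diffusion, rng):
    """Euler-Maruyama simulation of 2-D Brownian cell motion, dX = sqrt(2D) dW.

    All cells start at the origin; returns their final positions. The mean
    squared displacement after time t = steps * dt should be 4 * D * t.
    """
    scale = math.sqrt(2.0 * diffusion * dt)
    positions = [(0.0, 0.0)] * n_cells
    for _ in range(steps):
        positions = [(x + scale * rng.gauss(0, 1),
                      y + scale * rng.gauss(0, 1))
                     for x, y in positions]
    return positions
```

In the full model each step would additionally include a drift term for chemotactic sensitivity toward oxygen gradients.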
29

Asymptotic approaches in financial risk management / Approches asymptotiques en gestion des risques financiers

Genin, Adrien 21 September 2018 (has links)
This thesis focuses on three problems from the area of financial risk management, using various asymptotic approaches. The first part presents an importance sampling algorithm for Monte Carlo pricing of exotic options in exponential Lévy models; the optimal importance sampling measure is computed using techniques from the theory of large deviations. The second part uses the Laplace method to study the tail behavior of the sum of n dependent positive random variables following a log-normal mixture distribution, with applications to portfolio risk management. Finally, the last part employs the notion of multivariate regular variation to analyze the tail behavior of a random vector with heavy-tailed components whose dependence structure is modeled by a Gaussian copula. As an application, we consider the tail behavior of a portfolio of options in the Black-Scholes model.
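Importance sampling of the kind described shifts the sampling measure toward the rare event and reweights each sample by the likelihood ratio. A textbook sketch for a Gaussian tail probability (not the thesis's Lévy-model algorithm; the mean shift plays the role that the large-deviations analysis optimizes):

```python
import math
import random

def shifted_normal_estimate(threshold, shift, n, seed=0):
    """Importance sampling estimate of P(Z > threshold) for Z ~ N(0, 1).

    Samples from N(shift, 1) and reweights by the likelihood ratio
    phi(z) / phi(z - shift) = exp(-shift * z + shift**2 / 2),
    so hits on the rare event become frequent while the estimator
    stays unbiased.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = shift + rng.gauss(0, 1)
        if z > threshold:
            total += math.exp(-shift * z + 0.5 * shift ** 2)
    return total / n
```

With shift equal to the threshold, events that plain Monte Carlo would almost never see are sampled about half the time, cutting the relative error by orders of magnitude.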
30

Asynchronous optimization for machine learning / Optimisation asynchrone pour l'apprentissage statistique

Leblond, Rémi 15 November 2018 (has links)
The impressive breakthroughs of the last two decades in the field of machine learning can be attributed in large part to the explosion of computing power and available data. These two limiting factors have been replaced by a new bottleneck: algorithms. The focus of this thesis is thus on introducing novel methods that can take advantage of high data quantity and computing power. We present two independent contributions. First, we develop and analyze novel fast optimization algorithms which take advantage of advances in parallel computing architecture and can handle vast amounts of data. We introduce a new framework of analysis for asynchronous parallel incremental algorithms, which enables correct and simple proofs. We then demonstrate its usefulness by performing the convergence analysis for several methods, including two novel algorithms. Asaga is a sparse asynchronous parallel variant of the variance-reduced algorithm Saga which enjoys fast linear convergence rates on smooth and strongly convex objectives. We prove that it can be linearly faster than its sequential counterpart, even without sparsity assumptions. ProxAsaga is an extension of Asaga to the more general setting where the regularizer can be non-smooth; we prove that it too achieves a linear speedup. We provide extensive experiments comparing our new algorithms to the current state of the art. Second, we introduce new methods for complex structured prediction tasks. We focus on recurrent neural networks (RNNs), whose traditional training algorithm, based on maximum likelihood estimation (MLE), suffers from several issues. The associated surrogate training loss ignores the information contained in structured losses and introduces discrepancies between train and test times that may hurt performance. To alleviate these problems, we propose SeaRNN, a novel training algorithm for RNNs inspired by the "learning to search" approach to structured prediction. SeaRNN leverages test-alike search space exploration to introduce global-local losses that are closer to the test error than the MLE objective. We demonstrate improved performance over MLE on three challenging tasks, and provide several subsampling strategies to enable SeaRNN to scale to large-scale tasks, such as machine translation. Finally, after contrasting the behavior of SeaRNN models to MLE models, we conduct an in-depth comparison of our new approach to the related work.
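For reference, the sequential Saga update that Asaga parallelizes keeps a table of the last gradient seen for each data point and forms a variance-reduced direction from it. A minimal dense, sequential sketch (a toy version; the thesis's contribution is the sparse asynchronous variant and its analysis):

```python
import random

def saga(grad_i, n, dim, step, iters, seed=0):
    """Sequential Saga sketch for minimizing (1/n) * sum_i f_i(x).

    grad_i(i, x) returns the gradient of f_i at x as a list of floats.
    Each update combines the fresh gradient of one random point with the
    stored table average, giving an unbiased direction whose variance
    vanishes at the optimum (hence linear convergence with constant step).
    """
    rng = random.Random(seed)
    x = [0.0] * dim
    table = [[0.0] * dim for _ in range(n)]   # last gradient per point
    avg = [0.0] * dim                          # running table average
    for _ in range(iters):
        i = rng.randrange(n)
        g = grad_i(i, x)
        old = table[i]
        # Variance-reduced direction: g_i - old_i + average of table.
        direction = [gj - oj + aj for gj, oj, aj in zip(g, old, avg)]
        x = [xj - step * dj for xj, dj in zip(x, direction)]
        # Maintain the table average incrementally, then store g.
        avg = [aj + (gj - oj) / n for aj, gj, oj in zip(avg, g, old)]
        table[i] = g
    return x
```

The asynchronous versions let many workers run this loop concurrently on shared state; the thesis's framework is what makes their convergence proofs tractable.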
