31 |
Non-convex Stochastic Optimization With Biased Gradient Estimators
Sokolov, Igor 03 1900 (has links)
Non-convex optimization problems appear in various applications of machine learning. Because of their practical importance, these problems have gained a lot of attention in recent years, leading to the rapid development of new efficient stochastic gradient-type methods. In the quest to improve the generalization performance of modern deep learning models, practitioners are resorting to larger and larger datasets in the training process, naturally distributed across a number of edge devices. However, as the amount of training data grows, the computational cost of gradient-type methods increases significantly. In addition, distributed methods almost invariably suffer from the so-called communication bottleneck: the cost of communicating the information necessary for the workers to jointly solve the problem is often very high, and it can be orders of magnitude higher than the cost of computation. This thesis provides a study of first-order stochastic methods addressing these issues. In particular, we structure this study around certain classes of methods, which allows us to identify current theoretical gaps and fill them with new efficient algorithms.
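To illustrate the kind of biased gradient estimator studied in this line of work, the sketch below shows a toy distributed SGD loop in which each worker communicates only a Top-K sparsified gradient, a standard biased compressor. The problem, data, and step size are purely illustrative and are not taken from the thesis.

```python
# A minimal sketch (illustrative only): distributed SGD where each worker sends a
# Top-K sparsified gradient, a classic *biased* gradient estimator that reduces
# communication cost.
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v and zero the rest (a biased compressor)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def worker_gradient(x, A, b):
    """Gradient of the local loss 0.5/m * ||A x - b||^2 (a simple stand-in loss)."""
    return A.T @ (A @ x - b) / len(b)

rng = np.random.default_rng(0)
d, n_workers, k, lr = 50, 4, 5, 0.1
A = [rng.normal(size=(100, d)) for _ in range(n_workers)]
b = [rng.normal(size=100) for _ in range(n_workers)]

x = np.zeros(d)
for step in range(200):
    # each worker compresses its gradient before communicating it to the server
    compressed = [top_k(worker_gradient(x, A[i], b[i]), k) for i in range(n_workers)]
    x -= lr * np.mean(compressed, axis=0)   # server aggregates the biased estimates
```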
|
32 |
Benchmark estimation for Markov Chain Monte Carlo samplers
Guha, Subharup 18 June 2004 (has links)
No description available.
|
33 |
Risk Management of Cascading Failure in Composite Reliability of a Deregulated Power System with Microgrids
Chen, Quan 27 December 2013 (has links)
Due to power system deregulation, transmission expansion that has not kept up with load growth, and the higher frequency of natural hazards resulting from climate change, major blackouts are becoming more frequent and are spreading over larger regions, entailing higher losses and costs to the economy and society of many countries around the world. Large-scale blackouts typically result from a cascading failure originating from a local event, as typified by the 2003 U.S.-Canada blackout. Their mitigation in power system planning calls for the development of methods and algorithms that assess the risk of cascading failures due to relay over-tripping, short circuits induced by overgrown vegetation, voltage sags, line and transformer overloading, transient instabilities, and voltage collapse, to name a few. Controlling the economic losses caused by blackouts is therefore gaining considerable attention among power researchers.
In this research work, we develop new Monte Carlo methods and algorithms that assess and manage the risk of cascading failure in the composite reliability of deregulated power systems. To reduce the large computational burden involved in the simulations, we make use of importance sampling techniques utilizing the Weibull distribution when modeling power generator outages. A further reduction in computing time is achieved by applying importance sampling together with antithetic variates. It is shown that both methods noticeably reduce the number of samples that need to be investigated while maintaining the accuracy of the results at a desirable level.
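As an illustration of the general idea (not the models or code of this dissertation), the following sketch estimates a rare early-failure probability for a single generator whose time-to-failure is Weibull-distributed, comparing crude Monte Carlo with importance sampling from a Weibull proposal with a shrunken scale; the combination with antithetic variates used in the thesis is not shown.

```python
# A toy sketch (assumed setup): importance sampling for a rare generator-outage
# probability P(T <= t_m), with the time-to-failure T modeled as Weibull. The proposal
# shrinks the Weibull scale so that early failures are sampled far more often, and each
# sample is re-weighted by the density ratio.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
shape, scale, t_m, n = 2.0, 8760.0, 100.0, 100_000      # hours; illustrative values
target = weibull_min(c=shape, scale=scale)
proposal = weibull_min(c=shape, scale=scale / 20.0)     # biased toward early failures

# Crude Monte Carlo: very few samples land in the rare region T <= t_m
t_crude = target.rvs(size=n, random_state=rng)
p_crude = np.mean(t_crude <= t_m)

# Importance sampling: sample from the proposal, re-weight by target/proposal density
t_is = proposal.rvs(size=n, random_state=rng)
w = target.pdf(t_is) / proposal.pdf(t_is)
p_is = np.mean(w * (t_is <= t_m))

print(f"exact {target.cdf(t_m):.3e}  crude {p_crude:.3e}  IS {p_is:.3e}")
```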
With the advent of microgrids, the assessment of their benefits to power systems is becoming a prominent research topic. In this research work, we investigate their potential positive impact on power system reliability while performing an optimal coordination among three energy sources within microgrids, namely renewable energy conversion, energy storage and micro-turbine generation. This coordination is modeled within sequential Monte Carlo simulations, which seek the placement and sizing of microgrids in the composite reliability of a deregulated power system that minimize the risk of cascading failure leading to blackouts, subject to a fixed investment budget. The performance of the approach is evaluated on the Roy Billinton Test System (RBTS) and the IEEE Reliability Test System (RTS). Simulation results show that in both power systems, microgrids contribute to improving system reliability and decreasing the risk of cascading failure. / Ph. D.
|
34 |
Principled Variance Reduction Techniques for Real Time Patient-Specific Monte Carlo Applications within Brachytherapy and Cone-Beam Computed Tomography
Sampson, Andrew 30 April 2013 (has links)
This dissertation describes the application of two principled variance reduction strategies to increase efficiency for two applications within medical physics. The first, called correlated Monte Carlo (CMC), applies to patient-specific, permanent-seed brachytherapy (PSB) dose calculations. The second, called adjoint-biased forward Monte Carlo (ABFMC), is used to compute cone-beam computed tomography (CBCT) scatter projections. CMC was applied to two PSB cases: a clinical post-implant prostate, and a breast with a simulated lumpectomy cavity. CMC computes the dose difference between highly correlated homogeneous and heterogeneous geometries. Particle transport in the heterogeneous geometry assumed a purely homogeneous environment, with altered particle weights accounting for the bias. Average gains of 37 to 60 relative to un-correlated Monte Carlo (UMC) calculations are reported for the prostate and breast CTVs, respectively. To further increase the efficiency up to 1500-fold above UMC, an approximation called interpolated correlated Monte Carlo (ICMC) was applied. ICMC runs CMC on a low-resolution (LR) spatial grid and interpolates the result onto a high-resolution (HR) voxel grid. The interpolated HR difference map is then summed with a pre-computed HR homogeneous dose map. ICMC thus computes an approximate, but accurate, HR heterogeneous dose distribution from LR MC calculations, achieving an average 2% standard deviation within the prostate and breast CTVs in 1.1 s and 0.39 s, respectively. Accuracy for 80% of the voxels using ICMC is within 3% for anatomically realistic geometries. Second, for CBCT scatter projections, ABFMC was implemented via weight windowing using a solution to the adjoint Boltzmann transport equation computed either with the discrete ordinates method (DOM) or with a MC-implemented forward-adjoint importance generator (FAIG). ABFMC, implemented via DOM or FAIG, was tested on a single elliptical water cylinder using a primary point source (PPS) and a phase-space source (PSS). The best gains were found by using the PSS, yielding average efficiency gains of 250 relative to non-weight-windowed MC utilizing the PPS. Furthermore, computing 360 projections on a 40 by 30 pixel grid requires only 48 min on a single CPU core, allowing clinical use via parallel processing techniques.
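The correlated-sampling idea behind CMC can be illustrated with a generic toy example (not the dissertation's transport code): estimating the difference between two similar integrals with common random numbers, so that most of the statistical noise cancels.

```python
# A generic sketch of correlated sampling: estimate the *difference* between two similar
# quantities using the same random numbers for both. The two "dose" kernels below are toy
# attenuation models with slightly different coefficients, not the dissertation's physics.
import numpy as np

def dose_hom(r):                      # toy homogeneous-medium kernel
    return np.exp(-0.20 * r) / (r + 0.1) ** 2

def dose_het(r):                      # toy heterogeneous-medium kernel (perturbed attenuation)
    return np.exp(-0.23 * r) / (r + 0.1) ** 2

rng = np.random.default_rng(2)
n = 50_000
r1, r2 = rng.uniform(0.0, 5.0, n), rng.uniform(0.0, 5.0, n)

# Uncorrelated estimate of the difference: two independent sample sets
d_uncorr = dose_het(r1) - dose_hom(r2)
# Correlated estimate: common random numbers, so the fluctuations largely cancel
d_corr = dose_het(r1) - dose_hom(r1)

print("uncorrelated std error:", d_uncorr.std(ddof=1) / np.sqrt(n))
print("correlated   std error:", d_corr.std(ddof=1) / np.sqrt(n))
```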
|
35 |
Numerical methods for homogenization : applications to random media / Techniques numériques d'homogénéisation : application aux milieux aléatoires
Costaouec, Ronan 23 November 2011 (has links)
In this thesis we investigate numerical methods for the homogenization of materials whose structures, at fine scales, are characterized by random heterogeneities. Under appropriate hypotheses, the effective properties of such materials are given by closed formulas. However, in practice the computation of these properties is a difficult task because it involves solving partial differential equations with stochastic coefficients that are additionally posed on the whole space. In this work, we address this difficulty in two different ways. Standard discretization techniques lead to random approximate effective properties. In Part I, we aim at reducing their variance, using a well-known variance reduction technique that has already been used successfully in other domains. The works of Part II focus on the case when the material can be seen as a small random perturbation of a periodic material. We then show both numerically and theoretically that, in this case, computing the effective properties is much less costly than in the general case.
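The abstract does not name the variance reduction technique used in Part I; as one classical example, the sketch below applies antithetic variates to a 1D toy problem in which the apparent effective coefficient of a finite box is the harmonic mean of i.i.d. cell coefficients.

```python
# A toy illustration (not the thesis's setting): in 1D the apparent effective coefficient
# of a box of N i.i.d. cells is the harmonic mean of the cell coefficients, and it is a
# random quantity for finite N. Pairing each realization with its antithetic copy
# (U -> 1 - U) reduces the variance of the empirical mean.
import numpy as np

def apparent_coeff(u):
    """Apparent homogenized coefficient of a 1D box whose cell coefficients are a = 1 + 9*u."""
    a = 1.0 + 9.0 * u
    return 1.0 / np.mean(1.0 / a)             # in 1D the effective coefficient is a harmonic mean

rng = np.random.default_rng(3)
N, M = 64, 2000                                # cells per box, number of realizations

u = rng.random((M, N))
plain = np.array([apparent_coeff(row) for row in u])
anti = np.array([(apparent_coeff(row) + apparent_coeff(1.0 - row)) / 2.0 for row in u])

# Each antithetic pair costs two solves, so the fair comparison is against plain.var()/2.
print("plain      : mean %.4f  variance %.3e" % (plain.mean(), plain.var(ddof=1)))
print("antithetic : mean %.4f  variance %.3e" % (anti.mean(), anti.var(ddof=1)))
```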
|
36 |
Evaluating of path-dependent securities with low discrepancy methods
Krykova, Inna 13 January 2004 (has links)
The objective of this thesis is the implementation of Monte Carlo and quasi-Monte Carlo methods for the valuation of financial derivatives. Advantages and disadvantages of each method are stated based both on the literature and on independent computational experiments by the author. Various methods to generate pseudo-random and quasi-random sequences are implemented in a computationally uniform way to enable objective comparisons. Code is developed in VBA and C++, with the C++ code converted to a COM object to make it callable from Microsoft Excel and Matlab. From the simulated random sequences, Brownian motion paths are built using various constructions and variance-reduction techniques, including the Brownian bridge and Latin hypercube sampling. The power and efficiency of the methods are compared on four financial securities pricing problems: European options, Asian options, barrier options and mortgage-backed securities. A detailed step-by-step algorithm is given for each method (construction of pseudo- and quasi-random sequences, Brownian motion paths for some stochastic processes, variance- and dimension-reduction techniques, evaluation of some financial securities using different variance-reduction techniques, etc.).
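As a small illustration of the pseudo-random versus quasi-random comparison (not the thesis's VBA/C++ code), the sketch below prices an arithmetic-average Asian call under geometric Brownian motion with both a pseudo-random generator and a scrambled Sobol sequence; the Brownian bridge and Latin hypercube constructions mentioned above are not shown, and all parameters are illustrative.

```python
# Pseudo-random vs. quasi-random (Sobol) Monte Carlo for an arithmetic Asian call.
import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T, steps, n = 100.0, 100.0, 0.05, 0.2, 1.0, 16, 2 ** 12
dt = T / steps
drift = (r - 0.5 * sigma ** 2) * dt

def asian_call(normals):
    """Discounted arithmetic-average Asian call payoffs from an (n, steps) array of N(0,1) draws."""
    log_paths = np.cumsum(drift + sigma * np.sqrt(dt) * normals, axis=1)
    avg = S0 * np.exp(log_paths).mean(axis=1)
    return np.exp(-r * T) * np.maximum(avg - K, 0.0)

# Pseudo-random (standard Monte Carlo)
rng = np.random.default_rng(4)
mc_price = asian_call(rng.standard_normal((n, steps))).mean()

# Quasi-random: scrambled Sobol points mapped to normals via the inverse CDF
sobol = qmc.Sobol(d=steps, scramble=True, seed=4)
qmc_price = asian_call(norm.ppf(sobol.random(n))).mean()

print(f"MC price {mc_price:.4f}   QMC (Sobol) price {qmc_price:.4f}")
```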
|
37 |
Amélioration des mesures anthroporadiamétriques personnalisées assistées par calcul Monte Carlo : optimisation des temps de calculs et méthodologie de mesure pour l'établissement de la répartition d'activité / Optimizing the in vivo monitoring of female workers using in vivo measurements and Monte Carlo calculations : method for the management of complex contaminations
Farah, Jad 06 October 2011 (has links)
To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on the use of Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require, on the one hand, the development of representative female phantoms of different sizes and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of internal organs and breasts according to the body height and to relevant plastic surgery recommendations. This library was next used to perform a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced counting efficiency variations with energy were put into equations, and recommendations were given to correct the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method that combines Monte Carlo simulations with in vivo measurements was developed. This method consists of performing several spectrometry measurements with different detector positions. Next, the contribution of each contaminated organ to the count is assessed from Monte Carlo calculations. The in vivo measurements performed at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations for a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is obtained in the case of an internal contamination.
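One plausible way to carry out the organ-separation step described above, sketched here as an assumption rather than the author's published procedure, is to let Monte Carlo provide a per-organ counting efficiency for each detector position and to recover the organ activities from the measured count rates by non-negative least squares; all numbers below are invented.

```python
# Hedged sketch of an activity-unfolding step (assumed workflow, illustrative numbers).
import numpy as np
from scipy.optimize import nnls

# Efficiency matrix E[i, j]: counts per second per Bq of organ j at detector position i
# (in practice each entry would come from a Monte Carlo transport calculation).
E = np.array([[3.0e-3, 0.4e-3, 0.1e-3],    # detector over the lungs
              [0.5e-3, 2.5e-3, 0.3e-3],    # detector over the liver
              [0.1e-3, 0.6e-3, 2.0e-3],    # detector over the thyroid region
              [1.0e-3, 1.0e-3, 0.8e-3]])   # whole-body counting geometry

true_activity = np.array([800.0, 300.0, 50.0])                  # Bq, hypothetical
rng = np.random.default_rng(5)
measured = E @ true_activity + rng.normal(0.0, 0.02, size=4)    # noisy measured count rates

activity, residual = nnls(E, measured)      # non-negative activity estimate per organ
print("estimated activities (Bq):", np.round(activity, 1))
```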
|
38 |
Stochastic Volatility Models in Option Pricing
Kalavrezos, Michail, Wennermo, Michael January 2008 (has links)
In this thesis we have created a computer program in the Java language which calculates European call and put options with four different models based on the article The Pricing of Options on Assets with Stochastic Volatilities by John Hull and Alan White. Two of the models use stochastic volatility as an input. The paper describes the foundations of stochastic volatility option pricing and compares the output of the models. Which model better estimates the real option price depends on further research into the model parameters involved.
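As a rough illustration of one zero-correlation Hull-White (1987) case (not the authors' Java program), the sketch below averages Black-Scholes prices over the realised mean variance of simulated variance paths; parameters are illustrative.

```python
# Zero-correlation Hull-White sketch: when the variance follows its own diffusion
# independent of the asset, the European call price is the Black-Scholes price averaged
# over the realised mean variance.
import numpy as np
from scipy.stats import norm

def black_scholes_call(S0, K, r, T, sigma):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S0, K, r, T = 100.0, 100.0, 0.05, 0.5
v0, xi = 0.04, 0.6                 # initial variance and volatility-of-variance
steps, n_paths = 50, 20_000
dt = T / steps

rng = np.random.default_rng(6)
# Variance follows a driftless lognormal diffusion dV = xi * V dW (one simple case)
z = rng.standard_normal((n_paths, steps))
log_v = np.log(v0) + np.cumsum(-0.5 * xi ** 2 * dt + xi * np.sqrt(dt) * z, axis=1)
mean_var = np.exp(log_v).mean(axis=1)        # realised average variance per path

price = black_scholes_call(S0, K, r, T, np.sqrt(mean_var)).mean()
print(f"Hull-White MC call: {price:.4f}   Black-Scholes at v0: {black_scholes_call(S0, K, r, T, np.sqrt(v0)):.4f}")
```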
|
40 |
Using MAVRIC sequence to determine dose rate to accessible areas of the IRIS nuclear power plant
Hartmangruber, David Patrick 25 October 2010 (has links)
The objective of this thesis is to determine and analyze the dose rate to personnel throughout the proposed IRIS nuclear power plant. To accomplish this objective, complex models of the IRIS plant have been devised, advanced transport theory methods employed, and computationally intense simulations performed.
IRIS is an advanced integral, light water reactor with an expected power output of 335 MWe (1000 MWth). Due to its integral design, the IRIS pressure vessel has a large downcomer region. The large downcomer and the neutron reflector provide a great deal of additional shielding. This increase in shielding ensures that the IRIS design easily meets the regulatory dose limits for radiation workers. However, the IRIS project set enhanced objectives of further reducing the dose rate to significantly lower levels, comparable to or below the limit allowed for the general public.
The IRIS nuclear power plant design is very compact and has a rather complex geometric structure. Programs that use conventional methods would take too much time or would be unable to provide an answer for such a challenging deep-penetration problem. Therefore, the modeling of the power plant was done using a hybrid methodology for automated variance reduction implemented in the MAVRIC sequence of the SCALE6 program package. The methodology is based on the CADIS and FW-CADIS methods; the CADIS method was developed by J.C. Wagner and A. Haghighat, and the FW-CADIS method by J.C. Wagner and D. Peplow. Using these methodologies in the MAVRIC code sequence, this thesis presents the dose rate throughout most of the inhabitable regions of the IRIS nuclear power plant and identifies the regions that are below the dose rate reduction objective set by the IRIS shielding team.
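The core CADIS bookkeeping can be sketched in a few lines (a simplified illustration, not the MAVRIC/SCALE implementation): given an adjoint, or importance, flux on a coarse mesh and the true source, one forms a biased source and space-dependent weight-window targets so that source particles are born at their local window center. The adjoint profile below is a made-up 1D stand-in for a discrete-ordinates solution.

```python
# Simplified CADIS sketch on a 1D mesh (illustrative values only).
import numpy as np

x = np.linspace(0.0, 100.0, 51)                    # cm, 1D mesh through the shield
adjoint_flux = np.exp(0.12 * (x - x[-1]))          # importance grows toward the detector
source = np.where(x < 10.0, 1.0, 0.0)              # true source: uniform in the first 10 cm
source /= source.sum()

# CADIS quantities
R = np.sum(source * adjoint_flux)                  # estimate of the detector response
biased_source = source * adjoint_flux / R          # sample source particles from this instead
ww_center = R / adjoint_flux                       # weight-window target in each mesh cell

# Source particles drawn from biased_source are born with weight R / adjoint_flux(x0),
# i.e. exactly at the local window center, so no splitting or rouletting is needed at birth.
print("response estimate R =", R)
print("weight-window targets span", ww_center.min(), "to", ww_center.max())
```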
|