31

Principled Variance Reduction Techniques for Real Time Patient-Specific Monte Carlo Applications within Brachytherapy and Cone-Beam Computed Tomography

Sampson, Andrew 30 April 2013 (has links)
This dissertation describes the application of two principled variance reduction strategies to increase efficiency in two applications within medical physics. The first, called correlated Monte Carlo (CMC), applies to patient-specific, permanent-seed brachytherapy (PSB) dose calculations. The second, called adjoint-biased forward Monte Carlo (ABFMC), is used to compute cone-beam computed tomography (CBCT) scatter projections. CMC was applied to two PSB cases: a clinical post-implant prostate, and a breast with a simulated lumpectomy cavity. CMC computes the dose difference between highly correlated homogeneous and heterogeneous geometries: particle transport in the heterogeneous geometry assumes a purely homogeneous environment, and altered particle weights account for the bias. Average efficiency gains of 37 and 60 relative to uncorrelated Monte Carlo (UMC) calculations are reported for the prostate and breast CTVs, respectively. To further increase the efficiency up to 1500-fold above UMC, an approximation called interpolated correlated Monte Carlo (ICMC) was applied. ICMC performs CMC on a low-resolution (LR) spatial grid and then interpolates the result onto a high-resolution (HR) voxel grid. The interpolated HR dose difference is then summed with a pre-computed HR homogeneous dose map. ICMC thus computes an approximate, but accurate, HR heterogeneous dose distribution from LR MC calculations, achieving an average 2% standard deviation within the prostate and breast CTVs in 1.1 s and 0.39 s, respectively. Accuracy for 80% of the voxels using ICMC is within 3% for anatomically realistic geometries. Second, for CBCT scatter projections, ABFMC was implemented via weight windowing using a solution to the adjoint Boltzmann transport equation computed either with the discrete ordinates method (DOM) or with an MC-implemented forward-adjoint importance generator (FAIG). ABFMC, implemented via DOM or FAIG, was tested on a single elliptical water cylinder using a primary point source (PPS) and a phase-space source (PSS). The best results were found using the PSS, yielding average efficiency gains of 250 relative to non-weight-windowed MC utilizing the PPS. Furthermore, computing 360 projections on a 40-by-30 pixel grid requires only 48 min on a single CPU core, allowing clinical use via parallel processing techniques.
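
As an illustration of the correlated-sampling idea behind CMC, here is a minimal sketch (not the dissertation's code) that scores a toy dose kernel in a homogeneous medium and in a slightly perturbed "heterogeneous" one using the same random stream; the kernel, attenuation coefficients, and sampling range are assumptions invented for the example. The point is only that the correlated difference estimator has a far smaller standard error than an uncorrelated one.

```python
import numpy as np

rng = np.random.default_rng(42)

def dose_hom(r):
    # toy point-kernel dose in a homogeneous medium: inverse square
    # with exponential attenuation (mu = 0.10 /cm, assumed)
    return np.exp(-0.10 * r) / r**2

def dose_het(r):
    # same kernel with a perturbed attenuation coefficient standing in
    # for tissue heterogeneity along the photon path (mu = 0.12 /cm, assumed)
    return np.exp(-0.12 * r) / r**2

n = 100_000
r = rng.uniform(0.5, 5.0, n)              # shared sampled path lengths (cm)

# correlated MC: one random stream scores both geometries, so the
# difference estimator has very low variance
d_cmc = dose_het(r) - dose_hom(r)

# uncorrelated MC: independent streams for the two geometries
r2 = rng.uniform(0.5, 5.0, n)
d_umc = dose_het(r) - dose_hom(r2)

for name, d in (("correlated  ", d_cmc), ("uncorrelated", d_umc)):
    print(f"{name} mean {d.mean():+.3e}  std err {d.std(ddof=1)/np.sqrt(n):.1e}")
```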
32

Numerical methods for homogenization : applications to random media / Techniques numériques d'homogénéisation : application aux milieux aléatoires

Costaouec, Ronan 23 November 2011 (has links)
In this thesis we investigate numerical methods for the homogenization of materials whose structures, at fine scales, are characterized by random heterogeneities. Under appropriate hypotheses, the effective properties of such materials are given by closed formulas. However, in practice the computation of these properties is a difficult task because it involves solving partial differential equations with stochastic coefficients that are additionally posed on the whole space. In this work, we address this difficulty in two different ways. The standard discretization techniques lead to random approximate effective properties. In Part I, we aim at reducing their variance, using a well-known variance reduction technique that has already been used successfully in other domains. The works of Part II focus on the case when the material can be seen as a small random perturbation of a periodic material. We then show both numerically and theoretically that, in this case, computing the effective properties is much less costly than in the general case.
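
The variance reduction idea of Part I can be pictured with antithetic variates on a 1D toy problem, where the homogenized coefficient of a layered random material is the harmonic mean of the cell coefficients. Treating antithetic variates as the "well-known technique" is an assumption for this sketch, as are the coefficient law and problem sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1.0, 3.0         # bounds of the random cell coefficients (assumed)
N, M = 64, 2000                # cells per realization, number of realizations

def a_star(a):
    # 1D homogenized coefficient of a layered material: harmonic mean of cells
    return 1.0 / np.mean(1.0 / a, axis=-1)

u = rng.uniform(size=(M, N))
plain = a_star(alpha + (beta - alpha) * u)

# antithetic variates: pair each draw u with its reflection 1 - u and average;
# a_star is monotone in each cell, so the pair members are negatively correlated
anti = 0.5 * (a_star(alpha + (beta - alpha) * u)
            + a_star(alpha + (beta - alpha) * (1.0 - u)))

print(f"plain      estimate {plain.mean():.4f}  variance {plain.var(ddof=1):.2e}")
print(f"antithetic estimate {anti.mean():.4f}  variance {anti.var(ddof=1):.2e}")
```

Note that each antithetic realization costs two evaluations, so the variance should drop by more than a factor of two for the pairing to pay off, which it does here because of the monotonicity.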
33

Amélioration des mesures anthroporadiamétriques personnalisées assistées par calcul Monte Carlo : optimisation des temps de calculs et méthodologie de mesure pour l’établissement de la répartition d’activité / Optimizing the in vivo monitoring of female workers using in vivo measurements and Monte Carlo calculations : method for the management of complex contaminations

Farah, Jad 06 October 2011 (has links)
To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on Monte Carlo simulations combined with anthropomorphic 3D phantoms were used. Such computational calibrations require, on the one hand, the development of representative female phantoms of different sizes and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of internal organs and breasts according to body height and to relevant plastic surgery recommendations. This library was next used to perform a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced variation of counting efficiency with energy was put into equation form, and recommendations were given to correct the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method that combines Monte Carlo simulations with in vivo measurements was developed. This method consists of realizing several spectrometry measurements with different detector positions; the contribution of each contaminated organ to the count is then assessed from Monte Carlo calculations. The in vivo measurements realized at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations for a more detailed analysis of spectrometry measurements, giving a more precise estimate of the activity distribution in the case of an internal contamination.
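The organ-separation step can be pictured as a small linear unfolding problem: Monte Carlo provides a per-organ counting-efficiency matrix, and the counts measured at several detector positions are inverted for the organ activities. The matrix, activities, and counting time below are entirely hypothetical; this is a sketch of the idea, not the thesis's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical Monte Carlo counting efficiencies (counts per decay); rows are
# detector positions, columns are candidate contaminated organs (e.g. lungs, liver)
E = np.array([[2.1e-3, 0.4e-3],
              [0.9e-3, 1.6e-3],
              [1.4e-3, 1.1e-3]])

true_activity = np.array([800.0, 300.0])   # Bq, used only to fabricate test data
t_count = 600.0                            # counting time in seconds (assumed)
counts = rng.poisson(E @ true_activity * t_count) / t_count

# least-squares unfolding of per-organ activities from the measured count rates
activity, residuals, rank, _ = np.linalg.lstsq(E, counts, rcond=None)
print("estimated activities (Bq):", np.round(activity, 1))
```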
34

Stochastic Volatility Models in Option Pricing

Kalavrezos, Michail, Wennermo, Michael January 2008 (has links)
In this thesis we have created a computer program in the Java language which prices European call and put options with four different models based on the article The Pricing of Options on Assets with Stochastic Volatilities by John Hull and Alan White. Two of the models use stochastic volatility as an input. The thesis describes the foundations of stochastic volatility option pricing and compares the output of the models. Which model better estimates the real option price depends on further research into the model parameters involved.
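
The thesis's program was written in Java; a minimal Python sketch in the spirit of the Hull-White (1987) setting, with the variance following its own geometric Brownian motion uncorrelated with the asset, might look as follows. All parameter values are assumptions for the sketch, and this is one possible discretization rather than the program described above.

```python
import numpy as np

rng = np.random.default_rng(7)
S0, K, r, T = 100.0, 100.0, 0.05, 1.0     # spot, strike, rate, maturity (assumed)
V0, mu_v, xi = 0.04, 0.0, 0.3             # initial variance, its drift and vol
n_paths, n_steps = 50_000, 100
dt = T / n_steps

S = np.full(n_paths, S0)
V = np.full(n_paths, V0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)     # asset shock
    z2 = rng.standard_normal(n_paths)     # variance shock, independent (rho = 0)
    S *= np.exp((r - 0.5 * V) * dt + np.sqrt(V * dt) * z1)
    V *= np.exp((mu_v - 0.5 * xi**2) * dt + xi * np.sqrt(dt) * z2)

call = np.exp(-r * T) * np.maximum(S - K, 0.0).mean()
put  = np.exp(-r * T) * np.maximum(K - S, 0.0).mean()
print(f"European call {call:.4f}   put {put:.4f}")
```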
36

Using MAVRIC sequence to determine dose rate to accessible areas of the IRIS nuclear power plant

Hartmangruber, David Patrick 25 October 2010 (has links)
The objective of this thesis is to determine and analyze the dose rate to personnel throughout the proposed IRIS nuclear power plant. To accomplish this objective, complex models of the IRIS plant have been devised, advanced transport theory methods employed, and computationally intense simulations performed. IRIS is an advanced integral light water reactor with an expected power output of 335 MWe (1000 MWth). Due to its integral design, the IRIS pressure vessel has a large downcomer region. The large downcomer and the neutron reflector provide a great deal of additional shielding. This increase in shielding ensures that the IRIS design easily meets the regulatory dose limits for radiation workers. However, the IRIS project set enhanced objectives of further reducing the dose rate to significantly lower levels, comparable to or below the limit allowed for the general public. The IRIS nuclear power plant design is very compact and has a rather complex geometric structure. Programs that use conventional methods would take too much time, or would be unable to provide an answer, for such a challenging deep-penetration problem. Therefore, the plant was modeled using a hybrid methodology for automated variance reduction implemented in the MAVRIC sequence of the SCALE6 code package. The methodology is based on the CADIS method, developed by J.C. Wagner and A. Haghighat, and the FW-CADIS method, developed by J.C. Wagner and D. Peplow. Using these methodologies in the MAVRIC code sequence, this thesis presents the dose rate throughout most of the inhabitable regions of the IRIS nuclear power plant and identifies the regions that are below the dose rate reduction objective set by the IRIS shielding team.
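
The core CADIS quantities can be written down in a few lines: given an adjoint (importance) solution and a source distribution, one forms the response estimate, the biased source, and the weight-window centers. The 1D adjoint shape below is a made-up stand-in for a DOM solution, so the numbers are illustrative only; this is a sketch of the recipe, not the MAVRIC implementation.

```python
import numpy as np

# hypothetical 1D stand-in for a discrete-ordinates adjoint solution:
# importance grows toward a detector placed behind a 100 cm shield
x = np.linspace(0.0, 100.0, 201)
adjoint = np.exp(-0.08 * (100.0 - x))         # adjoint flux phi†(x), assumed shape
source = np.where(x < 10.0, 1.0, 0.0)         # source region near x = 0
source /= source.sum()

# CADIS recipe: response estimate, biased source, and weight-window centers
R = np.sum(source * adjoint)                  # estimated detector response
biased_source = source * adjoint / R          # sample source particles from this
ww_center = R / np.maximum(adjoint, 1e-300)   # keep particle weights near R/phi†

print(f"response estimate R = {R:.3e}")
print(f"weight-window centers span {ww_center.min():.2e} .. {ww_center.max():.2e}")
```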
37

Efficient Simulations in Finance

Sak, Halis January 2008 (has links) (PDF)
Measuring the risk of a credit portfolio is a challenge for financial institutions because of the regulations introduced by the Basel Committee. In recent years many models and state-of-the-art methods utilizing Monte Carlo simulation have been proposed to solve this problem. In most of the models, factors are used to account for the correlations between obligors. We concentrate on the normal copula model, which assumes multivariate normality of the factors. Computation of value at risk (VaR) and expected shortfall (ES) for realistic credit portfolio models is subtle, since (i) there is dependency throughout the portfolio, and (ii) an efficient method is required to compute tail loss probabilities and conditional expectations at multiple points simultaneously. This is why Monte Carlo simulation must be improved by variance reduction techniques such as importance sampling (IS). Thus a new method is developed for simulating tail loss probabilities and conditional expectations for a standard credit risk portfolio. The new method integrates IS with inner replications using a geometric shortcut for dependent obligors in a normal copula framework. Numerical results show that the new method is better than naive simulation for computing tail loss probabilities and conditional expectations at a single loss level x and VaR value. Finally, it is shown that, compared to the standard t statistic, the skewness-correction method of Peter Hall is a simple and more accurate alternative for constructing confidence intervals. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
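
The flavor of IS in a normal-copula credit model can be sketched by shifting the systematic factor toward the loss region and reweighting by the likelihood ratio, in the spirit of the well-known Glasserman-Li approach; this omits the thesis's inner-replication geometric shortcut, and the homogeneous portfolio, parameter values, and mean shift below are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
m, rho = 1000, 0.3                 # obligors and common-factor loading (assumed)
p_marg = 0.01                      # marginal default probability (assumed)
c = norm.ppf(p_marg)               # latent-variable default barrier
x_level = 50                       # tail level: estimate P(defaults > 50)
n = 20_000
mu = -2.0                          # factor mean shift, chosen by hand here

def tail_indicator(z):
    # conditional default probability given the systematic factor z, then
    # the number of defaults among m conditionally independent obligors
    p_z = norm.cdf((c - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))
    return (rng.binomial(m, p_z) > x_level).astype(float)

plain = tail_indicator(rng.standard_normal(n))

# importance sampling: draw the factor from N(mu, 1) and reweight
z_is = mu + rng.standard_normal(n)
lr = np.exp(-mu * z_is + 0.5 * mu**2)      # N(0,1) / N(mu,1) density ratio
is_est = tail_indicator(z_is) * lr

for name, est in (("plain MC", plain), ("IS      ", is_est)):
    print(f"{name}  P ≈ {est.mean():.3e}  std err {est.std(ddof=1)/np.sqrt(n):.1e}")
```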
38

Numerical methods for homogenization : applications to random media

Costaouec, Ronan 23 November 2011 (has links) (PDF)
In this thesis we investigate numerical methods for the homogenization of materials whose structures, at fine scales, are characterized by random heterogeneities. Under appropriate hypotheses, the effective properties of such materials are given by closed formulas. However, in practice the computation of these properties is a difficult task because it involves solving partial differential equations with stochastic coefficients that are additionally posed on the whole space. In this work, we address this difficulty in two different ways. The standard discretization techniques lead to random approximate effective properties. In Part I, we aim at reducing their variance, using a well-known variance reduction technique that has already been used successfully in other domains. The works of Part II focus on the case when the material can be seen as a small random perturbation of a periodic material. We then show both numerically and theoretically that, in this case, computing the effective properties is much less costly than in the general case.
39

On improving variational inference with low-variance multi-sample estimators

Dhekane, Eeshan Gunesh 08 1900 (has links)
Advances in variational inference, such as variational autoencoders (Kingma and Welling (2013), Rezende et al. (2014)) along with their numerous modifications, have proven highly successful for learning latent representations of data. Importance-weighted variational inference (IWVI) by Burda et al. (2015) improves variational inference by using multiple i.i.d. samples to obtain tighter variational lower bounds. Recent works like hierarchical importance-weighted autoencoders (HIWVI) by Huang et al. (2019) and joint distribution modeling by Klys et al. (2018) demonstrate the idea of modeling a joint distribution over samples to further improve over IWVI by making it sample-efficient. The underlying idea in this thesis is to connect the statistical properties of the estimators to the tightness of the variational bounds. Towards this, we first demonstrate an upper bound on the variational gap in terms of the variance of the estimators under certain conditions. We prove that the variational gap can be made to vanish at the rate of O(1/n) for a large family of VI approaches. Based on these results, we propose the approach of Conditional-IWVI (CIWVI), which explicitly models the sequential and conditional sampling of latent variables to perform importance-weighted variational inference, and a related approach of Antithetic-IWVI (AIWVI) following Klys et al. (2018). Our experiments on the benchmarking datasets MNIST (LeCun et al. (2010)) and OMNIGLOT (Lake et al. (2015)) demonstrate that our approaches perform either competitively or better than the baselines IWVI and HIWVI as the number of samples increases. Further, we demonstrate that the results are in accordance with the theoretical properties we proved. In conclusion, our work provides a perspective on the rate of improvement in VI with the number of samples used and the utility of modeling the joint distribution over latent representations for sample efficiency in VI.
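
The O(1/n) behavior of the importance-weighted variational gap can be checked on a toy Gaussian model where the evidence log p(x) is available in closed form; the deliberately loose proposal q below is an assumption chosen so that the gap is visible, and this sketch is independent of the thesis's models.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = 1.5                                        # a single observation
logpx = norm.logpdf(x, 0.0, np.sqrt(2.0))      # analytic evidence: p(x) = N(0, 2)

def iwae_bound(n, reps=5000):
    # importance weights w = p(x, z) / q(z|x) with a deliberately loose q
    z = x / 2 + rng.standard_normal((reps, n))          # q(z|x) = N(x/2, 1)
    logw = (norm.logpdf(z, 0.0, 1.0)                    # log p(z) = N(0, 1)
            + norm.logpdf(x, z, 1.0)                    # log p(x|z) = N(z, 1)
            - norm.logpdf(z, x / 2, 1.0))               # - log q(z|x)
    m = logw.max(axis=1, keepdims=True)                 # stable log-mean-exp
    return np.mean(m[:, 0] + np.log(np.exp(logw - m).mean(axis=1)))

for n in (1, 4, 16, 64):
    print(f"n = {n:3d}   variational gap = {logpx - iwae_bound(n):.4f}")
```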
40

股權連結結構型商品之評價 / Valuation of Equity-Linked Structured Note

王瑞元, Wang, Jui Yuan Unknown Date (has links)
This thesis surveys the cash-flow structures of structured notes already issued in the market, derives a multi-asset quanto model using risk-neutral valuation, and uses Monte Carlo simulation to compute the theoretical prices of foreign-currency-denominated structured products. In addition to computing the theoretical prices obtained with the quanto model, the thesis compares the theoretical prices obtained with and without the quanto model, and performs sensitivity analyses of the products with respect to interest rates and correlation coefficients. Effective control variates are then identified, and variance reduction techniques are applied to overcome the slow convergence of Monte Carlo simulation, improving both the efficiency and the precision of the simulation. Finally, a robustness analysis of the variance reduction is carried out to determine under which parameter settings the variance reduction works best, and to show how product design and cost analysis can be performed through the choice of parameters such as the participation rate and the principal-protection rate.
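
The control-variate step can be illustrated on a single-asset Black-Scholes stand-in for the multi-asset quanto setting: the discounted terminal price has known mean S0 and is highly correlated with the call payoff, so subtracting a fitted multiple of its deviation reduces the variance. All parameter values below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(11)
S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.25, 1.0   # assumed toy parameters
n = 100_000

z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)

# control variate: the discounted terminal price, whose mean is exactly S0
control = np.exp(-r * T) * ST
beta = np.cov(payoff, control)[0, 1] / control.var(ddof=1)
cv = payoff - beta * (control - S0)

for name, est in (("plain MC       ", payoff), ("control variate", cv)):
    print(f"{name} price {est.mean():.4f}  std err {est.std(ddof=1)/np.sqrt(n):.1e}")
```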
