11

Pricing Multi-Asset Path-Dependent Notes with Quasi-Monte Carlo Methods (準蒙地卡羅法於多資產路徑相依債券之評價)

張極鑫 (Chang, Chi-Shin), Unknown Date
In recent years, as regulations and markets have gradually opened up, the range of derivative products that securities firms may issue has grown. Among the many structured products, a large number link to multiple underlying assets and carry path-dependent clauses; they can be viewed as investments in a basket of stocks with several observation dates. If the linked assets rise, the investor earns a specified return, while downside-protection clauses guard against severe loss of principal.

Because these products link multiple assets and contain path-dependent clauses, pricing them is a high-dimensional problem. The traditional Monte Carlo method converges slowly and therefore often requires a great deal of computation time, which is its main practical drawback here. Convergence can generally be improved with antithetic variates or control variates, or by using low-discrepancy sequences, i.e., the quasi-Monte Carlo method; combining quasi-Monte Carlo with the Brownian bridge construction or principal component analysis accelerates convergence further.

This thesis studies two products with different payoff types: the first is a low-dimensional up-and-in product whose payoff resembles a barrier option, and the second is a multi-asset, path-dependent product. These two products are used to compare the convergence speed and accuracy of the various methods under different payoff types. The simulation results show that, among all the methods considered, quasi-Monte Carlo combined with principal component analysis achieves good convergence speed and accuracy.
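
As a rough illustration of the approach described above, the following Python sketch prices a hypothetical multi-asset, path-dependent payoff with a scrambled Sobol sequence and a principal-component (PCA) construction of the Brownian paths. The product terms, market parameters, and sample size are illustrative assumptions only, not the contracts studied in the thesis.

```python
# Minimal quasi-Monte Carlo pricing sketch: Sobol points + PCA path construction.
# All product and market parameters below are assumed for illustration.
import numpy as np
from scipy.stats import norm, qmc

n_assets, n_obs = 3, 12                      # basket size and number of observation dates
T, r = 1.0, 0.02                             # maturity (years) and risk-free rate (assumed)
spot = np.full(n_assets, 100.0)
vol = np.array([0.25, 0.30, 0.20])
corr = np.full((n_assets, n_assets), 0.5) + 0.5 * np.eye(n_assets)
obs_times = np.linspace(T / n_obs, T, n_obs)

# Covariance of the stacked Brownian path: time part C[i, j] = min(t_i, t_j),
# cross-asset part given by the correlation matrix.
C_time = np.minimum.outer(obs_times, obs_times)
cov = np.kron(C_time, corr)                  # (n_obs*n_assets) x (n_obs*n_assets)

# PCA construction: order factors by explained variance so the leading
# (best-distributed) Sobol coordinates drive the largest factors.
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
A = eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0.0))   # cov = A @ A.T

dim = n_assets * n_obs
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
u = sobol.random(2**14)                      # low-discrepancy points in (0, 1)^dim
z = norm.ppf(u)                              # map to standard normals
W = (z @ A.T).reshape(-1, n_obs, n_assets)   # correlated Brownian path values

# Geometric Brownian motion at each observation date.
drift = (r - 0.5 * vol**2) * obs_times[:, None]
S = spot * np.exp(drift + vol * W)

# Illustrative path-dependent payoff: average basket level vs. a strike of 100,
# paid only if the worst performer ever drops below 70% of spot (assumed terms).
basket = S.mean(axis=2)                      # equal-weight basket on each date
knocked_in = (S / spot).min(axis=(1, 2)) < 0.7
payoff = np.where(knocked_in, np.maximum(basket.mean(axis=1) - 100.0, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
print(f"QMC price estimate: {price:.4f}")
```

Ordering the factors by variance lets the earliest, best-distributed Sobol coordinates drive the dominant movements of the paths, which is typically where quasi-Monte Carlo gains over plain Monte Carlo in high-dimensional pricing problems.
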
12

Non-convex Bayesian Learning via Stochastic Gradient Markov Chain Monte Carlo

Wei Deng (11804435), 18 December 2021
The rise of artificial intelligence (AI) hinges on the efficient training of modern deep neural networks (DNNs) for non-convex optimization and uncertainty quantification, which boils down to a non-convex Bayesian learning problem. A standard tool for this problem is Langevin Monte Carlo, which approximates the posterior distribution with theoretical guarantees. However, non-convex Bayesian learning in real big-data applications can be arbitrarily slow and often fails to capture the uncertainty or the informative modes within a limited time. As a result, more advanced techniques are required.

In this thesis, we start with replica exchange Langevin Monte Carlo (also known as parallel tempering), a Markov jump process that proposes appropriate swaps between exploration and exploitation to achieve acceleration. However, the naïve extension of swaps to big-data problems leads to a large bias, and bias-corrected swaps are required. This mechanism yields few effective swaps and insignificant acceleration. To alleviate this issue, we first propose a control-variates method that reduces the variance of the noisy energy estimators and show its potential to accelerate the exponential convergence. We also present population-chain replica exchange and propose a generalized deterministic even-odd scheme to track the non-reversibility and obtain an optimal round-trip rate. Further approximations are based on stochastic gradient descent, yielding a user-friendly method for large-scale uncertainty-approximation tasks without much tuning cost.

In the second part of the thesis, we study scalable dynamic importance sampling algorithms based on stochastic approximation. Traditional dynamic importance sampling algorithms have been successful in bioinformatics and statistical physics; however, their lack of scalability has greatly limited their extension to big-data applications. To handle this scalability issue, we resolve the vanishing-gradient problem and propose two dynamic importance sampling algorithms based on stochastic gradient Langevin dynamics. Theoretically, we establish the stability condition for the underlying ordinary differential equation (ODE) system and guarantee the asymptotic convergence of the latent variable to the desired fixed point. Interestingly, this result still holds for non-convex energy landscapes. In addition, we propose a pleasingly parallel version of these algorithms with interacting latent variables, and we show that the interacting algorithm can be theoretically more efficient than the single-chain alternative with an equivalent computational budget.
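
As a rough illustration of the replica exchange idea discussed in the first part, the following Python sketch runs two stochastic gradient Langevin dynamics chains at different temperatures on a toy two-mode energy and occasionally proposes swaps. The energy function, step size, temperatures, and the constant standing in for the bias correction are illustrative assumptions, not the thesis implementation.

```python
# Minimal replica exchange SGLD sketch on a toy double-well energy.
# All hyperparameters below are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)

def energy(theta):
    # Toy non-convex energy with modes near -2 and +2.
    return 0.25 * (theta**2 - 4.0)**2

def grad_energy(theta, noise_std=0.5):
    # "Stochastic" gradient: exact gradient plus noise, mimicking a mini-batch estimator.
    return theta * (theta**2 - 4.0) + noise_std * rng.normal()

def sgld_step(theta, lr, temperature):
    # Langevin update: gradient step plus temperature-scaled Gaussian noise.
    return theta - lr * grad_energy(theta) + np.sqrt(2.0 * lr * temperature) * rng.normal()

low_T, high_T = 1.0, 10.0                 # exploitation and exploration chains
theta_low, theta_high = 2.0, -2.0
lr, n_steps = 1e-3, 20000
correction = 1.0                          # placeholder for the bias correction of noisy energies
samples = []

for step in range(n_steps):
    theta_low = sgld_step(theta_low, lr, low_T)
    theta_high = sgld_step(theta_high, lr, high_T)

    # Attempt a swap every 100 steps; the correction term compensates for the
    # (here only simulated) variance of the energy estimators.
    if step % 100 == 0:
        dE = energy(theta_low) - energy(theta_high) - correction
        log_accept = (1.0 / low_T - 1.0 / high_T) * dE
        if np.log(rng.uniform()) < log_accept:
            theta_low, theta_high = theta_high, theta_low

    samples.append(theta_low)

samples = np.array(samples[n_steps // 2:])
print(f"low-temperature chain mean |theta|: {np.abs(samples).mean():.3f}")
```

In the algorithms studied in the thesis, the energies are noisy mini-batch estimates and the correction is derived from their variance rather than fixed in advance; the sketch only shows where such a correction enters the swap rule.
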
