31 |
Statistical Yield Analysis and Design for Nanometer VLSI
Jaffari, Javid, January 2010 (has links)
Process variability is the pivotal factor impacting the design of high-yield integrated circuits and systems in deep sub-micron CMOS technologies. The electrical and physical properties of transistors and interconnects, the building blocks of integrated circuits, are prone to significant variations that directly affect the performance and power consumption of the fabricated devices, severely degrading the manufacturing yield. Moreover, the large number of transistors on a single chip adds further challenges to the analysis of variation effects, a critical task in diagnosing the cause of failure and designing for yield. Reliable and efficient statistical analysis methodologies in the various design phases are key to predicting the yield before entering such an expensive fabrication process.
In this thesis, the impacts of process variations are examined at three different levels: device, circuit, and micro-architecture. The variation models are provided for each level of abstraction, and new methodologies are proposed for efficient statistical analysis and design under variation.
At the circuit level, the variability analysis of three crucial sub-blocks of today's systems-on-chip, namely digital circuits, memory cells, and analog blocks, is targeted. Accurate and efficient yield analysis of circuits is recognized as an extremely challenging task within the electronic design automation community. The large scale of digital circuits, the extremely high yield requirement for memory cells, and the time-consuming simulation of analog circuits are the major concerns in developing any statistical analysis technique. In this thesis, several sampling-based methods are proposed for these three types of circuits that significantly improve the run-time of the traditional Monte Carlo (MC) method without compromising accuracy. The proposed sampling-based yield analysis methods retain the most appealing feature of the MC method, namely the capability to handle arbitrarily complex circuit models, while the use and engineering of advanced variance reduction and sampling methods provide ultra-fast yield estimation for different types of VLSI circuits. These techniques include control variates, importance sampling, correlation-controlled Latin hypercube sampling, and quasi-Monte Carlo (one of which is sketched after this abstract).
At the device level, a methodology is proposed that brings a variation-aware design perspective to MOS devices in aggressively scaled geometries. It introduces a device-level yield measure that targets the saturation and leakage currents of an MOS transistor, and a statistical method is developed to optimize the advanced doping profiles and geometry features of a device to achieve maximum device-level yield.
Finally, a statistical thermal analysis framework is proposed that accounts for process and thermal variations simultaneously at the micro-architectural level. The analyzer builds on the fact that process variations lead to uncertain leakage power sources, so the thermal profile itself is probabilistic in nature. A coupled process-thermal-leakage analysis therefore yields a more reliable full-chip statistical leakage power yield.
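The variance reduction techniques named in this abstract are standard; as a minimal illustration (not the thesis's actual engine), the sketch below estimates a toy parametric yield with plain Monte Carlo and with Latin hypercube sampling. The quadratic delay model, its coefficients, and the timing spec are all invented for this example.

```python
import numpy as np
from scipy.stats import qmc, norm

def delay(x):
    # Hypothetical performance model: nominal delay plus linear and
    # quadratic sensitivities to standardized process parameters.
    return 1.0 + 0.08 * x.sum(axis=1) + 0.02 * (x**2).sum(axis=1)

d, n, spec = 6, 4096, 1.25  # process parameters, samples, timing spec (ns)

# Plain Monte Carlo: i.i.d. standard-normal process parameters.
rng = np.random.default_rng(0)
yield_mc = np.mean(delay(rng.standard_normal((n, d))) <= spec)

# Latin hypercube sampling: stratified uniforms mapped to normals by inversion.
u = qmc.LatinHypercube(d=d, seed=0).random(n)
yield_lhs = np.mean(delay(norm.ppf(u)) <= spec)

print(f"MC yield ~ {yield_mc:.4f}, LHS yield ~ {yield_lhs:.4f}")
```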
|
33 |
Smoothed Transformed Density Rejection
Leydold, Josef; Hörmann, Wolfgang, January 2003 (has links) (PDF)
There are situations in the framework of quasi-Monte Carlo integration where nonuniform low-discrepancy sequences are required. Using the inversion method for this task usually gives the best performance in terms of integration error, but it requires a fast algorithm for evaluating the inverse of the cumulative distribution function, which is often not available. A smoothed version of transformed density rejection is then a good alternative: it is a fast method, and its speed hardly depends on the distribution. It can easily be adjusted so that it is almost as good as the inversion method. For importance sampling, it is even better to use the hat distribution directly as the importance distribution; the resulting algorithm is as good as using the inversion method for the original importance distribution, but its generation time is much shorter. / Series: Preprint Series / Department of Applied Statistics and Data Processing
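As a minimal sketch of the inversion route the abstract compares against, the example below maps a Sobol' low-discrepancy sequence through an explicit inverse CDF; the exponential distribution is chosen precisely because its inverse is cheap. Smoothed transformed density rejection is the alternative when no such fast inverse exists.

```python
import numpy as np
from scipy.stats import qmc

lam, n = 2.0, 1024
u = qmc.Sobol(d=1, scramble=False).random(n).ravel()  # low-discrepancy uniforms
u = (u + 0.5 / n) % 1.0          # shift away from u = 0 (harmless here)
x = -np.log1p(-u) / lam          # inverse CDF of the Exp(lam) distribution

# Quasi-Monte Carlo estimate of E[X] for X ~ Exp(lam); exact value is 0.5.
print(f"QMC estimate of the mean: {x.mean():.5f}")
```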
|
34 |
Discrepancy of sequences and error estimates for the quasi-Monte Carlo method
Vesterinen, Niklas, January 2020 (links)
We present the notions of uniform distribution and discrepancy of sequences contained in the unit interval, as well as an important application of discrepancy to numerical integration by way of the quasi-Monte Carlo method. Some fundamental (and other interesting) results regarding these notions are presented, along with detailed and instructive examples and comparisons (some of which are seldom provided in the literature). We go on to analytical and numerical investigations of the asymptotic behaviour of the discrepancy (in particular for the van der Corput sequence) and of the general error estimates of the quasi-Monte Carlo method. Using the discoveries from these investigations, we give a conditional proof of the van der Corput theorem. Furthermore, we illustrate that by using low-discrepancy sequences (such as the van der Corput sequence), a rather fast convergence rate of the quasi-Monte Carlo method may still be achieved, even in situations where the famous theoretical result, the Koksma inequality, has been rendered unusable.
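A minimal sketch of the central objects here: the base-2 van der Corput sequence via the radical-inverse construction, and a side-by-side of QMC and MC integration error for a smooth test integrand. The integrand and sample size are chosen for illustration only.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n terms of the van der Corput sequence: radical inverse in `base`."""
    seq = np.zeros(n)
    for i in range(n):
        k, f, x = i + 1, 1.0 / base, 0.0
        while k > 0:
            k, digit = divmod(k, base)
            x += digit * f
            f /= base
        seq[i] = x
    return seq

f = lambda x: np.exp(x)  # exact integral over [0,1] is e - 1
n = 10_000
exact = np.e - 1.0
qmc_err = abs(f(van_der_corput(n)).mean() - exact)
mc_err = abs(f(np.random.default_rng(0).random(n)).mean() - exact)
print(f"QMC error ~ {qmc_err:.2e}, MC error ~ {mc_err:.2e}")
```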
|
35 |
Birds' Flight Range: Sensitivity Analysis
Masinde, Brian, January 2020 (links)
'Flight' is a program that uses flight mechanics to estimate the flight range of birds. This program, used by ornithologists, is only available for Windows OS, and it requires manual input of body measurements and constants (one observation at a time), which is time-consuming. Therefore, the first task is to implement the methods in R, a programming language that runs on various platforms. The resulting package, named flying, has three advantages: first, it can estimate the flight range of multiple bird observations; second, it makes it easier to experiment with different settings (e.g. constants) in comparison to 'Flight'; and third, it is open source, making contribution relatively easy. Uncertainty and global sensitivity analyses are carried out on body measurements, separately and with various constants. In doing so, the most influential body variables and constants are discovered; this task would have been near impossible to undertake using 'Flight'. A comparison is made among the results from a crude partitioning method, a generalized additive model, gradient boosting machines, and a quasi-Monte Carlo method, all based on Sobol's method for variance decomposition. The results show that fat mass drives the simulations, with other inputs playing a secondary role (for example, mechanical conversion efficiency and body drag coefficient).
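A minimal sketch of the variance decomposition underlying all four compared approaches: Sobol' first-order indices estimated with the pick-freeze (Saltelli-type) scheme. The linear "range model" and its coefficients are an invented stand-in for the flight-mechanics model in flying, built only to show one input dominating.

```python
import numpy as np

def range_model(x):
    # Hypothetical surrogate: fat mass dominates, the other inputs are secondary.
    fat_mass, drag, efficiency = x[:, 0], x[:, 1], x[:, 2]
    return 1000.0 * fat_mass - 20.0 * drag + 50.0 * efficiency

rng = np.random.default_rng(1)
d, n = 3, 100_000
a, b = rng.random((n, d)), rng.random((n, d))  # two independent input samples
fa, fb = range_model(a), range_model(b)
total_var = fa.var()

labels = ["fat mass", "body drag", "conversion efficiency"]
for i in range(d):
    ab_i = a.copy()
    ab_i[:, i] = b[:, i]  # "freeze" every input except the i-th
    # First-order index: share of output variance explained by input i alone.
    s_i = np.mean(fb * (range_model(ab_i) - fa)) / total_var
    print(f"S[{labels[i]}] ~ {s_i:.3f}")
```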
|
36 |
Hierarchical Adaptive Quadrature and Quasi-Monte Carlo for Efficient Fourier Pricing of Multi-Asset Options
Samet, Michael, 11 July 2023 (links)
Efficiently pricing multi-asset options is a challenging problem in computational finance. Although classical Fourier methods are extremely fast in pricing single-asset options, maintaining the tractability of Fourier techniques for multi-asset option pricing is still an area of active research. Fourier methods rely on explicit knowledge of the characteristic function of the underlying stochastic price process, allowing the option price to be computed by evaluating a multidimensional integral in the Fourier domain. The high smoothness of the integrand in Fourier space motivates the exploration of deterministic quadrature methods that are highly efficient under certain regularity assumptions, such as adaptive sparse grid quadrature (ASGQ) and randomized quasi-Monte Carlo (RQMC). However, when designing a numerical quadrature method for most existing Fourier pricing approaches, two key factors affecting the complexity must be carefully controlled: (i) the choice of the vector of damping parameters that ensures Fourier integrability and controls the regularity class of the integrand, and (ii) the high dimensionality of the integration problem. To address these challenges, in the first part of this thesis we propose a rule for choosing the damping parameters that results in smoother integrands. Moreover, we explore the effect of sparsification and dimension adaptivity in alleviating the curse of dimensionality. Despite the efficiency of ASGQ, its error estimates are very hard to compute, so for cases where error quantification is a high priority, the second part of this thesis designs an RQMC-based method for the (inverse) Fourier integral computation. RQMC integration is known to be highly efficient for high-dimensional integration of sufficiently regular integrands, and it further allows the computation of probabilistic error estimates. Nonetheless, using RQMC requires an appropriate transformation of the unbounded integration domain to the hypercube, which may produce a transformed integrand with singularities at the boundaries and consequently deteriorate the rate of convergence. To preserve the nice properties of the transformed integrand, we propose a model-dependent domain transformation that avoids these corner singularities and retains the optimal efficiency of RQMC. The effectiveness of the proposed optimal damping rule, the designed domain transformation procedure, and their combination with ASGQ and RQMC is demonstrated via several numerical experiments and computational comparisons with the MC approach and the COS method.
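A minimal sketch of the RQMC building blocks described above: scrambled Sobol' points, an inverse-CDF map from the unit cube to R^d, and a probabilistic error estimate from independent randomizations. The Gaussian-type integrand is a placeholder for the damped Fourier pricing integrand, which is model-specific.

```python
import numpy as np
from scipy.stats import qmc, norm

def integrand(z):
    # Placeholder integrand on R^d, smooth and integrable like a well-damped
    # characteristic function; its N(0, I)-expectation is a finite constant.
    return np.exp(-0.5 * (z**2).sum(axis=1) / 4.0)

d, m, reps = 4, 12, 16  # dimension, 2^m points per replicate, randomizations
estimates = []
for seed in range(reps):
    u = qmc.Sobol(d=d, scramble=True, seed=seed).random_base2(m)
    z = norm.ppf(u)     # map (0,1)^d to R^d; N(0, I) reference measure
    estimates.append(integrand(z).mean())

est, se = np.mean(estimates), np.std(estimates, ddof=1) / np.sqrt(reps)
print(f"RQMC estimate {est:.6f} +/- {1.96 * se:.2e} (95% CI)")
```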
|
37 |
Fast Monte Carlo methods for rare event simulation. Applications to Petri nets (Méthodes accélérées de Monte-Carlo pour la simulation d'événements rares. Applications aux Réseaux de Petri)
Estecahandy, Maïder, 18 April 2016 (links)
The dependability analysis of safety instrumented systems is an important industrial concern. To carry out such safety studies, TOTAL has been developing the dependability software GRIF since the eighties. To take into account the increasing complexity of the operating context of its safety equipment, TOTAL is more and more frequently led to use the MOCA-RP engine of the GRIF Simulation package. MOCA-RP estimates quantities associated with complex aging systems modeled as Petri nets by means of standard Monte Carlo (MC) simulation. Nevertheless, deriving accurate estimators on very reliable systems, such as the system unavailability, involves rare-event simulation, which requires very long computing times with MC. The common fast Monte Carlo methods do not seem appropriate for this issue: many of them were originally defined to improve only the estimate of the unreliability and/or are well suited to Markovian processes. Therefore, the work accomplished in this thesis pertains to the development of acceleration methods adapted to safety studies modeled in Petri nets and, in particular, to estimating the unavailability.
More specifically, we propose the Extension of the "Méthode de Conditionnement Temporel" to accelerate the individual failure of the components, and we introduce the Dissociation Method as well as the Truncated Fixed Effort Method to increase the occurrence of their simultaneous failures. We then combine the first technique with the two others, and also associate them with the randomized quasi-Monte Carlo method. Through different sensitivity studies and benchmark experiments, we assess the performance of the acceleration methods and observe a significant improvement of the results compared with MC. Furthermore, we discuss the choice of the confidence-interval method to be used for rare-event simulation, an unfamiliar topic in the field of dependability. Finally, an application to an industrial case illustrates the feasibility and potential of our methods.
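On the confidence-interval question raised above, here is a minimal sketch (with invented numbers, unrelated to the thesis's case study) of why the choice matters for rare events: when no failure is observed, the standard Wald interval degenerates to a point at zero, whereas the Clopper-Pearson construction still gives an informative upper bound.

```python
import numpy as np
from scipy.stats import beta

p_true, n = 1e-6, 100_000        # true failure probability, MC budget
rng = np.random.default_rng(2)
k = rng.binomial(n, p_true)      # observed failures (most often zero here)

p_hat = k / n
wald_half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)
# Clopper-Pearson exact interval from beta-distribution quantiles.
lo = beta.ppf(0.025, k, n - k + 1) if k > 0 else 0.0
hi = beta.ppf(0.975, k + 1, n - k)

print(f"failures: {k}")
print(f"Wald 95% CI:            [{max(p_hat - wald_half, 0):.2e}, {p_hat + wald_half:.2e}]")
print(f"Clopper-Pearson 95% CI: [{lo:.2e}, {hi:.2e}]")
```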
|
38 |
Pricing and Hedging of Multi-Asset Structured Products Using Quasi-Monte Carlo Simulation (多資產結構型商品之評價與避險--利用Quasi-Monte Carlo模擬法)
粘哲偉, Unknown Date (links)
Structured products, whose risk lies between that of fixed-income securities and equities, have been popular with investors ever since their introduction: they not only protect the principal but also offer participation in stock market gains. Since 2004, with the economy gradually recovering and stock markets expected to rise accordingly, equity-linked structured products have been launched continuously, and the options embedded in them increasingly link multiple assets and carry exotic path-dependent clauses. Pricing them therefore confronts us with a high-dimensional problem. High-dimensional problems are usually handled with traditional Monte Carlo simulation, but its slow convergence is the method's biggest practical drawback, and the simulation time it requires for high-dimensional problems is even more significant.
The main contributions of this thesis are twofold. First, the quasi-Monte Carlo method is applied to the pricing of multi-asset structured products, improved with the approaches of Silva (2003) and Acworth, Broadie, and Glasserman (1998), and used to price two structured products available on the market: a high-yield lock-in equity-linked note and a best-of lock-in equity-linked note. The results show that the improved quasi-Monte Carlo method prices more efficiently than both the Monte Carlo method and antithetic-variate Monte Carlo. Second, a delta hedging strategy is proposed for the high-yield lock-in note: the delta of the option with respect to the returns is computed first and then converted into the required stock positions. The resulting hedge portfolio turns out to fully cover the coupons payable to investors at each annual maturity as well as the borrowing required during hedging, indicating that this is a feasible hedging strategy that securities firms can use as a reference.
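A minimal sketch in the spirit of the path-construction improvements cited (Acworth, Broadie, and Glasserman, 1998): a principal-component factorization of the asset covariance so that the leading Sobol' coordinates carry most of the variance. The two-asset geometric Brownian motion setup and the basket call are illustrative, not the notes priced in the thesis.

```python
import numpy as np
from scipy.stats import qmc, norm

s0, r, T = np.array([100.0, 100.0]), 0.02, 1.0
sigma = np.array([0.25, 0.30])
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
cov = np.outer(sigma, sigma) * corr * T

# PCA factorization: columns are scaled eigenvectors, largest eigenvalue first,
# so the first QMC dimension drives the dominant variance direction.
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
pca = eigvec[:, order] * np.sqrt(eigval[order])

u = qmc.Sobol(d=2, scramble=True, seed=3).random_base2(14)
z = norm.ppf(u) @ pca.T                            # correlated log-return shocks
st = s0 * np.exp((r - 0.5 * sigma**2) * T + z)     # terminal asset prices
payoff = np.maximum(st.mean(axis=1) - 100.0, 0.0)  # basket call, strike 100
print(f"QMC basket-call price ~ {np.exp(-r * T) * payoff.mean():.4f}")
```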
|
39 |
Contributions to the numerical analysis of quasi-Monte Carlo methods (Contributions à l'analyse numérique des méthodes quasi-Monte Carlo)
Coulibaly, Ibrahim, 03 November 1997 (links) (PDF)
Quasi-Monte Carlo methods are deterministic versions of Monte Carlo methods. The random numbers are replaced by deterministic numbers that form low-discrepancy point sets or sequences with a better uniform distribution. The error of a quasi-Monte Carlo method depends on the discrepancy of the sequence used, the discrepancy being a measure of the deviation from the uniform distribution. We first consider the solution by quasi-Monte Carlo methods of differential equations with little regularity in time. These methods consist of formulating the problem with an integral term and then performing a quasi-Monte Carlo quadrature. Next, quasi-Monte Carlo particle methods are proposed for solving the following kinetic equations: the linear Boltzmann equation and the Kac model. Finally, we consider the solution of the diffusion equation using particle methods based on quasi-random walks. These methods consist of three steps: an Euler scheme in time, a particle approximation, and a quasi-Monte Carlo quadrature using $(0,m,s)$-nets. At each time step the particles are grouped into packets in the multi-dimensional case or sorted in the one-dimensional case, which makes it possible to prove convergence. Numerical tests show better results for the quasi-Monte Carlo methods than for the Monte Carlo methods.
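A minimal sketch of the one-dimensional scheme described above: Euler time stepping, a particle approximation, and a quasi-random draw at every step, with the particles sorted before each move. Scrambled Sobol' points stand in for the $(0,m,s)$-nets of the thesis, and pairing sorted particles with points in generation order is a simplification of the net-based pairing.

```python
import numpy as np
from scipy.stats import qmc, norm

n, steps, dt = 2**12, 50, 0.01
x = np.zeros(n)                          # all particles start at the origin
for step in range(steps):
    x.sort()                             # the 1-D sorting step
    u = qmc.Sobol(d=1, scramble=True, seed=step).random(n).ravel()
    x += np.sqrt(dt) * norm.ppf(u)       # quasi-random Gaussian increments

t = steps * dt
print(f"sample variance {x.var():.4f} vs exact {t:.4f}")  # X_t ~ N(0, t)
```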
|
40 |
Modelling Implied Volatility of American-Asian Options: A Simple Multivariate Regression Approach
Radeschnig, David, January 2015 (links)
This report focuses on implied volatility for American-style Asian options and a least-squares approximation method for estimating its magnitude. Asian option prices are calculated/approximated based on quasi-Monte Carlo simulations and least-squares regression, where a known volatility is used as input. A regression tree then empirically builds a database of regression vectors for the implied volatility from the simulated option prices. The mean squared errors between input and estimated volatilities are compared using a five-fold cross-validation test as well as the non-parametric Kruskal-Wallis hypothesis test of equal distributions. The study results in a proposed semi-parametric model for estimating implied volatilities from options. The user must, however, be aware that this model may suffer from estimation bias, and it should therefore be used with caution.
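A minimal sketch of the regression idea: simulate option prices under known input volatilities, then fit a least-squares model mapping price back to volatility. Black-Scholes European calls stand in here for the QMC-priced American-Asian options of the report, and the cubic-in-price feature set is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.stats import norm

def bs_call(s, k, r, t, vol):
    # Black-Scholes European call, vectorized over the volatility argument.
    d1 = (np.log(s / k) + (r + 0.5 * vol**2) * t) / (vol * np.sqrt(t))
    return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d1 - vol * np.sqrt(t))

s, k, r, t = 100.0, 100.0, 0.01, 1.0
vols = np.linspace(0.10, 0.60, 200)   # known input volatilities
prices = bs_call(s, k, r, t, vols)    # "simulated" option prices

# Least-squares fit: volatility as a cubic polynomial in the price.
X = np.vander(prices, 4)
coef, *_ = np.linalg.lstsq(X, vols, rcond=None)

test_price = bs_call(s, k, r, t, 0.30)
implied = np.vander(np.array([test_price]), 4) @ coef
print(f"implied vol ~ {implied[0]:.4f} (true 0.30)")
```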
|