1 |
Importance Resampling for Global Illumination. Talbot, Justin F., 16 September 2005
This thesis develops a generalized form of Monte Carlo integration called Resampled Importance Sampling (RIS), based on the importance resampling technique for sample generation. RIS can yield significant variance reduction over standard Monte Carlo integration for common rendering problems. We show how to select the importance resampling parameters for near-optimal variance reduction. We also combine RIS with stratification and with Multiple Importance Sampling for further variance reduction. We demonstrate the robustness of this technique on the direct lighting problem, achieving up to a 33% variance reduction over standard techniques, and suggest using RIS as a default BRDF sampling technique.
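The two-stage estimator described in this abstract can be sketched in a few lines. This is a minimal 1-D illustration, not the thesis's renderer: the integrand `f`, its cheap approximation `q_hat`, and all sample counts are invented for the example.

```python
import math
import random

random.seed(0)

def f(x):
    # Target integrand: x**2 perturbed by a wiggle the approximation misses
    return x * x * (1.0 + 0.1 * math.sin(20.0 * x))

def q_hat(x):
    # Cheap approximation of f used to build the resampling weights
    return x * x

def ris_estimate(m=4000, n=500):
    # Stage 1: draw m candidates from an easy source pdf p (uniform on [0, 1])
    xs = [random.random() for _ in range(m)]
    ws = [q_hat(x) / 1.0 for x in xs]    # w_i = q_hat(x_i) / p(x_i)
    w_bar = sum(ws) / m                  # estimates the normalization of q_hat
    # Stage 2: resample n points with probability proportional to the weights,
    # so the resampled points are approximately distributed according to q_hat
    ys = random.choices(xs, weights=ws, k=n)
    # RIS estimator: mean of f/q_hat over the resampled points, times w_bar
    return w_bar * sum(f(y) / q_hat(y) for y in ys) / n

estimate = ris_estimate()   # close to the true integral of f, roughly 0.332
```

Because the resampled points follow `q_hat` rather than the uniform source pdf, the residual ratio `f/q_hat` is nearly constant and the estimator's variance drops accordingly; the closer `q_hat` tracks `f`, the larger the gain.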
|
2 |
Parallel Hardware for Sampling-Based Nonlinear Filters in FPGAs. Kota Rajasekhar, Rakesh, January 2014
Particle filters are a class of sequential Monte Carlo methods commonly used to estimate the unknowns of time-varying signals in real time, especially under nonlinearity and non-Gaussianity, as in bearings-only tracking (BOT) applications. This thesis performs one such estimation task: tracking a person using road information extracted from an IR surveillance video. Custom parallel hardware implementing a sampling importance resampling filter (SIRF) is built on an Altera Cyclone IV E FPGA device. The implementation accounts for how the algorithmic aspects of this sampling-based filter relate to the possibilities and constraints of a hardware realization. At a 100 MHz clock frequency, the synthesized design can process almost 50 Mparticles/s. The implementation successfully tracks the target, defined by a 5-dimensional state variable, from the noisy sensor measurements.
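The SIRF recursion that the hardware parallelizes can be sketched in software as follows. This is a generic 1-D bootstrap filter with invented noise levels and a stationary target, not the 5-dimensional road-constrained model of the thesis.

```python
import math
import random

random.seed(1)

def sir_step(particles, z, q_std=0.5, r_std=1.0):
    # 1. Sampling: propagate every particle through the motion model
    particles = [x + random.gauss(0.0, q_std) for x in particles]
    # 2. Importance weighting by the measurement likelihood p(z | x)
    ws = [math.exp(-0.5 * ((z - x) / r_std) ** 2) for x in particles]
    # 3. Resampling proportionally to the weights (the "R" in SIRF)
    return random.choices(particles, weights=ws, k=len(particles))

# Track a stationary target at x = 3.0 from noisy scalar measurements
particles = [random.uniform(-10.0, 10.0) for _ in range(2000)]
for _ in range(30):
    z = 3.0 + random.gauss(0.0, 1.0)
    particles = sir_step(particles, z)

estimate = sum(particles) / len(particles)   # posterior mean, near 3.0
```

The per-particle propagation and weighting in steps 1 and 2 are embarrassingly parallel, which is what makes an FPGA implementation attractive; the resampling step is the main serialization point that such designs must work around.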
|
3 |
Improved Methods for Pharmacometric Model-Based Decision-Making in Clinical Drug Development. Dosne, Anne-Gaëlle, January 2016
Pharmacometric model-based analysis using nonlinear mixed-effects models (NLMEM) has to date mainly been applied to learning activities in drug development. However, such analyses can also serve as the primary analysis in confirmatory studies, where they are expected to bring higher power than traditional analysis methods, among other advantages. Because established expertise in designing and interpreting confirmatory studies lies with other types of analyses, and because of unresolved uncertainties regarding the magnitude of the potential gains and risks, pharmacometric analyses have traditionally not been used as the primary analysis in confirmatory trials. The aim of this thesis was to address the hurdles currently hampering the use of pharmacometric model-based analysis in confirmatory settings by developing strategies to increase model compliance with distributional assumptions on the residual error, to improve the quantification of parameter uncertainty, and to enable model prespecification. A dynamic transform-both-sides approach capable of handling skewed and/or heteroscedastic residuals and a t-distribution approach allowing for symmetric heavy tails were developed and proved to be relevant tools for increasing model compliance with the residual-error assumptions. A diagnostic for assessing the appropriateness of parameter uncertainty distributions was developed, showing that currently used uncertainty methods such as the bootstrap have limitations for NLMEM. A method based on sampling importance resampling (SIR) was therefore proposed, which can provide parameter uncertainty in many situations where other methods fail, such as small datasets, highly nonlinear models, or meta-analysis. SIR was successfully applied to predict the uncertainty in human plasma concentrations of the antibiotic colistin and its prodrug colistin methanesulfonate based on an interspecies whole-body physiologically based pharmacokinetic model.
Lastly, strategies based on model averaging were proposed to enable full model prespecification and proved to be valid alternatives to standard methodologies for studies assessing the QT-prolongation potential of a drug and for phase III trials in rheumatoid arthritis. In conclusion, improved methods for handling residual error, parameter uncertainty and model uncertainty in NLMEM were successfully developed. As confirmatory trials are among the most demanding in drug development in terms of patient participation, cost and time, allowing (some of) these trials to be analyzed with pharmacometric model-based methods will help improve the safety and efficiency of drug development.
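The SIR idea used here for parameter uncertainty can be sketched as follows. This is a toy illustration with an invented one-dimensional "posterior" (a Gamma density, skewed like many variance-parameter posteriors) and a normal proposal centered on a hypothetical point estimate; it is not the NLMEM machinery of the thesis.

```python
import math
import random

random.seed(2)

def log_post(x):
    # Unnormalized stand-in posterior: a Gamma(3, 1) density
    return 2.0 * math.log(x) - x if x > 0.0 else -math.inf

MU, SD = 3.0, 2.0   # normal proposal around a hypothetical point estimate

def log_proposal(x):
    return -0.5 * ((x - MU) / SD) ** 2 - math.log(SD * math.sqrt(2.0 * math.pi))

M, N = 20000, 2000
# Sampling: draw from the proposal; Importance: weight by posterior/proposal
xs = [random.gauss(MU, SD) for _ in range(M)]
ws = [math.exp(log_post(x) - log_proposal(x)) if x > 0.0 else 0.0 for x in xs]
# Resampling: the resampled draws approximate the posterior itself
post = random.choices(xs, weights=ws, k=N)

post.sort()
post_mean = sum(post) / N
ci95 = (post[int(0.025 * N)], post[int(0.975 * N)])   # asymmetric interval
```

Unlike a normal approximation around the point estimate, the resampled draws preserve the skewness of the posterior, which is the property that makes SIR attractive when bootstrap or asymptotic covariance methods break down.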
|
4 |
Nonlinear Estimation Techniques Applied To Econometric. Aslan, Serdar, 01 December 2004
This thesis considers the filtering and prediction problems of nonlinear, noisy econometric systems. As filters/predictors, the standard Extended Kalman Filter and two newer approaches, the Discrete Quantization Filter and the Sequential Importance Resampling Filter, are used. The algorithms are compared using Monte Carlo simulation, and the advantages of the new algorithms over the Extended Kalman Filter are shown.
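For reference, the Extended Kalman Filter used as the baseline handles nonlinearity by linearizing the model around the current estimate. A minimal scalar sketch, with an invented measurement model z = x² + v rather than any econometric system from the thesis:

```python
import random

random.seed(3)

def ekf_step(x_hat, p, z, q=1e-4, r=0.01):
    # Predict: the state is static here; a little process noise keeps the gain open
    p = p + q
    # Update: linearize the measurement model h(x) = x**2 at the estimate
    h_jac = 2.0 * x_hat                  # dh/dx, the Jacobian
    s = h_jac * p * h_jac + r            # innovation covariance
    k = p * h_jac / s                    # Kalman gain
    x_hat = x_hat + k * (z - x_hat ** 2) # correct with the innovation
    p = (1.0 - k * h_jac) * p
    return x_hat, p

x_true = 2.0
x_hat, p = 1.0, 1.0                      # deliberately poor initial guess
for _ in range(100):
    z = x_true ** 2 + random.gauss(0.0, 0.1)   # noise std 0.1 -> variance r
    x_hat, p = ekf_step(x_hat, p, z)
# x_hat is now close to x_true
```

The first-order linearization is exactly where the EKF loses accuracy on strongly nonlinear or multimodal problems, which is the gap that sampling-based filters such as sequential importance resampling are meant to close.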
|
5 |
Vehicle Collision Risk Prediction Using a Dynamic Bayesian Network. Lindberg, Jonas; Wolfert Källman, Isak, January 2020
This thesis tackles the problem of predicting the collision risk for vehicles driving in complex traffic scenes a few seconds into the future. The method builds on previous research using dynamic Bayesian networks to represent the state of the system. Common risk prediction methods are often categorized into three groups by abstraction level; the most complex are interaction-aware models, which take driver interactions into account but often suffer from high computational complexity, a key limitation in practical use. The model studied in this work accounts for interactions between drivers by considering driver intentions and the traffic rules in the scene. The state of the traffic scene contains the physical state of the vehicles, the intentions of the drivers, and the behaviour expected of the drivers according to the traffic rules. To allow for real-time risk assessment, approximate inference of the state given the noisy sensor measurements is done using sequential importance resampling. Two measures of risk are studied. The first is based on driver intentions not matching the expected maneuver, which could lead to a dangerous situation. The second is based on a trajectory prediction step and uses time to collision (TTC) and time to critical collision probability (TTCCP). The implemented model can be applied in complex traffic scenarios with numerous participants; this work focuses on intersection and roundabout scenarios, and the model is tested on simulated and real data from them. In these qualitative tests, the model correctly identified collisions a few seconds before they occurred and avoided false positives by detecting the vehicles that would give way.
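Of the two trajectory-based measures, TTC reduces in the longitudinal case to a simple gap-over-closing-speed computation. A minimal sketch; the vehicle length and the scenario values are invented for illustration:

```python
def time_to_collision(pos_a, vel_a, pos_b, vel_b, length=4.5):
    """TTC in seconds for vehicle A following vehicle B along one lane axis.

    `length` is an assumed bumper-to-bumper vehicle length in meters.
    Returns None when the gap is not closing, i.e. no collision is predicted.
    """
    gap = (pos_b - pos_a) - length   # remaining free distance
    closing = vel_a - vel_b          # positive when A is catching up
    if gap <= 0.0 or closing <= 0.0:
        return None
    return gap / closing

# Follower at 20 m/s, 50 m behind a leader doing 15 m/s:
ttc = time_to_collision(0.0, 20.0, 50.0, 15.0)   # 45.5 m / 5 m/s = 9.1 s
```

TTCCP generalizes this deterministic quantity by evaluating collision probability over many sampled trajectories, which is where the particle representation of the state pays off.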
|
6 |
Selecting and Monitoring Insurance Risk on Reinsurance Treaties Using Predictive Analysis. Wu, Chiao-An (吳家安), Unknown Date
Insurers traditionally transfer their insurance risk through the international reinsurance market. Due to the uncertainty of the insured events, the primary insurer needs to carefully evaluate the insured risks and cede them to reinsurers. Two major types of reinsurance, pro rata treaties and excess of loss treaties, are used to protect against claim losses and to strengthen the insurer's solvency.
In this article, the predictive distribution of the claim size is constructed to monitor future claim underwriting losses under the reinsurance agreement. Sampling importance resampling (SIR) is employed to sample the posterior distribution of the risk parameters, and Monte Carlo simulation is then used to approximate the predictive distribution. Plausible prior distributions of the risk parameters are chosen when simulating their posterior distributions. A Markov chain Monte Carlo (MCMC) method using a Gibbs sampling scheme is also applied under possible parametric structures and used to determine the optimal retention. Both pro rata and excess of loss treaties are investigated to quantify the retention risks of the ceding company.
Our model focuses on the insurance risks themselves. Through the implemented model and simulation techniques, the primary insurer can better project his underwriting risks. The results show a significant advantage and flexibility of this approach in risk management. This article outlines the procedure for building the model, and a practical case study is finally performed for numerical illustration.
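The excess-of-loss retention analysis can be sketched as a compound-frequency-severity simulation. The claim-frequency and severity parameters, the retention level, and the per-risk treaty form below are all invented for illustration, not fitted values from the study:

```python
import math
import random

random.seed(4)

def poisson(lam):
    # Knuth's method: multiply uniforms until the product drops below exp(-lam)
    limit, k, prod = math.exp(-lam), 0, 1.0
    while prod > limit:
        k += 1
        prod *= random.random()
    return k - 1

def simulate_retained(retention, years=10000, lam=20.0, mu=8.0, sigma=1.0):
    # Predictive distribution of the cedent's annual retained loss under a
    # per-risk excess-of-loss treaty: the cedent keeps min(claim, retention),
    # the reinsurer pays the excess of each claim above the retention.
    totals = []
    for _ in range(years):
        n = poisson(lam)                               # claim count this year
        retained = sum(min(random.lognormvariate(mu, sigma), retention)
                       for _ in range(n))
        totals.append(retained)
    return totals

totals = sorted(simulate_retained(retention=20000.0))
mean_retained = sum(totals) / len(totals)
var_95 = totals[int(0.95 * len(totals))]   # 95th-percentile annual retained loss
```

Rerunning the simulation over a grid of retention levels and comparing the resulting tail percentiles against the reinsurance premium is one simple way to frame the optimal-retention decision the abstract describes.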
|