11 |
A novel method to increase depth of imaging in optical coherence tomography using ultrasound. Pereira Bogado, Pedro Fernando. 18 September 2012.
Optical coherence tomography (OCT) is a biomedical imaging technique with many current applications.
A limitation of the technique is its shallow depth of imaging.
A major factor limiting imaging depth in OCT is multiple-scattering of light.
This thesis proposes an integrated computational imaging approach to improve depth of imaging in OCT.
In this approach ultrasound patterns are used to modulate the refractive index of tissue.
Simulations of the impact of ultrasound on the refractive index are performed, and the results are shown in this thesis.
Simulations of the impact of the modulated refractive index on the propagation of light in tissue are also needed, but no suitable simulator is available.
We therefore implemented a Monte Carlo method for solving integral equations that could be used to perform these simulations.
Results for integral equations in 1-D and 2-D are shown.
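The thesis does not reproduce the estimator here, but the flavour of a Monte Carlo solver for integral equations can be illustrated with a random-walk (Neumann-series) estimator for a 1-D Fredholm equation of the second kind. The kernel, source term, survival probability and sample counts below are illustrative choices, not the author's; the sketch only assumes the Neumann series of the test problem converges.

```python
import numpy as np

def mc_fredholm(x0, f, K, lam, n_walks=200_000, q=0.5, rng=None):
    """Collision-type Monte Carlo estimate of u(x0) for the Fredholm equation
        u(x) = f(x) + lam * int_0^1 K(x, y) u(y) dy,
    using the Neumann series and a random walk killed with probability 1 - q."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_walks):
        x, w, score = x0, 1.0, f(x0)
        while rng.random() < q:                 # survive with probability q
            y = rng.random()                    # next point ~ Uniform(0, 1)
            w *= lam * K(x, y) / q              # importance weight update
            score += w * f(y)                   # accumulate Neumann-series term
            x = y
        total += score
    return total / n_walks

# Test problem with known solution u(x) = 1.5 * x:
#   u(x) = x + int_0^1 x * y * u(y) dy
est = mc_fredholm(0.7, f=lambda x: x, K=lambda x, y: x * y, lam=1.0)
print(est, "vs exact", 1.5 * 0.7)
```

The same collision-type estimator extends to 2-D problems by sampling the intermediate points from a two-dimensional density instead of Uniform(0, 1).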
|
13 |
Coarse Graining Monte Carlo Methods for Wireless Channels and Stochastic Differential Equations. Hoel, Håkon. January 2010.
This thesis consists of two papers considering different aspects of stochastic process modelling and the minimisation of computational cost.

In the first paper, we analyse statistical signal properties and develop a Gaussian process model for scenarios with a moving receiver in a scattering environment, as in Clarke's model, with the generalisation that noise is introduced through scatterers randomly flipping on and off as a function of time. The Gaussian process model is developed by extracting mean and covariance properties from the Multipath Fading Channel (MFC) model through coarse graining. That is, we verify that under certain assumptions, signal realisations of the MFC model converge to a Gaussian process, and thereafter compute the Gaussian process' covariance matrix, which is needed to construct Gaussian process signal realisations. Under certain assumptions, the resulting Gaussian process model is less computationally costly, contains more channel information, and has very similar signal properties to its corresponding MFC model. We also study the problem of fitting our model's flip rate and scatterer density to measured signal data.

The second paper generalises a multilevel Forward Euler Monte Carlo method introduced by Giles [1] for the approximation of expected values depending on the solution to an Ito stochastic differential equation. Giles' work [1] proposed and analysed a Forward Euler Multilevel Monte Carlo method based on realisations on a hierarchy of uniform time discretisations and a coarse-graining-based control variates idea to reduce the computational effort required by a standard single-level Forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretisations generated by adaptive algorithms developed by Moon et al. [3, 2]. These adaptive algorithms apply either deterministic time steps or stochastic time steps and are based on a posteriori error expansions first developed by Szepessy et al. [4]. Under sufficient regularity conditions, our numerical results, which include one case with singular drift and one with stopped diffusion, exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^-3) to O((log(TOL)/TOL)^2). We also include an analysis of a simplified version of the adaptive algorithm, for which we prove similar accuracy and computational cost results.
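As a companion to the abstract, here is a minimal sketch of the uniform-time-step multilevel Forward Euler (Euler-Maruyama) Monte Carlo estimator in the spirit of Giles [1], applied to a geometric Brownian motion so the answer can be checked in closed form. The adaptive, non-uniform hierarchies that are the paper's actual contribution are not reproduced; the drift, volatility, level count and sample sizes are illustrative.

```python
import numpy as np

def euler_gbm_pair(n_paths, n_fine, T, x0, mu, sigma, rng):
    """Coupled fine/coarse Euler-Maruyama paths of dX = mu*X dt + sigma*X dW.
    The coarse path uses half as many steps and sums pairs of the fine
    Brownian increments (the multilevel coupling)."""
    hf = T / n_fine
    dW = rng.normal(0.0, np.sqrt(hf), size=(n_paths, n_fine))
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    for k in range(n_fine):
        xf += mu * xf * hf + sigma * xf * dW[:, k]
    if n_fine > 1:
        hc = 2 * hf
        dWc = dW[:, 0::2] + dW[:, 1::2]
        for k in range(n_fine // 2):
            xc += mu * xc * hc + sigma * xc * dWc[:, k]
    return xf, xc

def mlmc_mean(L, n_paths, T=1.0, x0=1.0, mu=0.05, sigma=0.2, seed=0):
    """Telescoping MLMC estimator of E[X_T]: level 0 uses a single Euler step,
    level l adds the mean correction E[P_l - P_{l-1}]."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for l in range(L + 1):
        n_fine = 2 ** l
        xf, xc = euler_gbm_pair(n_paths, n_fine, T, x0, mu, sigma, rng)
        est += np.mean(xf) if l == 0 else np.mean(xf - xc)
    return est

print(mlmc_mean(L=6, n_paths=100_000), "vs exact", np.exp(0.05))  # E[X_1] = exp(mu*T)
```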
|
14 |
Collimator Width Optimization in X-ray Luminescent Computed Tomography. Mishra, Sourav. 17 June 2013.
X-ray Luminescent Computed Tomography (XLCT) is a new imaging modality currently under extensive trials. The modality works by selectively exciting X-ray-sensitive nanophosphors and detecting the optical signal thus generated. This system can be used to recreate high-quality tomographic slices even with a low X-ray dose. Many studies have reported successful validation of the underlying philosophy. However, there is still a lack of information about the optimal settings, or combinations of imaging parameters, that would yield the best output. Research groups participating in this area have reported results on the basis of dose, signal-to-noise ratio or resolution only.

In this thesis, the candidate has evaluated XLCT taking noise and resolution into consideration in terms of composite indices. Simulations have been performed for various beam widths, and noise and resolution metrics deduced. This information has been used to evaluate image quality on the basis of a CT figure of merit and a modified Wang-Bovik image quality index. Simulations indicate the presence of an optimal setting which can be chosen prior to extensive scans. The conducted study, although focusing on a particular implementation, hopes to establish a paradigm for finding the best settings for any XLCT system. Scanning with a preconfigured optimal setting can help vastly reduce the cost and risks involved with this imaging modality.
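For reference, the standard (unmodified) Wang-Bovik universal image quality index that such composite metrics build on can be computed as below; the global, single-window form is shown, and the thesis's modified index and CT figure of merit are not reproduced.

```python
import numpy as np

def universal_quality_index(x, y):
    """Standard Wang-Bovik universal image quality index between a reference
    image x and a test image y (global version, no sliding window).
    Q lies in [-1, 1]; Q = 1 only when y is identical to x."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

# Example: a noisy reconstruction scores below a clean one.
rng = np.random.default_rng(1)
ref = rng.random((64, 64)) + 1.0
print(universal_quality_index(ref, ref))                                      # 1.0
print(universal_quality_index(ref, ref + 0.1 * rng.standard_normal(ref.shape)))
```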
|
15 |
Modelling the volatility smile for interest rate derivatives / Multi-factor stochastic volatility for interest rates modeling. Palidda, Ernesto. 29 May 2015.
This thesis studies a model for the dynamics of the interest rate curve for the pricing and hedging of derivative products; in particular, we wish to model the dynamics of prices that depend on volatility. Market practice is to use a parametric representation of the market and to build hedging portfolios by computing sensitivities with respect to the model parameters. Since the model parameters are recalibrated daily so that the model reproduces market prices, the self-financing property does not hold. Our approach is different: we replace the parameters by factors, which are assumed to be stochastic, and build hedging portfolios by cancelling the price sensitivities to these factors. The portfolios obtained in this way satisfy the self-financing property.

This PhD thesis is devoted to the study of an affine term structure model in which Wishart-like processes model the stochastic variance-covariance of interest rates. This work was initially motivated by some thoughts on calibration and model risk in hedging interest rate derivatives. The ambition of our work is to build a model which reduces as much as possible the noise coming from daily re-calibration of the model to the market. It is standard market practice to hedge interest rate derivatives using models with parameters that are calibrated on a daily basis to fit the market prices of a set of well-chosen instruments (typically the instruments that will be used to hedge the derivative). The model assumes that the parameters are constant, and the model price is based on this assumption; however, since these parameters are re-calibrated, they become in fact stochastic. Therefore, calibration introduces additional terms in the price dynamics (precisely in the drift term), which can lead to poor P&L explain and to mishedging. The initial idea of our research work is to replace the parameters by factors, assume a dynamics for these factors, and keep all the parameters involved in the model constant. Instead of calibrating the parameters to the market, we fit the values of the factors to the observed market prices.

A large part of this work has been devoted to the development of an efficient numerical framework to implement the model. We study second-order discretization schemes for Monte Carlo simulation of the model, as well as efficient methods for pricing vanilla instruments such as swaptions and caplets. In particular, we investigate expansion techniques for the prices and volatilities of caplets and swaptions; the arguments we use to obtain the expansions rely on an expansion of the infinitesimal generator with respect to a perturbation factor. Finally, we have studied the calibration problem. As mentioned before, the idea of the model studied in this thesis is to keep the parameters constant and calibrate the values of the factors to fit the market. In particular, we need to calibrate the initial values (or the variations) of the Wishart-like process to fit the market, which introduces a positive semidefinite constraint into the optimization problem. Semidefinite programming (SDP) provides a natural framework for handling this constraint.
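The Wishart-like affine model and its second-order discretization schemes are not reproduced here. As a hedged illustration of the kind of vanilla pricer such a framework is usually validated against, the sketch below Monte Carlo-prices a caplet on a forward rate that is lognormal under its forward measure and compares it with the closed-form Black price. All market parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

def caplet_mc_and_black(F0=0.03, K=0.03, sigma=0.2, T=1.0, delta=0.5,
                        P0=0.97, n_paths=1_000_000, seed=0):
    """Caplet on a forward rate F that is lognormal under its forward measure:
    payoff delta * max(F(T) - K, 0), discounted by the bond price P(0, T + delta).
    Returns (Monte Carlo price, closed-form Black price) for comparison."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    FT = F0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * z)   # exact simulation
    mc = P0 * delta * np.maximum(FT - K, 0.0).mean()
    d1 = (np.log(F0 / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    black = P0 * delta * (F0 * norm.cdf(d1) - K * norm.cdf(d2))
    return mc, black

print(caplet_mc_and_black())
```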
|
16 |
Searching for the optimal control strategy of epidemics spreading on different types of networks. Oleś, Katarzyna A. January 2014.
The main goal of my studies has been to search for the optimal strategy for controlling epidemics when taking into account both the economic and social costs of the disease. Three control scenarios emerge: treating the whole population (global strategy, GS), treating a small number of individuals in a well-defined neighbourhood of a detected case (local strategy, LS), and allowing the disease to spread unchecked (null strategy, NS). The choice of the optimal strategy is governed mainly by the relative cost of palliative and preventive treatments. Although the properties of the pathogen might not be known in advance for emerging diseases, the prediction of the optimal strategy can be made based on economic analysis only. The details of the local strategy, and in particular the size of the optimal treatment neighbourhood, depend weakly on disease infectivity but strongly on other epidemiological factors (the rate at which symptoms occur, spontaneous recovery). The required extent of prevention is proportional to the size of the infection neighbourhood, but this relationship depends on the time until detection and the time until treatment through a non-linear (power) law. Spontaneous recovery also affects the choice of the control strategy.

I have extended my results to two contrasting and yet complementary models, in which individuals that have been through the disease can either be treated or not. Whether the removed individuals (i.e., those who have been through the disease but then spontaneously recover or die) are part of the treatment plan depends on the type of the disease agent. The key factor in choosing the right model is whether it is possible, and desirable, to distinguish such individuals from those who are susceptible. If the removed class is identified with dead individuals, the distinction is very clear. However, if removal means recovery and immunity, it might not be possible to identify those who are immune. The models are similar in their epidemiological part, but differ in how the removed/recovered individuals are treated. The differences between the models affect the choice of strategy only for very cheap treatment and slowly spreading diseases. However, for the combinations of parameters that are important from the epidemiological perspective (high infectiousness and expensive treatment) the models give similar results. Moreover, even where the choice of strategy differs, the total cost spent on controlling the epidemic is very similar for both models.

Although regular and small-world networks capture some aspects of the structure of real networks of contacts between people, animals or plants, they do not include the effect of clustering noted in many real-life applications. The use of random clustered networks in epidemiological modelling takes an important step towards applying the modelling framework to realistic systems. Network topology, and in particular clustering, also affects the applicability of the control strategy.
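As a toy illustration of the local strategy (not the thesis's model or parameters), the sketch below runs a discrete-time SIR epidemic on a small-world contact network and, on detection of a case, treats every node within graph distance z of it, returning a combined infection-plus-treatment cost. The rates, costs and network are illustrative assumptions.

```python
import random
import networkx as nx

def simulate_local_control(G, beta=0.3, recovery=0.05, detect=0.2,
                           z=1, cost_infection=1.0, cost_treatment=0.2, seed=0):
    """Toy discrete-time SIR epidemic on graph G with a local control strategy:
    when an infected node is detected, every node within graph distance z of it
    is treated (moved to the removed class). Returns the total cost
    cost_infection * (#ever infected) + cost_treatment * (#treated)."""
    rng = random.Random(seed)
    status = {n: "S" for n in G}                 # S, I or R
    start = rng.choice(list(G))
    status[start] = "I"
    ever_infected, treated = {start}, set()
    while any(s == "I" for s in status.values()):
        infected = [n for n, s in status.items() if s == "I"]
        for n in infected:
            for nb in G.neighbors(n):            # transmission to susceptible neighbours
                if status[nb] == "S" and rng.random() < beta:
                    status[nb] = "I"
                    ever_infected.add(nb)
            if rng.random() < recovery:          # spontaneous recovery
                status[n] = "R"
            elif rng.random() < detect:          # detection triggers local treatment
                for m in nx.single_source_shortest_path_length(G, n, cutoff=z):
                    if status[m] != "R":
                        status[m] = "R"
                        treated.add(m)
    return cost_infection * len(ever_infected) + cost_treatment * len(treated)

G = nx.watts_strogatz_graph(500, 4, 0.05, seed=1)    # small-world contact network
print(simulate_local_control(G, z=1), simulate_local_control(G, z=3))
```

Sweeping z against the relative treatment cost gives a simple way to explore the GS/LS/NS trade-off described above.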
|
17 |
Investigation of a discrete velocity Monte Carlo Boltzmann equation. Morris, Aaron Benjamin. 03 September 2009.
A new discrete velocity scheme for solving the Boltzmann equation has been implemented for homogeneous relaxation and one-dimensional problems. Directly solving the Boltzmann equation is computationally expensive because, in addition to working in physical space, the nonlinear collision integral must also be evaluated in velocity space. To solve the collision integral exactly, collisions between each point in velocity space and all other points in velocity space must be considered, which is very expensive. Motivated by the Direct Simulation Monte Carlo (DSMC) method, the computational costs in the present method are reduced by randomly sampling a set of collision partners for each point in velocity space. A collision partner selection algorithm was implemented to favor collision partners that contribute more to the collision integral. The new scheme has a built-in flexibility: the resolution of the approximation to the collision integral can be adjusted by changing how many collision partners are sampled. The computational cost associated with evaluating the collision integral is compared to the corresponding statistical error.

Having a fixed set of velocities can artificially limit the collision outcomes by restricting post-collision velocities to those that satisfy the conservation equations and lie precisely on the grid. A new velocity interpolation algorithm enables us to map velocities that do not lie on the grid to nearby grid points while preserving mass, momentum, and energy. This allows arbitrary post-collision velocities that lie between grid points, or completely outside the velocity space, to be projected back onto nearby grid points.

The present scheme is applied to homogeneous relaxation of the non-equilibrium Bobylev Krook-Wu distribution, and the numerical results agree well with the analytic solution. After verifying the proposed method for spatially homogeneous relaxation problems, the scheme was used to solve a 1-D traveling shock. The jump conditions across the shock match the Rankine-Hugoniot jump conditions. The internal shock wave structure was then compared to DSMC solutions, and good agreement was found for Mach numbers ranging from 1.2 to 6. Since a coarse velocity discretization is required for efficient calculation, the effects of different velocity grid resolutions are examined. Although using a relatively coarse approximation for the collision integral is computationally efficient, statistical noise pollutes the solution. The effects of using coarse and fine approximations for the collision integral are examined, and it is found that by coarsely evaluating the collision integral, the computational time can be reduced by nearly two orders of magnitude while retaining relatively smooth macroscopic properties.
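The thesis's exact interpolation algorithm is not given in the abstract; the sketch below shows one simple way to remap a single off-grid velocity onto its three nearest 1-D grid points while conserving mass, momentum and energy, by solving a small moment-matching linear system. The grid and values are illustrative, and the handling of negative weights near the grid edge is left out.

```python
import numpy as np

def remap_velocity(v, w, grid):
    """Project a single off-grid particle (velocity v, weight w) onto the three
    nearest grid velocities so that mass, momentum and kinetic energy are all
    preserved exactly. Solves a Vandermonde-type 3x3 system; this is a sketch of
    the general idea only (weights can come out negative near the grid edge,
    which a production scheme would have to treat separately)."""
    idx = np.sort(np.argsort(np.abs(grid - v))[:3])   # three nearest grid points
    vg = grid[idx]
    A = np.array([np.ones(3), vg, vg**2])             # moment-matching conditions
    b = w * np.array([1.0, v, v**2])
    return idx, np.linalg.solve(A, b)

grid = np.linspace(-5.0, 5.0, 21)                     # uniform velocity grid
idx, wts = remap_velocity(1.3, 1.0, grid)
vg = grid[idx]
print(wts.sum(), (wts * vg).sum(), (wts * vg**2).sum())   # 1.0, 1.3, 1.69
```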
|
18 |
Suitability of FPGA-based computing for cyber-physical systems. Lauzon, Thomas Charles. 18 August 2010.
Cyber-Physical Systems theory is a new concept that is about to revolutionize the way computers interact with the physical world, by integrating physical knowledge into computing systems and tailoring such computing systems in a way that is more compatible with how processes happen in the physical world. In this master's thesis, Field Programmable Gate Arrays (FPGAs) are studied as a potential technological asset that may contribute to enabling the Cyber-Physical paradigm. As an example application that may benefit from cyber-physical system support, the Electro-Slag Remelting process, a process for remelting metals into better alloys, has been chosen due to the maturity of its related physical models and controller designs. In particular, the Particle Filter that estimates the state of the process is studied as a candidate for FPGA-based computing enhancements. In comparison with CPUs, through the designs and experiments carried out in relation to this study, the FPGA reveals itself as a serious contender in the arsenal of computing means for Cyber-Physical Systems, due to its capacity to mimic the ubiquitous parallelism of physical processes.
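The Electro-Slag Remelting process model and the thesis's FPGA design are not reproduced here; the sketch below is a generic bootstrap particle filter on a toy scalar state-space model, included only to show the per-particle propagate/weight/resample structure whose data parallelism an FPGA implementation would exploit. The model, noise levels and particle count are illustrative assumptions.

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles=2_000, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for the toy scalar model
        x_k = 0.9 * x_{k-1} + N(0, q^2),    y_k = x_k**2 / 20 + N(0, r^2).
    Returns the filtered state mean at each step. The propagate and weight steps
    are independent across particles, which suits fine-grained hardware parallelism."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)                     # initial particle cloud
    means = []
    for y in obs:
        x = 0.9 * x + rng.normal(0.0, q, n_particles)         # propagate
        w = np.exp(-0.5 * ((y - x**2 / 20) / r) ** 2)         # weight by likelihood
        w /= w.sum()
        means.append(np.dot(w, x))
        x = x[rng.choice(n_particles, n_particles, p=w)]      # multinomial resampling
    return np.array(means)

# Synthetic run: simulate the toy model, then filter its observations.
rng = np.random.default_rng(1)
xs, x = [], 0.5
for _ in range(50):
    x = 0.9 * x + rng.normal(0.0, 0.1)
    xs.append(x)
ys = [xi**2 / 20 + rng.normal(0.0, 0.5) for xi in xs]
print(bootstrap_particle_filter(ys)[:5])
```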
|
20 |
Time-varying frequency analysis of bat echolocation signals using Monte Carlo methods. Nagappa, Sharad. January 2010.
Echolocation in bats is a subject that has received much attention over the last few decades. Bat echolocation calls have evolved over millions of years and can be regarded as well suited to the task of active target detection. In analysing the time-frequency structure of bat calls, it is hoped that some insight can be gained into their capabilities and limitations. Most analysis of calls is performed using non-parametric techniques such as the short-time Fourier transform. The resulting time-frequency distributions are often ambiguous, leading to further uncertainty in any subsequent analysis which depends on the time-frequency distribution. There is thus a need to develop a method which allows improved time-frequency characterisation of bat echolocation calls.

The aim of this work is to develop a parametric approach for signal analysis, specifically taking into account the varied nature of bat echolocation calls in the signal model. A time-varying harmonic signal model with a polynomial chirp basis is used to track the instantaneous frequency components of the signal. The model is placed within a Bayesian context, and a particle filter is used to implement the filter. Marginalisation of parameters is considered, leading to the development of a new marginalised particle filter (MPF) which is used to implement the algorithm. Efficient reversible jump moves are formulated for estimation of the unknown (and varying) number of frequency components and higher harmonics.

The algorithm is applied to the analysis of synthetic signals, and the performance is compared with an existing algorithm in the literature which relies on the Rao-Blackwellised particle filter (RBPF) for online state estimation and a jump Markov system for estimation of the unknown number of harmonic components. A comparison of the relative complexity of the RBPF and the MPF is presented. Additionally, it is shown that the MPF-based algorithm performs no worse than the RBPF, and in some cases better, for the test signals considered. Comparisons are also presented for various reversible jump sampling schemes for estimation of the time-varying number of tones and harmonics.

The algorithm is subsequently applied to the analysis of bat echolocation calls to establish the improvements obtained from the new algorithm. The calls considered are both amplitude and frequency modulated and are of varying durations. The calls are analysed using polynomial basis functions of different orders, and the performance of these basis functions is compared. Inharmonicity, which is the deviation of overtones away from integer multiples of the fundamental frequency, is examined in echolocation calls from several bat species. The results conclude with an application of the algorithm to the analysis of calls from the feeding buzz, a sequence of extremely short duration calls emitted at a high pulse repetition frequency, where it is shown that reasonable time-frequency characterisation can be achieved for these calls.
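The MPF inference itself is not reproduced here; as a small illustration of the signal model in the abstract, the sketch below synthesises a frequency-modulated call whose fundamental follows a polynomial chirp, with a few harmonics and an optional inharmonicity factor that pushes overtones off integer multiples of the fundamental. The sampling rate, sweep and amplitudes are illustrative choices.

```python
import numpy as np

def harmonic_chirp(coeffs, amps, fs=500_000, dur=0.005, inharm=None):
    """Synthesise a call with a polynomial-chirp fundamental
        f0(t) = c0 + c1*t + c2*t**2 + ...   (coeffs in Hz, Hz/s, Hz/s^2, ...).
    Harmonic k has amplitude amps[k] and frequency (k+1)*f0(t), optionally scaled
    by an inharmonicity factor inharm[k]. Returns the time axis and the signal."""
    t = np.arange(0.0, dur, 1.0 / fs)
    # Term-wise integration of f0 gives the fundamental phase: 2*pi*(c0*t + c1*t^2/2 + ...)
    int_coeffs = np.concatenate(([0.0], np.asarray(coeffs) / np.arange(1, len(coeffs) + 1)))
    phase0 = 2 * np.pi * np.polynomial.polynomial.polyval(t, int_coeffs)
    inharm = np.ones(len(amps)) if inharm is None else np.asarray(inharm)
    s = sum(a * np.sin((k + 1) * g * phase0)
            for k, (a, g) in enumerate(zip(amps, inharm)))
    return t, s

# Downward FM sweep from 60 kHz falling at 8 MHz/s, with two overtones,
# the second overtone pushed 2% sharp to mimic inharmonicity.
t, s = harmonic_chirp([60_000.0, -8e6], amps=[1.0, 0.4, 0.2], inharm=[1.0, 1.0, 1.02])
print(t.shape, s.shape)
```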
|