11

Techniques to handle missing values in a factor analysis

Turville, Christopher, University of Western Sydney, Faculty of Informatics, Science and Technology January 2000 (has links)
A factor analysis typically involves a large collection of data, and it is common for some of the data to be unrecorded. This study investigates the ability of several techniques to handle missing values in a factor analysis, including complete cases only, all available cases, imputing means, an iterative component method, singular value decomposition and the EM algorithm. A data set representative of those used for a factor analysis is simulated. Some of these data are then randomly removed to represent missing values, and the performance of the techniques is investigated over a wide range of conditions, using several criteria. Overall, no one technique performs best for all of the conditions studied. The EM algorithm is generally the most effective technique except when ill-conditioned matrices are present or when computing time is of concern. Some theoretical concerns are introduced regarding the effects that changes in the correlation matrix have on the loadings of a factor analysis. A complicated expression is derived, showing that the change in factor loadings resulting from a change in the elements of a correlation matrix involves components of the eigenvectors and eigenvalues. / Doctor of Philosophy (PhD)
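As a rough illustration of how such a comparison can be set up (this is not the study's code; the simulated data, missing-data rate and discrepancy criterion are assumptions, and scikit-learn's IterativeImputer is used only as a stand-in for an EM-style approach):

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401, enables IterativeImputer
from sklearn.impute import SimpleImputer, IterativeImputer
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate data with a known 3-factor structure (sizes chosen for illustration).
n, p, k = 500, 10, 3
loadings = rng.normal(size=(p, k))
scores = rng.normal(size=(n, k))
X = scores @ loadings.T + 0.3 * rng.normal(size=(n, p))

# Remove 10% of the values completely at random to mimic missing data.
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.10] = np.nan

imputers = {
    "impute means": SimpleImputer(strategy="mean"),
    "iterative (EM-style stand-in)": IterativeImputer(max_iter=20, random_state=0),
}

fa_full = FactorAnalysis(n_components=k).fit(X)  # reference fit on the complete data
for name, imputer in imputers.items():
    fa = FactorAnalysis(n_components=k).fit(imputer.fit_transform(X_missing))
    # Crude criterion: distance between absolute loadings of the two fits
    # (rotation/sign indeterminacy of factor solutions is ignored for brevity).
    err = np.linalg.norm(np.abs(fa.components_) - np.abs(fa_full.components_))
    print(f"{name:32s} loading discrepancy: {err:.3f}")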
12

Complexity and Error Analysis of Numerical Methods for Wireless Channels, SDE, Random Variables and Quantum Mechanics

Hoel, Håkon January 2012 (has links)
This thesis consists of four papers, which consider different aspects of stochastic process modeling, error analysis, and minimization of computational cost. In Paper I, we construct a Multipath Fading Channel (MFC) model for wireless channels with noise introduced through scatterers flipping on and off. By coarse graining the MFC model, a Gaussian process channel model is developed. Complexity and accuracy comparisons of the models are conducted. In Paper II, we generalize a multilevel Forward Euler Monte Carlo method introduced by Mike Giles for the approximation of expected values depending on solutions of Ito stochastic differential equations. Giles' work proposed and analyzed a Forward Euler Multilevel Monte Carlo (MLMC) method based on realizations on a hierarchy of uniform time discretizations and a coarse-graining-based control variates idea to reduce the computational cost required by a standard single-level Forward Euler Monte Carlo method. This work extends Giles' MLMC method from uniform to adaptive time grids. It has the same improvement in computational cost and is applicable to a larger set of problems. In Paper III, we consider the problem of estimating the mean of a random variable by a sequential stopping rule Monte Carlo method. The performance of a typical second-moment-based sequential stopping rule is shown to be unreliable both by numerical examples and by analytical arguments. Based on analysis and approximation of error bounds, we construct a higher-moment-based stopping rule which performs more reliably. In Paper IV, Born-Oppenheimer dynamics is shown to provide an accurate approximation of time-independent Schrödinger observables for a molecular system with an electron spectral gap, in the limit of large ratio of nuclei and electron masses, without assuming that the nuclei are localized to vanishing domains. The derivation, based on a Hamiltonian system interpretation of the Schrödinger equation and stability of the corresponding hitting time Hamilton-Jacobi equation for non-ergodic dynamics, bypasses the usual separation of nuclei and electron wave functions, includes caustic states, and gives a different perspective on the Born-Oppenheimer approximation, Schrödinger Hamiltonian systems and numerical simulation in molecular dynamics modeling at constant energy. / QC 20120508
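To make the MLMC construction in Paper II concrete, a minimal sketch (not from the thesis) of the uniform-grid Forward Euler Multilevel Monte Carlo estimator that it generalizes; the drift, diffusion, observable and sample allocations below are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(1)
a = lambda x: -x           # drift (illustrative assumption)
b = lambda x: 0.5 + 0 * x  # diffusion (illustrative assumption)
g = lambda x: x ** 2       # observable whose expectation we estimate (assumed)
T, x0, M = 1.0, 1.0, 2     # final time, initial state, grid refinement factor

def level_samples(level, n):
    """Samples of g(X_T) on level 0, or of the fine-minus-coarse correction on level > 0,
    simulated with Forward Euler and a shared Brownian path (the MLMC coupling)."""
    n_f = M ** level
    dt_f = T / n_f
    xf = np.full(n, x0)
    if level == 0:
        for _ in range(n_f):
            xf += a(xf) * dt_f + b(xf) * np.sqrt(dt_f) * rng.standard_normal(n)
        return g(xf)
    xc = np.full(n, x0)
    dt_c = M * dt_f
    for _ in range(n_f // M):
        dW_c = np.zeros(n)
        for _ in range(M):                      # M fine steps per coarse step
            dW = np.sqrt(dt_f) * rng.standard_normal(n)
            xf += a(xf) * dt_f + b(xf) * dW
            dW_c += dW
        xc += a(xc) * dt_c + b(xc) * dW_c       # coarse step reuses the summed increments
    return g(xf) - g(xc)

# Telescoping sum: E[g(X_L)] = E[g(X_0)] + sum_l E[g(X_l) - g(X_{l-1})].
n_per_level = [40000, 20000, 10000, 5000, 2500]  # assumed sample allocation per level
estimate = sum(level_samples(l, n).mean() for l, n in enumerate(n_per_level))
print("MLMC estimate of E[g(X_T)]:", estimate)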
13

A novel method to increase depth of imaging in optical coherence tomography using ultrasound

Pereira Bogado, Pedro Fernando 18 September 2012 (has links)
Optical coherence tomography (OCT) is a biomedical imaging technique with many current applications. A limitation of the technique is its shallow depth of imaging, and a major factor limiting imaging depth in OCT is multiple scattering of light. This thesis proposes an integrated computational imaging approach to improve depth of imaging in OCT, in which ultrasound patterns are used to modulate the refractive index of tissue. Simulations of the impact of ultrasound on the refractive index are performed, and the results are shown in this thesis. Simulations of the impact of the modulated refractive index on the propagation of light in tissue are also needed, but no suitable simulator is available. We therefore implemented a Monte Carlo method for solving integral equations that could be used to perform these simulations. Results for integral equations in 1-D and 2-D are shown.
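A minimal sketch (not the thesis code) of the kind of Monte Carlo method mentioned above for integral equations: a 1-D Fredholm equation of the second kind, u(x) = f(x) + lam * integral_0^1 K(x, y) u(y) dy, estimated at a single point by a randomly terminated walk over its Neumann series. The kernel, source term and lam are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(2)
lam = 0.5
K = lambda x, y: x * y               # assumed kernel
f = lambda x: 1.0                    # assumed source term

def u_estimate(x, n_walks=50_000, p_stop=0.5):
    total = 0.0
    for _ in range(n_walks):
        est, w, y = f(x), 1.0, x
        while rng.random() > p_stop:                  # continue the walk with probability 1 - p_stop
            y_next = rng.random()                     # next point sampled uniformly on [0, 1]
            w *= lam * K(y, y_next) / (1.0 - p_stop)  # importance weight for this term of the series
            est += w * f(y_next)
            y = y_next
        total += est
    return total / n_walks

# For K(x, y) = x*y and f = 1, the exact solution is u(x) = 1 + lam*x / (2 - 2*lam/3).
x = 0.7
print("MC estimate:", u_estimate(x), "   exact:", 1 + lam * x / (2 - 2 * lam / 3))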
15

Coarse Graining Monte Carlo Methods for Wireless Channels and Stochastic Differential Equations

Hoel, Håkon January 2010 (has links)
This thesis consists of two papers considering different aspects of stochastic process modelling and the minimisation of computational cost. In the first paper, we analyse statistical signal properties and develop a Gaussian process model for scenarios with a moving receiver in a scattering environment, as in Clarke's model, with the generalisation that noise is introduced through scatterers randomly flipping on and off as a function of time. The Gaussian process model is developed by extracting mean and covariance properties from the Multipath Fading Channel (MFC) model through coarse graining. That is, we verify that under certain assumptions, signal realisations of the MFC model converge to a Gaussian process and thereafter compute the Gaussian process' covariance matrix, which is needed to construct Gaussian process signal realisations. Under certain assumptions, the obtained Gaussian process model is less computationally costly, contains more channel information and has very similar signal properties to its corresponding MFC model. We also study the problem of fitting our model's flip rate and scatterer density to measured signal data. The second paper generalises a multilevel Forward Euler Monte Carlo method introduced by Giles [1] for the approximation of expected values depending on the solution to an Ito stochastic differential equation. Giles' work [1] proposed and analysed a Forward Euler Multilevel Monte Carlo method based on realisations on a hierarchy of uniform time discretisations and a coarse-graining-based control variates idea to reduce the computational effort required by a standard single-level Forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretisations generated by adaptive algorithms developed by Moon et al. [3, 2]. These adaptive algorithms apply either deterministic time steps or stochastic time steps and are based on a posteriori error expansions first developed by Szepessy et al. [4]. Under sufficient regularity conditions, our numerical results, which include one case with singular drift and one with stopped diffusion, exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^-3) to O((log(TOL)/TOL)^2). We also include an analysis of a simplified version of the adaptive algorithm, for which we prove similar accuracy and computational cost results.
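A minimal sketch (not the thesis code) of the final step described in the first paper: drawing Gaussian process signal realisations from a covariance matrix via a Cholesky factorisation. A Clarke-type Bessel autocorrelation J0(2*pi*f_D*tau) is used here purely as an illustrative stand-in for the covariance extracted from the MFC model, and the Doppler frequency and sampling times are assumptions:

import numpy as np
from scipy.special import j0

rng = np.random.default_rng(3)
f_D = 100.0                                   # maximum Doppler frequency in Hz (assumed)
t = np.linspace(0.0, 0.05, 256)               # sampling times (assumed)
tau = np.abs(t[:, None] - t[None, :])
C = j0(2 * np.pi * f_D * tau)                 # covariance matrix from the autocorrelation
L = np.linalg.cholesky(C + 1e-8 * np.eye(len(t)))   # small jitter for numerical stability

# One realisation of the (real part of the) fading process.
signal = L @ rng.standard_normal(len(t))
print(signal[:5])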
16

Collimator Width Optimization in X-ray Luminescent Computed Tomography

Mishra, Sourav 17 June 2013 (has links)
X-ray Luminescent Computed Tomography (XLCT) is a new imaging modality which is currently under extensive trials. The modality works by selectively exciting X-ray-sensitive nanophosphors and detecting the optical signal thus generated. This system can be used to recreate high-quality tomographic slices even with a low X-ray dose. Many studies have reported successful validation of the underlying philosophy. However, there is still a lack of information about the optimal settings or combinations of imaging parameters that would yield the best outputs. Research groups participating in this area have reported results on the basis of dose, signal-to-noise ratio or resolution only. In this thesis, the candidate has evaluated XLCT taking both noise and resolution into consideration through composite indices. Simulations have been performed for various beam widths, and noise and resolution metrics have been deduced. This information has been used to evaluate image quality on the basis of a CT Figure of Merit and a modified Wang-Bovik Image Quality index. Simulations indicate the presence of an optimal setting which can be configured prior to extensive scans. The conducted study, although focusing on a particular implementation, hopes to establish a paradigm for finding the best settings for any XLCT system. Scanning with an optimal setting preconfigured can help vastly reduce the cost and risks involved with this imaging modality. / Master of Science
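For reference, a sketch of the standard (unmodified) Wang-Bovik universal image quality index, Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y)) * (mean(x)^2+mean(y)^2)), computed globally here rather than averaged over local windows as in the original formulation. It only illustrates the kind of composite metric referred to above; the thesis uses a modified version, and the test images below are assumptions:

import numpy as np

def wang_bovik_q(x, y):
    # Standard universal image quality index between a reference x and a test image y;
    # Q = 1 means the images are identical.
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(4)
reference = rng.random((64, 64))                              # stand-in for a reference slice
degraded = reference + 0.05 * rng.standard_normal((64, 64))   # noisy reconstruction
print("Q =", wang_bovik_q(reference, degraded))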
17

Modélisation du smile de volatilité pour les produits dérivés de taux d'intérêt / Multi factor stochastic volatility for interest rates modeling

Palidda, Ernesto 29 May 2015 (has links)
The purpose of this thesis is the study of a model for the dynamics of the interest rate curve, for the pricing and management of derivative products. In particular, we want to model price dynamics that depend on volatility. Market practice consists in using a parametric representation of the market and building hedging portfolios by computing sensitivities with respect to the model parameters. Since the model parameters are calibrated daily so that the model reproduces market prices, the self-financing property does not hold. Our approach is different: it consists in replacing the parameters by factors, which are assumed to be stochastic. Hedging portfolios are built by cancelling the sensitivities of prices to these factors, and the portfolios obtained in this way satisfy the self-financing property. / This PhD thesis is devoted to the study of an Affine Term Structure Model where we use Wishart-like processes to model the stochastic variance-covariance of interest rates. This work was initially motivated by some thoughts on calibration and model risk in hedging interest rate derivatives. The ambition of our work is to build a model which reduces as much as possible the noise coming from daily re-calibration of the model to the market. It is standard market practice to hedge interest rate derivatives using models with parameters that are calibrated on a daily basis to fit the market prices of a set of well-chosen instruments (typically the instruments that will be used to hedge the derivative). The model assumes that the parameters are constant, and the model price is based on this assumption; however, since these parameters are re-calibrated, they become in fact stochastic. Therefore, calibration introduces some additional terms in the price dynamics (precisely in its drift term) which can lead to poor P&L explain and mishedging. The initial idea of our research work is to replace the parameters by factors, assume a dynamics for these factors, and keep all the parameters involved in the model constant. Instead of calibrating the parameters to the market, we fit the values of the factors to the observed market prices. A large part of this work has been devoted to the development of an efficient numerical framework to implement the model. We study second-order discretization schemes for Monte Carlo simulation of the model. We also study efficient methods for pricing vanilla instruments such as swaptions and caplets. In particular, we investigate expansion techniques for prices and volatilities of caplets and swaptions. The arguments that we use to obtain the expansions rely on an expansion of the infinitesimal generator with respect to a perturbation factor. Finally, we have studied the calibration problem. As mentioned before, the idea of the model we study in this thesis is to keep the parameters of the model constant and calibrate the values of the factors to fit the market. In particular, we need to calibrate the initial values (or the variations) of the Wishart-like process to fit the market, which introduces a positive semidefinite constraint in the optimization problem. Semidefinite programming (SDP) gives a natural framework to handle this constraint.
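A generic illustration (not the calibration routine of the thesis) of the positive semidefinite constraint mentioned above: projecting a symmetric candidate for the initial value of the Wishart-like factor process onto the PSD cone by clipping negative eigenvalues, a common building block in projection-based calibration schemes. The candidate matrix is an assumption:

import numpy as np

def nearest_psd(S):
    """Projection of a symmetric matrix onto the PSD cone in the Frobenius norm."""
    S = 0.5 * (S + S.T)                      # symmetrise first
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

candidate = np.array([[ 1.0, 0.9, -0.3],
                      [ 0.9, 0.5,  0.2],
                      [-0.3, 0.2,  0.4]])    # assumed output of an unconstrained fitting step
X0 = nearest_psd(candidate)
print(np.linalg.eigvalsh(X0))                # all eigenvalues are now >= 0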
18

Searching for the optimal control strategy of epidemics spreading on different types of networks

Oleś, Katarzyna A. January 2014 (has links)
The main goal of my studies has been to search for the optimal strategy for controlling epidemics when taking into account both the economic and social costs of the disease. Three control scenarios emerge: treating the whole population (global strategy, GS), treating a small number of individuals in a well-defined neighbourhood of a detected case (local strategy, LS), and allowing the disease to spread unchecked (null strategy, NS). The choice of the optimal strategy is governed mainly by the relative cost of palliative and preventive treatments. Although the properties of the pathogen might not be known in advance for emerging diseases, the prediction of the optimal strategy can be made based on economic analysis only. The details of the local strategy, and in particular the size of the optimal treatment neighbourhood, depend weakly on disease infectivity but strongly on other epidemiological factors (the rate at which symptoms appear, and spontaneous recovery). The required extent of prevention is proportional to the size of the infection neighbourhood, but this relationship depends on the time until detection and the time until treatment according to a non-linear (power) law. Spontaneous recovery also affects the choice of the control strategy. I have extended my results to two contrasting and yet complementary models, in which individuals that have been through the disease can either be treated or not. Whether the removed individuals (i.e., those who have been through the disease but then spontaneously recover or die) are part of the treatment plan depends on the type of the disease agent. The key factor in choosing the right model is whether it is possible - and desirable - to distinguish such individuals from those who are susceptible. If the removed class is identified with dead individuals, the distinction is very clear. However, if removal means recovery and immunity, it might not be possible to identify those who are immune. The models are similar in their epidemiological part, but differ in how the removed/recovered individuals are treated. The differences between the models affect the choice of strategy only for very cheap treatment and a slowly spreading disease. However, for the combinations of parameters that are important from the epidemiological perspective (high infectiousness and expensive treatment), the models give similar results. Moreover, even where the choice of strategy differs, the total cost spent on controlling the epidemic is very similar for both models. Although regular and small-world networks capture some aspects of the structure of real networks of contacts between people, animals or plants, they do not include the effect of clustering noted in many real-life applications. The use of random clustered networks in epidemiological modelling takes an important step towards application of the modelling framework to realistic systems. Network topology, and in particular clustering, also affects the applicability of the control strategy.
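A toy sketch (far simpler than the models in the thesis) of the cost trade-off described above: an SIR-type outbreak on a random network in which, upon detection of a case, every node within graph distance z of it is preventively treated. Scanning z from 0 towards larger radii moves the strategy from a minimal/null response through local treatment towards an effectively global one; all rates, costs and network parameters are assumptions:

import networkx as nx
import numpy as np

rng = np.random.default_rng(5)
G = nx.random_regular_graph(4, 500, seed=5)        # stand-in contact network
beta, detect_p = 0.3, 0.2                          # per-contact infection and per-step detection probabilities
cost_palliative, cost_preventive = 1.0, 0.2        # assumed relative costs

def total_cost(z, steps=80):
    S = set(G.nodes) - {0}                         # susceptible nodes
    I = {0}                                        # start from a single infected node
    ever_infected = {0}
    treated = 0                                    # preventively treated susceptibles
    for _ in range(steps):
        new_I = {v for u in I for v in G[u] if v in S and rng.random() < beta}
        S -= new_I
        ever_infected |= new_I
        for u in [u for u in I if rng.random() < detect_p]:     # detected cases
            for v in nx.single_source_shortest_path_length(G, u, cutoff=z):
                if v in S:
                    S.discard(v)
                    treated += 1                   # preventive treatment of a susceptible
                elif v in I:
                    I.discard(v)                   # the case itself is removed (palliative care)
        I |= new_I
        if not I:
            break
    return cost_palliative * len(ever_infected) + cost_preventive * treated

for z in [0, 1, 2, 3, 5]:
    print(f"treatment radius z={z}: mean total cost {np.mean([total_cost(z) for _ in range(20)]):.1f}")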
19

Investigation of a discrete velocity Monte Carlo Boltzmann equation

Morris, Aaron Benjamin 03 September 2009 (has links)
A new discrete velocity scheme for solving the Boltzmann equation has been implemented for homogeneous relaxation and one-dimensional problems. Directly solving the Boltzmann equation is computationally expensive because, in addition to working in physical space, the nonlinear collision integral must also be evaluated in velocity space. To best solve the collision integral, collisions between each point in velocity space and all other points in velocity space must be considered, but this is very expensive. Motivated by the Direct Simulation Monte Carlo (DSMC) method, the computational costs in the present method are reduced by randomly sampling a set of collision partners for each point in velocity space. A collision partner selection algorithm was implemented to favor collision partners that contribute more to the collision integral. The new scheme has built-in flexibility, in that the resolution of the collision-integral approximation can be adjusted by changing how many collision partners are sampled. The computational cost associated with evaluation of the collision integral is compared to the corresponding statistical error. Having a fixed set of velocities can artificially limit the collision outcomes by restricting post-collision velocities to those that satisfy the conservation equations and lie precisely on the grid. A new velocity interpolation algorithm enables us to map velocities that do not lie on the grid to nearby grid points while preserving mass, momentum, and energy. This allows arbitrary post-collision velocities that lie between grid points, or completely outside of the velocity space, to be projected back onto the nearby grid points. The present scheme is applied to homogeneous relaxation of the non-equilibrium Bobylev Krook-Wu distribution, and the numerical results agree well with the analytic solution. After verifying the proposed method for spatially homogeneous relaxation problems, the scheme was used to solve a 1D traveling shock. The jump conditions across the shock match the Rankine-Hugoniot jump conditions. The internal shock wave structure was then compared to DSMC solutions, and good agreement was found for Mach numbers ranging from 1.2 to 6. Since a coarse velocity discretization is required for efficient calculation, the effects of different velocity grid resolutions are examined. Although using a relatively coarse approximation for the collision integral is computationally efficient, statistical noise pollutes the solution. The effects of using coarse and fine approximations for the collision integral are examined, and it is found that by coarsely evaluating the collision integral, the computational time can be reduced by nearly two orders of magnitude while retaining relatively smooth macroscopic properties. / text
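A generic sketch (not the thesis implementation) of the partner-sampling idea: a per-grid-point quantity that formally requires a sum over all other velocity grid points is estimated by sampling only K random partners per point and rescaling, trading statistical noise for computational cost. The pair function below is a placeholder, not the actual collision kernel, and the grid and K are assumptions:

import numpy as np

rng = np.random.default_rng(6)
axis = np.linspace(-4.0, 4.0, 8)
vx, vy, vz = np.meshgrid(axis, axis, axis, indexing="ij")
v = np.stack([vx, vy, vz], axis=-1).reshape(-1, 3)           # 512 velocity grid points
N = len(v)

def phi(vi, vj):
    # Placeholder pair contribution; a real scheme would evaluate the collision kernel here.
    return np.exp(-0.5 * np.sum((vi - vj) ** 2, axis=-1))

# Exact per-point sums over all partners (the expensive O(N^2) evaluation).
exact = np.array([phi(v[i], v).sum() - phi(v[i], v[i]) for i in range(N)])

# Monte Carlo estimate: K randomly sampled partners per point, rescaled to stay unbiased.
K = 16
estimate = np.empty(N)
for i in range(N):
    partners = rng.choice(np.delete(np.arange(N), i), size=K, replace=False)
    estimate[i] = (N - 1) / K * phi(v[i], v[partners]).sum()

print("mean relative error:", np.mean(np.abs(estimate - exact) / exact))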
20

Suitability of FPGA-based computing for cyber-physical systems

Lauzon, Thomas Charles 18 August 2010 (has links)
Cyber-Physical Systems theory is a new concept that is about to revolutionize the way computers interact with the physical world, by integrating physical knowledge into computing systems and tailoring those systems to be more compatible with the way processes happen in the physical world. In this master's thesis, Field Programmable Gate Arrays (FPGAs) are studied as a potential technological asset that may contribute to enabling the Cyber-Physical paradigm. As an example application that may benefit from cyber-physical system support, the Electro-Slag Remelting process - a process for remelting metals into better alloys - has been chosen due to the maturity of its related physical models and controller designs. In particular, the Particle Filter that estimates the state of the process is studied as a candidate for FPGA-based computing enhancements. In comparison with CPUs, through the designs and experiments carried out in relationship with this study, the FPGA reveals itself as a serious contender in the arsenal of computing means for Cyber-Physical Systems, due to its capacity to mimic the ubiquitous parallelism of physical processes. / text
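A minimal bootstrap particle filter sketch for a generic scalar state-space model, shown only to illustrate the predict/weight/resample structure whose per-particle independence maps naturally onto FPGA parallelism; it is not the Electro-Slag Remelting estimator from the thesis, and the toy model and noise levels are assumptions:

import numpy as np

rng = np.random.default_rng(7)
T, n_particles = 50, 1000
q, r = 0.1, 0.5                      # process and measurement noise std (assumed)

# Simulate a hidden state and noisy observations for the toy model x_t = 0.9 * x_{t-1} + noise.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + q * rng.standard_normal()
y = x_true + r * rng.standard_normal(T)

particles = rng.standard_normal(n_particles)      # initial particle cloud
estimates = np.zeros(T)
for t in range(T):
    # Predict: propagate every particle through the (assumed) process model.
    particles = 0.9 * particles + q * rng.standard_normal(n_particles)
    # Weight: likelihood of the current measurement under each particle.
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    # Resample: draw a new, equally weighted cloud with probability proportional to the weights.
    particles = particles[rng.choice(n_particles, size=n_particles, p=w)]

print("RMS estimation error:", np.sqrt(np.mean((estimates - x_true) ** 2)))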
