31

Sparse Linear Algebra on a GPU: With Applications to Flow in Porous Media

Torp, Audun January 2009 (has links)
<p>We investigate what graphics processing units (GPUs) have to offer compared to central processing units (CPUs) when solving sparse linear systems of equations. This is done by using a GPU to simulate fluid flow in a porous medium. The flow problems are discretized mainly by a mimetic finite element discretization, but also by a two-point flux approximation (TPFA) method; both discretization schemes are explained in detail. Example models of flow in porous media are simulated, as well as CO2 injection into a realistic model of a sub-sea storage site. The linear systems are solved by the conjugate gradient (CG) method without a preconditioner. The computationally most expensive part of this algorithm is the matrix-vector product. Several formats for storing sparse matrices are presented and implemented on both a CPU and a GPU; the fastest format on the CPU differs from the format performing best on the GPU. The GPU implementations are written for the compute unified device architecture (CUDA), and C++ is used for the CPU implementations. The program is created as a plug-in for Matlab and may be used to solve any symmetric positive definite (SPD) linear system. How a GPU differs from a CPU is explained, with focus on how a program should be written to fully utilize the potential of a GPU. The optimized GPU implementation outperforms the CPU, and offers a substantial improvement over Matlab's conjugate gradient method when no preconditioner is used.</p>
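The abstract's central kernel, a sparse matrix-vector product driving an unpreconditioned CG iteration, can be sketched as follows. This is a hypothetical CPU reference in Python rather than the thesis's C++/CUDA code, using the compressed sparse row (CSR) layout as one example of the storage formats discussed:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for A stored in compressed sparse row (CSR) format."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for row in range(n):
        lo, hi = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

def cg(matvec, b, tol=1e-10, maxiter=1000):
    """Unpreconditioned conjugate gradients for an SPD system A x = b."""
    x = np.zeros_like(b, dtype=float)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SPD example: A = [[4, 1], [1, 3]] in CSR form, b = [1, 2]
data = np.array([4.0, 1.0, 1.0, 3.0])
indices = np.array([0, 1, 0, 1])
indptr = np.array([0, 2, 4])
b = np.array([1.0, 2.0])
x = cg(lambda v: csr_matvec(data, indices, indptr, v), b)
```

On a GPU the row loop in `csr_matvec` is what gets parallelized, and the choice of storage format governs how well memory accesses coalesce.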
32

Numerical Path Integration for Lévy Driven Stochastic Differential Equations

Kleppe, Tore Selland January 2006 (has links)
<p>Some theory on Lévy processes and stochastic differential equations driven by Lévy processes is reviewed. Inverse fast Fourier transform routines are applied to compute the density of the increments of Lévy processes. We consider exact and approximate path integration operators for computing the probability density function of the solution process of a given stochastic differential equation. The numerical path integration method is shown to converge under the transition kernel backward convergence assumption. The method is applied to several examples with non-Brownian driving noises and nonlinearities, and gives satisfactory results. For noise of additive type, a general code written for Lévy driving noises specified by the Lévy-Khintchine formula is described. A preliminary result on path integration in Fourier space is given.</p>
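The FFT-based density computation mentioned above can be illustrated on a grid. The following is a minimal sketch (not the thesis code) that inverts a characteristic function, as supplied by the Lévy-Khintchine formula, into a density, checked here against the Brownian special case where the increment is Gaussian:

```python
import numpy as np

def density_from_cf(cf, x_max, n=4096):
    """Invert a characteristic function phi(u) = E[exp(iuX)] into the density
    f(x) = (1/2pi) * integral exp(-iux) phi(u) du on a centred grid,
    using a single FFT over the discretized Fourier integral."""
    dx = 2.0 * x_max / n
    x = -x_max + dx * np.arange(n)
    du = 2.0 * np.pi / (n * dx)          # grid spacings satisfy du * dx = 2pi/n
    u = du * (np.arange(n) - n / 2)
    a = cf(u) * np.exp(1j * u * x_max)   # phase shift for the centred x-grid
    f = np.real((-1.0) ** np.arange(n) * np.fft.fft(a)) * du / (2.0 * np.pi)
    return x, f

# Brownian increment over unit time: phi(u) = exp(-u^2/2), density N(0, 1)
x, f = density_from_cf(lambda u: np.exp(-0.5 * u**2), x_max=10.0)
exact = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
```

The same routine applies to any Lévy increment whose characteristic exponent is known in closed form, which is exactly what the Lévy-Khintchine representation provides.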
33

Multilevel Analysis Applied to Fetal Growth Data with Missing Values.

Bråthen, Eystein Widar January 2006 (has links)
<p>Intrauterine growth retardation means that the growth of a fetus is restricted compared with its biological growth potential, which increases the risk of illness or death of the newborn. It is therefore important to characterize, detect and clinically follow up any suspected or confirmed growth restriction of the fetus. In this master thesis we aim to describe the course of growth during pregnancy based on repeated ultrasound measurements, and to study how growth depends on different background variables of the mother, by analyzing data from the SGA (small-for-gestational age) project. The SGA project contains data from 5722 pregnancies that took place in Trondheim, Bergen and Uppsala from 1986-1988, known as the Scandinavian SGA studies. In this thesis we have confined ourselves to a random sample of 561 pregnancies. A problem with many studies of this kind is that the data set contains missing values. In the SGA data set under study, one or more of the ultrasound measurements were missing for approximately 40% of the women. Until recently, the most commonly used missing-data method has been complete case analysis, where only subjects with a complete set of data are analysed. A number of alternative ways of dealing with missing data exist, and Bayesian multiple imputation (MI) has become a highly useful paradigm for handling missing values in many settings. In this thesis we compare two highly recommended general approaches, Bayesian MI and maximum likelihood (ML), and point out some of their unique features. One aspect of MI is the separation of the imputation phase from the analysis phase, which can be advantageous in settings where the models underlying the two phases are different. We have used a multilevel analysis for the course of fetal growth.
Multilevel analysis has a hierarchical structure with two levels of variation: variation between points in time for the same fetus (level 1) and variation between fetuses (level 2). Level 1 is modeled by regression analysis with gestational age as the independent variable, and level 2 is modeled by regarding the regression coefficients as stochastic, with a set of (not directly observed) values for individual fetuses and some background variables of the mother. The model we ended up with describes the development in time of the abdominal diameter (MAD) of the fetus. It had several ``significant'' covariates (p-value < 0.05): gestational age (the time variable), the body-mass index (BMI), the age of the mother, an indicator variable for whether the mother had given birth to a low-weight child in an earlier pregnancy, and the gender of the fetus. The last covariate was not significant in a strictly statistical sense, but since it is well known that the gender of the fetus has an important effect, we included it in the model as well. When we used the MI method on the random sample (561 pregnancies) with missing values, the estimated standard deviations of the parameters were reduced compared to those obtained from the complete case analysis. There was no significant change in the parameter estimates except for the coefficient for the age of the mother. We also found a procedure to verify whether the MI method gives reasonable imputed values, by following the MCAR procedure defined in Section 6. Another interesting observation from a simulation study is that the estimated coefficients of the variables used to generate the MAR and MNAR missing mechanisms ``suffer'': they tend to be more biased, compared to the values from the complete case analysis on the random sample (320 pregnancies), than the other variables. According to the MAR assumption such a procedure should give unbiased parameter estimates.
Key words: longitudinal data, multilevel analysis, missing data, multiple imputation (MI), Gibbs sampling, linear mixed-effects model, maximum likelihood (ML) procedure.</p>
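The MI workflow, impute several completed datasets, analyse each, then pool with Rubin's rules, can be sketched on a deliberately simplified model. This hypothetical example uses a univariate normal sample with a bootstrap approximation to the posterior draw of the parameters, not the thesis's multilevel model:

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical data: a normal sample with roughly 40% missing completely at random
y = rng.normal(loc=10.0, scale=2.0, size=200)
miss = rng.random(200) < 0.4
obs = y[~miss]

M = 20                       # number of imputed datasets
estimates, variances = [], []
for _ in range(M):
    # approximate a posterior draw of (mu, sigma) by bootstrapping the observed data
    boot = rng.choice(obs, size=len(obs), replace=True)
    mu, sigma = boot.mean(), boot.std(ddof=1)
    # fill the missing entries from the predictive distribution
    completed = y.copy()
    completed[miss] = rng.normal(mu, sigma, size=miss.sum())
    # analysis phase: estimate the mean and its sampling variance
    estimates.append(completed.mean())
    variances.append(completed.var(ddof=1) / len(completed))

# Rubin's rules: total variance = within-imputation + (1 + 1/M) * between-imputation
Q_bar = np.mean(estimates)
W = np.mean(variances)
B = np.var(estimates, ddof=1)
T = W + (1.0 + 1.0 / M) * B
```

The between-imputation term `B` is what a single-imputation analysis silently omits, which is why naive imputation understates standard errors.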
34

Advanced Filtering in Intuitive Robot Programming

Hauan, Tore Martin Madsø January 2006 (has links)
<p>This text deals with the problem of reducing multi-dimensional data in the context of programming an industrial robot. Different ways to treat the positional and orientational data are discussed, and algorithms for each are developed and tested on various generated datasets. The outcome of the work is an algorithm that expresses the position as three polynomials, one for each coordinate, while the orientation is reduced with respect to given tolerances in Euler angles. The resulting algorithm reduced a physical dataset by 97%. It was concluded that it is very satisfying to be able to reduce a set by this amount without losing vital information.</p>
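A position-reduction step of the kind described, one polynomial per coordinate checked against a tolerance, could look like the following. This is a hypothetical sketch, not the thesis's algorithm, and the degree and tolerance are illustrative:

```python
import numpy as np

def reduce_position(t, pos, deg=5, tol=1e-3):
    """Fit one polynomial per coordinate of a sampled path and verify that the
    maximum deviation stays within tol; returns the per-coordinate fits."""
    polys = []
    for c in range(pos.shape[1]):
        p = np.polynomial.Polynomial.fit(t, pos[:, c], deg)
        err = np.max(np.abs(p(t) - pos[:, c]))
        if err > tol:
            raise ValueError(f"coordinate {c} exceeds tolerance: {err:.2e}")
        polys.append(p)
    return polys

# 1000 samples of a smooth path -> 3 * (deg + 1) = 18 stored coefficients
t = np.linspace(0.0, 1.0, 1000)
pos = np.column_stack([t**2, t**3, np.sin(t)])
polys = reduce_position(t, pos)
reduction = 1.0 - 3 * 6 / pos.size
```

For a smooth trajectory the compression far exceeds the 97% quoted above; rougher physical data forces higher degrees or segmented fits.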
35

Sequential Markov random fields and Markov mesh random fields for modelling of geological structures

Stien, Marita January 2006 (has links)
<p>We have been given a two-dimensional image of a geological structure. This structure is used to construct a three-dimensional statistical model, to be used as prior knowledge in the analysis of seismic data. We consider two classes of discrete lattice models for which efficient simulation is possible: sequential Markov random fields (sMRF) and Markov mesh random fields (MMRF). We first explore models from these two classes in two dimensions, using the maximum likelihood estimator (MLE). The results indicate that a larger neighbourhood should be considered for all the models. We also develop a second estimator, designed to match the model to the observation with respect to a set of specified functions. This estimator is only considered for the sMRF model, since that model proved flexible enough to give satisfying results. Due to the time limitation of this thesis, we could not wait for the optimization of the estimator to converge, and thus cannot evaluate it. Finally, we extract useful information from the two-dimensional models and specify an sMRF model in three dimensions. Parameter estimation for this model needs approximative techniques, since we only have observations in two dimensions. Such techniques have not been investigated in this report; however, we have adjusted the parameters manually and observed that the model is very flexible and might give very satisfying results.</p>
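The reason these model classes admit efficient simulation is that a Markov mesh random field can be sampled in a single raster scan: each cell's conditional distribution depends only on already-visited neighbours. A minimal binary sketch, with an illustrative logistic parameterization and a two-cell (left and upper) neighbourhood rather than the thesis's model:

```python
import numpy as np

def simulate_mmrf(n, m, theta, rng):
    """Raster-scan simulation of a binary Markov mesh random field where each
    cell depends on its left and upper neighbours through a logistic link:
    P(x_ij = 1) = sigmoid(theta0 + theta1 * (sum of the two neighbours))."""
    x = np.zeros((n, m), dtype=int)
    for i in range(n):
        for j in range(m):
            s = (x[i - 1, j] if i > 0 else 0) + (x[i, j - 1] if j > 0 else 0)
            p = 1.0 / (1.0 + np.exp(-(theta[0] + theta[1] * s)))
            x[i, j] = int(rng.random() < p)
    return x

# positive interaction encourages connected patches of ones
field = simulate_mmrf(32, 32, (-2.0, 2.0), np.random.default_rng(0))
```

One pass over the lattice produces an exact draw, with no Markov chain burn-in, which is what makes these models attractive as trainable priors.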
36

Mathematical Model of the Geomagnetic Field

Thorsen, Kjetil January 2006 (has links)
<p>First comes a description of a mathematical model of the geomagnetic field, followed by a discussion of the classical non-uniqueness results of Backus. Further, we look at more recent results concerning reconstruction of the geomagnetic field from its intensity and the normal component of the field. A new stability estimate for this reconstruction is obtained.</p>
37

Multiple Proposal Strategies for Markov Chain Monte Carlo

Stormark, Kristian January 2006 (has links)
<p>Multiple proposal methods represent a recent simulation technique for Markov chain Monte Carlo that allows several proposals to be considered at each transition step. Motivated by the ideas of quasi-Monte Carlo integration, we examine how strongly correlated proposals can be employed to construct Markov chains with improved mixing properties. We proceed by giving a concise introduction to Monte Carlo and Markov chain Monte Carlo theory, and we supply a short discussion of the standard simulation algorithms and the difficulties of efficient sampling. We then examine two multiple proposal methods suggested in the literature, and indicate the possibility of a unified formulation of the two. More essentially, we report some systematic exploration strategies for the two methods. In particular, we present schemes for the utilization of well-distributed point sets and maximally spread search directions, and include a simple construction procedure for the latter type of point set. A numerical examination of the multiple proposal methods is performed on two simple test problems. We find that the systematic exploration approach may provide a significant improvement in mixing, especially when the probability mass of the target distribution is ``easy to miss'' by independent sampling. For both test problems, the best results are obtained with the QMC schemes. In particular, the gain is most pronounced for a relatively moderate number of proposals; with fewer proposals, the properties of the well-distributed point sets are not as relevant, while for a large number of proposals the independent sampling approach becomes more competitive, since the coverage of the local neighborhood is then better.</p>
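A standard member of the multiple proposal family is multiple-try Metropolis. The sketch below uses the Liu-Liang-Wong weights w(y) = pi(y) with a symmetric Gaussian proposal and independent tries, whereas the thesis studies correlated, QMC-based proposal sets; it is an illustration of the transition mechanism, not the thesis's schemes:

```python
import numpy as np

def mtm_step(x, log_pi, k, scale, rng):
    """One multiple-try Metropolis step: draw k symmetric proposals, select one
    with probability proportional to pi, then balance against a reference set
    of k points drawn around the selected proposal (the current state included)."""
    ys = x + scale * rng.standard_normal(k)
    wy = np.exp(log_pi(ys))
    y = ys[rng.choice(k, p=wy / wy.sum())]
    xs = np.append(y + scale * rng.standard_normal(k - 1), x)
    wx = np.exp(log_pi(xs))
    if rng.random() < min(1.0, wy.sum() / wx.sum()):
        return y
    return x

# sample a standard normal target with k = 5 proposals per step
rng = np.random.default_rng(0)
log_pi = lambda z: -0.5 * z**2
chain = np.empty(5000)
x = 3.0
for i in range(5000):
    x = mtm_step(x, log_pi, k=5, scale=1.0, rng=rng)
    chain[i] = x
```

Replacing the k independent draws of `ys` with a well-distributed (e.g. QMC-derived) point set is precisely the modification whose mixing benefits the thesis investigates.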
38

Constructing elliptic curves over finite fields using complex multiplication

Thuen, Øystein Øvreås January 2006 (has links)
<p>We study and improve the CM-method for the creation of elliptic curves with specified group order over finite fields. We include a thorough review of the mathematical theory needed to understand this method. The ability to construct elliptic curves with very special group order is important in pairing-based cryptography.</p>
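The group order that a construction like the CM method prescribes can, for small fields, be verified by naive point counting on y^2 = x^3 + ax + b over F_p. This toy check is independent of the CM machinery itself:

```python
def ec_group_order(a, b, p):
    """Count the points on y^2 = x^3 + a*x + b over F_p, plus the point at
    infinity, by tabulating how often each residue occurs as a square mod p."""
    sq_count = {}
    for y in range(p):
        s = y * y % p
        sq_count[s] = sq_count.get(s, 0) + 1
    order = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        order += sq_count.get(rhs, 0)
    return order

# y^2 = x^3 + x + 1 over F_7 has 5 points (4 affine plus infinity)
n = ec_group_order(1, 1, 7)
```

Hasse's theorem bounds the order by |#E - (p + 1)| <= 2*sqrt(p), which makes a useful sanity check; the point of the CM method is to hit a *prescribed* order inside that interval without any counting.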
39

Bayesian Inversion of Well Log Data into Facies Units based on a Spatially Coupled Model

Vigsnes, Maria January 2006 (has links)
<p>Through a study of cored wells from the Statfjord Formation in the Tampen Area, we derive a spatially coupled classification model for facies units. We consider a Bayesian framework for the problem. A likelihood model is defined from the log response of the formation, where responses from neighbouring observations are taken into account. A first-order Markov chain prior model is estimated from the cores. From the posterior pdf, the marginal maximum posterior solution can be calculated and simulations can be generated. Since the posterior pdf can be factorized, it can be computed by a recursive forward-backward algorithm for hidden Markov models. The classification model is complex, and if the model assumptions do not coincide with the underlying model, the classification might give poor results due to the large number of estimated model parameters. The results from the classification of a blind well were not as good as we expected, but were good for the small classes compared to a classification model without spatial coupling.</p>
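The recursive forward-backward computation referred to can be sketched for a generic hidden Markov chain. The example below is a hypothetical illustration with two facies classes and Gaussian log responses, not the thesis's likelihood model:

```python
import numpy as np

def forward_backward(log_lik, P, p0):
    """Marginal posteriors p(z_t | y_1..T) for a hidden Markov chain with
    transition matrix P, initial distribution p0 and pointwise log-likelihoods
    log_lik[t, k] = log p(y_t | z_t = k)."""
    T, K = log_lik.shape
    lik = np.exp(log_lik - log_lik.max(axis=1, keepdims=True))
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    a = p0 * lik[0]
    alpha[0] = a / a.sum()
    for t in range(1, T):            # forward pass, normalized for stability
        a = (alpha[t - 1] @ P) * lik[t]
        alpha[t] = a / a.sum()
    for t in range(T - 2, -1, -1):   # backward pass
        b = P @ (lik[t + 1] * beta[t + 1])
        beta[t] = b / b.sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# two classes with well-separated Gaussian responses, true path 0 0 0 1 1 1
obs = np.array([0.1, -0.2, 0.3, 4.8, 5.1, 4.9])
means = np.array([0.0, 5.0])
log_lik = -0.5 * (obs[:, None] - means[None, :]) ** 2
P = np.array([[0.9, 0.1], [0.1, 0.9]])
post = forward_backward(log_lik, P, np.array([0.5, 0.5]))
marginal_map = post.argmax(axis=1)
```

The per-depth argmax of `post` is the marginal maximum posterior classification mentioned above; sampling a path through the same recursions yields the simulations.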
40

Volatility and Dependence in Fixed Income Forward Rates with Application to Market Risk of Derivative Portfolios

Vesterdal, Bjørn Erlend January 2006 (has links)
<p>This thesis explores the modeling of volatility and dependence in forward rates in the fixed income market for the purpose of risk estimation in derivative portfolios. A brief background on popular quantile-based risk measures is given. A short introduction is given to GARCH-type volatility models, as well as copula and vine models for dependence between random variables. Some details on parameter estimation and sampling related to these models are also provided. A backtesting procedure is performed using various combinations of volatility and dependence models. The results of this procedure indicate that the Student's t copula is preferable among the dependence structures considered. Also, none of the choices of conditional distribution for the volatility models provide good results at all the percentiles considered, but the normal distribution appears to be a good choice far into the tails.</p>
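The volatility side of such a backtest can be illustrated with a GARCH(1,1) filter and a normal-quantile VaR. This is a toy sketch with made-up parameters and simulated data, not the thesis's estimation or backtesting procedure:

```python
import numpy as np

def garch_filter(r, omega, alpha, beta):
    """Conditional variances from the GARCH(1,1) recursion
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(r))
    sigma2[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# simulate returns from the same GARCH(1,1), then backtest a 95% VaR
rng = np.random.default_rng(1)
omega, alpha, beta = 0.05, 0.1, 0.85
n = 10000
r = np.empty(n)
s2 = omega / (1.0 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

sigma2 = garch_filter(r, omega, alpha, beta)
var95 = 1.6449 * np.sqrt(sigma2)        # one-sided 5% quantile of the normal
exceed_rate = np.mean(r < -var95)       # should be close to 0.05 if the model fits
```

A backtest of the kind described compares the realized exceedance rate against the nominal level; swapping the normal quantile for a Student's t quantile, or the independence assumption for a copula across several rates, changes only the quantile and the simulation of `r`.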
