91

Evaluating Different Simulation-Based Estimates for Value and Risk in Interest Rate Portfolios

Kierulf, Kaja January 2010 (has links)
This thesis evaluates risk measures for interest rate portfolios. First a model for interest rates is established: the LIBOR market model. The model is applied to Norwegian and international interest rate data and used to calculate the value of the portfolio by Monte Carlo simulation. Estimation of volatility and correlation is discussed, as well as the two risk measures value at risk and expected tail loss. The data used are analysed before the results of the backtesting evaluating the two risk measures are presented.
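
As a rough illustration of the last step, the sketch below shows how value at risk and expected tail loss can be read off a set of Monte Carlo simulated portfolio losses. The 99% level and the normally distributed placeholder losses are assumptions for the example, not values taken from the thesis.

```python
import numpy as np

def var_and_etl(losses, alpha=0.99):
    """Estimate value at risk (VaR) and expected tail loss (ETL)
    from a sample of simulated portfolio losses."""
    losses = np.sort(np.asarray(losses))
    # VaR: the alpha-quantile of the loss distribution.
    var = np.quantile(losses, alpha)
    # ETL (expected shortfall): mean loss beyond the VaR level.
    etl = losses[losses >= var].mean()
    return var, etl

# Placeholder for losses from a Monte Carlo run of the interest rate portfolio.
rng = np.random.default_rng(0)
simulated_losses = rng.normal(0.0, 1.0, size=100_000)
var99, etl99 = var_and_etl(simulated_losses, alpha=0.99)
print(f"VaR(99%) = {var99:.3f}, ETL(99%) = {etl99:.3f}")
```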
92

Matrix-Free Conjugate Gradient Methods for Finite Element Simulations on GPUs

Refsnæs, Runar Heggelien January 2010 (has links)
A block-structured approach for solving two-dimensional finite element approximations of the Poisson equation on graphics processing units (GPUs) is developed. Linear triangular elements are used, and a matrix-free version of the conjugate gradient method is utilized to solve test problems with over 30 million elements. A speedup of 24 is achieved on an NVIDIA Tesla C1060 GPU compared to a serial CPU version of the same solution approach, and a comparison is made with previous GPU implementations of the same problem.
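
A minimal sketch of the matrix-free conjugate gradient idea the abstract refers to, written as serial Python rather than CUDA; the 1-D Poisson stencil below stands in for the triangular finite element operator and is an assumption for illustration only.

```python
import numpy as np

def cg_matrix_free(apply_A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient where A is only available as a function x -> A @ x."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Stand-in operator: 1-D Poisson stencil (-u_{i-1} + 2u_i - u_{i+1}) applied
# without ever assembling the matrix -- the same idea as an element-by-element
# GPU kernel, here in serial form.
def apply_poisson_1d(u):
    Au = 2.0 * u
    Au[:-1] -= u[1:]
    Au[1:] -= u[:-1]
    return Au

b = np.ones(100)
u = cg_matrix_free(apply_poisson_1d, b)
print(np.linalg.norm(apply_poisson_1d(u) - b))   # residual norm, should be tiny
```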
93

Rekursiv blokkoppdatering av Isingmodellen / Recursive block updating of the Ising model

Sæther, Bjarne January 2006 (has links)
In this report we compare three variants of Markov chain Monte Carlo (MCMC) simulation of the Ising model: single-node updating, naive block updating and recursive block updating. We begin with a general introduction to Markov random fields and the Ising model, and then present the theoretical foundation on which MCMC methods rest. We then give a theoretical introduction to single-node updating, followed by an introduction to naive block updating, the traditional way of performing block updates, and a corresponding introduction to a recently proposed method, recursive block updating. Block updating has proven useful with respect to mixing: it explores the sample space of the distribution of interest in fewer iterations than single-node updating. The problem with naive block updating, however, is that the computational cost quickly becomes large, so each iteration takes very long. Recursive block updating aims to reduce the computational cost per iteration when performing block updates on a Markov random field. We then present simulation algorithms and results. We have simulated the Ising model with single-node updating, naive block updating and recursive block updating, comparing the number of iterations until the Markov random field converges and, in particular, the computation time per iteration. We show that the cost per iteration increases by a factor of 91000 with naive block updating when going from a 3 × 3 block to a 5 × 5 block; the corresponding factor for recursive block updating is 83. We also compare the time until the Ising model converges. With naive block updating the Ising model takes 15 seconds to converge with a 3 × 3 block, 910 seconds with a 4 × 4 block and 182000 seconds with a 5 × 5 block. The corresponding numbers for recursive block updating are 3.74 seconds for a 3 × 3 block, 72 seconds for a 4 × 4 block and 141.2 seconds for a 5 × 5 block. With single-node updating the field takes 6.6 seconds to converge.
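
A small sketch of the single-node (Gibbs) update used as the baseline in the comparison; the inverse temperature, lattice size and free boundary below are assumptions for the example, and the block-updating variants are not shown.

```python
import numpy as np

def single_node_sweep(field, beta, rng):
    """One sweep of single-node (Gibbs) updates for the Ising model
    on a square lattice with free boundaries."""
    n, m = field.shape
    for i in range(n):
        for j in range(m):
            # Sum of the nearest-neighbour spins (free boundary).
            s = 0
            if i > 0:
                s += field[i - 1, j]
            if i < n - 1:
                s += field[i + 1, j]
            if j > 0:
                s += field[i, j - 1]
            if j < m - 1:
                s += field[i, j + 1]
            # Full conditional P(x_ij = +1 | neighbours) for the Ising model.
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
            field[i, j] = 1 if rng.random() < p_plus else -1
    return field

rng = np.random.default_rng(0)
field = rng.choice([-1, 1], size=(64, 64))
for _ in range(100):
    single_node_sweep(field, beta=0.4, rng=rng)
```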
94

Analysis of common cause failures in complex safety instrumented systems

Lilleheier, Torbjørn January 2008 (has links)
Common cause failures (CCFs) have been an important issue in reliability analysis for several decades, especially when dealing with safety instrumented systems (SIS). Different approaches have been used to describe these CCFs, but the topic is still subject to much research and there is no general consensus as to which method is most suitable. The $\beta$-factor model is the most popular method today, even though it has some well-known limitations. Other, more complicated methods have also been developed to describe situations where the $\beta$-factor model is inadequate. The purpose of this thesis is to develop a strategy for suggesting in which situations the different CCF methods are applicable. This is done by surveying several of the existing methods before applying them to concrete SIS examples. Observing the specific system in operation is a valuable tool and may help in acquiring feedback data describing the lifetimes of specific components and the number of failed components conditioned on the total system having failed. Since such feedback data are usually scarce, and in our case totally absent, it is difficult to assess whether the obtained results are accurate. Thus, the numerical results obtained from the analysis are compared to each other with respect to the assumptions of the particular model. For instance, the PDS method, a method developed for the Norwegian offshore industry, contains assumptions that differ from those of the $\beta$-factor model, and the report studies how these different assumptions lead to different results. Although other models are introduced, most focus is given to the following four: the $\beta$-factor model, the PDS method, Markov analysis and stochastic simulation. For ordinary $M$ out of $N$ architectures with identical components, the PDS method is considered adequate, and for $N=2$ the $\beta$-factor model works well. Markov analysis and stochastic simulation are also well suited for modelling ordinary $M$ out of $N$ SIS, but because of their higher complexity these approaches are not deemed necessary for simple systems. The need for Markov analysis becomes evident when working with SIS of a more complex nature, for instance with non-identical components: neither the $\beta$-factor model nor the PDS method is able to describe the system in full when dealing with certain types of systems that have different failure rates. An even more complex SIS is also included to illustrate when stochastic simulation is needed. This SIS is modelled by designing a computer algorithm that describes how the system behaves in the long run, which in turn provides the estimate of interest, namely the average probability of failure on demand (PFD). Finally, it is always important to remember that any feedback data or expert knowledge describing the distribution of the number of components that fail in a CCF is vital in deciding on the most descriptive CCF model. By a ``descriptive model'' we mean a model that describes the architecture of the system as accurately as possible while making as few assumptions as possible. If it is known, either from expert opinion or from feedback data, that a CCF always disables all components of the SIS, then the $\beta$-factor model is an adequate way of modelling most systems. If such knowledge does not exist, or it is known that a CCF may sometimes disable only part of the SIS, then the $\beta$-factor model will not be the most descriptive model.
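
As a rough numerical illustration of the $\beta$-factor idea, the snippet below evaluates a common low-demand approximation of the average PFD for a 1oo2 architecture with periodic proof testing. The formula and all parameter values are textbook-style assumptions, not the expressions or data used in the thesis.

```python
# Assumed example values, chosen only for illustration.
lambda_du = 2.0e-6   # dangerous undetected failure rate (per hour)
beta = 0.05          # fraction of failures that are common cause
tau = 8760.0         # proof test interval (one year, in hours)

# Independent part: both channels must fail independently between tests.
pfd_independent = ((1.0 - beta) * lambda_du * tau) ** 2 / 3.0
# Common cause part: behaves like a single channel with rate beta * lambda_du.
pfd_ccf = beta * lambda_du * tau / 2.0

pfd_1oo2 = pfd_independent + pfd_ccf
print(f"PFD_avg(1oo2) ~ {pfd_1oo2:.2e} "
      f"(independent {pfd_independent:.2e}, CCF {pfd_ccf:.2e})")
```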
95

Continuation and Bifurcation software in MATLAB

Ravnås, Eirik January 2008 (has links)
This article discusses the algorithms used in the construction of the continuation software implemented in this thesis. The aim of the continuation was to be able to perform continuation of equilibria and of periodic solutions originating from a Hopf bifurcation point. Algorithms for detection of simple branch points, folds, and Hopf bifurcation points have also been implemented. Some considerations are made with regard to optimization, and two schemes for mesh adaptation of periodic solutions based on moving mesh equations are suggested.
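
A sketch of the simplest form of continuation (natural-parameter stepping with a Newton corrector) on a scalar toy problem is given below. The thesis implements more capable MATLAB routines, including continuation past folds and bifurcation detection, so this Python fragment is only meant to convey the predictor-corrector idea and all names in it are illustrative.

```python
import numpy as np

def natural_continuation(f, dfdx, x0, lambdas):
    """Trace a solution branch x(lambda) of f(x, lambda) = 0 by stepping the
    parameter and correcting with Newton's method (predictor = previous point)."""
    branch = []
    x = x0
    for lam in lambdas:
        for _ in range(50):                 # Newton corrector
            step = f(x, lam) / dfdx(x, lam)
            x = x - step
            if abs(step) < 1e-12:
                break
        branch.append((lam, x))
    return branch

# Toy problem: equilibria of x' = lambda - x**2 (a fold at lambda = 0).
f = lambda x, lam: lam - x ** 2
dfdx = lambda x, lam: -2.0 * x
branch = natural_continuation(f, dfdx, x0=1.0, lambdas=np.linspace(1.0, 0.05, 20))
print(branch[-1])   # approximately (0.05, sqrt(0.05))
```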
96

Sparse Linear Algebra on a GPU: With Applications to Flow in Porous Media

Torp, Audun January 2009 (has links)
We investigate what graphics processing units (GPUs) have to offer compared to central processing units (CPUs) when solving a sparse linear system of equations. This is done by using a GPU to simulate fluid flow in a porous medium. The flow problems are discretized mainly by the mimetic finite element discretization, but also by a two-point flux approximation (TPFA) method; both discretization schemes are explained in detail. Example models of flow in porous media are simulated, as well as CO2 injection into a realistic model of a sub-sea storage site. The linear algebra is solved by the conjugate gradient (CG) method without a preconditioner. The computationally most expensive part of this algorithm is the matrix-vector product. Several formats for storing sparse matrices are presented and implemented on both a CPU and a GPU; the fastest format on the CPU differs from the format performing best on the GPU. The GPU implementations are written for the compute unified device architecture (CUDA), and C++ is used for the CPU implementations. The program is created as a plug-in for Matlab and may be used to solve any symmetric positive definite (SPD) linear system. How a GPU differs from a CPU is explained, with focus on how a program should be written to fully utilize the potential of a GPU. The optimized implementation on the GPU outperforms the CPU, and offers a substantial improvement compared to Matlab's conjugate gradient method when no preconditioner is used.
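
A minimal sketch of the kernel the format comparison revolves around, a sparse matrix-vector product in compressed sparse row (CSR) storage, written here as serial Python; the example matrix is a stand-in, and the GPU parallelization is only hinted at in the comment.

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix stored in compressed sparse row (CSR) format.
    This is the serial reference kernel; on a GPU one thread (or warp) would
    typically handle one row."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for row in range(n_rows):
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# Small SPD example: the 1-D Poisson matrix [[2,-1,0],[-1,2,-1],[0,-1,2]].
data    = np.array([2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
indptr  = np.array([0, 2, 5, 7])
x = np.array([1.0, 2.0, 3.0])
print(csr_matvec(data, indices, indptr, x))   # [0. 0. 4.]
```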
97

Numerical Path Integration for Lévy Driven Stochastic Differential Equations

Kleppe, Tore Selland January 2006 (has links)
Some theory on Lévy processes and stochastic differential equations driven by Lévy processes is reviewed. Inverse fast Fourier transform routines are applied to compute the density of the increments of Lévy processes. We look at exact and approximate path integration operators to compute the probability density function of the solution process of a given stochastic differential equation. The numerical path integration method is shown to converge under the transition kernel backward convergence assumption. The method is applied to several examples with non-Brownian driving noises and nonlinearities, and shows satisfactory results. In the case when the noise is of additive type, a general code written for Lévy driving noises specified by the Lévy-Khintchine formula is described. A preliminary result on path integration in Fourier space is given.
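
The sketch below illustrates one step of the path integration operator on a uniform grid. For simplicity it uses a Gaussian one-step kernel (Brownian noise) and an Ornstein-Uhlenbeck example, which are assumptions for illustration; for a Lévy driven SDE the kernel would instead come from the increment density obtained by inverse FFT of the characteristic function.

```python
import numpy as np

def path_integration_step(grid, p, drift, sigma, dt):
    """One step of the numerical path integration operator
        p_{n+1}(x) = int k(x | y) p_n(y) dy
    on a uniform grid, for an Euler discretised SDE dX = drift(X) dt + sigma dW."""
    dx = grid[1] - grid[0]
    mean = grid + drift(grid) * dt              # kernel mean for each "from" point y
    var = sigma ** 2 * dt
    # k[j, i] = density of X_{n+1} = grid[j] given X_n = grid[i]
    k = np.exp(-(grid[:, None] - mean[None, :]) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    p_new = k @ p * dx
    return p_new / (p_new.sum() * dx)           # renormalise

# Ornstein-Uhlenbeck example: dX = -X dt + dW, started from a narrow density at 0.
grid = np.linspace(-5.0, 5.0, 401)
p = np.exp(-grid ** 2 / (2 * 0.01)) / np.sqrt(2 * np.pi * 0.01)
for _ in range(200):
    p = path_integration_step(grid, p, drift=lambda x: -x, sigma=1.0, dt=0.01)
# The stationary variance of this process is 1/2.
print("variance ~", (grid ** 2 * p).sum() * (grid[1] - grid[0]))
```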
98

Multilevel Analysis Applied to Fetal Growth Data with Missing Values.

Bråthen, Eystein Widar January 2006 (has links)
Intrauterine growth retardation means that the growth of a fetus is restricted compared with its biological growth potential. This contributes to an increased risk of illness or death of the newborn. Therefore it is important to characterize, detect and clinically follow up any suspected or confirmed growth restriction of the fetus. In this master thesis we aim to describe the course of growth during pregnancy based on repeated ultrasound measurements, and to study how the growth depends on different background variables of the mother, by analysing data from the SGA (small-for-gestational-age) project. The SGA project contains data from 5722 pregnancies that took place in Trondheim, Bergen and Uppsala from 1986-1988, named the Scandinavian SGA studies. In this thesis we have confined ourselves to a random sample of 561 pregnancies. A problem with many studies of this kind is that the data set contains missing values. In the SGA data set under study there were missing values for one or more of the ultrasound measurements for approximately 40% of the women. Until recently, the most popular missing-data method available has been complete case analysis, where only subjects with a complete set of data are analysed. There exist a number of alternative ways of dealing with missing data, and Bayesian multiple imputation (MI) has become a highly useful paradigm for handling missing values in many settings. In this thesis we compare two general approaches that come highly recommended, Bayesian MI and maximum likelihood (ML), and point out some of their unique features. One aspect of MI is the separation of the imputation phase from the analysis phase, which can be advantageous in settings where the models underlying the two phases are different. We have used a multilevel analysis for the course of fetal growth. Multilevel analysis has a hierarchical structure with two levels of variation: variation between points in time for the same fetus (level 1) and variation between fetuses (level 2). Level 1 is modelled by regression analysis with gestational age as the independent variable, and level 2 is modelled by regarding the regression coefficients as stochastic, with a set of (not directly observed) values for individual fetuses and some background variables of the mother. The model we ended up with describes the development in time of the abdominal diameter (MAD) of the fetus. It had several ``significant'' covariates (p-value < 0.05): gestational age (the time variable), the body mass index (BMI) and age of the mother, an indicator variable for whether the mother has given birth to a low-weight child in an earlier pregnancy, and the gender of the fetus. The last covariate was not significant in a strictly mathematical sense, but since it is well known that the gender of the fetus has an important effect, we included gender in the model as well. When we used the MI method on the random sample (561) with missing values, the estimated standard deviations of the parameters were reduced compared to those obtained from the complete case analysis. There was no significant change in the parameter estimates except for the coefficient for the age of the mother. We also found a procedure to verify whether the MI method gives us reasonable imputed values for the missing values, by following the MCAR procedure defined in Section 6.
Another interesting observation from a simulation study is that estimates of the coefficients for the variables used to generate the MAR and MNAR missing mechanisms ``suffer'': they tend to be more biased, compared to the values from the complete case analysis on the random sample (320), than the other variables. According to the MAR assumption such a procedure should give unbiased parameter estimates. Key words: longitudinal data, multilevel analysis, missing data, multiple imputation (MI), Gibbs sampling, linear mixed-effects model and maximum likelihood (ML) procedure.
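
As an illustration of the MI analysis phase, the sketch below pools one regression coefficient estimated on several imputed data sets using Rubin's rules; the coefficient and standard-error values are hypothetical and do not come from the SGA data.

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Combine one coefficient estimated on m imputed data sets (Rubin's rules):
    returns the pooled estimate and its standard error."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()               # pooled point estimate
    u_bar = variances.mean()               # within-imputation variance
    b = estimates.var(ddof=1)              # between-imputation variance
    t = u_bar + (1 + 1 / m) * b            # total variance
    return q_bar, np.sqrt(t)

# Hypothetical example: one coefficient estimated on m = 5 imputed data sets.
coefs = [0.82, 0.79, 0.85, 0.80, 0.83]
ses   = [0.11, 0.12, 0.10, 0.11, 0.12]
est, se = pool_rubin(coefs, [s ** 2 for s in ses])
print(f"pooled coefficient {est:.3f} (SE {se:.3f})")
```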
99

Advanced Filtering in Intuitive Robot Programming

Hauan, Tore Martin Madsø January 2006 (has links)
This text deals with the problem of reducing multi-dimensional data in the context of programming an industrial robot. Different ways to treat the positional and orientational data are discussed, and algorithms for each are developed and tested on various generated datasets. The outcome of the work is an algorithm expressing the position as three polynomials, one for each coordinate, while the orientation is reduced with respect to given tolerances in Euler angles. The resulting algorithm reduced a physical dataset by 97%. It was concluded that it is very satisfying to be able to reduce a set by this amount without losing vital information.
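
A small sketch of the positional part of such a reduction, fitting one polynomial per coordinate and checking a tolerance; the polynomial degree, tolerance and synthetic path are assumptions for the example, and the orientation/Euler-angle reduction is not shown.

```python
import numpy as np

def reduce_position(t, xyz, degree=5, tol=1e-3):
    """Fit one polynomial per coordinate to a recorded path and report
    whether the fit stays within the given positional tolerance."""
    coeffs = [np.polyfit(t, xyz[:, k], degree) for k in range(3)]
    fitted = np.column_stack([np.polyval(c, t) for c in coeffs])
    max_err = np.max(np.linalg.norm(fitted - xyz, axis=1))
    return coeffs, max_err <= tol

# Synthetic path: a slightly noisy arc sampled at 1000 points.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
xyz = np.column_stack([np.cos(t), np.sin(t), 0.2 * t]) + 1e-4 * rng.normal(size=(1000, 3))
coeffs, ok = reduce_position(t, xyz, degree=5, tol=1e-3)
print(ok, "stored numbers:", 3 * (5 + 1), "instead of", xyz.size)
```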
100

Sequential Markov random fields and Markov mesh random fields for modelling of geological structures

Stien, Marita January 2006 (has links)
We have been given a two-dimensional image of a geological structure. This structure is used to construct a three-dimensional statistical model, to be used as prior knowledge in the analysis of seismic data. We consider two classes of discrete lattice models for which efficient simulation is possible: sequential Markov random fields (sMRF) and Markov mesh random fields (MMRF). We first explore models from these two classes in two dimensions, using the maximum likelihood estimator (MLE). The results indicate that a larger neighbourhood should be considered for all the models. We also develop a second estimator, designed to match the model with the observation with respect to a set of specified functions. This estimator is only considered for the sMRF model, since that model proved flexible enough to give satisfying results. Due to the time limitations of this thesis, we could not wait for the optimization of the estimator to converge, and thus we cannot evaluate it. Finally, we extract useful information from the two-dimensional models and specify an sMRF model in three dimensions. Parameter estimation for this model requires approximative techniques, since we only have observations in two dimensions. Such techniques have not been investigated in this report; however, we have adjusted the parameters manually and observed that the model is very flexible and may give very satisfying results.
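
To make the sequential-simulation idea concrete, the sketch below simulates a toy binary Markov mesh random field in a raster scan, where each cell conditions only on its already-visited neighbours; the unilateral neighbourhood and the single interaction parameter theta are assumptions for illustration, not the model fitted in the thesis.

```python
import numpy as np

def simulate_mmrf(n, m, theta, rng):
    """Sequential (raster scan) simulation of a simple binary Markov mesh random
    field: the conditional probability of cell (i, j) depends only on the already
    simulated neighbours above and to the left."""
    x = np.zeros((n, m), dtype=int)
    for i in range(n):
        for j in range(m):
            s = 0
            if i > 0:
                s += 2 * x[i - 1, j] - 1      # map {0,1} -> {-1,+1}
            if j > 0:
                s += 2 * x[i, j - 1] - 1
            p_one = 1.0 / (1.0 + np.exp(-theta * s))
            x[i, j] = int(rng.random() < p_one)
    return x

rng = np.random.default_rng(1)
img = simulate_mmrf(100, 100, theta=1.5, rng=rng)
print(img.mean())   # fraction of ones; theta controls how clustered the field is
```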
