21. Numerical Simulation of Interacting Bodies with Delays; Application to Marine Seismic Source Arrays. Wisløff, Jens Fredrik Barra, January 2007
<p>This master's thesis studies the numerical simulation of interacting bodies with delays, in particular interacting airguns in marine seismic source arrays. The equations describing the airguns are derived and the interaction between the airguns is studied. The resulting delay differential equations are solved with methods that handle step sizes larger than the delays. The accuracy and efficiency of these methods are investigated and compared with Matlab solvers.</p>
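The central numerical idea, taking steps that may exceed the delay by interpolating the stored solution history, can be sketched for a scalar delay differential equation x'(t) = f(x(t), x(t - tau)). This is a hedged illustration with a simple forward-Euler "method of steps"; the thesis's actual methods and the airgun equations are not reproduced here:

```python
import numpy as np

def solve_dde_euler(f, history, tau, t_end, h):
    """Forward-Euler method of steps for x'(t) = f(x(t), x(t - tau)),
    with x(t) = history(t) for t <= 0.

    The delayed state is linearly interpolated from the values computed so
    far, so the step size h is allowed to exceed the delay tau.
    """
    n = int(round(t_end / h))
    t = np.linspace(0.0, n * h, n + 1)
    x = np.empty(n + 1)
    x[0] = history(0.0)
    for i in range(n):
        s = t[i] - tau
        if s <= 0.0:
            xd = history(s)                          # still inside the initial history
        else:
            xd = np.interp(s, t[:i + 1], x[:i + 1])  # interpolate computed solution
        x[i + 1] = x[i] + h * f(x[i], xd)
    return t, x
```

Because the delayed value is interpolated from the already-computed solution, nothing prevents h > tau; higher-order schemes refine the same idea.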
22. Exact Statistical Inference in Nonhomogeneous Poisson Processes, based on Simulation. Rannestad, Bjarte, January 2007
<p>We present a general approach for Monte Carlo computation of conditional expectations of the form E[W(T) | S = s] given a sufficient statistic S. The idea of the method was first introduced by Lillegård and Engen [4], and has been further developed by Lindqvist and Taraldsen [7, 8, 9]. If a certain pivotal structure is satisfied in our model, the simulation can be done by direct sampling from the conditional distribution, via a simple parameter adjustment of the original statistical model. In general it is shown by Lindqvist and Taraldsen [7, 8] that a weighted sampling scheme needs to be used. The method is applied in particular to the nonhomogeneous Poisson process (NHPP), in order to develop exact goodness-of-fit tests for the null hypothesis that a set of observed failure times follows an NHPP of a specific parametric form. In addition, exact confidence intervals for unknown parameters in the NHPP model are considered [6]. Different test statistics W = W(T), designed to reveal departures from the null model, are presented [1, 10, 11]. By the method given in the following, the conditional expectation of these test statistics can be simulated in the absence of the pivotal structure mentioned above. This extends results given in [10, 11], and answers a question stated in [1]. We present a power comparison of five of the test statistics under the null hypothesis that the observed failure times come from an NHPP with log-linear intensity, against the alternative hypothesis of power-law intensity. Finally, the convergence of the method presented here is compared with that of an alternative Gibbs sampling approach.</p>
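When the pivotal structure mentioned above holds, conditional sampling reduces to a parameter adjustment of the model. A minimal sketch under the assumption of iid exponential lifetimes (a simpler model than the NHPP of the thesis): S = sum(T) is sufficient, and rescaling Exp(1) draws to sum s gives exact samples from T | S = s:

```python
import numpy as np

def conditional_expectation(w, s, n, n_sim=200_000, rng=None):
    """Estimate E[w(T) | S = s] for T_1,...,T_n iid Exp(theta), S = sum(T).

    S is sufficient for theta, and the normalized vector T / S is pivotal:
    its conditional distribution given S = s is parameter-free, so exact
    draws are obtained by sampling Exp(1) variables and rescaling to sum s.
    """
    rng = np.random.default_rng(rng)
    e = rng.exponential(size=(n_sim, n))
    t = s * e / e.sum(axis=1, keepdims=True)   # exact draws from T | S = s
    return np.mean([w(row) for row in t])
```

For n = 3 and s = 1 the conditional mean of the largest lifetime is s(1 + 1/2 + 1/3)/3 = 11/18, since T/S is Dirichlet(1, 1, 1), and the simulation reproduces this.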
23. Approximate recursive calculations of discrete Markov random fields. Arnesen, Petter, January 2010
<p>In this thesis we present an approximate recursive algorithm for calculations on discrete Markov random fields defined on graphs. We write the probability distribution of a Markov random field as a function of interaction parameters, a representation well suited for approximations. The algorithm we establish is a forward-backward algorithm, where the forward part recursively decomposes the probability distribution into a product of conditional distributions. We then establish two different backward parts for the algorithm. The first allows us to simulate from the probability distribution using the decomposed system; the second enables us to calculate the marginal distributions of all the nodes in the Markov random field. All approximations in the algorithm are controlled by a positive parameter, and when this parameter equals 0 the algorithm is by definition exact. We investigate the performance of the algorithm by its CPU time and by evaluating the quality of the approximations in various ways. As an example of its usage, we estimate an unknown image from a degraded version using the marginal posterior mode estimate, a classical Bayesian problem.</p>
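On a chain graph the recursive forward-backward decomposition is exact, which makes it a useful reference point for the approximate algorithm on general graphs. A sketch for a pairwise chain (illustrative code, not the thesis's algorithm):

```python
import numpy as np

def chain_marginals(psi):
    """Exact forward-backward node marginals for a pairwise chain MRF.

    psi is a list of K x K interaction matrices;
    p(x) is proportional to prod_i psi[i][x_i, x_{i+1}].
    """
    K = psi[0].shape[0]
    fwd = [np.ones(K)]
    for m in psi:                 # forward pass: sum out left neighbours
        fwd.append(fwd[-1] @ m)
    bwd = [np.ones(K)]
    for m in reversed(psi):       # backward pass: sum out right neighbours
        bwd.append(m @ bwd[-1])
    bwd = bwd[::-1]
    marg = [f * b for f, b in zip(fwd, bwd)]
    return [p / p.sum() for p in marg]
```

The approximate algorithm of the thesis applies the same decomposition idea when the graph has cycles, discarding weak interaction terms controlled by the positive tuning parameter.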
24. Closed-skew Distributions: Simulation, Inversion and Parameter Estimation. Iversen, Daniel Høyer, January 2010
<p>Bayesian closed-skew Gaussian inversion is defined as a generalization of traditional Bayesian Gaussian inversion. Bayesian inversion is often used in seismic inversion, and the closed-skew model is able to capture skewness in the variable of interest. Different stationary prior models are presented, but the generalization comes at a cost: simulation from high-dimensional pdfs and parameter inference from data are more complicated. An efficient algorithm for generating realizations from the high-dimensional closed-skew Gaussian distribution is presented. A full likelihood is used for parameter estimation of stationary prior models under an exponential dependence structure. The simulation algorithms and estimators are evaluated on synthetic examples. A closed-skew T-distribution is also presented, to include heavy tails in the pdf, and is illustrated with some examples. In the last part, the simulation algorithm, the different prior models and the parameter estimators are demonstrated on real data from a well in the Sleipner Øst field. The full-likelihood estimator appears to be the best estimator for data with an exponential dependence structure.</p>
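A one-dimensional building block of the closed-skew family is the skew-normal distribution, which can be sampled exactly through a stochastic representation. A sketch, assuming the standard skew-normal SN(alpha); the thesis's high-dimensional simulation algorithm is not reproduced here:

```python
import numpy as np

def sample_skew_normal(alpha, size, rng=None):
    """Exact draws from the standard skew-normal SN(alpha) via the
    representation X = delta*|Z0| + sqrt(1 - delta^2)*Z1, where Z0, Z1 are
    independent standard normals and delta = alpha / sqrt(1 + alpha^2)."""
    rng = np.random.default_rng(rng)
    delta = alpha / np.sqrt(1.0 + alpha**2)
    z0 = np.abs(rng.standard_normal(size))   # half-normal "skewing" variable
    z1 = rng.standard_normal(size)
    return delta * z0 + np.sqrt(1.0 - delta**2) * z1
```

The known mean, delta * sqrt(2/pi), gives a quick sanity check on the sampler; high-dimensional closed-skew simulation builds on conditioning constructions of the same kind.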
25. Fast Tensor-Product Solvers for the Numerical Solution of Partial Differential Equations: Application to Deformed Geometries and to Space-Time Domains. Røvik, Camilla, January 2010
<p>Spectral discretization in space and time of the weak formulation of a partial differential equation (PDE) is studied. The exact solution of the PDE, with either Dirichlet or Neumann boundary conditions imposed, is approximated using high-order polynomials. This is known as a spectral Galerkin method. The main focus of this work is the solution algorithm for the resulting algebraic system of equations. A direct fast tensor-product solver is presented for the Poisson problem in a rectangular domain. We also explore the possibility of using a similar method in deformed domains, where the geometry of the domain is approximated using high-order polynomials. Furthermore, time-dependent PDEs are studied. For the linear convection-diffusion equation in $\mathbb{R}$ we present a tensor-product solver allowing for parallel implementation, solving $\mathcal{O}(N)$ independent systems of equations. Lastly, an iterative tensor-product solver is considered for a nonlinear time-dependent PDE. For most of the algorithms implemented, the computational cost is $\mathcal{O}(N^{p+1})$ floating-point operations, with a memory requirement of $\mathcal{O}(N^{p})$ floating-point numbers for $\mathcal{O}(N^{p})$ unknowns. In this work we only consider $p=2$, but the theory extends readily to higher dimensions. Numerical results verify the expected convergence for both the iterative method and the spectral discretization. Exponential convergence is obtained when the solution and the domain geometry are infinitely smooth.</p>
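The idea behind a direct fast tensor-product solver can be sketched on the finite-difference Poisson problem on a square: one 1D eigendecomposition turns the tensor-product system into a diagonal solve. This is an illustration in the same spirit, not the spectral Galerkin solver of the thesis:

```python
import numpy as np

def fast_poisson_solve(F, h):
    """Fast diagonalization solver for the 2D Poisson problem with
    homogeneous Dirichlet BC, discretized by second-order finite differences.

    The 1D Laplacian T = Q Lam Q^T has known eigenpairs (discrete sine
    transform), so the tensor-product system T U + U T = h^2 F becomes a
    componentwise division in the transformed basis.
    """
    n = F.shape[0]
    k = np.arange(1, n + 1)
    lam = 4.0 * np.sin(k * np.pi / (2 * (n + 1))) ** 2        # eigenvalues of T
    Q = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(k, k) * np.pi / (n + 1))
    G = Q.T @ F @ Q                                           # transform RHS
    U = G / (lam[:, None] + lam[None, :])                     # diagonal solve
    return h * h * (Q @ U @ Q.T)                              # transform back
```

The cost is dominated by a few dense n x n products rather than a solve with the full n^2 x n^2 system, which is the tensor-product speedup the thesis develops in the spectral setting.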
26. Mimetic Finite Difference Method on GPU: Application in Reservoir Simulation and Well Modeling. Singh, Gagandeep, January 2010
<p>Heterogeneous and parallel computing systems are increasingly appealing for high-performance computing. Among heterogeneous systems, GPUs have become an attractive device for compute-intensive problems. Their many-core architecture, primarily customized for graphics processing, is now widely accessible through programming models that exploit the parallelism of GPUs. We follow this trend and implement a classical mathematical model describing incompressible single-phase fluid flow through a porous medium. The porous medium is an oil reservoir represented by corner-point grids, whose important geological and mathematical properties are discussed. The model also incorporates pressure- and rate-controlled wells, used in some realistic simulations; among the test models is Model 2 of the 10th SPE Comparative Solution Project. After deriving the underlying mathematical model, it is discretised using the mimetic finite difference method. The heterogeneous system utilised is a desktop computer with an NVIDIA GPU, and the programming architecture used is CUDA, which is described. Two different versions of the final discretised system have been implemented: a traditional version using an assembled global sparse stiffness matrix, and a matrix-free version in which only the element stiffness matrices are used. The former version evaluates two GPU libraries, CUSP and Thrust, which are briefly described. The linear system is solved using the iterative Jacobi-preconditioned conjugate gradient method. Numerical tests on realistic and complex reservoir models show significant performance benefits compared to corresponding CPU implementations.</p>
27. Evaluating Different Simulation-Based Estimates for Value and Risk in Interest Rate Portfolios. Kierulf, Kaja, January 2010
<p>This thesis evaluates risk measures for interest rate portfolios. First a model for interest rates, the LIBOR market model, is established. The model is applied to Norwegian and international interest rate data and used to calculate the value of the portfolio by Monte Carlo simulation. Estimation of volatility and correlation is discussed, as are the two risk measures value at risk and expected tail loss. The data used are analysed before the results of the backtesting evaluating the two risk measures are presented.</p>
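Both risk measures can be read directly off simulated portfolio profit-and-loss. A minimal sketch (function name and sign conventions are illustrative, not from the thesis):

```python
import numpy as np

def var_and_etl(pnl, level=0.99):
    """Value at risk and expected tail loss from simulated profit-and-loss.

    Losses are -pnl; VaR is the level-quantile of the loss distribution and
    ETL (expected shortfall) is the mean loss at or beyond VaR.
    """
    losses = -np.asarray(pnl, dtype=float)
    var = np.quantile(losses, level)
    etl = losses[losses >= var].mean()
    return var, etl
```

Backtesting then counts how often realized losses exceed the reported VaR, and compares that frequency with 1 - level.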
28. Rekursiv blokkoppdatering av Isingmodellen / Recursive block updating of the Ising model. Sæther, Bjarne, January 2006
<p>In this report we compare three variants of Markov chain Monte Carlo (MCMC) simulation of the Ising model: single-node updating, naive block updating and recursive block updating. We first give a general introduction to Markov random fields and the Ising model, and then present the theoretical foundation on which MCMC methods rest. After a theoretical introduction to single-node updating, we introduce naive block updating, the traditional way of performing block updating, and then a recently proposed alternative, recursive block updating. Block updating has proved useful for mixing: it explores the sample space of the distribution of interest in fewer iterations than single-node updating. The problem with naive block updating, however, is that the computational burden grows quickly, so that each iteration takes very long. Recursive block updating aims to reduce the computational burden per iteration when performing block updating on a Markov random field. We then present simulation algorithms and results. We have simulated the Ising model with single-node updating, naive block updating and recursive block updating, comparing the number of iterations until the Markov random field converges and, in particular, the computation time per iteration. We show that with naive block updating the computational burden per iteration increases by a factor of 91000 when going from a 3 × 3 block to a 5 × 5 block; the corresponding factor for recursive block updating is 83. We also compare the time until the Ising model converges. With naive block updating the Ising model takes 15 seconds to converge with a 3 × 3 block, 910 seconds with a 4 × 4 block and 182000 seconds with a 5 × 5 block. The corresponding figures for recursive block updating are 3.74 seconds for a 3 × 3 block, 72 seconds for a 4 × 4 block and 141.2 seconds for a 5 × 5 block. With single-node updating the field takes 6.6 seconds to converge.</p>
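The single-node (Gibbs) update that serves as the baseline in this comparison can be sketched as follows. Free boundary conditions and the ±1 spin convention are assumptions of this illustration:

```python
import numpy as np

def gibbs_sweep(spins, beta, rng):
    """One single-node Gibbs sweep of the Ising model with free boundaries.

    Each site is resampled from its full conditional:
    p(s_ij = +1 | rest) = 1 / (1 + exp(-2 * beta * sum of neighbour spins)).
    """
    n, m = spins.shape
    for i in range(n):
        for j in range(m):
            s = 0.0
            if i > 0:
                s += spins[i - 1, j]
            if i < n - 1:
                s += spins[i + 1, j]
            if j > 0:
                s += spins[i, j - 1]
            if j < m - 1:
                s += spins[i, j + 1]
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * s))
            spins[i, j] = 1 if rng.random() < p_plus else -1
    return spins
```

Block updating replaces the site-by-site loop with a joint draw of all spins in a block given the rest of the field, which mixes faster but makes each iteration more expensive, the trade-off quantified in the timings above.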
29. Analysis of common cause failures in complex safety instrumented systems. Lilleheier, Torbjørn, January 2008
<p>Common cause failures (CCFs) have been an important issue in reliability analysis for several decades, especially when dealing with safety instrumented systems (SIS). Different approaches have been used to describe CCFs, but the topic is still subject to much research and there is no general consensus on which method is most suitable. The $\beta$-factor model is the most popular method today, even though it has some well-known limitations. Other, more complicated methods have been developed for situations where the $\beta$-factor model is inadequate. The purpose of this thesis is to develop a strategy for suggesting which CCF methods are applicable in which situations. This is done through a survey of several of the existing methods, which are then applied to concrete SIS examples. Observing the specific system in operation is a valuable tool and may help in acquiring feedback data describing the lifetimes of specific components and the number of failed components given that the total system has failed. Since such feedback data usually are scarce, and in our case totally absent, it is difficult to assess whether the obtained results are accurate. The numerical results obtained from the analysis are therefore compared with each other with respect to the assumptions of the particular model. For instance, the PDS method, developed for the Norwegian offshore industry, makes some assumptions that differ from those of the $\beta$-factor model, and the report studies how these different assumptions lead to different results. Although other models are introduced, most attention is given to the following four: the $\beta$-factor model, the PDS method, Markov analysis and stochastic simulation.
For ordinary $M$ out of $N$ architectures with identical components the PDS method is considered adequate, and for $N=2$ the $\beta$-factor model works well. Markov analysis and stochastic simulation are also well suited for modelling ordinary $M$ out of $N$ SIS, but because of their higher complexity these approaches are not deemed necessary for simple systems. The need for Markov analysis becomes evident when working with SIS of a more complex nature, for instance with non-identical components: neither the $\beta$-factor model nor the PDS method can fully describe certain types of systems with different failure rates. An even more complex SIS is also included to illustrate when stochastic simulation is needed. This SIS is modelled by a computer algorithm that describes how the system behaves in the long run, which in turn provides the estimate of interest, namely the average probability of failure on demand (PFD). Finally, it is always important to remember that any feedback data or expert knowledge describing the distribution of the number of components failing in a CCF is vital in deciding the most descriptive CCF model. By a ``descriptive model'' we mean one that describes the architecture of the system as accurately as possible while making as few assumptions as possible. If it is known, either from expert opinion or from feedback data, that a CCF always disables all components of the SIS, then the $\beta$-factor model is an adequate way of modelling most systems. If such knowledge does not exist, or it is known that a CCF may sometimes disable only part of the SIS, the $\beta$-factor model will not be the most descriptive choice.</p>
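For a 1oo2 architecture the $\beta$-factor model splits the average PFD into an independent double-failure part and a common cause part. The sketch below uses a simplified IEC 61508-style formula with proof-test interval tau and dangerous undetected failure rate lambda_DU; repair times are ignored and exact formula conventions vary between sources, so treat this as an assumption-laden illustration:

```python
def pfd_1oo2_beta_factor(lam_du, beta, tau):
    """Approximate average PFD of a 1oo2 system under the beta-factor model.

    A fraction beta of the dangerous undetected failure rate lam_du is
    common cause (disables both channels at once); the remainder must fail
    independently in both channels within the proof-test interval tau.
    """
    independent = ((1.0 - beta) * lam_du * tau) ** 2 / 3.0   # double failure
    common_cause = beta * lam_du * tau / 2.0                  # CCF behaves as 1oo1
    return independent + common_cause
```

Even a small beta typically dominates the result, which is why the choice of CCF model matters so much for redundant architectures.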
30. Continuation and Bifurcation software in MATLAB. Ravnås, Eirik, January 2008
<p>This thesis discusses the algorithms used to construct the continuation software implemented here. The aim was to perform continuation of equilibria and of periodic solutions originating from a Hopf bifurcation point. Algorithms for detecting simple branch points, folds, and Hopf bifurcation points have also been implemented. Some considerations are made with regard to optimization, and two schemes for mesh adaptation of periodic solutions based on moving mesh equations are suggested.</p>
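Continuation past folds is what pseudo-arclength parametrization buys. A minimal scalar sketch with a secant predictor and a Newton corrector on the bordered system (illustrative code, not the MATLAB implementation of the thesis):

```python
import numpy as np

def pseudo_arclength(f, dfdx, dfdl, x0, l0, ds, steps):
    """Scalar pseudo-arclength continuation of f(x, lam) = 0.

    A secant predictor steps along the branch; Newton then corrects on the
    bordered system [f = 0; tangent . (delta x, delta lam) = ds], which stays
    nonsingular at folds where natural continuation in lam breaks down.
    """
    pts = [(x0, l0)]
    x, l = x0, l0 + ds                 # first step: natural continuation
    for _ in range(50):
        x -= f(x, l) / dfdx(x, l)
    pts.append((x, l))
    for _ in range(steps):
        (x1, l1), (x2, l2) = pts[-2], pts[-1]
        tx, tl = x2 - x1, l2 - l1
        norm = np.hypot(tx, tl)
        tx, tl = tx / norm, tl / norm  # unit secant tangent
        x, l = x2 + ds * tx, l2 + ds * tl          # predictor
        for _ in range(50):                        # Newton corrector
            g = np.array([f(x, l),
                          tx * (x - x2) + tl * (l - l2) - ds])
            J = np.array([[dfdx(x, l), dfdl(x, l)],
                          [tx, tl]])
            dx, dl = np.linalg.solve(J, -g)
            x, l = x + dx, l + dl
        pts.append((x, l))
    return pts
```

On f(x, lam) = x^2 + lam - 1 the branch rounds the fold at lam = 1 and continues onto the lower solution branch, which natural continuation in lam cannot do.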