
Fast Tensor-Product Solvers for the Numerical Solution of Partial Differential Equations : Application to Deformed Geometries and to Space-Time Domains

Røvik, Camilla January 2010
Spectral discretization in space and time of the weak formulation of a partial differential equation (PDE) is studied. The exact solution to the PDE, with either Dirichlet or Neumann boundary conditions imposed, is approximated using high-order polynomials. This is known as a spectral Galerkin method. The main focus of this work is the solution algorithm for the resulting algebraic system of equations. A direct fast tensor-product solver is presented for the Poisson problem in a rectangular domain. We also explore the possibility of using a similar method in deformed domains, where the geometry of the domain is approximated using high-order polynomials. Furthermore, time-dependent PDEs are studied. For the linear convection-diffusion equation in $\mathbb{R}$ we present a tensor-product solver allowing for parallel implementation, solving $\mathcal{O}(N)$ independent systems of equations. Lastly, an iterative tensor-product solver is considered for a nonlinear time-dependent PDE. For most algorithms implemented, the computational cost is $\mathcal{O}(N^{p+1})$ floating-point operations, with a memory requirement of $\mathcal{O}(N^{p})$ floating-point numbers for $\mathcal{O}(N^{p})$ unknowns. In this work we only consider $p=2$, but the theory is easily extended to higher dimensions. Numerical results verify the expected convergence of both the iterative method and the spectral discretization. Exponential convergence is obtained when the solution and domain geometry are infinitely smooth.
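The tensor-product idea in the abstract can be illustrated with a minimal sketch. The thesis uses a spectral Galerkin discretization; the sketch below instead uses a second-order finite-difference discretization of the Poisson problem on the unit square (an assumption made here for brevity), since the fast-diagonalization structure is the same: the 2D operator is $T \otimes I + I \otimes T$ for a 1D matrix $T$, so transforming to the eigenbasis of $T$ decouples the system.

```python
import numpy as np

def poisson_2d_tensor_solve(f, h):
    """Solve -Laplace(u) = f on the unit square, zero Dirichlet BCs,
    via fast diagonalization of the tensor-product operator."""
    n = f.shape[0]                      # interior grid points per direction
    # 1D second-difference matrix T = tridiag(-1, 2, -1)/h^2
    T = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    lam, Q = np.linalg.eigh(T)          # T = Q diag(lam) Q^T
    # Transform the rhs, divide by eigenvalue sums, transform back.
    g = Q.T @ f @ Q
    u_hat = g / (lam[:, None] + lam[None, :])
    return Q @ u_hat @ Q.T

# Verify against a manufactured solution u = sin(pi x) sin(pi y)
n = 40
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
f = 2 * np.pi**2 * u_exact              # -Laplace(u_exact)
u = poisson_2d_tensor_solve(f, h)
err = np.max(np.abs(u - u_exact))
```

The cost is dominated by the dense matrix products, $\mathcal{O}(n^3)$ for $n^2$ unknowns, matching the $\mathcal{O}(N^{p+1})$ count quoted in the abstract for $p=2$.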

Mimetic Finite Difference Method on GPU : Application in Reservoir Simulation and Well Modeling

Singh, Gagandeep January 2010
Heterogeneous and parallel computing systems are increasingly appealing to high-performance computing. Among heterogeneous systems, GPUs have become an attractive device for compute-intensive problems. Their many-core architecture, primarily customized for graphics processing, is now widely available through programming architectures that exploit parallelism in GPUs. We follow this new trend and attempt an implementation of a classical mathematical model describing incompressible single-phase fluid flow through a porous medium. The porous medium is an oil reservoir represented by means of corner-point grids. Important geological and mathematical properties of corner-point grids will be discussed. The model will also incorporate pressure- and rate-controlled wells to be used for some realistic simulations. Among the test models is the 10th SPE Comparative Solution Project, Model 2. After deriving the underlying mathematical model, it will be discretised using the numerical technique of mimetic finite difference methods. The heterogeneous system utilised is a desktop computer with an NVIDIA GPU, and the programming architecture to be used is CUDA, which will be described. Two different versions of the final discretised system have been implemented: a traditional version using an assembled global stiffness sparse matrix, and a matrix-free version, in which only the element stiffness matrices are used. The former version evaluates two GPU libraries, CUSP and THRUST, which will be briefly described. The linear system is solved using the iterative Jacobi-preconditioned conjugate gradient method. Numerical tests on realistic and complex reservoir models show significant performance benefits compared to corresponding CPU implementations.
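The solver named at the end of the abstract, Jacobi-preconditioned conjugate gradients, is standard and can be sketched on the CPU in a few lines. This is a generic NumPy illustration of the algorithm, not the thesis's CUDA implementation; the SPD test matrix is fabricated for the example.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradients with Jacobi (diagonal) preconditioning."""
    M_inv = 1.0 / np.diag(A)            # preconditioner: inverse of diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p       # update search direction
        rz = rz_new
    return x

# Symmetric positive definite test system
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = jacobi_pcg(A, b)
```

On a GPU the two kernels that dominate, the sparse matrix-vector product and the vector updates, are exactly the operations the CUSP and THRUST libraries provide.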

Evaluating Different Simulation-Based Estimates for Value and Risk in Interest Rate Portfolios

Kierulf, Kaja January 2010
This thesis evaluates risk measures for interest rate portfolios. First a model for interest rates is established: the LIBOR market model. The model is applied to Norwegian and international interest rate data and used to calculate the value of the portfolio by Monte Carlo simulation. Estimation of volatility and correlation is discussed, as are the two risk measures, value at risk and expected tail loss. The data used is analysed before the results of the backtesting evaluating the two risk measures are presented.
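The two risk measures named in the abstract have simple empirical estimators once simulated portfolio losses are in hand. The sketch below is a generic illustration, not the thesis's LIBOR-model pipeline: the Gaussian losses stand in for simulated portfolio P&L.

```python
import numpy as np

def var_and_es(losses, alpha=0.99):
    """Empirical value at risk and expected tail loss (expected
    shortfall) at level alpha, from simulated portfolio losses."""
    losses = np.sort(losses)
    k = int(np.ceil(alpha * len(losses))) - 1
    var = losses[k]                      # alpha-quantile of the losses
    es = losses[k:].mean()               # mean loss beyond the VaR
    return var, es

rng = np.random.default_rng(1)
losses = rng.normal(0.0, 1.0, 100_000)   # stand-in for simulated P&L
var99, es99 = var_and_es(losses, 0.99)
```

Backtesting, as in the thesis, then compares how often realised losses exceed the reported VaR with the nominal rate $1-\alpha$.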

Rekursiv blokkoppdatering av Isingmodellen / Recursive block updating of the Ising model

Sæther, Bjarne January 2006
In this report we compare three variants of Markov chain Monte Carlo (MCMC) simulation of the Ising model: single-site updating, naive block updating, and recursive block updating. We begin with a general introduction to Markov random fields and the Ising model, then present the theoretical foundation on which MCMC methods rest. After a theoretical introduction to single-site updating, we introduce naive block updating, the traditional way of performing block updates, and then a recently proposed alternative, recursive block updating. Block updating has proved useful for mixing: it explores the state space of the distribution of interest in fewer iterations than single-site updating. The problem with naive block updating, however, is that the computational burden grows rapidly, so that each iteration takes very long. Recursive block updating aims to reduce the cost per iteration when performing block updates on a Markov random field. We then present simulation algorithms and results. We have simulated the Ising model with single-site updating, naive block updating, and recursive block updating, comparing the number of iterations until the Markov field converges and, in particular, the computation time per iteration. We show that with naive block updating the cost per iteration increases by a factor of 91000 when going from a 3 × 3 block to a 5 × 5 block; the corresponding factor for recursive block updating is 83. We also compare the time until the Ising model converges. With naive block updating the Ising model takes 15 seconds to converge with a 3 × 3 block, 910 seconds with a 4 × 4 block, and 182000 seconds with a 5 × 5 block. The corresponding times for recursive block updating are 3.74 seconds for a 3 × 3 block, 72 seconds for a 4 × 4 block, and 141.2 seconds for a 5 × 5 block. With single-site updating the field converges in 6.6 seconds.
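The baseline the thesis compares against, single-site updating, is easy to sketch. Below is a minimal Metropolis single-site sweep for the Ising model with free boundaries; the interaction strength `beta`, the lattice size, and the number of sweeps are choices made for the example, not values from the thesis.

```python
import numpy as np

def ising_single_site_sweep(spins, beta, rng):
    """One sweep of single-site Metropolis updates for the Ising model
    (energy H = -beta * sum of neighbouring spin products)."""
    n = spins.shape[0]
    for i in range(n):
        for j in range(n):
            s = 0                        # sum of neighbouring spins
            if i > 0: s += spins[i - 1, j]
            if i < n - 1: s += spins[i + 1, j]
            if j > 0: s += spins[i, j - 1]
            if j < n - 1: s += spins[i, j + 1]
            dE = 2 * beta * spins[i, j] * s   # energy change if flipped
            if rng.random() < np.exp(-dE):    # accept (always if dE <= 0)
                spins[i, j] *= -1
    return spins

rng = np.random.default_rng(2)
spins = rng.choice([-1, 1], size=(20, 20))
for _ in range(50):
    ising_single_site_sweep(spins, beta=0.6, rng=rng)
# Nearest-neighbour alignment grows as the field orders
corr = ((spins[:, :-1] * spins[:, 1:]).mean()
        + (spins[:-1, :] * spins[1:, :]).mean()) / 2
```

Block updating replaces the inner double loop with a joint draw of a whole block of spins conditional on its boundary, which is exactly where the naive method's cost explodes and the recursive method saves work.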

Parameter estimation in convolved categorical models

Lindberg, David January 2010
In this thesis, we solve the seismic inverse problem in a Bayesian setting and perform the associated model parameter estimation. The subsurface rock layers are represented by categorical variables, on which some response variables depend. The recorded observations appear as a convolution of these response variables. We thus assess the categorical variables' posterior distribution based on a prior distribution and a convolved likelihood distribution. Assuming that the prior model follows a Markov chain, the full model becomes a hidden Markov model. In the associated Posterior-Prior deconvolution algorithm, we approximate the convolved likelihood in order to use the recursive forward-backward algorithm. The prior and likelihood distributions are parameter dependent, and two parameter estimation approaches are discussed. Both estimation methods make use of the marginal likelihood distribution, which can be computed during the forward-backward algorithm. In two thorough test studies, we perform parameter estimation in the likelihood. Approximate posterior models, based on the respective parameter estimates, are computed by Posterior-Prior deconvolution algorithms of different orders. The signal-to-noise ratio, a ratio between the observation mean and variance, is found to be important: the results are generally more reliable for large values of this ratio. A more realistic seismic example is also introduced, with a more complex model description. The posterior model approximations are poorer here, due to underestimation of the noise parameter.
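The recursive forward-backward algorithm the abstract relies on computes both the posterior marginals and the marginal likelihood used for parameter estimation. Below is a generic scaled implementation for a hidden Markov model; the two-class Gaussian observation model is a toy stand-in for the convolved seismic likelihood, not the thesis's approximation.

```python
import numpy as np

def forward_backward(P, pi0, lik):
    """Scaled forward-backward recursions for an HMM.
    P: K x K transition matrix; pi0: initial distribution;
    lik: T x K observation likelihoods p(y_t | x_t = k).
    Returns posterior marginals and log marginal likelihood log p(y)."""
    T, K = lik.shape
    alpha = np.zeros((T, K))
    c = np.zeros(T)                       # per-step normalisers
    a = pi0 * lik[0]
    c[0] = a.sum(); alpha[0] = a / c[0]
    for t in range(1, T):                 # forward pass
        a = (alpha[t - 1] @ P) * lik[t]
        c[t] = a.sum(); alpha[t] = a / c[t]
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):        # backward pass
        beta[t] = (P @ (lik[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma, np.log(c).sum()         # log p(y) = sum of log c_t

# Two hidden classes observed through Gaussian noise
rng = np.random.default_rng(3)
P = np.array([[0.9, 0.1], [0.2, 0.8]])
pi0 = np.array([0.5, 0.5])
states = [0]
for _ in range(99):
    states.append(rng.choice(2, p=P[states[-1]]))
means = np.array([-1.0, 1.0])
y = means[states] + 0.5 * rng.standard_normal(100)
lik = np.exp(-0.5 * ((y[:, None] - means[None, :]) / 0.5) ** 2)
gamma, logml = forward_backward(P, pi0, lik)
acc = ((gamma[:, 1] > 0.5).astype(int) == np.array(states)).mean()
```

Maximising `logml` over the likelihood parameters is the marginal-likelihood estimation route the abstract describes.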

Markov Random Field Modelling of Diagenetic Facies in Carbonate Reservoirs

Larsen, Elisabeth Finserås January 2010
Bayesian inversion is performed on real observations to predict the diagenetic classes of a carbonate reservoir where the proportions of carbonate rock and depositional properties are known. The complete solution is the posterior model. The model is first developed in a 1D setting where the likelihood model is generalized Dirichlet distributed and the prior model is a Markov chain. The 1D model is used to justify the general assumptions on which the model is based. Thereafter the model is expanded to a 3D setting where the likelihood model remains the same and the prior model is a profile Markov random field in which each profile is a Markov chain. Lateral continuity is incorporated into the model by adapting the transition matrices to fit a given associated limiting distribution; two algorithms for this adjustment are presented. The result is a good statistical formulation of the problem in 3D. Results from a study on real observations from a 2D reservoir show that simulations reproduce characteristics of the real data, and it is also possible to incorporate conditioning on well observations into the model.

Betydning av feilspesifisert underliggende hasard for estimering av regresjonskoeffisienter og avhengighet i frailty-modeller / Effect of Baseline Hazard Misspecification on Regression Estimates and Dependence in Frailty Models

Mortensen, Bjørnar Tumanjan January 2007
With lifetime data for a large number of families, frailty models can be used to identify risk factors and dependence within families. One way to do this is to assume a realistic distribution for the frailty variable and a distribution for the baseline hazard. No large studies of the effect of a misspecified baseline hazard in frailty models have previously been carried out, because it has been common to assume a non-parametric baseline hazard. This is feasible for simple frailty models, but for frailty models with varying degrees of correlation within a family it quickly becomes very difficult, which makes the effect of a misspecified baseline hazard worth investigating. Throughout this thesis we assume that the baseline hazard is Weibull distributed, and that the frailty distribution is either gamma or stable. We simulate data where the true baseline hazard is Gompertz, bathtub-shaped, or log-logistic. Based on the maximum likelihood estimators of the dependence and the regression parameters, we investigate the effect of the misspecified baseline hazard. The simulations show that if there is large variation in the lifetimes and a large gap between the true and the fitted baseline hazard, both the risk factors and the dependence are underestimated to a relatively large degree. This holds both when the frailty variable is stable distributed and when it is gamma distributed. The situation is even more serious if the frailty distribution is misspecified as well.

Bandwidth selection based on a special choice of the kernel

Oksavik, Thomas January 2007
We investigate methods of bandwidth selection in kernel density estimation for a wide range of kernels, both conventional and non-conventional.
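As a point of reference for what bandwidth selection means in practice, here is a minimal sketch of a Gaussian-kernel density estimate with Silverman's rule-of-thumb bandwidth. This is a standard textbook selector used purely for illustration; the thesis studies other selection methods and kernels.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel."""
    n = len(x)
    iqr = np.quantile(x, 0.75) - np.quantile(x, 0.25)
    sigma = min(np.std(x, ddof=1), iqr / 1.349)   # robust scale estimate
    return 0.9 * sigma * n ** (-1 / 5)

def kde(x, grid, h):
    """Gaussian kernel density estimate evaluated on `grid`."""
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(4)
x = rng.standard_normal(2000)
h = silverman_bandwidth(x)
grid = np.linspace(-4, 4, 201)
f_hat = kde(x, grid, h)
true = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)
mae = np.abs(f_hat - true).mean()        # mean absolute estimation error
```

The choice of `h` trades bias against variance: too small gives a spiky estimate, too large oversmooths; kernel-specific selectors refine this trade-off.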

Parallel Multiple Proposal MCMC Algorithms

Austad, Haakon Michael January 2007
We explore the variance reduction achievable through parallel implementation of multi-proposal MCMC algorithms and the use of control variates. Implemented sequentially, multi-proposal MCMC algorithms are of limited value, but they are very well suited for parallelization. Further, discarding the rejected states in an MCMC sampler can intuitively be interpreted as a waste of information, all the more so for a multi-proposal algorithm, where several states are discarded in each iteration. By forming an alternative estimator as a linear combination of the traditional sample mean and zero-mean random variables called control variates, we can improve on the traditional estimator. We present a setting for the multi-proposal MCMC algorithm and study it in two examples. The first considers sampling from a simple Gaussian distribution, while for the second we design the framework for a multi-proposal mode-jumping algorithm for sampling from a distribution with several separated modes. We find that the variance reduction achieved by our control variate estimator generally increases with the number of proposals in the sampler. For the Gaussian example the benefit from parallelization is small, and little is gained from increasing the number of proposals. The mode-jumping example, however, is very well suited for parallelization, and we obtain a relative variance reduction per unit time of roughly 80% with 16 proposals in each iteration.
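A minimal sketch of the two ideas in the abstract, multiple proposals per iteration and reuse of the non-selected proposals in the estimator, is given below. This is one simple variant with independence proposals (each iteration draws k candidates from a fixed q and selects the next state among the current state and the candidates with probability proportional to the importance weight pi/q), not the thesis's construction; the target, proposal, and k = 16 are choices made for the example.

```python
import numpy as np

def multi_proposal_chain(log_pi, sample_q, log_q, k, n_iter, x0, rng):
    """Multi-proposal MCMC with independence proposals.
    Also returns a 'use every proposal' estimate of E[X]: each
    iteration's selection probabilities weight all candidates,
    not just the accepted one."""
    x = x0
    chain = np.empty(n_iter)
    weighted_sum = 0.0
    for t in range(n_iter):
        ys = np.concatenate(([x], sample_q(k, rng)))   # pool: current + k
        logw = log_pi(ys) - log_q(ys)                  # importance weights
        w = np.exp(logw - logw.max())
        p = w / w.sum()
        x = ys[rng.choice(k + 1, p=p)]                 # select next state
        chain[t] = x
        weighted_sum += (p * ys).sum()                 # recycle all proposals
    return chain, weighted_sum / n_iter

# Target N(2, 1); proposals from the wider N(0, 3^2)
rng = np.random.default_rng(5)
log_pi = lambda x: -0.5 * (x - 2.0) ** 2
sample_q = lambda k, rng: 3.0 * rng.standard_normal(k)
log_q = lambda x: -0.5 * (x / 3.0) ** 2
chain, rb_mean = multi_proposal_chain(log_pi, sample_q, log_q,
                                      k=16, n_iter=5000, x0=0.0, rng=rng)
```

The k proposal evaluations in each iteration are independent, which is what makes this family of samplers attractive for parallel hardware; the recycled estimate `rb_mean` is the simplest version of the "do not waste rejected states" idea that control variates develop further.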
