  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
121

Multiscale Modelling of Elastic Parameters

Børset, Kari January 2008 (has links)
<p>Petrophysical properties in general, and elasticity in particular, vary heterogeneously over many length scales. In a reservoir model, on which one can for example simulate fluid flow, seismic responses and resistivity, the petrophysical parameters must represent all these variations, even though the model is at a scale too coarse to capture them in detail. Upscaling is a technique for bringing information from one scale to a coarser one in a consistent manner; an upscaled model can thus be seen as homogeneous, with a set of effective properties for its scale. For elastic properties, upscaling has traditionally been done with volume-weighted averaging methods such as the Voigt, Reuss or Backus averages, which use limited or no information about the geology of the rock. The objective here is to perform upscaling with a technique that takes geological information into account. This thesis considers different aspects of elasticity upscaling in general and of general geometry upscaling in particular. After the theory part, it covers verification of the general geometry method and its implementation, projection of an elasticity tensor onto a given symmetry class, and visualization of elastic moduli. Next, the importance of including geological information is studied, and upscaling is carried out on examples of realistic reservoir models. Finally, elasticity upscaling is applied in a bottom-up approach to modelling 4D seismic.</p>
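The volume-weighted averages the abstract contrasts with geology-aware upscaling are easy to state concretely. The following is a minimal sketch of the Voigt and Reuss bounds (plus the common Voigt-Reuss-Hill heuristic) for an effective modulus of a two-phase mixture; the mineral moduli used in the example are illustrative round numbers, not values from the thesis.

```python
import numpy as np

def voigt_reuss_hill(moduli, fractions):
    """Effective-modulus bounds for a mixture.

    Voigt: volume-weighted arithmetic mean of the moduli (stiff bound).
    Reuss: volume-weighted harmonic mean (soft bound).
    Hill:  the average of the two, a common heuristic estimate.
    """
    m = np.asarray(moduli, dtype=float)
    f = np.asarray(fractions, dtype=float)
    f = f / f.sum()                      # normalise volume fractions
    voigt = np.sum(f * m)                # arithmetic mean
    reuss = 1.0 / np.sum(f / m)          # harmonic mean
    hill = 0.5 * (voigt + reuss)
    return voigt, reuss, hill

# Illustrative example: 60 % of a stiff phase (K = 37 GPa)
# and 40 % of a soft phase (K = 21 GPa)
v, r, h = voigt_reuss_hill([37.0, 21.0], [0.6, 0.4])
```

Note that these averages use only volume fractions — exactly the point the thesis makes: no geometric or geological information about how the phases are arranged enters the estimate.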
122

Modelling of Electromagnetic Fields in Conductive Media and development of a Full Vectorial Contrast Source Inversion Method

Wiik, Torgeir January 2008 (has links)
<p>Seismic surveys are well established in hydrocarbon prospecting, and technology for processing seismic data has been developed over decades. The first electromagnetic survey for hydrocarbon prospecting, however, was performed in 2000, and as a consequence of this short time span the technology is not as well developed as in seismics. For instance, the need for efficient and robust forward modelling software and inversion schemes for collected data is urgent. In this thesis, forward modelling using integral equations and the Contrast Source Inversion (CSI) method are investigated for forward and inverse 3D electromagnetic scattering experiments in hydrocarbon prospecting, respectively. The mathematical model is developed for an arbitrary isotropic, conductive medium, with contrasts in electric permittivity and electric conductivity between the scattering object and the background, while in the numerical examples the background model is restricted to a horizontally layered medium with variations in the $z$-direction only and contrast in electric conductivity. The difference in electric conductivity is considered the backbone of electromagnetic hydrocarbon prospecting. The main forward-modelling result in this thesis is the establishment of a, to my knowledge, previously unpublished method for solving the electric Lippmann-Schwinger equation in a conductive medium by fixed-point iteration. In the inversion part of the thesis, the previously scalar CSI method is extended to a full vector-valued method. A new CSI method for inversion with respect to all the electromagnetic parameters (the electric permittivity, electric conductivity and magnetic permeability) is also presented, which I have yet to find treated elsewhere. Only the former method is tested numerically, using synthetic data, due to the computational complexity of the latter.
The numerical results from the forward modelling demonstrate the numerical validity of integral equation modelling and the fixed-point iteration, whereas the results from the inversion show some promise. With several source and receiver lines present, the lateral position of the scattering object is reconstructed well, whereas the vertical position causes problems. When a priori information about the position of the scattering object is introduced to further regularise the problem, its approximate position is successfully inverted, which illustrates the essential role additional regularisation plays in this inverse scattering problem. Thus the CSI method could be useful in petroleum geophysics, and should be developed further for the purpose of locating hydrocarbons in the subsurface. Several possibilities for further work are noted. This work was performed for StatoilHydro ASA.</p>
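The structure of a fixed-point (Picard) solve for a Lippmann-Schwinger-type equation e = e_b + G[χ e] can be sketched generically. The discretised scattering operator below is a random contraction standing in for the Green-function-times-contrast operator; the actual kernel, discretisation and convergence conditions in the thesis are not reproduced here.

```python
import numpy as np

def fixed_point_solve(apply_GX, e_background, tol=1e-10, max_iter=500):
    """Solve e = e_b + G[chi * e] by Picard (fixed-point) iteration.

    apply_GX : callable implementing the scattering operator e -> G[chi*e].
    Convergence requires the operator to be a contraction
    (spectral radius below one).
    """
    e = e_background.copy()
    for k in range(max_iter):
        e_next = e_background + apply_GX(e)
        if np.linalg.norm(e_next - e) < tol * np.linalg.norm(e_next):
            return e_next, k + 1
        e = e_next
    raise RuntimeError("fixed-point iteration did not converge")

# Toy stand-in for the discretised G*chi operator: a matrix scaled so the
# iteration contracts (this replaces, not reproduces, the physical kernel)
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
A *= 0.4 / np.linalg.norm(A, 2)
b = rng.normal(size=50)
e, iters = fixed_point_solve(lambda x: A @ x, b)
```

The converged iterate satisfies (I - A)e = b, i.e. the same field one would get from a direct solve, but without ever forming or factorising the full system matrix — the appeal of the iteration when the operator is only available as a fast matrix-vector product.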
123

A heavy tailed statistical model applied in anti-collision calculations for petroleum wells

Gjerde, Tony January 2008 (has links)
<p>Anti-collision calculations are performed during the planning of a new petroleum well. They are required in order to control the risk of a well collision, an event that must be avoided at all costs. The collision risk is closely related to the position uncertainty both of the planned well and of the existing wells in the given region. Earlier literature has indicated that the distribution of the position errors is more heavy-tailed than a normal distribution, which raises the question of whether the current methods are accurate enough. The current industry standard calculates the standard deviation of the centre-to-centre distance by an approximation, and assumes that the centre-to-centre distance is normally distributed. In this thesis we use a heavy-tailed Normal Inverse Gaussian (NIG) distribution for the declination error source in MWD magnetic directional surveying, which leads to a position uncertainty that is heavy-tailed relative to the multivariate normal distribution. The parameters of the NIG distribution are estimated from processed magnetic field data from the Tromsø geomagnetic observation station. Applying the current industry approach with the NIG distribution requires Monte Carlo simulation. Other error sources are also included in the error model to give a more realistic position uncertainty. Three different anti-collision cases demonstrate the differences between the NIG error model and the normal error model, and we compare the simulation-based results against the current methodology. The results depend strongly on the well geometries. They differ significantly, and with respect to whether a well plan should be realized or not, the NIG error model is the more conservative in most cases. However, there are cases where a normally distributed declination error gives more conservative decisions than the NIG distribution.
As an alternative to changing the distribution of the declination error, we propose two corrective actions to improve the existing anti-collision methodology. One is to replace one of the approximations in the current methodology with simulations or analytical computations. The other is to correct for the bias in the expected position caused by the NIG error model.</p>
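The Monte Carlo requirement mentioned above hinges on being able to draw NIG samples, which can be done through the standard normal variance-mean mixture representation: W ~ InverseGaussian(δ/γ, δ²) with γ = √(α² − β²), then X | W ~ N(μ + βW, W). The parameter values below are illustrative, not the thesis's estimates from the Tromsø data.

```python
import numpy as np

def sample_nig(alpha, beta, mu, delta, size, rng):
    """Draw from the Normal Inverse Gaussian distribution via its
    normal variance-mean mixture representation."""
    gamma = np.sqrt(alpha**2 - beta**2)
    # numpy's Wald distribution is the inverse Gaussian IG(mean, shape)
    w = rng.wald(delta / gamma, delta**2, size=size)
    return mu + beta * w + np.sqrt(w) * rng.standard_normal(size)

rng = np.random.default_rng(42)
# Symmetric NIG (beta = 0): mean mu, variance delta*alpha^2/gamma^3 = 0.5
x = sample_nig(alpha=2.0, beta=0.0, mu=0.0, delta=1.0, size=200_000, rng=rng)
```

With samples of the declination error in hand, the rest of a simulation-based anti-collision check is propagation: push each sampled error through the survey/position model and count how often the centre-to-centre distance falls below the separation criterion.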
124

Automatic Parametrisation and Block pseudo Likelihood Estimation for binary Markov random Fields

Toftaker, Håkon January 2008 (has links)
<p>Discrete Markov random fields play an important role in spatial statistics and are applied in many different areas. Models which consider only pairwise interactions between sites, such as the Ising model, often perform well as a prior in a Bayesian setting, but are generally unable to provide a realistic representation of a typical scene. Models defined over cliques of more than two sites have been shown to describe many different types of texture well. The specification of such models is often rather tedious, both in defining the parametric model and in estimating its parameters. In this paper we present a procedure which, in an automatic fashion, defines a parametric model from a training image. On the basis of the frequencies of the different types of local configurations, we define the potential function of all the clique configurations from a relatively small number of parameters. We then use a forward-backward algorithm to compute a maximum block pseudo likelihood estimator for the parametric models resulting from the automatic procedure. This set of methods is used to define Markov random field models from three different training images. The same algorithm that calculates the block pseudo likelihood is also used to implement a block Gibbs sampler, which is used to explore the properties of the models through simulation. The procedure is tested for a set of different input values. The analysis shows that the procedure produces a reasonable representation of one of the training images, but performs poorly on the others. The main problem seems to be the ratio between black and white, and this appears to be caused mainly by the estimator. It is therefore difficult to draw a conclusion about the quality of the parametric model. We also show that by modifying the estimated potential function slightly, we obtain a model which describes the second training image quite well.</p>
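The pseudo-likelihood idea underlying the estimator above is that the intractable joint normalising constant is replaced by single-site (or, in the thesis, block) conditionals, each of which normalises over just a few states. A minimal sketch for the simplest case — site-wise pseudo-likelihood of a pairwise Ising model on a ±1 image — looks as follows; the thesis's models use larger cliques and blocks, which this toy does not attempt.

```python
import numpy as np

def log_pseudo_likelihood(image, beta):
    """Log pseudo-likelihood of a +/-1 image under a pairwise Ising model
    with interaction parameter beta: sum over sites of
    log p(x_i | 4-neighbourhood), each conditional normalised over 2 states.
    """
    x = np.asarray(image, dtype=float)
    # Sum of the four nearest neighbours, zero-padded at the boundary
    s = np.zeros_like(x)
    s[1:, :] += x[:-1, :]
    s[:-1, :] += x[1:, :]
    s[:, 1:] += x[:, :-1]
    s[:, :-1] += x[:, 1:]
    # log p(x_i | nbrs) = beta*x_i*s_i - log(exp(beta*s_i) + exp(-beta*s_i))
    return np.sum(beta * x * s - np.logaddexp(beta * s, -beta * s))

rng = np.random.default_rng(1)
img = rng.choice([-1.0, 1.0], size=(16, 16))
ll = log_pseudo_likelihood(img, beta=0.3)
```

Maximising this over beta (e.g. by a 1-D line search) gives the pseudo-likelihood estimate; a coherent image scores higher than a random one at positive beta, which is what the estimator exploits.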
125

A blockyness Constraint for seismic AVA Inversion

Jensås, Ingrid Østgård January 2008 (has links)
<p>The aim of seismic inversion is to determine the distribution of elastic parameters from recorded seismic reflection data. A given combination of elastic parameters indicates a certain fluid or lithology, so elastic parameters can be very good hydrocarbon indicators. Although it is possible to interpret the reflection data from seismic acquisitions after processing, an improved analysis can be achieved by inverting for elastic properties, which can improve the vertical resolution of the image. This work explores different applications of the blocky seismic inversion technique, which is based on Bayesian inversion. Inverse problems commonly assume a Gaussian prior for the three elastic parameters P-wave velocity, S-wave velocity and density. This assumption does not always yield sharp edges between layers, and the idea of the work reported here is to improve on it by assuming a prior distribution for the contrasts in the elastic parameters that places more probability on high contrasts. Since the Cauchy distribution has heavier tails than the normal distribution, the idea behind the blocky inversion is to assume a Cauchy prior distribution for the contrasts in the elastic parameters. Inversion is a non-unique process; hence, the more reasonable prior information we use, the better the result. When using statistical inversion based on Bayes' rule, the prior distribution shapes the solution, and the modified Cauchy norm can help provide a solution with better focused layer boundaries. The scale parameter of the Cauchy distribution is not easy to estimate, and different methods are tested. Spatial coupling of the model parameters m is introduced along a line to provide lateral consistency and robust inversion results. The 2D inversion was done by assuming a Markov model in which the inversion result at one location depends only on the neighbouring traces.
This implies a sparse structure of the matrix to be inverted, and Cholesky factorization was used as a computational tool. This method allows tracewise nesting, in contrast to setting up the whole operator matrix for all traces along a line, and therefore reduces the computational time significantly. The aim of this approach was to consider the use of lateral correlation while inverting data as a sophisticated way of stacking data to improve the signal-to-noise ratio. To get a picture of the uncertainties in the inversion result, different methods, such as importance sampling, were tried, even though the resulting estimates were unreasonably large; this remains a topic for further work. The data used in this work are a synthetic case and real seismic data from the Kvitebjørn field in the North Sea.</p>
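One common way to realise a Cauchy prior on contrasts is iteratively reweighted least squares (IRLS): the heavy-tailed log-prior Σ log(1 + (Dm)²/s²) is locally quadratic with weights 1/(s² + (Dm)²), so each iteration solves a weighted Tikhonov system. The sketch below applies this to a toy 1-D deblurring problem; the Gaussian forward operator, scale parameter and noise level are all illustrative stand-ins, not the thesis's AVA operator or estimated scale.

```python
import numpy as np

def cauchy_blocky_inversion(G, d, sigma, scale, n_iter=30):
    """MAP inversion with a Cauchy prior on first-difference contrasts,
    solved by iteratively reweighted least squares (IRLS).

    Cauchy tails penalise many small contrasts more than a few large
    ones, which pushes the solution towards blocky profiles.
    """
    n = G.shape[1]
    D = np.diff(np.eye(n), axis=0)            # first-difference operator
    m = np.linalg.lstsq(G, d, rcond=None)[0]  # least-squares start
    for _ in range(n_iter):
        w = 1.0 / (scale**2 + (D @ m)**2)     # Cauchy reweighting
        A = G.T @ G / sigma**2 + D.T @ (w[:, None] * D)
        m = np.linalg.solve(A, G.T @ d / sigma**2)
    return m

# Toy example: recover a two-block profile from smoothed, noisy data
rng = np.random.default_rng(3)
n = 60
m_true = np.where(np.arange(n) < 30, 1.0, 2.0)
idx = np.arange(n)
G = np.exp(-0.5 * (idx[:, None] - idx[None, :])**2)  # Gaussian smoother
d = G @ m_true + 0.01 * rng.standard_normal(n)
m_est = cauchy_blocky_inversion(G, d, sigma=0.01, scale=0.05)
```

The reweighting is the key design choice: where the current model is flat, the weight is large and differences are suppressed; where a genuine jump has emerged, the weight collapses and the edge is left alone.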
126

Reduced Basis Methods for Partial Differential Equations : Evaluation of multiple non-compliant flux-type output functionals for a non-affine electrostatics problem

Eftang, Jens Lohne January 2008 (has links)
<p>A method for rapid evaluation of flux-type outputs of interest from solutions of partial differential equations (PDEs) is presented within the reduced basis framework for linear, elliptic PDEs. The central point is a Neumann-Dirichlet equivalence that allows evaluation of the output through the bilinear form of the weak formulation of the PDE. Through a comprehensive example related to electrostatics, we consider multiple outputs, a posteriori error estimators and empirical interpolation treatment of the non-affine terms in the bilinear form. Together with the Neumann-Dirichlet equivalence, these methods allow efficient and accurate numerical evaluation of the relationship mu -> s(mu), where mu is a parameter vector that determines the geometry of the physical domain and s(mu) is the corresponding flux-type output matrix of interest. As a practical application, we finally employ the rapid evaluation of mu -> s(mu) in solving an inverse (parameter-estimation) problem.</p>
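The offline/online split at the heart of the reduced basis framework can be illustrated in a few lines: expensive full solves at training parameters build a small orthonormal basis (offline), after which each new parameter requires only a tiny projected solve (online). The diagonal affine system below is a deliberately simple stand-in — the thesis's problem is a non-affine electrostatics PDE treated with empirical interpolation, which this sketch omits.

```python
import numpy as np

def build_basis(snapshots, n_basis):
    """Compress solution snapshots into an orthonormal reduced basis (POD)."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_basis]

rng = np.random.default_rng(7)
n = 200
A0 = np.diag(2.0 + rng.random(n))   # affine decomposition A(mu) = A0 + mu*A1
A1 = np.diag(rng.random(n))
b = rng.random(n)

# Offline stage: a few expensive full solves at training parameters
mus_train = np.linspace(0.1, 2.0, 5)
S = np.column_stack([np.linalg.solve(A0 + mu * A1, b) for mu in mus_train])
V = build_basis(S, n_basis=4)

# Online stage: cheap reduced solve at a new parameter
mu = 1.234
A_N = V.T @ (A0 + mu * A1) @ V      # 4x4 reduced system
u_N = V @ np.linalg.solve(A_N, V.T @ b)
u_full = np.linalg.solve(A0 + mu * A1, b)
```

Because the online cost depends only on the reduced dimension, the map mu -> output can be evaluated thousands of times, which is exactly what makes the inverse (parameter-estimation) application practical.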
127

Methods for Extreme Value Statistics Based on Measured Time Series

Haug, Even January 2008 (has links)
<p>The thesis describes the Average Exceedance Rate (AER) method, a method for predicting return levels from sampled time series. The AER method is an alternative to the Peaks over Threshold (POT) method, which is based on the assumption that data exceeding a certain threshold behave asymptotically. The AER method avoids this assumption by using sub-asymptotic data instead. Also, instead of using declustering to obtain independent data, correlation among the data is handled by assuming a Markov-like property. A practical procedure for using the AER method is proposed and tested on two sets of real data: wind speed data from Norway and wave height data from the Norwegian continental shelf. The method appears to give satisfactory results for the wind speed data, but for the wave height data its use appears to be invalid. Nevertheless, the method itself seems robust and to have certain advantages over the POT method.</p>
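The objects the method works with — exceedance rates as a function of level, and the return level as the point where the rate curve crosses a target probability — can be sketched empirically. This is only the raw ingredient: the AER method itself fits a parametric sub-asymptotic form to the rate curve and extrapolates beyond the data, which this illustration does not do. The Gumbel "wind speed" data are synthetic.

```python
import numpy as np

def exceedance_rate(x, levels):
    """Empirical exceedance rate: fraction of samples above each level."""
    x = np.asarray(x)
    return np.array([(x > eta).mean() for eta in levels])

def return_level(x, p, levels):
    """Smallest level whose empirical exceedance rate drops to p or below.

    An AER analysis would instead fit a parametric form to the rate curve
    and extrapolate it, allowing prediction beyond the observed range.
    """
    rates = exceedance_rate(x, levels)
    return levels[np.argmax(rates <= p)]

rng = np.random.default_rng(11)
x = rng.gumbel(loc=10.0, scale=2.0, size=100_000)  # synthetic maxima
levels = np.linspace(10.0, 40.0, 601)
eta_100 = return_level(x, p=0.01, levels=levels)   # ~1-in-100 level
```

For Gumbel(10, 2) the true 1-in-100 level is about 19.2, so the empirical crossing lands near there; the value of a fitted rate curve is precisely that it remains usable for probabilities smaller than 1/n, where the empirical curve runs out of data.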
128

Simulation of two-phase flow with varying surface tension.

Lervåg, Karl Yngve January 2008 (has links)
<p>This thesis studies the effects of varying surface tension along an interface separating two fluids. Varying surface tension leads to tangential forces along the interface, often called the Marangoni effect. These forces are discussed in detail, and two test cases are considered to analyse the Marangoni effect and to verify the present implementation. The first test studies steady-state two-phase flow where the fluids are separated by plane interfaces and the flow is driven by a linear surface-tension gradient. The second is an analysis of the initial forces on a two-dimensional droplet due to a linear surface-tension gradient. The tests indicate that the present implementation is capable of simulating two-phase flow with a given surface-tension gradient. The underlying model is a two-phase flow model for Newtonian fluids with constant viscosity and density. It is based on the Navier-Stokes equations coupled with a singular surface force, which together with the difference in fluid properties induces discontinuities across the interface. The Navier-Stokes equations are solved using a projection method, combined with the level-set method for capturing the interface and the ghost-fluid method (GFM) for handling the interface discontinuities. The thesis also discusses the effect of surfactants on an interface. The presence of surfactants reduces the local surface tension, and a non-uniform surfactant distribution results in varying surface tension and hence the Marangoni effect. A surfactant model is reviewed in which an equation of state couples the surface tension to the surfactant concentration, and a transport equation enforces surfactant mass conservation.</p>
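The surfactant-to-surface-tension coupling can be made concrete with a Langmuir-type equation of state, σ(Γ) = σ₀(1 + E ln(1 − Γ/Γ∞)) — a common choice in the literature, though the abstract does not specify which equation of state the reviewed model uses, and the parameter values below are illustrative. The tangential Marangoni stress is then the surface-tension gradient along the interface.

```python
import numpy as np

def surface_tension(gamma, sigma0=0.072, E=0.2, gamma_inf=1.0):
    """Langmuir-type equation of state (illustrative parameters):
    sigma = sigma0 * (1 + E * ln(1 - Gamma/Gamma_inf)).
    Surface tension decreases as surfactant concentration Gamma grows.
    """
    return sigma0 * (1.0 + E * np.log(1.0 - gamma / gamma_inf))

# Tangential Marangoni stress along a parametrised interface coordinate s,
# approximated by finite differences on a 1-D surfactant profile
s = np.linspace(0.0, 1.0, 101)
gamma = 0.4 * np.exp(-((s - 0.5) / 0.1)**2)   # localised surfactant patch
sigma = surface_tension(gamma)
tau = np.gradient(sigma, s)                    # d(sigma)/ds
```

The sign structure is the physically important part: surface tension is lowest where the surfactant accumulates, so the stress points away from the patch on both sides, driving flow from low-tension towards high-tension regions.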
129

Numerical solution of partial differential equations in time-dependent domains

Tråsdahl, Øystein January 2008 (has links)
<p>Numerical solution of heat transfer and fluid flow problems in two spatial dimensions is studied. An arbitrary Lagrangian-Eulerian (ALE) formulation of the governing equations is applied to handle time-dependent geometries. A Legendre spectral method is used for the spatial discretization, and the temporal discretization is done with a semi-implicit multi-step method. The Stefan problem, a convection-diffusion boundary value problem modeling phase transition, makes for some interesting model problems. One problem is solved numerically to obtain first, second and third order convergence in time, and another numerical example is used to illustrate the difficulties that may arise with distribution of computational grid points in moving boundary problems. Strategies to maintain a favorable grid configuration for some particular geometries are presented. The Navier-Stokes equations are more complex and introduce new challenges not encountered in the convection-diffusion problems. They are studied in detail by considering different simplifications. Some numerical examples in static domains are presented to verify exponential convergence in space and second order convergence in time. A preconditioning technique for the unsteady Stokes problem with Dirichlet boundary conditions is presented and tested numerically. Free surface conditions are then introduced and studied numerically in a model of a droplet. The fluid is modeled first as Stokes flow, then Navier-Stokes flow, and the difference in the models is clearly visible in the numerical results. Finally, an interesting problem with non-constant surface tension is studied numerically.</p>
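The "semi-implicit multi-step" temporal discretisation mentioned above can be illustrated in its simplest setting: BDF2 applied to a scalar decay problem with the right-hand side treated implicitly, bootstrapped with one backward-Euler step. This is only the time-stepping skeleton — the thesis couples it with a Legendre spectral method in space and an ALE formulation, neither of which appears here.

```python
import numpy as np

def bdf2_decay(lam, u0, T, n):
    """BDF2 for u' = lam*u with the right-hand side implicit:
    (3 u^{n+1} - 4 u^n + u^{n-1}) / (2h) = lam * u^{n+1}.
    The first step is bootstrapped with backward Euler.
    """
    h = T / n
    u_prev = u0
    u = u0 / (1.0 - h * lam)            # backward Euler start
    for _ in range(n - 1):
        u_prev, u = u, (4.0 * u - u_prev) / (3.0 - 2.0 * h * lam)
    return u

# Observed convergence order against the exact solution exp(-T)
err1 = abs(bdf2_decay(-1.0, 1.0, 1.0, 100) - np.exp(-1.0))
err2 = abs(bdf2_decay(-1.0, 1.0, 1.0, 200) - np.exp(-1.0))
order = np.log2(err1 / err2)
```

Halving the step size reduces the error by roughly a factor of four, i.e. the second-order convergence in time that the abstract reports verifying numerically.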
130

A statistical simulation-based framework for sample size considerations in case-control SNP association studies

Edsberg, Erik January 2008 (has links)
<p>In the thesis, a statistical simulation-based framework is presented that is intended for making sample size and power considerations prior to case-control association studies. It reviews biological background and biallelic single- and multiple-SNP disease models, with a focus on single-SNP models. Odds ratios, multiple testing, sample size, statistical power and the genomeSIM package are also reviewed. The framework is tested with the MAX stat method on a dominant disease model, demonstrating that it can be used for assessing whether different sample sizes are sufficient for detecting a causal SNP.</p>
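The shape of such a simulation-based power calculation can be sketched for a dominant single-SNP model: simulate carrier counts in cases and controls, apply a 2x2 chi-square test, and record the rejection fraction. Everything here is a simplified stand-in — the thesis uses the genomeSIM package and the MAX statistic, and the penetrance model below (controls approximated by the population, cases tilted by a carrier odds ratio) is an assumption for illustration only.

```python
import numpy as np

def simulate_power(n_cases, n_controls, maf, or_dom,
                   crit=3.841, n_sim=500, seed=0):
    """Monte Carlo power of a 2x2 chi-square test (carrier vs non-carrier)
    under a dominant model; crit is the chi2(1) 5% critical value.
    Controls are approximated by the population carrier frequency."""
    rng = np.random.default_rng(seed)
    p_carrier = 1.0 - (1.0 - maf)**2          # >= 1 risk allele under HWE
    odds = or_dom * p_carrier / (1.0 - p_carrier)
    p_case = odds / (1.0 + odds)              # carrier frequency in cases
    hits = 0
    for _ in range(n_sim):
        a = rng.binomial(n_cases, p_case)     # carrier cases
        b = rng.binomial(n_controls, p_carrier)
        t = np.array([[a, n_cases - a], [b, n_controls - b]], dtype=float)
        row = t.sum(1, keepdims=True)
        col = t.sum(0, keepdims=True)
        chi2 = ((t - row @ col / t.sum())**2 / (row @ col / t.sum())).sum()
        hits += chi2 > crit
    return hits / n_sim

power_small = simulate_power(200, 200, maf=0.3, or_dom=1.5)
power_large = simulate_power(1000, 1000, maf=0.3, or_dom=1.5)
```

Sweeping the sample size and reading off where the power curve crosses a target (say 80%) is exactly the kind of pre-study assessment the framework is intended for.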
