  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Hierarchical Ensemble Kalman Filter : for Observations of Production and 4-D Seismic Data

Sætrom, Jon January 2007 (has links)
Keywords: hierarchical Bayesian sequential reservoir history matching, seismic inversion, Ensemble Kalman Filter
182

Estimation of Reservoir Properties by Joint Inversion of Seismic AVO and CSEM data

Holm, Andreas January 2007 (has links)
Porosity and water saturation in a horizontal top-reservoir are estimated jointly from seismic AVO (Amplitude Versus Offset) data and Controlled Source Electromagnetic (CSEM) data. A model connecting porosity and saturation both to AVO effects and to the phase shift of electromagnetic signals is constructed; it involves Gassmann's equations, Archie's law, Zoeppritz' equations and ray-tracing. We use a Bayesian approach to solve the inversion problem, and the solution is given as posterior distributions for the parameters of interest. We also investigate the noise levels in the two types of data and how these affect the estimates of the reservoir properties. Gaussian assumptions and linearizations are made to ensure analytically tractable posterior distributions for porosity and saturation, and a Gibbs sampler is used to explore the joint posterior for porosity, saturation and noise levels. The method is applied both to synthetic data and to field data from the Troll gas field. The results from the joint inversion are compared to results from using seismic data exclusively, and a clear improvement is found in the estimates for the synthetic case. The results from the Troll data are more ambiguous, probably because of the difficulty of picking seismic data along the top-reservoir and inaccuracies in the fixed parameters of the geophysical forward model.
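The Gibbs sampling scheme described above can be illustrated with a minimal sketch: a toy linear-Gaussian model with two data types and unknown noise variances, alternating conjugate updates for the parameters and the noise levels. All matrices, priors and dimensions below are invented for illustration and are not the thesis's actual AVO/CSEM forward models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearized forward models (hypothetical): two data types observing
# the same two reservoir parameters m (think porosity, saturation).
n, m_dim = 40, 2
G_avo = rng.normal(size=(n, m_dim))
G_csem = rng.normal(size=(n, m_dim))
m_true = np.array([0.25, 0.6])
d_avo = G_avo @ m_true + rng.normal(scale=0.3, size=n)
d_csem = G_csem @ m_true + rng.normal(scale=0.1, size=n)

# Gaussian prior on m, inverse-gamma priors on the two noise variances.
mu0, S0inv = np.zeros(m_dim), np.eye(m_dim)
a0, b0 = 2.0, 0.1

def gibbs(n_iter=2000):
    s2 = np.array([1.0, 1.0])  # noise variances (AVO, CSEM), initial guess
    samples = []
    for _ in range(n_iter):
        # m | s2, d : Gaussian (conjugate update combining both data types)
        P = S0inv + G_avo.T @ G_avo / s2[0] + G_csem.T @ G_csem / s2[1]
        b = S0inv @ mu0 + G_avo.T @ d_avo / s2[0] + G_csem.T @ d_csem / s2[1]
        C = np.linalg.inv(P)
        m = rng.multivariate_normal(C @ b, C)
        # s2 | m, d : inverse-gamma update, one draw per data type
        for i, (G, d) in enumerate([(G_avo, d_avo), (G_csem, d_csem)]):
            r = d - G @ m
            s2[i] = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + r @ r / 2))
        samples.append(m)
    return np.array(samples)

post = gibbs()
print(post[500:].mean(axis=0))  # posterior mean after burn-in
```

Alternating the two conditional draws is exactly the structure the abstract describes; the real problem replaces the random matrices with the linearized geophysical operators.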
183

Security analysis of blind signatures and group signatures

Nordli, Børge January 2007 (has links)
We present the latest formal security definitions for blind signature schemes and for group signature schemes. We begin by introducing the theory of algorithms, probability distributions, distinguishers, protocol attacks and experiments needed to understand the definitions. For blind signatures, we define blindness and non-forgeability, present the blind Schnorr and Okamoto-Schnorr signature schemes, and prove that the latter is secure. For group signatures, we define full-anonymity and full-non-forgeability (full-traceability). Finally, we sketch a secure general group signature scheme due to Bellare, Micciancio and Warinschi.
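The blind Schnorr scheme mentioned above follows a standard three-move blinding pattern: the user blinds the signer's commitment and challenge, and unblinds the response. The sketch below runs the protocol in a toy, cryptographically insecure subgroup of Z_23*; the group parameters, hash construction and message are illustrative only.

```python
import hashlib
import secrets

# Toy Schnorr group (insecure parameters, for illustration only):
p, q, g = 23, 11, 2        # g has order q modulo p
x = 7                      # signer's secret key
y = pow(g, x, p)           # signer's public key

def H(R, msg):
    h = hashlib.sha256(f"{R}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % q

# --- blind signing protocol ---
k = secrets.randbelow(q)                 # signer: random commitment
R = pow(g, k, p)

alpha, beta = secrets.randbelow(q), secrets.randbelow(q)  # user: blinding factors
R_blind = (R * pow(g, alpha, p) * pow(y, beta, p)) % p
c_blind = H(R_blind, "pay 10 NOK")       # challenge on the blinded commitment
c = (c_blind + beta) % q                 # blinded challenge sent to signer

s = (k + c * x) % q                      # signer responds, never sees the message
s_blind = (s + alpha) % q                # user unblinds the response

# --- verification of the signature (c_blind, s_blind) ---
R_check = (pow(g, s_blind, p) * pow(y, (-c_blind) % q, p)) % p
assert H(R_check, "pay 10 NOK") == c_blind
print("signature verifies")
```

The algebra checks out because g^(s_blind) * y^(-c_blind) = R * g^alpha * y^beta = R_blind, so the verifier recomputes exactly the blinded commitment the user hashed.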
184

In silico Investigation of Possible Mitotic Checkpoint Signalling Mechanisms

Kirkeby, Håkon January 2007 (has links)
The mitotic checkpoint is the major biochemical pathway acting to ensure stable genome content in cell division. A delay in chromosome segregation is enforced as long as at least one kinetochore lacks proper attachment to the mitotic spindle, which prevents premature initiation of anaphase and uneven chromosome distribution. The backbone of the mitotic checkpoint control system is established as the production of a wait-anaphase signal at the unattached kinetochores. However, how this signal is able to support a functional checkpoint is unclear. To explore the performance of the wait-anaphase signal in terms of providing the mitotic checkpoint with high fidelity, a mathematical modelling framework is constructed that simulates the spatially distinct production of anaphase inhibitors, their diffusion in the cytoplasm and their interference with the anaphase-promoting machinery. The model is used to analyse the performance of several different signalling mechanisms, with emphasis on testing the ability to maintain tight inhibition and allow rapid release of the anaphase promoter.
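The modelling ingredients named above (spatially localised production, cytoplasmic diffusion, decay of the inhibitor) can be caricatured in one dimension with an explicit finite-difference scheme and a point source at one end. All rate constants, lengths and time steps below are invented for illustration; the thesis's actual framework is richer.

```python
import numpy as np

# 1-D caricature: an inhibitor is produced at one unattached kinetochore
# (x = 0), diffuses through the cytoplasm and decays; anaphase would be
# held off while the inhibitor concentration stays high.
L_len, N = 10.0, 200            # domain length and grid points (illustrative)
dx = L_len / N
D, k_prod, k_deg = 1.0, 5.0, 0.1
dt = 0.4 * dx**2 / D            # stable explicit time step (factor < 0.5)

u = np.zeros(N)
for _ in range(20000):
    lap = np.zeros(N)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2 * (u[1] - u[0]) / dx**2       # reflecting boundaries
    lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
    u = u + dt * (D * lap - k_deg * u)       # diffusion plus decay
    u[0] += dt * k_prod / dx                 # point source at the kinetochore

print(u[0], u[-1])   # gradient: high at the source, lower at the far end
```

The resulting steady gradient, with length scale sqrt(D / k_deg), is the kind of spatial signal whose ability to enforce and then rapidly release inhibition the thesis analyses.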
185

Bayesian Text Categorization

Næss, Arild Brandrud January 2007 (has links)
Natural language processing is an interdisciplinary field of research which studies the problems and possibilities of automated generation and understanding of natural human languages. Text categorization is a central subfield of natural language processing. Automatically assigning categories to digital texts has a wide range of applications in today's information society, from filtering spam to creating web hierarchies and digital newspaper archives. It is a discipline that lends itself more naturally to machine learning than to knowledge engineering; statistical approaches to text categorization are therefore a promising field of inquiry. We provide a survey of the state of the art in text categorization, presenting the most widespread methods in use, and placing particular emphasis on support vector machines, an optimization algorithm that has emerged as the benchmark method in text categorization over the past ten years. We then turn our attention to Bayesian logistic regression, a fairly new and largely unstudied method in text categorization. We see how this method has certain similarities to the support vector machine method, but also differs from it in crucial respects. Notably, Bayesian logistic regression provides us with a statistical framework. It can be claimed to be more modular, in the sense that it is more open to modification and supplementation by other statistical methods, whereas the support vector machine method remains more of a black box. We present results of thorough testing of the BBR toolkit for Bayesian logistic regression on three separate data sets. We demonstrate which of BBR's parameters are important, and we show that its results compare favorably to those of the SVMlight toolkit for support vector machines. We also present two extensions to the BBR toolkit. One attempts to incorporate domain knowledge by way of the prior probability distributions of single words; the other tries to make use of uncategorized documents to boost learning accuracy.
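As a rough illustration of the Bayesian logistic regression idea (not the BBR implementation, which uses a coordinate-wise algorithm and also offers a Laplace prior), the sketch below finds the MAP weights under a Gaussian prior by plain gradient ascent on synthetic bag-of-words data; all data and hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "documents": bag-of-words counts, two classes (hypothetical data).
n, d = 200, 30
X = rng.poisson(1.0, size=(n, d)).astype(float)
w_true = np.concatenate([rng.normal(size=5), np.zeros(d - 5)])
y = (X @ w_true + rng.logistic(size=n) > 0).astype(float)

def map_logistic(X, y, tau2=1.0, iters=1000, lr=0.1):
    """MAP weights under a Gaussian prior w ~ N(0, tau2 I):
    maximize log-likelihood - ||w||^2 / (2 tau2), i.e. ridge-penalized
    logistic regression; the prior is what makes the estimate 'Bayesian'."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (y - p) - w / tau2   # log-posterior gradient
        w += lr * grad / len(y)
    return w

w_hat = map_logistic(X, y)
acc = ((X @ w_hat > 0).astype(float) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Swapping the Gaussian prior for a Laplace prior changes the penalty to an L1 norm and drives many word weights exactly to zero, which is one of the modularity points the abstract makes.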
186

Application of the wavelet transform for analysis of ultrasound images

Kleiven, Eivind January 2008 (has links)
In this master's thesis we analyse medical ultrasound images using the wavelet transform. Mathematical theory is introduced for both one-dimensional and two-dimensional functions. Three edge detectors based on this theory are given: two are suggested by the author, and one is an implementation of the well-known Canny edge detector. Our implementation differs slightly from the original Canny edge detector in that it uses the wavelet transform. All three edge detectors are applied to several images and the results are discussed. The multiscale behaviour of the wavelet transform makes it useful for edge detection. At small scales it is sensitive to noise but localises edges well; at large scales it is less sensitive to noise but localises edges more poorly. One problem when designing an edge detector is to find the scale with the best trade-off between localisation and noise sensitivity. We suggest an algorithm that selects this scale automatically, using information from the wavelet transform across larger scales. The result is an algorithm that works satisfactorily for a set of images differing in the amount of noise and the contrast between objects. An edge detector for one-dimensional signals is also given. It works very well for locating singularities and characterising Lipschitz regularity in one-dimensional signals; as an edge detector for images, however, it does not function satisfactorily. Further investigation should be done on how to use the multiscale information carried by the wavelet transform. The author is convinced that better edge detectors, less sensitive to noise and with good localisation properties, can be derived using the wavelet transform across scales.
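The multiscale trade-off described above is easy to see in one dimension: with a derivative-of-Gaussian wavelet, edge detection becomes finding local modulus maxima of the transform, and varying the scale s trades localisation against noise sensitivity. The signal, threshold and scales below are illustrative choices, not the thesis's algorithm.

```python
import numpy as np

# The wavelet transform with a derivative-of-Gaussian wavelet equals the
# derivative of the signal smoothed at scale s, so its local modulus
# maxima mark edges (Mallat-Zhong style multiscale edge detection).
def dog_wavelet_transform(f, s, n_sigma=4):
    t = np.arange(-n_sigma * s, n_sigma * s + 1)
    g = np.exp(-t**2 / (2 * s**2))
    psi = -t / s**2 * g                      # derivative of a Gaussian
    psi /= np.abs(psi).sum()                 # normalize for comparable scales
    return np.convolve(f, psi, mode="same")

def edges(f, s, thresh=0.1):
    mag = np.abs(dog_wavelet_transform(f, s))
    # local maxima of |Wf| above a relative threshold = edges at scale s
    is_max = (mag[1:-1] > mag[:-2]) & (mag[1:-1] >= mag[2:])
    return np.where(is_max & (mag[1:-1] > thresh * mag.max()))[0] + 1

# Noisy step signal with one true edge at index 100.
rng = np.random.default_rng(2)
f = np.concatenate([np.zeros(100), np.ones(100)]) + rng.normal(scale=0.1, size=200)

print(edges(f, s=1))   # small scale: sharp localisation, but noise maxima too
print(edges(f, s=6))   # larger scale: fewer false edges, near index 100
```

Tracking how these maxima persist from large scales down to small ones is the across-scale information the thesis proposes using for automatic scale selection.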
187

Multiscale Modelling of Elastic Parameters

Børset, Kari January 2008 (has links)
Petrophysical properties in general, and elasticity in particular, show heterogeneous variations over many length scales. In a reservoir model, on which one can for example simulate fluid flow, seismic responses and resistivity, the petrophysical parameters must represent all these variations, even though the model is at a scale too coarse to capture them in detail. Upscaling is a technique for bringing information from one scale to a coarser one in a consistent manner; an upscaled model can thus be seen as homogeneous, with a set of effective properties for its scale. For elastic properties, upscaling has traditionally been done by volume-weighted averaging methods such as the Voigt, Reuss or Backus averages, which use limited or no information about the geology of the rock. The objective here is to do upscaling with a technology that takes geological information into account. This thesis considers different aspects of elasticity upscaling in general and general-geometry upscaling in particular. After the theory part, it covers verification of the general-geometry method and its implementation, projection of an elasticity tensor onto a given symmetry, and visualization of elastic moduli. Next, the importance of including geological information is studied, and upscaling is performed on examples of realistic reservoir models. Finally, elasticity upscaling in a bottom-up approach to modelling 4D seismic is considered.
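For contrast with the geometry-based approach, the traditional volume-weighted averages named above are one-liners. The sketch below computes the Voigt and Reuss bounds; for vertically travelling P-waves through thin horizontal isotropic layers, the Backus average of the P-wave modulus reduces to the harmonic (Reuss) mean. The moduli and fractions are illustrative values only.

```python
import numpy as np

# Classical volume-weighted upscaling bounds for a layered medium:
# Voigt (arithmetic) and Reuss (harmonic) averages bracket the effective
# modulus; neither uses any geometric/geological information.
def voigt(moduli, fractions):
    return np.sum(fractions * moduli)

def reuss(moduli, fractions):
    return 1.0 / np.sum(fractions / moduli)

# Two-layer example (illustrative): P-wave moduli M = rho * Vp^2 in GPa
# for a soft sand and a stiffer shale, mixed 50/50 by volume.
M = np.array([12.0, 25.0])
f = np.array([0.5, 0.5])

M_voigt = voigt(M, f)
M_backus = reuss(M, f)    # Backus vertical P-modulus = harmonic mean here
print(M_voigt, M_backus)  # Voigt >= effective >= Reuss
```

That the two bounds differ noticeably even in this tiny example is the motivation for geometry-aware upscaling: where the true effective value falls between them depends on the rock's internal structure.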
188

A heavy tailed statistical model applied in anti-collision calculations for petroleum wells

Gjerde, Tony January 2008 (has links)
Anti-collision calculations are done during the planning of a new petroleum well. These calculations are required in order to control the risk of a well collision, an event to be avoided at any cost. The risk of a well collision is closely related to the position uncertainty both of the planned well and of the existing wells in the region. Earlier literature has indicated that the distribution of the position errors is more heavy-tailed than the normal distribution, which raises the question of whether the current methods are accurate enough. The current industry standard calculates the standard deviation of the centre-to-centre distance by an approximation, and assumes that the centre-to-centre distance is normally distributed. In this thesis we use a heavy-tailed Normal Inverse Gaussian (NIG) distribution for the declination error source in MWD magnetic directional surveying, which leads to a position uncertainty that is heavy-tailed relative to the multivariate normal distribution. The parameters of the NIG distribution are estimated from processed magnetic field data from the Tromsø geomagnetic observation station. The NIG distribution requires Monte Carlo simulation in order to apply the current industry approach. Other error sources are also included in the error model to give a more realistic position uncertainty. Three anti-collision cases demonstrate the differences between the NIG error model and the normal error model, and we compare the simulation-based results against the current methodology. The results depend strongly on the well geometries. They differ significantly, and with respect to whether a well plan should be realized or not, the NIG error model is the most conservative in most cases. However, there are cases where a normally distributed declination error gives more conservative decisions than the NIG distribution. As an alternative to changing the distribution of the declination error, we propose two corrective actions to improve the existing anti-collision methodology. One is to replace one of the approximations in the current methodology with simulations or analytical computations. The other is to correct for the bias in the expected position caused by the NIG error model.
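The NIG error term lends itself to exactly the Monte Carlo treatment the abstract describes: the NIG distribution is a normal variance-mean mixture with inverse Gaussian mixing, which numpy's Wald sampler provides directly. The sketch below uses illustrative parameters and simply compares tail mass against a normal distribution of equal variance, the heavy tail being what drives the more conservative collision probabilities.

```python
import numpy as np

rng = np.random.default_rng(3)

def rnig(alpha, beta, mu, delta, size):
    """Sample the Normal Inverse Gaussian distribution as a variance-mean
    mixture: W ~ InverseGaussian(delta/gamma, delta^2), X | W ~ N(mu + beta*W, W)."""
    gamma = np.sqrt(alpha**2 - beta**2)
    W = rng.wald(delta / gamma, delta**2, size=size)  # numpy's Wald = inverse Gaussian
    return mu + beta * W + np.sqrt(W) * rng.standard_normal(size)

n = 200_000
# Symmetric NIG with unit variance (Var = delta * alpha^2 / gamma^3 = 1 here).
x_nig = rnig(alpha=1.0, beta=0.0, mu=0.0, delta=1.0, size=n)
x_norm = rng.standard_normal(n)

for name, x in [("NIG", x_nig), ("normal", x_norm)]:
    print(name, "P(|X| > 4 sigma) ~", np.mean(np.abs(x) > 4))
```

With equal variances, the NIG samples put orders of magnitude more mass beyond four standard deviations, which is why feeding such errors through a well-position simulation changes the collision-probability verdict.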
189

A Blockiness Constraint for Seismic AVA Inversion

Jensås, Ingrid Østgård January 2008 (has links)
The aim of seismic inversion is to determine the distribution of elastic parameters from recorded seismic reflection data. If a combination of elastic parameters is known, it indicates a certain fluid or lithology; elastic parameters can therefore be very good hydrocarbon indicators. Although it is possible to interpret the reflection data from seismic acquisitions after processing, an improved analysis can be achieved by inverting for elastic properties, which can improve the vertical resolution of the image. This work applies different variants of the blocky seismic inversion technique, which is based on Bayesian inversion. In inverse problems, a Gaussian prior is generally assumed for the three elastic parameters: P-wave velocity, S-wave velocity and density. This assumption does not always produce sharp edges between layers, and the idea of the work reported here is to improve on it by assuming a prior distribution for the contrasts in the elastic parameters that gives higher probability to large contrasts. Since the Cauchy distribution has heavier tails than the normal distribution, the blocky inversion assumes a Cauchy prior distribution for the contrasts in the elastic parameters. Inversion is a non-unique process; hence, the more reasonable prior information we use, the better the result. In statistical inversion based on Bayes' rule, the prior distribution shapes the solution, and the modified Cauchy norm can help provide a solution with better-focused layer boundaries. The scale parameter of the Cauchy distribution is not easy to estimate, and different methods are tested. Spatial coupling of the model parameters m is introduced along a line to provide lateral consistency and robust results. The 2D inversion was done by assuming a Markov model in which the inversion result at one location depends only on the neighbouring traces. This implies a sparse structure of the matrix to be inverted, and Cholesky factorization was used as a computational tool. This method allows trace-wise nesting, in contrast to setting up the whole operator matrix for all traces along a line, and therefore reduces the computation time significantly. The aim of this approach was to use lateral correlation while inverting the data, as a sophisticated way of stacking data to improve the signal-to-noise ratio. To assess the uncertainties in the inversion result, different methods such as importance sampling were tried, although the resulting estimates were unreasonably large; this remains a topic for further work. The data used in this work are a synthetic case and real seismic data from the Kvitebjørn field in the North Sea.
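The Cauchy-prior idea can be sketched on a toy 1-D problem: a MAP estimate with a Cauchy prior on the contrasts Dm, computed by iteratively reweighted least squares. The forward operator, noise level and Cauchy scale below are all invented for illustration; the thesis's Zoeppritz-based AVA operator is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

# MAP inversion with a Cauchy prior on the first differences Dm:
#   minimize ||d - G m||^2 / (2 sig^2) + sum_i log(1 + (Dm)_i^2 / s^2),
# solved by iteratively reweighted least squares (IRLS).
n = 80
m_true = np.zeros(n)
m_true[20:50] = 1.0                          # blocky "true" model
G = np.tril(np.ones((n, n)))                 # toy forward operator (running sum)
sig = 0.5
d = G @ m_true + rng.normal(scale=sig, size=n)

D = np.diff(np.eye(n), axis=0)               # first-difference operator
s2 = 0.01                                    # Cauchy scale (assumed known here)

m = np.zeros(n)
for _ in range(30):
    u = D @ m
    W = np.diag(2.0 / (s2 + u**2))           # IRLS weights from the Cauchy prior:
    A = G.T @ G / sig**2 + D.T @ W @ D       # large contrasts get small penalty
    m = np.linalg.solve(A, G.T @ d / sig**2)

print(np.round(m[18:23], 2), np.round(m[48:52], 2))  # jumps near 20 and 50
```

Because the weight on a contrast shrinks as that contrast grows, the iterations let a few large jumps survive while flattening everything else, producing the blocky solution a Gaussian prior would smooth away.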
190

Reduced Basis Methods for Partial Differential Equations : Evaluation of multiple non-compliant flux-type output functionals for a non-affine electrostatics problem

Eftang, Jens Lohne January 2008 (has links)
A method for rapid evaluation of flux-type outputs of interest from solutions of partial differential equations (PDEs) is presented within the reduced basis framework for linear, elliptic PDEs. The central point is a Neumann-Dirichlet equivalence that allows evaluation of the output through the bilinear form of the weak formulation of the PDE. Through a comprehensive example related to electrostatics, we consider multiple outputs, a posteriori error estimators, and empirical interpolation treatment of the non-affine terms in the bilinear form. Together with the Neumann-Dirichlet equivalence, these methods allow efficient and accurate numerical evaluation of the relationship mu -> s(mu), where mu is a parameter vector that determines the geometry of the physical domain and s(mu) is the corresponding flux-type output matrix of interest. As a practical application, we finally employ the rapid evaluation of mu -> s(mu) to solve an inverse (parameter-estimation) problem.
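The offline/online split at the heart of the reduced basis method can be sketched for an affinely parametrized toy system A(mu) = A0 + mu*A1; the thesis's non-affine electrostatics problem additionally needs the empirical interpolation treatment it mentions. All matrices and parameter values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy parametrized linear system A(mu) u = f with A(mu) = A0 + mu * A1:
# solve a few "truth" systems offline, then Galerkin-project onto their
# span for fast online solves at new parameter values.
N = 200
A0 = np.eye(N) + 0.01 * rng.normal(size=(N, N))
A0 = A0 @ A0.T                               # make A0 symmetric positive definite
A1 = np.diag(np.linspace(0.1, 1.0, N))
f = rng.normal(size=N)

def truth_solve(mu):
    return np.linalg.solve(A0 + mu * A1, f)  # full N x N solve

# Offline stage: snapshots at a few parameters, orthonormalized by QR.
snapshots = np.column_stack([truth_solve(mu) for mu in [0.1, 1.0, 5.0]])
V, _ = np.linalg.qr(snapshots)

def rb_solve(mu):
    # Online stage: project onto the reduced space and solve a 3 x 3 system.
    Ar = V.T @ (A0 + mu * A1) @ V
    return V @ np.linalg.solve(Ar, V.T @ f)

mu = 2.0
err = np.linalg.norm(rb_solve(mu) - truth_solve(mu)) / np.linalg.norm(truth_solve(mu))
print(f"relative RB error at mu = {mu}: {err:.2e}")
```

The online cost is independent of N once the small projected operators are assembled, which is what makes evaluating mu -> s(mu) cheap enough for the inverse problem the abstract describes.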
