51.
Hierarchical Ensemble Kalman Filter: for Observations of Production and 4-D Seismic Data
Sætrom, Jon, January 2007 (has links)
<p>Keywords: hierarchical Bayesian sequential reservoir history matching, seismic inversion, ensemble Kalman filter.</p>
52.
Estimation of Reservoir Properties by Joint Inversion of Seismic AVO and CSEM Data
Holm, Andreas, January 2007 (has links)
<p>Porosity and water saturation in a horizontal top-reservoir are estimated jointly from seismic AVO (Amplitude Versus Offset) data and Controlled Source Electromagnetic (CSEM) data. A model connecting porosity and saturation to both AVO effects and to the phase shift of electromagnetic signals is constructed; it involves Gassmann's equations, Archie's law, Zoeppritz' equations and ray-tracing. We use a Bayesian approach to solve the inversion problem, and the solution is given as posterior distributions for the parameters of interest. We also investigate the noise levels in the two types of data, and how these affect the estimates of the reservoir properties. Gaussian assumptions and linearizations are made to ensure analytically tractable posterior distributions for porosity and saturation, and a Gibbs sampler is used to explore the joint posterior for porosity, saturation and noise levels. The method is applied to both synthetic data and field data from the Troll gas field. The results from the joint inversion are compared to results from using seismic data exclusively, and a clear improvement is found in the estimates for the synthetic case. The results from the Troll data are more ambiguous, probably because of the difficulty of picking seismic data along the top-reservoir and inaccuracies in the fixed parameters of the geophysical forward model.</p>
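The abstract's combination of a linearized Gaussian model with a Gibbs sampler over parameters and noise levels can be illustrated on a toy problem. The sketch below is not the thesis's AVO/CSEM model: the forward operator `G`, the priors and all numerical values are invented, but the two conditional-sampling steps mirror the linear-Gaussian and inverse-gamma conjugacy the abstract relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearised forward model: data = G @ m + noise.
# m stands in for (porosity, saturation); none of this is the thesis's model.
n_data, n_par = 40, 2
G = rng.normal(size=(n_data, n_par))
m_true = np.array([0.25, 0.60])
sigma_true = 0.1
d = G @ m_true + sigma_true * rng.normal(size=n_data)

tau2 = 1.0          # prior: m ~ N(0, tau2 * I)
a0, b0 = 2.0, 0.02  # prior: sigma2 ~ Inv-Gamma(a0, b0)

def gibbs(d, G, n_iter=2000):
    """Alternate the Gaussian conditional for the model parameters with
    the inverse-gamma conditional for the noise variance."""
    n_data, n_par = G.shape
    sigma2 = 1.0
    ms, s2s = [], []
    for _ in range(n_iter):
        # m | sigma2, d is Gaussian (linear-Gaussian conjugacy)
        prec = G.T @ G / sigma2 + np.eye(n_par) / tau2
        cov = np.linalg.inv(prec)
        mean = cov @ (G.T @ d) / sigma2
        m = rng.multivariate_normal(mean, cov)
        # sigma2 | m, d is inverse-gamma (conjugate noise step)
        resid = d - G @ m
        sigma2 = 1.0 / rng.gamma(a0 + n_data / 2, 1.0 / (b0 + 0.5 * resid @ resid))
        ms.append(m)
        s2s.append(sigma2)
    return np.array(ms), np.array(s2s)

samples_m, samples_s2 = gibbs(d, G)
post_mean = samples_m[500:].mean(axis=0)        # posterior mean after burn-in
post_sigma = samples_s2[500:].mean() ** 0.5     # posterior noise level
```

Because both conditionals are conjugate, the chain mixes quickly; treating the noise level as unknown, as in the abstract, only adds the inverse-gamma step.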
53.
Security analysis of blind signatures and group signatures
Nordli, Børge, January 2007 (has links)
<p>We present the latest formal security definitions for blind signature schemes and for group signature schemes. We start by introducing theory about algorithms, probability distributions, distinguishers, protocol attacks and experiments, which is needed to understand the definitions. In the case of blind signatures, we define blindness and non-forgeability, and we present the blind Schnorr and Okamoto-Schnorr signature schemes and prove that the latter is secure. For group signatures, we define full-anonymity and full-non-forgeability (full-traceability). In the end, we sketch a secure general group signature scheme presented by Bellare, Micciancio and Warinschi.</p>
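The blind Schnorr scheme mentioned in the abstract can be sketched in a few lines over a toy group. The parameters below are deliberately tiny (a 12-bit safe prime) and completely insecure; only the protocol flow, commit, blind, sign, unblind, verify, follows the standard construction.

```python
import hashlib
import secrets

# Toy group parameters: far too small for real use, chosen only so the
# protocol runs instantly.
q = 1019                 # prime order of the subgroup
p = 2 * q + 1            # safe prime, p = 2039
g = 4                    # generator of the order-q subgroup of squares

def H(msg: str, r: int) -> int:
    """Hash message and commitment into Z_q (Fiat-Shamir challenge)."""
    h = hashlib.sha256(f"{msg}|{r}".encode()).hexdigest()
    return int(h, 16) % q

# Signer key pair
x = secrets.randbelow(q - 1) + 1    # secret key
y = pow(g, x, p)                    # public key

def blind_sign(msg: str):
    """Three-move blind Schnorr protocol between signer and user."""
    # Signer: commit
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    # User: blind the commitment with alpha, beta
    alpha = secrets.randbelow(q)
    beta = secrets.randbelow(q)
    r_blind = (r * pow(g, alpha, p) * pow(y, beta, p)) % p
    e_blind = H(msg, r_blind)
    e = (e_blind + beta) % q        # blinded challenge sent to signer
    # Signer: respond (never sees msg, r_blind or e_blind)
    s = (k + e * x) % q
    # User: unblind
    s_blind = (s + alpha) % q
    return e_blind, s_blind

def verify(msg: str, e: int, s: int) -> bool:
    """Standard Schnorr verification: recompute the commitment."""
    r = (pow(g, s, p) * pow(y, (-e) % q, p)) % p
    return H(msg, r) == e

e_sig, s_sig = blind_sign("transfer 100")
```

The signer only ever sees the blinded challenge and response, never the message or the final signature, which is what the blindness definition formalises.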
54.
In silico Investigation of Possible Mitotic Checkpoint Signalling Mechanisms
Kirkeby, Håkon, January 2007 (has links)
<p>The mitotic checkpoint is the major biochemical pathway acting to ensure stable genome content in cell division. A delay in chromosome segregation is enforced as long as at least one kinetochore lacks proper attachment to the mitotic spindle, which prevents premature initiation of anaphase and uneven chromosome distribution. The backbone of the mitotic checkpoint control system is established as the production of a wait-anaphase signal at the unattached kinetochores. However, how this signal is able to support a functional checkpoint is unclear. To explore the performance of the wait-anaphase signal in terms of providing the mitotic checkpoint with high fidelity, a mathematical modelling framework is constructed that simulates the spatially distinct production of anaphase inhibitors, their diffusion in the cytoplasm and their interference with the anaphase-promoting machinery. The model is used to analyse the performance of several different signalling mechanisms, with emphasis on testing the ability to maintain tight inhibition and allow rapid release of the anaphase promoter.</p>
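A minimal one-dimensional version of such a framework: production of a diffusible inhibitor at an unattached kinetochore, diffusion through the cytoplasm, and first-order decay. All rates and lengths below are illustrative, not values from the thesis.

```python
import numpy as np

# 1-D cytoplasm [0, L]; an unattached kinetochore sits at x = 0 and
# produces a wait-anaphase inhibitor that diffuses and decays.
L, nx = 10.0, 101
dx = L / (nx - 1)
D = 1.0          # diffusion coefficient
k_decay = 0.1    # first-order decay rate
source = 1.0     # production flux at the kinetochore

dt = 0.2 * dx * dx / D       # stable explicit time step
c = np.zeros(nx)
for _ in range(20000):
    lap = np.empty(nx)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    lap[0] = 2 * (c[1] - c[0]) / dx**2        # zero-flux boundaries
    lap[-1] = 2 * (c[-2] - c[-1]) / dx**2
    c += dt * (D * lap - k_decay * c)
    c[0] += dt * source / dx                  # localized production

# The resulting gradient decays over sqrt(D / k_decay) ~ 3.2 length units:
# inhibition is tight near the unattached kinetochore and weak far away.
```

This already exposes the design tension the abstract raises: a freely diffusing signal decays smoothly, so tight global inhibition and rapid release after attachment pull in opposite directions.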
55.
Bayesian Text Categorization
Næss, Arild Brandrud, January 2007 (has links)
<p>Natural language processing is an interdisciplinary field of research which studies the problems and possibilities of automated generation and understanding of natural human languages. Text categorization is a central subfield of natural language processing. Automatically assigning categories to digital texts has a wide range of applications in today's information society, from filtering spam to creating web hierarchies and digital newspaper archives. It is a discipline that lends itself more naturally to machine learning than to knowledge engineering; statistical approaches to text categorization are therefore a promising field of inquiry. We provide a survey of the state of the art in text categorization, presenting the most widespread methods in use, and placing particular emphasis on support vector machines, an optimization algorithm that has emerged as the benchmark method in text categorization over the past ten years. We then turn our attention to Bayesian logistic regression, a fairly new and largely unstudied method in text categorization. We see how this method has certain similarities to the support vector machine method, but also differs from it in crucial respects. Notably, Bayesian logistic regression provides us with a statistical framework. It can be claimed to be more modular, in the sense that it is more open to modification and supplementation by other statistical methods, whereas the support vector machine method remains more of a black box. We present results of thorough testing of the BBR toolkit for Bayesian logistic regression on three separate data sets. We demonstrate which of BBR's parameters are of importance, and we show that its results compare favorably to those of the SVMlight toolkit for support vector machines. We also present two extensions to the BBR toolkit. One attempts to incorporate domain knowledge by way of the prior probability distributions of single words; the other tries to make use of uncategorized documents to boost learning accuracy.</p>
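The MAP flavour of Bayesian logistic regression can be sketched on synthetic "bag-of-words" data. The sketch below uses a Gaussian prior and a Newton (IRLS) solver; BBR also supports a Laplace prior and uses its own optimizer, neither of which is reproduced here, and the data are entirely invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "documents": Poisson word counts; the first three words are
# informative for the class.
n, d = 200, 20
counts = rng.poisson(1.0, size=(n, d)).astype(float)
X = np.hstack([np.ones((n, 1)), counts])       # prepend an intercept
w_true = np.zeros(d + 1)
w_true[0] = -3.0
w_true[1:4] = 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

def map_logistic(X, y, tau2=1.0, n_iter=25):
    """MAP estimate for logistic regression with a Gaussian N(0, tau2)
    prior on every weight, fitted by Newton's method (IRLS)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ w))
        W = p * (1 - p)                                   # IRLS weights
        H = X.T @ (X * W[:, None]) + np.eye(X.shape[1]) / tau2
        grad = X.T @ (y - p) - w / tau2                   # log-posterior gradient
        w = w + np.linalg.solve(H, grad)                  # Newton step
    return w

w_hat = map_logistic(X, y)
p_hat = 1 / (1 + np.exp(-X @ w_hat))
train_acc = np.mean((p_hat > 0.5) == (y > 0.5))
```

The Gaussian prior here plays the same regularising role as the margin penalty in a support vector machine, which is one of the similarities the abstract points to.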
56.
Application of the wavelet transform for analysis of ultrasound images
Kleiven, Eivind, January 2008 (has links)
<p>In this master thesis we analyse medical ultrasound images using the wavelet transform. Mathematical theory is introduced for both one-dimensional and two-dimensional functions. Three edge detectors based on this theory are given. Two of the three edge detectors are suggested by the author, and one is an implementation of a known edge detector, the Canny edge detector. Our implementation differs slightly from the original Canny edge detector since we use the wavelet transform. All three edge detectors are applied to several images and the results are discussed. The multiscale behavior of the wavelet transform makes it useful for edge detection. For small scales it is sensitive to noise, but localises edges well. For large scales it is less sensitive to noise, but localises edges more poorly. One problem when designing an edge detector is to find the scale that gives the best trade-off between localisation and noise sensitivity. We suggest an algorithm that automatically selects this scale using information from the wavelet transform across larger scales. The result is an algorithm that works satisfactorily for a set of images that differ in amount of noise and in contrast between objects in the image. An edge detector for one-dimensional signals is also given. This edge detector works very well for locating singularities and characterising Lipschitz regularity in one-dimensional signals. However, as an edge detector for images it does not function satisfactorily. Further investigation should be done on how to use the multiscale information carried by the wavelet transform. The author is convinced that better edge detectors, less sensitive to noise and with good localisation properties, can be derived using the wavelet transform across scales.</p>
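The Canny-style wavelet edge detector described here can be illustrated in one dimension: convolve with a derivative-of-Gaussian wavelet at several dyadic scales and take the modulus maxima. The signal, noise level and scales below are arbitrary; the point is the localisation-versus-noise trade-off the abstract discusses.

```python
import numpy as np

rng = np.random.default_rng(2)

def dog_wavelet_transform(signal, scale):
    """Convolve with a derivative-of-Gaussian wavelet at the given scale."""
    t = np.arange(-4 * scale, 4 * scale + 1, dtype=float)
    g = np.exp(-t**2 / (2 * scale**2))
    psi = -t * g / (scale**2 * g.sum())
    return np.convolve(signal, psi, mode="same")

def detect_edge(signal, scale):
    """Strongest modulus maximum, ignoring zero-padding border artefacts."""
    a = np.abs(dog_wavelet_transform(signal, scale))
    m = 4 * scale + 1
    a[:m] = 0
    a[-m:] = 0
    return int(np.argmax(a))

n = 512
clean = np.zeros(n)
clean[n // 2:] = 1.0                       # one step edge at index 256
noisy = clean + 0.2 * rng.normal(size=n)

clean_edge = detect_edge(clean, 2)         # small scale: sharp localisation
edges = {s: detect_edge(noisy, s) for s in (2, 4, 8, 16)}
```

Coarser scales suppress the noise but blur the maximum, which is exactly the trade-off the automatic scale selection in the thesis is meant to balance.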
57.
Multiscale Modelling of Elastic Parameters
Børset, Kari, January 2008 (has links)
<p>Petrophysical properties in general, and elasticity in particular, have heterogeneous variations over many length scales. In a reservoir model, on which one can for example simulate fluid flow, seismic responses and resistivity, it is necessary that the petrophysical parameters represent all these variations, even though the model is at a scale too coarse to capture all these properties in detail. Upscaling is a technique for bringing information from one scale to a coarser one in a consistent manner. Thus an upscaled model can be seen as homogeneous, with a set of effective properties for its scale. For elastic properties, upscaling has traditionally been done by different volume-weighted averaging methods such as Voigt, Reuss or Backus averages, which utilize limited or no information about the geology of the rock. The objective here is to do upscaling based on a technology where geological information is taken into account. This thesis considers different aspects governing elasticity upscaling in general and general geometry upscaling in particular. After the theory part, it considers verification of the general geometry method and its implementation, projection of an elasticity tensor onto a certain symmetry, and visualization of elastic moduli. Next, the importance of including geological information is studied, and upscaling is done on examples of realistic reservoir models. Finally, elasticity upscaling utilized in a bottom-up approach to model 4D seismic is considered.</p>
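The traditional averages the abstract contrasts with geology-aware upscaling are one-liners. A sketch with invented moduli (quartz-like and clay-like constituents), assuming isotropic layers and normal incidence for the Backus case:

```python
import numpy as np

# Volume-weighted bounds on the effective elastic modulus of a
# two-mineral mix. Moduli in GPa; values are illustrative only.
f = np.array([0.7, 0.3])        # volume fractions
M = np.array([96.0, 21.0])      # P-wave moduli of the constituents

voigt = np.sum(f * M)                   # arithmetic mean: stiff upper bound
reuss = 1.0 / np.sum(f / M)             # harmonic mean: soft lower bound
hill = 0.5 * (voigt + reuss)            # Voigt-Reuss-Hill average

# Backus average: for fine horizontal layers the effective vertical
# P-wave modulus is the thickness-weighted harmonic mean.
thickness = np.array([2.0, 1.0, 2.0])   # layer thicknesses
M_layers = np.array([96.0, 21.0, 96.0])
backus = np.sum(thickness) / np.sum(thickness / M_layers)
```

Note that all four numbers use only volume fractions and thicknesses, no geometry beyond layering, which is precisely the information gap the thesis's general geometry upscaling targets.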
58.
Modelling of Electromagnetic Fields in Conductive Media and Development of a Full Vectorial Contrast Source Inversion Method
Wiik, Torgeir, January 2008 (has links)
<p>Seismic surveys are well established in hydrocarbon prospecting, and technology for processing seismic data has been developed over decades. The first electromagnetic survey for hydrocarbon prospecting, however, was performed in 2000, and as a consequence of this short time span the technology is not as well developed as in seismics; for instance, the need for efficient and robust forward modelling software and inversion schemes for collected data is urgent. In this thesis, forward modelling using integral equations is investigated for the forward 3D electromagnetic scattering problem in hydrocarbon prospecting, and the Contrast Source Inversion (CSI) method for the inverse problem. The mathematical model is developed in an arbitrary isotropic, conductive medium, with contrast in electric permittivity and electric conductivity between the scattering object and the background, while in the numerical examples the background model is restricted to a horizontally layered medium with variations in the $z$-direction only and contrast in electric conductivity. The difference in electric conductivity is considered the backbone of electromagnetic hydrocarbon prospecting. The main result concerning forward modelling in this thesis is the establishment of a, to my knowledge, previously unpublished method for solving the electric Lippmann-Schwinger equation in a conductive medium by fixed point iteration. In the inversion part of this thesis the previously scalar CSI method is extended to a full vector-valued method. A new CSI method for inversion with respect to all the electromagnetic parameters (the electric permittivity, electric conductivity and magnetic permeability) is also presented, which I have yet to find treated elsewhere. Only the former method is tested numerically, using synthetic data, due to the computational complexity of the latter.
The numerical results from the forward modelling show the numerical validity of integral equation modelling and the fixed point iteration, whereas the results from the inversion show some promise. With several source and receiver lines present, the lateral position of the scattering object is reconstructed well, whereas the vertical position causes problems. When a priori information about the position of the scattering object is introduced to further regularise the problem, its approximate position is successfully inverted, which illustrates the essential role additional regularisation plays in this inverse scattering problem. Thus the CSI method could be useful in petroleum geophysics, and should be developed further for the purpose of locating hydrocarbons in the subsurface. Several possibilities for further work are noted. This work was performed for StatoilHydro ASA.</p>
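The fixed-point idea for the Lippmann-Schwinger equation can be illustrated on a discretised toy version of E = E_inc + G(χE). The kernel, contrast and all numbers below are synthetic (this is not the thesis's conductive-medium Green's function); the sketch only shows that successive substitution converges when the contrast-times-kernel operator is a contraction, and checks the result against a direct solve.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

E_inc = np.exp(1j * 2 * np.pi * 3 * x)            # incident plane wave
chi = np.where((x > 0.4) & (x < 0.6), 0.8, 0.0)   # contrast (the scatterer)

# Synthetic damped-oscillatory kernel standing in for a Green's function.
dist = np.abs(x[:, None] - x[None, :])
k = 10.0
G = 0.1 * np.exp(1j * k * dist) * np.exp(-3.0 * dist) * dx

A = G * chi[None, :]          # the operator E -> G*(chi * E)

# Fixed-point (successive substitution / Born series): E_{m+1} = E_inc + A E_m
E = E_inc.copy()
for _ in range(200):
    E = E_inc + A @ E

E_direct = np.linalg.solve(np.eye(n) - A, E_inc)  # reference direct solve
rel_err = np.linalg.norm(E - E_direct) / np.linalg.norm(E_direct)
```

When the contrast or kernel is strong enough that the operator norm exceeds one, plain iteration diverges, which is why establishing convergence of the fixed-point scheme in a conductive medium is a result in itself.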
59.
A heavy-tailed statistical model applied in anti-collision calculations for petroleum wells
Gjerde, Tony, January 2008 (has links)
<p>Anti-collision calculations are done during the planning of a new petroleum well. These calculations are required in order to control the risk of a well collision, an event that must be avoided at any cost. The risk of a well collision is closely related to the position uncertainty both of the well that is planned and of the existing wells in the given region. Earlier literature has indicated that the distribution of the position errors is more heavy-tailed than a normal distribution, which raises the question of whether the current methods are accurate enough. The currently used industry standard calculates the standard deviation of the centre-to-centre distance by an approximation, and assumes that the centre-to-centre distance is normally distributed. In this thesis we use a heavy-tailed Normal Inverse Gaussian (NIG) distribution for the declination error source in MWD magnetic directional surveying, which leads to a position uncertainty that is heavy-tailed relative to the multivariate normal distribution. The parameters of the NIG distribution are estimated from processed magnetic field data from the Tromsø geomagnetic observation station. The NIG distribution requires the use of Monte Carlo simulations in order to apply the currently used industry approach. Other error sources are also included in the error model to give a more realistic position uncertainty. Three different anti-collision cases demonstrate the differences between the NIG error model and the normal error model, and we compare the simulation-based results against the currently used methodology. The results depend strongly on the well geometries. They differ significantly, and with respect to whether a well plan should be realized or not, the NIG error model is the most conservative in most cases. However, there are cases where a normally distributed declination error gives more conservative decisions than the NIG distribution. As an alternative to changing the distribution of the declination error, we propose two corrective actions to improve the existing anti-collision methodology. One is to exchange one of the approximations in the current methodology with simulations or analytical computations. The other is to correct for the bias in the expected position caused by the NIG error model.</p>
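Sampling the NIG distribution for a Monte Carlo study is straightforward via its normal variance-mean mixture representation. The parameters below are illustrative, not the values fitted to the Tromsø declination data; the sketch compares tail mass against a normal distribution of the same variance.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(3)

def sample_nig(alpha, beta, mu, delta, size, rng):
    """Normal Inverse Gaussian draws via the variance-mean mixture
    X = mu + beta*Z + sqrt(Z)*N(0,1), with Z inverse-Gaussian
    (mean delta/gamma, shape delta**2)."""
    gamma = np.sqrt(alpha**2 - beta**2)
    z = rng.wald(delta / gamma, delta**2, size=size)   # inverse-Gaussian mixing
    return mu + beta * z + np.sqrt(z) * rng.normal(size=size)

# Symmetric heavy-tailed example with illustrative parameters.
x = sample_nig(alpha=1.0, beta=0.0, mu=0.0, delta=1.0, size=200_000, rng=rng)

sd = x.std()
p_nig = np.mean(np.abs(x) > 4 * sd)      # empirical 4-sigma exceedance
p_norm = 1 - erf(4 / np.sqrt(2))         # same probability under N(0, sd^2)
```

Under the normal model a four-sigma separation looks safe (roughly 6e-5 exceedance probability); the heavy-tailed model puts many times more mass out there, which is why the collision-probability calculations change.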
60.
Automatic Parametrisation and Block Pseudo Likelihood Estimation for Binary Markov Random Fields
Toftaker, Håkon, January 2008 (has links)
<p>Discrete Markov random fields play an important role in spatial statistics and are applied in many different areas. Models which consider only pairwise interaction between sites, such as the Ising model, often perform well as a prior in a Bayesian setting but are generally unable to provide a realistic representation of a typical scene. Models defined by considering more than two points have been shown to describe many different types of textures well. The specification of such models is often rather tedious, both in defining the parametric model and in estimating the parameters. In this thesis we present a procedure which automatically defines a parametric model from a training image. On the basis of the frequencies of the different types of local configurations, we define the potential function of all the different clique configurations from a relatively small number of parameters. We then use a forward-backward algorithm to compute a maximum block pseudo likelihood estimator for the parametric models resulting from the automatic procedure. This set of methods is used to define Markov random field models from three different training images. The same algorithm used to calculate the block pseudo likelihood is also used to implement a block Gibbs sampler, which is used to explore the properties of the models through simulation. The procedure is tested for a set of different input values. The analysis shows that the procedure is able to produce a reasonable representation of one of the training images but performs poorly on the others. The main problem seems to be the ratio between black and white, and this seems to be caused mainly by the estimator. It is therefore difficult to draw a conclusion about the quality of the parametric model. We also show that by modifying the estimated potential function slightly we can obtain a model which describes the second training image quite well.</p>
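A reduced sketch of pseudo-likelihood estimation for the pairwise case the abstract mentions (the Ising model): simulate a field with single-site Gibbs updates, then recover the interaction parameter by maximising the point-wise pseudo-likelihood over a grid. The block versions used in the thesis (forward-backward block pseudo-likelihood, block Gibbs) are not reproduced; all sizes and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def neighbour_sum(s):
    """Sum of the four nearest neighbours at every site (free boundaries)."""
    out = np.zeros(s.shape)
    out[1:, :] += s[:-1, :]
    out[:-1, :] += s[1:, :]
    out[:, 1:] += s[:, :-1]
    out[:, :-1] += s[:, 1:]
    return out

def gibbs_ising(beta, shape=(32, 32), sweeps=200):
    """Single-site Gibbs sampler for Ising spins in {-1, +1}."""
    s = rng.choice(np.array([-1, 1]), size=shape)
    ni, nj = shape
    for _ in range(sweeps):
        for i in range(ni):
            for j in range(nj):
                nb = ((s[i - 1, j] if i > 0 else 0)
                      + (s[i + 1, j] if i < ni - 1 else 0)
                      + (s[i, j - 1] if j > 0 else 0)
                      + (s[i, j + 1] if j < nj - 1 else 0))
                p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))
                s[i, j] = 1 if rng.random() < p_up else -1
    return s

def log_pseudo_likelihood(s, beta):
    """Sum over sites of log p(s_i | neighbours) for the Ising model."""
    nb = neighbour_sum(s)
    return np.sum(beta * s * nb - np.log(2 * np.cosh(beta * nb)))

s = gibbs_ising(beta=0.3)
betas = np.linspace(0.0, 0.6, 61)
beta_hat = betas[int(np.argmax([log_pseudo_likelihood(s, b) for b in betas]))]
```

The pseudo-likelihood replaces the intractable normalising constant with site-wise conditionals; the block variant in the thesis conditions on whole blocks instead of single sites, which reduces the approximation error at higher computational cost.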