Paper I considers piecewise affine inverse problems, a large class of nonlinear inverse problems that includes problems obeying certain variational structures. In inverse problems it is frequently the case that some features are well determined by the observations while others are poorly resolved. In the Bayesian approach this implies that the likelihood forces the posterior distribution to concentrate near hypersurfaces in the parameter space, which in nonlinear problems causes most generic sampling algorithms to be slow. The structure enforced in piecewise affine inverse problems allows the posterior distribution to be decomposed as a mixture of truncated Gaussian distributions (a schematic version of this decomposition is sketched after the Paper III summary below). Under given regularity conditions this mixture distribution is nonsingular even if the observations are exact. A sampling algorithm that exploits the decomposition is proposed; the decomposition is, however, not tied to this particular algorithm and can be used in a variety of sampling schemes. Two small example problems illustrate the theory as it is developed.

Paper II treats a problem in reflection seismics within the framework of piecewise affine inverse problems. Assuming a known, constant velocity in a layer, the problem is to determine the position of a reflector in the subsurface from zero-offset traveltimes, a standard simplification in reflection seismics (the forward model is written out below). A synthetic example shows that the uncertainty is well represented when there is a small number of observations, whereas the subsurface is satisfactorily reconstructed when a large number of observations is considered. The example also demonstrates that the current approach improves on the standard approach.

In Paper III cross-well tomography is discussed in a Bayesian setting. In cross-well tomography the slowness field, the inverse of the velocity, is reconstructed from the traveltimes of a signal generated in one well and received in another well; the recorded traveltime is the shortest traveltime that is physically possible. The inverse problem is approximated by a piecewise affine inverse problem of the form considered in Paper I, and the calculations are carried through by exploiting Fermat's principle of least time. The methodology is tested on a synthetic example. In the Bayesian approach to this problem, several slowness fields are sampled from the posterior distribution, all of which honor the traveltime observations up to the specified error structure. These slowness fields are averaged to produce the Bayesian estimator. The resulting estimator does not honor the traveltime observations as the individual samples do, but generally has larger traveltimes. This is due to the nonlinearity in the problem, an effect carefully explained in the paper. The synthetic example further shows that a linearized approach is reasonable in the sense that it captures the main features of the estimate; the nonlinear estimate does, however, reduce the loss by about 10% in the synthetic example. The linearized approach does not give a realistic representation of the uncertainty: in the synthetic example it underestimates the integrated variance by 30%.
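To make the decomposition in Paper I concrete, here is a schematic rendering. The notation is illustrative rather than the thesis's own: assume the parameter space is partitioned into regions $R_k$ on which the forward map $G$ is affine, $G(x) = A_k x + b_k$, and that the prior and the observation errors are Gaussian. The posterior restricted to each region is then a truncated Gaussian, and the full posterior is the mixture

```latex
% Schematic only; the weights, means and covariances depend on the
% affine pieces (A_k, b_k), the prior, and the error covariance.
\pi(x \mid y) \;\propto\; \sum_{k} w_k \,
  \mathcal{N}\!\left(x;\, \mu_k,\, \Sigma_k\right)\,
  \mathbf{1}\{x \in R_k\}
```

In this form a sampler can first draw a region index with probability proportional to the mixture weight and then draw from the corresponding truncated Gaussian, sidestepping the slow exploration that generic samplers suffer near the hypersurfaces where the posterior concentrates.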
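The zero-offset setting of Paper II also admits a compact forward model. The following is an assumed first-arrival convention, not a formula quoted from the paper: with a coincident source and receiver at surface position $x$, known constant velocity $v$, and reflector $\Gamma$, the two-way traveltime is

```latex
% Assumed convention: first arrival from the nearest reflector point.
t(x) \;=\; \frac{2}{v}\,\min_{p \in \Gamma}\,\bigl\|(x, 0) - p\bigr\|
```

Roughly speaking, the minimum selects which part of the reflector is seen from $x$; conditional on that selection the relation is affine in a suitable parametrization of the reflector, which is the kind of structure Paper I exploits.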
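For Paper III, Fermat's principle says the recorded first-arrival traveltime is the minimum over ray paths of the path integral of the slowness. A minimal sketch of this forward model on a 2-D grid follows; the 8-connected graph, the segment-cost rule and all numbers are illustrative assumptions, not the paper's discretization.

```python
import heapq
import math

def fermat_traveltimes(slowness, src):
    """First-arrival traveltimes from src to every cell of a 2-D
    slowness grid, via Dijkstra on an 8-connected graph: a shortest
    path approximates Fermat's minimum of the path integral of
    slowness.  Illustrative sketch only."""
    nr, nc = len(slowness), len(slowness[0])
    steps = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]
    t = {src: 0.0}
    heap = [(0.0, src)]
    done = set()
    while heap:
        t0, (r, c) = heapq.heappop(heap)
        if (r, c) in done:
            continue
        done.add((r, c))
        for dr, dc in steps:
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < nr and 0 <= c2 < nc:
                # segment cost: mean slowness of the two cells
                # times the Euclidean step length (grid units)
                dl = math.hypot(dr, dc)
                t1 = t0 + 0.5 * (slowness[r][c] + slowness[r2][c2]) * dl
                if t1 < t.get((r2, c2), float("inf")):
                    t[(r2, c2)] = t1
                    heapq.heappush(heap, (t1, (r2, c2)))
    return t

# cross-well geometry: source in the left well (column 0),
# receivers down the right well (last column)
s = [[1.0] * 20 for _ in range(20)]   # uniform background slowness
s[10][8] = 5.0                        # a slow anomaly between the wells
tt = fermat_traveltimes(s, src=(0, 0))
for rec in range(0, 20, 5):
    print("receiver depth", rec, "traveltime", round(tt[(rec, 19)], 3))
```

The minimum over paths also makes the traveltime a concave functional of the slowness field, so by Jensen's inequality the traveltime of an averaged field is at least the average of the traveltimes. This is consistent with the remark above that the posterior-mean estimator generally has larger traveltimes than the individual samples, although the paper's own explanation should be consulted for the details.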
In Paper IV the objective is inversion of seismic pressure amplitudes recorded in a marine seismic survey. After several steps of preprocessing, the seismic observations can be modeled by a linear relation to the seismic reflectivity, which in turn may be approximated by a linear relation to the material parameters on a logarithmic scale. The material parameters considered are pressure-wave velocity, shear-wave velocity and rock density. The seismic data that correspond to reflections below one location at the surface are given as angle gathers, and in Paper IV each angle gather is inverted independently. The main concern in Paper IV is that the seismic reflectivity has heavier tails than a standard Gaussian model predicts. A prior distribution based on the superposition of a Cauchy process and Gaussian processes is proposed (a sketch of this superposition follows the Paper V summary below). As a test case, material parameters observed in a well log at the Sleipner Øst Field are used to generate synthetic seismic observations, which serve as a basis for comparison between the proposed Cauchy model and a purely Gaussian model. The well log is used to estimate the parameters of the prior distribution for both models. In a region with large variability, the estimator for the pressure-wave velocity resulting from the Cauchy model improves the risk by as much as 14%. The Cauchy model also causes the uncertainty bounds to adapt: regions with low variability get shorter credibility intervals and regions with high variability get longer credibility intervals than under a pure Gaussian model. The model is also tested on real seismic observations. The results are satisfactory, although the uncertainty is large due to large observation errors in the seismic data.

Paper V has the same objective as Paper IV, that is, to estimate pressure-wave velocity, shear-wave velocity and rock density based on preprocessed data from a marine seismic survey. In Paper V, however, the Gaussian assumption is taken to be justified, and the focus is on incorporating lateral dependencies in the estimates. When lateral dependencies are included, all parameters are coupled and must be solved for simultaneously, which leads to a high-dimensional problem. Paper V exploits the fact that a Fourier transform of the problem yields a block-diagonal form, so that a small problem may be solved independently for each frequency component. Both the posterior mean and the posterior covariance can be computed and stored efficiently due to the special structure of the problem. This opens the possibility of including additional information, such as well data, to obtain a refined solution around the well. The methodology is tested on a seismic cube from the Sleipner Øst Field, where 12 million parameters are estimated. The total computing time after preprocessing is 6 minutes, and the posterior covariance can be computed in an additional 3 minutes on a single 400 MHz MIPS R12000 CPU. Hence the algorithm is extremely fast.
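Returning to the prior in Paper IV: a toy sketch of a Cauchy-plus-Gaussian superposition is shown below. The amplitudes, the moving-average smoothing and the cumulative-sum construction are illustrative assumptions standing in for the actual processes and covariances defined in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400   # samples along a trace / well log

# Gaussian component: smooth, light-tailed background variation
# (a moving average stands in for the actual prior covariance)
g = np.convolve(rng.normal(0.0, 0.05, n), np.ones(9) / 9, mode="same")

# Cauchy component: heavy-tailed increments produce the occasional
# large reflectivity spikes that a pure Gaussian model misses
c = 0.01 * rng.standard_cauchy(n)

# superpose the two and integrate to get a log-material parameter,
# e.g. log pressure-wave velocity
logm = np.cumsum(g + c)
```

Heavy-tailed increments keep large jumps plausible without inflating the variance everywhere, which is in line with the risk improvement and the variability-adaptive credibility intervals reported above.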
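The computational structure in Paper V can be sketched in the same hedged spirit. Assuming stationary lateral covariances, the Fourier transform block-diagonalizes the linear-Gaussian problem, so the posterior mean reduces to one small update per frequency component; all names, shapes and demo numbers below are illustrative, not Paper V's notation.

```python
import numpy as np

def per_frequency_posterior_mean(dhat, Ghat, Cphat, Cehat):
    """Posterior mean in the transform domain, one small solve per
    frequency.  dhat[f]: transformed data (n_d,), Ghat[f]: forward
    block (n_d, n_m), Cphat[f] / Cehat[f]: prior and error
    covariance blocks.  Illustrative shapes only."""
    mhat = np.empty((len(dhat), Ghat.shape[2]), dtype=complex)
    for f in range(len(dhat)):
        G, Cp, Ce = Ghat[f], Cphat[f], Cehat[f]
        # standard linear-Gaussian update, but on a tiny system
        S = G @ Cp @ G.conj().T + Ce
        mhat[f] = Cp @ G.conj().T @ np.linalg.solve(S, dhat[f])
    return mhat

# tiny demo: 8 frequencies, 5 data per gather, 3 material parameters
F, nd, nm = 8, 5, 3
rng = np.random.default_rng(1)
Ghat = rng.normal(size=(F, nd, nm)).astype(complex)
Cphat = np.tile(np.eye(nm), (F, 1, 1)).astype(complex)
Cehat = np.tile(0.1 * np.eye(nd), (F, 1, 1)).astype(complex)
dhat = rng.normal(size=(F, nd)).astype(complex)
mhat = per_frequency_posterior_mean(dhat, Ghat, Cphat, Cehat)
# an inverse Fourier transform of mhat would give the spatial estimate
```

Because each solve involves only a handful of unknowns, millions of coupled parameters decouple into many tiny independent systems, which is what makes the reported minutes-scale runtime credible.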
Identifier | oai:union.ndltd.org:UPSALLA/oai:DiVA.org:ntnu-489
Date | January 2002
Creators | Kolbjørnsen, Odd
Publisher | Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering (Fakultet for informasjonsteknologi, matematikk og elektroteknikk)
Source Sets | DiVA Archive at Uppsala University
Language | English
Type | Doctoral thesis, comprehensive summary, text
Relation | Dr. ingeniøravhandling, 0809-103X ; 2002:55