1

Enhancement of the Signal-to-Noise Ratio in Sonic Logging Waveforms by Seismic Interferometry

Aldawood, Ali
Sonic logs are essential tools for reliably identifying interval velocities which, in turn, are used in many seismic processes. One problem that arises while logging is irregularities due to washout zones along the borehole surface, which scatter the transmitted energy and hence weaken the signal recorded at the receivers. To alleviate this problem, I have extended the theory of super-virtual refraction interferometry to enhance the signal-to-noise ratio (SNR) of sonic waveforms. Tests on synthetic and real data show noticeable SNR enhancements of refracted P-wave arrivals in the sonic waveforms. The theory of super-virtual interferometric stacking consists of two redatuming steps, each followed by a stacking procedure. The first redatuming step is of correlation type: traces are correlated together to obtain virtual traces with the sources datumed to the refractor. The second redatuming step is of convolution type: traces are convolved together to redatum the sources back to their original positions. The stacking procedure following each step enhances the SNR of the refracted P-wave first arrivals. Datuming with correlation and convolution of traces introduces severe artifacts, denoted as correlation artifacts, into the super-virtual data. To overcome this problem, I replace the datuming-with-correlation step with datuming by deconvolution. Although the former datuming method is more robust, the latter reduces the artifacts significantly. Moreover, deconvolution can amplify noise, which is why a regularization term is used to make the datuming with deconvolution more stable. Tests of datuming with deconvolution instead of correlation on synthetic and real data show a significant reduction of these artifacts, especially when compared with the conventional way of applying the super-virtual refraction interferometry method.
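As a rough illustration of the trace operations described in this abstract, the sketch below implements frequency-domain cross-correlation, convolution, and regularized (damped) deconvolution of trace pairs, followed by a simple stack. It assumes uniformly sampled traces stored as NumPy arrays; the function names and the scalar damping term `eps` are illustrative choices, not the author's implementation.

```python
import numpy as np

def xcorr_redatum(trace_a, trace_b):
    """Correlation-type redatuming of a trace pair (frequency domain):
    cross-correlation datums the effective source to the refractor."""
    A, B = np.fft.rfft(trace_a), np.fft.rfft(trace_b)
    return np.fft.irfft(A * np.conj(B), n=len(trace_a))

def conv_redatum(trace_a, trace_b):
    """Convolution-type redatuming: convolving a virtual trace with a
    recorded trace moves the source back toward its original position."""
    A, B = np.fft.rfft(trace_a), np.fft.rfft(trace_b)
    return np.fft.irfft(A * B, n=len(trace_a))

def deconv_redatum(trace_a, trace_b, eps=1e-3):
    """Deconvolution-type redatuming with a damping (regularization) term
    eps in the spectral division so that noise is not amplified."""
    A, B = np.fft.rfft(trace_a), np.fft.rfft(trace_b)
    return np.fft.irfft(A * np.conj(B) / (np.abs(B) ** 2 + eps), n=len(trace_a))

def stack(redatumed_traces):
    """Stack redatumed traces over available source/receiver positions:
    coherent refracted arrivals add constructively while random noise cancels."""
    return np.mean(np.asarray(redatumed_traces), axis=0)
```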
2

Exploiting Data Sparsity in Matrix Algorithms for Adaptive Optics and Seismic Redatuming

Hong, Yuxi, 07 June 2023
This thesis addresses the exponential growth of experimental data and the resulting computational complexity in two major scientific applications, which account for a significant share of the cycles consumed on today's supercomputers. The first application concerns the computation of the adaptive optics system in next-generation ground-based telescopes, which will expand our knowledge of the universe but confronts the astronomy community with daunting real-time computation requirements. The second application deals with emerging frequency-domain redatuming methods, e.g., Marchenko redatuming, which are game-changers in exploration geophysics. They are valuable to oil and gas applications and will soon be to geothermal exploration and carbon capture and storage. However, they are impractical at industrial scale due to prohibitive computational complexity and memory footprint. We tackle these challenges by designing high-performance algebraic and stochastic algorithms that exploit the data sparsity structure of the matrix operator. We show that popular randomized algorithms from machine learning can also solve large covariance matrix problems that capture the correlations of wavefront sensors detecting the atmospheric turbulence for ground-based telescopes. Algebraic compression based on low-rank approximations that retain the most significant portion of the spectrum of the operator provides numerical solutions at the accuracy level required by the application. In addition, selective use of lower precisions can further reduce the data volume by trading off application accuracy for memory footprint. Reducing the memory footprint has the ancillary benefits of lower energy expenditure and shorter execution time, because moving a word is more expensive than computing with it on today's architectures. We exploit the data sparsity of matrices representative of these two scientific applications and propose four algorithms to accelerate the corresponding computational workloads. For soft real-time control of an adaptive optics system, we design a stochastic Levenberg-Marquardt method and a high-performance solver for discrete-time algebraic Riccati equations. We create a tile low-rank matrix-vector multiplication algorithm used in both hard real-time control of ground-based telescopes and seismic redatuming. Finally, we leverage multiple precisions to further improve the performance of seismic redatuming applications. We implement our algorithms on essentially all families of currently relevant HPC architectures and on customized AI accelerators, and demonstrate significant performance improvements and validated numerical solutions.
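As a toy illustration of the tile low-rank (TLR) matrix-vector product idea mentioned in this abstract, the sketch below partitions a dense matrix into tiles, compresses each tile with a truncated SVD, and multiplies tile by tile through the low-rank factors. The tile size, truncation tolerance, and on-the-fly compression are assumptions made for a self-contained example; a production TLR library would compress once offline, keep diagonal tiles dense, and run in mixed precision on accelerators.

```python
import numpy as np

def compress_tile(tile, tol=1e-6):
    """Truncated SVD of one tile: keep singular values above tol relative
    to the largest, storing the tile as a low-rank factorization U @ V.T."""
    U, s, Vt = np.linalg.svd(tile, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k], Vt[:k, :].T  # factors (U_k * s_k, V_k)

def tlr_matvec(A, x, nb=256, tol=1e-6):
    """Tile low-rank matrix-vector product y = A @ x.
    With tile ranks k << nb, each tile contributes O(k * nb) work
    instead of the O(nb^2) cost of a dense tile multiply."""
    n = A.shape[0]
    y = np.zeros(n)
    for i in range(0, n, nb):
        for j in range(0, n, nb):
            U, V = compress_tile(A[i:i + nb, j:j + nb], tol)
            y[i:i + nb] += U @ (V.T @ x[j:j + nb])
    return y
```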
