1

Curvelet-domain least-squares migration with sparseness constraints.

Herrmann, Felix J., Moghaddam, Peyman P. January 2004 (has links)
A non-linear edge-preserving solution to the least-squares migration problem with sparseness constraints is introduced. The applied formalism explores Curvelets as basis functions that, by virtue of their sparseness and locality, not only allow for a reduction of the dimensionality of the imaging problem but also naturally lead to a non-linear solution with significantly improved signal-to-noise ratio. Additional conditions on the image are imposed by solving a constrained optimization problem on the estimated Curvelet coefficients, initialized by thresholding. This optimization is also designed to restore the amplitudes by (approximately) inverting the normal operator, which, like the (de-)migration operators, is almost diagonalized by the Curvelet transform.
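As a rough illustration of the two-stage procedure sketched above (a thresholded initial estimate followed by amplitude restoration), the following Python snippet soft-thresholds transform-domain coefficients of a migrated image. The 2-D FFT stands in for the Curvelet transform and a single scalar correction stands in for the approximate inverse of the normal operator; both are simplifying assumptions, not the operators used in the paper.

    import numpy as np

    def soft_threshold(coeffs, lam):
        """Complex soft-thresholding: shrink magnitudes by lam, keep the phase."""
        mag = np.abs(coeffs)
        return np.where(mag > lam, (1.0 - lam / np.maximum(mag, 1e-12)) * coeffs, 0.0)

    def threshold_estimate(image, lam):
        """Stage 1: sparsity-promoting thresholding in a transform domain
        (2-D FFT used as a stand-in for the Curvelet transform)."""
        return np.real(np.fft.ifft2(soft_threshold(np.fft.fft2(image), lam)))

    rng = np.random.default_rng(0)
    migrated = rng.standard_normal((64, 64))        # placeholder migrated image
    first_estimate = threshold_estimate(migrated, lam=5.0)
    # Stage 2: restore amplitudes; a scalar correction stands in here for an
    # (approximate) inverse of the nearly diagonalized normal operator.
    restored = first_estimate / 0.8
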
2

Robust curvelet-domain primary-multiple separation with sparseness constraints

Herrmann, Felix J., Verschuur, Dirk J. January 2005 (has links)
A non-linear primary-multiple separation method using curvelet frames is presented. The advantage of this method is that curvelets arguably provide an optimal sparse representation for both primaries and multiples. As such, curvelet frames are ideal candidates for separating primaries from multiples given inaccurate predictions for these two data components. The method derives its robustness with respect to noise, errors in the prediction, and missing data from the curvelet frames' ability (i) to represent both signal components with a limited number of multi-scale and directional basis functions; (ii) to separate the components on the basis of differences in location, orientation and scale; and (iii) to minimize correlations between the coefficients of the two components. A brief sketch of the theory is provided as well as a number of examples on synthetic and real data.
3

Curvelet-based non-linear adaptive subtraction with sparseness constraints

Herrmann, Felix J., Moghaddam, Peyman P. January 2004 (has links)
In this paper an overview is given of the application of directional basis functions, known under the name Curvelets/Contourlets, to various aspects of seismic processing and imaging that involve adaptive subtraction. Key concepts in the approach are the use of (i) directional basis functions that localize in both domains (e.g. space and angle); and (ii) non-linear estimation, which corresponds to localized muting of the coefficients, possibly supplemented by constrained optimization. We discuss applications that include multiple removal, ground-roll removal and migration denoising.
4

Curvelet-domain multiple elimination with sparseness constraints.

Herrmann, Felix J., Verschuur, Eric January 2004 (has links)
Predictive multiple suppression methods consist of two main steps: a prediction step, in which multiples are predicted from the seismic data, and a subtraction step, in which the predicted multiples are matched with the true multiples in the data. The last step appears crucial in practice: an incorrect adaptive subtraction method will cause multiples to be sub-optimally subtracted, primaries to be distorted, or both. Therefore, we propose a new domain for the separation of primaries and multiples via the Curvelet transform. This transform maps the data into almost orthogonal localized events with a directional and spatial-temporal component. The multiples are suppressed by thresholding the input data at those Curvelet components where the predicted multiples have large amplitudes. In this way the more traditional filtering of predicted multiples to fit the input data is avoided. An initial field data example shows a considerable improvement in multiple suppression.
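The central operation described here, muting the input data at transform components where the predicted multiples carry large amplitudes, can be sketched as follows. The 2-D FFT again stands in for the Curvelet transform, and the simple magnitude-comparison rule is an illustrative assumption rather than the exact thresholding scheme of the paper.

    import numpy as np

    def suppress_multiples(data, predicted_multiples, scale=1.0):
        """Zero the data's transform coefficients wherever the predicted
        multiples have comparable or larger amplitude (FFT as stand-in)."""
        d_coeff = np.fft.fft2(data)
        m_coeff = np.fft.fft2(predicted_multiples)
        keep = np.abs(d_coeff) > scale * np.abs(m_coeff)   # per-coefficient threshold
        return np.real(np.fft.ifft2(np.where(keep, d_coeff, 0.0)))

    rng = np.random.default_rng(1)
    shot_gather = rng.standard_normal((128, 64))   # placeholder input data
    pred_mult = rng.standard_normal((128, 64))     # placeholder predicted multiples
    primaries_estimate = suppress_multiples(shot_gather, pred_mult)
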
5

Robust curvelet-domain data continuation with sparseness constraints.

Herrmann, Felix J. January 2005 (has links)
A robust data interpolation method using curvelet frames is presented. The advantage of this method is that curvelets arguably provide an optimal sparse representation for solutions of wave equations with smooth coefficients. As such, curvelet frames circumvent, apart from the assumption of caustic-free data, the necessity to make parametric assumptions (e.g. through linear/parabolic Radon or demigration) regarding the shape of events in seismic data. A brief sketch of the theory is provided as well as a number of examples on synthetic and real data.
6

A Study of Components of Pearson's Chi-Square Based on Marginal Distributions of Cross-Classified Tables for Binary Variables

January 2018 (has links)
The Pearson and likelihood ratio statistics are well-known in goodness-of-fit testing and are commonly used for models applied to multinomial count data. When data are from a table formed by the cross-classification of a large number of variables, these goodness-of-fit statistics may have lower power and an inaccurate Type I error rate due to sparseness. Pearson's statistic can be decomposed into orthogonal components associated with the marginal distributions of the observed variables, and an omnibus fit statistic can be obtained as a sum of these components. When the statistic is a sum of components for lower-order marginals, it has good performance for Type I error rate and statistical power even when applied to a sparse table. In this dissertation, goodness-of-fit statistics using orthogonal components based on second-, third- and fourth-order marginals were examined. If lack-of-fit is present in higher-order marginals, then a test that incorporates the higher-order marginals may have higher power than a test that incorporates only first- and/or second-order marginals. To this end, two new statistics based on the orthogonal components of Pearson's chi-square that incorporate third- and fourth-order marginals were developed, and the Type I error, empirical power, and asymptotic power under different sparseness conditions were investigated. Individual orthogonal components were also studied as test statistics for identifying lack-of-fit, and their performance was compared to that of other popular lack-of-fit statistics. When the number of manifest variables becomes larger than 20, most of the statistics based on marginal distributions have limitations in terms of computer resources and CPU time. For this reason, when the number of manifest variables is larger than or equal to 20, the performance of a bootstrap-based method for obtaining p-values for the Pearson-Fisher statistic, fit to a confirmatory factor analysis model for dichotomous variables, and the performance of the Tollenaar and Mooijaart (2003) statistic were investigated.
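As a small, self-contained illustration of the contrast drawn above between a full-table statistic and one built from lower-order marginals, the sketch below computes Pearson's X^2 over the complete 2^p table of binary variables and a sum of pairwise (second-order marginal) Pearson statistics under an independence model. Both the model and the exact pairwise statistic are illustrative assumptions, not the orthogonal-component construction developed in the dissertation.

    import numpy as np
    from itertools import combinations, product

    def pearson_full_table(data):
        """Pearson's X^2 over the full 2^p table against an independence model."""
        n, p = data.shape
        probs = data.mean(axis=0)                    # estimated P(X_j = 1)
        x2 = 0.0
        for cell in product((0, 1), repeat=p):
            observed = np.sum(np.all(data == cell, axis=1))
            expected = n * np.prod(np.where(cell, probs, 1.0 - probs))
            x2 += (observed - expected) ** 2 / expected
        return x2

    def pearson_pairwise(data):
        """Sum of second-order-marginal (pairwise 2x2 table) Pearson statistics."""
        n, p = data.shape
        probs = data.mean(axis=0)
        x2 = 0.0
        for i, j in combinations(range(p), 2):
            for a, b in product((0, 1), repeat=2):
                observed = np.sum((data[:, i] == a) & (data[:, j] == b))
                expected = n * (probs[i] if a else 1 - probs[i]) * (probs[j] if b else 1 - probs[j])
                x2 += (observed - expected) ** 2 / expected
        return x2

    rng = np.random.default_rng(0)
    sample = (rng.random((500, 5)) < 0.4).astype(int)   # hypothetical binary data
    print(pearson_full_table(sample), pearson_pairwise(sample))
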
7

On the Measurement of Model Fit for Sparse Categorical Data

Kraus, Katrin January 2012 (has links)
This thesis consists of four papers that deal with several aspects of the measurement of model fit for categorical data. In all papers, special attention is paid to situations with sparse data. The first paper concerns the computational burden of calculating Pearson's goodness-of-fit statistic in situations where many response patterns have observed frequencies that equal zero. A simple solution is presented that allows for the computation of the total value of Pearson's goodness-of-fit statistic when the expected frequencies of response patterns with observed frequencies of zero are unknown. In the second paper, a new fit statistic is presented that is a modification of Pearson's statistic but is not adversely affected by response patterns with very small expected frequencies. It is shown that the new statistic is asymptotically equivalent to Pearson's goodness-of-fit statistic and hence asymptotically chi-square distributed. In the third paper, comprehensive simulation studies are conducted that compare seven asymptotically equivalent fit statistics, including the new statistic. The situations considered concern both multinomial sampling and factor analysis. Tests for goodness-of-fit are conducted by means of the asymptotic and the bootstrap approach, both under the null hypothesis and when there is a certain degree of misfit in the data. Results indicate that recommendations on the use of a fit statistic can depend on the investigated situation and on the purpose of the model test. Power varies substantially between the fit statistics and with the cause of the misfit of the model. Findings further indicate that the new statistic proposed in this thesis shows rather stable results and, compared to the other fit statistics, no disadvantageous characteristics are found. Finally, in the fourth paper, the potential necessity of determining goodness-of-fit by two-sided model testing is addressed. A simulation study is conducted that investigates differences between the one-sided and the two-sided approach to model testing. Situations are identified for which two-sided model testing has advantages over the one-sided approach.
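For the computational issue raised in the first paper (evaluating the total Pearson statistic when the expected frequencies of unobserved response patterns are not individually available), one standard identity that makes such a computation possible, quoted here only as an illustrative reminder and not as the paper's own derivation, is

    X^2 = \sum_i \frac{(O_i - E_i)^2}{E_i} = \sum_i \frac{O_i^2}{E_i} - N ,

valid whenever the expected frequencies sum to the sample size N. Since patterns with O_i = 0 contribute nothing to the remaining sum, only the expected frequencies of the observed patterns are needed.
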
8

Curvelet-domain preconditioned "wave-equation" depth-migration with sparseness and illumination constraints

Herrmann, Felix J., Moghaddam, Peyman P. January 2004 (has links)
A non-linear edge-preserving solution to the least-squares migration problem with sparseness and illumination constraints is proposed. The applied formalism explores Curvelets as basis functions. By virtue of their sparseness and locality, Curvelets not only reduce the dimensionality of the imaging problem but also naturally lead to a dense preconditioning that almost diagonalizes the normal/Hessian operator. This near-diagonalization allows us to recast the imaging problem into a 'simple' denoising problem. As such, we are in the position to use non-linear estimators based on thresholding. These estimators exploit the sparseness and locality of Curvelets and allow us to compute a first estimate of the reflectivity, which approximates the least-squares solution of the seismic inverse scattering problem. Given this estimate, we impose sparseness and additional amplitude corrections by solving a constrained optimization problem. This optimization problem is initialized and constrained by the thresholded image and is designed to remove remaining imaging artifacts and imperfections in the estimation and reconstruction.
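In schematic form, writing C for the Curvelet analysis operator, K for the demigration (scattering) operator, and D_\Psi for a diagonal matrix of positive weights, the near-diagonalization invoked above can be summarized as

    \Psi = K^T K \approx C^T D_\Psi C ,

so that, in the Curvelet domain, the normal equations \Psi m = K^T d act approximately coefficient-by-coefficient and the imaging problem reduces to a scaling-and-thresholding (denoising) problem on the coefficients. The notation and the exact form of the diagonal are illustrative assumptions, not taken verbatim from the paper.
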
9

Optimization strategies for sparseness- and continuity-enhanced imaging: Theory

Herrmann, Felix J., Moghaddam, Peyman P., Kirlin, Rodney L. January 2005 (has links)
Two complementary solution strategies to the least-squares migration problem with sparseness and continuity constraints are proposed. The applied formalism explores the sparseness of curvelets on the reflectivity and their invariance under the demigration-migration operator. Sparseness is enhanced by (approximately) minimizing a (weighted) l1-norm on the curvelet coefficients. Continuity along imaged reflectors is brought out by minimizing the anisotropic diffusion or total-variation norm, which penalizes variations along and in between reflectors. A brief sketch of the theory is provided as well as a number of synthetic examples. Technical details on the implementation of the optimization strategies are deferred to an accompanying paper on the implementation.
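The composite objective described here, a (weighted) l1-norm on transform coefficients plus an anisotropic total-variation penalty on the image, can be written out as a small evaluation routine. The 2-D FFT stands in for the curvelet transform and the weights are illustrative; the paper's actual operators and solvers are not reproduced.

    import numpy as np

    def weighted_l1(coeffs, weights=1.0):
        """(Weighted) l1-norm of transform coefficients."""
        return np.sum(weights * np.abs(coeffs))

    def anisotropic_tv(image):
        """Anisotropic total-variation norm: sum of absolute finite differences."""
        return np.abs(np.diff(image, axis=0)).sum() + np.abs(np.diff(image, axis=1)).sum()

    def objective(image, lam_l1=1.0, lam_tv=0.1):
        """Sparseness + continuity objective for a candidate reflectivity image
        (2-D FFT as a stand-in for the curvelet transform)."""
        return lam_l1 * weighted_l1(np.fft.fft2(image)) + lam_tv * anisotropic_tv(image)

    rng = np.random.default_rng(2)
    candidate = rng.standard_normal((64, 64))   # placeholder reflectivity estimate
    print(objective(candidate))
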
10

Sparseness-constrained data continuation with frames: Applications to missing traces and aliased signals in 2/3-D

Hennenfent, Gilles, Herrmann, Felix J. January 2005 (has links)
We present a robust iterative sparseness-constrained interpolation algorithm using 2/3-D curvelet frames and Fourier-like transforms that exploits continuity along reflectors in seismic data. By choosing generic transforms, we circumvent the necessity to make parametric assumptions (e.g. through linear/parabolic Radon or demigration) regarding the shape of events in seismic data. Simulation and real data examples for data with moderately sized gaps demonstrate that our algorithm provides interpolated traces that accurately reproduce the wavelet shape as well as the AVO behavior. Our method also shows good results for de-aliasing, as judged by the behavior of the (f-k)-spectrum before and after regularization.
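A minimal sketch of this kind of iterative, sparseness-constrained interpolation is given below, assuming a 2-D FFT as the sparsifying transform, a known trace-sampling mask, and a geometrically decreasing soft threshold; the transform, threshold schedule and stopping rule used in the paper differ.

    import numpy as np

    def soft_threshold(coeffs, lam):
        """Complex soft-thresholding."""
        mag = np.abs(coeffs)
        return np.where(mag > lam, (1.0 - lam / np.maximum(mag, 1e-12)) * coeffs, 0.0)

    def interpolate(observed, mask, n_iter=50, lam0=20.0, decay=0.9):
        """Iterative thresholding: re-insert known traces each pass and fill the
        gaps with the inverse transform of thresholded coefficients."""
        estimate = observed.copy()
        lam = lam0
        for _ in range(n_iter):
            estimate = np.real(np.fft.ifft2(soft_threshold(np.fft.fft2(estimate), lam)))
            estimate[mask] = observed[mask]     # honor the observed traces
            lam *= decay                        # gradually relax the threshold
        return estimate

    rng = np.random.default_rng(3)
    full_data = rng.standard_normal((128, 64))                           # placeholder section
    trace_mask = np.broadcast_to(rng.random(64) > 0.3, full_data.shape)  # ~70% traces kept
    observed = np.where(trace_mask, full_data, 0.0)
    reconstructed = interpolate(observed, trace_mask)
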
