31

Seismic Applications of Interactive Computational Methods

LI, MIN Unknown Date (has links)
Effective interactive computing methods are needed in a number of specific areas of geophysical interpretation, even though the basic algorithms have been established. One approach to raising the quality of interpretation is to promote better interaction between the interpreter and the computer. This thesis is concerned with improving that dialogue in three areas: automatic event picking, data visualization and sparse data imaging. Fully automatic seismic event picking methods work well under relatively good conditions, but they collapse when the signal-to-noise ratio is low and the structure of the subsurface is complex. The interactive seismic event picking system described here blends the interpreter's guidance and judgment into the computer program, bringing the user into the loop to make subjective decisions when the picking problem is complicated. Several interactive approaches for 2-D event picking and 3-D horizon tracking have been developed. Envelope (or amplitude) threshold detection for first-break picking is based on the assumption that the power of the signal is larger than that of the noise. Correlation and instantaneous-phase pickers are designed for, and better suited to, picking other arrivals. The former is based on the cross-correlation function and requires a model trace (or model traces) selected by the interpreter. The instantaneous-phase picker is designed to track spatial variations in the instantaneous phase of the analytic form of the arrival. The picking options implemented in the software package SeisWin were tested on real data drawn from many sources, such as full-waveform sonic borehole logs, seismic reflection surveys and borehole radar profiles, as well as seven of the most recent 3-D seismic surveys conducted over Australian coal mines. The results show that the interactive picking system in SeisWin is efficient and tolerant. The 3-D horizon tracking method developed here is especially attractive to industrial users. The visualization of data is also part of the study, as picking accuracy, and indeed the whole of seismic interpretation, depends largely on the quality of the final display. The display is often the only window through which an interpreter can see the earth's substructures. Display is a non-linear operation. Adjustments made to compensate for display deficiencies, such as automatic gain control (AGC), have an important and yet ill-documented effect on the performance of pattern recognition operators, both human and computational. AGC is usually implemented in one dimension. Some of the tools in widespread use for two-dimensional image processing that are of great value for local gain control of conventional seismic sections, such as edge detectors, histogram equalisers, high-pass filters and shaded relief, are discussed. Examples are presented to show the relative effectiveness of various display options. Conventional migration requires dense arrays with uniform coverage and uniform illumination of targets. There are, however, many instances in which these ideals cannot be approached. Event migration and common tangent plane stacking procedures were developed especially for sparse data sets as part of the research effort underlying this thesis. Picked-event migration migrates the line between any two points on different traces on the time section to the base map. The interplay between the space and time domains gives the interpreter an immediate view of the mapping. Tangent plane migration maps the reflector by accumulating the energy from any two possible reflecting points along the common tangent lines on the space plane. These methods have been applied to both seismic and borehole-radar data, and satisfactory results have been achieved.
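As a rough illustration of the envelope-threshold first-break picking described in this abstract (a minimal sketch in Python, not the SeisWin implementation; the threshold factor, noise-window length and synthetic trace are assumed values), the pick is taken as the first sample whose analytic-signal envelope exceeds a multiple of the estimated noise level:

```python
import numpy as np
from scipy.signal import hilbert

def pick_first_break(trace, dt, noise_window=100, k=4.0):
    """Return the first-break time, assuming the early samples contain only noise."""
    envelope = np.abs(hilbert(trace))                 # instantaneous amplitude
    threshold = k * envelope[:noise_window].mean()    # noise-based threshold
    above = np.nonzero(envelope > threshold)[0]
    return above[0] * dt if above.size else None      # pick time in seconds, or no pick

# Synthetic example: weak noise followed by a stronger arrival.
rng = np.random.default_rng(0)
trace = np.concatenate([0.05 * rng.standard_normal(200),
                        np.sin(2 * np.pi * 30 * np.arange(0, 0.1, 0.001))])
print(pick_first_break(trace, dt=0.001))
```

In an interactive setting of the kind the thesis describes, the interpreter would tune the threshold factor or override individual picks where the automatic result fails.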
32

A novel augmented graph approach for estimation in localisation and mapping

Thompson, Paul Robert January 2009 (has links)
Doctor of Philosophy (PhD) / This thesis proposes the use of the augmented system form - a generalisation of the information form that represents both observations and states. In conjunction with this, the thesis proposes a novel graph representation for the estimation problem, together with a graph-based linear direct solving algorithm. The augmented system form is a mathematical description of the estimation problem that exposes both the states and the observations. It allows a more general range of factorisation orders among the observations and states, which is essential for constraints and is beneficial for sparsity and numerical reasons. The proposed graph structure is a novel sparse data structure providing more symmetric access and faster traversal and modification operations than the compressed-sparse-column (CSC) sparse matrix format. The graph structure was developed as a fundamental underlying structure for the formulation of sparse estimation problems. This graph-theoretic representation replaces conventional sparse matrix representations for the estimation states, observations and their interconnections. This thesis contributes a new implementation of the indefinite LDL factorisation algorithm based entirely in the graph structure. This direct solving algorithm was developed in order to exploit the new approaches above. The factorisation operations consist of accessing adjacencies and modifying the graph edges. The developed solving algorithm demonstrates the significant differences in form and approach of the graph-embedded algorithm compared to a conventional matrix implementation. The contributions proposed in this thesis improve estimation methods by providing novel mathematical data structures used to represent states, observations and the sparse links between them. These offer improved flexibility and capabilities, which are exploited in the solving algorithm. The contributions constitute a new framework for the development of future online and incremental solving, data association and analysis algorithms for online, large-scale localisation and mapping.
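To give a sense of what an augmented system form looks like for a linear-Gaussian estimation problem (a generic sketch with toy matrices, not the thesis's formulation or its graph-embedded LDL solver), the observations and states can be kept together in one sparse symmetric indefinite system instead of being reduced to information-form normal equations:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Augmented system for weighted least-squares estimation:
#   [ R   A ] [ r ]   [ b ]
#   [ A^T 0 ] [ x ] = [ 0 ]
# instead of the information form (A^T R^-1 A) x = A^T R^-1 b.
A = sp.csc_matrix(np.array([[1., 0, 0],    # toy observation Jacobian
                            [1, -1, 0],    # (relative measurements between states)
                            [0, 1, 0],
                            [0, 1, -1],
                            [0, 0, 1],
                            [1, 0, -1]]))
m, n = A.shape
R = sp.identity(m, format="csc")           # observation covariance (unit here)
b = np.random.default_rng(0).standard_normal(m)

K = sp.bmat([[R, A], [A.T, None]], format="csc")   # augmented system matrix
rhs = np.concatenate([b, np.zeros(n)])
sol = spsolve(K, rhs)                      # a sparse LDL^T factorisation would be used in practice
x = sol[m:]                                # estimated states
```

This keeps observations explicit in the factored system, which is what permits the more general factorisation orders the abstract refers to.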
33

Receptive field structures for recognition

Balas, Benjamin, Sinha, Pawan 01 March 2005 (has links)
Localized operators, like Gabor wavelets and difference-of-Gaussian filters, are considered to be useful tools for image representation. This is due to their ability to form a ‘sparse code’ that can serve as a basis set for high-fidelity reconstruction of natural images. However, for many visual tasks, the more appropriate criterion of representational efficacy is ‘recognition’, rather than ‘reconstruction’. It is unclear whether simple local features provide the stability necessary to subserve robust recognition of complex objects. In this paper, we search the space of two-lobed differential operators for those that constitute a good representational code under recognition/discrimination criteria. We find that a novel operator, which we call the ‘dissociated dipole’, displays useful properties in this regard. We describe simple computational experiments to assess the merits of such dipoles relative to the more traditional local operators. The results suggest that non-local operators constitute a vocabulary that is stable across a range of image transformations.
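The contrast between a local two-lobed operator and a non-local ‘dissociated dipole’ can be sketched as follows (illustrative Gaussian parameters and a random test image; not the paper's exact operators or experiments):

```python
import numpy as np

def gaussian_2d(shape, center, sigma):
    y, x = np.indices(shape)
    cy, cx = center
    return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))

def difference_of_gaussians(shape, center, sigma_c=2.0, sigma_s=4.0):
    """Local operator: excitatory and inhibitory lobes share the same centre."""
    g_c = gaussian_2d(shape, center, sigma_c)
    g_s = gaussian_2d(shape, center, sigma_s)
    return g_c / g_c.sum() - g_s / g_s.sum()

def dissociated_dipole(shape, center_pos, center_neg, sigma=2.0):
    """Non-local operator: the two lobes are spatially separated."""
    g_p = gaussian_2d(shape, center_pos, sigma)
    g_n = gaussian_2d(shape, center_neg, sigma)
    return g_p / g_p.sum() - g_n / g_n.sum()

# A filter response is the inner product of the kernel with the image.
image = np.random.default_rng(0).random((64, 64))
dog = difference_of_gaussians(image.shape, center=(32, 32))
dipole = dissociated_dipole(image.shape, (16, 16), (48, 48))
local_response = float((dog * image).sum())
nonlocal_response = float((dipole * image).sum())
```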
34

Tiebreaking the minimum degree algorithm for ordering sparse symmetric positive definite matrices

Cavers, Ian Alfred January 1987 (has links)
The minimum degree algorithm is known as an effective scheme for identifying a fill-reducing ordering for symmetric, positive definite, sparse linear systems that are to be solved using a Cholesky factorization. Although the original algorithm has been enhanced to improve the efficiency of its implementation, ties between minimum degree elimination candidates are still broken arbitrarily. For many systems, the fill levels of orderings produced by the minimum degree algorithm are very sensitive to the precise manner in which these ties are resolved. This thesis introduces several tiebreaking enhancements of the minimum degree algorithm. Emphasis is placed upon a tiebreaking strategy based upon the deficiency of minimum degree elimination candidates, which can consistently identify low-fill orderings for a wide spectrum of test problems. All tiebreaking strategies are fully integrated into implementations of the minimum degree algorithm based upon a quotient graph model, including indistinguishable sets represented by uneliminated supernodes. The resulting programs are tested on a wide variety of sparse systems in order to investigate the performance of the enhanced algorithm and the quality of the orderings it produces. / Faculty of Science / Department of Computer Science / Graduate
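The deficiency-based tiebreak can be illustrated with a small sketch (a naive elimination-game implementation for illustration only, not the quotient-graph implementation the thesis describes):

```python
def deficiency(adj, v):
    """Number of fill edges created if vertex v is eliminated next."""
    nbrs = list(adj[v])
    fill = 0
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            if nbrs[j] not in adj[nbrs[i]]:
                fill += 1
    return fill

def min_degree_order(adj):
    """Greedy minimum-degree ordering; ties broken by smallest deficiency."""
    adj = {v: set(ns) for v, ns in adj.items()}      # work on a copy
    order = []
    while adj:
        v = min(adj, key=lambda u: (len(adj[u]), deficiency(adj, u)))
        nbrs = adj.pop(v)
        for a in nbrs:                               # eliminate v: clique its neighbours
            adj[a].discard(v)
            adj[a].update(nbrs - {a})
        order.append(v)
    return order

# Example on a small undirected graph given as an adjacency dictionary.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4}, 3: {1, 4}, 4: {2, 3}}
print(min_degree_order(adj))
```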
35

Study on efficient sparse and low-rank optimization and its applications

Lou, Jian 29 August 2018 (has links)
Sparse and low-rank models have become fundamental machine learning tools and have wide applications in areas including computer vision, data mining and bioinformatics. It is of vital importance, yet of great difficulty, to develop efficient optimization algorithms for solving these models, especially under practical design considerations of computational, communication and privacy restrictions for ever-growing larger scale problems. This thesis proposes a set of new algorithms to improve the efficiency of sparse and low-rank model optimization. First, when facing a large number of data samples during training of empirical risk minimization (ERM) with structured sparse regularization, the gradient computation part of the optimization can be computationally expensive and becomes the bottleneck. Therefore, I propose two gradient-efficient optimization algorithms to reduce the total or per-iteration computational cost of the gradient evaluation step; these are new variants of the widely used generalized conditional gradient (GCG) method and the incremental proximal gradient (PG) method, respectively. In detail, I propose a novel algorithm under the GCG framework that requires the same optimal number of gradient evaluations as proximal gradient. I also propose a refined variant for a type of gauge-regularized problem, where approximation techniques are allowed to further accelerate the linear subproblem computation. Moreover, under the incremental proximal gradient framework, I propose to approximate the composite penalty by its proximal average, so that a trade-off is made between precision and efficiency. Theoretical analysis and empirical studies show the efficiency of the proposed methods. Furthermore, large data dimension (e.g. the large frame size of high-resolution image and video data) can lead to high per-iteration computational complexity and thus results in poor scalability of the optimization algorithm from a practical perspective. In particular, in spectral k-support norm regularized robust low-rank matrix and tensor optimization, the traditional proximal-map-based alternating direction method of multipliers (ADMM) requires evaluating a subproblem of super-linear complexity in each iteration. I propose a set of per-iteration computationally efficient alternatives that reduce the cost to linear, and nearly linear, with respect to the input data dimension for the matrix and tensor cases, respectively. The proposed algorithms consider the dual objective of the original problem, which can take advantage of the more computationally efficient linear oracle of the spectral k-support norm. Further, by studying the sub-gradient of the loss of the dual objective, a line-search strategy is adopted in the algorithm to enable it to adapt to Hölder smoothness. The overall convergence rate is also provided. Experiments on various computer vision and image processing applications demonstrate the superior prediction performance and computational efficiency of the proposed algorithms. In addition, since machine learning datasets often contain sensitive individual information, privacy preservation becomes more and more important during sparse optimization. I provide two differentially private optimization algorithms under two common large-scale machine learning computing contexts, i.e., distributed and streaming optimization, respectively.
For the distributed setting, I develop a new algorithm with 1) a guaranteed strict differential privacy requirement, 2) nearly optimal utility and 3) reduced uplink communication complexity, for a nearly unexplored context with features partitioned among different parties under privacy restrictions. For the streaming setting, I propose to improve the utility of the private algorithm by trading the privacy of distant input instances, under the differential privacy restriction. I show that the proposed method can either solve the private approximation function by a projected gradient update for projection-friendly constraints, or by a conditional gradient step for linear-oracle-friendly constraints, both of which improve the regret bound to match the non-private optimal counterpart.
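As one small illustration of the proximal-average idea mentioned in this abstract (a simplified sketch with hypothetical component penalties and step size, not the thesis algorithm), the proximal map of a weighted sum of penalties is approximated by the weighted average of the individual proximal maps inside a proximal gradient loop:

```python
import numpy as np

def prox_l1(x, t):
    """Proximal map of t * ||x||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_sq_l2(x, t):
    """Proximal map of t * 0.5 * ||x||_2^2."""
    return x / (1.0 + t)

def prox_average(x, t, proxes, weights):
    """Approximate prox of sum_k w_k g_k by the weighted average of prox_{g_k}."""
    return sum(w * p(x, t) for p, w in zip(proxes, weights))

def proximal_gradient(grad_f, x0, step, proxes, weights, iters=200):
    x = x0.copy()
    for _ in range(iters):
        x = prox_average(x - step * grad_f(x), step, proxes, weights)
    return x

# Example: smooth loss 0.5 * ||x - a||^2 with a mixed l1 / squared-l2 penalty.
a = np.array([3.0, -0.2, 0.05, -4.0])
x_hat = proximal_gradient(lambda x: x - a, np.zeros(4), step=0.5,
                          proxes=[prox_l1, prox_sq_l2], weights=[0.7, 0.3])
print(x_hat)
```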
36

Parallel Reservoir Simulations with Sparse Grid Techniques and Applications to Wormhole Propagation

Wu, Yuanqing 08 September 2015 (has links)
In this work, two topics in reservoir simulation are discussed. The first topic is two-phase compositional flow simulation in a hydrocarbon reservoir. The major obstacle that impedes the applicability of the simulation code is the long run time of the simulation procedure, so speeding up the simulation code is necessary. Two means are demonstrated to address the problem: parallelism in physical space and the application of sparse grids in parameter space. The parallel code gains satisfactory scalability, and the sparse grids remove the bottleneck of flash calculations. Instead of carrying out the flash calculation in each time step of the simulation, a sparse grid approximation of all possible results of the flash calculation is generated before the simulation. The constructed surrogate model is then evaluated to approximate the flash calculation results during the simulation. The second topic is wormhole propagation simulation in a carbonate reservoir. In this work, different from the traditional simulation technique relying on the Darcy framework, we propose a new framework, the Darcy-Brinkman-Forchheimer (DBF) framework, to simulate wormhole propagation. Furthermore, to process the large number of cells in the simulation grid and shorten the long simulation time of the traditional serial code, standard domain-based parallelism is employed, using the Hypre multigrid library. In addition, a new technique called the "experimenting field approach" for setting coefficients in the model equations is introduced. In the 2D dissolution experiments, different configurations of wormholes and a series of properties simulated by both frameworks are compared. We conclude that the numerical results of the DBF framework are more wormhole-like and more stable than those of the Darcy framework, which demonstrates the advantages of the DBF framework. The scalability of the parallel code is also evaluated, and good scalability is achieved. Finally, a mixed finite element scheme is proposed for the wormhole simulation.
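The surrogate idea for the flash calculation can be sketched as follows (a toy illustration only: `flash_calculation` is a hypothetical placeholder for the equation-of-state flash, and a regular tensor grid stands in for the sparse grid to keep the example short):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def flash_calculation(pressure, z1):
    # Placeholder for the expensive flash computation; returns e.g. a vapour fraction.
    return 1.0 / (1.0 + np.exp(-(pressure - 150.0) / 20.0)) * z1

# Offline: tabulate the flash calculation over the parameter space before the simulation.
pressures = np.linspace(50.0, 300.0, 41)      # bar
z1_grid = np.linspace(0.0, 1.0, 21)           # overall mole fraction of component 1
table = np.array([[flash_calculation(p, z) for z in z1_grid] for p in pressures])
surrogate = RegularGridInterpolator((pressures, z1_grid), table)

# Online: replace each in-simulation flash call by a cheap surrogate evaluation.
print(surrogate([[120.0, 0.35]]))
```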
37

Application of Sparse Representation to Radio Frequency Emitter Geolocation from an Airborne Antenna Array

Compaleo, Jacob January 2022 (has links)
No description available.
38

Contributions to Large Covariance and Inverse Covariance Matrices Estimation

Kang, Xiaoning 25 August 2016 (has links)
Estimation of the covariance matrix and its inverse is of great importance in multivariate statistics, with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and the large number of parameters, especially in high-dimensional cases. In this thesis, I develop several approaches for estimating large covariance and inverse covariance matrices with different applications. In Chapter 2, I consider the estimation of time-varying covariance matrices in the analysis of multivariate financial data. An order-invariant Cholesky-log-GARCH model is developed for estimating the time-varying covariance matrices based on the modified Cholesky decomposition. This decomposition provides a statistically interpretable parametrization of the covariance matrix. The key idea of the proposed model is to consider an ensemble estimation of the covariance matrix based on multiple permutations of the variables. Chapter 3 investigates the sparse estimation of the inverse covariance matrix for high-dimensional data. This problem has attracted wide attention, since zero entries in the inverse covariance matrix imply conditional independence among variables. I propose an order-invariant sparse estimator based on the modified Cholesky decomposition. The proposed estimator is obtained by assembling a set of estimates from multiple permutations of the variables. Hard thresholding is imposed on the ensemble Cholesky factor to encourage sparsity in the estimated inverse covariance matrix. The proposed method is able to capture the correct sparse structure of the inverse covariance matrix. Chapter 4 focuses on the sparse estimation of a large covariance matrix. The traditional estimation approach is known to perform poorly in high dimensions. I propose a positive-definite estimator for the covariance matrix using the modified Cholesky decomposition. Such a decomposition provides the flexibility to obtain a set of covariance matrix estimates. The proposed method considers an ensemble estimator as the "center" of these available estimates with respect to the Frobenius norm. The proposed estimator is not only guaranteed to be positive definite, but is also able to capture the underlying sparse structure of the true matrix. / Ph. D.
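A simplified sketch of a permutation-averaged precision estimate based on the modified Cholesky decomposition with hard thresholding is given below (illustrative only: it assumes more samples than variables, the threshold and number of permutations are arbitrary, and thresholding is applied to the assembled matrix rather than to the ensemble Cholesky factor as in the thesis):

```python
import numpy as np

def mcd_precision(X):
    """Precision estimate from data X (n x p) via the modified Cholesky decomposition:
    regress each variable on its predecessors to get T and D, then Omega = T' D^-1 T."""
    n, p = X.shape
    T = np.eye(p)
    d = np.empty(p)
    d[0] = X[:, 0].var()
    for j in range(1, p):
        coef, *_ = np.linalg.lstsq(X[:, :j], X[:, j], rcond=None)
        T[j, :j] = -coef
        d[j] = np.mean((X[:, j] - X[:, :j] @ coef) ** 2)
    return T.T @ np.diag(1.0 / d) @ T

def ensemble_precision(X, n_perm=30, thresh=0.1, seed=0):
    """Average the estimate over random variable orderings, then hard-threshold."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    est = np.zeros((p, p))
    for _ in range(n_perm):
        perm = rng.permutation(p)
        inv = np.empty(p, dtype=int)
        inv[perm] = np.arange(p)
        est += mcd_precision(X[:, perm])[np.ix_(inv, inv)]   # undo the permutation
    est /= n_perm
    off = ~np.eye(p, dtype=bool)
    est[off & (np.abs(est) < thresh)] = 0.0                  # hard-threshold off-diagonals
    return est

X = np.random.default_rng(1).standard_normal((500, 8))
Omega_hat = ensemble_precision(X)
```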
39

Phenomenological features of turbulent hydrodynamics in sparsely vegetated open channel flow

Maji, S., Pal, D., Hanmaiahgari, P.R., Pu, Jaan H. 29 March 2016 (has links)
The present study investigates the turbulent hydrodynamics in an open channel flow with an emergent and sparse vegetation patch placed in the middle of the channel. The rigid vegetation patch is 81 cm long and 24 cm wide and is formed by a 7 × 10 array of uniform acrylic cylinders, with 9 cm and 4 cm spacing between the centers of consecutive cylinders along the streamwise and lateral directions respectively. From the leading edge of the patch, the time-averaged flow velocities along the streamwise, lateral and vertical directions are not consistent over the first half of the patch length; the velocity profiles develop a uniform behavior beyond that length. In the interior of the patch, the magnitude of the vertical normal stress is small in comparison with the magnitudes of the streamwise and lateral normal stresses. The magnitude of the Reynolds shear stress profiles decreases with increasing downstream distance from the leading edge of the vegetation patch, and the trend continues in the wake region downstream of the trailing edge. An increased magnitude of the turbulent kinetic energy profiles is observed from the leading edge up to a certain length inside the patch; however, its value decreases with further increasing downstream distance. A new mathematical model is proposed to predict the time-averaged streamwise velocity inside the sparse vegetation patch, and the proposed model shows good agreement with the experimental data. / Debasish Pal received financial assistance from SRIC Project of IIT Kharagpur (Project code: FVP)
40

Sparse Matrix Belief Propagation

Bixler, Reid Morris 11 May 2018 (has links)
We propose sparse-matrix belief propagation, which executes loopy belief propagation in Markov random fields by replacing indexing over graph neighborhoods with sparse-matrix operations. This abstraction allows for seamless integration with optimized sparse linear algebra libraries, including those that perform matrix and tensor operations on modern hardware such as graphical processing units (GPUs). The sparse-matrix abstraction allows the implementation of belief propagation in a high-level language (e.g., Python) that is also able to leverage the power of GPU parallelization. We demonstrate sparse-matrix belief propagation by implementing it in a modern deep learning framework (PyTorch), measuring the resulting massive improvement in running time, and facilitating future integration into deep learning models. / Master of Science / We propose sparse-matrix belief propagation, a modified form of loopy belief propagation that encodes the structure of a graph with sparse matrices. Our modifications replace a potentially complicated design of indexing over graph neighborhoods with more optimized and easily interpretable sparse-matrix operations. These operations, available in sparse linear algebra libraries, can also be performed on modern hardware such as graphical processing units (GPUs). By abstracting away the original index-based design with sparse matrices, it is possible to implement belief propagation in a high-level language such as Python that can also use the power of GPU parallelization, rather than relying on abstruse low-level language implementations. We show that sparse-matrix belief propagation, when implemented in a modern deep learning framework (PyTorch), results in massive improvements in running time compared against the original index-based version. Additionally, this implementation facilitates future integration into deep learning models for wider adoption and use by data scientists.
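The core idea can be sketched in a few lines of PyTorch (a toy aggregation step for a pairwise model, not the thesis code): incoming log-messages for all variables are summed with a single sparse matrix product instead of a loop over neighborhoods.

```python
import torch

# Toy pairwise MRF: 3 variables with 2 states; directed edge copies 0->1, 1->0, 1->2, 2->1.
num_vars, num_states = 3, 2
edges_in = torch.tensor([1, 0, 2, 1])            # target variable of each directed edge
num_edges = edges_in.numel()

# Sparse (num_vars x num_edges) incidence matrix: row j has a 1 for each edge into j.
idx = torch.stack([edges_in, torch.arange(num_edges)])
edge_to_var = torch.sparse_coo_tensor(idx, torch.ones(num_edges), (num_vars, num_edges))

unary = torch.randn(num_vars, num_states)        # log unary potentials
log_msgs = torch.zeros(num_edges, num_states)    # current log-messages

# One aggregation step: beliefs = unary + sum of incoming messages, as a single spmm
# that runs on CPU or GPU without any explicit neighborhood indexing.
beliefs = unary + torch.sparse.mm(edge_to_var, log_msgs)
```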
