21

Sparse Coding and Compressed Sensing: Locally Competitive Algorithms and Random Projections

Unknown Date (has links)
For an 8-bit grayscale image patch of size n x n, the number of distinguishable signals is 256^(n²). Natural images (e.g., photographs of a natural scene) comprise a very small subset of these possible signals. Traditional image and video processing relies on band-limited or low-pass signal models. In contrast, we explore the observation that most signals of interest are sparse, i.e., in a particular basis most of the expansion coefficients will be zero. Recent developments in sparse modeling and L1 optimization have enabled extraordinary applications such as the single-pixel camera, as well as computer vision systems that can exceed human performance. Here we present a novel neural network architecture combining a sparse filter model and locally competitive algorithms (LCAs), and demonstrate the network's ability to classify human actions from video. Sparse filtering is an unsupervised feature learning algorithm designed to optimize the sparsity of the feature distribution directly, without the need to model the data distribution. LCAs are defined by a system of differential equations in which the initial conditions define an optimization problem and the dynamics converge to a sparse decomposition of the input vector. We applied this architecture to train a classifier on categories of motion in human action videos. Inputs to the network were small 3D patches taken from frame differences in the videos. Dictionaries were derived for each action class, and activation levels for each dictionary were then assessed during reconstruction of a novel test patch. We discuss how this sparse modeling approach provides a natural framework for multi-sensory and multimodal data processing, including RGB video, RGBD video, hyper-spectral video, and stereo audio/video streams. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2016. / FAU Electronic Theses and Dissertations Collection
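
A minimal sketch of the LCA dynamics described above, in NumPy. The dictionary Phi, the threshold lam, and the integration constants are illustrative assumptions, not the dissertation's trained network; the soft-threshold activation corresponds to an L1 sparsity penalty.

```python
import numpy as np

def lca_sparse_code(x, Phi, lam=0.1, tau=10.0, dt=1.0, n_steps=200):
    """Evolve LCA membrane potentials u toward a sparse code a of x in the
    dictionary Phi, using a soft-threshold activation (L1 sparsity cost)."""
    b = Phi.T @ x                                   # driving input b = Phi^T x
    G = Phi.T @ Phi - np.eye(Phi.shape[1])          # lateral inhibition, no self-term
    u = np.zeros(Phi.shape[1])                      # membrane potentials

    def threshold(u):
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    for _ in range(n_steps):
        a = threshold(u)                            # sparse activations
        u += (dt / tau) * (b - u - G @ a)           # leaky integrator dynamics
    return threshold(u)

# toy usage: a signal that is 5-sparse in a random unit-norm dictionary
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)
x = Phi[:, :5] @ rng.standard_normal(5)
a = lca_sparse_code(x, Phi)
print("nonzero coefficients:", np.count_nonzero(a))
```
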
22

Sparse learning under regularization framework. / 正則化框架下的稀疏學習 / CUHK electronic theses & dissertations collection / Zheng ze hua kuang jia xia de xi shu xue xi

January 2011 (has links)
Regularization is a dominant theme in machine learning and statistics due to its ability to provide an intuitive and principled tool for learning from high-dimensional data. As large-scale learning applications become popular, developing efficient algorithms and parsimonious models becomes both promising and necessary for these applications. Aiming at solving large-scale learning problems, this thesis tackles key research problems ranging from feature selection to learning with unlabeled data and learning data similarity representation. More specifically, we focus on problems in three areas: online learning, semi-supervised learning, and multiple kernel learning. / The first part of this thesis develops a novel online learning framework to solve group lasso and multi-task feature selection. To the best of our knowledge, the proposed online learning framework is the first framework for the corresponding models. The main advantages of the online learning algorithms are that (1) they can work in applications where training data arrive sequentially, so the training procedure can be started at any time; and (2) they can handle data of any size with any number of features. The efficiency of the algorithms is attained because we derive closed-form solutions to update the weights of the corresponding models. At each iteration, the online learning algorithms need only O(d) time and memory for group lasso, and O(d × Q) for multi-task feature selection, where d is the number of dimensions and Q is the number of tasks. Moreover, we provide theoretical analysis of the average regret of the online learning algorithms, which also guarantees their convergence rate. In addition, we extend the online learning framework to solve several related models that yield sparser solutions. / The second part of this thesis addresses a general scenario of semi-supervised learning for the binary classification problem, where the unlabeled data may be a mixture of data relevant and irrelevant to the target binary classification task. Without specifying the relatedness of the unlabeled data, we develop a novel maximum margin classifier, named the tri-class support vector machine (3C-SVM), to seek an inductive rule that can separate these data into three categories: −1, +1, or 0. This is achieved by adopting a novel min loss function and following the maximum entropy principle. For the implementation, we approximate the problem and solve it by a standard concave-convex procedure (CCCP). The approach is very efficient and can handle large-scale datasets. / The third part of this thesis focuses on multiple kernel learning (MKL) and addresses the insufficiency of the L1-MKL and the Lp-MKL models. We propose a generalized MKL (GMKL) model by introducing an elastic-net-type constraint on the kernel weights. More specifically, it is an MKL model with a constraint on a linear combination of the L1-norm and the squared L2-norm of the kernel weights, used to seek the optimal kernel combination weights. Previous MKL problems based on the L1-norm or the L2-norm constraints can therefore be regarded as special cases. Moreover, our GMKL enjoys favorable sparsity of the solution and also facilitates a grouping effect. In addition, the optimization of our GMKL is a convex problem, so any local solution is also globally optimal.
We further derive the level method to efficiently solve the optimization problem. / Yang, Haiqin. / Advisers: Kuo Chin Irwin King; Michael Rung Tsong Lyu. / Source: Dissertation Abstracts International, Volume: 73-04, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 152-173). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
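
As a small illustration of the kind of closed-form update behind group lasso, the block-wise soft-thresholding operator below zeroes out whole feature groups at once. It is a sketch only; the group layout, the weight lam, and the surrounding online update (e.g., averaging of past gradients) are assumptions rather than the thesis algorithm.

```python
import numpy as np

def group_soft_threshold(v, groups, lam):
    """Block-wise soft-thresholding: the closed-form step that produces
    group-level sparsity. Groups whose norm falls below lam are zeroed."""
    w = np.zeros_like(v)
    for g in groups:                              # g: index array for one feature group
        norm_g = np.linalg.norm(v[g])
        if norm_g > lam:
            w[g] = (1.0 - lam / norm_g) * v[g]    # shrink the whole group toward zero
        # else: the whole group stays at zero -> group sparsity
    return w

# toy usage: three groups of five features; weakly active groups vanish entirely
rng = np.random.default_rng(1)
v = rng.standard_normal(15)
groups = [np.arange(0, 5), np.arange(5, 10), np.arange(10, 15)]
print(group_soft_threshold(v, groups, lam=2.0))
```
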
23

Parallel solution of sparse linear systems

Nader, Babak 05 1900 (has links) (PDF)
M.S. / Computer Science / This paper deals with the problem of solving sparse nonsymmetric linear systems on a distributed-memory multiprocessor computer, the Intel iPSC (hypercube). The processors have substantial local memory but no global shared memory. They communicate among themselves and with a host processor through message passing. The primary interest is to design an algorithm which exploits parallelism and which performs elimination and solution of large sparse matrices. Elimination is performed by LU decomposition. The storage scheme is based on a linked-list data structure defined for a given generated matrix. The matrix is distributed by columns in a "wrapped" fashion so that elimination in the natural order will be balanced if the sparsity structure is equally distributed across the columns. Numerical results from experiments running on the hypercube are included along with a performance analysis.
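
The "wrapped" column distribution mentioned above amounts to a cyclic mapping of columns to processors. A tiny sketch follows; the processor count and matrix size are made up, and the actual iPSC message-passing code is not reproduced.

```python
def wrapped_column_owner(j, num_procs):
    """Cyclic ("wrapped") column-to-processor mapping: column j lives on
    processor j mod p, so elimination in the natural column order stays
    balanced when nonzeros are spread evenly across columns."""
    return j % num_procs

# columns held by each of 4 processors for a 10-column matrix
for p in range(4):
    print(p, [j for j in range(10) if wrapped_column_owner(j, 4) == p])
```
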
24

Quantum Chemistry for Large Systems

Rudberg, Elias January 2007 (has links)
This thesis deals with quantum chemistry methods for large systems. In particular, the thesis focuses on the efficient construction of the Coulomb and exchange matrices which are important parts of the Fock matrix in Hartree-Fock calculations. Density matrix purification, which is a method used to construct the density matrix for a given Fock matrix, is also discussed. The methods described are not only applicable in the Hartree-Fock case, but also in Kohn-Sham Density Functional Theory calculations, where the Coulomb and exchange matrices are parts of the Kohn-Sham matrix. Screening techniques for reducing the computational complexity of both Coulomb and exchange computations are discussed, including the fast multipole method, used for efficient computation of the Coulomb matrix. The thesis also discusses how sparsity in the matrices occurring in Hartree-Fock and Kohn-Sham Density Functional Theory calculations can be used to achieve more efficient storage of matrices as well as more efficient operations on them. / QC 20100817
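
To make the density matrix purification step concrete, here is a minimal sketch of the classic McWeeny scheme in an orthonormal basis. It is illustrative only: the thesis works with sparse matrices and cheap estimates of the spectral bounds, whereas this sketch diagonalizes once simply to place the chemical potential in the HOMO-LUMO gap.

```python
import numpy as np

def mcweeny_density(F, n_occ, n_iter=60):
    """Build the density matrix for Fock matrix F (orthonormal basis assumed)
    with n_occ occupied orbitals.  The initial guess maps eigenvalues into
    [0, 1]; the McWeeny polynomial 3P^2 - 2P^3 then drives them to 0 or 1."""
    eigs = np.linalg.eigvalsh(F)
    mu = 0.5 * (eigs[n_occ - 1] + eigs[n_occ])          # midpoint of the HOMO-LUMO gap
    sigma = max(eigs[-1] - mu, mu - eigs[0])            # spectral half-width
    n = F.shape[0]
    P = 0.5 * (np.eye(n) - (F - mu * np.eye(n)) / sigma)  # eigenvalues in [0, 1]
    for _ in range(n_iter):
        P2 = P @ P
        P = 3.0 * P2 - 2.0 * P2 @ P                     # purification step
    return P

# toy check: idempotency and electron count on a random symmetric "Fock" matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
F = 0.5 * (A + A.T)
P = mcweeny_density(F, n_occ=3)
print(np.allclose(P @ P, P, atol=1e-8), np.trace(P))
```
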
25

Application of L1 reconstruction of sparse signals to ambiguity resolution in radar

Shaban, Fahad 13 May 2013 (has links)
The objective of the proposed research is to develop a new algorithm for range and Doppler ambiguity resolution in radar detection data using L1 minimization methods for sparse signals, and to investigate the properties of such techniques. This novel approach to ambiguity resolution makes use of the sparse measurement structure of the post-detection data in multiple pulse repetition frequency radars and the resulting equivalence of the computationally intractable L0 minimization and the surrogate L1 minimization methods. The ambiguity resolution problem is cast as a linear system of equations which is then solved for the unique sparse solution in the absence of errors. It is shown that the new technique successfully resolves range and Doppler ambiguities and that the recovery is exact in the ideal case of no errors in the system. The behavior of the technique is then investigated in the presence of real-world data errors encountered in the radar measurement and detection process. Examples of such errors include blind zone effects, collisions, false alarms and missed detections. It is shown that the mathematical model consisting of a linear system of equations developed for the ideal case can be adjusted to account for data errors. Empirical results show that the L1 minimization approach also works well in the presence of errors, with minor extensions to the algorithm. Several examples are presented to demonstrate the successful implementation of the new technique for range and Doppler ambiguity resolution in pulse Doppler radars.
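
The L0-to-L1 surrogate at the heart of this approach can be illustrated with a generic basis pursuit solve. The sketch below casts min ||x||_1 subject to Ax = b as a linear program via SciPy; the random measurement matrix and toy sparse vector stand in for, and are not, the radar-specific model of the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """Basis pursuit: min ||x||_1 s.t. A x = b, written as a linear program
    with the split x = u - v, u >= 0, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                          # minimize sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])                   # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# toy usage: recover a 3-sparse vector of length 60 from 20 random measurements
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]
x_hat = l1_recover(A, A @ x_true)
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```
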
26

Multi-level solver for degenerated problems with applications to p-versions of the fem

Beuchler, Sven 18 July 2003 (has links) (PDF)
Dissertation on the effective preconditioning of linear systems of equations arising from the discretization of a second-order elliptic boundary value problem by the finite element method. Multi-level preconditioners (BPX, multigrid, wavelets) are used as preconditioners.
27

Design structure and iterative release analysis of scientific software

Zulkarnine, Ahmed Tahsin January 2012 (has links)
One of the main objectives of software development in scientific computing is efficiency. Because such software is focused on a highly specialized application domain, important software quality metrics, e.g., usability and extensibility, may not be among the primary objectives. In this research, we have studied the design structures and iterative releases of scientific research software using the Design Structure Matrix (DSM). We implemented a DSM partitioning algorithm using the sparse matrix data structure Compressed Row Storage (CRS), and its timing was better than that obtained with the widely used C++ Boost library. Secondly, we computed several architectural complexity metrics and compared releases and total release costs of a number of open-source scientific research software packages. One important finding is the absence of circular dependencies in the studied software, which we attribute to the strong emphasis on computational performance of the code. Iterative release analysis indicates that there might be a correspondence between the “clustering coefficient” and the “release rework cost” of the software. / x, 87 leaves : ill. ; 29 cm
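
For readers unfamiliar with DSM partitioning, the sketch below shows a simple levelization of a dependency matrix: tasks with no unresolved dependencies are scheduled level by level, and any leftover tasks indicate a circular dependency. It is an illustration with a made-up matrix, not the CRS-based algorithm from the thesis.

```python
import numpy as np

def dsm_partition(dsm):
    """Partition a DSM by levelization.  Convention assumed here:
    dsm[i, j] = 1 means task i depends on task j.  Returns the ordered
    levels and the set of tasks left inside circular dependencies."""
    remaining = set(range(dsm.shape[0]))
    levels = []
    while remaining:
        level = [i for i in remaining
                 if not any(dsm[i, j] for j in remaining if j != i)]
        if not level:                            # every remaining task waits on another
            return levels, sorted(remaining)     # leftovers form one or more cycles
        levels.append(sorted(level))
        remaining -= set(level)
    return levels, []

dsm = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]])
print(dsm_partition(dsm))   # ([[2], [1], [0]], []) -> no circular dependencies
```
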
28

Memory-economic finite element and node renumbering

Auda, Hesham A. January 1981 (has links)
No description available.
29

Parallel processing in power systems computation on a distributed memory message passing multicomputer /

Hong, Chao, January 2000 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2000. / Includes bibliographical references (leaves 160-169).
30

Verarbeitung von Sparse-Matrizen in Kompaktspeicherform KLZ/KZU (Processing of sparse matrices in the compact storage forms KLZ/KZU)

Meyer, A., Pester, M. 30 October 1998 (has links) (PDF)
The paper describes a storage scheme for sparse symmetric or nonsymmetric matrices which has been developed and used for many years at the Technical University of Chemnitz. An overview of existing library subroutines using such matrices is included.
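
The KLZ/KZU formats themselves are not detailed in this abstract. As a generic illustration of the idea of compact sparse storage, here is a compressed-row sketch (values, column indices, row pointers); it is not the Chemnitz scheme.

```python
import numpy as np

def to_compressed_rows(dense):
    """Convert a dense matrix into compressed-row form: the nonzero values,
    their column indices, and a row-pointer array marking where each row
    starts in the value array."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0.0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))              # end of this row in `values`
    return np.array(values), np.array(col_idx), np.array(row_ptr)

A = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
print(to_compressed_rows(A))
# (array([4., 1., 3., 2., 5.]), array([0, 2, 1, 0, 2]), array([0, 2, 3, 5]))
```
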
