1 |
An optimal parallel processing method for inverting sparse matrices. Betancourt, Ramon. January 1984 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1984. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references (leaves 195-214).
|
2 |
Sparse representation and fast processing of massive data. Li, Mingfei., 李明飞. January 2012 (has links)
Many computational problems involve massive data. A reasonable solution to these problems should be able to store and process the data in an effective manner. In this thesis, we study sparse representation of data streams and metric spaces, which allows for fast and private computation of heavy hitters from distributed streams, and for approximate distance queries between points in a metric space.
Specifically, we consider application scenarios where an untrusted aggregator wishes to continually monitor the heavy hitters across a set of distributed streams. Since each stream can contain sensitive data, such as the purchase history of customers, we wish to guarantee the privacy of each stream, while allowing the untrusted aggregator to accurately detect the heavy hitters and their approximate frequencies. Our protocols are scalable in settings where the volume of streaming data is large, since we guarantee low memory usage and processing overhead at each data source, and low communication overhead between the data sources and the aggregator.
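For illustration, here is a minimal sketch of the standard Misra-Gries heavy-hitter summary, the classical small-space, one-pass building block this setting resembles. The thesis's protocol is distributed and differentially private; the noising and aggregation steps are omitted here, and the example stream is an assumption.

```python
def misra_gries(stream, k):
    """One-pass heavy-hitter summary using at most k-1 counters.
    Any item occurring more than len(stream)/k times is guaranteed
    to survive in the returned dictionary (counts are lower bounds)."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # decrement all counters; drop those that reach zero
            for y in list(counters):
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return counters

print(misra_gries("aababcabad", 4))  # 'a' dominates the stream and survives
```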
We also study fault-tolerant spanners in doubling metrics. A subgraph H of a metric space X is called a k-vertex-fault-tolerant t-spanner ((k, t)-VFTS or simply k-VFTS) if for any subset S ⊆ X with |S| ≤ k, it holds that d_{H\S}(x, y) ≤ t · d(x, y) for any pair x, y ∈ X \ S.
For any doubling metric, we give a basic construction of a k-VFTS with stretch arbitrarily close to 1 that has an optimal O(kn) edges. We also consider bounded hop-diameter, which is studied here in the context of fault tolerance for the first time, even for Euclidean spanners. We provide a construction of a k-VFTS with bounded hop-diameter: for m ≥ 2n, we can reduce the hop-diameter of the above k-VFTS to O(α(m, n)) by adding O(km) edges, where α is a functional inverse of the Ackermann function. In addition, we construct a fault-tolerant single-sink spanner with bounded maximum degree, and use it to reduce the maximum degree of our basic k-VFTS. As a result, we get a k-VFTS with O(k^2 n) edges and maximum degree O(k^2). / published_or_final_version / Computer Science / Master / Master of Philosophy
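For intuition about the spanner objects defined above, here is a sketch of the classical greedy t-spanner construction for points in the plane. It is not the thesis's net-tree-based fault-tolerant construction; the points, the all-pairs relaxation, and t = 1.5 are illustrative assumptions.

```python
import itertools, math

def greedy_spanner(points, t):
    """Classical greedy t-spanner: scan point pairs in order of increasing
    distance and add an edge only when the current spanner distance between
    its endpoints exceeds t times the true metric distance."""
    n = len(points)
    d = lambda i, j: math.dist(points[i], points[j])
    INF = float("inf")
    dist = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    edges = []
    for i, j in sorted(itertools.combinations(range(n), 2), key=lambda p: d(*p)):
        if dist[i][j] > t * d(i, j):
            edges.append((i, j))
            w = d(i, j)
            # relax all pairs through the newly added edge
            for a in range(n):
                for b in range(n):
                    via = min(dist[a][i] + w + dist[j][b],
                              dist[a][j] + w + dist[i][b])
                    if via < dist[a][b]:
                        dist[a][b] = via
    return edges

print(greedy_spanner([(0, 0), (1, 0), (2, 0), (1, 1)], t=1.5))
```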
|
3 |
Influence on information networks and sparse representation of metric spaces / by Li Ning. Ning, Li, 宁立. January 2013 (has links)
As social networking applications have become popular and attracted more and more attention, it has become possible (or easier) to mine the information networks of a large group of people, for instance to track the spread of viruses, products and innovations. Researchers also benefit from these sources, as they help in the further study of human behavior in a social networking environment. In this thesis, we study how opinions cascade via social links, and also the sparse representation of the large data sets produced by fast-growing information networks.
Consider an instance of advertising products on an information network, where the objective for an advertiser is to choose a set of initially active nodes, subject to some budget constraints, such that the expected number of active nodes over time is maximized. In this work, we consider the non-progressive case, in which active nodes can become inactive in subsequent time steps. This setting is interesting, for people have no reason to keep using a product forever, especially when a majority of their neighbors have given it up. Another setting is that users' susceptibilities to the new behavior do not change once they are chosen initially. This differs from previous works, and we think it is also interesting.
Our main result is that with a specified budget, an advertiser can achieve a 1/2-approximation on maximizing the number of active nodes over a certain period of time, and a (1 - 1/e)-approximation in expectation with a randomized algorithm.
In our model, users' opinions are updated according to a weighted averaging scheme, which usually leads to consensus. When this happens, the expected influence over a long time period is dominated by the consensus value. Hence, it is interesting to study when and how consensus is achieved. We study the convergence time required to achieve consensus. In particular, we consider the case when the underlying network is dynamic, and give conclusions for different averaging models. Both our analysis and experiments show that dynamic networks exhibit fast convergence behavior, even under very mild connectivity assumptions.
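For illustration, a minimal sketch of the weighted averaging dynamics follows. This is a DeGroot-style update with an assumed, fixed row-stochastic weight matrix; the thesis treats dynamic networks and several averaging models, which this static toy omits.

```python
import numpy as np

def weighted_averaging(W, x0, steps=100):
    """Iterate x(t+1) = W x(t) with a row-stochastic weight matrix W.
    Under mild connectivity assumptions the opinions converge to consensus."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = W @ x
    return x

W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
print(weighted_averaging(W, [0.0, 0.5, 1.0]))  # approaches [0.5, 0.5, 0.5]
```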
In this work, we also study how to maintain the pairwise distances for a large group of users, when the distances form a doubling metric space. Motivated by Elkin and Solomon's construction of spanners for doubling metrics with constant maximum degree, hop-diameter O(log n) and lightness O(log n) (i.e., weight O(log n) · w(MST)), we offer a simple alternative construction that is very intuitive and is based on the standard technique of a net tree with cross edges. Indeed, our approach can be readily applied to our previous construction of k-fault-tolerant spanners (ICALP 2012) to achieve k-fault tolerance, maximum degree O(k^2), hop-diameter O(log n) and lightness O(k^3 log n). / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
|
4 |
Direct sparse matrix methods for interior point algorithms. Jung, Ho-Won. January 1990 (has links)
Recent advances in linear programming solution methodology have focused on interior point algorithms. These are powerful new methods, achieving significant reductions in computer time for large LPs and solving problems significantly larger than previously possible. This dissertation describes the implementation of interior point algorithms. It focuses on applications of direct sparse matrix methods to sparse symmetric positive definite systems of linear equations on scalar computers and vector supercomputers. The most computationally intensive step in each iteration of any interior point algorithm is the numerical factorization of a sparse symmetric positive definite matrix. In large problems or relatively dense problems, 80-90% or more of computational time is spent in this step. This study concentrates on solution methods for such linear systems. It is based on modifications and extensions of graph theory applied to sparse matrices. The row and column permutation of a sparse symmetric positive definite matrix dramatically affects the performance of solution algorithms. Various reordering methods are considered to find the best ordering for various numerical factorization methods and computer architectures. It is assumed that the reordering method will follow the fill-preserving rule, i.e., not allow additional fill-ins beyond that provided by the initial ordering. To follow this rule, a modular approach is used. In this approach, the matrix is first permuted by using any minimum degree heuristic, and then the permuted matrix is again reordered according to a specific reordering objective. Results of different reordering methods are described. There are several ways to compute the Cholesky factor of a symmetric positive definite matrix. A column Cholesky algorithm is a popular method for dense and sparse matrix factorization on serial and parallel computers. Applying this algorithm to a sparse matrix requires the use of sparse vector operations. Graph theory is applied to reduce sparse vector computations. A second and relatively new algorithm is the multifrontal algorithm. This method uses dense operations for sparse matrix computation at the expense of some data manipulation. The performance of the column Cholesky and multifrontal algorithms in the numerical factorization of a sparse symmetric positive definite matrix on an IBM 3090 vector supercomputer is described.
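To make the factorization step concrete, below is a minimal dense left-looking (column) Cholesky sketch in Python/NumPy. Sparse implementations such as those studied in this dissertation add graph-based symbolic analysis, reordering, and (in the multifrontal case) dense frontal matrices, all of which this sketch omits; the example matrix is an assumption.

```python
import numpy as np

def column_cholesky(A):
    """Left-looking (column) Cholesky: compute lower-triangular L with A = L L^T.
    Dense illustration of the kernel that sparse codes accelerate via graph theory."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        # cmod: subtract contributions of previously computed columns 0..j-1
        col = A[j:, j] - L[j:, :j] @ L[j, :j]
        # cdiv: take the pivot square root and scale the column below it
        L[j, j] = np.sqrt(col[0])
        L[j+1:, j] = col[1:] / L[j, j]
    return L

# Usage: factor a small SPD matrix and check the residual
A = np.array([[4., 2., 0.], [2., 5., 1.], [0., 1., 3.]])
L = column_cholesky(A)
print(np.allclose(L @ L.T, A))  # True
```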
|
5 |
Tiebreaking the minimum degree algorithm for ordering sparse symmetric positive definite matrices. Cavers, Ian Alfred. January 1987 (has links)
The minimum degree algorithm is known as an effective scheme for identifying a fill-reduced ordering for symmetric, positive definite, sparse linear systems that are to be solved using a Cholesky factorization. Although the original algorithm has been enhanced to improve the efficiency of its implementation, ties between minimum degree elimination candidates are still arbitrarily broken. For many systems, the fill levels of orderings produced by the minimum degree algorithm are very sensitive to the precise manner in which these ties are resolved. This thesis introduces several tiebreaking enhancements of the minimum degree algorithm. Emphasis is placed upon a tiebreaking strategy based upon the deficiency of minimum degree elimination candidates, which can consistently identify low-fill orderings for a wide spectrum of test problems. All tiebreaking strategies are fully integrated into implementations of the minimum degree algorithm based upon a quotient graph model, including indistinguishable sets represented by uneliminated supernodes. The resulting programs are tested on a wide variety of sparse systems in order to investigate the performance of the algorithm enhanced by the tiebreaking strategies and the quality of the orderings they produce. / Science, Faculty of / Computer Science, Department of / Graduate
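A toy sketch of the idea follows, assuming a graph stored as Python adjacency sets. Quotient graphs, supernodes and efficient degree updates, as used in the thesis's implementations, are omitted; the example graph is an assumption.

```python
def min_degree_order(adj):
    """Greedy minimum degree ordering of an undirected graph given as
    {node: set(neighbors)}. Ties are broken by minimum deficiency, i.e.
    the number of fill edges eliminating the candidate would create."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while adj:
        def deficiency(v):
            nbrs = list(adj[v])
            return sum(1 for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))
                       if nbrs[j] not in adj[nbrs[i]])
        # minimum degree first; minimum deficiency breaks ties
        v = min(adj, key=lambda u: (len(adj[u]), deficiency(u)))
        order.append(v)
        nbrs = adj.pop(v)
        for u in nbrs:                      # eliminate v: clique its neighbors
            adj[u].discard(v)
            adj[u] |= (nbrs - {u})
    return order

# star plus a pendant path: low-degree leaves are eliminated first
graph = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
print(min_degree_order(graph))  # e.g. [1, 2, 0, 3, 4]
```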
|
6 |
Study on efficient sparse and low-rank optimization and its applications. Lou, Jian. 29 August 2018 (has links)
Sparse and low-rank models have become fundamental machine learning tools and have wide applications in areas including computer vision, data mining and bioinformatics. It is of vital importance, yet of great difficulty, to develop efficient optimization algorithms for solving these models, especially under practical design considerations of computational, communication and privacy restrictions for ever-larger-scale problems. This thesis proposes a set of new algorithms to improve the efficiency of sparse and low-rank model optimization. First, facing a large number of data samples during training of empirical risk minimization (ERM) with structured sparse regularization, the gradient computation part of the optimization can be computationally expensive and becomes the bottleneck. Therefore, I propose two gradient-efficient optimization algorithms to reduce the total or per-iteration computational cost of the gradient evaluation step, which are new variants of the widely used generalized conditional gradient (GCG) method and the incremental proximal gradient (PG) method, respectively. In detail, I propose a novel algorithm under the GCG framework that requires the same optimal count of gradient evaluations as proximal gradient. I also propose a refined variant for a type of gauge-regularized problem, where approximation techniques are allowed to further accelerate the linear subproblem computation. Moreover, under the incremental proximal gradient framework, I propose to approximate the composite penalty by its proximal average, so that a trade-off is made between precision and efficiency. Theoretical analysis and empirical studies show the efficiency of the proposed methods. Furthermore, large data dimensions (e.g. the large frame size of high-resolution image and video data) can lead to high per-iteration computational complexity, and thus result in poor scalability of the optimization algorithm from a practical perspective. In particular, in spectral k-support norm regularized robust low-rank matrix and tensor optimization, the traditional proximal-map-based alternating direction method of multipliers (ADMM) requires evaluating a subproblem of super-linear complexity in each iteration. I propose a set of per-iteration computationally efficient alternatives to reduce the cost to linear and nearly linear with respect to the input data dimension for the matrix and tensor cases, respectively. The proposed algorithms consider the dual objective of the original problem, which can take advantage of the more computationally efficient linear oracle of the spectral k-support norm. Further, by studying the sub-gradient of the loss of the dual objective, a line-search strategy is adopted in the algorithm to enable it to adapt to Hölder smoothness. The overall convergence rate is also provided. Experiments on various computer vision and image processing applications demonstrate the superior prediction performance and computational efficiency of the proposed algorithm. In addition, since machine learning datasets often contain sensitive individual information, privacy preservation becomes more and more important in sparse optimization. I provide two differentially private optimization algorithms for two common large-scale machine learning computing contexts, namely distributed and streaming optimization.
For the distributed setting, I develop a new algorithm with 1) a guaranteed strict differential privacy requirement, 2) nearly optimal utility and 3) reduced uplink communication complexity, for a nearly unexplored context with features partitioned among different parties under privacy restrictions. For the streaming setting, I propose to improve the utility of the private algorithm by trading away the privacy of distant input instances, under the differential privacy restriction. I show that the proposed method can solve the private approximation function either by a projected gradient update for projection-friendly constraints, or by a conditional gradient step for linear-oracle-friendly constraints, both of which improve the regret bound to match the non-private optimal counterpart.
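Several of the methods above refine the proximal gradient baseline. As a point of reference, here is a minimal ISTA sketch for l1-regularized least squares, an assumed example problem much simpler than the structured penalties and private settings treated in the thesis.

```python
import numpy as np

def ista(A, b, lam, steps=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    The proximal map of the l1 penalty is elementwise soft-thresholding."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the gradient's Lipschitz constant
    for _ in range(steps):
        z = x - step * (A.T @ (A @ x - b))   # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0     # sparse ground truth
print(ista(A, A @ x_true, lam=0.1)[:8])      # leading entries near 1, rest near 0
```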
|
7 |
DSJM : a software toolkit for direct determination of sparse Jacobian matrices. Hasan, Mahmudul. January 2011 (has links)
DSJM is a software toolkit written in portable C++ that enables direct determination of sparse Jacobian matrices whose sparsity pattern is known a priori. Using the seed matrix S ∈ R^{n×p}, the Jacobian A ∈ R^{m×n} can be determined by solving AS = B, where B ∈ R^{m×p} has been obtained via finite difference approximation or forward automatic differentiation. The seed matrix S is defined by the nonzero unknowns in A. DSJM includes well-known as well as new column ordering heuristics. Numerical testing is highly promising, both in terms of running time and the number of matrix-vector products needed to determine A. / x, 71 leaves : ill. ; 29 cm
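A minimal sketch of the underlying idea follows, assuming a known boolean sparsity pattern. The grouping heuristic here is plain greedy coloring of structurally orthogonal columns; DSJM's actual ordering heuristics are more refined, and the example pattern is an assumption.

```python
import numpy as np

def greedy_column_groups(pattern):
    """Group structurally orthogonal columns: two columns may share a group
    (color) only if they have no nonzero in a common row."""
    m, n = pattern.shape
    color = [-1] * n
    for j in range(n):
        used = {color[k] for k in range(j)
                if np.any(pattern[:, j] & pattern[:, k])}
        c = 0
        while c in used:
            c += 1
        color[j] = c
    return color

def seed_matrix(color):
    """Build S: Jacobian column j contributes to seed column color[j].
    One finite-difference (or AD) evaluation per seed column yields B = A S,
    from which the nonzeros of A can be read off row by row."""
    p = max(color) + 1
    S = np.zeros((len(color), p))
    for j, c in enumerate(color):
        S[j, c] = 1.0
    return S

pattern = np.array([[1, 0, 1],
                    [0, 1, 0],
                    [1, 1, 0]], dtype=bool)
colors = greedy_column_groups(pattern)
print(colors, seed_matrix(colors).shape)  # [0, 1, 1]: columns 1 and 2 share a group
```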
|
8 |
A study of the performance of a sparse grid cross section representation methodology as applied to MOX fuel. 12 November 2015 (has links)
M.Phil. (Energy Studies) / Nodal diffusion methods are often used to calculate the distribution of neutrons in a nuclear reactor core. They require few-group homogenized neutron cross sections for every heterogeneous sub-region of the core. The homogenized cross sections are pre-calculated at various reactor states and represented in a way that facilitates the reconstruction of cross sections at other possible states. In this study a number of such representations were built for the homogenized cross sections of a MOX (mixed oxide) fuel assembly via hierarchical Lagrange interpolation on Clenshaw-Curtis sparse grids. These cross sections were represented as a function of various thermal hydraulic and material composition parameters of a pressurized water reactor core (i.e. burnup, soluble boron concentration, fuel temperature, moderator temperature and moderator density), which are generally referred to as state parameters. Representations were produced for the homogenized cross sections of a number of individual isotopes, as well as the effective (lumped) cross section of all the materials in the assembly. This was done for both two and six energy groups. Additionally, two sets of state parameter intervals were considered for each of the group structures. The first set of intervals was chosen to correspond to conditions that may be encountered during day-to-day reactor operations. The second set of intervals was chosen to be applicable to the simulation of accident scenarios and therefore have wider ranges for fuel temperature, moderator temperature and moderator density.
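As a 1D illustration of the building blocks named above (nested Clenshaw-Curtis nodes plus Lagrange interpolation): the full method tensorizes these hierarchically over all five state parameters, and the test function below is an arbitrary stand-in, not actual cross-section data.

```python
import numpy as np

def clenshaw_curtis_nodes(level):
    """Nested Clenshaw-Curtis points on [-1, 1]: a single node at level 0,
    then 2^level + 1 nodes; nestedness keeps hierarchical refinement cheap."""
    if level == 0:
        return np.array([0.0])
    n = 2 ** level
    return np.cos(np.pi * np.arange(n + 1) / n)

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolant through (xs, ys) at the point x."""
    total = 0.0
    for i, xi in enumerate(xs):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += ys[i] * w
    return total

xs = clenshaw_curtis_nodes(3)          # 9 nested nodes
ys = np.exp(-3.0 * xs**2)              # stand-in for a homogenized cross section
print(lagrange_eval(xs, ys, 0.3))      # close to exp(-0.27) ≈ 0.763
```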
|
9 |
Video-based face alignment using efficient sparse and low-rank approach. January 2011 (has links)
Wu, King Keung. / "August 2011." / Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (p. 119-126). / Abstracts in English and Chinese. / Abstract --- p.i / Acknowledgement --- p.v / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- Overview of Face Alignment Algorithms --- p.1 / Chapter 1.1.1 --- Objectives --- p.1 / Chapter 1.1.2 --- Motivation: Photo-realistic Talking Head --- p.2 / Chapter 1.1.3 --- Existing methods --- p.5 / Chapter 1.2 --- Contributions --- p.8 / Chapter 1.3 --- Outline of the Thesis --- p.11 / Chapter 2 --- Sparse Signal Representation --- p.13 / Chapter 2.1 --- Introduction --- p.13 / Chapter 2.2 --- Problem Formulation --- p.15 / Chapter 2.2.1 --- l0-norm minimization --- p.15 / Chapter 2.2.2 --- Uniqueness --- p.16 / Chapter 2.3 --- Basis Pursuit --- p.18 / Chapter 2.3.1 --- From l0-norm to l1-norm --- p.19 / Chapter 2.3.2 --- l0-l1 Equivalence --- p.20 / Chapter 2.4 --- l1-Regularized Least Squares --- p.21 / Chapter 2.4.1 --- Noisy case --- p.22 / Chapter 2.4.2 --- Over-determined systems of linear equations --- p.22 / Chapter 2.5 --- Summary --- p.24 / Chapter 3 --- Sparse Corruptions and Principal Component Pursuit --- p.25 / Chapter 3.1 --- Introduction --- p.25 / Chapter 3.2 --- Sparse Corruptions --- p.26 / Chapter 3.2.1 --- Sparse Corruptions and l1-Error --- p.26 / Chapter 3.2.2 --- l1-Error and Least Absolute Deviations --- p.28 / Chapter 3.2.3 --- l1-Regularized l1-Error --- p.29 / Chapter 3.3 --- Robust Principal Component Analysis (RPCA) and Principal Component Pursuit --- p.31 / Chapter 3.3.1 --- Principal Component Analysis (PCA) and RPCA --- p.31 / Chapter 3.3.2 --- Principal Component Pursuit --- p.33 / Chapter 3.4 --- Experiments of Sparse and Low-rank Approach on Surveillance Video --- p.34 / Chapter 3.4.1 --- Least Squares --- p.35 / Chapter 3.4.2 --- l1-Regularized Least Squares --- p.35 / Chapter 3.4.3 --- l1-Error --- p.36 / Chapter 3.4.4 --- l1-Regularized l1-Error --- p.36 / Chapter 3.5 --- Summary --- p.37 / Chapter 4 --- Split Bregman Algorithm for l1-Problem --- p.45 / Chapter 4.1 --- Introduction --- p.45 / Chapter 4.2 --- Bregman Distance --- p.46 / Chapter 4.3 --- Bregman Iteration for Constrained Optimization --- p.47 / Chapter 4.4 --- Split Bregman Iteration for l1-Regularized Problem --- p.50 / Chapter 4.4.1 --- Formulation --- p.51 / Chapter 4.4.2 --- Advantages of Split Bregman Iteration --- p.52 / Chapter 4.5 --- Fast l1 Algorithms --- p.54 / Chapter 4.5.1 --- l1-Regularized Least Squares --- p.54 / Chapter 4.5.2 --- l1-Error --- p.55 / Chapter 4.5.3 --- l1-Regularized l1-Error --- p.57 / Chapter 4.6 --- Summary --- p.58 / Chapter 5 --- Face Alignment Using Sparse and Low-rank Decomposition --- p.61 / Chapter 5.1 --- Robust Alignment by Sparse and Low-rank Decomposition for Linearly Correlated Images (RASL) --- p.61 / Chapter 5.2 --- Problem Formulation --- p.62 / Chapter 5.2.1 --- Theory --- p.62 / Chapter 5.2.2 --- Algorithm --- p.64 / Chapter 5.3 --- Direct Extension of RASL: Multi-RASL --- p.66 / Chapter 5.3.1 --- Formulation --- p.66 / Chapter 5.3.2 --- Algorithm --- p.67 / Chapter 5.4 --- Matlab Implementation Details --- p.68 / Chapter 5.4.1 --- Preprocessing --- p.70 / Chapter 5.4.2 --- Transformation --- p.73 / Chapter 5.4.3 --- Jacobian Ji --- p.74 / Chapter 5.5 --- Experiments --- p.75 / Chapter 5.5.1 --- Qualitative Evaluations Using Small Dataset --- p.76 / Chapter 5.5.2 --- Large Dataset Test --- p.81 / Chapter 5.5.3 --- Conclusion --- p.85 / Chapter 5.6 --- Sensitivity analysis on selection of references --- p.87 / Chapter 5.6.1 --- References from consecutive frames --- p.88 / Chapter 5.6.2 --- References from RASL-aligned images --- p.91 / Chapter 5.7 --- Summary --- p.92 / Chapter 6 --- Extension of RASL for video: One-by-One Approach --- p.96 / Chapter 6.1 --- One-by-One Approach --- p.96 / Chapter 6.1.1 --- Motivation --- p.97 / Chapter 6.1.2 --- Algorithm --- p.97 / Chapter 6.2 --- Choices of Optimization --- p.101 / Chapter 6.2.1 --- l1-Regularized Least Squares --- p.101 / Chapter 6.2.2 --- l1-Error --- p.102 / Chapter 6.2.3 --- l1-Regularized l1-Error --- p.103 / Chapter 6.3 --- Experiments --- p.104 / Chapter 6.3.1 --- Evaluation for Different l1 Algorithms --- p.104 / Chapter 6.3.2 --- Conclusion --- p.108 / Chapter 6.4 --- Exploiting Property of Video --- p.109 / Chapter 6.5 --- Summary --- p.110 / Chapter 7 --- Conclusion and Future Work --- p.112 / Chapter A --- Appendix --- p.117 / Bibliography --- p.119
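This record's table of contents centers on sparse-plus-low-rank decomposition (principal component pursuit). A compact sketch of the standard inexact-ALM-style iteration for it follows; the parameter choices and the rank-1 demo below are common defaults assumed for illustration, not taken from the thesis.

```python
import numpy as np

def pcp(M, lam=None, mu=None, iters=200):
    """Toy principal component pursuit by alternating shrinkage:
    split M into low-rank L plus sparse S (inexact-ALM flavor)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / np.sum(np.abs(M))
    S = np.zeros_like(M); Y = np.zeros_like(M)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(s, 1.0 / mu)) @ Vt   # singular value thresholding
        S = shrink(M - L + Y / mu, lam / mu)        # entrywise soft-threshold
        Y = Y + mu * (M - L - S)                    # dual update
    return L, S

# Usage: a rank-1 matrix plus a sparse spike should separate approximately
rng = np.random.default_rng(0)
u = rng.standard_normal((20, 1)); M0 = u @ u.T
spikes = np.zeros_like(M0); spikes[3, 5] = 10.0
L, S = pcp(M0 + spikes)
print(np.linalg.norm(L - M0) / np.linalg.norm(M0))  # relative error, typically small
```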
|
10 |
A computational study of sparse matrix storage schemes. Haque, Sardar Anisul, University of Lethbridge. Faculty of Arts and Science. January 2008 (has links)
The efficiency of linear algebra operations for sparse matrices on modern high performance computing systems is often constrained by the available memory bandwidth. We are interested in sparse matrices whose sparsity pattern is unknown. In this thesis, we study the efficiency of the major storage schemes for sparse matrices during multiplication with a dense vector. A proper reordering of columns or rows usually results in reduced memory traffic due to improved data reuse. This thesis also proposes an efficient column ordering algorithm based on the binary reflected Gray code. Computational experiments show that this ordering results in increased performance in computing the product of a sparse matrix with a dense vector. / xi, 76 leaves : ill. ; 29 cm.
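To show the memory-traffic pattern at issue, here is a minimal sketch of sparse matrix-vector multiplication in compressed sparse row (CSR) form. The irregular indexed reads of x are what locality-improving column reorderings (such as the Gray-code ordering proposed here) aim to make more cache-friendly; the example matrix is an assumption.

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """y = A x with A stored in CSR form. Memory traffic is dominated by
    streaming values/col_idx plus the irregular gather of x[col_idx[k]]."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]]
values  = np.array([1.0, 2.0, 3.0])
col_idx = np.array([0, 2, 1])
row_ptr = np.array([0, 2, 3])
print(csr_spmv(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0])))  # [3. 3.]
```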
|