11 |
Concurrent solutions of large sparse linear systems / Zheng, Tongsheng, January 1998 (has links)
Thesis (M.Sc.)--Memorial University of Newfoundland, 1999. / Bibliography: leaves 64-68.
|
12 |
KLU--a high performance sparse linear solver for circuit simulation problems / Natarajan, Ekanathan Palamadai. January 2005 (has links)
Thesis (M.S.)--University of Florida, 2005. / Title from title page of source document. Document formatted into pages; contains 79 pages. Includes vita. Includes bibliographical references.
|
13 |
Parallel solution of sparse linear systems / Nader, Babak, January 1987 (has links)
Thesis (M.S.)--Oregon Graduate Center, 1987.
|
14 |
Particle filter based tracking in a detection sparse discrete event simulation environment / Borovies, Drew A. January 2007 (has links) (PDF)
Thesis (M.S. in Modeling, Virtual Environment, and Simulation (MOVES))--Naval Postgraduate School, March 2007. / Thesis Advisor(s): Christian Darken. "March 2007." Includes bibliographical references (p. 115). Also available in print.
|
15 |
Sparse array representations and some selected array operations on GPUs / Wang, Hairong 01 September 2014 (has links)
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014. / A multi-dimensional data model provides a good conceptual view of the data in data warehousing and On-Line
Analytical Processing (OLAP). A typical representation of such a data model is as a multi-dimensional array
which is well suited when the array is dense. If the array is sparse, i.e., has few non-zero elements
relative to the product of the cardinalities of its dimensions, representing the data set as a
multi-dimensional array requires extremely large memory space while the actual data elements occupy a
relatively small fraction of it. Existing storage schemes for Multi-Dimensional Sparse Arrays (MDSAs) of
higher dimensions k (k > 2) focus on optimizing storage utilization and offer little flexibility in data
access efficiency. Most efficient storage schemes for sparse arrays are limited to matrices, i.e.,
two-dimensional arrays. In this dissertation, we introduce four storage schemes for MDSAs that handle the
sparsity of the array with two primary goals: reducing the storage overhead and maintaining efficient data
element access. These schemes, together with a well-known method referred to as Bit Encoded Sparse Storage
(BESS), were evaluated and compared on four basic array operations, namely construction of a scheme,
large-scale random element access, sub-array retrieval, and multi-dimensional aggregation. The four
proposed storage schemes, together with the evaluation results, are:
i.) The extended compressed row storage (xCRS), which extends the CRS method for sparse matrix storage to sparse arrays of higher dimensions and achieves the best data element access efficiency among the methods compared;
ii.) The bit encoded xCRS (BxCRS), which improves the storage utilization of xCRS by applying data compression with run-length encoding while maintaining its data access efficiency;
iii.) A hybrid approach (Hybrid), which provides the best control of the balance between storage utilization and data manipulation efficiency by combining xCRS and BESS;
iv.) The PATRICIA trie compressed storage (PTCS), which uses a PATRICIA trie to store the valid non-zero array elements, supports efficient data access, and has the unique property of supporting update operations conveniently.
BESS performed best for multi-dimensional aggregation, closely followed by the other schemes.
We also addressed the problem of accelerating selected array operations using General Purpose Computing on
Graphics Processing Units (GPGPU). The experimental results showed speedups ranging from 2 to over 20 times
on large-scale random element access and sub-array retrieval. In particular, we used GPUs to compute the
cube operator, a special case of multi-dimensional aggregation, using BESS; this yielded a 5- to 8-fold
speedup over our CPU-only implementation. The main contributions of this dissertation are the development,
implementation, and evaluation of four efficient schemes for storing multi-dimensional sparse arrays, and
the use of the massive parallelism of GPUs for some data warehousing operations.
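The core idea behind xCRS — collapsing all leading dimensions of an n-D array into CRS-style "rows" and keeping last-dimension indices per non-zero — can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation; the function names `to_xcrs` and `get` and the nested-list input format are assumptions for the example.

```python
def to_xcrs(dense, shape):
    """Store a sparse n-D nested-list array CRS-style: all leading
    dimensions are linearized into 'rows'; per non-zero element we keep
    its last-dimension index and value, with row_ptr marking row bounds."""
    def rows(x, depth):
        # walk the nested lists in row-major order, yielding each
        # innermost vector (a length-shape[-1] "row")
        if depth == len(shape) - 1:
            yield x
        else:
            for sub in x:
                yield from rows(sub, depth + 1)

    row_ptr, col_idx, values = [0], [], []
    for r in rows(dense, 0):
        for j, v in enumerate(r):
            if v != 0:
                col_idx.append(j)
                values.append(v)
        row_ptr.append(len(col_idx))  # cumulative count = row boundary
    return shape, row_ptr, col_idx, values

def get(enc, index):
    """Random element access: linearize the leading indices into a row
    number, then scan that row's slice of col_idx for the last index."""
    shape, row_ptr, col_idx, values = enc
    row = 0
    for dim, i in zip(shape[:-1], index[:-1]):
        row = row * dim + i
    for k in range(row_ptr[row], row_ptr[row + 1]):
        if col_idx[k] == index[-1]:
            return values[k]
    return 0  # element not stored, so it is an implicit zero

# A 2x3x4 array with two non-zeros: only those two values are stored.
a = [[[0] * 4 for _ in range(3)] for _ in range(2)]
a[0][1][2] = 5
a[1][2][3] = 7
enc = to_xcrs(a, (2, 3, 4))
```

Access cost is a scan over one row's non-zeros, which is why schemes in this family trade a small per-row pointer overhead for fast random element access.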
|
16 |
Local Cohomology of Determinantal Thickening and Properties of Ideals of Minors of Generalized Diagonal Matrices / Hunter Simper (15347248), 26 April 2023 (has links)
This thesis focuses on determinantal rings in two different contexts. In Chapter 3 the homological properties of powers of determinantal ideals are studied; in particular, the focus is on the local cohomology of determinantal thickenings, and we explicitly describe the R-module structure of some of these local cohomology modules. In Chapter 4 we introduce generalized diagonal matrices, a class of sparse matrices which contains diagonal and upper triangular matrices. We study the ideals of minors of such matrices and describe their properties, such as height, multiplicity, and Cohen-Macaulayness.
|
17 |
On Improving Sparse Matrix-Matrix Multiplication on GPUs / Kunchum, Rakshith 15 August 2017 (has links)
No description available.
|
18 |
A Hardware Interpreter for Sparse Matrix LU Factorization / Syed, Akber 16 September 2002 (has links)
No description available.
|
19 |
Parallel processing in power systems computation on a distributed memory message passing multicomputerHong, Chao, 洪潮 January 2000 (has links)
Published or final version / Electrical and Electronic Engineering / Doctoral / Doctor of Philosophy
|
20 |
Analysis of sparse systemsDuff, Iain Spencer January 1972 (has links)
The aim of this thesis is to conduct a general investigation in the field of sparse matrices, to investigate and compare various techniques for handling sparse systems suggested in the literature, to develop some new techniques, and to discuss the feasibility of using sparsity techniques in the solution of overdetermined equations and the eigenvalue problem.
|