Compressed sensing (CS) is a relatively new branch of mathematics with interesting applications in signal processing, statistics, and computer science. This thesis presents some theory of compressed sensing, which allows the recovery of (high-dimensional) sparse vectors from (low-dimensional) compressed measurements by solving the L1-minimization problem. A possible application of CS to the problem of error correction is also presented, where the sparse vectors represent arbitrary noise. Successful sparse recovery by L1-minimization relies on certain properties of rectangular matrices, but these properties are extremely subtle and difficult to verify numerically. Therefore, to get an idea of how sparse (or dense) errors can be, numerical simulations of error correction were performed. These simulations show the performance of error correction with respect to various levels of error sparsity and matrix dimensions. It turns out that error-correction performance degrades more slowly for low matrix dimensions than for high matrix dimensions, while for sufficiently sparse errors, high matrix dimensions offer a higher likelihood of guaranteed error correction.
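The L1-minimization (basis pursuit) problem mentioned above can be sketched numerically. The following is a minimal illustration, not the thesis's own code: it assumes a random Gaussian measurement matrix and recasts min ||x||_1 subject to Ax = b as a linear program by splitting x = u - v with u, v >= 0, solved here with SciPy's `linprog`. The dimensions and sparsity level are hypothetical choices for the demo.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to Ax = b via linear programming.

    Split x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v),
    and the constraint becomes [A, -A] @ [u; v] = b.
    """
    m, n = A.shape
    c = np.ones(2 * n)                      # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])               # equality constraint matrix
    res = linprog(c, A_eq=A_eq, b_eq=b,
                  bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Hypothetical demo: recover a 3-sparse vector in R^100
# from 40 random Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

x_hat = basis_pursuit(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With this many measurements relative to the sparsity level, recovery typically succeeds up to solver tolerance; when the errors (nonzeros) become too dense relative to the matrix dimensions, as studied in the simulations above, the linear program returns a different minimizer and recovery fails.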
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:lnu-85499 |
Date | January 2019 |
Creators | Tordsson, Pontus |
Publisher | Linnéuniversitetet, Institutionen för matematik (MA) |
Source Sets | DiVA Archive at Upsalla University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |