1

Optimizing Lempel-Ziv Factorization for the GPU Architecture

Ching, Bryan 01 June 2014
Lossless data compression is used to reduce storage requirements, relieving I/O channels and making better use of bandwidth. The Lempel-Ziv lossless compression algorithms form the basis of many of the most widely used compression schemes. General-purpose computing on graphics processing units (GPGPU) lets us exploit the massively parallel nature of GPUs for computations other than their original purpose of rendering graphics. Our work targets the use of GPUs for general lossless data compression. Specifically, we developed and ported an algorithm that constructs the Lempel-Ziv factorization directly on the GPU. Our implementation sidesteps the sequential nature of LZ factorization and computes the factorization in parallel. By breaking the LZ factorization down into what we call the PLZ, we are able to outperform the fastest serial CPU implementations by up to 24x and to perform comparably to a parallel multicore CPU implementation. To achieve these speeds, our implementation produced LZ factorizations that were on average only 0.01 percent larger than the optimal solution that could be computed sequentially. We also reevaluate the fastest GPU suffix array construction algorithm, which is needed to compute the LZ factorization, and find speedups of up to 5x over the fastest CPU implementations.
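The PLZ decomposition itself is specific to this thesis, but the underlying LZ factorization is standard. For orientation, here is a minimal sequential sketch in Python; the function name and the O(n^2) scan are illustrative only, since practical implementations (including the one reevaluated above) derive the factorization from a suffix array in linear time.

    def lz_factorize(s):
        """Sequential LZ factorization: each factor is either a fresh literal
        or the longest prefix of the remaining text that already starts at an
        earlier position (overlapping sources allowed, as in LZ77).
        Naive O(n^2) scan; suffix-array methods achieve linear time."""
        factors, i, n = [], 0, len(s)
        while i < n:
            best_len, best_src = 0, -1
            for j in range(i):                        # try every earlier start
                l = 0
                while i + l < n and s[j + l] == s[i + l]:
                    l += 1
                if l > best_len:
                    best_len, best_src = l, j
            if best_len == 0:
                factors.append((s[i], None))          # new literal character
                i += 1
            else:
                factors.append((best_src, best_len))  # (source offset, length)
                i += best_len
        return factors

    print(lz_factorize("abababb"))
    # [('a', None), ('b', None), (0, 4), (1, 1)]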
2

Data Compression for use in the Short Messaging System / Datakompression för användning i Short Messaging Systemet

Andersson, Måns January 2010
Data compression is a vast subject with a large number of algorithms, and no single algorithm is good at every task. This thesis takes a closer look at the compression of small files in the range of 100-300 bytes, with the compressed output intended to be sent over the Short Messaging System (SMS). Several well-known algorithms are tested for compression ratio, and two of them, Algorithm Λ and Adaptive Arithmetic Coding, are selected for closer study and implemented in Java. These implementations are then tested alongside the initial ones, and one algorithm is chosen to answer the question: "Which compression algorithm is best suited for compressing data for use in Short Messaging System messages?"
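For context on why this size range is hard, the toy measurement below uses Python's zlib as a stand-in for the coders studied in the thesis; the message text is made up, and the exact byte counts will vary.

    import zlib

    # Hypothetical SMS-sized payload; zlib stands in for the thesis's coders.
    msg = ("Meeting moved to 14:30 tomorrow, room B217. Bring the quarterly "
           "report, the updated budget figures and the signed contract.").encode()

    packed = zlib.compress(msg, 9)
    print(len(msg), "->", len(packed), "bytes")
    # At this scale the stream's fixed header plus the cost of learning the
    # source statistics from scratch can cancel out most of the savings,
    # which is why the thesis compares adaptive coders on 100-300 byte inputs.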
3

Auto-Índice de Texto Basado en LZ77 / Text Self-Index Based on LZ77

Kreft Carreño, Sebastián Andrés January 2010
No description available.
4

Klasifikace biologických sekvencí s využitím bezeztrátové komprese / Biological sequence classification utilizing lossless data compression algorithms

Kruml, Ondřej January 2016
This master's thesis investigates the use of lossless compression algorithms for the classification of biological sequences. It first presents a literature review of lossless compression algorithms, on whose basis the dictionary-based algorithm introduced by A. Lempel and J. Ziv in 1977 (LZ77) was selected. This algorithm is normally used for data compression; in this work it was modified to enable the classification of biological sequences, and further modifications extending its classification capabilities were proposed. In the course of the work, a suite of biological sequence datasets was assembled to allow thorough testing of the algorithm. The algorithm was compared with classical alignment-based methods (Jukes-Cantor, Tamura, and Kimura) and was shown to achieve comparable results in biological sequence classification, even outperforming them on 20% of the datasets. It performs particularly well on sequences that are distantly related.
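The thesis's classifier modifies LZ77 internally, which is not reproduced here; the sketch below instead illustrates the general compression-distance idea it builds on, using the well-known normalized compression distance with zlib's LZ77-based deflate as a stand-in. The sequences are fabricated examples.

    import zlib

    def csize(x):
        return len(zlib.compress(x, 9))

    def ncd(x, y):
        """Normalized compression distance: near 0 for closely related
        inputs, closer to 1 for unrelated ones."""
        cx, cy, cxy = csize(x), csize(y), csize(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    seq_a = b"ATGGCGTACGTTAGCATCGATCGATCGTACGATCG" * 4
    seq_b = b"ATGGCGTACGTTAGCATCGTTCGATCGTACGATCG" * 4  # one substitution per repeat
    seq_c = b"CCGGTTAAGGCCAATTGGCCAATTAACCGGTTAA" * 4    # unrelated composition

    print(ncd(seq_a, seq_b))  # small: the pair shares most of its structure
    print(ncd(seq_a, seq_c))  # larger: little cross-compressible content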
5

Fast Low Memory T-Transform: string complexity in linear time and space with applications to Android app store security.

Rebenich, Niko 27 April 2012
This thesis presents flott, the Fast Low Memory T-Transform, currently the fastest and most memory-efficient linear time and space algorithm available for computing the string complexity measure T-complexity. The flott algorithm uses 64.3% less memory and, in our experiments, runs asymptotically 20% faster than its predecessor. A full C implementation is provided and published under the Apache License 2.0. From the flott algorithm, two deterministic information measures are derived and applied to Android app store security: the normalized T-complexity distance and the instantaneous T-complexity rate, which are used to detect, locate, and visualize unusual information changes in Android applications. The information measures introduced present a novel, scalable approach to assist with the detection of malware in app stores.
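flott's T-complexity measures are not reimplemented here; as a rough illustration of how a local information-rate profile can localize unusual content, the sketch below computes a sliding-window compressibility profile with zlib standing in for T-complexity. The window size, threshold, and synthetic blob are arbitrary choices for the demonstration.

    import os, zlib

    def complexity_profile(data, window=1024, step=256):
        """Per-window compressed-size ratio: a crude proxy for the local
        information rate. flott computes the T-complexity rate instead,
        but the anomaly-spotting idea is the same."""
        profile = []
        for off in range(0, max(1, len(data) - window + 1), step):
            chunk = data[off:off + window]
            profile.append((off, len(zlib.compress(chunk, 9)) / len(chunk)))
        return profile

    # Synthetic "app": repetitive code-like bytes with a high-entropy insert,
    # loosely mimicking packed or injected content inside an APK.
    blob = b"A" * 4096 + os.urandom(2048) + b"A" * 4096
    for off, rate in complexity_profile(blob):
        if rate > 0.5:               # flag unusually incompressible windows
            print("offset", off, "rate", round(rate, 2))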
