1 |
Efficient data encoder for endoscopic imaging applications. Tajallipour, Ramin (05 January 2011)
The invention of medical imaging technology revolutionized the process of diagnosing diseases and opened a new world for studying the inside of the human body. Different devices have been developed to capture images of different human organs; gastro-endoscopy, for example, is a medical imaging technique that captures images of the human gastrointestinal tract. With the advancement of technology, the shortcomings of such devices have gradually been rectified. For example, the invention of the swallowable pill camera known as Wireless Capsule Endoscopy (WCE) radically reduced pain, examination time, and bleeding risk for patients. Development of such technologies and devices has accelerated, and demand for better-performing instruments has grown over time. In the case of WCE, special requirements such as a small form factor (as small as an ordinary pill) and wireless transmission of the captured images impose tight restrictions on power consumption and area usage.
This research focuses on reducing the hardware cost of the image encoder for endoscopic imaging applications. Several encoding algorithms have been studied, and their comparative results are discussed. An efficient data encoder based on the Lempel-Ziv-Welch (LZW) algorithm is presented. The encoder is library-based: the size of the library can be modified by the user, and hence the output data rate can be controlled according to the bandwidth requirement. Simulations carried out with several endoscopic images show that a minimum compression ratio of 92.5% can be achieved with a minimum reconstruction quality of 30 dB. The hardware architecture and Field-Programmable Gate Array (FPGA) implementation results for the proposed window-based LZW are also presented. A new lossy LZW algorithm is proposed and implemented in FPGA, yielding promising results for this application.
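The abstract's key knob, a user-adjustable dictionary ("library") size, is easy to model in software. The following sketch shows LZW encoding with a capped dictionary; the function name and cap parameter are assumptions for illustration and do not reproduce the thesis's window-based FPGA design.

```python
def lzw_encode(data: bytes, max_dict_size: int = 4096) -> list:
    """Illustrative software sketch only, not the thesis's FPGA design.
    Encode `data` with LZW, freezing the dictionary once it reaches
    `max_dict_size` entries."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate              # keep extending the current phrase
        else:
            output.append(dictionary[current])
            if next_code < max_dict_size:    # stop growing at the cap
                dictionary[candidate] = next_code
                next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output
```

With a 1024-entry cap, every output code fits in 10 bits, so shrinking the cap directly bounds the output bit rate at the cost of compression efficiency; this is one simple way a library-size knob can trade data rate against bandwidth.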
|
2 |
Texture Classification Using the Lempel-Ziv-Welch Algorithm (Classificação de Texturas usando o Algoritmo Lempel-Ziv-Welch). Meira, Moab Mariz (29 February 2008)
This work presents a new and efficient texture classification method using the Lempel-Ziv-Welch (LZW) lossless compression algorithm. In the learning phase, LZW builds dictionaries for the horizontal and vertical structures of each texture class. In the classification phase, texture samples are encoded with LZW in static mode, using the dictionaries built in the previous phase. A sample is assigned to the class whose dictionary yields the best coding rate. The classifier was evaluated for several sizes of the training set and of the training samples, and under different texture illumination conditions. The proposed method achieves 100% accuracy in some experiments using texture samples from the Brodatz album. Direct comparisons with other works indicate the method's superiority over other high-performance methods.
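The classification rule described above is compact enough to sketch. The Python sketch below collapses the thesis's separate horizontal and vertical dictionaries into a single phrase set per class for brevity; all function names are assumptions for illustration.

```python
def build_dictionary(training_data: bytes, max_size: int = 4096) -> set:
    """Learning phase: run LZW over training data, keeping the phrases."""
    phrases = {bytes([i]) for i in range(256)}   # seed with all single bytes
    current = b""
    for b in training_data:
        cand = current + bytes([b])
        if cand in phrases:
            current = cand
        else:
            if len(phrases) < max_size:
                phrases.add(cand)
            current = bytes([b])
    return phrases

def coding_cost(sample: bytes, phrases: set) -> int:
    """Static-mode LZW: greedily parse `sample` against a frozen
    dictionary and return the number of codewords emitted."""
    cost, i, n = 0, 0, len(sample)
    while i < n:
        j = i + 1
        while j < n and sample[i:j + 1] in phrases:
            j += 1                       # extend the longest known phrase
        cost += 1                        # sample[i:j] costs one codeword
        i = j
    return cost

def classify(sample: bytes, class_dicts: dict) -> str:
    """Assign the sample to the class whose dictionary codes it
    cheapest, i.e. gives the best coding rate."""
    return min(class_dicts, key=lambda c: coding_cost(sample, class_dicts[c]))
```

A hypothetical usage would be class_dicts = {"grass": build_dictionary(grass_pixels), "sand": build_dictionary(sand_pixels)} followed by classify(sample_pixels, class_dicts), with each texture image flattened to bytes row by row.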
|
3 |
Realization and Analysis of a Locally Progressive Image Encoding Method (Lokaliai progresyvaus vaizdų kodavimo metodo realizacija ir tyrimas). Kančelkis, Deividas (11 August 2008)
Digital images are widely used in computer applications. Uncompressed digital images require considerable storage capacity and transmission bandwidth, and efficient image compression solutions are becoming more critical with the recent growth of data-intensive, multimedia-based web applications. In this work, a novel locally progressive image encoding procedure is presented. The procedure exploits specific properties of Haar wavelets together with the EZW algorithm originally used for progressive image encoding. Properties of other discrete transforms, and areas of their practical applicability in industry and science, are reviewed as well. Preliminary experimental results show that the joint application of the discrete Haar transform and the EZW algorithm to locally progressive compression of digital images is not, by itself, effective. To increase the efficiency of the approach, targeted modification of the image's spectral coefficients was adopted; in particular, an appropriately chosen enlargement of the Haar spectral coefficients led to much better overall coding performance, albeit at the expense of extra computation time.
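For reference, a minimal sketch (assuming numpy and even image dimensions) of one level of the 2-D Haar transform is given below, together with a placeholder for the coefficient-enlargement step; the uniform scaling factor shown is only a stand-in, since the thesis's actual targeted modification rule is not specified in the abstract.

```python
import numpy as np

def haar2d_level(img: np.ndarray):
    """One level of the 2-D Haar transform: split into the approximation
    (LL) and detail (LH, HL, HH) subbands. Assumes even dimensions."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # horizontal average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal difference
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def enlarge_coefficients(band: np.ndarray, factor: float) -> np.ndarray:
    """Placeholder for the 'enlargement of Haar spectral coefficients'
    mentioned above: scaling a detail subband makes its coefficients
    cross EZW significance thresholds earlier. The uniform factor is an
    assumption; the thesis's targeted rule is not reproduced here."""
    return band * factor
```

In an EZW-style coder, coefficients are emitted in decreasing order of significance, so enlarging selected subbands changes how early their content appears in the progressive bit stream.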
|
4 |
AFDX Frame Flow Analysis in Reception and a Memory Optimization Method (Analyse de Flux de Trames AFDX en Réception et Méthode d’Optimisation Mémoire). Baga, Yohan (03 May 2018)
The rise of AFDX networks as a communication infrastructure between on-board equipment of civil aircraft motivates many research projects aimed at reducing communication delays while guaranteeing a high level of determinism and quality of service. This thesis deals with the effect of back-to-back frame reception on the receiving End System, in particular on its internal buffer, in order to guarantee that no frames are lost and that the memory is dimensioned optimally. A worst-case model of the frame flow is built first with a pessimistic method based on a periodic frame flow, and then with a more optimistic method based on reception intervals and iterative frame placement. A probabilistic study uses Gaussian distributions to evaluate the occurrence probabilities of the worst-case back-to-back bursts, shedding light on, and opening a discussion about, the relevance of relying solely on worst-case modeling to size the reception buffer. An additional memory gain can be obtained by applying LZW lossless compression.
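As a rough illustration of the pessimistic style of buffer sizing described above, the toy sketch below assumes every virtual link delivers one maximal frame back-to-back in a single burst; all parameters are hypothetical, and the thesis's iterative frame-placement model is far more precise.

```python
def worst_case_buffer(frame_sizes_bytes: list, drain_rate_bps: float,
                      link_rate_bps: float = 100e6) -> float:
    """Toy pessimistic bound, not the thesis's model: size the End
    System buffer for one simultaneous back-to-back burst of all frames,
    minus what the reader drains while the burst is being received."""
    burst = sum(frame_sizes_bytes)                 # every frame arrives at once
    burst_duration = burst * 8 / link_rate_bps     # seconds to clock burst in
    drained = drain_rate_bps / 8 * burst_duration  # bytes read out meanwhile
    return max(0.0, burst - drained)               # bytes the buffer must hold

# Hypothetical example: ten VLs with 1518-byte frames, reader at 50 Mbit/s.
print(worst_case_buffer([1518] * 10, drain_rate_bps=50e6))  # -> 7590.0
```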
|
5 |
Data Compression for use in the Short Messaging System (Datakompression för användning i Short Messaging Systemet). Andersson, Måns (January 2010)
Data compression is a vast subject with many different algorithms. No algorithm is good at every task, and this thesis takes a closer look at compression of small files in the range of 100-300 bytes, with the compressed output intended to be sent over the Short Messaging System (SMS). Several well-known algorithms are tested for compression ratio, and two of them, Algorithm Λ and Adaptive Arithmetic Coding, are chosen for closer study and implemented in Java. These implementations are then tested alongside the initially tested ones, and one algorithm is chosen to answer the question: "Which compression algorithm is best suited for compression of data for use in Short Messaging System messages?"
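Neither Algorithm Λ nor the thesis's adaptive arithmetic coder ships with a standard library, but the core difficulty they confront, per-stream overhead dominating at 100-300 bytes, can be demonstrated with zlib as a stand-in:

```python
import zlib

# A message in the 100-300 byte range the thesis targets.
msg = ("Meeting moved to 14:00 tomorrow, room B312. "
       "Bring the quarterly figures and the draft contract.").encode()

# Standard zlib stream: includes a 2-byte header and a 4-byte checksum.
full = zlib.compress(msg, 9)

# Raw DEFLATE (negative wbits drops the container), closer to what a
# byte-budgeted SMS payload would want.
co = zlib.compressobj(9, zlib.DEFLATED, -15)
raw = co.compress(msg) + co.flush()

# At these sizes the savings are marginal and every container byte
# matters, which is why algorithms must be compared on small inputs.
print(len(msg), len(full), len(raw))
```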
|
6 |
Compressed Pattern Matching for Text and Images. Tao, Tao (01 January 2005)
The amount of information we deal with today is being generated at an ever-increasing rate. On one hand, data compression is needed to store and organize data efficiently and to transport it over limited-bandwidth networks. On the other hand, efficient information retrieval is needed to find relevant information quickly in this huge mass of data using available resources. The compressed pattern matching problem can be stated as follows: given the compressed format of a text or an image and a pattern string or a pattern image, report the occurrence(s) of the pattern in the text or image with minimal (or no) decompression. The main advantages of compressed pattern matching over the naïve decompress-then-search approach are, first, reduced storage cost: since no or only minimal decompression is needed, disk and memory costs are reduced; and second, shorter search time: since the compressed data is smaller than the original, searching it directly takes less time.

The challenge of efficient compressed pattern matching has two inseparable aspects. First, to fully exploit the potential of compression for information retrieval systems, search-aware compression algorithms must be developed. Second, for data compressed with a particular technique, search-aware or not, efficient searching techniques must be developed, i.e. techniques that search the compressed data with no or minimal decompression and without too much extra cost.

Compressed pattern matching algorithms can be categorized as targeting either text compression or image compression. Although compressed pattern matching for text compression has been studied for several years and many publications are available in the literature, there is still room to improve efficiency in terms of both compression and searching; none of the search engines available today makes explicit use of compressed pattern matching. Compressed pattern matching for image compression, on the other hand, has been relatively unexplored, but it is getting more attention because lossless compression has become more important for the ever-increasing volume of medical images, satellite images, and aerospace photos that must be stored losslessly. Developing efficient information retrieval techniques for losslessly compressed data is therefore a fundamental research challenge.

In this dissertation, we have studied the compressed pattern matching problem for both text and images. We present a series of novel compressed pattern matching algorithms, divided into two major parts: the first for the popular LZW compression algorithm, the second for the current lossless image compression standard, JPEG-LS. Our contributions in the first part are:

1. We have developed an "almost-optimal" compressed pattern matching algorithm that reports all pattern occurrences. An earlier "almost-optimal" algorithm reported in the literature is only capable of detecting the first occurrence of the pattern, and its practical performance is unclear. We have implemented our algorithm and provide extensive experimental results measuring its speed. We also developed a faster implementation for so-called "simple patterns", i.e. patterns in which no symbol appears more than once; the algorithm takes advantage of this property and runs in optimal time.

2. We have developed a novel compressed pattern matching algorithm for multiple patterns using the Aho-Corasick algorithm. The algorithm takes O(mt + n + r) time with O(mt) extra space, where n is the size of the compressed file, m is the total size of all patterns, t is the size of the LZW trie, and r is the number of occurrences of the patterns. The algorithm is particularly efficient for archival search when the archives are compressed with a common LZW trie.

All the above algorithms have been implemented, and extensive experiments have been conducted to test their performance and compare them with the best existing algorithms. The results show that our multiple-pattern algorithm is competitive among the best algorithms and is in practice the fastest of all approaches when the number of patterns is not very large, making it preferable for general string matching applications. LZW is one of the most efficient and popular compression algorithms in extensive use, and both of our algorithms require no modification of the compression algorithm; our work therefore has great economic and market potential.

Our contributions in the second part are:

1. We have developed a new global-context variation of the JPEG-LS compression algorithm and a corresponding compressed pattern matching algorithm. Compared to the original JPEG-LS, the global-context variation is search-aware and has faster encoding and decoding speeds. The searching algorithm based on this variation requires partial decompression of the compressed image; the experimental results show that it improves search speed by about 30% compared to the decompress-then-search approach. To the best of our knowledge, this is the first two-dimensional compressed pattern matching work for the JPEG-LS standard.

2. We have developed a two-pass variation of the JPEG-LS algorithm and a corresponding compressed pattern matching algorithm. The two-pass variation achieves search-awareness through a common compression technique called a semi-static dictionary. Compared to the original algorithm, the new algorithm compresses equally well, but encoding takes slightly longer. The searching algorithm based on the two-pass variation requires no decompression at all and therefore works in the fully compressed domain. It runs in time O(nc + mc + nm + m^2) with extra space O(n + m + mc), where n is the number of columns of the image, m is the number of rows and columns of the pattern, nc is the compressed image size, and mc is the compressed pattern size. This is the first known two-dimensional algorithm that works in the fully compressed domain.
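The dissertation's almost-optimal and Aho-Corasick-based algorithms are substantially more refined than anything that fits here, but the basic appeal of working in the compressed domain, namely never materializing the full text, can be illustrated with a simple sketch: decode LZW tokens one at a time and feed each recovered byte into a KMP automaton, so memory stays bounded by the dictionary plus the pattern. This is an assumed illustration, not the dissertation's method.

```python
def kmp_failure(pattern: bytes) -> list:
    """Standard KMP failure function."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def search_lzw(codes, pattern: bytes):
    """Yield end positions of `pattern` in the text encoded by `codes`
    (output of a standard, uncapped byte-oriented LZW encoder), without
    ever holding the decoded text in memory."""
    fail = kmp_failure(pattern)
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code, prev = 256, b""
    state = pos = 0
    for code in codes:
        if code in dictionary:
            entry = dictionary[code]
        else:                        # the cScSc corner case of LZW decoding
            entry = prev + prev[:1]
        if prev:
            dictionary[next_code] = prev + entry[:1]
            next_code += 1
        prev = entry
        for ch in entry:             # stream decoded bytes into the automaton
            while state and ch != pattern[state]:
                state = fail[state - 1]
            if ch == pattern[state]:
                state += 1
            pos += 1
            if state == len(pattern):
                yield pos            # match ends at byte `pos` of the text
                state = fail[state - 1]
```

Note that this streaming search still spends time proportional to the decompressed length; algorithms like those in the dissertation gain their advantage precisely by working on the LZW trie structure rather than on every decoded byte.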
|
7 |
Contributions to Data Compression (Contributions à la compression de données). Pigeon, Steven (January 2001)
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
|
8 |
Lossless Message Compression. Hansson, Erik; Karlsson, Stefan (January 2013)
In this thesis we investigated whether using compression when sending inter-process communication (IPC) messages can be beneficial. A literature study on lossless compression resulted in a compilation of algorithms and techniques. From this compilation, the algorithms LZO, LZFX, LZW, LZMA, bzip2, and LZ4 were selected for integration into LINX as an extra layer to support lossless message compression. The testing involved sending messages containing real telecom data between two nodes on a dedicated network, with different network configurations and message sizes. To calculate the effective throughput for each algorithm, the round-trip time was measured. We concluded that the fastest algorithms, i.e. LZ4, LZO, and LZFX, were the most efficient in our tests.
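A sketch of the measurement arithmetic described above: effective throughput counts original (uncompressed) bytes per second over the measured round trip, so both the codec's speed and its ratio influence the result. zlib stands in for the listed codecs, and `send_and_echo` is a hypothetical blocking transport call.

```python
import time
import zlib  # stand-in; the thesis tests LZO, LZFX, LZW, LZMA, bzip2, LZ4

def effective_throughput(payload: bytes, send_and_echo) -> float:
    """Return effective throughput in uncompressed bytes per second.
    `send_and_echo` is a hypothetical callable that transmits bytes to
    the peer and blocks until the echoed reply arrives."""
    t0 = time.perf_counter()
    wire = zlib.compress(payload)      # compression time counts too
    send_and_echo(wire)
    rtt = time.perf_counter() - t0
    return 2 * len(payload) / rtt      # payload crosses the link twice
```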
|
9 |
Analysis of the subsequence composition of biosequences. Cunial, Fabio (07 May 2012)
Measuring the amount of information and of shared information in biological strings, as well as relating information to structure, function, and evolution, is a fundamental computational problem in the post-genomic era. Classical analyses of the information content of biosequences are grounded in Shannon's statistical telecommunication theory, while the recent focus is on suitable specializations of the notions introduced by Kolmogorov, Chaitin, and Solomonoff, based on data compression and compositional redundancy. Symmetrically, classical estimates of mutual information based on string editing are currently being supplanted by compositional methods hinged on the distribution of controlled substructures.
Current compositional analyses and comparisons of biological strings are almost exclusively limited to short sequences of contiguous solid characters. Comparatively little is known about longer and sparser components, both from the point of view of their effectiveness in measuring information and in separating biological strings from random strings, and from the point of view of their ability to classify and to reconstruct phylogenies. Yet, sparse structures are suspected to grasp long-range correlations and, at short range, they are known to encode signatures and motifs that characterize molecular families.
In this thesis, we introduce and study compositional measures based on the repertoire of distinct subsequences of any length, constrained to occur with a predefined maximum gap between consecutive symbols. Such measures highlight previously unknown laws that relate subsequence abundance to string length and to the allowed gap, across a range of structurally and functionally diverse polypeptides. Measures on subsequences are capable of separating only a few amino acid strings from their random permutations, but they reveal that the random permutations themselves amass along previously undetected linear loci. This is perhaps the first time that the vocabulary of all distinct subsequences of a set of structurally and functionally diverse polypeptides has been systematically counted and analyzed.
Another objective of this thesis is measuring the quality of phylogenies based on the composition of sparse structures. Specifically, we use a set of repetitive gapped patterns, called motifs, whose length and sparsity have never been considered before. We find that extremely sparse motifs in mitochondrial proteomes support phylogenies of comparable quality to state-of-the-art string-based algorithms. Moving from maximal motifs -- motifs that cannot be made more specific without losing support -- to a set of generators with decreasing size and redundancy, generally degrades classification, suggesting that redundancy itself is a key factor for the efficient reconstruction of phylogenies. This is perhaps the first time in which the composition of all motifs of a proteome is systematically used in phylogeny reconstruction on a large scale.
Extracting all maximal motifs, or even their compact generators, is infeasible for entire genomes. In the last part of this thesis, we study the robustness of similarity measures built around the dictionary of LZW -- the variant of the LZ78 compression algorithm proposed by Welch -- and of some of its recently introduced gapped variants. These algorithms use a very small vocabulary, they run in time linear in the input strings, and in practice they can be made even faster than LZ77. We find that dissimilarity measures based on maximal strings in the dictionary of LZW support phylogenies that are comparable to state-of-the-art methods on test proteomes. Introducing a controlled proportion of gaps does not degrade classification, and allows up to 20% of each input proteome to be discarded during comparison.
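As a minimal illustration of a dictionary-based dissimilarity (not the maximal-string measure studied in the thesis, and without the gapped variants), two sequences can be compared through the phrase sets their LZW parses generate, using a Jaccard-style distance; the function names are assumptions.

```python
def lzw_phrases(s: bytes) -> set:
    """Multi-byte phrases inserted into the LZW dictionary while
    parsing `s` (single bytes are omitted since all strings share them)."""
    seen = {bytes([i]) for i in range(256)}
    phrases, current = set(), b""
    for b in s:
        cand = current + bytes([b])
        if cand in seen:
            current = cand
        else:
            seen.add(cand)
            phrases.add(cand)
            current = bytes([b])
    return phrases

def lzw_dissimilarity(x: bytes, y: bytes) -> float:
    """Jaccard-style distance between the LZW phrase sets of x and y:
    0.0 for identical sets, 1.0 for disjoint sets."""
    px, py = lzw_phrases(x), lzw_phrases(y)
    union = len(px | py)
    return 1.0 - len(px & py) / union if union else 0.0
```

Pairwise distances of this kind can feed a standard neighbor-joining step to build the kind of proteome phylogenies discussed above.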
|