81
Compressed Pattern Matching For Text And Images. Tao, Tao. 01 January 2005
The amount of information we deal with today is generated at an ever-increasing rate. On one hand, data compression is needed to store and organize the data efficiently and to transport it over limited-bandwidth networks. On the other hand, efficient information retrieval is needed to find the relevant information quickly in this huge mass of data using available resources. The compressed pattern matching problem can be stated as: given the compressed format of a text or an image and a pattern string or a pattern image, report the occurrence(s) of the pattern in the text or image with minimal (or no) decompression. The main advantages of compressed pattern matching over the naïve decompress-then-search approach are twofold. First, reduced storage cost: since little or no decompression is needed, disk space and memory costs are reduced. Second, shorter search time: since the compressed data is smaller than the original data, a search performed on it takes less time.

The challenge of efficient compressed pattern matching can be met from two inseparable aspects. First, to exploit the full potential of compression for information retrieval systems, search-aware compression algorithms need to be developed. Second, for data compressed with a particular technique, regardless of whether the compression is search-aware, efficient searching techniques are needed; that is, techniques must be developed to search the compressed data with little or no decompression and without excessive extra cost.

Compressed pattern matching algorithms can be categorized as targeting either text compression or image compression. Although compressed pattern matching for text compression has been studied for several years and many publications are available in the literature, there is still room to improve efficiency in terms of both compression and searching, and none of the search engines available today makes explicit use of compressed pattern matching. Compressed pattern matching for image compression, on the other hand, has been relatively unexplored. It is receiving more attention, however, because lossless compression has become increasingly important for the ever-growing volume of medical images, satellite images and aerospace photographs that must be stored without loss. Developing efficient information retrieval techniques for losslessly compressed data is therefore a fundamental research challenge.

In this dissertation, we study the compressed pattern matching problem for both text and images. We present a series of novel compressed pattern matching algorithms, divided into two major parts: the first targets the popular LZW compression algorithm, and the second targets the current lossless image compression standard, JPEG-LS. Specifically, our contributions from the first major work are:

1. We have developed an "almost-optimal" compressed pattern matching algorithm that reports all pattern occurrences. An earlier "almost-optimal" algorithm reported in the literature is only capable of detecting the first occurrence of the pattern, and its practical performance is not clear. We have implemented our algorithm and provide extensive experimental results measuring its speed.
We also developed a faster implementation for so-called "simple patterns", patterns in which no symbol appears more than once. The algorithm takes advantage of this property and runs in optimal time.

2. We have developed a novel compressed pattern matching algorithm for multiple patterns using the Aho-Corasick algorithm. The algorithm takes O(mt + n + r) time with O(mt) extra space, where n is the size of the compressed file, m is the total size of all patterns, t is the size of the LZW trie and r is the number of occurrences of the patterns. The algorithm is particularly efficient for archival search when the archives are compressed with a common LZW trie.

All of the above algorithms have been implemented, and extensive experiments have been conducted to test their performance and to compare them with the best existing algorithms. The experimental results show that our compressed pattern matching algorithm for multiple patterns is competitive with the best algorithms and is in practice the fastest among all approaches when the number of patterns is not very large. Our algorithm is therefore preferable for general string matching applications. LZW is one of the most efficient and popular compression algorithms in widespread use, and both of our algorithms require no modification of the compression algorithm; our work therefore has great economic and market potential.

Our contributions from the second major work are:

1. We have developed a new global-context variation of the JPEG-LS compression algorithm and a corresponding compressed pattern matching algorithm. Compared to the original JPEG-LS, the global-context variation is search-aware and has faster encoding and decoding speeds. The searching algorithm based on the global-context variation requires partial decompression of the compressed image. The experimental results show that it improves search speed by about 30% compared to the decompress-then-search approach. To the best of our knowledge, this is the first two-dimensional compressed pattern matching work for the JPEG-LS standard.

2. We have developed a two-pass variation of the JPEG-LS algorithm and a corresponding compressed pattern matching algorithm. The two-pass variation achieves search-awareness through a common compression technique, the semi-static dictionary. Compared to the original algorithm, the new algorithm compresses equally well but encoding takes slightly longer. The searching algorithm based on the two-pass variation requires no decompression at all and therefore works in the fully compressed domain. It runs in time O(nc + mc + nm + m^2) with extra space O(n + m + mc), where n is the number of columns of the image, m is the number of rows and columns of the pattern, nc is the compressed image size and mc is the compressed pattern size. The algorithm is the first known two-dimensional algorithm that works in the fully compressed domain.
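To make the data flow of the multi-pattern case concrete, here is a minimal Python sketch (not the dissertation's O(mt + n + r) algorithm): it rebuilds the LZW dictionary exactly as a decoder would and pushes each decoded phrase through a small Aho-Corasick automaton. Because it still visits every decoded character, it only illustrates how matching can be interleaved with LZW decoding, not the compressed-domain speed-up; the toy encoder and example strings are assumptions for illustration.

```python
# Minimal sketch: multi-pattern search interleaved with LZW decoding.
# Illustrative only; it touches every decoded character, so it does not
# reproduce the compressed-domain complexity of the dissertation's algorithm.
from collections import deque

def lzw_compress(text):
    """Toy LZW encoder; returns a list of integer codes."""
    table = {chr(i): i for i in range(256)}
    w, codes = "", []
    for ch in text:
        wc = w + ch
        if wc in table:
            w = wc
        else:
            codes.append(table[w])
            table[wc] = len(table)
            w = ch
    if w:
        codes.append(table[w])
    return codes

def build_aho_corasick(patterns):
    """Return goto/fail/output tables for the pattern set."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        node = 0
        for ch in p:
            if ch not in goto[node]:
                goto[node][ch] = len(goto)
                goto.append({})
                fail.append(0)
                out.append(set())
            node = goto[node][ch]
        out[node].add(p)
    queue = deque(goto[0].values())          # depth-1 states keep fail = 0
    while queue:
        node = queue.popleft()
        for ch, child in goto[node].items():
            queue.append(child)
            f = fail[node]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[child] = goto[f].get(ch, 0)
            out[child] |= out[fail[child]]
    return goto, fail, out

def search_lzw(codes, patterns):
    """Decode LZW phrase by phrase, reporting (position, pattern) occurrences."""
    goto, fail, out = build_aho_corasick(patterns)
    phrases = {i: chr(i) for i in range(256)}
    state, pos, prev, hits = 0, 0, None, []
    for code in codes:
        phrase = phrases[code] if code in phrases else prev + prev[0]
        if prev is not None:                  # grow the dictionary like a decoder
            phrases[len(phrases)] = prev + phrase[0]
        for ch in phrase:                     # feed the phrase to the automaton
            while state and ch not in goto[state]:
                state = fail[state]
            state = goto[state].get(ch, 0)
            for p in out[state]:
                hits.append((pos - len(p) + 1, p))
            pos += 1
        prev = phrase
    return hits

codes = lzw_compress("abracadabra abracadabra")
print(search_lzw(codes, ["abra", "cad"]))
```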
82
The evaluation of chest images compressed with JPEG and wavelet techniques. Wen, Cathlyn Y. 22 August 2008
Image compression reduces the amount of space necessary to store digital images and allows quick transmission of images to other hospitals, departments, or clinics. However, the degradation of image quality due to compression may not be acceptable to radiologists or it may affect diagnostic results. A preliminary study was conducted using several chest images with common lung diseases and compressed with JPEG and wavelet techniques at various ratios. Twelve board-certified radiologists were recruited to perform two types of experiments.
In the first part of the experiment, radiologists rated the presence of lung disease, their confidence in that judgment, the severity of the disease, their confidence in the severity rating, and the difficulty of making a diagnosis. The six images presented were either uncompressed or compressed at 32:1 or 48:1 compression ratios.
In the second part of the experiment, radiologists were asked to make subjective ratings by comparing the image quality of the uncompressed version of an image with the compressed version of the same image, and judging the acceptability of the compressed image for diagnosis. The second part examined a finer range of compression ratios (8:1, 16:1, 24:1, 32:1, 44:1, and 48:1).
In all cases, radiologists were able to judge the presence of lung disease and experienced little difficulty diagnosing the images. Image degradation perceptibility increased as the compression ratio increased; however, among the levels of compression ratio tested, the quality of compressed images was judged to be only slightly worse than the original image. At higher compression ratios, JPEG images were judged to be less acceptable than wavelet-based images but radiologists believed that all the images were still acceptable for diagnosis.
These results should be interpreted carefully because only six original images were tested, but they indicate that compression ratios of up to 48:1 are acceptable using the two medically optimized compression methods, JPEG and wavelet techniques. / Master of Science
83
Métriques perceptuelles pour la compression d'images : étude et comparaison des algorithmes JPEG et JPEG2000 / Perceptual metrics for image compression: a study and comparison of the JPEG and JPEG2000 algorithms. Brunet, Dominique. 13 April 2018
Les algorithmes de compression d'images JPEG et JPEG2000 sont présentés, puis comparés grâce à une métrique perceptuelle. L'algorithme JPEG décompose une image par la transformée en cosinus discrète, approxime les coefficients transformés par une quantification uniforme et encode le résultat par l'algorithme de Huffman. Pour l'algorithme JPEG2000, on utilise une transformée en ondelettes décomposant une image en plusieurs résolutions. On décrit et justifie la construction d'ondelettes orthogonales ou biorthogonales ayant le maximum de propriétés parmi les suivantes: valeurs réelles, support compact, plusieurs moments, régularité et symétrie. Ensuite, on explique sommairement le fonctionnement de l'algorithme JPEG2000, puis on montre que la métrique RMSE n'est pas bonne pour mesurer l'erreur perceptuelle. On présente donc quelques idées pour la construction d'une métrique perceptuelle se basant sur le fonctionnement du système de vision humain, décrivant en particulier la métrique SSIM. On utilise finalement cette dernière métrique pour conclure que JPEG2000 fait mieux que JPEG. / In the present work we describe two image compression algorithms, JPEG and JPEG2000, and then compare them using a perceptual metric. The JPEG algorithm decomposes an image with the discrete cosine transform; the transformed coefficients are then quantized uniformly and encoded with a Huffman code. The JPEG2000 algorithm instead uses a wavelet transform to decompose an image into multiple resolutions. We describe a few properties of wavelets and show their utility in image compression; these properties include orthogonality or biorthogonality, real values, compact support, number of moments, regularity and symmetry. We then briefly explain how JPEG2000 works and argue that the RMSE is clearly not a good perceptual metric. We therefore present some ideas for constructing a perceptual metric based on a model of the human visual system, describing in particular the SSIM index, and suggest it as a tool to evaluate image quality. Finally, using the SSIM metric, we conclude that JPEG2000 outperforms JPEG.
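As a concrete illustration of why a perceptual metric is preferred over RMSE, the sketch below computes a simplified, single-window SSIM (no sliding Gaussian window, unlike the original SSIM definition) next to RMSE on a toy brightness-shifted image; the constants, image size and the global-window simplification are assumptions made for brevity.

```python
import numpy as np

def rmse(x, y):
    d = x.astype(np.float64) - y.astype(np.float64)
    return float(np.sqrt(np.mean(d ** 2)))

def ssim_global(x, y, data_range=255.0):
    """Simplified single-window SSIM between two equally sized grayscale images."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# A brightness shift degrades RMSE noticeably but leaves the structure,
# and hence SSIM, almost untouched.
img = np.tile(np.arange(256, dtype=np.float64), (64, 1))
shifted = np.clip(img + 20.0, 0, 255)
print("RMSE:", round(rmse(img, shifted), 2), "SSIM:", round(ssim_global(img, shifted), 4))
```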
84
Exploring JPEG File Containers Without Metadata: A Machine Learning Approach for Encoder Classification. Iko Mattsson, Mattias; Wagner, Raya. January 2024
This thesis explores a method for identifying JPEG encoders without relying on metadata by analyzing characteristics inherent to the JPEG file format itself. The approach uses machine learning to differentiate encoders based on features such as quantization tables, Huffman tables, and marker sequences. These features are extracted from the file container and analyzed to identify the source encoder. The random forest classification algorithm was applied to test the efficacy of the approach across different datasets, aiming to validate the model's performance and reliability. The results confirm the model's capability to identify JPEG source encoders, providing a useful approach for digital forensic investigations.
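The kind of feature extraction the thesis describes can be sketched as follows: the helper walks the JPEG marker structure and pulls the raw quantization tables out of the DQT (0xFFDB) segments, and a scikit-learn random forest is then fit on those vectors. The file paths, labels and the restriction to quantization tables alone are illustrative assumptions; the thesis also uses Huffman tables and marker sequences as features.

```python
import struct
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def jpeg_quant_tables(path):
    """Return a fixed-length feature vector of quantization tables from a JPEG's DQT segments."""
    tables = {}
    with open(path, "rb") as f:
        data = f.read()
    i = 2                                   # skip SOI (FF D8)
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:                  # SOS: entropy-coded data follows, stop here
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xDB:                  # DQT segment, may contain several tables
            seg = data[i + 4:i + 2 + length]
            j = 0
            while j < len(seg):
                precision, table_id = seg[j] >> 4, seg[j] & 0x0F
                size = 128 if precision else 64
                raw = seg[j + 1:j + 1 + size]
                if precision:
                    raw = struct.unpack(">64H", raw)
                tables[table_id] = list(raw)
                j += 1 + size
        i += 2 + length
    # Flatten luma + chroma tables into one 128-entry feature vector.
    feat = tables.get(0, [0] * 64) + tables.get(1, [0] * 64)
    return np.array(feat, dtype=float)

# Hypothetical labelled corpus: placeholder paths and encoder labels.
files = ["cam_a/img001.jpg", "phone_b/pic.jpg"]
labels = ["libjpeg-turbo", "phone_vendor_x"]
X = np.stack([jpeg_quant_tables(p) for p in files])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict(X[:1]))
```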
85
Méthodes de transmission d'images optimisées utilisant des techniques de communication numériques avancées pour les systèmes multi-antennes / Optimized image transmission methods using advanced digital communication techniques for multi-antenna systems. Mhamdi, Maroua. 12 October 2017
Cette thèse est consacrée à l'amélioration des performances de codage/décodage de systèmes de transmission d'images fixes sur des canaux bruités et réalistes. Nous proposons, à cet effet, le développement de méthodes de transmission d'images optimisées en se focalisant sur les deux couches application et physique des réseaux sans fil. Au niveau de la couche application et afin d'assurer une bonne qualité de service, on utilise des algorithmes de compression efficaces permettant au récepteur de reconstruire l'image avec un maximum de fidélité (JPEG2000 et JPWL). Afin d'assurer une transmission sur des canaux sans fil avec un minimum de TEB à la réception, des techniques de transmission, de codage et de modulation avancées sont utilisées au niveau de la couche physique (système MIMO-OFDM, modulation adaptative, CCE, etc). Dans un premier temps, nous proposons un système de transmission robuste d'images codées JPWL intégrant un schéma de décodage conjoint source-canal basé sur des techniques de décodage à entrées pondérées. On considère, ensuite, l'optimisation d'une chaîne de transmission d'images sur un canal MIMO-OFDM sans fil réaliste. La stratégie de transmission d'images optimisée s'appuie sur des techniques de décodage à entrées pondérées et une approche d'adaptation de lien. Ainsi, le schéma de transmission proposé offre la possibilité de mettre en oeuvre conjointement de l'UEP, de l'UPA, de la modulation adaptative, du codage de source adaptatif et de décodage conjoint pour améliorer la qualité de l'image à la réception. Dans une seconde partie, nous proposons un système robuste de transmission de flux progressifs basé sur le principe de turbo décodage itératif de codes concaténés offrant une stratégie de protection inégale de données. Ainsi, l'originalité de cette étude consiste à proposer des solutions performantes d'optimisation globale d'une chaîne de communication numérique pour améliorer la qualité de transmission. / This work is devoted to improving the coding/decoding performance of still-image transmission over noisy, realistic channels. For this purpose, we develop optimized image transmission methods focusing on both the application and physical layers of wireless networks. To ensure good quality of service, efficient compression algorithms (JPEG2000 and JPWL) are used at the application layer, enabling the receiver to reconstruct the images with maximum fidelity. To ensure transmission over wireless channels with a minimum BER at reception, advanced transmission, coding and modulation techniques are used at the physical layer (MIMO-OFDM system, adaptive modulation, FEC, etc.). First, we propose a robust transmission system for JPWL-encoded images integrating a joint source-channel decoding scheme based on soft-input decoding techniques. Next, we consider the optimization of an image transmission chain over a realistic MIMO-OFDM wireless channel. The optimized image transmission strategy relies on soft-input decoding techniques and a link adaptation approach. The proposed transmission scheme makes it possible to jointly implement UEP, UPA, adaptive modulation, adaptive source coding and joint decoding in order to improve the visual quality of the image at the receiver. In a second part, we propose a robust transmission system for embedded bit streams based on a concatenated block coding mechanism offering an unequal error protection strategy. The novelty of this study thus lies in proposing efficient solutions for the global optimization of a wireless communication system to improve transmission quality.
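One of the ingredients mentioned above, unequal error protection, can be illustrated with a toy allocator: the most important portions of the JPEG2000/JPWL stream get the lowest-rate (strongest) channel codes as long as the total coded size stays within a budget. The layer sizes, importance weights and code-rate set are invented for illustration, and the greedy loop stands in for the joint optimization with UPA, adaptive modulation and soft-input decoding described in the thesis.

```python
# Toy unequal-error-protection (UEP) allocation: earlier JPEG2000 layers
# (headers, base layer) receive stronger channel codes under a rate budget.
CODE_RATES = [1/3, 1/2, 2/3, 5/6]          # available FEC rates, strongest first

def uep_allocate(layer_sizes_bits, importance, budget_bits):
    """Greedy allocation: the most important layers get the lowest-rate codes."""
    order = sorted(range(len(layer_sizes_bits)), key=lambda i: -importance[i])
    rates = [CODE_RATES[-1]] * len(layer_sizes_bits)   # start at the weakest code

    def total(r):
        return sum(size / r[i] for i, size in enumerate(layer_sizes_bits))

    for i in order:                          # strengthen layers in importance order
        for rate in CODE_RATES:              # try the strongest code first
            old = rates[i]
            rates[i] = rate
            if total(rates) <= budget_bits:
                break
            rates[i] = old                   # too expensive, back off
    return rates

layers = [2000, 8000, 16000, 32000]          # bits per quality layer (toy values)
importance = [4, 3, 2, 1]                    # headers / base layer first
print(uep_allocate(layers, importance, budget_bits=90000))
```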
86
Aproximações para DCT via pruning com aplicações em codificação de imagem e vídeo / DCT approximations via pruning with applications in image and video coding. COUTINHO, Vítor de Andrade. 23 February 2015
CNPq / O presente trabalho aborda o desenvolvimento de aproximações para a transformada discreta do cosseno (DCT) utilizando a abordagem pruning. Devido à propriedade da compactação de energia, a DCT é empregada em diversas aplicações de compressão de dados. Embora algoritmos rápidos permitam computar a DCT eficientemente, operações de multiplicação são inevitáveis. Devido à crescente demanda por métodos de baixo consumo energético, novos algoritmos de custo computacional reduzido são necessários. Neste contexto, aproximações para a DCT foram propostas nos últimos anos. Tais aproximações permitem algoritmos livres de multiplicação, sem a necessidade de operações de ponto flutuante, mantendo o desempenho de compressão comparável ao fornecido por métodos baseados na DCT. Uma abordagem adicional para reduzir o custo computacional da DCT é a utilização de pruning. Tal técnica consiste em não considerar coeficientes dos vetores de entrada e/ou saída que apresentam menor relevância em termos de energia concentrada. No caso da DCT, esses coeficientes são os termos de mais alta frequência do vetor transformado. A aplicação de pruning a aproximações para a DCT é uma área pouco explorada. O objetivo deste trabalho é aplicar a técnica a diferentes métodos aproximados para a DCT. As transformações resultantes foram aplicadas no contexto de compressão de imagem e vídeo e os resultados mostraram desempenho comparável ao de métodos exatos a um custo computacional bastante reduzido. Uma generalização do conceito é apresentada, assim como uma análise da complexidade aritmética. / This work introduces approximate discrete cosine transforms (DCT) based on the pruning approach. Due to the energy compaction property, the DCT is employed in several data compression applications. Although fast algorithms allow an efficient DCT computation, multiplication operations are inevitable. Due to the increasing demand for energy-efficient methods, new algorithms with reduced computational cost are required. In this context, DCT approximations have been proposed recently. Such approximations allow multiplication-free algorithms which avoid floating-point operations while maintaining competitive performance. A further approach to reduce the computational cost of the DCT is pruning. The technique consists of discarding input and/or output vector coefficients which are regarded as less significant; in the case of the DCT, such coefficients are the output coefficients associated with higher-frequency terms. Pruned DCT approximations are a relatively unexplored field of research. The objective of this work is to combine approximations and pruning to derive extremely low-complexity DCT approximations. The resulting methods were applied in the image and video compression scenario, and the results showed comparable performance with exact methods at a much lower computational complexity. A qualitative and quantitative comparison with a comprehensive list of existing methods is presented, as is a generalization of the pruning concept.
87
Stratégie de codage conjoint pour la transmission d'images dans un système MIMO / Joint coding strategy for image transmission over MIMO system. Abot, Julien. 03 December 2012
Ce travail de thèse présente une stratégie de transmission exploitant la diversité spatiale pour la transmission d'images sur canal sans fil. On propose ainsi une approche originale mettant en correspondance la hiérarchie de la source avec celle des sous-canaux SISO issus de la décomposition d'un canal MIMO. On évalue les performances des précodeurs usuels dans le cadre de cette stratégie via une couche physique réaliste, respectant la norme IEEE802.11n, et associé à un canal de transmission basé sur un modèle de propagation à tracé de rayons 3D. On montre ainsi que les précodeurs usuels sont mal adaptés pour la transmission d'un contenu hiérarchisé. On propose alors un algorithme de précodage allouant successivement la puissance sur les sous-canaux SISO afin de maximiser la qualité des images reçues. Le précodeur proposé permet d'atteindre un TEB cible compte tenu du codage canal, de la modulation et du SNR des sous-canaux SISO. A partir de cet algorithme de précodage, on propose une solution d'adaptation de lien permettant de régler dynamiquement les paramètres de la chaîne en fonction des variations sur le canal de transmission. Cette solution détermine la configuration de codage/transmission maximisant la qualité de l'image en réception. Enfin, on présente une étude sur la prise en compte de contraintes psychovisuelles dans l'appréciation de la qualité des images reçues. On propose ainsi l'intégration d'une métrique à référence réduite basée sur des contraintes psychovisuelles permettant d'assister le décodeur vers la configuration de décodage offrant la meilleure qualité d'expérience. Des tests subjectifs confirment l'intérêt de l'approche proposée. / This thesis presents a transmission strategy that exploits spatial diversity for image transmission over wireless channels. We propose an original approach that matches the source hierarchy to the hierarchy of the SISO subchannels resulting from the MIMO channel decomposition. We evaluate the performance of common precoders under this strategy using a realistic physical layer compliant with the IEEE 802.11n standard and a transmission channel based on a 3D ray-tracing propagation model. We show that common precoders are poorly suited to transmitting hierarchical content. We then propose a precoding algorithm that successively allocates power over the SISO subchannels in order to maximize the quality of the received images. The proposed precoder achieves a target BER given the channel coding, the modulation and the SNR of the SISO subchannels. Building on this precoding algorithm, we propose a link adaptation scheme that dynamically adjusts the system parameters according to variations of the transmission channel; this solution determines the coding/transmission configuration that maximizes the image quality at the receiver. Finally, we study how to take psychovisual constraints into account when assessing the quality of the received images. We propose inserting a reduced-reference metric based on psychovisual constraints to guide the decoder toward the decoding configuration offering the best quality of experience. Subjective tests confirm the interest of the proposed approach.
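The hierarchy-to-subchannel mapping described above can be pictured with a short sketch: an SVD turns the MIMO channel into parallel SISO subchannels whose gains are the singular values, and power is then granted subchannel by subchannel, strongest first, until an SNR target is met, so the most important source layer rides the best subchannel. The Rayleigh channel model, the single SNR target and the greedy loop are simplifications and do not reproduce the precoder proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
nt = nr = 4
# Toy Rayleigh MIMO channel matrix (assumption; the thesis uses a 3D ray-traced channel).
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)

# SVD turns the MIMO channel into parallel SISO subchannels with gains s[k].
U, s, Vh = np.linalg.svd(H)
noise_power = 1.0
total_power = 10.0
target_snr = 8.0                     # linear SNR required by the chosen modulation/FEC

# Allocate power to subchannels in decreasing gain order: the strongest
# subchannel would carry the most important source layer (headers/base layer).
alloc = np.zeros_like(s)
remaining = total_power
for k in np.argsort(-s):
    need = target_snr * noise_power / s[k] ** 2
    if need <= remaining:
        alloc[k] = need
        remaining -= need

snr = alloc * s ** 2 / noise_power
print("subchannel gains:", np.round(s, 2))
print("power per subchannel:", np.round(alloc, 2))
print("achieved SNR:", np.round(snr, 2))
```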
88
Time Stamp Synchronization in Video Systems. Yang, Hsueh-szu; Kupferschmidt, Benjamin. 10 1900
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / Synchronized video is crucial for data acquisition and telecommunication applications. For real-time applications, out-of-sync video may cause jitter, choppiness and latency. For data analysis, it is important to synchronize multiple video channels and data that are acquired from PCM, MIL-STD-1553 and other sources. Nowadays, video codecs can be easily obtained to play most types of video. However, a great deal of effort is still required to develop the synchronization methods that are used in a data acquisition system. This paper will describe several methods that TTC has adopted in our system to improve the synchronization of multiple data sources.
89
Applying the MDCT to image compression. Muller, Rikus. 03 1900
Thesis (DSc (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2009. / The replacement of the standard discrete cosine transform (DCT) of JPEG with the windowed modified DCT (MDCT) is investigated to determine whether improvements in numerical quality can be achieved. To this end, we employ an existing algorithm for optimal quantisation, for which we also propose improvements. This involves the modelling and prediction of quantisation tables to initialise the algorithm, a strategy that is also thoroughly tested. Furthermore, the effects of various window functions on the coding results are investigated, and we find that improved quality can indeed be achieved by modifying JPEG in this fashion.
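For reference, the forward MDCT that replaces the block DCT can be written straight from its definition, as in the sketch below. A sine window is used here because it satisfies the Princen-Bradley condition required for perfect reconstruction with 50% overlapped blocks; the window functions actually compared in the thesis may differ.

```python
import numpy as np

def mdct(block, window=None):
    """Forward MDCT of a length-2N block, returning N coefficients."""
    n2 = len(block)
    n = n2 // 2
    if window is None:
        # Sine window: satisfies the Princen-Bradley condition for 50% overlap.
        window = np.sin(np.pi / n2 * (np.arange(n2) + 0.5))
    ns = np.arange(n2)
    ks = np.arange(n)
    basis = np.cos(np.pi / n * (ns[None, :] + 0.5 + n / 2) * (ks[:, None] + 0.5))
    return basis @ (window * block)

# Overlapped analysis of a toy signal: consecutive length-16 blocks hop by 8 samples.
signal = np.sin(2 * np.pi * np.arange(64) / 16.0)
coeffs = [mdct(signal[i:i + 16]) for i in range(0, len(signal) - 8, 8)]
print(np.round(coeffs[0], 3))
```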
90
Mirror Images: Penelope Umbrico’s Mirrors (from Home Décor Catalogs and Websites). Ambrosio, Jeanie. 15 November 2018
As the artwork’s title suggests, Penelope Umbrico’s "Mirrors (from Home Décor Catalogs and Websites)" (2001-2011), are photographs of mirrors that Umbrico has appropriated from print and web based home décor advertisements like those from Pottery Barn or West Elm. The mirrors in these advertisements reflect the photo shoot constructed for the ad, often showing plants or light filled windows empty of people. To print the "Mirrors," Umbrico first applies a layer of white-out to everything in the advertisement except for the mirror and then scans the home décor catalog. In the case of the web-based portion of the series, she removes the advertising space digitally through photo editing software. Once the mirror has been singled out and made digital, Umbrico then adjusts the perspective of the mirror so that it faces the viewer. Finally, she scales the photograph of the mirror cut from the advertisement to the size and shape of the actual mirror for sale. By enlarging the photograph, she must increase the file size and subsequent print significantly, which distorts the final printed image thereby causing pixelation, otherwise known as “compression artifacts.” Lastly, she mounts these pixelated prints to non-glare Plexiglas both to remove any incidental reflective surface effects and to create a physical object. What hangs on the wall, then, looks like a mirror in its shape, size and beveled frame: the photograph becomes a one-to-one representation of the object it portrays. When looking at a real mirror, often the viewer is aware of either a reflection of the self or a shifting reflection caused by his or her own movement. However, the image that the "Mirror" ‘reflects’ is not the changing reflection of a real mirror. Nor is it a clear, fixed image of the surface of a mirror. Instead the "Mirrors" present a highly abstract, pixelated surface to meet our eyes. The "Mirrors" are physical objects that merge two forms of representation into one: the mirror and the photograph, thus highlighting similarities between them as surfaces that can potentially represent or reflect almost anything. However, in their physical form, they show us only their pixelation, their digitally constructed nature.
Penelope Umbrico’s "Mirrors" are photographs of mirrors that become simultaneously photograph and mirror: the image reflected on the mirror’s surface becomes a photograph, thus showing an analogy between the two objects. In their self-reflexive nature, I argue that Umbrico’s "Mirrors" point to their status as digital photographs, therefore signaling a technological shift from analog to digital photography. Umbrico’s "Mirrors," in altering both mirrors and photographs simultaneously refer to the long history of photography in relation to mirrors. The history of photography is seen first through these objects by the reflective surface of the daguerreotype which mirrored the viewer when observing the daguerreotype, and because of the extremely high level of detail in the photographic image, which mirrored the photographic subject. The relation to the history of photography is also seen in the phenomenon of the mirror within a photograph and the idea that the mirror’s reflection shows the realistic way that photographs represent reality. Craig Owens calls this "en abyme," or the miniature reproduction of a text that represents the text as a whole. In the case of the mirror, this is because the mirror within the photograph shows how both mediums display highly naturalistic depictions of reality. I contend that as an object that is representative of the photographic medium itself, the shift from analog to digital photography is in part seen through the use of the mirror that ultimately creates an absent referent as understood through a comparison of Diego Velázquez’s "Las Meninas" (1656). As Foucault suggests that "Las Meninas" signals a shift in representation from the Classical age to the Modern period, I suggest that the "Mirrors" signal the shift in representation from analog to digital.
This latter shift spurred debate among photo-history scholars over the ontology of the photographic medium: scholars were anxious that the ease of editing digital images compromised the photograph’s seeming relationship to truth or reality, and that it would be impossible to know whether an image had been altered. They were also concerned with the idea that computers could generate images from nothing but code, removing the direct relationship of the photograph to its subject and thereby declaring the "death" of the medium. The "Mirrors" embody this technological phenomenon through the visual addition of "compression artifacts," otherwise known as pixelation, where this representation of digital space appears not directly from our own creation but as a by-product of digital JPEG programming. In this way they are no longer connected to the subject but only to the digital space they represent. As self-reflexive objects, the "Mirrors" show that there has been a technological transformation from the physically made analog photograph to the inherently mutable digital file.