81.
A study of image compression techniques, with specific focus on weighted finite automata
Muller, Rikus (2005)
Thesis (MSc Mathematical Sciences), University of Stellenbosch, 2005.
Image compression using weighted finite automata (WFA) is studied and implemented
in Matlab. Other, more prominent image compression techniques, namely JPEG, vector
quantization, EZW wavelet image compression, and fractal image compression, are also
presented. The performance of WFA image compression is then compared with that of
the techniques mentioned above.
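The WFA idea underlying this thesis can be sketched briefly: the image is addressed by quadtree paths, and the average intensity of a subsquare is obtained by multiplying one weight matrix per quadrant digit along the path, then applying a final distribution. A minimal sketch in Python; the two-state automaton below (encoding a horizontal linear ramp) is a standard small WFA example, not the thesis's own construction, and the quadrant numbering is an assumption of this sketch:

```python
import numpy as np

def wfa_pixel(init, trans, final, address):
    # Evaluate the average intensity of the image subsquare reached by
    # following the quadtree 'address' (a sequence of quadrant digits,
    # coarsest first): multiply one weight matrix per digit, then apply
    # the final distribution.
    v = init
    for q in address:
        v = v @ trans[q]
    return float(v @ final)

# A classic two-state WFA for the horizontal ramp f(x, y) = x:
# state 0 is the constant image 1, state 1 is the ramp itself.
# Quadrant numbering (an assumption of this sketch):
# 0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right.
LEFT  = np.array([[1.0, 0.0], [0.0, 0.5]])   # left half: ramp restricts to 0.5*f
RIGHT = np.array([[1.0, 0.0], [0.5, 0.5]])   # right half: 0.5*1 + 0.5*f
A = {0: LEFT, 1: RIGHT, 2: LEFT, 3: RIGHT}
i0 = np.array([0.0, 1.0])                    # the image is the ramp state
F  = np.array([1.0, 0.5])                    # average intensity of each state
# wfa_pixel(i0, A, F, [1]) -> 0.75, the mean of x over the right half
```

Compression then amounts to finding a small automaton whose induced image approximates the input; the evaluator above is only the decoding half of that picture.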
82.
Image coding with H.264 I-frames
Eklund, Anders (January 2007)
In this thesis a part of the video coding standard H.264 has been implemented: the part of the coder used to encode I-frames, in order to see how well suited it is to regular still-image coding. The main difference from other image coding standards, such as JPEG and JPEG 2000, is that this video coder uses both a predictor and a transform to compress the I-frames, whereas JPEG and JPEG 2000 use only a transform. Since the prediction error is transmitted instead of the actual pixel values, many of the values are zero or close to zero before transformation and quantization. The method thus works much like a video encoder, the difference being that blocks of an image are predicted instead of frames in a video sequence.
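The predict-then-transform idea can be illustrated with the DC intra mode on a single 4x4 block. This is a simplified sketch in the spirit of H.264 (only the DC mode, with both neighbour edges assumed available), not the thesis implementation:

```python
import numpy as np

def dc_intra_residual(block, left_col, top_row):
    # DC intra prediction for one 4x4 block: predict every sample as the
    # rounded mean of the eight reconstructed neighbour samples, and pass
    # only the residual on to transform and quantization.
    dc = int((left_col.sum() + top_row.sum() + 4) >> 3)
    prediction = np.full_like(block, dc)
    return block - prediction, dc

# Smooth content yields a residual that is mostly zero or near zero,
# which is exactly what makes the subsequent transform coding cheap.
block = np.array([[10, 10, 11, 11]] * 4)
left  = np.array([10, 10, 10, 10])   # reconstructed column to the left
top   = np.array([11, 11, 11, 11])   # reconstructed row above
residual, dc = dc_intra_residual(block, left, top)
# dc == 11; every residual entry is -1 or 0
```

The real standard chooses among several directional prediction modes per block and signals the winner; this sketch shows only why the residual, rather than the raw pixels, is worth transforming.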
83.
PCA and JPEG2000-based Lossy Compression for Hyperspectral Imagery
Zhu, Wei (30 April 2011)
This dissertation develops several new algorithms to solve existing problems in the practical application of the previously developed PCA+JPEG2000, which has shown superior rate-distortion performance in hyperspectral image compression. In addition, a new scheme is proposed to facilitate multi-temporal hyperspectral image compression. Specifically, the uniqueness of each algorithm is as follows.

1. An empirical piecewise linear equation is proposed to estimate the optimal number of major principal components (PCs) used in SubPCA+JPEG2000 for AVIRIS data. Sensor-specific equations are presented with excellent fitting performance for AVIRIS, HYDICE, and HyMap data. As a conclusion, a general guideline is provided for finding sensor-specific piecewise linear equations.

2. An anomaly-removal-based hyperspectral image compression algorithm is proposed. It preserves anomalous pixels in a lossless manner, and yields the same or even improved rate-distortion performance. It is particularly useful to SubPCA+JPEG2000 when compressing data with anomalies that may reside in minor PCs.

3. A segmented PCA-based PCA+JPEG2000 compression algorithm is developed, which spectrally partitions an image based on its spectral correlation coefficients. This scheme greatly improves the rate-distortion performance of PCA+JPEG2000 when the spatial size of the data is relatively smaller than its spectral size, especially at low bitrates. A sensor-specific partition method is also developed for fast processing with suboptimal performance.

4. A joint multi-temporal image compression scheme is proposed. The algorithm preserves change information in a lossless fashion during compression, and can yield perfect change detection with slightly degraded rate-distortion performance.
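The PCA stage of PCA+JPEG2000 can be sketched as spectral decorrelation followed by truncation to the major PCs. The function names and the eigendecomposition route below are illustrative assumptions; in the actual codec the retained PC images would then be handed to a JPEG2000 coder, which is omitted here:

```python
import numpy as np

def spectral_pca(cube, num_pcs):
    # Decorrelate a hyperspectral cube (rows, cols, bands) along the
    # spectral axis and keep only the major principal components; in a
    # PCA+JPEG2000 codec these PC images would then be coded spatially.
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)   # one spectrum per pixel
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)         # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    basis = eigvecs[:, ::-1][:, :num_pcs]       # top num_pcs eigenvectors
    pcs = Xc @ basis
    return pcs.reshape(rows, cols, num_pcs), basis, mean

def spectral_pca_inverse(pcs, basis, mean):
    # Approximate reconstruction from the retained components.
    rows, cols, m = pcs.shape
    X = pcs.reshape(-1, m) @ basis.T + mean
    return X.reshape(rows, cols, -1)
```

Choosing `num_pcs` is precisely the problem the dissertation's piecewise linear equations address: too few PCs discards signal, too many wastes rate on noise-dominated components.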
84.
On error-robust source coding with image coding applications
Andersson, Tomas (January 2006)
This thesis treats the problem of source coding in situations where the encoded data is subject to errors. The typical scenario is a communication system, where source data such as speech or images should be transmitted from one point to another. A problem is that most communication systems introduce some sort of error in the transmission: a wireless link is prone to introduce individual bit errors, while in a packet-based network, such as the Internet, packet losses are the main source of error. The traditional approach to this problem is to add error correcting codes on top of the encoded source data, or to employ some scheme for retransmission of lost or corrupted data. The source coding problem is then treated under the assumption that all data transmitted from the source encoder reaches the source decoder without any errors.

This thesis takes another approach and treats source and channel coding jointly, under the assumption that there is some knowledge about the channel that will be used for transmission. Such joint source-channel coding schemes have potential benefits over the traditional separated approach; in particular, they can typically achieve better performance using shorter codes, which is useful in scenarios with constraints on the delay of the system.

Two different flavors of joint source-channel coding are treated in this thesis: multiple description coding and channel optimized vector quantization. Channel optimized vector quantization is a technique to directly incorporate knowledge about the channel into the source coder, and this thesis contributes to the field by using it in a couple of new scenarios. Multiple description coding is the concept of encoding a source using several different descriptions in order to provide robustness in systems with losses in the transmission.
One contribution of this thesis is an improvement to an existing multiple description coding scheme; another is to put multiple description coding in the context of channel optimized vector quantization. The thesis also presents a simple image coder, which is used to evaluate some of the results on channel optimized vector quantization.
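The core idea of channel optimized vector quantization can be sketched in a few lines: the encoder picks the index that minimizes the distortion expected over the channel, rather than the distortion to the nearest codeword alone. The binary-symmetric-channel model and the function names below are illustrative assumptions, not the thesis's construction:

```python
import numpy as np

def bsc_index_matrix(bits, eps):
    # P[i, j] = probability that transmitted index i is received as j
    # over a binary symmetric channel with crossover probability eps.
    n = 1 << bits
    P = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            d = bin(i ^ j).count("1")                  # Hamming distance
            P[i, j] = eps ** d * (1 - eps) ** (bits - d)
    return P

def covq_encode(x, codebook, P):
    # Channel-optimized encoding: choose the index i minimizing
    # sum_j P[i, j] * ||x - c_j||^2, i.e. the distortion expected
    # over the channel, not simply the nearest codeword.
    dist = ((codebook - x) ** 2).sum(axis=1)   # ||x - c_j||^2 for each j
    expected = P @ dist                        # E[distortion | transmit i]
    return int(np.argmin(expected))
```

With a noiseless channel the transition matrix is the identity and the rule reduces to ordinary nearest-neighbour quantization; as the error probability grows, the encoder increasingly favours indices whose likely corruptions still decode to acceptable codewords.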
85.
Description and proposal of an end-to-end IPTV solution
Magri, Marcus Pereira (21 August 2007)
Advisor: Yuzo Iano. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação, 2007.
Abstract: This work concerns the study and analysis of an IPTV (Internet Protocol Television) solution. This kind of service is emerging among telecom operators all over the world, and its good acceptance is radically changing the business vision of telecommunications providers. End consumers will demand personalized services delivered through multiple devices, including TV channels, video on demand, interactive TV, video and voice communications, music, photo and file sharing, and online games. All these services require innovations and changes in the telecom field, embracing a new network architecture with end-to-end quality of service, reliability, customer management, and security. The contribution of this work is a detailed description of how to design a solution for these challenges and how to implement an IPTV system, describing all the components and services that have to be provided; the natural evolution of this approach is also considered. Additionally, this work proposes improvements to reduce the channel zapping time, as well as solutions for inserting commercials and subtitles.
Keywords: Image coding, Imaging, IPTV, Triple Play, STB, VOD, IGMP, MPEG, H.264, Multicast, MPEG-2 TS, RTSP, Middleware, DRM. Master's degree in Electrical Engineering (Telecommunications and Telematics).
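In a multicast IPTV deployment (consistent with the IGMP and Multicast keywords above), the network part of the channel zapping time is bounded by how quickly the client leaves the old channel's group and joins the new one. A minimal sketch of that membership change using standard socket options; the addresses and function names are illustrative, and this is not the dissertation's proposed mechanism:

```python
import socket

def mreq(group_ip, iface_ip="0.0.0.0"):
    # Pack an ip_mreq structure: multicast group address followed by the
    # local interface address.
    return socket.inet_aton(group_ip) + socket.inet_aton(iface_ip)

def change_channel(sock, old_group, new_group):
    # Channel change in a multicast IPTV client: leave the old channel's
    # group as soon as possible, then join the new one. The IGMP leave
    # and the subsequent join bound the network share of the zapping
    # delay; the decoder share is waiting for the next I-frame.
    if old_group is not None:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP,
                        mreq(old_group))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    mreq(new_group))
```

A UDP socket bound to the stream port would be passed as `sock`; after the join completes, the set-top box still has to wait for the next random access point in the stream, which is why zapping-time optimizations typically attack both the join latency and the I-frame interval.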
86.
On the design of fast and efficient wavelet image coders with reduced memory usage
Oliver Gil, José Salvador (6 May 2008)
Image compression is of great importance in multimedia systems and
applications because it drastically reduces bandwidth requirements for
transmission and memory requirements for storage. Although earlier
standards for image compression were based on the Discrete Cosine
Transform (DCT), a recently developed mathematical technique, called
Discrete Wavelet Transform (DWT), has been found to be more efficient
for image coding.
Despite improvements in compression efficiency, wavelet image coders
significantly increase memory usage and complexity when compared with
DCT-based coders. A major reason for the high memory requirements is
that the usual algorithm to compute the wavelet transform requires the
entire image to be in memory. Although some proposals reduce the memory
usage, they present problems that hinder their implementation. In
addition, some wavelet image coders, like SPIHT (which has become a
benchmark for wavelet coding), always need to hold the entire image in
memory.
Regarding the complexity of the coders, SPIHT can be considered quite
complex because it performs bit-plane coding with multiple image scans.
The wavelet-based JPEG 2000 standard is still more complex because it
improves coding efficiency through time-consuming methods, such as an
iterative optimization algorithm based on the Lagrange multiplier
method, and high-order context modeling.
In this thesis, we aim to reduce memory usage and complexity in
wavelet-based image coding, while preserving compression efficiency. To
this end, a run-length encoder and a tree-based wavelet encoder are
proposed. In addition, a new algorithm to efficiently compute the
wavelet transform is presented. This algorithm achieves low memory
consumption using line-by-line processing, and it employs recursion to
automatically establish the order in which the wavelet transform is
computed, solving synchronization problems that previous proposals had
not tackled. The proposed encode
Oliver Gil, JS. (2006). On the design of fast and efficient wavelet image coders with reduced memory usage [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1826
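The line-based transform idea can be illustrated with a one-level Haar decomposition that buffers only two image rows at a time. This is a didactic sketch under simplifying assumptions (Haar filters, a single level); the thesis targets general wavelet filters and recursive multi-level scheduling, which are not reproduced here:

```python
import numpy as np

def haar_2d_line_by_line(image):
    # One-level 2D Haar decomposition that holds only two image rows at
    # a time: as soon as a pair of rows arrives, the vertical step is
    # applied to their horizontal transforms, so the whole image never
    # has to reside in memory at once.
    h, w = image.shape
    assert h % 2 == 0 and w % 2 == 0
    LL = np.empty((h // 2, w // 2))
    LH = np.empty_like(LL); HL = np.empty_like(LL); HH = np.empty_like(LL)

    def horiz(row):
        # 1D Haar step on one row: pairwise averages and differences.
        return (row[0::2] + row[1::2]) / 2.0, (row[0::2] - row[1::2]) / 2.0

    for i in range(0, h, 2):                     # stream two rows at a time
        lo0, hi0 = horiz(image[i].astype(float))
        lo1, hi1 = horiz(image[i + 1].astype(float))
        LL[i // 2] = (lo0 + lo1) / 2.0           # vertical low-pass
        LH[i // 2] = (lo0 - lo1) / 2.0           # vertical high-pass
        HL[i // 2] = (hi0 + hi1) / 2.0
        HH[i // 2] = (hi0 - hi1) / 2.0
    return LL, LH, HL, HH
```

Longer filters need a few more buffered rows (one per filter tap beyond the first), and recursing on the LL rows as they are produced yields the multi-level transform with the same bounded memory; that scheduling is exactly the synchronization problem the thesis's recursive algorithm addresses.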