31 |
Hardware / Software co-design for JPEG2000
Nilsson, Per, January 2006
For demanding applications, for example image or video processing, there may be computations that are not well suited to digital signal processors. While a DSP processor is appropriate for some tasks, its instruction set can be extended to achieve higher performance for the tasks that such a processor is not normally designed for. The platform used in this project is flexible in the sense that new hardware can be designed to speed up certain computations.

This thesis analyzes the computationally complex parts of JPEG2000. In order to achieve sufficient performance for JPEG2000, there may be a need for hardware acceleration.

First, a JPEG2000 decoder was implemented for a DSP processor in assembler. When the firmware had been written, the cycle consumption of its parts was measured and estimated. From this analysis, the bottlenecks of the system were identified. Furthermore, new processor instructions that could be implemented for this system are proposed. Finally, the performance improvements are estimated.
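As a rough software analogue of the cycle-count analysis above, the sketch below times a sequence of dummy decoder stages and reports each stage's share of the total, which is how bottlenecks would be ranked. It is a hypothetical example, not the thesis firmware; the stage names and the use of wall-clock time instead of DSP cycle counters are assumptions.

```python
# Illustrative profiling of a decoder pipeline to locate bottlenecks, analogous
# to the per-part cycle-count analysis described above. Stage names and the use
# of wall-clock time instead of DSP cycle counters are assumptions.
import time

def busy(n):
    """Return a dummy stage that burns roughly n iterations of work."""
    def stage(data):
        acc = 0
        for i in range(n):
            acc += i * i
        return data
    return stage

def profile(stages, data):
    """Run each (name, function) stage in order and report its share of runtime."""
    timings = []
    for name, fn in stages:
        start = time.perf_counter()
        data = fn(data)
        timings.append((name, time.perf_counter() - start))
    total = sum(t for _, t in timings) or 1.0
    for name, t in sorted(timings, key=lambda item: -item[1]):
        print(f"{name:>16}: {100.0 * t / total:5.1f} % of decode time")
    return data

if __name__ == "__main__":
    # Dummy stages standing in for tier-2 parsing, EBCOT tier-1 and the inverse DWT.
    profile([("tier-2 parsing", busy(20_000)),
             ("EBCOT tier-1", busy(400_000)),
             ("inverse DWT", busy(100_000))], data=None)
```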
|
32 |
Transport à fiabilité partielle d'images compressées sur les réseaux à commutation de paquets (Partially reliable transport of compressed images over packet-switched networks)
Holl, Thomas, 13 July 2007
Although the data travelling over the Internet and the constraints inherent in their transmission are of varied natures, these transmissions almost always rely on TCP or UDP. Adopting an end-to-end approach requires acting on the transport service to adapt to the needs of the data according to the quality of service offered by the network. Multimedia data are generally loss-tolerant; they can therefore be compressed and transported with losses while preserving good quality of the reconstructed information, provided the lost information is fully controlled. This assumes that multimedia applications can rely on a transport service adapted to their needs, combining the principles of partial reliability and deterministic error control.

In this thesis, we define a partially reliable transport service of this kind, used for the transmission of compressed still images over best-effort networks. The work deals with reconciling partially reliable transport with compressed data, as well as seeking a reduction in service time with an acceptable degradation. A transport protocol called 2CP-ARQ is proposed for this purpose. Its compatibility with the transport of images coded according to the JPEG2000 standard is studied first. The results of this study lead us to design a compression and data organization scheme better suited to a partially reliable transport system based on 2CP-ARQ. The results show that the network can benefit from the use of this transport service, which translates into lower demand on network resources, while satisfying the application's constraints in terms of reconstructed image quality.
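To make the partial-reliability idea concrete, here is a minimal sketch, assuming a packet structure that carries a JPEG2000 quality-layer index: losses in the most significant layers trigger retransmission, while losses in less significant layers are tolerated. The `Packet` fields, the layer threshold, and the `request_retransmission` callback are illustrative assumptions and do not reproduce the actual 2CP-ARQ protocol.

```python
# Hypothetical sketch of a partial-reliability policy (not the actual 2CP-ARQ
# protocol): a lost packet is retransmitted only if it carries data from a
# JPEG2000 quality layer below a reliability threshold.
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int            # sequence number
    quality_layer: int  # 0 = most significant layer of the codestream

def handle_loss(lost: Packet, reliability_threshold: int, request_retransmission):
    """Decide whether a lost packet must be recovered.

    Layers 0..reliability_threshold-1 are transported reliably; losses in
    higher (less significant) layers are simply accepted, trading a small,
    controlled degradation for a shorter service time.
    """
    if lost.quality_layer < reliability_threshold:
        request_retransmission(lost.seq)   # deterministic error control
        return "retransmit"
    return "tolerate"

if __name__ == "__main__":
    # Example: only the two most significant quality layers are fully reliable.
    log = []
    for pkt in [Packet(1, 0), Packet(2, 3), Packet(3, 1)]:
        log.append((pkt.seq, handle_loss(pkt, 2, lambda seq: None)))
    print(log)   # [(1, 'retransmit'), (2, 'tolerate'), (3, 'retransmit')]
```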
|
34 |
Low Bitrate Video and Audio Codecs for Internet Communication / Smalbandiga video och audio kodare för kommunikation via internet
Nilsson, Jonas; Nilsson, Jesper, January 2003
This master thesis discusses the design and implementation of a custom wavelet-based codec for both video and image compression. The codec is specifically designed for low-bitrate video with minimal complexity, for use in online gaming environments. Results indicate that the performance of the codec in many areas equals or even surpasses that of the international JPEG 2000 standard. We believe that it is suitable for any situation where a low bitrate is desirable, e.g. video conferences and mobile communications. The game development company Moosehill Productions AB has shown great interest in our codec and its possible applications. We have also implemented an existing audio solution for low-bandwidth use. / Wavelet-based image/video compression. / Jonas Nilsson, Jesper Nilsson, Lovägen 13, 37250 Kallinge, tel: 0709708617
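As background on the transform at the heart of such a codec, the following is a minimal single-level 2D Haar decomposition in NumPy; it is a generic illustration of wavelet subband splitting, not the codec developed in the thesis.

```python
# Minimal single-level 2D Haar wavelet decomposition (illustrative only; the
# thesis codec uses its own filters and coding stages).
import numpy as np

def haar2d(image: np.ndarray):
    """Return (LL, LH, HL, HH) subbands of an even-sized grayscale image."""
    x = image.astype(float)
    # Transform rows: average and difference of adjacent pixel pairs.
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0
    # Transform columns of both results.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

if __name__ == "__main__":
    img = np.arange(64, dtype=float).reshape(8, 8)
    LL, LH, HL, HH = haar2d(img)
    print(LL.shape)  # (4, 4) -- most of the energy ends up in LL
```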
|
35 |
Hardware and Software Codesign of a JPEG2000 Watermarking Encoder
Mendoza, Jose Antonio, 12 1900
Analog technology has been around for a long time, and its use is unavoidable since we live in an analog world. However, the transmission and storage of analog signals are more complicated and in many cases less efficient than for digital data. Digital technology, on the other hand, provides fast means of transmission and storage, continues to grow, and is more widely used than ever before. However, the advent of new technology that can reproduce digital documents or images with unprecedented accuracy poses a risk to the intellectual rights of many artists and also to personal security. One way to protect the intellectual rights of digital works is to embed watermarks in them. The watermarks can be visible or invisible depending on the application and the final objective of the intellectual work. This thesis deals with watermarking images in the discrete wavelet transform domain. The watermarking process was carried out using the JPEG2000 compression standard as a platform. The hardware implementation was achieved using the ALTERA DSP Builder and SIMULINK software to program the ALTERA DE2 FPGA board. The JPEG2000 color transform and the wavelet transformation blocks were implemented using the hardware-in-the-loop (HIL) configuration.
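A minimal software sketch of the underlying idea, assuming the PyWavelets package is available, is shown below: a binary watermark is embedded additively in a DWT detail subband and the image is reconstructed. The subband choice, the strength parameter `alpha`, and the use of a software DWT are assumptions; the thesis itself targets an FPGA implementation through DSP Builder and Simulink, which is not reproduced here.

```python
# Sketch of additive watermark embedding in the DWT domain (software analogue
# of the idea only; subband choice and alpha are illustrative assumptions).
import numpy as np
import pywt  # PyWavelets

def embed_watermark(image: np.ndarray, watermark_bits: np.ndarray, alpha: float = 8.0):
    """Embed a binary watermark into the diagonal detail subband (HH)."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    wm = np.resize(watermark_bits, cD.shape)           # tile/crop bits to subband size
    cD_marked = cD + alpha * (2.0 * wm - 1.0)          # +alpha for bit 1, -alpha for bit 0
    return pywt.idwt2((cA, (cH, cV, cD_marked)), "haar")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    host = rng.integers(0, 256, size=(64, 64)).astype(float)
    bits = rng.integers(0, 2, size=32 * 32)
    marked = embed_watermark(host, bits)
    print(float(np.max(np.abs(marked - host))))        # small, bounded distortion
```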
|
36 |
3D Wavelet-Based Algorithms For The Compression Of Geoscience Data
Rucker, Justin Thomas, 10 December 2005
Geoscience applications generate large datasets; thus, compression is necessary to facilitate the storage and transmission of geoscience data. One focus is on the coding of hyperspectral imagery and the prominent JPEG2000 standard. Certain aspects of the encoder, such as rate allocation between bands and spectral decorrelation, are not covered by the JPEG2000 standard. This thesis investigates the performance of several JPEG2000 encoding strategies. Additionally, a relatively low-complexity 3D embedded wavelet-based coder, 3D-tarp, is proposed for the compression of geoscience data. 3D-tarp employs an explicit estimate of the probability of coefficient significance to drive a nonadaptive arithmetic coder, resulting in a simple implementation suited to vectorized hardware acceleration. Finally, an embedded wavelet-based coder is proposed for the shape-adaptive coding of ocean-temperature data. 3D binary set splitting with k-d trees, 3D-BISK, replaces the octree splitting structure of other shape-adaptive coders with k-d trees, a simpler set-partitioning structure that is well suited to shape-adaptive coding.
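As a toy illustration of binary set splitting guided by significance, the sketch below recursively halves a 3D block of coefficients along its longest axis and emits one significance bit per set; significant sets are refined further, insignificant ones are not. This is only a simplified stand-in for the idea behind 3D-BISK, not the coder proposed in the thesis, and the split rule and threshold are assumptions.

```python
# Simplified illustration of binary set splitting with k-d-tree-style halving:
# a set is refined only if it contains a significant coefficient. This is a
# toy version of the idea behind 3D-BISK, not the coder from the thesis.
import numpy as np

def significant(block: np.ndarray, threshold: float) -> bool:
    return block.size > 0 and np.max(np.abs(block)) >= threshold

def bisk_split(block: np.ndarray, threshold: float, bits=None):
    """Emit one significance bit per set; split significant sets in half."""
    if bits is None:
        bits = []
    sig = significant(block, threshold)
    bits.append(1 if sig else 0)
    if sig and block.size > 1:
        axis = int(np.argmax(block.shape))        # split the longest dimension
        mid = block.shape[axis] // 2
        lo, hi = np.split(block, [mid], axis=axis)
        bisk_split(lo, threshold, bits)
        bisk_split(hi, threshold, bits)
    return bits

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    coeffs = rng.normal(0, 1, size=(4, 4, 4))
    coeffs[3, 3, 3] = 20.0                        # one large coefficient
    print(len(bisk_split(coeffs, threshold=16.0)))  # few bits suffice to locate it
```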
|
37 |
On the Performance of JPEG2000 and Principal Component Analysis in Hyperspectral Image Compression
Zhu, Wei, 05 May 2007
Because of the vast data volume of hyperspectral imagery, compression becomes a necessary process for hyperspectral data transmission, storage, and analysis. Three-dimensional discrete wavelet transform (DWT) based algorithms are of particular interest due to their excellent rate-distortion performance. This thesis investigates several issues surrounding efficient compression using JPEG2000. Firstly, the rate-distortion performance is studied when Principal Component Analysis (PCA) replaces the DWT for spectral decorrelation, with a focus on using a subset of principal components (PCs) rather than all of them. Secondly, the algorithms are evaluated in terms of data analysis performance, such as anomaly detection and linear unmixing, which is directly related to the useful information preserved. Thirdly, the performance of compressing radiance and reflectance data with or without bad-band removal is compared, and instructive suggestions are provided for practical applications. Finally, low-complexity PCA algorithms are presented to reduce the computational complexity and facilitate future hardware design.
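The spectral decorrelation step described here can be sketched in a few lines of NumPy: the hyperspectral cube is reshaped so that each pixel is a spectrum, PCA is applied across bands, and only the first k principal components are kept before 2D coding of each component image. The cube dimensions and the choice of k below are illustrative assumptions.

```python
# Sketch of PCA-based spectral decorrelation of a hyperspectral cube, keeping
# only the first k principal components. Shapes and k are illustrative.
import numpy as np

def spectral_pca(cube: np.ndarray, k: int):
    """cube: (rows, cols, bands). Returns (pc_images, mean, eigenvectors)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)       # one spectrum per pixel
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)                  # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:k]           # top-k components
    V = eigvecs[:, order]
    pcs = Xc @ V                                    # decorrelated "bands"
    return pcs.reshape(rows, cols, k), mean, V

def inverse_pca(pcs: np.ndarray, mean: np.ndarray, V: np.ndarray):
    rows, cols, k = pcs.shape
    X = pcs.reshape(-1, k) @ V.T + mean
    return X.reshape(rows, cols, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    cube = rng.normal(size=(32, 32, 64))
    pcs, mean, V = spectral_pca(cube, k=8)
    approx = inverse_pca(pcs, mean, V)
    print(pcs.shape, float(np.mean((cube - approx) ** 2)))  # rank-8 approximation error
```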
|
38 |
FPGA Implementation of the JPEG2000 MQ Decoder
Lucking, David Joseph, 05 May 2010
No description available.
|
39 |
Fast Split Arithmetic Encoder Architectures and Perceptual Coding Methods for Enhanced JPEG2000 Performance
Varma, Krishnaraj M., 11 April 2006
JPEG2000 is a wavelet transform based image compression and coding standard. It provides superior rate-distortion performance compared to the previous JPEG standard. In addition, JPEG2000 provides four dimensions of scalability: distortion, resolution, spatial, and color. These superior features make JPEG2000 ideal for use in power- and bandwidth-limited mobile applications like urban search and rescue. Such applications require a fast, low-power JPEG2000 encoder to be embedded on the mobile agent. This embedded encoder also needs to provide superior subjective quality for low-bitrate images. This research addresses these two aspects of enhancing the performance of JPEG2000 encoders.
The JPEG2000 standard includes a perceptual weighting method based on the contrast sensitivity function (CSF). Recent literature shows that perceptual methods based on subband standard deviation are also effective in image compression. This research presents two new perceptual weighting methods that combine information from both the human contrast sensitivity function and the standard deviation within a subband or code-block. These two new sets of perceptual weights are compared to the JPEG2000 CSF weights. The results indicate that our new weights performed better than the JPEG2000 CSF weights for high-frequency images. Weights based solely on subband standard deviation are shown to perform worse than the JPEG2000 CSF weights for all images at all compression ratios.
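As a hedged illustration of how CSF-based weights might be blended with subband standard deviation, the sketch below multiplies an assumed CSF weight table by a normalized standard-deviation term. Both the CSF values and the combination rule are assumptions for illustration and are not the weighting methods defined in the thesis.

```python
# Hedged sketch of one way to combine CSF-based weights with subband standard
# deviation. The CSF values and the multiplicative rule are assumptions.
import numpy as np

# Assumed per-subband CSF weights for a 3-level decomposition (not the
# normative JPEG2000 tables).
CSF_WEIGHTS = {"LL3": 1.00, "HL3": 0.95, "LH3": 0.95, "HH3": 0.90,
               "HL2": 0.85, "LH2": 0.85, "HH2": 0.75,
               "HL1": 0.60, "LH1": 0.60, "HH1": 0.45}

def combined_weights(subbands: dict, gamma: float = 0.5) -> dict:
    """Blend the CSF weight with the normalized standard deviation of each subband."""
    stds = {name: float(np.std(coeffs)) for name, coeffs in subbands.items()}
    max_std = max(stds.values()) or 1.0
    return {name: CSF_WEIGHTS[name] * (stds[name] / max_std) ** gamma
            for name in subbands}

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    bands = {name: rng.normal(0, 1 + 5 * ("3" in name), size=(16, 16))
             for name in CSF_WEIGHTS}
    for name, w in sorted(combined_weights(bands).items()):
        print(f"{name}: {w:.3f}")
```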
Embedded block coding, EBCOT tier-1, is the most computationally intensive part of the JPEG2000 image coding standard. Past research on fast EBCOT tier-1 hardware implementations has concentrated on cycle-efficient context formation. These pass-parallel architectures require that JPEG2000's three mode switches be turned on. While turning on the mode switches allows the arithmetic encoding of each coding pass to run independently (and thus in parallel), it also disrupts the probability estimation engine of the arithmetic encoder, sacrificing coding efficiency for improved throughput. In this research a new fast EBCOT tier-1 design is presented, called the Split Arithmetic Encoder (SAE) process. The proposed process exploits concurrency to obtain improved throughput while preserving coding efficiency. The SAE process is evaluated using three methods: clock cycle estimation, a multithreaded software implementation, and a field programmable gate array (FPGA) hardware implementation. All three methods achieve throughput improvement; the hardware implementation exhibits the largest speedup, as expected.
A high-speed, task-parallel, multithreaded software architecture for EBCOT tier-1 based on the SAE process is proposed. SAE was implemented in software on two shared-memory architectures: a PC using hyperthreading and a multi-processor non-uniform memory access (NUMA) machine. The implementation adopts appropriate synchronization mechanisms that preserve the algorithm's causality constraints. Tests show that the new architecture is capable of improving throughput by as much as 50% on the NUMA machine and by as much as 19% on a PC with two virtual processing units. A high-speed, multirate FPGA implementation of the SAE process is also proposed. The mismatch between the rate at which the context formation (CF) module produces data and the rate at which the arithmetic encoder (AE) module consumes it is studied in detail. Appropriate choices for FIFO sizes and FIFO write and read capabilities are made based on statistics obtained from test runs of the algorithm. Using a fast CF module, this implementation was able to achieve as much as a 120% improvement in throughput. / Ph. D.
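A minimal software analogue of the CF-to-AE rate mismatch discussed above is a bounded FIFO between a fast producer and a slower consumer; the queue depth, symbol format, and timing below are illustrative assumptions, not the FPGA design from the thesis.

```python
# Software analogue of the CF -> FIFO -> AE rate mismatch: a bounded queue
# between a fast producer (context formation) and a slower consumer
# (arithmetic encoding). Rates and FIFO depth are illustrative assumptions.
import queue
import threading
import time

FIFO_DEPTH = 64
fifo: "queue.Queue[tuple]" = queue.Queue(maxsize=FIFO_DEPTH)
SENTINEL = None

def context_formation(n_symbols: int):
    for i in range(n_symbols):
        fifo.put((i % 19, i & 1))     # (context, decision) pair; blocks when FIFO is full
    fifo.put(SENTINEL)

def arithmetic_encoder(stats: dict):
    while True:
        item = fifo.get()
        if item is SENTINEL:
            break
        time.sleep(0.0001)            # AE consumes more slowly than CF produces
        stats["coded"] = stats.get("coded", 0) + 1

if __name__ == "__main__":
    stats = {}
    t_cf = threading.Thread(target=context_formation, args=(2000,))
    t_ae = threading.Thread(target=arithmetic_encoder, args=(stats,))
    t_cf.start(); t_ae.start()
    t_cf.join(); t_ae.join()
    print(stats["coded"])             # 2000 symbols coded; the FIFO bounds CF stalling
```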
|
40 |
Objective Perceptual Quality Assessment of JPEG2000 Image Coding Format Over Wireless Channel
Chintala, Bala Venkata Sai Sundeep, January 2019
A dominant source of Internet traffic today is compressed images, and image compression plays an important role in modern multimedia communications. The image compression standards set by the Joint Photographic Experts Group (JPEG) include JPEG and JPEG2000. The expert group created the JPEG image compression standard so that still pictures could be compressed to be sent over e-mail, displayed on a webpage, and used for high-resolution digital photography. This standard was originally based on the Discrete Cosine Transform (DCT), a mathematical method used to convert a sequence of data to the frequency domain. In the year 2000, however, a new standard was proposed by the expert group, which came to be known as JPEG2000. The difference between the two is that the latter is capable of providing better compression efficiency; the downside is that the computation required to achieve the same compression efficiency as the original JPEG format is higher. JPEG is a lossy compression standard, which can throw away some less important information without causing any noticeable perceptual difference, whereas in lossless compression the primary purpose is to reduce the number of bits required to represent the original image samples without any loss of information. The areas of application of the JPEG image compression standard include the Internet, digital cameras, and printing and scanning peripherals.

In this thesis work, a simulator-like setup is needed for conducting the objective quality assessment. An image is given as input to the wireless communication system, its data size is varied (e.g. 5%, 10%, 15%, etc.), and a Signal-to-Noise Ratio (SNR) value is given as input for the JPEG2000 compression. This compressed image is then passed through a JPEG encoder and transmitted over a Rayleigh fading channel. The image obtained after applying these constraints to the original image is decoded at the receiver, and the inverse discrete wavelet transform (IDWT) is applied to invert the JPEG2000 compression. The coefficients are scalar-quantized to reduce the number of bits required to represent them, without loss of image quality. The final image is then displayed on the screen. After decoding, the original input image is compared with the images of varying data size at a given SNR value at the receiver. In particular, an objective perceptual quality assessment through the Structural Similarity (SSIM) index, using MATLAB, is provided.
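For reference, a simplified global (single-window) SSIM between a reference and a decoded image can be computed as below, following the standard SSIM formula. The usual SSIM is evaluated over local windows and the thesis performs its assessment in MATLAB, so this NumPy sketch is only an illustration of the metric.

```python
# Minimal global SSIM between a reference image and a decoded image
# (illustration of the metric only; the thesis uses MATLAB over the
# wireless simulation chain).
import numpy as np

def global_ssim(ref: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    original = rng.integers(0, 256, size=(128, 128)).astype(float)
    noisy = np.clip(original + rng.normal(0, 10, original.shape), 0, 255)
    print(round(global_ssim(original, original), 4))  # 1.0 for identical images
    print(round(global_ssim(original, noisy), 4))     # lower for the degraded image
```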
|