21

Use of Multi-Threading, Modern Programming Language, and Lossless Compression in a Dynamic Commutation/Decommutation System

Wigent, Mark A., Mazzario, Andrea M., Matsumura, Scott M. 10 1900 (has links)
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / The Spectrum Efficient Technology Science and Technology (SET S&T) Program is sponsoring the development of the Dynamic Commutation and Decommutation System (DCDS), which optimizes telemetry data transmission in real time. The goal of DCDS is to improve spectrum efficiency, not through improved RF techniques, but by changing and optimizing the contents of the telemetry stream during system test. By allowing new parameters to be added to the telemetered stream at any point during system test, DCDS removes the need to transmit measured data unless it is actually needed on the ground. Compared to serial streaming telemetry, real-time re-formatting of the telemetry stream does require additional processing onboard the test article. DCDS leverages advances in microprocessor technology to perform this processing while meeting the size, weight, and power constraints of the test environment. Performance gains have been achieved by extensively multi-threading the application, allowing it to run on modern multi-core processors. Two other enhancing technologies incorporated into DCDS are the Java programming language and lossless compression.
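The proceedings entry above gives no implementation detail, but the combination it names (frame-level multi-threading plus lossless compression in Java) can be pictured with a minimal sketch. Everything below is an assumption made for illustration: the representation of telemetry frames as byte arrays, the fixed thread pool, and the use of java.util.zip.Deflater are not taken from DCDS itself.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.zip.Deflater;

public class FrameCompressor {

    // Compress telemetry frames in parallel on a fixed thread pool,
    // preserving frame order in the returned list.
    public static List<byte[]> compressFrames(List<byte[]> frames, int threads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<byte[]>> pending = new ArrayList<>();
            for (byte[] frame : frames) {
                Callable<byte[]> task = () -> deflate(frame);
                pending.add(pool.submit(task));
            }
            List<byte[]> compressed = new ArrayList<>();
            for (Future<byte[]> f : pending) {
                compressed.add(f.get());
            }
            return compressed;
        } finally {
            pool.shutdown();
        }
    }

    // Lossless compression of a single frame with java.util.zip.Deflater.
    private static byte[] deflate(byte[] frame) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(frame);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(frame.length);
        byte[] buf = new byte[8192];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }
}
```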
22

Perceptual Image Compression using JPEG2000

Oh, Han January 2011 (has links)
Image sizes have increased exponentially in recent years. The resulting high-resolution images are typically encoded in a lossy fashion to achieve high compression ratios. Lossy compression can be categorized into visually lossless and visually lossy compression depending on the visibility of compression artifacts. This dissertation proposes visually lossless coding methods as well as a visually lossy coding method with perceptual quality control. All resulting codestreams are JPEG2000 Part-I compliant. Visually lossless coding is increasingly considered as an alternative to numerically lossless coding. In order to hide compression artifacts caused by quantization, visibility thresholds (VTs) are measured and used for quantization of subbands in JPEG2000. In this work, VTs are experimentally determined from statistically modeled quantization distortion, which is based on the distribution of wavelet coefficients and the dead-zone quantizer of JPEG2000. The resulting VTs are adjusted for locally changing background through a visual masking model, and then used to determine the minimum number of coding passes to be included in a codestream for visually lossless quality under desired viewing conditions. The proposed coding scheme successfully yields visually lossless images at competitive bitrates compared to those of numerically lossless coding and visually lossless algorithms in the literature. This dissertation also investigates changes in VTs as a function of display resolution and proposes a method which effectively incorporates multiple VTs for various display resolutions into the JPEG2000 framework. The proposed coding method allows for visually lossless decoding at resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely, this method can significantly reduce bandwidth usage. Contrary to images encoded in the visually lossless manner, highly compressed images inevitably have visible compression artifacts. To minimize these artifacts, many compression algorithms exploit the varying sensitivity of the human visual system (HVS) to different frequencies, which is typically obtained at the near-threshold level where distortion is just noticeable. However, it is unclear that the same frequency sensitivity applies at the supra-threshold level where distortion is highly visible. In this dissertation, the sensitivity of the HVS for several supra-threshold distortion levels is measured based on the JPEG2000 quantization distortion model. Then, a low-complexity JPEG2000 encoder using the measured sensitivity is described. The proposed visually lossy encoder significantly reduces encoding time while maintaining superior visual quality compared with conventional JPEG2000 encoders.
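As a rough illustration of the coding-pass selection the abstract describes, the hypothetical sketch below picks the minimum number of coding passes whose residual distortion falls under a visibility threshold. The distortion estimates and threshold values would come from the dissertation's psychovisual model and experiments; here they are simply inputs, and the class and method names are invented for the sketch.

```java
// Hypothetical sketch: choose how many coding passes of a code-block to keep
// so that the estimated quantization distortion stays below the visibility
// threshold (VT) of the code-block's subband under assumed viewing conditions.
public class VisuallyLosslessTruncation {

    /**
     * @param distortionAfterPass distortionAfterPass[p] = estimated distortion
     *        remaining after decoding passes 0..p (non-increasing)
     * @param visibilityThreshold VT for this subband
     * @return minimum number of passes whose residual distortion is invisible
     */
    public static int minimumPasses(double[] distortionAfterPass,
                                    double visibilityThreshold) {
        for (int p = 0; p < distortionAfterPass.length; p++) {
            if (distortionAfterPass[p] <= visibilityThreshold) {
                return p + 1;   // keep passes 0..p
            }
        }
        return distortionAfterPass.length; // keep everything (still visible)
    }
}
```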
23

Dictionary-based Compression Algorithms in Mobile Packet Core

Tikkireddy, Lakshmi Venkata Sai Sri January 2019 (has links)
With the rapid growth in technology, the amount of data to be transmitted and stored is increasing. The efficiency of information retrieval and storage has become a major concern, which is where data compression comes into the picture. Data compression is a technique that effectively reduces the size of data in order to save storage and speed up its transmission from one place to another. Data compression comes in various forms and is mainly categorized into lossy and lossless compression, of which lossless compression is the more commonly used here. In Ericsson's SGSN-MME, each user's data is compressed independently with one such technique, namely Deflate. Because of Deflate's tradeoff between compression ratio and compression/decompression speed, the algorithm is not optimal for the SGSN-MME's use case. To mitigate this problem, the Deflate algorithm has to be replaced with a better compression algorithm.
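Deflate itself ships with the JDK in java.util.zip, which is enough to illustrate the ratio/speed tradeoff mentioned above: the compression level passed to Deflater is the knob that trades ratio against throughput. This is only a round-trip sketch with made-up record contents, not SGSN-MME code.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class RecordCodec {

    // Compress one record; `level` ranges from Deflater.BEST_SPEED to
    // Deflater.BEST_COMPRESSION (higher ratio, lower speed).
    static byte[] compress(byte[] record, int level) {
        Deflater deflater = new Deflater(level);
        deflater.setInput(record);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    // Decompress a record whose original size is known (stored alongside it).
    static byte[] decompress(byte[] compressed, int originalSize)
            throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        byte[] out = new byte[originalSize];
        int off = 0;
        while (!inflater.finished() && off < out.length) {
            off += inflater.inflate(out, off, out.length - off);
        }
        inflater.end();
        return out;
    }

    public static void main(String[] args) throws DataFormatException {
        byte[] record = "illustrative subscriber-session payload".getBytes();
        byte[] packed = compress(record, Deflater.BEST_COMPRESSION);
        byte[] unpacked = decompress(packed, record.length);
        System.out.println(new String(unpacked));
    }
}
```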
24

Progressive Lossless Image Compression Using Image Decomposition and Context Quantization

Zha, Hui 23 January 2007 (has links)
Lossless image compression has many applications, for example in medical imaging, space photography, and the film industry. In this thesis, we propose an efficient lossless image compression scheme for both binary images and gray-scale images. The scheme first decomposes images into a set of progressively refined binary sequences and then uses a context-based, adaptive arithmetic coding algorithm to encode these sequences. In order to deal with the context dilution problem in arithmetic coding, we propose a Lloyd-like iterative algorithm to quantize contexts. Fixing the set of input contexts and the number of quantized contexts, our context quantization algorithm iteratively finds the optimum context mapping in the sense of minimizing the compression rate. Experimental results show that by combining image decomposition and context quantization, our scheme can achieve competitive lossless compression performance compared to the JBIG algorithm for binary images and the CALIC algorithm for gray-scale images. In contrast to CALIC, our scheme provides the additional feature of allowing progressive transmission of gray-scale images, which is very appealing in applications such as web browsing.
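A hedged sketch of the Lloyd-like context quantization idea for a binary source follows: each raw context is summarized by its bit counts, the assignment step moves each context to the group whose merged distribution codes it most cheaply, and the update step re-estimates each group's distribution. The initialization, smoothing, and stopping rule are illustrative choices, not necessarily those of the thesis.

```java
// Hypothetical Lloyd-like context quantizer for a binary source.
// zeros[c] and ones[c] are the observed bit counts under raw context c;
// the result maps each raw context to one of m quantized contexts.
public class ContextQuantizer {

    public static int[] quantize(long[] zeros, long[] ones, int m, int iters) {
        int n = zeros.length;
        int[] group = new int[n];
        for (int c = 0; c < n; c++) group[c] = c % m;   // arbitrary initial map

        for (int it = 0; it < iters; it++) {
            // Update step: merged counts per group, then smoothed P(bit = 1).
            long[] gz = new long[m], go = new long[m];
            for (int c = 0; c < n; c++) {
                gz[group[c]] += zeros[c];
                go[group[c]] += ones[c];
            }
            double[] p1 = new double[m];
            for (int g = 0; g < m; g++) {
                p1[g] = (go[g] + 0.5) / (gz[g] + go[g] + 1.0);  // KT-style smoothing
            }
            // Assignment step: move each context to the group that codes it
            // with the fewest (ideal) bits.
            for (int c = 0; c < n; c++) {
                int best = group[c];
                double bestBits = Double.MAX_VALUE;
                for (int g = 0; g < m; g++) {
                    double bits = -(ones[c] * log2(p1[g])
                                  + zeros[c] * log2(1.0 - p1[g]));
                    if (bits < bestBits) { bestBits = bits; best = g; }
                }
                group[c] = best;
            }
        }
        return group;  // group[c] = quantized context for raw context c
    }

    private static double log2(double x) { return Math.log(x) / Math.log(2.0); }
}
```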
25

Improving compression ratio in backup / Förbättring av kompressionsgrad för säkerhetskopiering

Zeidlitz, Mattias January 2012 (has links)
This report describes a master's thesis performed at Degoo Backup AB in Stockholm, Sweden, in the spring of 2012. The purpose was to design a compression suite in Java that improves the compression ratio for file types assumed to be common in backup software. A tradeoff between compression ratio and compression speed was made in order to meet the requirement that the compression suite be able to compress the data fast enough. A study of the best-performing existing compression algorithms was carried out so that the most suitable compression algorithm could be chosen for every possible scenario, and file-type specific compression algorithms were developed to further improve the compression ratio for files considered to need better compression. The resulting compression performance is presented for file types assumed to be common in backup software, and the overall performance is good. The final conclusion is that the compression suite fulfills all requirements set for this thesis.
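The file-type specific selection described above can be pictured as a simple dispatch on file extension. The mapping and strategy names below are assumptions made for the sketch, not the actual configuration of the Degoo suite.

```java
import java.util.Locale;
import java.util.Map;

// Illustrative dispatch of a compression strategy by file extension, in the
// spirit of the file-type specific algorithms described above.
public class CompressorSelector {

    enum Strategy { STORE, DEFLATE_FAST, DEFLATE_MAX, TEXT_SPECIFIC }

    private static final Map<String, Strategy> BY_EXTENSION = Map.of(
            "jpg", Strategy.STORE,          // already compressed, don't waste CPU
            "mp4", Strategy.STORE,
            "txt", Strategy.TEXT_SPECIFIC,  // file-type specific model
            "xml", Strategy.TEXT_SPECIFIC,
            "exe", Strategy.DEFLATE_MAX);

    static Strategy select(String fileName) {
        int dot = fileName.lastIndexOf('.');
        String ext = dot < 0 ? "" : fileName.substring(dot + 1).toLowerCase(Locale.ROOT);
        return BY_EXTENSION.getOrDefault(ext, Strategy.DEFLATE_FAST);
    }
}
```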
27

High Dynamic Range Image Compression of Color Filter Array Data for the Digital Camera Pipeline

Lee, Dohyoung 14 December 2011 (has links)
Typical consumer digital cameras capture the scene by generating a mosaic-like grayscale image, known as a color filter array (CFA) image. One obvious challenge in digital photography is the storage of images, which requires the development of an efficient compression solution. This issue has become more significant due to a growing demand for high dynamic range (HDR) imaging technology, which requires increased bandwidth to allow realistic presentation of the visual scene. This thesis proposes two digital camera pipelines that efficiently encode CFA image data represented in HDR format. Firstly, a lossless compression scheme combining predictive coding with a JPEG XR encoding module is introduced. It achieves efficient data reduction without loss of quality. Secondly, a lossy compression scheme that consists of a series of processing operations and a JPEG XR encoding module is introduced. Performance evaluation indicates that the proposed method delivers high quality images at low computational costs.
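As a hedged illustration of the "predictive coding before JPEG XR" step, the sketch below predicts each Bayer CFA sample from the nearest same-colour sample two columns to its left and keeps the residual; the actual predictor used in the thesis may differ.

```java
// Illustrative same-colour predictor for Bayer CFA samples. In a Bayer
// mosaic, the nearest horizontally adjacent sample of the same colour sits
// two columns away, so each sample is predicted from the pixel two to its
// left and only the residual is kept for subsequent encoding.
public class CfaPredictor {

    // rows x cols mosaic of raw sensor values; returns residuals of the same size.
    static int[][] residuals(int[][] cfa) {
        int rows = cfa.length, cols = cfa[0].length;
        int[][] res = new int[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int predicted = (c >= 2) ? cfa[r][c - 2] : 0;  // same-colour neighbour
                res[r][c] = cfa[r][c] - predicted;
            }
        }
        return res;   // residuals would then be fed to the JPEG XR encoder
    }
}
```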
29

Crosstalk in Stereoscopic LCD 3-D Systems

Feng, Hsin-Chang January 2015 (has links)
Stereoscopic 3-D has received considerable attention over the last few decades. Since a stereoscopic 3-D pair comprises two 2-D images, the amount of data for an uncompressed stereo image is double that of an uncompressed 2-D image, so efficient compression techniques are of paramount importance. However, the crosstalk effect is an inherent, perceptible problem in current 3-D display technologies. It can lead not only to degradation in the perceived quality of 3-D images, but also to discomfort in some individuals. Correspondingly, when crosstalk occurs, compression artifacts in a compressed stereo pair can become perceptible even though such artifacts are imperceptible in the individual left and right images. This dissertation proposes a methodology for visually lossless compression of monochrome stereoscopic 3-D images in which the crosstalk effect is carefully considered. In the proposed methodology, visibility thresholds are measured for quantization distortion in JPEG2000 to conceal perceptible compression artifacts. These thresholds are found to be functions not only of spatial frequency, but also of wavelet coefficient variance, as well as of the gray level in both the left and right images. To avoid a daunting number of visibility-threshold measurements during subjective experiments, a model for visibility thresholds is developed. The left and right images of a stereo pair are then compressed jointly, using the visibility thresholds obtained from the proposed model, to ensure that quantization errors in each image are imperceptible to both eyes. This methodology is demonstrated on a 3-D stereoscopic liquid crystal display (LCD) system with an associated viewing condition. The resulting images are visually lossless when displayed individually as 2-D images, and also when displayed in stereoscopic 3-D mode. To obtain better perceptual quality of stereoscopic 3-D images, hardware-based techniques have been used to reduce crosstalk in 3-D stereoscopic display systems; however, crosstalk is still readily apparent in some 3-D viewing systems. To reduce the crosstalk that remains after hardware compensation, this dissertation also provides a methodology for crosstalk compensation accomplished via image processing, focused on 3-D stereoscopic LCD systems in which active shutter glasses are employed. Subjective experiments indicate that crosstalk is a function not only of the pixel intensity in both the left and right channels, but also of spatial location. Accordingly, look-up tables (LUTs) are developed for spatially-adaptive crosstalk compensation. For a given combination of gray levels in the left and right channels at a specific spatial location, the original pixel values are replaced by values contained in the LUTs. The crosstalk in the resulting stereo pair is significantly reduced, resulting in a significant increase in perceptual image quality.
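The LUT-based compensation described in the abstract can be sketched as a per-pixel table lookup keyed by the left and right gray levels and a spatial region index. The region partitioning and table contents below are placeholders for values that, in the dissertation, come from subjective measurements.

```java
// Sketch of spatially-adaptive, LUT-based crosstalk compensation for the
// left view; a second table (not shown) would compensate the right view.
public class CrosstalkCompensator {

    // lutLeft[region][leftLevel][rightLevel] -> compensated left-view gray level
    static int[][] compensateLeft(int[][] left, int[][] right,
                                  int[][][] lutLeft, int regionSize) {
        int rows = left.length, cols = left[0].length;
        int regionsPerRow = (cols + regionSize - 1) / regionSize;
        int[][] out = new int[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // Index of the spatial region this pixel belongs to.
                int region = (r / regionSize) * regionsPerRow + (c / regionSize);
                out[r][c] = lutLeft[region][left[r][c]][right[r][c]];
            }
        }
        return out;
    }
}
```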
30

Sufixové grafy a bezeztrátová komprese dat / Suffix Graphs and Lossless Data Compression

Senft, Martin January 2013 (has links)
Title: Suffix Graphs and Lossless Data Compression Author: Martin Senft Department: Department of Software and Computer Science Education Supervisor of the doctoral thesis: doc. RNDr. Tomáš Dvořák, CSc., Department of Software and Computer Science Education Abstract: Suffix tree and its variants are widely studied data structures that enable an efficient solution to a number of string problems, but also serve for implementation of data compression algorithms. This work explores the opposite approach: design of compression methods based entirely on properties of suffix graphs. We describe a unified construction algorithm for suffix trie, suffix tree, DAWG and CDAWG, accompanied by analysis of implicit suffix link simulation that yields two practical alternatives. Since the compression applications require maintaining text in a sliding window, an in-depth discussion of sliding suffix graphs is needed. Filling gaps in previously published proofs, we verify that the suffix tree is capable of perfect sliding in amortised constant time. On the other hand, we show that this is not the case with CDAWG, thus resolving a problem of Inenaga et al. Building on these investigations, we describe a family of data compression methods, based on a description of suffix tree construction for the string to be compressed. While some of...
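For readers unfamiliar with the structures named above, a naive suffix trie (the simplest of the suffix graphs the thesis unifies) can be built as below. This quadratic-size sketch only illustrates the structure itself; it does not attempt the compact linear-size variants or the sliding-window maintenance the thesis is actually concerned with.

```java
import java.util.HashMap;
import java.util.Map;

// Naive suffix trie: insert every suffix of the text character by character.
// A string is a substring of the text exactly when it labels a path from the root.
public class SuffixTrie {

    static final class Node {
        final Map<Character, Node> children = new HashMap<>();
    }

    final Node root = new Node();

    SuffixTrie(String text) {
        for (int i = 0; i < text.length(); i++) {      // insert suffix text[i..]
            Node node = root;
            for (int j = i; j < text.length(); j++) {
                node = node.children.computeIfAbsent(text.charAt(j), ch -> new Node());
            }
        }
    }

    boolean containsSubstring(String pattern) {
        Node node = root;
        for (int i = 0; i < pattern.length(); i++) {
            node = node.children.get(pattern.charAt(i));
            if (node == null) return false;
        }
        return true;
    }
}
```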
