About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Hardware Implementation of a Novel Image Compression Algorithm

Sanikomm, Vikas Kumar Reddy 20 January 2006
Image-related communications form an increasingly large part of modern communications, bringing the need for efficient and effective compression. Image compression is important for effective storage and transmission of images. Many techniques have been developed in the past, including transform coding, vector quantization and neural networks. In this thesis, a novel compression technique is introduced, based on adaptive rather than fixed transforms for image compression. The proposed technique is similar to Neural Network (NN)-based image compression, and its superiority over other techniques is presented. It is shown that the proposed algorithm yields higher image quality for a given compression ratio than existing NN algorithms and that its training is significantly faster than that of the NN-based algorithms. The proposed technique is also compared to JPEG in terms of Peak Signal to Noise Ratio (PSNR) for a given compression ratio and computational complexity. Advantages of this idea over JPEG are also presented in this thesis.
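The PSNR figure of merit used in the comparison follows the standard definition; a minimal sketch for 8-bit images (illustrative only, not code from the thesis):

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between an original and a reconstructed image."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Usage: psnr(img, decompressed_img) on two uint8 arrays of the same shape;
# at a fixed compression ratio, a higher PSNR indicates better reconstruction quality.
```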
2

Enhancing data transfer performance in LEO satellite networks : A QUIC and lossless compression approach

Fallström, Ludwig January 2024
Low Earth Orbit (LEO) satellite networks have revolutionized space internet access, offering better network performance than previous alternatives. While they are the best option for space internet access, they do not yet match terrestrial networks in latency and bandwidth. The QUIC transport protocol was developed for the Hypertext Transfer Protocol (HTTP) to reduce page load times and to work better in low-bandwidth and high-loss networks than the Transmission Control Protocol (TCP). Studies have shown that QUIC performs well for small file sizes, which can be achieved by using compression. This thesis investigates whether combining QUIC as a general data transfer protocol with lossless compression enhances encrypted data transmission in a LEO satellite network. To test this, a client-server program is developed and deployed on a LEO satellite network emulator, where files of increasing size are compressed and sent using both QUIC and TCP under various network conditions. Results indicate that QUIC should be paired with lossless compression for file sizes up to 1 MB. The combination should not be used for file sizes above 1 MB in low-loss, high-bandwidth conditions, while it can be used in medium to poor conditions.
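A small sketch of the compress-before-transmission idea tested here (not the thesis's program; the transport itself, whether QUIC or TCP, is omitted and the file name is hypothetical), using Python's built-in zlib:

```python
import zlib

def prepare_payload(data: bytes, level: int = 6) -> bytes:
    """Losslessly compress a payload before handing it to the transport (QUIC or TCP)."""
    return zlib.compress(data, level)

def restore_payload(payload: bytes) -> bytes:
    """Receiver side: decompression recovers the original bytes exactly."""
    return zlib.decompress(payload)

with open("telemetry.bin", "rb") as f:  # hypothetical input file
    raw = f.read()
wire = prepare_payload(raw)
saved = 100.0 * (1.0 - len(wire) / max(len(raw), 1))
print(f"original {len(raw)} B, on the wire {len(wire)} B ({saved:.1f}% saved)")
assert restore_payload(wire) == raw
```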
3

Use of Multi-Threading, Modern Programming Language, and Lossless Compression in a Dynamic Commutation/Decommutation System

Wigent, Mark A., Mazzario, Andrea M., Matsumura, Scott M. October 2011
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada / The Spectrum Efficient Technology Science and Technology (SET S&T) Program is sponsoring the development of the Dynamic Commutation and Decommutation System (DCDS), which optimizes telemetry data transmission in real time. The goal of DCDS is to improve spectrum efficiency, not through improved RF techniques but through changing and optimizing the contents of the telemetry stream during system test. By allowing the addition of new parameters to the telemetered stream at any point during system test, DCDS removes the need to transmit measured data unless it is actually needed on the ground. When compared to serial streaming telemetry, real-time re-formatting of the telemetry stream does require additional processing onboard the test article. DCDS leverages advances in microprocessor technology to perform this processing while meeting the size, weight, and power constraints of the test environment. Performance gains of the system have been achieved by significant multi-threading of the application, allowing it to run on modern multi-core processors. Two other enhancing technologies incorporated into DCDS are the Java programming language and lossless compression.
4

Perceptual Image Compression using JPEG2000

Oh, Han January 2011
Image sizes have increased exponentially in recent years. The resulting high-resolution images are typically encoded in a lossy fashion to achieve high compression ratios. Lossy compression can be categorized into visually lossless and visually lossy compression depending on the visibility of compression artifacts. This dissertation proposes visually lossless coding methods as well as a visually lossy coding method with perceptual quality control. All resulting codestreams are JPEG2000 Part-I compliant. Visually lossless coding is increasingly considered as an alternative to numerically lossless coding. In order to hide compression artifacts caused by quantization, visibility thresholds (VTs) are measured and used for quantization of subbands in JPEG2000. In this work, VTs are experimentally determined from statistically modeled quantization distortion, which is based on the distribution of wavelet coefficients and the dead-zone quantizer of JPEG2000. The resulting VTs are adjusted for locally changing background through a visual masking model, and then used to determine the minimum number of coding passes to be included in a codestream for visually lossless quality under desired viewing conditions. The proposed coding scheme successfully yields visually lossless images at competitive bitrates compared to those of numerically lossless coding and visually lossless algorithms in the literature. This dissertation also investigates changes in VTs as a function of display resolution and proposes a method which effectively incorporates multiple VTs for various display resolutions into the JPEG2000 framework. The proposed coding method allows for visually lossless decoding at resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely, this method can significantly reduce bandwidth usage. Contrary to images encoded in the visually lossless manner, highly compressed images inevitably have visible compression artifacts. To minimize these artifacts, many compression algorithms exploit the varying sensitivity of the human visual system (HVS) to different frequencies, which is typically obtained at the near-threshold level where distortion is just noticeable. However, it is unclear that the same frequency sensitivity applies at the supra-threshold level where distortion is highly visible. In this dissertation, the sensitivity of the HVS for several supra-threshold distortion levels is measured based on the JPEG2000 quantization distortion model. Then, a low-complexity JPEG2000 encoder using the measured sensitivity is described. The proposed visually lossy encoder significantly reduces encoding time while maintaining superior visual quality compared with conventional JPEG2000 encoders.
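A minimal sketch of the JPEG2000-style dead-zone quantization on which the visibility thresholds act (illustrative only; treating a subband's visibility threshold VT directly as the bound on per-coefficient error is a simplification, not the dissertation's exact procedure):

```python
import numpy as np

def deadzone_quantize(coeffs: np.ndarray, step: float) -> np.ndarray:
    """Dead-zone scalar quantizer: q = sign(y) * floor(|y| / step)."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def deadzone_dequantize(indices: np.ndarray, step: float, r: float = 0.5) -> np.ndarray:
    """Reconstruct nonzero bins at (|q| + r) * step; sign(0) = 0, so the dead zone maps back to 0."""
    return np.sign(indices) * (np.abs(indices) + r) * step

# Choosing step <= VT for a subband keeps each coefficient's quantization error
# below VT (the dead-zone bin is the worst case, contributing at most one step).
subband = np.random.randn(8, 8) * 10.0
q = deadzone_quantize(subband, step=2.0)
err = np.abs(subband - deadzone_dequantize(q, step=2.0))
assert err.max() <= 2.0
```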
5

Dictionary-based Compression Algorithms in Mobile Packet Core

Tikkireddy, Lakshmi Venkata Sai Sri January 2019
With the rapid growth in technology, the amount of data to be transmitted and stored is increasing. The efficiency of information retrieval and storage has become a major concern, which is where data compression comes into the picture. Data compression is a technique that effectively reduces the size of data to save storage and speed up its transmission from one place to another. Compression methods are mainly categorized into lossy and lossless compression, with lossless compression often used where the data must be recovered exactly. At Ericsson, the SGSN-MME uses one such technique, Deflate, to compress each user's data independently. Given the trade-off Deflate offers between compression ratio and compression/decompression speed, the algorithm is not optimal for the SGSN-MME's use case. To mitigate this problem, the Deflate algorithm has to be replaced with a better compression algorithm.
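The ratio-versus-speed trade-off can be seen directly with the DEFLATE implementation in Python's zlib; a small sketch, with a synthetic payload standing in for per-user data:

```python
import time
import zlib

payload = b"user-session-record," * 4096  # synthetic stand-in for per-user data

for level in (1, 6, 9):  # fastest, default, best ratio
    t0 = time.perf_counter()
    packed = zlib.compress(payload, level)
    t1 = time.perf_counter()
    unpacked = zlib.decompress(packed)
    t2 = time.perf_counter()
    assert unpacked == payload  # lossless round trip
    ratio = len(packed) / len(payload)
    print(f"level {level}: ratio {ratio:.3f}, "
          f"compress {1e3 * (t1 - t0):.2f} ms, decompress {1e3 * (t2 - t1):.2f} ms")
```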
6

Crosstalk in Stereoscopic LCD 3-D Systems

Feng, Hsin-Chang January 2015
Stereoscopic 3-D has received considerable attention over the last few decades. Since a stereoscopic 3-D pair comprises two 2-D images, the amount of data for an uncompressed stereo image is double that of an uncompressed 2-D image. Thus, efficient compression techniques are of paramount importance. However, the crosstalk effect is an inherent, perceivable problem in current 3-D display technologies. It can lead not only to degradation in the perceived quality of 3-D images, but also to discomfort in some individuals. Correspondingly, when crosstalk occurs, the compression artifacts in a compressed stereo pair can be perceived, despite the fact that such artifacts are imperceptible in the individual left and right images. This dissertation proposes a methodology for visually lossless compression of monochrome stereoscopic 3-D images in which the crosstalk effect is carefully considered. In this methodology, visibility thresholds are measured for quantization distortion in JPEG2000 to conceal perceivable compression artifacts. These thresholds are found to be functions not only of spatial frequency, but also of wavelet coefficient variance, as well as the gray level in both the left and right images. In order to avoid a daunting number of measurements of visibility thresholds during subjective experiments, a model for visibility thresholds is developed. The left and right images of a stereo pair are then compressed jointly using the visibility thresholds obtained from the proposed model to ensure that quantization errors in each image are imperceptible to both eyes. This methodology is then demonstrated via a 3-D stereoscopic liquid crystal display (LCD) system with an associated viewing condition. The resulting images are visually lossless when displayed individually as 2-D images, and also when displayed in stereoscopic 3-D mode. To obtain better perceptual quality of stereoscopic 3-D images, hardware-based techniques have been used to reduce crosstalk in 3-D stereoscopic display systems. However, crosstalk is still readily apparent in some 3-D viewing systems. To reduce the crosstalk that remains after hardware compensation, a methodology for crosstalk compensation accomplished via image processing is also provided in this dissertation. This methodology focuses on crosstalk compensation of 3-D stereoscopic LCD systems in which active shutter glasses are employed. Subjective experiments indicate that crosstalk is a function not only of the pixel intensity in both the left and right channels, but also of spatial location. Accordingly, look-up tables (LUTs) are developed for spatially-adaptive crosstalk compensation. For a given combination of gray levels in the left and right channels at a specific spatial location, the original pixel values are replaced by values contained in the LUTs. The crosstalk in the resulting stereo pair is significantly reduced, resulting in a significant increase in perceptual image quality.
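A minimal sketch of the look-up idea behind spatially-adaptive crosstalk compensation (the identity tables and the four-region split are placeholders, not measured values from the dissertation):

```python
import numpy as np

# One 256x256 table per screen region, mapping a (left, right) gray-level pair to the
# compensated left-channel value; real tables come from subjective measurements.
REGIONS = 4
luts = np.tile(np.arange(256, dtype=np.uint8)[:, None], (REGIONS, 1, 256))  # identity placeholder

def compensate_left(left: np.ndarray, right: np.ndarray, region_map: np.ndarray) -> np.ndarray:
    """Replace each left-channel pixel by its region's LUT entry indexed by (left, right)."""
    return luts[region_map, left, right]

h, w = 4, 6
left = np.random.randint(0, 256, (h, w), dtype=np.uint8)
right = np.random.randint(0, 256, (h, w), dtype=np.uint8)
region_map = np.zeros((h, w), dtype=np.intp)  # which table applies at each pixel location
out = compensate_left(left, right, region_map)
assert out.shape == (h, w)
```

A symmetric pass with a second set of tables would compensate the right channel in the same way.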
7

Comparison of lossy and lossless compression algorithms for time series data in the Internet of Vehicles / Jämförelse av destruktiva och icke-förstörande komprimeringsalgorithmer för tidsseriedata inom fordonens internet

Hughes, Joseph January 2023
As automotive development advances, connectivity features are continually added to vehicles that, in conjunction, form an Internet of Vehicles. For numerous reasons, it is vital for vehicle manufacturers to collect telemetry from their fleets. However, the volume of the generated data is too immense to feasibly be transmitted to a server, due to the CPU and memory limitations of embedded hardware and the monetary cost of cellular network usage. The purpose of this thesis is thus to investigate how these issues can be alleviated by real-time compression of time series data before off-board transmission. A hybrid approach is proposed that results in fast and effective performance on a variety of time series exhibiting varying numerical data features, all while limiting the maximum reconstruction error to a user-specified absolute value. We first perform a literature review to identify state-of-the-art compression algorithms for time series that run online and provide max-error guarantees. We then choose a subset of lossless and lossy algorithms that are implemented and benchmarked with regard to their compression ratio, resource usage, and reconstruction error when used on time series that exhibit a variety of data features. Finally, we ask whether a lossy and a lossless algorithm can be run in succession in order to further increase the compression ratio. The literature review identifies a diverse range of compression algorithms. Out of these, Poor Man's Compression - MidRange (PMC-MR) and Swing filter are selected as lossy algorithms, and Run-length Binary Encoding (RLBE) and Gorilla are selected as lossless algorithms. The experiments yield positive results for the lossy algorithms, which excel on different data sets. These are able to achieve compression ratios between 22.0% and 99.5%, depending on the data set, while limiting the max-error to 1%. In contrast, Gorilla achieves compression ratios between 66.6% and 83.7%, outperforming RLBE in nearly all aspects. Moreover, we conclude that there is a strictly positive improvement to the compression ratio when losslessly compressing the output of lossy compression. When combining either PMC-MR or the Swing filter with Gorilla, we achieve compression ratios between 83.1% and 99.6% across a variety of time series, with a maximum error for any given data point of 1%.
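A minimal sketch of the Poor Man's Compression - MidRange idea (not the thesis's implementation; `epsilon` stands for the user-specified absolute error bound):

```python
from typing import Iterable, List, Tuple

def pmc_midrange(samples: Iterable[Tuple[float, float]], epsilon: float) -> List[Tuple[float, float]]:
    """Compress (timestamp, value) pairs into (end_timestamp, midrange) segments.

    Every original value differs from its segment's midrange by at most epsilon.
    """
    segments: List[Tuple[float, float]] = []
    lo = hi = None
    last_t = None
    for t, v in samples:
        if lo is None:
            lo = hi = v
        elif max(hi, v) - min(lo, v) > 2 * epsilon:
            segments.append((last_t, (lo + hi) / 2))  # close the current segment
            lo = hi = v
        else:
            lo, hi = min(lo, v), max(hi, v)
        last_t = t
    if lo is not None:
        segments.append((last_t, (lo + hi) / 2))
    return segments

# Example: a slowly varying signal collapses to very few segments
data = [(t, 20.0 + 0.001 * t) for t in range(1000)]
print(len(pmc_midrange(data, epsilon=0.5)))  # far fewer segments than samples
```

Feeding the segment stream into a lossless codec such as Gorilla is what gives the combined ratios reported above.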
8

Novel scalable and real-time embedded transceiver system

Mohammed, Rand Basil January 2017
Our society increasingly relies on the transmission and reception of vast amounts of data using serial connections featuring ever-increasing bit rates. In imaging systems, for example, the achievable frame rate is often limited by the serial link between camera and host, even when modern serial buses with the highest bit rates are used. This thesis documents a scalable embedded transceiver system with a bandwidth and interface standard that can be adapted to suit a particular application. This new approach for a real-time scalable embedded transceiver system is referred to as the Novel Reference Model (NRM), which connects two or more applications through a transceiver network in order to provide real-time data to a host system. The transceiver interfaces for which the NRM has been tested include LVDS, GIGE, PMA-direct, Rapid-IO and XAUI, each supporting a specific range of transceiver speeds suited to a particular type of physical medium. The scalable serial link approach has been extended with lossless data compression with the aim of further increasing dataflow at a given bit rate. Two lossless compression methods were implemented, based on Huffman coding and a novel method called the Reduced Lossless Compression Method (RLCM). Both methods are integrated into the scalable transceivers, providing a comprehensive solution for optimal data transmission over a variety of different interfaces. The NRM is implemented on a field programmable gate array (FPGA) using a system architecture that consists of three layers: application, transport and physical. A Terasic DE4 board was used as the main platform for implementing and testing the embedded system, while Quartus-II software and tools were used to design and debug the embedded hardware systems.
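Of the two methods, Huffman coding is the classical one; a compact software sketch of building a Huffman code from byte frequencies (purely illustrative, unrelated to the thesis's RLCM or its FPGA implementation):

```python
import heapq
from collections import Counter
from typing import Dict

def huffman_code(data: bytes) -> Dict[int, str]:
    """Build a Huffman code table mapping byte value -> bit string."""
    freq = Counter(data)
    if not freq:
        return {}
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {sym: "0" for sym in heap[0][2]}  # degenerate case: one distinct symbol
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

msg = b"abracadabra"
table = huffman_code(msg)
encoded = "".join(table[b] for b in msg)
print(table, f"{len(encoded)} bits vs {8 * len(msg)} uncompressed")
```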
9

Evaluation and Hardware Implementation of Real-Time Color Compression Algorithms

Ojani, Amin, Caglar, Ahmet January 2008
A major bottleneck for graphics hardware in mobile devices, both for performance and for power consumption, is the amount of data that needs to be transferred to and from memory. In hardware-accelerated 3D graphics, for example, a large part of the memory accesses is due to large and frequent color buffer data transfers. In a graphics hardware block, color data is typically processed in RGB format. For both 3D graphics rasterization and image composition, several pixels need to be read from and written to memory to generate a single pixel in the frame buffer. This generates a lot of data traffic on the memory interfaces, which impacts both performance and power consumption. Therefore it is important to minimize the amount of color buffer data. One way of reducing the required memory bandwidth is to compress the color data before writing it to memory and decompress it before using it in the graphics hardware block. This compression/decompression must be done “on-the-fly”, i.e. it has to be very fast so that the hardware accelerator does not have to wait for data. In this thesis, we investigated several exact (lossless) color compression algorithms from a hardware implementation point of view, for use in high-throughput hardware. Our study shows that the compression/decompression datapath is readily implementable even under stringent area and throughput constraints. However, the memory interfacing of these blocks is more critical and can dominate.
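One common exact color-buffer compression scheme, shown here purely as an illustration (not necessarily among the algorithms evaluated in the thesis), stores each small tile as a reference pixel plus narrow per-channel deltas and falls back to raw storage when the deltas do not fit:

```python
import numpy as np

def compress_tile(tile: np.ndarray, delta_bits: int = 4):
    """Exact (lossless) tile compression: reference pixel + per-channel deltas.

    tile: (h, w, 3) uint8 RGB block, e.g. 4x4. Returns ("delta", ref, deltas) when
    every delta fits in `delta_bits` signed bits, else ("raw", tile).
    """
    ref = tile[0, 0].astype(np.int16)
    deltas = tile.astype(np.int16) - ref
    lo, hi = -(1 << (delta_bits - 1)), (1 << (delta_bits - 1)) - 1
    if deltas.min() >= lo and deltas.max() <= hi:
        return ("delta", ref, deltas.astype(np.int8))  # deltas would be bit-packed in hardware
    return ("raw", tile)                               # incompressible tile stored as-is

def decompress_tile(encoded):
    if encoded[0] == "raw":
        return encoded[1]
    _, ref, deltas = encoded
    return (ref + deltas.astype(np.int16)).astype(np.uint8)

tile = np.full((4, 4, 3), 120, dtype=np.uint8)
tile[2, 1] = (123, 118, 121)  # small local variation
assert np.array_equal(decompress_tile(compress_tile(tile)), tile)
```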
10

Algorithmes et structures de données compactes pour la visualisation interactive d’objets 3D volumineux / Algorithms and compact data structures for interactive visualization of gigantic 3D objects

Jamin, Clément 25 September 2009
Progressive compression methods are now mature (the compression rates obtained are close to theoretical bounds) and interactive visualization of huge meshes has been a reality for a few years. However, even if the combination of compression and visualization is often mentioned as a prospect, very few papers actually deal with this problem, and the files created by visualization algorithms are often much larger than the originals. In fact, compression favors a small file size to the detriment of fast data access, whereas visualization methods focus on rendering speed: the two goals are opposed and compete with each other. Starting from an existing progressive compression method that is incompatible with selective and interactive refinement and usable only on meshes of modest size, this thesis attempts to reconcile lossless compression and visualization by proposing new algorithms and data structures that reduce the size of the objects while supporting fast, interactive visualization. In addition to this dual capability, the proposed method works out-of-core and can handle meshes containing several hundred million vertices. Furthermore, it has the advantage of handling any n-dimensional simplicial complex, from triangle soups to volumetric meshes.
