141

A Review of Compression Testing Procedure with Reference to a Trenton Limestone

Wilson, Keith January 1967 (has links)
142

A New Audio Compression Scheme that Leverages Repetition in Music

Lotfaliei, Matin 17 August 2022 (has links)
Music frequently exhibits regular repeating structure because of musical beats, vamps, and rhythms. In this thesis, an audio compression algorithm that takes advantage of this structure is described. After providing some background, a simplified implementation of a perceptual audio compression algorithm is described and then modified to explore this idea. The newly proposed algorithm is evaluated using a publicly available dataset. The results show that the proposed algorithm can improve the compression ratio while retaining audio quality on rhythmic mixtures and background music. Better audio compression has potential applications in virtual music performance. / Graduate
143

Adding lossless compression to MPEGs

Jiang, Jianmin, Xiao, G. January 2006 (has links)
In this correspondence, we propose to add a lossless compression functionality to existing MPEGs by developing a new context tree to drive arithmetic coding for lossless video compression. In comparison with existing work on context tree design, the proposed algorithm features: 1) prefix sequence matching to locate the statistics model at the internal node nearest to the stopping point, where the successful match of the context sequence is broken; 2) traversal of the context tree along a fixed order of context structure with a maximum of four motion-compensated errors; and 3) context thresholding to quantize the higher end of error values into a single statistics cluster. As a result, the proposed algorithm achieves competitive processing speed, low computational complexity, and high compression performance, bridging the gap between universal statistics modeling and practical compression techniques. Extensive experiments show that the proposed algorithm outperforms JPEG-LS by up to 24% and CALIC by up to 22%, with processing times ranging from under 2 seconds to 6 seconds per frame on a typical PC computing platform.
144

Study of Uniaxial Compression Process for AA7075 Aluminum Alloy

Liang, Xiao January 2018 (has links)
An experimental and complementary FE modeling study was conducted to characterize the room- and elevated-temperature uniaxial compressive deformation behavior of AA7075-T6 and O-temper materials. The experiments consisted of testing cylindrical and cubic specimens prepared from a rolled and heat-treated plate stock of AA7075 alloy. The tests were conducted in the lower range of elevated temperatures up to 300 °C, at several different test speeds in rolling and transverse orientations, as well as under isothermal and non-isothermal test conditions. The test results were analyzed in terms of the true stress-strain responses of the two tempers under the above experimental conditions. The deformed test specimens were also observed for surface features and deformed microstructures in the interior of the specimen under the above experimental conditions. A suitable strain rate and temperature dependent constitutive hardening law, in the form of a modified Voce-Kocks law, was developed and coded as a UMAT subroutine in the ABAQUS FE code to simulate the uniaxial compression experiments and compare the experimental and model results. In general, good agreement was obtained between experiments and model predictions. A suitable fracture criterion, in the form of a Tresca fracture model, was also implemented as a VUMAT subroutine in the ABAQUS FE code to simulate the uniaxial compression experiments and predict fracture mode and other characteristics. Once again, good agreement was obtained between room temperature fracture shapes and model predictions. The experimental and model results collectively provide a broad-based understanding of the effects of temperature, strain rate, material anisotropy, and temperature field on material flow and deformed shapes of cylindrical and cubic specimens, the nature of deformation (predominantly shear along two intersecting shear directions), and fracture (predominantly shear). 
The constraints to deformation at the corners in the cubic specimen yielded rather complex curvature development in deformed cubes. The non-isothermal rapid heating of test specimens using electrical resistance heating and subsequent compression of the specimen provided results similar to the isothermal case. However, the electrical resistance method offers a cost-effective process to form smaller high quality components in forming modes such as hot upsetting. / Thesis / Master of Applied Science (MASc)
145

An Image Compression Approach to Cooperative Processing for Swarming Autonomous Underwater Vehicles

Hutchison, Caroline Anne 08 September 2008 (has links)
Current wireless underwater communication technologies (i.e., underwater acoustic modems) are extremely bandwidth-limited compared to land-based wireless technologies. Additionally, acoustic modem technology is not advancing at the same high rate as computing technology. Therefore, it is proposed that image compression techniques be applied to sonar maps. This reduces the amount of information that must be transferred by these modems, which in turn reduces the time required to send information across acoustic channels. After compression is performed on one platform's map, the information is transformed into the coordinate system of the uncompressed second, non-collocated platform's map, and the two maps are additively compared. Returns common to both maps show up with higher energy than the individual maps' returns. This thesis demonstrates that applying image compression techniques to range-angle maps allows for target detection, down to a minimum target strength value of 0 dB, independent of target return strength. / Master of Science
146

Controlling compression, from recording to mastering: an integrated approach to examining problems related to the loss of dynamic range caused by compression during mastering

Chiriac, Dragos 20 April 2018 (has links)
This study investigates means of obtaining loud mixes, particularly in musical genres where such practices are pertinent (electronic music and hip-hop, in our case), while avoiding the usual disadvantages and defects of loud mixes. In other words, we seek ways to achieve tunes that are loud in terms of the listener's subjective perception (loudness), without the clipping distortion (the horizontal flattening of waveform peaks) usually associated with heavy use of compressors and limiters, and without losing too much dynamic variety and/or dynamic range (the gap between the lowest and highest dB values in a mix). To this end, we tried three different yet parallel autonomous methods: 1) sound design driven by the subjective perception of loudness (psychoacoustics); 2) dynamic control of percussion in anticipation of compression at the mastering stage; 3) mixing and mastering at the same time.
147

Improving Image Compression through the Optimization of Move-to-Front using the Genetic Algorithm

Keshavarzkuhjerdi, Maliheh 16 January 2024 (has links)
The Burrows-Wheeler Transform (BWT) enables excellent data compression by sorting the source character sequence, grouping similar data to enhance compression. Although the Move-to-Front (MTF) algorithm does not compress data by itself, its synergy with the classic or enhanced Huffman algorithm improves compression performance. MTF, by increasing the frequency of small values in the coded data, combined with the character rearrangement performed by BWT, creates an optimal framework for efficient Huffman coding. 
However, the classic version of MTF has its limits: it uniformly promotes accessed symbols to the front of the list, possibly neglecting structures present in the input text. By introducing an optimal promotion policy that carefully reorders symbols according to inherent data patterns, one can accentuate the asymmetry of the value distribution, enhancing compatibility with the Huffman algorithm. This improved method has demonstrated its ability to compress complex, non-linear images with notable efficiency, although the advantage obtained diminishes for images with simpler geometry.
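The abstract above describes classic MTF as the baseline being improved. As an illustration of why MTF pairs well with BWT and Huffman coding, here is a minimal sketch of the classic transform (the thesis's optimized promotion policy is not reproduced here): each symbol is replaced by its index in a recency list, so the repeated runs that BWT produces become runs of small values.

```python
def mtf_encode(data, alphabet_size=256):
    """Classic Move-to-Front: emit each symbol's current index in a
    recency list, then move that symbol to the front. Repeated symbols
    encode as zeros, skewing the value distribution toward small numbers."""
    table = list(range(alphabet_size))
    out = []
    for b in data:
        i = table.index(b)
        out.append(i)
        table.pop(i)
        table.insert(0, b)
    return out

# BWT-style runs of repeated symbols map to runs of zeros:
print(mtf_encode(b"aaabbbccc"))  # → [97, 0, 0, 98, 0, 0, 99, 0, 0]
```

The skewed output (many zeros, few large values) is exactly the kind of distribution that a Huffman coder exploits; the thesis's contribution is a smarter promotion policy than the uniform move-to-front shown here.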
148

Compression therapy for venous ulcers

Vowden, Kath, Vowden, Peter 01 October 2008 (has links)
International consensus on compression therapy has been reached. Kathryn Vowden and Professor Peter Vowden discuss the guidance.
149

Evaluation and Hardware Implementation of Real-Time Color Compression Algorithms

Ojani, Amin, Caglar, Ahmet January 2008 (has links)
A major bottleneck, for performance as well as power consumption, in graphics hardware for mobile devices is the amount of data that needs to be transferred to and from memory. In hardware-accelerated 3D graphics, for example, a large part of the memory accesses are due to large and frequent color buffer data transfers. In a graphics hardware block, color data is typically processed using the RGB color format. For both 3D graphics rasterization and image composition, several pixels need to be read from and written to memory to generate a pixel in the frame buffer. This generates a lot of traffic on the memory interfaces, which impacts both performance and power consumption. Therefore, it is important to minimize the amount of color buffer data. One way of reducing the required memory bandwidth is to compress the color data before writing it to memory and decompress it before using it in the graphics hardware block. This compression/decompression must be done "on-the-fly", i.e. it has to be very fast so that the hardware accelerator does not have to wait for data. In this thesis, we investigated several exact (lossless) color compression algorithms from a hardware implementation point of view, for use in high-throughput hardware. Our study shows that the compression/decompression datapath is readily implementable even under stringent area and throughput constraints. However, the memory interfacing of these blocks is more critical and can dominate the overall cost.
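The specific codecs evaluated in the thesis are not named in the abstract, but the bandwidth-saving principle can be sketched. The following is an illustrative model, not an actual GPU codec: predict each RGB pixel from its left neighbour and losslessly compress the residuals, exploiting the spatial coherence of color buffers (the `zlib` entropy coder stands in for whatever hardware-friendly coder a real design would use).

```python
import zlib

def compress_color_tile(pixels):
    """Illustrative exact (lossless) color compression: left-neighbour
    prediction per RGB channel, mod-256 residuals, entropy coding.
    Coherent color buffers give small residuals that pack tightly."""
    residuals = bytearray()
    prev = (0, 0, 0)
    for px in pixels:
        residuals.extend((c - p) & 0xFF for c, p in zip(px, prev))
        prev = px
    return zlib.compress(bytes(residuals))

def decompress_color_tile(blob):
    """Invert the prediction to recover the exact pixel values."""
    data = zlib.decompress(blob)
    pixels, prev = [], (0, 0, 0)
    for k in range(0, len(data), 3):
        px = tuple((d + p) & 0xFF for d, p in zip(data[k:k + 3], prev))
        pixels.append(px)
        prev = px
    return pixels
```

Note that a stream coder like this illustrates only the "exact compression" half of the problem; the thesis's harder finding is that the memory interfacing of such blocks, not the datapath, tends to dominate the hardware cost.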
150

APPLICATION OF DATA COMPRESSION TO FRAME AND PACKET TELEMETRY

Horan, Stephen, Horan, Sheila B. October 2003 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Reduction of signal transmission is of paramount concern to many in the telemetry and wireless industry. One technique that is available is the compression of the data before transmission. With telemetry type data, there are many approaches that can be used to achieve compression. Data compression of the Advanced Range Telemetry (ARTM) PCM data sets in the frame and packet modes, and for the entire data file will be considered and compared. The technique of differencing data will also be applied to the data files by subtracting the previous major frame and then applying compression techniques. It will be demonstrated that telemetry compression is a viable option to reduce the amount of data to be transmitted, and hence the bandwidth. However, this compression produces variable-length data segments with implications for real-time data synchronization.
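The differencing technique described above (subtracting the previous major frame before compressing) can be sketched as follows. This is a minimal illustration assuming byte-oriented frames and using `zlib` as a stand-in for whatever compressor the paper applied; the ARTM data sets themselves are not reproduced here.

```python
import zlib

def compress_frames(frames):
    """Difference each major frame against the previous one (byte-wise,
    mod 256), then compress losslessly. Slowly varying telemetry
    channels yield long runs of zeros that compress well."""
    prev = bytes(len(frames[0]))
    diffed = bytearray()
    for frame in frames:
        diffed.extend((b - p) & 0xFF for b, p in zip(frame, prev))
        prev = frame
    return zlib.compress(bytes(diffed))

def decompress_frames(blob, frame_len):
    """Invert the differencing to recover the original major frames."""
    diffed = zlib.decompress(blob)
    frames, prev = [], bytes(frame_len)
    for k in range(0, len(diffed), frame_len):
        frame = bytes((d + p) & 0xFF
                      for d, p in zip(diffed[k:k + frame_len], prev))
        frames.append(frame)
        prev = frame
    return frames
```

As the abstract notes, the compressed output is variable-length, so a real telemetry link would still need framing or synchronization markers around each compressed segment.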