1

Adaptive Transform Coding of Images Using a Mixture of Principal Components

Dony, Douglas Robert 07 1900 (has links)
<p>The optimal linear block transform for coding images is well known to be the Karhunen-Loève transformation (KLT). However, the assumption of stationarity in the optimality condition is far from valid for images. Images are composed of regions whose local statistics may vary widely across an image. A new approach to data representation, a mixture of principal components (MPC), is developed in this thesis. It combines advantages of both principal components analysis and vector quantization and is therefore well suited to the problem of compressing images. The author proposes a number of new transform coding methods which optimally adapt to such local differences based on neural network methods using the MPC representation. The new networks are modular, consisting of a number of modules corresponding to different classes of the input data. Each module consists of a linear transformation whose bases are calculated during an initial training period. The appropriate class for a given input vector is determined by an optimal classifier. The performance of the resulting adaptive networks is shown to be superior to that of the optimal nonadaptive linear transformation, both in terms of rate-distortion performance and computational complexity. When applied to the problem of compressing digital chest radiographs, compression ratios of between 30:1 and 40:1 are possible without any significant loss in image quality. In addition, the quality of the images was consistently judged to be as good as or better than that of the KLT at equivalent compression ratios.</p> <p>The new networks can also be used as segmentors, with the resulting segmentation being independent of variations in illumination. In addition, the organization of the resulting class representations is analogous to the arrangement of the directionally sensitive columns in the visual cortex.</p> / Thesis / Doctor of Philosophy (PhD)
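The classify-by-reconstruction-error idea behind an MPC representation can be sketched as follows. This is an illustrative reconstruction of the general technique, not the author's implementation: the function names, the k-means-style training loop, and the zero-mean assumption on input blocks are all assumptions.

```python
import numpy as np

def classify(block, bases):
    # Optimal classifier for this model: the class whose subspace
    # reconstructs the block with the smallest squared error wins.
    errs = [np.inf if B is None
            else np.sum((block - B.T @ (B @ block)) ** 2) for B in bases]
    return int(np.argmin(errs))

def train_mpc(blocks, n_classes, n_components, n_iters=10, seed=0):
    # Alternate between fitting one PCA basis per class and reassigning
    # each block to the class that reconstructs it best
    # (a k-means-like loop; blocks are assumed roughly zero-mean).
    blocks = np.asarray(blocks, dtype=float)
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_classes, size=len(blocks))
    bases = [None] * n_classes
    for _ in range(n_iters):
        for c in range(n_classes):
            members = blocks[labels == c]
            if len(members) == 0:
                continue  # keep the previous basis for an empty class
            _, _, vt = np.linalg.svd(members, full_matrices=False)
            bases[c] = vt[:n_components]  # top principal directions
        labels = np.array([classify(b, bases) for b in blocks])
    return bases
```

Each module's basis is trained only on the blocks assigned to it, which is how the scheme adapts to locally varying statistics.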
2

Sinusoidal coding of speech at very low bit rates

Sun, Xiaoqin January 1996 (has links)
No description available.
3

Regression Wavelet Analysis for Lossless Coding of Remote-Sensing Data

Marcellin, Michael W., Amrani, Naoufal, Serra-Sagristà, Joan, Laparra, Valero, Malo, Jesus 08 May 2016 (has links)
A novel wavelet-based scheme to increase coefficient independence in hyperspectral images is introduced for lossless coding. The proposed regression wavelet analysis (RWA) uses multivariate regression to exploit the relationships among wavelet-transformed components. It builds on our previous nonlinear schemes that estimate each coefficient from neighbor coefficients. Specifically, RWA performs a pyramidal estimation in the wavelet domain, thus reducing the statistical relations in the residuals and the energy of the representation compared to existing wavelet-based schemes. We propose three regression models to address the issues concerning estimation accuracy, component scalability, and computational complexity. Other suitable regression models could be devised for other goals. RWA is invertible, it allows a reversible integer implementation, and it does not expand the dynamic range. Experimental results over a wide range of sensors, such as AVIRIS, Hyperion, and Infrared Atmospheric Sounding Interferometer, suggest that RWA outperforms not only principal component analysis and wavelets but also the best and most recent coding standard in remote sensing, CCSDS-123.
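The estimate-and-code-the-residual structure that RWA describes can be illustrated with a one-level Haar transform across spectral bands and an ordinary least-squares predictor. This is a hedged sketch, not the published algorithm: real RWA uses a reversible integer transform so that coding is truly lossless, whereas this floating-point version only shows the structure (it is exactly invertible up to rounding error).

```python
import numpy as np

def haar_level(x):
    # One Haar level across the spectral axis; x is (n_bands, n_pixels).
    a = (x[0::2] + x[1::2]) / 2.0   # approximation (pairwise means)
    d = (x[0::2] - x[1::2]) / 2.0   # detail (pairwise half-differences)
    return a, d

def rwa_forward(x):
    a, d = haar_level(x)
    # Least-squares regression: predict each detail band from the
    # approximation bands; only the residuals need to be coded.
    w, *_ = np.linalg.lstsq(a.T, d.T, rcond=None)
    resid = d - (a.T @ w).T
    return a, resid, w

def rwa_inverse(a, resid, w):
    # The decoder sees a and w, so it can rebuild d and undo the Haar step.
    d = resid + (a.T @ w).T
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2] = a + d
    x[1::2] = a - d
    return x
```

Because the predictor sees only the approximation half, the residuals carry strictly less energy than the original detail coefficients, which is where the coding gain comes from.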
4

DATA COMPRESSION SYSTEM FOR VIDEO IMAGES

RAJYALAKSHMI, P.S., RAJANGAM, R.K. 10 1900 (has links)
International Telemetering Conference Proceedings / October 13-16, 1986 / Riviera Hotel, Las Vegas, Nevada / In most transmission channels, bandwidth is at a premium, and an important attribute of any good digital signalling scheme is to optimally utilise the bandwidth for transmitting the information. A data compression system therefore plays a significant role in the transmission of picture data from any remote sensing satellite by exploiting the statistical properties of the imagery. The data rate required for transmission to the ground can be reduced by using a suitable compression technique. A data compression algorithm has been developed for processing the images of the Indian Remote Sensing Satellite. Sample LANDSAT imagery and a reference photo are used for evaluating the performance of the system. The reconstructed images are obtained after compression to 1.5 bits per pixel and 2 bits per pixel, as against the original 7 bits per pixel. The technique used is a uni-dimensional Hadamard transform. Histograms are computed for the various pictures used as samples. This paper describes the development of the hardware and software system and also indicates how the hardware can be adapted for a two-dimensional Hadamard transform.
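The uni-dimensional Hadamard transform mentioned above is straightforward to reproduce; a minimal sketch (function names are our own) builds the Sylvester-construction matrix and applies it blockwise to a row of pixels. The compression itself comes from coarsely quantizing the resulting coefficients, which is omitted here.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def transform_rows(pixels, H):
    # Forward 1-D transform applied to consecutive length-n blocks.
    n = H.shape[0]
    return pixels.reshape(-1, n) @ H.T

def inverse_rows(coeffs, H):
    # H @ H.T = n * I, so dividing by n inverts the transform.
    n = H.shape[0]
    return (coeffs @ H / n).reshape(-1)
```

The transform uses only additions and subtractions (all matrix entries are ±1), which is why it was attractive for 1980s satellite hardware.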
5

Adaptive hybrid (motion compensated interframe transform) coding technique for multiframe image data

Peck, Minsok January 1987 (has links)
No description available.
6

A New Feature Coding Scheme for Video-Content Matching Tasks

Qiao, Yingchan January 2017 (has links)
This thesis presents a new feature coding scheme for video-content matching tasks. The purpose of this feature coding scheme is to compress features under a strict bitrate budget. Features contain two parts of information: the descriptors and the feature locations. We propose a variable-level scalar quantizer for descriptors and a variable-block-size location coding scheme for feature locations. For descriptor coding, the SIFT descriptors are transformed using the Karhunen-Loève Transform (KLT). This K-L transformation matrix is trained using the descriptors extracted from the 25K-MIRFLICKR image dataset. The quantization of descriptors is applied after descriptor transformation. Our proposed descriptor quantizer allocates different bitrates to the elements in the transformed descriptor according to the sequence order. We establish the correlation between the descriptor quantizer distortion and the video matching performance, given a strict bitrate budget. Our feature location coding scheme is built upon the location histogram coding method. Instead of using a uniform block size, we use blocks of different sizes to quantize different areas of a video frame. We have achieved nearly a 50% reduction in the bitrate allocated for location information compared to the bitrate allocated by coding schemes that use a uniform block size. With this location coding scheme, we achieve almost the same video matching performance as uniform-block-size coding. By combining the descriptor and location coding schemes, experimental results show that the overall feature coding scheme achieves excellent video matching performance. / Thesis / Master of Applied Science (MASc)
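The descriptor pipeline described above, a KLT learned from training descriptors followed by a variable-level scalar quantizer that spends more bits on earlier (higher-variance) transformed elements, can be sketched as follows. This is an assumed illustration, not the thesis code; the bit-allocation vector and the [-1, 1] step-size convention are placeholders.

```python
import numpy as np

def learn_klt(train):
    # KLT basis = eigenvectors of the training covariance,
    # ordered by decreasing variance.
    mean = train.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov((train - mean).T))
    order = np.argsort(vals)[::-1]
    return mean, vecs[:, order]

def quantize(desc, mean, basis, bits):
    # Variable-level scalar quantizer: element i gets 2**bits[i] levels
    # over an assumed [-1, 1] range (placeholder convention).
    y = basis.T @ (desc - mean)
    step = 2.0 ** (1 - np.asarray(bits, dtype=float))
    return np.round(y / step).astype(int), step

def dequantize(codes, step, mean, basis):
    return basis @ (codes * step) + mean
```

Allocating bits by sequence order works because the KLT concentrates variance in the first components, so later elements tolerate coarser quantization.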
7

Amélioration de codecs audio standardisés avec maintien de l'interopérabilité / Improving standardized audio codecs while maintaining interoperability

Lapierre, Jimmy January 2016 (has links)
/ Abstract : Digital audio applications have grown exponentially during the last decades, in good part because of the establishment of international standards. However, imposing such norms necessarily introduces hurdles that can impede the improvement of technologies that have already been deployed, potentially leading to a proliferation of new standards. This thesis shows that existing coders can be better exploited by improving their quality or their bitrate, even within the rigid constraints posed by established standards. Three aspects are studied: enhancement of the encoder, of the decoder, and of the bit stream. In every case, compatibility with the other elements of the existing coder is maintained. Thus, it is shown that the audio signal can be improved at the decoder without transmitting new information, that an encoder can produce an improved signal without modifying its decoder, and that a bit stream can be optimized for a new application. In particular, this thesis shows that even a standard like G.711, which has been deployed for decades, has the potential to be significantly improved after the fact; this contribution even served as the core for a new embedded coding standard that had to maintain that compatibility. It is also shown that the subjective and objective audio quality of the AAC (Advanced Audio Coding) decoder can be improved, without adding any extra information from the encoder, by better exploiting knowledge of the coding model's limitations. Finally, it is shown that the fixed-rate bit stream of AMR-WB+ (Extended Adaptive Multi-Rate Wideband) can be compressed more efficiently in a variable-bit-rate scenario, showing the need to adapt a coder to its use case.
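G.711, mentioned above, is built on logarithmic companding. A sketch of the idealized μ-law curve behind its μ-law variant shows how 8 bits can cover a wide dynamic range; note that the real standard uses a piecewise-linear 8-bit approximation of this curve, not the continuous formula below.

```python
import numpy as np

MU = 255.0  # mu-law constant used by G.711

def mulaw_encode(x):
    # Idealized continuous mu-law companding curve, x in [-1, 1].
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mulaw_decode(y):
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

def codec_8bit(x):
    # Quantize the companded value to 256 uniform levels (8 bits).
    q = np.round((mulaw_encode(x) + 1.0) * 127.5)
    return mulaw_decode(q / 127.5 - 1.0)
```

Companding before quantization makes the effective step size small for quiet samples and large for loud ones, matching the ear's roughly logarithmic loudness perception.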
8

Application interference analysis: Towards energy-efficient workload management on heterogeneous micro-server architectures

Hähnel, Markus, Arega, Frehiwot Melak, Dargie, Waltenegus, Khasanov, Robert, Castrillo, Jeronimo 11 May 2023 (has links)
The ever-increasing demand for Internet traffic, storage and processing requires an ever-increasing amount of hardware resources. In addition, infrastructure providers over-provision system architectures to serve users at peak times without performance delays. Over-provisioning leads to underutilization and thus to unnecessary power consumption. Therefore, there is a need for workload management strategies to map and schedule different services simultaneously in an energy-efficient manner without compromising performance, especially for heterogeneous micro-server architectures. This requires statistical models of how services interfere with each other, thereby affecting both performance and energy consumption. Indeed, the performance-energy behavior when mixing workloads is not well understood. This paper presents an interference analysis for heterogeneous workloads (i.e., CPU- and memory-intensive) on a big.LITTLE MPSoC architecture. We employ state-of-the-art tools to generate multiple single-application mappings and characterize the interference between two different services. We observed a performance degradation factor between 1.1 and 2.5. For some configurations, executing on different clusters resulted in reduced energy consumption with no performance penalty. This kind of detailed analysis gives us first insights towards more general models for future workload management systems.
9

Výukový video kodek / Educational video codec

Dvořák, Martin January 2012 (has links)
The first goal of this diploma thesis is to study the basic principles of video signal compression and to introduce the techniques used to reduce irrelevancy and redundancy in the video signal. The second goal is, building on this knowledge of compression tools, to implement the individual tools in the Matlab programming environment and to assemble a simple model of a video codec. The thesis describes the three basic blocks of the MPEG-2 standard, namely interframe coding, intraframe coding, and variable-length coding.
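The intraframe-coding block of such an MPEG-2-style codec rests on an 8×8 DCT followed by quantization. A minimal sketch is given below (the thesis works in Matlab; this Python version, its names, and its single uniform quantizer step are assumptions, where MPEG-2 actually uses a frequency-dependent quantization matrix plus zig-zag scanning and variable-length coding).

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def intra_code(block, q=16):
    # 2-D DCT followed by uniform quantization with step q.
    C = dct_matrix(block.shape[0])
    return np.round(C @ block @ C.T / q)

def intra_decode(qcoeffs, q=16):
    # Dequantize and apply the inverse (transposed) DCT.
    C = dct_matrix(qcoeffs.shape[0])
    return C.T @ (qcoeffs * q) @ C
```

Raising q discards more high-frequency detail, trading reconstruction quality for fewer bits, which is the basic rate-distortion knob of intraframe coding.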
