About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

Novel Applications Of Fractal Compression And Wavelet Analysis For Partial Discharge Pattern Classification

Lalitha, E M 05 1900 (has links) (PDF)
No description available.
262

On Perception-Based Image Compression Schemes

Ramasubramanian, D 03 1900 (has links) (PDF)
No description available.
263

Représentation et compression à haut niveau sémantique d’images 3D / Representation and compression at high semantic level of 3D images

Samrouth, Khouloud 19 December 2014 (has links)
Dissemination of multimedia data, and of images in particular, continues to grow very significantly, so the development of effective image coding schemes remains a very active research area. Today, one of the most innovative technologies in this area is 3D display. 3D technology is widely used in many domains such as entertainment, medical imaging, education and, very recently, criminal investigations. There are different ways of representing 3D information; one of the most common is to associate a depth image with a classic colour image called the texture. This joint representation allows a good 3D reconstruction when the two images are well correlated, especially along the contours of the depth image. Compared with conventional 2D images, knowledge of the depth of field for 3D images therefore provides important semantic information about the composition of the scene. In this thesis, we propose a scalable 3D image coding scheme for the 2D-plus-depth representation with advanced functionalities, which preserves all the semantics present in the images while maintaining significant coding efficiency. Preserving the semantics translates into features such as automatic extraction of regions of interest, the ability to encode regions of interest with higher quality than the background, post-production of the scene, and indexing. First, we introduce a joint and scalable texture-plus-depth coding scheme: the texture is coded jointly with the depth at low resolution, and a depth-compression method suited to the characteristics of depth maps is proposed. This method exploits the strong correlation between the depth map and the texture to encode the depth map more efficiently, and a high-resolution coding stage then refines the texture quality. Next, we present a global fine-representation and content-based coding scheme built on the notion of "Depth of Interest" (DoI), called "3D Autofocus". It consists in a fine extraction of objects that preserves the contours in the depth map, and it automatically focuses on a particular depth zone for high rendering quality. Finally, we propose a region-based segmentation algorithm for 3D images, providing strong consistency between the colour, the depth and the regions of the scene. Based on a joint exploitation of the colour and depth information, this algorithm segments the scene with a level of granularity that depends on the intended application. Given such a region-based representation, the same 3D Autofocus principle can be applied directly for Depth of Interest extraction and coding. The most remarkable property of both approaches is that they ensure full spatial coherence between texture, depth and regions, minimising distortions along the contours of objects of interest and thus yielding higher quality in the synthesized views.
264

Efektivní nástroj pro kompresi obrazu v jazyce Java / JAVA-based effective implementation of an image compression tool

Průša, Zdeněk January 2008 (has links)
This diploma thesis deals with lossy compression of digital images. Lossy compression in general introduces some kind of distortion into the resulting image; the distortion should not be disturbing or, in the better case, even noticeable. Image analysis uses a process called transformation, and the selection of relevant coefficients a process called coding. Image quality can be evaluated by objective or subjective methods. An encoder is introduced and implemented in this work. It uses the two-dimensional wavelet transform and the SPIHT algorithm for coefficient coding, and employs an accelerated method of wavelet-transform computation, the lifting scheme. The coder can process the colour information of images using a modified version of the original SPIHT algorithm. The JAVA programming language was used for the implementation; object-oriented design principles were followed, so the program is easy to extend. Demonstration pictures show the effectiveness and the characteristic distortion of the proposed coder at high compression rates.
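The lifting scheme mentioned in the abstract above can be illustrated with a minimal sketch of one analysis/synthesis level of the Haar wavelet transform. This is a plain-Python illustration under our own assumptions, not the thesis's JAVA implementation; function names are hypothetical.

```python
def haar_lifting(signal):
    """One level of the Haar wavelet transform via lifting:
    split into even/odd samples, predict the odds from the evens
    (detail coefficients), then update the evens with the details
    (approximation coefficients). Unnormalized, illustrative only."""
    even = signal[0::2]
    odd = signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]           # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]    # update step
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order to recover the signal."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Because each lifting step is exactly invertible, the reconstruction is perfect; the speed-up comes from computing the transform in place with a few additions per sample.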
265

Lossless Image compression using MATLAB : Comparative Study

Kodukulla, Surya Teja January 2020 (has links)
Context: Image compression is one of the key applications in the commercial, research, defence and medical fields. Larger image files cannot be processed or stored quickly and efficiently, so compressing images while maintaining the maximum quality possible is very important for real-world applications. Objectives: Lossy compression is widely popular for image compression and is used in commercial applications. In order to work efficiently with images, the quality in many situations needs to be high while the file size stays comparatively low. Hence, lossless compression algorithms are compared in this study to determine which algorithm retains quality while achieving a decent compression ratio. Method: The lossless algorithms compared are LZW, RLE, Huffman, DCT in lossless mode, and DWT. The techniques are implemented in MATLAB using the Image Processing Toolbox, and the compressed images are compared for subjective image quality. The images are compressed with emphasis on maintaining quality rather than on diminishing file size. Result: The LZW algorithm produces binary images, failing in this implementation to produce a lossless image. The Huffman and RLE algorithms, both based on redundancy reduction, produce similar results with compression ratios in the range of 2.5 to 3.7. The DCT and DWT algorithms compress every element in the matrix defined for the images, maintaining lossless quality with compression ratios in the range of 2 to 3.5. Conclusion: The DWT algorithm is the most efficient way to compress an image losslessly among those studied; as wavelets are used, all the elements in the image are compressed while retaining the quality. Huffman and RLE produce lossless images, but for a large variety of images, some images may not be compressed with complete efficiency.
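As an illustration of the redundancy-reduction idea behind RLE and of how a compression ratio is computed, here is a minimal, hypothetical Python sketch (the study above used MATLAB; names and the 2-bytes-per-pair encoding are our own assumptions):

```python
def rle_encode(data: bytes):
    """Run-length encode a byte string as (value, run_length) pairs,
    with runs capped at 255 so each pair fits in two bytes."""
    pairs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        pairs.append((data[i], j - i))
        i = j
    return pairs

def rle_decode(pairs):
    """Expand (value, run_length) pairs back to the original bytes."""
    return bytes(b for v, n in pairs for b in [v] * n)

def compression_ratio(original: bytes, pairs) -> float:
    """Original size divided by encoded size (2 bytes per pair)."""
    return len(original) / (2 * len(pairs))
```

Decoding recovers the input exactly, which is what makes RLE lossless; the ratio is high only when the image contains long runs of identical values, which matches the study's observation that RLE is not equally efficient on all images.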
266

[pt] COMPRESSÃO COM PERDAS, DE IMAGENS OBTIDAS POR SATÉLITES DE SENSORIAMENTO REMOTO, PARA TRANSMISSÃO EM CANAL COM RUÍDO / [en] LOSSY COMPRESSION OF REMOTE SENSING IMAGES FOR TRANSMISSION OVER NOISY CHANNEL

ARMANDO TEMPORAL NETO 10 November 2005 (has links)
This thesis presents a study of remote-sensing image compression for transmission over a noisy channel. The images are captured by a remote-sensing satellite and transmitted to an earth station; compression is required to save bandwidth and transmission power. Some very good image-compression techniques present serious problems in the presence of noise, so the vector quantization technique was chosen for this work. Using the idea of multi-stage vector quantization, a mean-removed compression scheme is proposed, in which the information contained in the image is separated and treated differently according to its importance. An analysis of the remote-sensing satellite link design is then carried out, comparing the scheme currently in use with the proposed one.
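The mean-removed vector quantization idea described above can be sketched as follows: the block mean is transmitted as a scalar, and only the residual is matched against a codebook. The toy codebook and function names are hypothetical, for illustration only.

```python
def mean_removed_vq(block, codebook):
    """Mean-removed VQ of one image block: transmit the block mean
    (scalar) plus the index of the nearest residual codeword."""
    mean = sum(block) / len(block)
    residual = [x - mean for x in block]

    def dist(codeword):
        # squared Euclidean distance between residual and codeword
        return sum((r - c) ** 2 for r, c in zip(residual, codeword))

    index = min(range(len(codebook)), key=lambda i: dist(codebook[i]))
    return mean, index

def reconstruct(mean, index, codebook):
    """Decoder side: add the transmitted mean back to the codeword."""
    return [mean + c for c in codebook[index]]
```

Splitting the information this way is what lets the two parts be treated according to their importance, e.g. protecting the means (which carry most of the image energy) more heavily against channel noise than the residual indices.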
267

[pt] CODIFICAÇÃO CONJUNTA, PARA FONTE E CANAL, USANDO QUANTIZAÇÃO VETORIAL ESTRUTURADA EM ÁRVORE, PARA IMAGENS DE SENSORIAMENTO REMOTO / [en] JOINT SOURCE-CHANNEL CODING USING TREE-STRUCTURED VECTOR QUANTIZATION FOR REMOTE SENSING IMAGES

RAFAEL DONNICI DE AZEVEDO 16 November 2005 (has links)
This work studies the problem of remote-sensing image compression from the perspective of joint source-channel coding. The vector-quantization methods evaluated are those designed with the LBG algorithm and the COVQ (Channel-Optimized Vector Quantizer) algorithm, as well as tree-structured vector quantization (TSVQ). The noisy channel is modelled as a binary symmetric channel (BSC). In this context, two new methods are proposed: (1) a tree-structured vector quantizer designed for transmission through noisy channels, denominated CD-TSVQ (Channel-Designed Tree-Structured Vector Quantizer), and (2) a new class of compressors that uses forward error-correcting codes over the progressive TSVQ structure, as a way to actively protect the data during transmission. The two proposed methods can be combined in the same compressor architecture, resulting in a broad class of compressors well adapted to transmission through noisy channels. Results comparing the proposed methods with existing ones are presented, with performance evaluated in a scenario where captured images are compressed for satellite transmission at a rate of 1.5 bpp. The results show that the proposed methods are much less complex than the existing ones, yet achieve equivalent or, in some cases, superior image quality.
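A minimal sketch of the TSVQ encoding descent that the abstract's progressive structure refers to: at each node the encoder picks the child centroid closer to the input vector, and the sequence of left/right choices is the transmitted code. The tuple-based tree representation is an assumption made here for illustration.

```python
def tsvq_encode(vector, tree):
    """Encode by descending a binary tree-structured VQ.
    Each node is (centroid, left_child, right_child); leaves have
    None children. Returns the bit path and the leaf centroid."""
    bits = []
    node = tree
    while node[1] is not None:
        _, left, right = node
        dl = sum((v - c) ** 2 for v, c in zip(vector, left[0]))
        dr = sum((v - c) ** 2 for v, c in zip(vector, right[0]))
        if dl <= dr:
            bits.append(0)
            node = left
        else:
            bits.append(1)
            node = right
    return bits, node[0]
```

Because each bit refines the previous reconstruction, the code is progressive, which is what makes it natural to layer forward error-correcting codes over the earlier (more important) bits, as in the second proposed method.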
268

Design and Implementation of an Embedded NIOS II System for JPEG2000 Tier II Encoding

McNichols, John M. 21 August 2012 (has links)
No description available.
269

Image/video compression and quality assessment based on wavelet transform

Gao, Zhigang 14 September 2007 (has links)
No description available.
270

Comparação da transformada wavelet discreta e da transformada do cosseno, para compressão de imagens de impressão digital / Comparison of the discrete cosine transform and the discrete wavelet transform for fingerprint image compression

Reigota, Nilvana dos Santos 27 February 2007 (has links)
This research compares the following fingerprint image-compression methods: the discrete cosine transform (DCT), the Haar wavelet transform, the Daubechies wavelet transform and wavelet scalar quantization (WSQ). The goal is to identify the technique with the smallest loss of data at the highest possible compression ratio. Image quality is measured using the root mean square error (ERMS), the signal-to-noise ratio (SNR) and the peak signal-to-noise ratio (PSNR). By these metrics, the DCT showed the best results, followed by the WSQ; however, the best compression time and the best quality of the recovered images, as evaluated by the GrFinger 4.2 software, were obtained with the WSQ technique.
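The three quality metrics used in the comparison above (ERMS, SNR, PSNR) can be computed as in this minimal Python sketch, assuming 8-bit images given as flat lists of pixel values; the function name is hypothetical.

```python
import math

def quality_metrics(original, reconstructed, peak=255.0):
    """Return (RMSE, SNR in dB, PSNR in dB) between two images
    given as flat, equal-length sequences of pixel values."""
    n = len(original)
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n
    rmse = math.sqrt(mse)
    signal_power = sum(o * o for o in original) / n
    snr = 10 * math.log10(signal_power / mse)    # signal vs. error power
    psnr = 10 * math.log10(peak ** 2 / mse)      # peak value vs. error power
    return rmse, snr, psnr
```

All three metrics are driven by the same mean squared error, so they rank codecs identically on a single image; PSNR is the most common choice because it is normalized to the pixel range rather than the image content.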
