41

Codage d'images avec et sans pertes à basse complexité et basé contenu / Lossy and lossless image coding with low complexity and based on the content

Liu, Yi 18 March 2015 (has links)
This doctoral research project aims to design an improved version of the LAR (Locally Adaptive Resolution) still-image codec, in terms of both compression performance and complexity. Several image compression standards have been proposed and widely used in multimedia applications, but research continues toward higher coding quality and/or lower computational cost. JPEG was standardized twenty years ago, yet it remains the most widely used compression format today; despite its better coding efficiency, the adoption of JPEG 2000 is limited by its higher computational complexity compared to JPEG. In 2008, the JPEG committee issued a Call for Advanced Image Coding (AIC), whose goal was to standardize technologies going beyond the existing JPEG standards, and the LAR codec was proposed as one response to this call. The LAR framework combines compression efficiency with a content-based representation, and supports both lossy and lossless coding within the same structure. However, at the beginning of this study, the LAR codec did not implement rate-distortion optimization (RDO), which penalized it during the AIC evaluation phase. This work therefore first characterizes the impact of the codec's main parameters on compression efficiency, then builds RDO models that configure these parameters to reach optimal or near-optimal coding efficiency. Based on these RDO models, a "quality constraint" method is introduced to encode an image at a given target MSE/PSNR; the accuracy of the proposed technique, estimated as the ratio between the error variance and the setpoint, is about 10%. In addition, subjective quality is taken into account by applying the RDO models locally within the image rather than globally, which visibly improves perceptual quality, with a significant gain measured by the SSIM (structural similarity) objective quality metric. With the dual goal of coding efficiency and low complexity, a new lossless coding scheme is also proposed within the LAR framework: all coding steps are modified to improve the final compression ratio, and a new classification module is introduced to reduce the entropy of the prediction errors. Experiments show that this lossless codec reaches compression ratios equivalent to those of JPEG 2000 while saving 76% of the encoding and decoding time on average.
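The "quality constraint" idea — hitting a target MSE/PSNR by inverting a rate-distortion model — can be made concrete with a minimal sketch. The power-law distortion model D(q) = a·q^b, its coefficients, and the function names below are hypothetical stand-ins, not the actual LAR parameter models from the thesis:

```python
import numpy as np

def psnr_to_mse(target_psnr_db, peak=255.0):
    """Convert a target PSNR (dB) into the equivalent target MSE."""
    return peak ** 2 / (10.0 ** (target_psnr_db / 10.0))

def pick_quant_step(target_mse, a=2.1, b=1.4, q_min=1.0, q_max=64.0):
    """Invert an assumed power-law distortion model D(q) = a * q**b
    to find the quantization step that meets a target MSE."""
    q = (target_mse / a) ** (1.0 / b)
    return float(np.clip(q, q_min, q_max))

if __name__ == "__main__":
    target_mse = psnr_to_mse(38.0)          # e.g. encode at 38 dB PSNR
    q = pick_quant_step(target_mse)
    print(f"target MSE = {target_mse:.2f}, chosen quantization step = {q:.2f}")
```

The same inversion applied per region rather than per image is what the locally applied, SSIM-oriented variant described above amounts to.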
42

Adressage et contrôle de nanosources optiques par plasmonique intégrée ou fibrée / Addressing and control of optical nanosources by integrated or fibered plasmonics

Barthes, Julien 18 June 2015 (has links)
Surface plasmon polaritons (SPPs), modes supported by metallic nanostructures, can confine light to subwavelength dimensions. Because they are not diffraction limited, they are of great interest for addressing and controlling optical nanosources (molecules, quantum dots, ...). For example, a metal nanowire defines a one-dimensional plasmonic waveguide that can either excite a nanosource or couple two quantum emitters, with possible applications in integrated nano-optical components. However, SPPs suffer from ohmic losses in the metal that limit the range of such devices. One strategy is therefore to work with a hybrid plasmonic-photonic configuration that efficiently couples the emission of the nanosource to a fiber mode. Such a structure paves the way to an easily handled fibered nanosource that could serve as a single-photon source for quantum cryptography or, more simply, as a high-resolution near-field optical probe. In this thesis, we first study the main relaxation channels of a fluorescent molecule near a plasmonic waveguide using the Green's dyad formalism, and we discuss how the coupling between the emitter and the guide can be optimized through the shape of the guide (crystalline nanowire, slow modes) and the emission wavelength. We then investigate the behavior of a hybrid structure consisting of a tapered, metal-coated optical fiber. Finally, we show that optimizing the energy transfer of a fluorescent molecule in the presence of this structure makes it possible to collect, via a plasmon mode, more than 50% of the light emitted by a nano-emitter placed on a substrate into an optical fiber.
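The figure of merit behind the 50% result is essentially a beta factor, the fraction of the emitter's total decay rate captured by the desired guided mode, β = Γ_guided / (Γ_guided + Γ_rad + Γ_nr). The sketch below only evaluates this ratio for assumed, normalized decay rates; the numbers and function name are illustrative, not values computed in the thesis:

```python
def beta_factor(gamma_guided, gamma_rad, gamma_nr):
    """Fraction of the emitter's total decay rate that goes into the guided
    (plasmonic or fiber) mode.  All rates are normalized to the emitter's
    free-space decay rate, so the inputs are dimensionless."""
    total = gamma_guided + gamma_rad + gamma_nr
    return gamma_guided / total

if __name__ == "__main__":
    # Assumed example: strong coupling to the guided mode, moderate radiative
    # leakage, some ohmic (non-radiative) loss in the metal.
    print(f"beta = {beta_factor(gamma_guided=6.0, gamma_rad=3.0, gamma_nr=2.0):.2f}")
```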
43

Komprese výškových map / Height map compression techniques

Lašan, Michal January 2016 (has links)
The goal of this thesis is to design a suitable method for lossy compression of heightmap terrain data. The method should accept blocks of float samples of dimensions 2^n x 2^n as input and support progressive decompression of mip-maps (lower-resolution representations). It should keep the reconstructed data within a maximum per-sample error bound at each mip-map level; this bound is expressed in meters and is adjustable by the user. Within these constraints, it should be as efficient as possible. Our method is inspired by a second-generation progressive wavelet-based compression scheme, modified to satisfy the maximum-error constraint; we simplified this scheme by factoring out unnecessary computations to improve efficiency. The method can compress a 256x256 block in about 30 ms and decompress it in about 2 ms, which makes it usable in a real-time planet renderer. It achieves a compression ratio of 37:1 on the whole-Earth 90 m/sample terrain dataset, transformed and split into square blocks, while respecting a maximum error of 5 m.
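The key constraint — every reconstructed sample stays within a user-set error bound, in meters, at every mip level — can be met with a uniform residual quantizer whose step is at most twice the allowed error. The sketch below shows only that one ingredient for a single mip-map refinement step, under the assumption of a simple 2x2-average predictor; the block layout, entropy coding, and the actual wavelet used in the thesis are omitted, and all names are illustrative:

```python
import numpy as np

def encode_level(block, max_err):
    """One mip-map refinement step: predict from the half-resolution average,
    then quantize the residual so that |reconstruction - original| <= max_err."""
    h, w = block.shape
    coarse = block.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))   # mip-map parent
    prediction = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)  # upsample back
    step = 2.0 * max_err                                             # guarantees the bound
    q = np.round((block - prediction) / step).astype(np.int32)       # quantized residual
    return coarse, q, step

def decode_level(coarse, q, step):
    prediction = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    return prediction + q * step

if __name__ == "__main__":
    terrain = np.random.rand(256, 256).astype(np.float32) * 1000.0   # heights in meters
    coarse, q, step = encode_level(terrain, max_err=5.0)
    rec = decode_level(coarse, q, step)
    print("max abs error:", np.abs(rec - terrain).max())             # stays <= 5.0 m
```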
44

Efektivní nástroj pro kompresi obrazu v jazyce Java / JAVA-based effective implementation of an image compression tool

Průša, Zdeněk January 2008 (has links)
This diploma thesis deals with lossy compression of digital images. Lossy compression generally introduces some distortion into the resulting image; the distortion should not be disturbing, or ideally not even noticeable. The image is analyzed by a transform, and the relevant coefficients are selected by a coding process. Image quality can be evaluated by objective or subjective methods. An encoder is introduced and implemented in this work. It uses the two-dimensional wavelet transform and the SPIHT algorithm for coefficient coding, with the wavelet transform computed by the accelerated lifting scheme. The coder can process the color information of images using a modified version of the original SPIHT algorithm. The implementation is written in the Java programming language; object-oriented design principles were followed, so the program is easy to extend. Demonstration pictures show the effectiveness of the proposed coder and the characteristic distortion it produces at high compression ratios.
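The lifting scheme mentioned above computes the wavelet transform in place with a few predict/update passes instead of explicit filtering. As a generic illustration (not the thesis's Java implementation), here is one level of the standard LeGall 5/3 lifting on a 1D signal; boundary handling and names are assumptions for the sketch:

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the LeGall 5/3 wavelet via lifting (predict + update),
    on a 1D integer signal of even length with symmetric boundary extension."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: high-pass = odd sample minus the mean of its even neighbours.
    right = np.append(even[1:], even[-1])          # symmetric extension at the end
    high = odd - ((even + right) >> 1)
    # Update step: low-pass = even sample plus a quarter of the neighbouring details.
    left = np.insert(high[:-1], 0, high[0])        # symmetric extension at the start
    low = even + ((left + high + 2) >> 2)
    return low, high

if __name__ == "__main__":
    signal = np.array([10, 12, 11, 13, 40, 42, 41, 43])
    low, high = lifting_53_forward(signal)
    print("low :", low)    # smooth approximation, fed to the next level / SPIHT
    print("high:", high)   # small detail coefficients, cheap to code
```

In a 2D codec the same pass is applied to rows and then columns at each decomposition level before the coefficients are handed to SPIHT.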
45

Využití pokročilých objektivních kritérií hodnocení při kompresi obrazu / Advanced objective measurement criteria applied to image compression

Šimek, Josef January 2010 (has links)
This diploma thesis deals with the use of objective quality assessment methods in image data compression. Lossy compression always introduces some distortion into the processed data, degrading image quality; the severity of this distortion can be measured with subjective or objective methods, and objective criteria are needed to optimize compression algorithms. In this work the SSIM index is presented as a useful tool for describing the quality of compressed images. The lossy compression scheme is realized with the wavelet transform and the SPIHT algorithm. A modification of this algorithm was implemented in which the wavelet coefficients are partitioned into separate tree-preserving blocks that are coded independently, which is especially suitable for parallel processing. For a given compression ratio, the traditional problem then has to be solved: how to allocate the available bits among the spatial blocks to achieve the highest possible image quality. Possible approaches to this problem are discussed, and several bit-allocation methods based on the MSSIM index are proposed. The MATLAB environment was used to test the effectiveness of these methods.
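Block-wise bit allocation of this kind is often done with a greedy loop that repeatedly gives the next slice of the budget to the block with the largest predicted quality gain. The sketch below assumes a per-block quality model `ssim_at(block, bits)` is available (a hypothetical callable, shown here with a toy saturating model); the greedy rule is a generic illustration, not necessarily the exact method proposed in the thesis:

```python
import heapq

def allocate_bits(n_blocks, total_bits, step, ssim_at):
    """Greedy MSSIM-driven allocation: repeatedly give `step` bits to the
    block whose predicted SSIM improves the most."""
    alloc = [0] * n_blocks
    heap = [(-(ssim_at(b, step) - ssim_at(b, 0)), b) for b in range(n_blocks)]
    heapq.heapify(heap)
    remaining = total_bits
    while remaining >= step and heap:
        neg_gain, b = heapq.heappop(heap)
        if -neg_gain <= 0:                       # no block benefits any further
            break
        alloc[b] += step
        remaining -= step
        next_gain = ssim_at(b, alloc[b] + step) - ssim_at(b, alloc[b])
        heapq.heappush(heap, (-next_gain, b))
    return alloc

if __name__ == "__main__":
    # Toy saturating quality model: "busier" blocks (larger index) need more bits.
    def model(b, bits):
        return 1.0 - 0.5 / (1.0 + bits / (200.0 * (b + 1)))
    print(allocate_bits(n_blocks=4, total_bits=4000, step=500, ssim_at=model))
```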
46

Lossless Image compression using MATLAB : Comparative Study

Kodukulla, Surya Teja January 2020 (has links)
Context: Image compression is one of the key applications in the commercial, research, defence and medical fields. Large image files cannot be processed or stored quickly and efficiently, so compressing images while maintaining the highest possible quality is very important for real-world applications. Objectives: Lossy compression is widely used in commercial applications, but for much image-related work the quality needs to stay high while the file size is kept comparatively low. This study therefore compares lossless compression algorithms to determine which one retains full quality while still achieving a decent compression ratio. Method: The lossless algorithms compared are LZW, RLE, Huffman, DCT in lossless mode, and DWT. The compression techniques are implemented in MATLAB using the Image Processing Toolbox. The compressed images are compared for subjective image quality, with the emphasis on maintaining quality rather than on minimizing file size. Result: The LZW implementation produces binary images and therefore fails to deliver a lossless result in this setting. The Huffman and RLE algorithms, both based on redundancy reduction, give similar results with compression ratios in the range of 2.5 to 3.7. The DCT and DWT algorithms compress every element of the image matrix while maintaining lossless quality, with compression ratios in the range of 2 to 3.5. Conclusion: The DWT algorithm is the most suitable for efficient lossless compression; because wavelets are used, all elements of the image are compressed while the quality is retained. Huffman and RLE also produce lossless images, but across a large variety of images some images may not be compressed with full efficiency.
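Of the algorithms listed, run-length encoding is the simplest to show end to end. The sketch below (plain Python rather than the MATLAB toolbox used in the thesis) encodes a scanline of pixel values as (value, run length) pairs, checks that the round trip is lossless, and reports the resulting compression ratio; the function names and the toy data are illustrative:

```python
def rle_encode(pixels):
    """Run-length encode a sequence of samples as [value, run_length] pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    return [value for value, count in runs for _ in range(count)]

if __name__ == "__main__":
    row = [255] * 40 + [0] * 10 + [255] * 50       # a synthetic scanline with long runs
    runs = rle_encode(row)
    assert rle_decode(runs) == row                 # lossless round trip
    ratio = len(row) / (2 * len(runs))             # two numbers stored per run
    print(f"{len(runs)} runs, compression ratio ~ {ratio:.1f}:1")
```

Ratios like the 2.5 to 3.7 reported above depend entirely on how much redundancy of this kind the images contain, which is why natural images fare worse than the synthetic line here.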
47

Temporal Lossy In-Situ Compression for Computational Fluid Dynamics Simulations

Lehmann, Henry 31 August 2018 (has links)
The CFD simulations of molten metal carried out within the SFB 920 collaborative research centre produce very large amounts of data on the Taurus HPC cluster in Dresden, and handling these data slows down the scientific workflow considerably: transferring them to visualization systems is very time-consuming, and the storage bottleneck makes interactive analysis of time-dependent processes nearly impossible. This dissertation therefore develops temporal in-situ compression of scientific data directly inside the CFD simulation. Using new quantization schemes, the data are compressed to roughly 10% of their original size, while the decompressed data deviate by at most 1%. In contrast to non-temporal compression, temporal compression encodes the difference between time steps in order to increase the compression ratio. Because the data volume is many times smaller, storage and transfer costs are reduced, and since compression, transfer, and decompression together run up to 4 times faster than transferring the uncompressed data, the scientific workflow is accelerated.
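The central idea — quantize the change between consecutive time steps instead of the field itself, subject to an error bound — can be sketched in a few lines. The quantizer below bounds the per-sample error at 1% of an assumed value range and encodes against the previously reconstructed step so errors cannot accumulate; it is a generic illustration, not the specific quantization scheme developed in the dissertation:

```python
import numpy as np

class TemporalQuantizer:
    """Encode each time step as quantized deltas against the previously
    *reconstructed* step, so quantization errors do not accumulate."""

    def __init__(self, value_range, rel_err=0.01):
        self.step = 2.0 * rel_err * value_range   # keeps |error| <= rel_err * range
        self.prev = None

    def encode(self, field):
        ref = np.zeros_like(field) if self.prev is None else self.prev
        q = np.round((field - ref) / self.step).astype(np.int32)
        self.prev = ref + q * self.step           # decoder-side reconstruction
        return q                                  # small integers, ready for entropy coding

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    enc = TemporalQuantizer(value_range=100.0, rel_err=0.01)
    field = rng.uniform(0.0, 100.0, size=(64, 64))
    for _ in range(5):
        field = field + rng.normal(0.0, 0.5, size=field.shape)   # slow temporal drift
        q = enc.encode(field)
        print("max |error|:", np.abs(enc.prev - field).max(),
              " nonzero deltas:", np.count_nonzero(q))
```

Because successive CFD time steps change slowly, most deltas quantize to small values or zero, which is where the ~10:1 size reduction comes from.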
48

Détection binaire distribuée sous contraintes de communication / Distributed binary detection with communication constraints

Katz, Gil 06 January 2017 (has links)
In recent years, interest in research on autonomous systems has been growing. From the self-driving car to the Internet of Things (IoT), it is clear that the ability of automated systems to make autonomous decisions in a timely manner is becoming crucial, and these systems will often operate under strict constraints on their resources. In this thesis, an information-theoretic approach is taken to this problem, in the hope that a fundamental understanding of the limitations and possibilities of such systems can help future engineers design them. Throughout the thesis, collaborative distributed binary decision problems are considered. Two statisticians must declare the correct probability measure of two jointly distributed memoryless processes, denoted $\mathbf{X}^n = (X_1,\dots,X_n)$ and $\mathbf{Y}^n = (Y_1,\dots,Y_n)$, out of two possible probability measures on finite alphabets, namely $P_{XY}$ and $P_{\bar{X}\bar{Y}}$. The marginal samples $\mathbf{X}^n$ and $\mathbf{Y}^n$ are assumed to be available at different locations. The statisticians are allowed to exchange limited amounts of data over a perfect channel subject to a maximum-rate constraint, and the nature of this communication varies throughout the thesis. Unidirectional communication is considered first: using its own observations, the receiver must identify the legitimacy of the sender by declaring the joint distribution of the process, and then, depending on this authentication, produce a reconstruction of the observations satisfying an average per-letter distortion. Bidirectional communication is subsequently considered, in a scenario that allows interactive exchanges between the participants.
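The distributed, rate-constrained setting studied in the thesis is considerably richer than the centralized baseline, but that baseline is the reference the results are measured against: a log-likelihood ratio test between the two candidate measures, whose best type-II error exponent is the Kullback-Leibler divergence (Stein's lemma). The sketch below shows only this baseline; the distributions and names are illustrative assumptions:

```python
import numpy as np

def llr_test(samples, p, q, threshold=0.0):
    """Centralized baseline: decide between distributions p and q on a finite
    alphabet by thresholding the normalized log-likelihood ratio."""
    llr = np.mean(np.log(p[samples]) - np.log(q[samples]))
    return "P_XY" if llr > threshold else "P_XbarYbar"

def stein_exponent(p, q):
    """D(p||q): by Stein's lemma, the best asymptotic exponent of the
    type-II error probability for a fixed type-I error level."""
    return float(np.sum(p * np.log(p / q)))

if __name__ == "__main__":
    p = np.array([0.5, 0.3, 0.2])        # hypothesis P   (illustrative numbers)
    q = np.array([0.2, 0.3, 0.5])        # hypothesis P_bar
    rng = np.random.default_rng(1)
    x = rng.choice(3, size=1000, p=p)    # n i.i.d. samples drawn under P
    print(llr_test(x, p, q), " exponent D(p||q) =", round(stein_exponent(p, q), 3))
```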
49

A Consolidated Global Navigation Satellite System Multipath Analysis Considering Modern Signals, Antenna Installation, and Boundary Conditions for Ground-Based Applications

Appleget, Andrew L. 16 September 2020 (has links)
No description available.
50

[en] PERMUTATION CODES FOR DATA COMPRESSION AND MODULATION / [pt] CÓDIGOS DE PERMUTAÇÃO PARA COMPRESSÃO DE DADOS E MODULAÇÃO

DANILO SILVA 01 April 2005 (has links)
[en] Permutation codes are an interesting mathematical tool that can be used to devise both lossy compression schemes and modulation schemes for digital transmission systems. Vector permutation codes, a more powerful extension of scalar permutation codes, were recently introduced for the purpose of source compression. This work presents new contributions to that theory and also introduces vector permutation codes for the purpose of modulation. For source compression, it is proved that vector permutation codes (VPC) have asymptotic performance equal to that of an entropy-constrained vector quantizer (ECVQ). Based on this development, an efficient method is proposed for VPC design. Experimental results for Gaussian and uniform sources show that the codes designed by this method indeed perform well: VPCs are exhibited whose performance is similar to that of the ECVQ and superior to that of their scalar counterparts. In the context of digital transmission, it is verified that vector permutation modulation (VPM) likewise outperforms scalar permutation modulation. Expressions are developed for the optimal design of VPM, and a method is presented for maximum-likelihood detection of VPM in AWGN and fading channels.
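As a reminder of what the scalar (Variant I) case looks like: every codeword is a permutation of one fixed multiset of values, so encoding a source vector reduces to sorting, and only the permutation (about log2 n! bits for block length n) has to be transmitted. The sketch below is that textbook scalar case, not the vector extension developed in the thesis; the codebook values are arbitrary illustrative numbers:

```python
import numpy as np

def permutation_encode(x, initial_codeword):
    """Variant-I scalar permutation code: the reconstruction uses the fixed
    multiset `initial_codeword`, arranged so that its k-th smallest value
    sits in the position of the k-th smallest source sample."""
    order = np.argsort(x)                    # ranks of the source samples
    y = np.empty_like(initial_codeword)
    y[order] = np.sort(initial_codeword)     # place codebook values by rank
    return order, y                          # transmit `order`; `y` is the reconstruction

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = rng.normal(size=8)
    codeword = np.array([-1.5, -0.8, -0.4, -0.1, 0.1, 0.4, 0.8, 1.5])  # assumed multiset
    order, y = permutation_encode(x, codeword)
    print("mse:", float(np.mean((x - y) ** 2)))
```

The vector codes studied in the thesis apply the same ordering idea to blocks of samples, which is what closes the gap to ECVQ performance.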
