41

Compression progressive et tatouage conjoint de maillages surfaciques avec attributs de couleur / Progressive compression and joint compression and watermarking of surface mesh with color attributes

Lee, Ho 21 June 2011 (has links)
The use of 3D models, represented as meshes, is constantly growing in many applications. For efficient transmission and for adaptation to the heterogeneous resources of client devices, progressive compression techniques are generally used. To protect the copyright of these models during transmission, watermarking techniques are also employed. In this thesis, we first propose two progressive compression methods for meshes with or without color information, and we then present a joint system of progressive compression and watermarking.
In the first part, we propose a method for optimizing the rate-distortion trade-off for meshes without color attributes. During encoding, we adapt the quantization precision to the number of elements and the geometric complexity of each level of detail. This adaptation can be performed optimally, by measuring the distance to the original mesh, or near-optimally, by using a theoretical model for fast optimization. The results show that our method is competitive with state-of-the-art methods. In the second part, we focus on optimizing the rate-distortion trade-off for meshes with color information attached to the vertices. After proposing two compression methods for this type of mesh, we present a rate-distortion optimization method based on adapting the quantization precision of both geometry and color for each intermediate mesh. This adaptation can be performed rapidly using a theoretical model that evaluates the number of quantization bits required for each intermediate mesh. A metric is also proposed to preserve feature elements throughout simplification. Finally, we propose a joint scheme of progressive compression and watermarking. To protect all levels of detail, we insert the watermark at each step of the encoding process. More precisely, at each simplification iteration, we separate the vertices into two sets and compute a histogram of vertex norms for each set; we then divide these histograms into several bins and shift bins to embed one bit. This watermarking technique is reversible and can restore the original mesh exactly by removing the distortion caused by the embedded watermark. We also propose a new geometry prediction method to reduce the overhead caused by watermark insertion. Experimental results show that our method is robust to various geometric attacks while maintaining a good compression ratio.
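To make the bin-shifting idea concrete, here is a minimal Python sketch of reversible histogram-shift embedding on integer-quantized vertex norms. It follows the classic peak/empty-bin shifting scheme rather than the authors' exact algorithm; all names and the demo data are illustrative.

```python
import numpy as np

def embed(values, bits):
    """Embed bits into non-negative integer-quantized vertex norms.

    Peak/empty-bin histogram shifting: the most populated bin carries the
    payload, and the bins between it and the nearest empty bin above are
    shifted up by one to make room. The shift is invertible, so the exact
    original values are recoverable (reversibility).
    """
    hist = np.bincount(values)
    peak = int(hist.argmax())                        # most populated bin
    zero = peak + 1 + int(hist[peak + 1:].argmin())  # nearest empty bin above
    assert hist[zero] == 0, "reversibility needs an empty bin above the peak"

    marked = values.copy()
    marked[(values > peak) & (values < zero)] += 1   # free up bin peak+1
    carriers = np.flatnonzero(values == peak)        # capacity = hist[peak]
    for pos, bit in zip(carriers, bits):
        marked[pos] += bit                           # 0 -> peak, 1 -> peak+1
    return marked, peak, zero

def extract(marked, peak, zero):
    """Read the payload back and undo the shift exactly."""
    bits = [int(v) - peak for v in marked if peak <= v <= peak + 1]
    restored = marked.copy()
    restored[(marked > peak) & (marked <= zero)] -= 1
    return bits, restored

norms = np.array([3, 5, 5, 4, 8, 5, 6, 5, 2])        # toy quantized norms
marked, peak, zero = embed(norms, [1, 0, 1, 1])
bits, restored = extract(marked, peak, zero)
assert bits == [1, 0, 1, 1] and np.array_equal(restored, norms)
```

Because every shifted value moves by exactly one bin and the payload occupies only the freed bin, extraction can undo the distortion exactly, which is the property the thesis exploits to restore the original mesh.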
42

Scalable video compression with optimized visual performance and random accessibility

Leung, Raymond, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2006 (has links)
This thesis is concerned with maximizing the coding efficiency, random accessibility and visual performance of scalable compressed video. The unifying theme behind this work is the use of finely embedded localized coding structures, which govern the extent to which these goals may be jointly achieved. The first part focuses on scalable volumetric image compression. We investigate 3D transform and coding techniques which exploit inter-slice statistical redundancies without compromising slice accessibility. Our study shows that the motion-compensated temporal discrete wavelet transform (MC-TDWT) practically achieves an upper bound to the compression efficiency of slice transforms. From a video coding perspective, we find that most of the coding gain is attributed to offsetting the learning penalty in adaptive arithmetic coding through 3D code-block extension, rather than inter-frame context modelling. The second aspect of this thesis examines random accessibility. Accessibility refers to the ease with which a region of interest is accessed (subband samples needed for reconstruction are retrieved) from a compressed video bitstream, subject to spatiotemporal code-block constraints. We investigate the fundamental implications of motion compensation for random access efficiency and the compression performance of scalable interactive video. We demonstrate that inclusion of motion compensation operators within the lifting steps of a temporal subband transform incurs a random access penalty which depends on the characteristics of the motion field. The final aspect of this thesis aims to minimize the perceptual impact of visible distortion in scalable reconstructed video. We present a visual optimization strategy based on distortion scaling which raises the distortion-length slope of perceptually significant samples. This alters the codestream embedding order during post-compression rate-distortion optimization, thus allowing visually sensitive sites to be encoded with higher fidelity at a given bit-rate. For visual sensitivity analysis, we propose a contrast perception model that incorporates an adaptive masking slope. This versatile feature provides a context which models perceptual significance. It enables scene structures that otherwise suffer significant degradation to be preserved at lower bit-rates. The novelty in our approach derives from a set of "perceptual mappings" which account for quantization noise shaping effects induced by motion-compensated temporal synthesis. The proposed technique reduces wavelet compression artefacts and improves the perceptual quality of video.
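As an illustration of how distortion scaling alters the embedding order during post-compression rate-distortion optimization, the following toy sketch (not the thesis implementation; all pass statistics and weights are invented) sorts coding passes by their perceptually weighted distortion-length slope.

```python
# Toy illustration of distortion scaling in post-compression rate-distortion
# (PCRD) optimization. Each coding pass i contributes a rate increment
# delta_R[i] and a distortion reduction delta_D[i]; passes are embedded in
# decreasing order of slope delta_D / delta_R. Scaling delta_D by a
# perceptual weight w >= 1 raises the slope of visually significant passes,
# so they are embedded earlier for a given bit budget.

def embedding_order(delta_D, delta_R, weights):
    slopes = [w * dD / dR for w, dD, dR in zip(weights, delta_D, delta_R)]
    return sorted(range(len(slopes)), key=lambda i: -slopes[i])

delta_D = [4.0, 2.0, 6.0]      # distortion reduction per pass
delta_R = [100, 60, 200]       # bytes per pass
flat    = [1.0, 1.0, 1.0]      # no perceptual weighting
visual  = [1.0, 2.5, 1.0]      # pass 1 covers a perceptually salient region

print(embedding_order(delta_D, delta_R, flat))    # [0, 1, 2]
print(embedding_order(delta_D, delta_R, visual))  # [1, 0, 2] -- pass 1 first
```

The weighting changes only the ordering, not the passes themselves, which is why the approach slots into the existing rate-distortion machinery.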
43

Distributed compressed data gathering in wireless sensor networks

Leinonen, M. (Markus) 02 October 2018 (has links)
Abstract: Wireless sensor networks (WSNs) consisting of battery-powered sensors are increasingly deployed for a myriad of Internet of Things applications, e.g., environmental, industrial, and healthcare monitoring. Since wireless access is typically the main contributor to battery usage, minimizing communications is crucial to prolonging network lifetime and improving user experience. The objective of this thesis is to develop and analyze energy-efficient distributed compressed data acquisition techniques for WSNs. The thesis proposes four approaches to conserve sensors' energy by minimizing the amount of information each sensor has to transmit to meet given application requirements. The first part addresses a cross-layer design to minimize the sensors' sum transmit power via joint optimization of resource allocation and multi-path routing. A distributed algorithm based on consensus optimization is proposed to solve the problem and is shown to have superior convergence compared to several baselines. The remaining parts deal with compressed sensing (CS) of sparse/compressible sources. The second part focuses on distributed CS acquisition of spatially and temporally correlated sensor data streams. A CS algorithm based on a sliding window and recursive decoding is developed; it is shown to achieve higher reconstruction accuracy with fewer transmissions and less decoding delay and complexity than several baselines, and to progressively refine past estimates. The last two approaches incorporate quantization of the CS measurements and focus on lossy source coding. The third part addresses distributed quantized CS (QCS) acquisition of correlated sparse sources. A distortion-rate optimized variable-rate QCS method is proposed; it achieves better distortion-rate performance than the baselines and enables a trade-off between compression performance and encoding complexity via pre-quantization of the measurements. The fourth part investigates the information-theoretic rate-distortion (RD) performance limits of single-sensor QCS. A lower bound on the best achievable compression, defined by the remote RD function (RDF), is derived, together with a method to numerically approximate the remote RDF. The results compare practical QCS methods against the derived limits and show that a novel QCS method approaches the remote RDF.
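As a rough illustration of quantized CS acquisition, the sketch below measures a sparse signal, uniformly quantizes the measurements at the sensor, and reconstructs with iterative hard thresholding. IHT is a generic stand-in for the thesis's recursive and distortion-rate optimized decoders; all dimensions and the quantizer step are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]
    out[keep] = x[keep]
    return out

def iht(y, Phi, s, iters=300):
    """Iterative hard thresholding with a descent-guaranteeing step size."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * Phi.T @ (y - Phi @ x), s)
    return x

n, m, s = 128, 48, 5                        # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.normal(size=s)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
delta = 0.05                                # quantizer step at the sensor
y_q = delta * np.round(Phi @ x / delta)     # uniformly quantized measurements
x_hat = iht(y_q, Phi, s)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```

The sensor only ever transmits the quantized measurements y_q, so the rate is governed by the quantizer step and the entropy of the quantized values, which is the trade-off the QCS parts of the thesis optimize.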
44

Compression Based Analysis of Image Artifacts: Application to Satellite Images

Roman-Gonzalez, Avid 02 October 2013 (has links) (PDF)
This thesis aims at the automatic detection of artifacts in optical satellite images, such as aliasing, A/D conversion problems, striping, and compression noise; in short, all blemishes that are unusual in an undistorted image. Artifact detection in Earth observation images becomes increasingly difficult as image resolution improves. For images of low, medium, or high resolution, the artifact signatures are sufficiently different from the useful signal to allow their characterization as distortions; however, as the resolution improves, the artifacts acquire, in terms of signal theory, a signature similar to that of the interesting objects in an image. Although it is more difficult to detect artifacts in very high resolution images, we need analysis tools that work properly without impeding the extraction of objects from an image. Furthermore, the detection should be as automatic as possible, given the ever-increasing volume of images, which makes any manual detection illusory. Finally, experience shows that artifacts are neither all predictable nor can they all be modeled as expected. Thus, artifact detection should be as generic as possible, without requiring a model of their origin or of their impact on an image. Outside the field of Earth observation, similar detection problems have arisen in multimedia image processing: the evaluation of image quality, compression, watermarking, attack detection, image tampering, photo montage, steganalysis, etc. In general, the techniques used to address these problems are based on direct or indirect measurements of intrinsic information and mutual information. This thesis therefore aims to translate these approaches to artifact detection in Earth observation images, based in particular on the theories of Shannon and Kolmogorov, including approaches for measuring rate-distortion and pattern-recognition-based compression. The results from these theories are then used to detect complexities that are too low or too high, or redundant patterns. The test images come from satellite instruments such as SPOT and MERIS. We propose several methods for artifact detection. The first method uses the rate-distortion (RD) function, obtained by compressing an image with different compression factors, and examines how an artifact can result in a degree of regularity or irregularity that affects the attainable compression rate. The second method uses the Normalized Compression Distance (NCD) and examines whether artifacts share similar patterns. The third method uses different RD approaches, such as the Kolmogorov Structure Function and the Complexity-to-Error Migration (CEM), to examine how artifacts can be observed in compression-decompression error maps. Finally, we compare our proposed methods with an existing method based on image quality metrics. The results show that artifact detection depends on the artifact intensity and on the type of surface cover in the satellite image.
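The NCD used by the second method has a standard form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length under a real compressor. Below is a minimal sketch with zlib as the compressor (the thesis may use a different one; the patches are synthetic stand-ins):

```python
import os
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance approximated with zlib.

    Values near 0 mean the two inputs share most of their structure;
    values near 1 mean they share almost none.
    """
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

patch = bytes(range(256)) * 32                      # stand-in for an image patch
similar = patch[:4000] + bytes(16) + patch[4016:]   # nearly identical patch
unrelated = os.urandom(len(patch))
print(ncd(patch, similar))     # close to 0: shared structure
print(ncd(patch, unrelated))   # close to 1: no shared structure
```

Comparing image regions pairwise this way flags patches whose compression-based distance to known artifact patterns is small, without modeling the artifact's physical origin.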
45

On the rate-cost tradeoff of Gaussian linear control systems with random communication delay

Jia Zhang (13176651) 01 August 2022 (has links)
This thesis studies networked Gaussian linear control systems with random delays. Networked control systems are a popular research topic because of their versatile applications in daily life, such as smart grids and unmanned vehicles. Research in this area has developed in two directions. The first is to derive the inherent rate-cost relationship of such systems, that is, the minimal transmission rate needed to achieve a given stability requirement. The second is to design achievability schemes, which aim to use as little transmission rate as possible to achieve a given stability requirement. In this thesis, we explore both directions. We assume the sensor-to-controller channels experience independently and identically distributed random delays with bounded support. Our work separates into two parts. In the first part, we consider networked systems with only one sensor. We focus on deriving a lower bound, R_{LB}(D), on the rate-cost tradeoff under the cost constraint E[x^T x] ≤ D, where x is the state to be controlled. We also propose an achievability scheme as an upper bound, R_{UB}(D), on the optimal rate-cost tradeoff. The scheme uses lattice quantization, an entropy encoder, and a certainty-equivalence controller; it performs well, requiring roughly 2 bits per time slot more than R_{LB}(D) to achieve the same stability level. We also generalize the cost function to depend on both the state and the control actions, and characterize the minimal joint state-and-control cost a system can achieve. The second part focuses on covariance-based fusion scheme design for systems with multiple sensors. We observe that in the multi-sensor scenario, the outdated arrivals at the controller, which many existing fusion schemes discard, carry additional information. We therefore design an implementable fusion scheme (CQE), the MMSE estimator that uses both the freshest and the outdated information at the controller. Our experiments demonstrate that CQE outperforms the MMSE estimator that uses only the freshest information (LQE), achieving a 15% smaller average L2 norm at the same transmission rate. As a benchmark, we also derive the minimal achievable L2 norm, D_min, for multi-sensor systems. Simulations show that CQE approaches D_min significantly better than LQE.
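The rate-cost tension described above can be seen in a toy scalar simulation: a uniformly quantized state (a one-dimensional stand-in for the lattice quantizer) drives a certainty-equivalence controller, and finer quantization costs more bits but achieves a smaller steady-state E[x^2]. This is an illustrative sketch under invented parameters, not the thesis's scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar quantized certainty-equivalence control:
#   x_{t+1} = a*x_t + u_t + w_t,  w_t ~ N(0, 1).
# The sensor sends a uniformly quantized state q_t; the controller applies
# u_t = -a*q_t, so the closed loop is x_{t+1} = a*(x_t - q_t) + w_t and
# stays stable because the quantization error |x_t - q_t| is bounded.
a, T = 2.0, 50_000                        # unstable plant, simulation horizon
for delta in (2.0, 0.5, 0.125):          # quantizer step sizes
    x, cost, cells = 0.0, 0.0, set()
    for _ in range(T):
        q = delta * round(x / delta)      # quantized state at the sensor
        cells.add(round(x / delta))       # occupied cells, a crude rate proxy
        x = a * x - a * q + rng.normal()  # certainty-equivalence closed loop
        cost += x * x
    print(f"delta={delta:5.3f}  rate~{np.log2(len(cells)):4.1f} bits"
          f"  E[x^2]~{cost / T:5.2f}")
```

Shrinking the step size increases the number of occupied quantizer cells (more bits per sample) while driving the empirical cost E[x^2] down toward its noise-limited floor, which is exactly the tradeoff a rate-cost function captures.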
