251

Synthetic test patterns and compression artefact distortion metrics for image codecs : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Engineering at Massey University, Palmerston North, New Zealand

Punchihewa, Amal January 2009 (has links)
This thesis presents a test-methodology framework for assessing the spatial-domain compression artefacts produced by image codecs and intra-frame video codecs. Few researchers have studied this broad range of artefacts. A taxonomy of image and video compression artefacts is proposed, based on the point of origin of each artefact in the image communication model. The thesis presents an objective evaluation, using synthetic test patterns, of the distortions known as artefacts that are introduced by image and intra-frame video compression. The American National Standards Institute document ANSI T1.801 qualitatively defines the blockiness, blur and ringing artefacts; these definitions are augmented here with quantitative definitions tied to the proposed test patterns. A test and measurement environment is proposed in which the codec under test is exercised with a portfolio of test patterns, each designed to highlight the artefact under study. Algorithms have been developed to detect and measure individual artefacts based on their respective characteristics. Since the spatial content of the original test patterns forms known structural detail, the resulting artefact distortion metrics are clean and fast to compute. The distortion metrics are validated against a modern image quality metric inspired by the human visual system. Blockiness, blur and ringing are evaluated for representative codecs using the proposed synthetic test patterns. Colour bleeding due to image and video compression is discussed, and both qualitative and quantitative definitions of the colour-bleeding artefact are introduced. The image reproduction performance of a few codecs was evaluated to ascertain the utility of the proposed metrics and test patterns.
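
As a concrete illustration of a structure-based artefact metric of the kind described above, the following sketch measures blockiness as the ratio of gradient magnitude on the 8×8 JPEG block grid to gradient magnitude elsewhere, exploiting the fact that a synthetic ramp pattern has no structure aligned with that grid. This is a generic sketch, not the thesis's algorithm; the function names and the choice of test pattern are illustrative.

```python
# A generic sketch (not the thesis's algorithm) of a structure-based
# blockiness metric: on a synthetic ramp pattern, any gradient that
# lines up with the 8x8 JPEG block grid is attributable to blocking.
import io
import numpy as np
from PIL import Image

def blockiness(img: np.ndarray, block: int = 8) -> float:
    """Ratio of mean |gradient| on block boundaries to elsewhere (~1 = clean)."""
    g = np.abs(np.diff(img.astype(float), axis=1))   # horizontal gradients
    on_grid = g[:, block - 1::block]                 # columns on the 8-px grid
    mask = np.ones(g.shape[1], dtype=bool)
    mask[block - 1::block] = False
    return on_grid.mean() / (g[:, mask].mean() + 1e-9)

# Synthetic test pattern: a smooth horizontal luminance ramp.
pattern = np.tile(np.linspace(0, 255, 256), (256, 1)).astype(np.uint8)

buf = io.BytesIO()
Image.fromarray(pattern).save(buf, format="JPEG", quality=10)  # coarse codec
decoded = np.asarray(Image.open(buf).convert("L"))

print(f"blockiness before coding: {blockiness(pattern):.2f}")
print(f"blockiness after coding : {blockiness(decoded):.2f}")
```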
252

Digital rights management (DRM) : watermark encoding scheme for JPEG images

Samuel, Sindhu. January 2007 (has links)
Thesis (M.Eng. (Electrical, Electronic and Computer Engineering)) -- University of Pretoria, 2007. / Includes bibliographical references (leaves 82-87)
253

Image Compression and Channel Error Correction using Neurally-Inspired Network Models

Watkins, Yijing Zhang 01 May 2018 (has links)
Every day an enormous amount of information is stored, processed and transmitted digitally around the world. Neurally-inspired compression models have been rapidly developed and researched as a solution to image processing tasks and channel error-correction control. This dissertation presents a deep neural network (DNN) for grayscale high-resolution image compression and a fault-tolerant transmission system with channel error-correction capabilities. A feed-forward DNN implemented with the Levenberg-Marquardt learning algorithm is proposed and implemented for image compression. I demonstrate experimentally that the DNN not only provides better-quality reconstructed images but also requires less computational capacity than DCT Zonal coding, DCT Threshold coding, Set Partitioning in Hierarchical Trees (SPIHT) and the Gaussian Pyramid. An artificial neural network (ANN) with an improved channel error-correction rate is also proposed. The experimental results indicate that the implemented ANN provides superior error-correction ability when transmitting binary images over a noisy channel using Hamming and Repeat-Accumulate coding, while its storage requirement is 64 times less than Hamming coding and 62 times less than Repeat-Accumulate coding. Thumbnail images contain higher frequencies and much less redundancy, which makes them more difficult to compress than high-resolution images. Bottleneck autoencoders have been actively researched as a solution to image compression tasks. However, I observed that thumbnail images compressed at a 2:1 ratio through bottleneck autoencoders often exhibit subjectively low visual quality. In this dissertation, I compared bottleneck autoencoders with two sparse coding approaches: either 50% of the pixels are removed at random, or every other pixel is removed, each achieving a 2:1 compression ratio. In the subsequent decompression step, a sparse inference algorithm is used to in-paint the missing pixel values. Compared to bottleneck autoencoders, I observed that sparse coding with a random dropout mask yields decompressed images that are superior based on subjective human perception yet inferior according to pixel-wise metrics of reconstruction quality, such as PSNR and SSIM. With a regular checkerboard mask, decompressed images were superior as assessed by both subjective and pixel-wise measures. I hypothesized that alternative feature-based measures of reconstruction quality would better support my subjective observations. To test this hypothesis, I fed thumbnail images processed with either a bottleneck autoencoder or sparse coding (with checkerboard or random masks) into a Deep Convolutional Neural Network (DCNN) classifier. Consistent with my subjective observations, I found that sparse coding with checkerboard and random masks supports on average 2.7% and 1.6% higher classification accuracy and 18.06% and 3.74% lower feature perceptual loss than bottleneck autoencoders, implying that sparse coding preserves more feature-based information.
The optic nerve transmits visual information to the brain as trains of discrete events, a low-power, low-bandwidth communication channel also exploited by silicon retina cameras. Extracting high-fidelity visual input from retinal event trains is thus a key challenge for both computational neuroscience and neuromorphic engineering. Here, we investigate whether sparse coding can enable the reconstruction of high-fidelity images and video from retinal event trains. Our approach is analogous to compressive sensing, in which only a random subset of pixels is transmitted and the missing information is estimated via inference. We employed a variant of the Locally Competitive Algorithm to infer sparse representations from retinal event trains, using a dictionary of convolutional features optimized via stochastic gradient descent and trained in an unsupervised manner using a local Hebbian learning rule with momentum. Static images, drawn from the CIFAR10 dataset, were passed to the input layer of an anatomically realistic retinal model and encoded as arrays of output spike trains arising from separate layers of integrate-and-fire neurons representing ON and OFF retinal ganglion cells. The spikes from each model ganglion cell were summed over a 32 ms time window, yielding a noisy rate-coded image. Analogous to how the primary visual cortex is postulated to infer features from noisy spike trains in the optic nerve, we inferred a higher-fidelity sparse reconstruction from the noisy rate-coded image using a convolutional dictionary trained on the original CIFAR10 database. Using a similar approach, we analyzed the asynchronous event trains from a silicon retina camera produced by self-motion through a laboratory environment. By training a dictionary of convolutional spatiotemporal features for simultaneously reconstructing differences of video frames (recorded at 22 Hz and 5.56 Hz) as well as discrete events generated by the silicon retina (binned at 484 Hz and 278 Hz), we were able to estimate high-frame-rate video from a low-power, low-bandwidth silicon retina camera.
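
The 2:1 masking schemes compared above are easy to sketch. The snippet below is a minimal illustration, not the dissertation's code: the actual in-painting uses a Locally Competitive Algorithm with a learned convolutional dictionary, for which a simple iterated neighbour average stands in here so the pipeline runs end to end.

```python
# Minimal sketch of the 2:1 masking schemes; a neighbour-average
# in-painter stands in for the dissertation's sparse inference (LCA).
import numpy as np

def checkerboard_mask(h, w):
    """Keep every other pixel -> exactly 2:1 compression."""
    yy, xx = np.mgrid[0:h, 0:w]
    return (yy + xx) % 2 == 0

def random_mask(h, w, keep=0.5, seed=0):
    """Keep a random 50% of pixels -> 2:1 compression on average."""
    rng = np.random.default_rng(seed)
    return rng.random((h, w)) < keep

def inpaint(img, mask, iters=50):
    """Fill missing pixels by iterated 4-neighbour averaging (LCA stand-in)."""
    out = np.where(mask, img, img[mask].mean()).astype(float)
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
               + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[~mask] = avg[~mask]          # only missing pixels are updated
    return out

img = np.random.default_rng(1).random((32, 32))       # stand-in thumbnail
for name, m in [("checkerboard", checkerboard_mask(32, 32)),
                ("random", random_mask(32, 32))]:
    rec = inpaint(img, m)
    psnr = 10 * np.log10(1.0 / np.mean((img - rec) ** 2))
    print(f"{name:12s} PSNR: {psnr:.1f} dB")
```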
254

[en] IMAGE TRANSMISSION THROUGH NOISY CHANNELS WITH LT CODES

CARLOS MARIO CORREA TORRES 13 July 2010 (has links)
LT (Luby Transform) codes, one of the main classes of fountain codes, were created for reliable transmission of information over erasure channels. These codes have no fixed rate; in other words, their code rate is versatile. This dissertation studies the transmission of images over a noisy AWGN (Additive White Gaussian Noise) channel using LT codes. Performance was investigated with BPSK modulation, and two schemes were tested: a scheme for a channel that includes erasures (BESC) and a proposed scheme using a Hamming code in series with an LT code. The LT-Hamming scheme showed a larger coding gain than both the BESC scheme and a convolutional code of similar characteristics. The LT-Hamming scheme was tested on different types of images over an AWGN channel, using the SPIHT technique for image compression. The PSNR (Peak Signal-to-Noise Ratio) was used as an objective measure of the quality of the recovered images, and several images are presented so that their quality can be analysed by visual inspection. Since the LT code is versatile with respect to code rate, a method is also proposed to assign different protection levels to the encoded information, i.e. UEP (Unequal Error Protection).
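
The defining property used above, the versatile (rateless) code rate, comes from how LT output symbols are generated. The sketch below is illustrative only; a deployed system like the one studied here would use the robust soliton degree distribution and a belief-propagation decoder. It shows the encoder: each output symbol is the XOR of a randomly chosen subset of source blocks, and symbols can be produced indefinitely.

```python
# Minimal sketch of an LT (Luby Transform) rateless encoder.
import random

def ideal_soliton(k):
    """Ideal soliton distribution: rho(1)=1/k, rho(d)=1/(d(d-1)) for d>=2."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode(blocks, n_symbols, seed=0):
    """Yield (neighbour_indices, xor_value) pairs - the LT output symbols."""
    k = len(blocks)
    rng = random.Random(seed)
    degrees = list(range(1, k + 1))
    weights = ideal_soliton(k)
    for _ in range(n_symbols):
        d = rng.choices(degrees, weights)[0]   # draw a degree
        idx = rng.sample(range(k), d)          # pick d distinct source blocks
        val = 0
        for i in idx:
            val ^= blocks[i]                   # XOR them together
        yield idx, val

source = [0b1010, 0b0111, 0b1100, 0b0001]      # k = 4 source blocks
for neighbours, value in lt_encode(source, 6): # any number of symbols works
    print(neighbours, format(value, "04b"))
```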
255

Design of a CMOS image sensor with low energy consumption for wireless sensor networks

Chefi, Ahmed 28 January 2014 (has links)
This research aims to develop a low-power vision system for Wireless Sensor Networks (WSNs). The imager must meet the specific requirements of multimedia applications for Wireless Vision Sensor Networks. By its nature, a multimedia application requires intensive computation at the node and a considerable number of packets to be exchanged over the radio link, and therefore consumes a lot of energy. An obvious way to reduce the amount of transmitted data, and thus extend the network lifetime, is to compress images before transmitting them. However, the severe constraints on the nodes make running standard compression algorithms (JPEG, JPEG 2000, MJPEG, MPEG, H.264, etc.) impractical. The vision system must therefore integrate image compression techniques that are both effective and of low complexity, with particular attention to the trade-off between energy consumption and quality of service (QoS).
256

Image compression system for a 3U CubeSat

Nzeugaing, Gutembert Nganpet January 2013 (has links)
Thesis submitted in partial fulfilment of the requirements for the degree of Master of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology, 2013 / Earth observation satellites use sensors or cameras to capture data or images that are relayed to the ground station(s). The ZACUBE-02 CubeSat, currently in development at the French South African Institute of Technology (F'SATI), carries a high-resolution 5-megapixel on-board camera. The purpose of the camera is to capture images of Earth and relay them to the ground station once communication is established. The captured images, which can amount to a large volume of data, have to be stored on board while the CubeSat awaits the next transmission window. This mode of operation introduces a number of problems, as the CubeSat has limited storage and memory capacity and cannot hold large amounts of data. Together with the limited downlink capacity, this created the need to design and develop an image compression system suitable for the CubeSat environment. Image compression focuses on reducing the size of the images to be stored as well as the size of the images to be transmitted to the ground station. The purpose of the study is to propose a compression system to be implemented on ZACUBE-02. An intensive study of current, proposed and implemented compression methods, algorithms and techniques, as well as of the CubeSat specification, served as input for defining the requirements for such a system. The proposed design combines image segmentation, image linearization and entropy coding (run-length coding); this combination is implemented to achieve lossless image compression. With the proposed design, a compression ratio of 10:1 was obtained without negatively affecting image quality. The on-board storage, power and bandwidth constraints are met by the proposed design, minimising the downlink transmission time. Within the study a number of objectives were met in order to design, implement and test the compression system, including a detailed study of image compression techniques, a look into techniques for improving the compression ratio, and a study of industrial hardware components suitable for the space environment. Keywords: CubeSat, hardware, compression, satellite image compression, Gumstix Overo Water, ZACUBE-02.
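
The run-length entropy-coding stage named above is simple enough to show directly. This is a minimal, generic sketch, not the ZACUBE-02 implementation; the real pipeline precedes it with segmentation and linearization so that long constant runs dominate.

```python
# Minimal run-length coding sketch: lossless, and effective exactly when
# the preceding stages have produced long runs of identical values.
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Encode bytes as (value, run_length) pairs."""
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((data[i], j - i))
        i = j
    return runs

def rle_decode(runs) -> bytes:
    return b"".join(bytes([v]) * n for v, n in runs)

scanline = bytes([0] * 40 + [255] * 20 + [0] * 40)   # a linearized row
runs = rle_encode(scanline)
assert rle_decode(runs) == scanline                   # lossless round trip
print(f"{len(scanline)} bytes -> {len(runs)} runs: {runs}")
```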
257

A morphing proposal using shape interpolation and morphological median

Higa, Rogerio Seiji, 1978- 12 August 2018 (has links)
Advisor: Yuzo Iano / Master's dissertation (Telecommunications and Telematics), Faculdade de Engenharia Elétrica e de Computação, Universidade Estadual de Campinas / Abstract: Animation is widely used in the film industry to produce visual effects or entire movies. In this area, the search for new tools goes hand in hand with the entertainment industry's need to show something new all the time. To meet this demand, this work proposes a new tool that blends image morphing, a well-known visual-effects technique, with the morphological median. The use of these two techniques produces an image sequence distinct from those of other morphing tools. The work also proposes the use of a shape interpolation algorithm within the morphing process to interpolate the feature markers. Shape interpolation allows two corresponding markers to have different numbers of points, and adds rotation-compensation options for interpolating the marker shapes. The results obtained with the proposed method are presented.
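
Shape interpolation between two outlines, of the kind applied above to the feature markers, can be illustrated with a signed-distance-field blend. Note this is a stand-in technique for illustration, not the dissertation's algorithm (which interpolates marker point sets and combines morphing with the morphological median); all names are illustrative.

```python
# Sketch of shape interpolation via signed distance fields: blend the
# signed distances of two binary shapes and threshold at zero.
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(mask):
    """Negative inside the shape, positive outside."""
    return edt(~mask) - edt(mask)

def interpolate_shapes(a, b, t):
    """The t-frame is where the blended signed distance is negative."""
    return (1 - t) * signed_distance(a) + t * signed_distance(b) < 0

yy, xx = np.mgrid[0:64, 0:64]
circle = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2        # source shape
square = (np.abs(yy - 32) < 18) & (np.abs(xx - 32) < 18)  # target shape

for t in (0.0, 0.5, 1.0):
    frame = interpolate_shapes(circle, square, t)
    print(f"t={t:.1f}: {frame.sum()} pixels inside")
```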
258

A Portable DARC Fax Service

Husberg, Björn January 2002 (has links)
DARC is a technique for data broadcasting over the FM radio network. Sectra Wireless Technologies AB has developed a handheld DARC receiver known as the Sectra CitySurfer. The CitySurfer is equipped with a high-resolution display along with buttons and a joystick that let the user view and navigate through various types of information received over DARC. Sectra Wireless Technologies AB has also developed, among other services, a paging system that enables personal message transmission over DARC. The background of this thesis is the wish to send fax documents through the paging system and to view received fax documents on the CitySurfer. The presented solution is a central PC-based fax server, responsible for receiving standard fax transmissions and converting the fax documents before redirecting them to the right receiver in the DARC network. The topics discussed in this thesis are fax document routing, fax document conversion and fax server system design.
259

Lossy and lossless image coding with low complexity, based on content

Liu, Yi 18 March 2015 (has links)
This doctoral research project aims at designing an improved version of the still-image codec LAR (Locally Adaptive Resolution), in terms of both compression performance and complexity. Several image compression standards have been proposed and used in multimedia applications, but research continues toward higher coding quality and/or lower computational cost. JPEG was standardized twenty years ago, yet it is still the most widely used compression format today. Despite its better coding efficiency, the adoption of JPEG 2000 has been limited by its higher computational cost compared to JPEG. In 2008, the JPEG committee announced a Call for Advanced Image Coding (AIC), aiming to standardize technologies going beyond the existing JPEG standards. The LAR codec was proposed in response to this call. The LAR framework combines compression efficiency with a content-based representation, and supports both lossy and lossless coding within the same structure. However, at the start of this study, the LAR codec implemented no rate-distortion optimization (RDO), which was detrimental to LAR during the AIC evaluation. This work therefore first characterizes the impact of the codec's main parameters on compression efficiency, and then constructs RDO models that configure LAR's parameters to achieve optimal or near-optimal coding efficiency. Based on these RDO models, a "quality constraint" method is introduced that encodes an image to a given target MSE/PSNR; the accuracy of the proposed technique, estimated as the ratio between the error variance and the setpoint, is about 10%. In addition, subjective quality is taken into account by applying the RDO models locally within the image rather than globally; perceptual quality visibly improves, with a significant gain measured by the objective SSIM (structural similarity) metric. Aiming at a low-complexity, efficient codec, a new lossless coding scheme is also proposed within the LAR framework. In this context, all coding steps are modified for a better final compression ratio, and a new classification module is introduced to decrease the entropy of the prediction errors. Experiments show that this lossless codec achieves compression ratios equivalent to JPEG 2000 while saving, on average, 76% of the encoding and decoding time.
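
The "quality constraint" idea above, encoding to a given target MSE/PSNR, can be illustrated generically. The thesis inverts its fitted RDO models to set the LAR parameters directly; lacking those models, the sketch below bisects JPEG's quality parameter instead, which demonstrates the same target-distortion control loop (the test image and names are illustrative, and the search assumes PSNR rises monotonically with quality).

```python
# Sketch of "encode to a target PSNR" via bisection on a quality knob.
import io
import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def quality_for_target(img: np.ndarray, target_db: float) -> int:
    lo, hi = 1, 95
    while lo < hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        Image.fromarray(img).save(buf, format="JPEG", quality=q)
        if psnr(img, np.asarray(Image.open(buf).convert("L"))) < target_db:
            lo = q + 1      # too distorted: raise quality
        else:
            hi = q          # target met: try lower quality
    return lo

yy, xx = np.mgrid[0:128, 0:128]
rng = np.random.default_rng(0)
img = (xx + yy + rng.normal(0, 8, (128, 128))).clip(0, 255).astype(np.uint8)
print("JPEG quality for a 35 dB target:", quality_for_target(img, 35.0))
```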
260

The contour tree image encoding technique and file format

Turner, Martin John January 1994 (has links)
The process of contourization is presented, which converts a raster image into a discrete set of plateaux or contours. These contours can be grouped into a hierarchical structure that defines total spatial inclusion, called a contour tree. A contour coder has been developed which fully describes these contours in a compact and efficient manner and is the basis for an image compression method. Simplification of the contour tree has been undertaken by merging contour tree nodes, thus lowering the contour tree's entropy; this can be exploited by the contour coder to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human visual system, lossy image compression can be achieved which minimises noticeable artifacts in the simplified image. The contour merging technique offers a lossy compression system complementary to the QDCT (Quantised Discrete Cosine Transform). The artifacts introduced by the two methods are very different: QDCT produces a general blurring and adds extra highlights in the form of overshoots, whereas contour merging sharpens edges, reduces highlights and introduces a degree of false contouring. A format based on the contourization technique which caters for most image types is defined, called the contour tree image format. Image operations directly on this compressed format have been studied, which for certain manipulations can offer significant speed increases over a standard raster image format. A couple of operations specific to the contour tree format are presented, showing some of the features of the new format.
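
The decomposition step described above, splitting a raster into connected plateaux of constant value, is compact to sketch. This is a minimal illustration only; the thesis's contributions (the inclusion hierarchy forming the contour tree, the contour coder, and node merging) are not shown, and the names are illustrative.

```python
# Sketch of contourization: label each connected equal-valued plateau.
import numpy as np
from scipy.ndimage import label

def contourize(img):
    """Return a label map giving each connected equal-valued plateau a
    unique id, plus the number of plateaux found."""
    plateaux = np.zeros(img.shape, dtype=int)
    next_id = 0
    for v in np.unique(img):
        comp, n = label(img == v)            # connected regions at value v
        plateaux[comp > 0] = comp[comp > 0] + next_id
        next_id += n
    return plateaux, next_id

img = np.array([[1, 1, 2, 2],
                [1, 3, 3, 2],
                [1, 3, 3, 2],
                [1, 1, 1, 2]])
labels, n = contourize(img)
print(f"{n} plateaux")
print(labels)
```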
