  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Kernel methods in steganalysis

Pevný, Tomáš. January 2008 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Computer Science, 2008. / Includes bibliographical references.
52

VHDL modeling and synthesis of the JPEG-XR inverse transform

Frandina, Peter. January 2009 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2009. / Typescript. Includes bibliographical references (leaf 45).
53

A query engine of novelty in video streams

Kang, James M. January 2005 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2005. / Typescript. Includes bibliographical references (leaves 112-116).
54

Image compression quality measurement: a comparison of the performance of JPEG and fractal compression on satellite images

Nolte, Ernst Hendrik (has links)
Thesis (MEng)--Stellenbosch University, 2000. / The purpose of this thesis is to investigate the nature of digital image compression and the calculation of the quality of the compressed images. The work focuses on greyscale images in the domain of satellite images and aerial photographs. Two compression techniques are studied in detail, namely the JPEG and fractal compression methods. Implementations of both techniques are applied to a set of test images. The rest of the thesis is dedicated to measuring the loss of quality introduced by the compression. A general method for quality measurement, the Signal to Noise Ratio (SNR), is discussed, as well as a technique presented in the literature quite recently, the Grey Block Distance. A new measure is then presented, followed by a means of comparing the performance of these measures. It was found that the new measure for image quality estimation performed marginally better than the SNR algorithm. Lastly, some possible improvements to this technique are mentioned, and the validity of the method used for comparing the quality measures is discussed.
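The SNR baseline against which the new measure is compared can be sketched in a few lines. This is a generic illustrative implementation in Python/NumPy, not the thesis's own code, and the ramp image and quantisation steps are synthetic stand-ins for the satellite test images.

```python
import numpy as np

def snr_db(reference, distorted):
    """Signal-to-noise ratio of a distorted image w.r.t. a reference, in dB."""
    reference = reference.astype(np.float64)
    distorted = distorted.astype(np.float64)
    noise_power = np.mean((reference - distorted) ** 2)
    signal_power = np.mean(reference ** 2)
    return 10.0 * np.log10(signal_power / noise_power)

# Synthetic example: a greyscale ramp degraded by coarse quantisation.
original = np.linspace(0, 255, 64 * 64).reshape(64, 64)
compressed = np.round(original / 16) * 16  # 16-level quantisation "compression"
print(f"SNR: {snr_db(original, compressed):.1f} dB")
```

A finer quantisation step yields a higher SNR, which is the monotone behaviour any distortion measure of this family should exhibit.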
55

Prediction of the Optimum Binder Content of Open-Graded Friction Course Mixtures Using Digital Image Processing

Mejias De Pernia, Yolibeth 28 October 2015 (has links)
The Florida Department of Transportation (FDOT) has been using Open-Graded Friction Course (OGFC) mixtures to improve the skid resistance of asphalt pavements in wet weather. OGFC mixture design depends strongly on the Optimum Binder Content (OBC), which indicates whether the mixture has sufficient bonding between the aggregate and the asphalt binder. At present, FDOT designs OGFC mixtures using a pie-plate visual draindown method (FM 5-588). In this method, the OBC is determined by visual inspection of the asphalt binder draindown (ABD) configuration of three OGFC samples placed on pie plates with pre-determined trial asphalt binder contents (AC). The inspection is performed by trained and experienced technicians, who determine the OBC by perceptive interpolation or extrapolation from the known AC of the samples. To eliminate the human subjectivity of the current visual method, an automated method for quantifying the OBC of OGFC mixtures was developed using digital images of the pie plates together with concepts from perceptual image coding and neural networks (NN). Phase I of the project involved FM 5-588-based OBC testing of OGFC mixture designs, using a large set of samples prepared from a variety of granitic and oolitic limestone aggregate sources used by FDOT. Digital images of the pie plates containing these samples were then acquired with an imaging setup customized by FDOT. The correlation between relevant digital imaging parameters and the corresponding AC was first investigated using conventional regression analysis. Phase II involved the development of a perceptual image model based on human perception metrics considered for use in the OBC estimation. A General Regression Neural Network (GRNN) was used to uncover the nonlinear correlation between the selected pie-plate image parameters, the corresponding AC, and the visually estimated OBC.
The GRNN proved the most viable method for handling the multi-dimensional input data, in which each OGFC sample contributes AC and imaging-parameter information from a set of three pie plates. The GRNN was trained on 70% and tested on 30% of the database completed in Phase I. Phase III involved configuring a quality control tool (QCT) for the automated method to enhance its robustness and the likelihood of adoption by other agencies and contractors. The QCT is built on three quality control imaging parameters (QCIP): the orientation, spatial distribution, and segregation of the ABD configuration in pie plate specimen (PPS) images. These QCIP were then evaluated on PPS images from a variety of independent mixture designs produced using the FDOT visual method. Overall, the study found that the newly developed GRNN-based software provides satisfactory and reliable estimates of the OBC. Furthermore, the statistical and computer-generated results indicated that the selected QCIP are adequate for formulating quality control criteria for PPS production. The developed QCT is expected to enhance the reliability of the automated, image-processing-based OBC estimation methodology.
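The GRNN named above is, at its core, a Gaussian-kernel-weighted average of training targets (Specht, 1991). The following is a minimal sketch of that idea, not the study's software; the toy inputs standing in for (imaging parameters, AC) and the synthetic "OBC" response are entirely hypothetical.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """General Regression Neural Network: predictions are Gaussian-kernel-
    weighted averages of the training targets (Nadaraya-Watson regression)."""
    # Squared Euclidean distance from every query point to every training point.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # kernel weights, shape (queries, n)
    return (w @ y_train) / w.sum(axis=1)   # normalised weighted average

# Hypothetical toy data standing in for (imaging parameters, AC) -> OBC.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(50, 3))
y = X @ np.array([0.5, 1.0, -0.3]) + 6.0   # synthetic smooth "OBC" response
estimates = grnn_predict(X, y, X[:5], sigma=0.05)
```

The single smoothing parameter `sigma` controls the bias-variance trade-off: small values reproduce the training data closely, larger values give smoother interpolation, which is why GRNNs suit small, noisy multi-dimensional datasets like the one described.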
56

Light-field image and video compression for future immersive applications

Dricot, Antoine 01 March 2017 (has links)
Evolutions in video technologies tend to offer increasingly immersive experiences. However, currently available 3D technologies are still very limited and provide only uncomfortable and unnatural viewing situations. The next generation of immersive video technologies therefore represents a major technical challenge, particularly with the promising light-field (LF) approach. The light-field represents all the light rays (i.e. in all directions) in a scene. New devices for sampling and capturing the light-field of a scene are emerging fast, such as camera arrays and plenoptic cameras based on lenticular arrays. Several kinds of display systems target immersive applications, such as head-mounted displays and projection-based light-field displays, and promising target applications already exist (e.g. 360° video, virtual reality). For several years now the light-field representation has been drawing interest from many companies and institutions, for example in the MPEG and JPEG groups. Light-field contents have specific structures and use a massive amount of data, which represents a challenge for setting up future services. One of the main goals of this work is first to assess which technologies and formats are realistic or promising. The study is done through the scope of image and video compression, as compression efficiency is a key factor for enabling these services on consumer markets. Secondly, improvements and new coding schemes are proposed to increase compression performance and enable efficient light-field content transmission on future networks.
57

Applying the MDCT to image compression

Muller, Rikus (has links)
Thesis (DSc (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2009. / The replacement of the standard discrete cosine transform (DCT) of JPEG with the windowed modified DCT (MDCT) is investigated to determine whether improvements in numerical quality can be achieved. To this end, we employ an existing algorithm for optimal quantisation, for which we also propose improvements. This involves the modelling and prediction of quantisation tables to initialise the algorithm, a strategy that is also thoroughly tested. Furthermore, the effects of various window functions on the coding results are investigated, and we find that improved quality can indeed be achieved by modifying JPEG in this fashion.
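The windowed MDCT at the heart of this investigation can be sketched compactly. This is the generic textbook MDCT with a sine window, not the author's implementation; perfect reconstruction of the overlapped region follows from time-domain aliasing cancellation (Princen-Bradley), provided the window satisfies w[n]² + w[n+N]² = 1, as the sine window does.

```python
import numpy as np

N = 8                                                 # half the block length
w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))  # sine window

# MDCT basis: N coefficients from 2N samples (50% overlapping blocks).
n = np.arange(2 * N)
k = np.arange(N)
C = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))

def mdct(block):
    """Windowed forward MDCT: 2N samples -> N coefficients."""
    return C @ (w * block)

def imdct(coeffs):
    """Windowed inverse MDCT: N coefficients -> 2N samples for overlap-add."""
    return w * ((2.0 / N) * (C.T @ coeffs))

rng = np.random.default_rng(0)
x = rng.standard_normal(3 * N)
y = np.zeros(3 * N)
y[0:2 * N] += imdct(mdct(x[0:2 * N]))   # first block
y[N:3 * N] += imdct(mdct(x[N:3 * N]))   # next block, hop of N samples
# The middle N samples are reconstructed exactly: aliasing cancels on overlap.
```

Note that each block alone is not invertible (N coefficients for 2N samples); only the overlap-add of adjacent blocks recovers the signal, which is what makes the MDCT attractive as a lapped, blocking-artifact-free replacement for the DCT.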
58

Objective Perceptual Quality Assessment of JPEG2000 Image Coding Format Over Wireless Channel

Chintala, Bala Venkata Sai Sundeep January 2019 (has links)
Compressed images constitute a dominant source of Internet traffic today, and image compression plays an important role in modern multimedia communications. The image compression standards set by the Joint Photographic Experts Group (JPEG) include JPEG and JPEG2000. The group created the JPEG standard so that still pictures could be compressed for e-mail, displayed on web pages, and used in high-resolution digital photography. The standard was originally based on the Discrete Cosine Transform (DCT), a mathematical method that converts a sequence of data to the frequency domain. In 2000, the group proposed a new standard, JPEG2000, which provides better compression efficiency; the downside is that the computation required to achieve the same compression efficiency as the original JPEG format is higher. JPEG is a lossy compression standard, discarding less important information without causing noticeable perceptual differences, whereas in lossless compression the primary purpose is to reduce the number of bits required to represent the original image samples without any loss of information. Application areas of the JPEG standard include the Internet, digital cameras, and printing and scanning peripherals. In this thesis, a simulator-like setup is needed to conduct the objective quality assessment. An image is given as input to the wireless communication system, its data size is varied (e.g. 5%, 10%, 15%, etc.) for JPEG2000 compression, and a Signal-to-Noise Ratio (SNR) value is given as input. The compressed image is then passed through a JPEG encoder and transmitted over a Rayleigh fading channel.
The image obtained after applying these constraints is decoded at the receiver, and the inverse discrete wavelet transform (IDWT) is applied to invert the JPEG2000 compression. The coefficients are scalar-quantized to reduce the number of bits needed to represent them without loss of image quality, and the final image is displayed on screen. After decoding at the receiver, the original input image is compared against the images of varying data size for each SNR value. In particular, objective perceptual quality assessment through the Structural Similarity (SSIM) index is provided using MATLAB.
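The SSIM index used for the assessment combines luminance, contrast, and structure comparisons (Wang et al., 2004). The sketch below shows the single-window (global) form in Python/NumPy rather than the thesis's MATLAB code; the standard formulation applies the same expression locally over a sliding Gaussian window and averages the resulting map.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM: ((2*mx*my+C1)(2*cov+C2)) /
    ((mx^2+my^2+C1)(vx+vy+C2)). A simplification of the usual
    locally-windowed SSIM, for illustration only."""
    C1 = (0.01 * data_range) ** 2   # stabilising constants from the SSIM paper
    C2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

img = np.tile(np.arange(64, dtype=float), (64, 1))           # smooth ramp image
noisy = img + np.random.default_rng(2).normal(0, 10, img.shape)
```

An identical pair scores 1, and any distortion pushes the score below 1, which is the property that lets SSIM rank the decoded images across channel SNR values.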
59

An Algorithm for Image Quality Assessment

Ivkovic, Goran 10 July 2003 (has links)
Image quality measures are used to optimize image processing algorithms and evaluate their performance. The only reliable way to assess image quality is subjective evaluation by human observers, where the mean of their scores is used as the quality measure; this is known as the mean opinion score (MOS). In addition, there are various objective (quantitative) measures. The most widely used are the mean squared error (MSE), peak signal-to-noise ratio (PSNR), and signal-to-noise ratio (SNR). Since these simple measures do not always produce results that agree with subjective evaluation, many other quality measures have been proposed. They are mostly modifications of MSE that try to account for properties of the human visual system (HVS), such as the nonlinear character of brightness perception, the contrast sensitivity function (CSF), and texture masking. In these approaches, the quality measure is computed as the MSE of input image intensities, or of frequency-domain coefficients obtained after some transform (DFT, DCT, etc.), weighted by coefficients that account for the mentioned HVS properties. These measures have some advantages over MSE, but their ability to predict image quality is still limited. A different approach is presented here. The proposed quality measure uses a simple model of the HVS with one user-defined parameter, whose value depends on the reference image. It is based on the average value of locally computed correlation coefficients, which captures structural similarity between the original and distorted images, something that cannot be measured by MSE or any kind of weighted MSE. The proposed measure also differentiates between random and signal-dependent distortion, because the two have different effects on a human observer. This is achieved by computing the average correlation coefficient between the reference image and the error image.
Performance of the proposed quality measure is illustrated by examples involving images with different types of degradation.
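The core quantity the abstract describes, an average of locally computed correlation coefficients, can be sketched as follows. This illustrates the structural idea only; the block size, the non-overlapping tiling, and the handling of flat blocks are illustrative choices, not the thesis's exact measure.

```python
import numpy as np

def mean_local_corr(ref, dist, block=8):
    """Average Pearson correlation coefficient over non-overlapping blocks
    of two same-size greyscale images (a structural-similarity sketch)."""
    h, w = ref.shape
    coeffs = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            a = ref[i:i + block, j:j + block].ravel().astype(np.float64)
            b = dist[i:i + block, j:j + block].ravel().astype(np.float64)
            a -= a.mean()
            b -= b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom > 0:                  # skip perfectly flat blocks
                coeffs.append((a * b).sum() / denom)
    return float(np.mean(coeffs))

rng = np.random.default_rng(3)
ref_img = rng.standard_normal((32, 32))
dist_img = ref_img + rng.standard_normal((32, 32))  # heavy random distortion
```

Because correlation is invariant to per-block shifts and scalings of intensity, this score responds to loss of local structure rather than raw intensity error, which is precisely what MSE-family measures cannot separate out.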
60

Implementation of a Real Time Video Transmission Link using MPEG and Wavelet Methods

Heijdenberg, Karl, Johansson, Thomas January 2004 (has links)
At Saab Aerosystems in Linköping, Sweden, there is a presentation and manoeuvre simulator for the JAS-39 Gripen fighter jet, called PMSIM. In this thesis we study how to transfer sensor images generated by PMSIM to other simulators or desktop computers. The transmission is band-limited, so some kind of image coding must be used; the greater part of this thesis is therefore concerned with image coding.

To fulfil the real-time requirement, the image coding has to be quite simple and the transmission has to be fast. To achieve fast transmission, the network protocol has to use as little overhead information as possible. Such a protocol has therefore been designed and implemented.

This report also includes a survey of real radio links, investigating how the quality of the video stream can be affected by noise and other disturbances.

The work in this report revolves around the implementation of a video link whose purpose is to transmit and display sensor images. The link consists mainly of three parts: an image coder, a network link, and an image player. The image coding work focuses on MPEG and wavelets.

The wavelet technique is not a well-known coding principle for video applications, although as a coding technique for still images it is well established; for instance, it is used in the JPEG2000 standard. Experiments conducted and published in this report suggest that for some applications the wavelet technique can be a viable alternative to MPEG for video coding.
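The wavelet principle the thesis builds on can be illustrated with the simplest case, a one-level Haar transform. This is a generic textbook sketch, not code from the thesis (which does not specify its wavelet here): the signal is split into orthonormal averages (approximation) and differences (detail), and a coder spends most of its bits on the approximation band, where the energy of smooth content concentrates.

```python
import numpy as np

def haar_analysis(x):
    """One level of the Haar wavelet transform of a 1-D signal of even
    length: orthonormal average (approximation) and difference (detail)."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2.0)
    detail = (even - odd) / np.sqrt(2.0)
    return approx, detail

def haar_synthesis(approx, detail):
    """Exact inverse of haar_analysis: interleave reconstructed samples."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

sig = np.sin(np.linspace(0, 4 * np.pi, 64))   # a smooth test signal
approx, detail = haar_analysis(sig)
```

The transform is orthonormal, so energy is preserved across the two bands and reconstruction is exact; coarsely quantising the low-energy detail band is then the basic lossy-coding move, applied recursively (and in 2-D) in schemes like JPEG2000.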
