41

Application of Least Squares Support Vector Machines in Image Coding

Chen, Pao-jung 19 July 2006 (has links)
In this thesis, the least squares support vector machine for regression (LS-SVR) is applied to image coding. First, we propose five simple algorithms for solving LS-SVR. For linear regression, two simple Widrow-Hoff-like algorithms, one in primal form and one in dual form, are proposed for LS-SVR problems. The dual-form algorithm is then generalized to kernel-based nonlinear LS-SVR. The elegant and powerful two-parameter sequential minimal optimization (2PSMO) and three-parameter sequential minimal optimization (3PSMO) algorithms are presented in detail. A predictive function obtained from LS-SVR is used to approximate the gray levels of the image. After pruning, only a subset of the training data, called the support vectors, is saved. Experimental results on seven image blocks show that the LS-SVR with a Gaussian kernel is more suitable than one with a Mahalanobis kernel based on a covariance matrix. A two-layer LS-SVR is proposed to select the machine parameters of the LS-SVR. Before training the outer LS-SVR, feature extraction is used to reduce the input dimensionality. Experimental results on three whole images show that the two-layer LS-SVR with dimensionality reduction achieves higher PSNR than the two-layer LS-SVR without it on the Lena and Baboon images, and nearly identical PSNR on the F16 image.
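As a concrete illustration of the regression step, the following minimal numpy sketch solves the standard LS-SVR dual linear system for one 8x8 block, with pixel coordinates as inputs and gray levels as targets. It is a direct solve rather than the thesis's Widrow-Hoff-like or 2PSMO/3PSMO iterative solvers, and the kernel width and regularization constant are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=2.0):
    # Pairwise Gaussian (RBF) kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma=1e3, sigma=2.0):
    # Solve the standard LS-SVR dual system:
    # [ 0        1^T          ] [ b     ]   [ 0 ]
    # [ 1   K + (1/gamma) I   ] [ alpha ] = [ y ]
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma=2.0):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha + b

# Approximate the gray levels of one 8x8 block from pixel coordinates.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
coords = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
b, alpha = lssvr_fit(coords, block.ravel())
recon = lssvr_predict(coords, b, alpha, coords).reshape(8, 8)
print(np.abs(recon - block).mean())  # approximation tightens as gamma grows
# Pruning would keep only the samples with the largest |alpha| as support vectors.
```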
42

DCT-based Image/Video Compression: New Design Perspectives

Sun, Chang January 2014 (has links)
To push the envelope of DCT-based lossy image/video compression, this thesis revisits the design of several fundamental blocks in image/video coding, ranging from source modelling and quantization tables to quantizers and entropy coding. Firstly, to better handle the heavy-tail phenomenon commonly seen in DCT coefficients, a new model dubbed the transparent composite model (TCM) is developed and justified. Given a sequence of DCT coefficients, the TCM first separates the tail from the main body of the sequence, then uses a uniform distribution to model the DCT coefficients in the heavy tail and a parametric distribution to model those in the main body. The separation boundary and the other distribution parameters are estimated online via maximum likelihood (ML) estimation. Efficient online algorithms are proposed for parameter estimation, and their convergence is proved. When the parametric distribution is a truncated Laplacian, the resulting TCM, dubbed the Laplacian TCM (LPTCM), not only achieves superior modeling accuracy with low estimation complexity, but also performs effective nonlinear data reduction by identifying and separating DCT coefficients in the heavy tail (referred to as outliers) from those in the main body (referred to as inliers). This in turn opens up opportunities for its use in DCT-based image compression.

Secondly, quantization table design is revisited for image/video coding with soft-decision quantization (SDQ). Unlike conventional approaches, where quantization table design is bundled with a specific encoding method, we assume optimal SDQ encoding and design the quantization table for the purpose of reconstruction. Under this assumption, we model transform coefficients across different frequencies as independently distributed random sources and apply the Shannon lower bound to approximate the rate-distortion function of each source. We then show that a quantization table can be optimized so that the resulting distortion follows a prescribed profile, yielding the so-called optimal distortion profile scheme (OptD). Guided by this theoretical result, we present an efficient statistical-model-based algorithm that uses the Laplacian model to design quantization tables for DCT-based image compression. When applied to standard JPEG encoding, it provides a performance gain of more than 1.5 dB in PSNR at almost no extra complexity. Compared with the state-of-the-art JPEG quantization table optimizer, the proposed algorithm offers an average 0.5 dB gain with computational complexity reduced by a factor of more than 2000 when SDQ is off, and a performance gain of 0.1 dB or more with 85% of the complexity removed when SDQ is on.

Thirdly, building on the LPTCM and OptD, we propose an efficient non-predictive DCT-based image compression system in which the quantizers and entropy coding are completely redesigned and the corresponding SDQ algorithm is developed. In terms of rate versus visual quality, the proposed system achieves overall coding results that are among the best, similar to those of H.264 or HEVC intra (predictive) coding. In terms of rate versus objective quality, it significantly outperforms baseline JPEG by more than 4.3 dB on average with a moderate increase in complexity, and outperforms ECEB, the state-of-the-art non-predictive image coding scheme, by 0.75 dB when SDQ is off at the same level of computational complexity, and by 1 dB when SDQ is on at the cost of extra complexity. In comparison with H.264 intra coding, our system provides an overall gain of about 0.4 dB with dramatically reduced computational complexity. It offers comparable or even better coding performance than HEVC intra coding in the high-rate region or for complicated images, at less than 5% of the latter's encoding complexity. In addition, the proposed system offers a multiresolution capability, which, together with its comparatively high coding efficiency and low complexity, makes it a good alternative for real-time image processing applications.
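To make the TCM separation concrete, here is a toy Python sketch of the boundary search under simplifying assumptions: it works on DCT-coefficient magnitudes, uses the sample mean as a crude stand-in for the truncated-Laplacian scale MLE (the exact estimate needs a one-dimensional numeric solve, which the thesis handles with efficient online algorithms), and brute-forces the boundary rather than estimating it online.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_magnitudes(img, bs=8):
    # Collect |AC coefficient| magnitudes from every bs-x-bs 2-D DCT block.
    h, w = img.shape[0] - img.shape[0] % bs, img.shape[1] - img.shape[1] % bs
    mags = []
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            c = dctn(img[i:i + bs, j:j + bs], norm='ortho')
            mags.append(np.abs(c).ravel()[1:])          # drop the DC term
    return np.concatenate(mags)

def lptcm_boundary(x, n_candidates=64):
    # Scan candidate separation boundaries yc and keep the one maximizing
    # the composite log-likelihood: the main body on [0, yc] modeled as a
    # truncated Laplacian magnitude (exponential), the tail on (yc, a]
    # modeled as uniform.
    x = np.sort(x)
    a = x[-1]
    best_ll, best_yc = -np.inf, None
    for yc in np.linspace(np.percentile(x, 50), np.percentile(x, 99.5),
                          n_candidates):
        body, tail = x[x <= yc], x[x > yc]
        if len(body) < 2 or len(tail) < 1:
            continue
        p = len(body) / len(x)                  # inlier mixing weight
        lam = body.mean()                       # crude stand-in for the MLE
        Z = 1.0 - np.exp(-yc / lam)             # truncation normalizer
        ll = (len(body) * (np.log(p) - np.log(lam * Z)) - body.sum() / lam
              + len(tail) * (np.log(1.0 - p) - np.log(a - yc)))
        if ll > best_ll:
            best_ll, best_yc = ll, yc
    return best_yc

img = np.random.default_rng(1).normal(128.0, 40.0, size=(64, 64))
print("separation boundary:", lptcm_boundary(block_dct_magnitudes(img)))
```

Coefficients above the boundary are the outliers that the LPTCM flags for separate treatment in the compression system.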
43

Design and application of quincunx filter banks

Chen, Yi 30 January 2007 (has links)
Quincunx filter banks are two-dimensional, two-channel, nonseparable filter banks widely used in signal processing applications. In this thesis, we study the design and applications of quincunx filter banks in the processing of two-dimensional digital signals. Symmetric extension algorithms for quincunx filter banks are proposed. In the one-dimensional case, symmetric extension is a commonly used technique for building nonexpansive transforms of finite-length sequences. We show how this technique can be extended to the nonseparable quincunx case. We consider three types of quadrantally-symmetric linear-phase quincunx filter banks, and for each type we show how nonexpansive transforms of two-dimensional sequences defined on arbitrary rectangular regions can be constructed. New optimization-based techniques are proposed for the design of high-performance quincunx filter banks for image coding. The new methods yield linear-phase perfect-reconstruction systems with high coding gain, good analysis/synthesis filter frequency responses, and prescribed vanishing-moment properties. We present examples of filter banks designed with these techniques and demonstrate their efficiency for image coding relative to existing filter banks. The best filter banks in our design examples outperform other previously proposed quincunx filter banks in approximately 80% of cases and sometimes even outperform the well-known 9/7 filter bank from the JPEG-2000 standard.
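As background for the filter-bank structure, the sketch below implements the trivial "lazy" quincunx filter bank: a pure polyphase split of a 2-D signal onto its two quincunx cosets and the corresponding merge, which is perfectly reconstructing by construction. The designs in the thesis add optimized linear-phase analysis/synthesis filters and symmetric extension on top of this lattice structure; the repacking convention here is one illustrative choice.

```python
import numpy as np

def quincunx_split(x):
    # Polyphase split onto the two quincunx cosets: samples with
    # i + j even go to channel 0, i + j odd to channel 1. Each coset
    # is repacked row by row onto a rectangular h x w/2 grid.
    h, w = x.shape
    assert w % 2 == 0, "even width keeps the repacked cosets rectangular"
    c0 = np.empty((h, w // 2), dtype=x.dtype)
    c1 = np.empty((h, w // 2), dtype=x.dtype)
    for r in range(h):
        c0[r] = x[r, (r % 2)::2]        # columns where i + j is even
        c1[r] = x[r, (1 - r % 2)::2]    # columns where i + j is odd
    return c0, c1

def quincunx_merge(c0, c1, shape):
    # Inverse of quincunx_split: interleave the two cosets back.
    out = np.empty(shape, dtype=c0.dtype)
    for r in range(shape[0]):
        out[r, (r % 2)::2] = c0[r]
        out[r, (1 - r % 2)::2] = c1[r]
    return out

x = np.arange(64.0).reshape(8, 8)
c0, c1 = quincunx_split(x)
assert np.array_equal(quincunx_merge(c0, c1, x.shape), x)  # perfect reconstruction
```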
44

Fractal techniques for face recognition

Ebrahimpour-Komleh, Hossein January 2006 (has links)
Fractals are popular because of their ability to create complex images using only a few simple codes. This is possible because fractal coding captures image redundancy and represents the image in compressed form by exploiting self-similarity. For many years fractals were used for image compression. In the last few years they have also been used for face recognition. In this research we present new fractal methods for recognition, especially human face recognition. Three new methods for using fractals for face recognition are introduced: the direct use of fractal codes as features, fractal image-set coding, and subfractals.

In the first part, the mathematical principle behind the application of fractal image codes to recognition is investigated. An image Xf can be represented as Xf = A·Xf + B, where A and B are the fractal parameters of Xf. Different fractal codes can be derived for any given image. With the definition of a fractal transformation T(X) = A(X - Xf) + Xf, the images produced in the fractal decoding process, starting from an arbitrary image X0, satisfy Xn = T^n(X0) = A^n(X0 - Xf) + Xf. We show that some choices of A or B lead to faster convergence to the final image.

Fractal image-set coding is based on the fact that the fractal code of an arbitrary gray-scale image can be divided into two parts: geometrical parameters and luminance parameters. Because the fractal codes for an image are not unique, the set of fractal parameters can be changed without significantly affecting the quality of the reconstructed image. Fractal image-set coding keeps the geometrical parameters the same for all images in the database; differences between images are captured in the non-geometrical, or luminance, parameters, which are faster to compute. For recognition, the fractal code of a query image is applied to every image in the training set for one iteration. The distance between an image and the result after one iteration defines a similarity measure between that image and the query image.

The fractal code of an image is a set of contractive mappings, each of which maps a domain block to its corresponding range block. The distribution of the domain blocks selected for the range blocks depends on the image content and on the fractal encoding algorithm used. A small variation in one part of the input image may change the contents of the range and domain blocks in the fractal encoding process, changing the transformation parameters in the same part or even in other parts of the image. A subfractal is a set of fractal codes related to the range blocks of one part of the image, calculated to be independent of the codes of the other parts of the same image. In this case, the domain blocks nominated for each range block must be located in the same part of the image as the range blocks.

The proposed fractal techniques were applied to face recognition using the MIT and XM2VTS face databases. Accuracies of 95% were obtained with up to 156 images.
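The geometric convergence implied by Xn = A^n(X0 - Xf) + Xf can be demonstrated with a toy sketch in which A is a generic contractive linear map; in real fractal coding, A and B encode block-wise contractive transforms rather than dense random matrices, so the matrices below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                                   # a tiny "image", flattened

# Build a contractive affine map X -> A @ X + B by scaling a random A
# so its spectral norm is below 1; then ||A^k|| -> 0 geometrically.
A = rng.normal(size=(n, n))
A *= 0.8 / np.linalg.norm(A, 2)
B = rng.normal(size=n)

# The fixed point Xf satisfies Xf = A @ Xf + B.
Xf = np.linalg.solve(np.eye(n) - A, B)

# Decode from an arbitrary start X0 by iterating X_{k+1} = A @ X_k + B.
# Since X_k = A^k @ (X0 - Xf) + Xf, the error shrinks like ||A||_2^k.
X = rng.normal(size=n)
for _ in range(30):
    X = A @ X + B
print(np.linalg.norm(X - Xf))            # roughly 0.8**30 of the initial error
```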
45

Error resilience in JPEG2000

Natu, Ambarish Shrikrishna. January 2003 (has links)
Thesis (M.E.)--University of New South Wales, 2003. Also available online.
46

Contributions to image encryption and authentication

Uehara, Takeyuki. January 2003 (has links)
Thesis (Ph.D.)--University of Wollongong, 2003. Typescript. Bibliographical references: leaves 201-211.
47

Robust image transmission with rate-compatible low-density parity-check codes over noisy channels

Pan, Xiang 2005 (has links)
Thesis (M. App. Sc.)--Carleton University, 2005. Includes bibliographical references (p. 85-89). Also available in electronic format on the Internet.
48

Automatic source camera identification by lens aberration and JPEG compression statistics

Choi, Kai-san. January 2006 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2007. Title proper from title frame. Also available in printed format.
49

A study of image compression techniques, with specific focus on weighted finite automata

Muller, Rikus. January 2005 (has links)
Thesis (MSc)--University of Stellenbosch, 2005. Bibliography. Also available via the Internet.
50

Kernel methods in steganalysis

Pevný, Tomáš. January 2008 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Computer Science, 2008. Includes bibliographical references.
