31

An RBF Neural Network Method for Image Progressive Transmission

Chen, Ying-Chung 13 July 2000 (has links)
None
32

Group testing for image compression /

Hong, Edwin S. January 2001 (has links)
Thesis (Ph. D.)--University of Washington, 2001. / Vita. Includes bibliographical references (p. 155-161).
33

Some variations on Discrete-Cosine-Transform-based lossy image compression

Chua, Doi-eng., 蔡岱榮. January 2000 (has links)
published_or_final_version / Mathematics / Master / Master of Philosophy
34

Parallel implementation of fractal image compression

Uys, Ryan F. January 2000 (has links)
Fractal image compression exploits the piecewise self-similarity present in real images as a form of information redundancy that can be eliminated to achieve compression. This theory, based on Partitioned Iterated Function Systems, is presented. As an alternative to the established JPEG, it provides a similar compression-ratio to fidelity trade-off. Fractal techniques promise faster decoding and potentially higher fidelity, but the computationally intensive compression process has prevented commercial acceptance. This thesis presents an algorithm mapping the problem onto a parallel processor architecture, with the goal of reducing the encoding time. The experimental work involved implementation of this approach on the Texas Instruments TMS320C80 parallel processor system. Results indicate that the fractal compression process is unusually well suited to parallelism, with speed gains approximately linearly related to the number of processors used. Parallel processing issues such as coherency, management and interfacing are discussed. The code designed incorporates pipelining and parallelism on all conceptual and practical levels, ensuring that all resources are fully utilised, achieving close to optimal efficiency. The computational intensity was reduced by several means, including conventional classification of image sub-blocks by content, with comparisons across class boundaries prohibited. A faster approach adopted was to perform estimate comparisons between blocks based on pixel value variance, identifying candidates for more time-consuming, accurate RMS inter-block comparisons. These techniques, combined with the parallelism, allow compression of 512×512-pixel, 8-bit images in under 20 seconds, while maintaining a 30 dB PSNR. This is up to an order of magnitude faster than reported for conventional sequential processor implementations. Fractal based compression of colour images and video sequences is also considered.
The work confirms the potential of fractal compression techniques, and demonstrates that a parallel implementation is appropriate for addressing the compression time problem. The processor system used in these investigations is faster than currently available PC platforms, but the relevance lies in the anticipation that future generations of affordable processors will exceed its performance. The advantages of fractal image compression may then be accessible to the average computer user, leading to commercial acceptance. / Thesis (M.Sc.Eng.)-University of Natal, Durban, 2000.
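The variance-based candidate screening described in the abstract can be sketched as follows. This is a minimal illustration, not code from the thesis: `best_match`, `block_variance` and the tolerance value are hypothetical names and parameters, and real fractal coders also search over contractive affine transforms of the domain blocks.

```python
def block_variance(block):
    """Variance of a flat list of pixel values."""
    n = len(block)
    mean = sum(block) / n
    return sum((p - mean) ** 2 for p in block) / n

def rms_distance(a, b):
    """Root-mean-square distance between two equal-sized blocks."""
    n = len(a)
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / n) ** 0.5

def best_match(range_block, domain_blocks, var_tolerance=50.0):
    """Return the index of the best-matching domain block.

    A cheap variance comparison prunes candidates first; the expensive
    RMS comparison runs only on blocks of similar variance.
    """
    rv = block_variance(range_block)
    candidates = [i for i, d in enumerate(domain_blocks)
                  if abs(block_variance(d) - rv) <= var_tolerance]
    if not candidates:  # fall back to an exhaustive search
        candidates = list(range(len(domain_blocks)))
    return min(candidates,
               key=lambda i: rms_distance(range_block, domain_blocks[i]))
```

The pruning step is where the speed-up comes from: variance is computed once per block, while RMS comparison is pairwise and dominates the encoding time.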
35

Implementation of an application specific low bit rate video compression scheme.

McIntosh, Ian James. January 2001 (has links)
The trend towards digital video has created huge demands on the link bandwidth required to carry the digital stream, giving rise to the growing research into video compression schemes. General video compression standards, which focus on providing the best compression for any type of video scene, have been shown to perform badly at low bit rates and thus are not often used for such applications. A suitable low bit rate scheme would be one that achieves a reasonable degree of quality over a range of compression ratios, while perhaps being limited to a small set of specific applications. One such application specific scheme, as presented in this thesis, is to provide a differentiated image quality, allowing a user-defined region of interest to be reproduced at a higher quality than the rest of the image. The thesis begins by introducing some important concepts that are used for video compression, followed by a survey of relevant literature concerning the latest developments in video compression research. A video compression scheme, based on the Wavelet transform, and using an application specific idea, is proposed and implemented on a digital signal processor (DSP), the Philips Trimedia TM-1300. The scheme is able to capture and compress the video stream and transmit the compressed data via a low bit-rate serial link to be decompressed and displayed on a video monitor. A wide range of flexibility is supported, with the ability to change various compression parameters 'on-the-fly'. The compression algorithm is controlled by a PC application that displays the decompressed video and the original video for comparison, while displaying useful rate metrics such as Peak Signal to Noise Ratio (PSNR). Details of implementation and practicality are discussed. The thesis then presents examples and results from both implementation and testing before concluding with suggestions for further improvement. / Thesis (M.Sc.Eng.)-University of Natal, Durban, 2001.
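The PSNR metric reported by the PC application is the standard one; a minimal sketch of how it is computed (the function name and flat pixel-sequence interface are assumptions for illustration, not code from the thesis):

```python
import math

def psnr(original, reconstructed, max_value=255):
    """Peak Signal-to-Noise Ratio in dB between two pixel sequences."""
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / n
    if mse == 0:
        return float('inf')  # identical images: distortion-free
    return 10 * math.log10(max_value ** 2 / mse)
```

For 8-bit video, `max_value` is 255; higher PSNR means lower distortion, and values around 30-40 dB are typical of acceptable lossy compression.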
36

Compression and restoration of noisy images

Al-Shaykh, Osama K. 12 1900 (has links)
No description available.
37

Adaptive triangulations

Maizlish, Oleksandr 17 April 2014 (has links)
In this dissertation, we consider the problem of piecewise polynomial approximation of functions over sets of triangulations. Recently developed adaptive methods, where the hierarchy of triangulations is not fixed in advance and depends on the local properties of the function, have received considerable attention. The quick development of these adaptive methods has been due to the discovery of the wavelet transform in the 1960s, probably the best tool for image coding. Since the mid-80s, there have been many attempts to design 'Second Generation' adaptive techniques that particularly take into account the geometry of edge singularities of an image. But it turned out that almost none of the proposed 'Second Generation' approaches are competitive with wavelet coding. Nevertheless, there are instances that show deficiencies in the wavelet algorithms. The method suggested in this dissertation incorporates the geometric properties of convex sets in the construction of adaptive triangulations of an image. The proposed algorithm provides a nearly optimal order of approximation for cartoon images of convex sets, and is based on the idea that the location of the centroid of certain types of domains provides a sufficient amount of information to construct a 'good' approximation of the boundaries of those domains. Along with the theoretical analysis of the algorithm, a Matlab code has been developed and implemented on some simple cartoon images.
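The centroid-driven refinement idea can be illustrated with a one-step sketch: split a triangle into three children at its centroid, the basic move an adaptive triangulation can repeat wherever the local approximation error is large. The helper names are hypothetical, and the dissertation's actual algorithm works with centroids of image domains rather than this simple geometric subdivision.

```python
def triangle_centroid(a, b, c):
    """Centroid of a triangle given as three (x, y) vertices."""
    return ((a[0] + b[0] + c[0]) / 3, (a[1] + b[1] + c[1]) / 3)

def refine(triangle):
    """Split one triangle into three at its centroid.

    An adaptive scheme would apply this only where the piecewise
    polynomial approximation error over the triangle is too large.
    """
    a, b, c = triangle
    g = triangle_centroid(a, b, c)
    return [(a, b, g), (b, c, g), (c, a, g)]
```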
38

Contour encoded compression and transmission /

Nelson, Christopher, January 2006 (has links) (PDF)
Thesis (M.S.)--Brigham Young University. Dept. of Computer Science, 2006. / Includes bibliographical references (p. 141-145).
39

Image compression using a double differential pulse code modulation technique (DPCM/DPCM)

Ma, Kuang-Hua. January 1996 (has links)
Thesis (M.S.)--Ohio University, June, 1996. / Title from PDF t.p.
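A double-differential (DPCM/DPCM) scheme, as named in the title, can be sketched as two passes of first-order DPCM: the second pass encodes the differences of the differences. This is a simplified one-dimensional illustration with hypothetical helper names; the thesis's actual predictor and quantiser may differ.

```python
def dpcm_encode(samples):
    """First-order DPCM: transmit each sample's difference from the
    previous one (the predictor is simply the last sample)."""
    out, prev = [], 0
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def dpcm_decode(diffs):
    """Invert dpcm_encode by accumulating the differences."""
    out, prev = [], 0
    for d in diffs:
        prev += d
        out.append(prev)
    return out

def double_dpcm_encode(samples):
    """DPCM applied twice: differences of differences."""
    return dpcm_encode(dpcm_encode(samples))

def double_dpcm_decode(codes):
    return dpcm_decode(dpcm_decode(codes))
```

On smooth image rows the second difference is close to zero, concentrating the signal energy into small values that entropy-code cheaply.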
40

Advances in image modeling and data communications

Xiao, Shengkuan. January 2007 (has links)
Thesis (Ph.D.)--University of Delaware, 2006. / Principal faculty advisor: Charles G. Boncelet, Dept. of Computer & Information Sciences. Includes bibliographical references.
