  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Efficient data encoder for endoscopic imaging applications

Tajallipour, Ramin 05 January 2011 (has links)
The invention of medical imaging technology revolutionized the diagnosis of disease and opened a new window into the interior of the human body. Different devices have been developed to capture images of different human organs; gastro-endoscopy, for example, is a medical imaging technique that captures images of the human gastrointestinal tract. As the technology has advanced, many shortcomings of such devices have been rectified. For example, the invention of a swallowable pill camera, the Wireless Capsule Endoscopy (WCE) device, has radically reduced the pain, procedure time, and bleeding risk for patients. Such technologies continue to develop, and the demand for instruments with better performance grows with them. In the case of WCE, requirements such as a small form factor (as small as an ordinary pill) and wireless transmission of the captured images impose tight restrictions on power consumption and area usage. This research focuses on reducing the hardware cost of the image encoder for endoscopic imaging applications. Several encoding algorithms are studied and their comparative results discussed. An efficient data encoder based on the Lempel-Ziv-Welch (LZW) algorithm is presented. The encoder is library-based: the size of the library can be modified by the user, and hence the output data rate can be controlled according to the bandwidth requirement. Simulations carried out on several endoscopic images show that a minimum compression ratio of 92.5% can be achieved with a minimum reconstruction quality of 30 dB. The hardware architecture and Field-Programmable Gate Array (FPGA) implementation results for the proposed window-based LZW are also presented. A new lossy LZW algorithm is proposed and implemented in FPGA, with promising results for this application.
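
For readers unfamiliar with the dictionary-capping idea, below is a minimal software sketch of LZW with a bounded dictionary. It only illustrates how limiting the library size bounds the code width (and hence the output data rate); it is not the thesis's hardware design, and the max_dict_size parameter is an illustrative stand-in for the user-adjustable library size.

```python
# Minimal LZW compressor with a bounded dictionary. Capping the
# dictionary size fixes the maximum code width, which is the mechanism
# that lets a "library-based" encoder trade compression for data rate.
# Illustrative software model only, not the thesis's FPGA encoder.

def lzw_compress(data: bytes, max_dict_size: int = 4096) -> list[int]:
    """Return LZW codes; each fits in ceil(log2(max_dict_size)) bits."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    phrase = b""
    codes = []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate
        else:
            codes.append(dictionary[phrase])
            if next_code < max_dict_size:   # stop growing once the cap is hit
                dictionary[candidate] = next_code
                next_code += 1
            phrase = bytes([byte])
    if phrase:
        codes.append(dictionary[phrase])
    return codes

codes = lzw_compress(b"abababababab" * 10, max_dict_size=512)
print(len(codes), "codes at 9 bits each ->", len(codes) * 9, "bits")
```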
183

Adaptive Fractal and Wavelet Image Denoising

Ghazel, Mohsen January 2004 (has links)
The need for image enhancement and restoration is encountered in many practical applications. For instance, distortion due to additive white Gaussian noise (AWGN) can be caused by poor-quality image acquisition, images observed in a noisy environment, or noise inherent in communication channels. In this thesis, image denoising is investigated. After reviewing standard image denoising methods as applied in the spatial, frequency, and wavelet domains of the noisy image, the thesis develops and experiments with new image denoising methods based on fractal and wavelet transforms. In particular, three new image denoising methods are proposed: context-based wavelet thresholding, predictive fractal image denoising, and fractal-wavelet image denoising. The proposed context-based thresholding strategy adopts localized hard and soft thresholding operators which take into consideration the content of the immediate neighborhood of a wavelet coefficient before thresholding it. The two fractal-based predictive schemes are based on a simple yet effective algorithm for estimating the fractal code of the original noise-free image from the noisy one. From this predicted code, one can then reconstruct a fractally denoised estimate of the original image. This fractal-based denoising algorithm can be applied in the pixel and the wavelet domains of the noisy image using standard fractal and fractal-wavelet schemes, respectively. Furthermore, the cycle-spinning idea was implemented in order to enhance the quality of the fractally denoised estimates. Experimental results show that the proposed image denoising methods are competitive with, and sometimes compare favorably to, the existing image denoising techniques reviewed in the thesis. This work broadens the application scope of fractal transforms, which have been used mainly for image coding and compression purposes.
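
A minimal sketch of neighborhood-adaptive soft thresholding in the spirit of the context-based strategy described above follows; the thesis's exact context rule is not reproduced. The PyWavelets library, the db4 wavelet, the window size, and the BayesShrink-style local threshold are all illustrative assumptions.

```python
# Hedged sketch: shrink each wavelet detail coefficient according to the
# energy of its local neighborhood, so edges/texture (high local energy)
# are thresholded less than flat, noise-dominated regions. A stand-in
# for the thesis's context-based rule, not a reproduction of it.
import numpy as np
import pywt                                  # PyWavelets
from scipy.ndimage import uniform_filter

def context_soft_threshold(noisy: np.ndarray, sigma: float,
                           wavelet: str = "db4", level: int = 3,
                           window: int = 3) -> np.ndarray:
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    out = [coeffs[0]]                        # keep approximation band untouched
    for detail_bands in coeffs[1:]:
        shrunk = []
        for band in detail_bands:
            local_energy = uniform_filter(band ** 2, size=window)
            # Larger local signal energy -> smaller effective threshold.
            signal_var = np.maximum(local_energy - sigma ** 2, 1e-12)
            t = sigma ** 2 / np.sqrt(signal_var)   # BayesShrink-style rule
            shrunk.append(np.sign(band) * np.maximum(np.abs(band) - t, 0.0))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```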
184

Progressive Lossless Image Compression Using Image Decomposition and Context Quantization

Zha, Hui 23 January 2007 (has links)
Lossless image compression has many applications, for example in medical imaging, space photography, and the film industry. In this thesis, we propose an efficient lossless image compression scheme for both binary images and gray-scale images. The scheme first decomposes images into a set of progressively refined binary sequences and then uses a context-based, adaptive arithmetic coding algorithm to encode these sequences. In order to deal with the context dilution problem in arithmetic coding, we propose a Lloyd-like iterative algorithm to quantize contexts. Fixing the set of input contexts and the number of quantized contexts, our context quantization algorithm iteratively finds the optimal context mapping in the sense of minimizing the compression rate. Experimental results show that by combining image decomposition and context quantization, our scheme achieves lossless compression performance competitive with the JBIG algorithm for binary images and the CALIC algorithm for gray-scale images. In contrast to CALIC, our scheme additionally allows progressive transmission of gray-scale images, which is very appealing in applications such as web browsing.
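
A Lloyd-like context quantizer can be sketched as an assign/re-estimate loop over empirical code lengths, as below. This illustrates the stated objective (merging raw contexts to minimize the compression rate for a fixed number of quantized contexts) rather than the thesis's exact algorithm; the counts-matrix input and Laplace smoothing are assumptions.

```python
# Hedged sketch of Lloyd-like context quantization. Analogy to Lloyd's
# algorithm: the "centroid" step estimates one symbol distribution per
# cluster; the "assignment" step moves each raw context to the cluster
# whose distribution gives it the shortest empirical code length.
import numpy as np

def quantize_contexts(counts: np.ndarray, n_clusters: int,
                      n_iter: int = 50, seed: int = 0) -> np.ndarray:
    """counts[c, s] = times symbol s occurred in raw context c.
    Returns a map from each raw context to a quantized context."""
    rng = np.random.default_rng(seed)
    n_ctx, n_sym = counts.shape
    assign = rng.integers(0, n_clusters, size=n_ctx)
    for _ in range(n_iter):
        # Per-cluster symbol distribution, Laplace-smoothed.
        dist = np.ones((n_clusters, n_sym))
        for k in range(n_clusters):
            dist[k] += counts[assign == k].sum(axis=0)
        dist /= dist.sum(axis=1, keepdims=True)
        # Code length of context c under cluster k:
        # -sum_s counts[c, s] * log2 dist[k, s].
        codelen = -counts @ np.log2(dist).T      # shape (n_ctx, n_clusters)
        new_assign = codelen.argmin(axis=1)
        if np.array_equal(new_assign, assign):
            break                                # converged
        assign = new_assign
    return assign
```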
185

Video Coding Based on the Kantorovich Distance / Video Kodning Baserat på Kantorovich Avstånd

Östman, Martin January 2004 (has links)
In this Master's thesis, a model of a video coding system is developed that uses the transportation plan taken from the calculation of the Kantorovich distance. The coder uses the transportation plan instead of the differential image and sends it through blocks of transformation, quantization, and coding. The Kantorovich distance is a relatively little-known distance metric that is used in optimization theory but is also applicable to images. It can be defined as the cheapest way to transport the mass of one image into another, where the cost is determined by the distance function chosen to measure distances between pixels. The transportation plan is a finite set of five-dimensional vectors that specify exactly how mass should be moved from transmitting pixels to receiving pixels in order to achieve the Kantorovich distance between the images; a vector in the transportation plan is called an arc. The original transportation plan was transformed into a new set of four-dimensional vectors called the modified difference plan, which replaces each arc's transmitting and receiving pixels with the offset from the previous arc's transmitting pixel and the relative displacement between the receiving and transmitting pixels. Arcs whose receiving pixel coincides with their transmitting pixel are redundant and were removed. The coder processed an eleven-frame sequence of 128x128-pixel frames in eight to ten hours.
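
At toy sizes, the Kantorovich distance and its transportation plan can be computed directly as the classical transportation linear program, sketched below; production coders would use specialized network-flow solvers. The Euclidean ground distance and the scipy-based dense LP formulation are illustrative assumptions, not the thesis's implementation.

```python
# Hedged sketch: Kantorovich distance between two tiny grayscale frames
# as a transportation LP. The solution x[i, j] is the transportation
# plan: how much mass moves from pixel i of frame A to pixel j of
# frame B. A dense LP is only practical at toy sizes like this.
import numpy as np
from scipy.optimize import linprog

def kantorovich(a: np.ndarray, b: np.ndarray):
    h, w = a.shape
    pa = a.ravel() / a.sum()                  # normalize to equal total mass
    pb = b.ravel() / b.sum()
    ys, xs = np.divmod(np.arange(h * w), w)
    # cost[i, j] = Euclidean distance between positions of pixels i and j.
    cost = np.hypot(ys[:, None] - ys[None, :], xs[:, None] - xs[None, :])
    n = h * w
    # Equalities: row sums of the plan equal pa, column sums equal pb.
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1        # sum_j x[i, j] = pa[i]
        A_eq[n + i, i::n] = 1                 # sum_i x[i, j] = pb[i]
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([pa, pb]),
                  bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(n, n)      # distance, transportation plan

rng = np.random.default_rng(1)
d, plan = kantorovich(rng.random((4, 4)), rng.random((4, 4)))
print(f"Kantorovich distance: {d:.4f}")
```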
188

Video analysis and abstraction in the compressed domain

Lee, Sangkeun 01 December 2003 (has links)
No description available.
189

Efficient Access Methods on the Hilbert Curve

Wu, Chen-Chang 18 June 2012 (has links)
The design of multi-dimensional access methods is difficult compared to the one-dimensional case because there is no total ordering that preserves spatial locality. One approach is to look for a total order that preserves spatial proximity at least to some extent. A space-filling curve is a continuous path that passes through every point in a space exactly once, giving a one-to-one correspondence between the coordinates of the points and their 1-D sequence numbers on the curve. The Hilbert curve is a famous space-filling curve, since it has been shown to have strong locality-preserving properties; that is, it is the best space-filling curve in terms of minimizing the number of clusters. Hence, it has been used extensively to maintain the spatial locality of multidimensional data in a wide variety of applications. A window query is an important query operation in spatial (image) databases: given a Hilbert curve, a window query should report the Hilbert orders covered by the window without decoding every point inside the window individually. Chung et al. have proposed an algorithm for decomposing a window into the corresponding Hilbert orders. However, the Hilbert curve requires the region to be of size 2^k x 2^k, where k∈N. The intuitive method, as in Chung et al.'s algorithm, is to use Hilbert curves directly in the decomposed areas and then connect them; it must additionally generate a sequence of scanned quadrants before encoding or decoding the Hilbert order of a pixel, and scan this sequence once for each pixel encoded or decoded. In this dissertation, on the design of methods for window queries on a Hilbert curve, we propose an efficient algorithm, named Quad-Splitting, for decomposing a window into the corresponding Hilbert orders without the individual sorting and merging steps needed in Chung et al.'s algorithm. Our experimental results show that the Quad-Splitting algorithm outperforms Chung et al.'s algorithm. On the design of methods for generating the Hilbert curve of an arbitrary-sized image, we propose an approximately-even-partition approach to generate a pseudo Hilbert curve, and our experimental results show that the proposed pseudo Hilbert curve preserves a strong locality property similar to that of the Hilbert curve. On the design of methods for coding the Hilbert curve of an arbitrary-sized image, we propose encoding and decoding algorithms, and our experimental results show that they outperform Chung et al.'s algorithms.
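
For reference, the textbook bit-manipulation mappings between (x, y) coordinates and Hilbert orders on a 2^k x 2^k grid are sketched below. These implement the classical curve the abstract refers to, not the dissertation's Quad-Splitting or pseudo-Hilbert algorithms.

```python
# Hedged sketch of the standard Hilbert order <-> coordinate mappings
# for an n x n grid with n a power of two (the restriction the abstract
# mentions). Classical algorithm; not the dissertation's contribution.

def xy_to_hilbert(n: int, x: int, y: int) -> int:
    """Map (x, y) on an n x n grid (n a power of two) to its Hilbert order."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate so the next level sees a consistently oriented sub-square.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_to_xy(n: int, d: int) -> tuple[int, int]:
    """Inverse mapping: Hilbert order d back to (x, y)."""
    x = y = 0
    s = 1
    while s < n:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# Round-trip check on an 8 x 8 grid.
assert all(hilbert_to_xy(8, xy_to_hilbert(8, x, y)) == (x, y)
           for x in range(8) for y in range(8))
```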
190

Topics in genomic image processing

Hua, Jianping 12 April 2006 (has links)
Image processing methodologies now play a very significant role in flourishing biotechnology research. This work studies, develops, and implements several image processing techniques for M-FISH and cDNA microarray images. In particular, we focus on three important areas: M-FISH image compression, microarray image processing, and expression-based classification. Two schemes are introduced: embedded M-FISH image coding (EMIC) for M-FISH image compression, and Microarray BASICA (Background Adjustment, Segmentation, Image Compression and Analysis) for microarray image processing. In the expression-based classification area, we investigate the relationship between the optimal number of features and the sample size, either analytically or through simulation, for various classifiers.
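
The feature-count-versus-sample-size question can be illustrated with a small simulation of the well-known peaking phenomenon: for a fixed training-set size, test error typically falls and then rises as more features are added, so an optimal feature count exists. The Gaussian data model and LDA classifier below are illustrative assumptions, not necessarily the thesis's experimental setup.

```python
# Hedged sketch: peaking phenomenon for a fixed sample size. Features
# are ordered by diminishing usefulness (mean shift 0.4/sqrt(k)), so
# adding features eventually hurts a classifier trained on few samples.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_train, n_test, max_features = 30, 2000, 25
shift = 0.4 / np.sqrt(np.arange(1, max_features + 1))

for n_features in (2, 5, 10, 15, 20, 25):
    err = []
    for _ in range(50):                       # average over random training sets
        Xtr = rng.normal(size=(n_train, n_features))
        ytr = rng.integers(0, 2, n_train)
        Xtr += ytr[:, None] * shift[:n_features]
        Xte = rng.normal(size=(n_test, n_features))
        yte = rng.integers(0, 2, n_test)
        Xte += yte[:, None] * shift[:n_features]
        clf = LinearDiscriminantAnalysis().fit(Xtr, ytr)
        err.append(1 - clf.score(Xte, yte))
    print(f"{n_features:2d} features: test error {np.mean(err):.3f}")
```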
