41

Multiresolutional techniques for digital image filtering and watermarking

Fraser, Stewart Ian January 2006 (has links)
This thesis examines the use of multiresolutional techniques in two areas of digital image processing: denoising (speckle reduction) and watermarking. A speckle reduction algorithm operating in the wavelet à trous domain is proposed. This novel algorithm iteratively reduces the difference between the estimated noise standard deviation (in an image) and the removed noise standard deviation. A method for ascertaining the overall performance of a filter, based upon noise removal and edge preservation, is presented. Comparisons between the novel denoising algorithm and existing denoising filters are carried out using test images and medical ultrasound images. Results show that the novel denoising algorithm reduces speckle drastically whilst maintaining sharp edges. Two distinct areas of digital image watermarking are addressed in this thesis: (1) the presentation of a novel watermarking system for copyright protection and (2) a fair comparison of the effects of incorporating Error Correcting Codes (ECC) into various watermarking systems. The newly proposed watermarking system is blind, quantization based and operates in the wavelet domain. Tests carried out on this novel system show it to be highly robust and reliable. An extensive and fair study of the effects of incorporating ECCs (Bose-Chaudhuri-Hocquenghem (BCH) and repetition codes) into various watermarking systems is carried out. Spatial, Discrete Cosine Transform (DCT) and wavelet based systems are tested. It is shown that it is not always beneficial to add ECCs into a watermarking system.
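As a rough illustration of the kind of wavelet-domain speckle filtering described in this abstract, the sketch below applies a generic soft-thresholding shrinkage to the detail sub-bands of a log-transformed image. It is not the thesis's iterative algorithm; the choice of wavelet, the threshold rule and the use of the PyWavelets library are illustrative assumptions.

import numpy as np
import pywt

def wavelet_despeckle(image, wavelet="db4", level=3):
    # Log transform makes multiplicative speckle approximately additive.
    log_img = np.log1p(image.astype(float))
    coeffs = pywt.wavedec2(log_img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal sub-band (median absolute deviation rule).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(log_img.size))
    # Soft-threshold every detail sub-band, keep the approximation untouched.
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return np.expm1(pywt.waverec2(shrunk, wavelet))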
42

Uniform colour spaces for image applications

Zhu, Shao Ying January 2002 (has links)
No description available.
43

The scientific exploitation of SWIFT

Fogarty, Elizabeth Mary Rose January 2011 (has links)
The SWIFT integral field spectrograph is an adaptive-optics-assisted I- and z-band IFS designed and built by a dedicated instrument team at the University of Oxford. In this thesis I describe my contribution to the construction, commissioning and characterisation of SWIFT as part of the SWIFT team. I also describe two observational projects subsequently undertaken with SWIFT. The first is a kinematic study of the ring galaxy Arp 147. This is a typical ring galaxy and companion system, thought to have been created during a collision between the companion galaxy and a normal disk which was subsequently disrupted to form the ring shape seen today. SWIFT was used to obtain spatially resolved kinematics over the ring galaxy, thereby probing the conditions under which the collision occurred. Integrated spectra are also used to establish some physical properties associated with the system, leading to a robust understanding of the timescales involved in the interaction. Next, SWIFT was used to observe a small sample of redshift desert (z ~ 1) galaxies. These objects were chosen from the DEEP2 sample in order to probe a range of different dominant kinematics, that is, galaxies which are rotation-dominated, galaxies which are velocity dispersion-dominated and some objects displaying a mixture of the two. Here I analyse two objects from this sample. The Eagle galaxy is a turbulent and highly star-forming galaxy at a redshift of z = 0.7686; it exhibits a morphology and uneven kinematics indicating a possible major merger. The Diamond Ring galaxy is a galaxy at a redshift of z = 1.1592; it also displays disrupted kinematics, with a knotty morphology, and has a typical star formation rate for its mass.
44

Development of CCD and EM-CCD technology for high resolution X-ray spectrometry

Tutt, James Henry January 2012 (has links)
This thesis discusses the development of Charge-Couple Device (CCD) and Electron Multiplying CCD (EM-CCD) technology for high resolution X-ray spectroscopy. Of particular interest is the spectral resolution performance of the devices alongside the optimisation of the quantum efficiency through the use of back-illuminated CCDs, thin filter technology and improved passivation techniques. The early chapters (1 through 5) focus on the background and theory that is required to understand the purpose of the work in this thesis and how semiconductors can be used as the detector of high resolution X-ray spectrometers. Chapter 6 focuses on the soft X-ray performance of three different types of conventional CCD using the PTB beamline at BESSY 11. The results show that there is degradation in spectral resolution in all three devices below 500 eV due to incomplete charge collection and X-ray peak asymmetry. The Hamamatsu device is shown to degrade faster than the CCD30-11 variants and this is attributed to the thickness of the active silicon (>50 urn] in the device and also its thicker dead-layer (~75 nm) which is found by evaluating the device's soft X-ray QE). The charge loss at the back-surface generation/recombination centres is also investigated and is found to be higher in the Hamamatsu device, again due to its thicker dead-layer. Chapter 7 is an investigation of the Modified Fano Factor which aims to describe the spectral resolution degradation that is expected when an EM-CCD is used to directly detect soft X-rays. The factor is predicted analytically, modelled and then verified experimentally allowing EM-CCD performance over the soft X-ray range to be predicted with high levels of confidence. Chapter 8 is a detailed look into work completed for the phase 0 study of the off plane X-ray grating spectrometer on the International X-ray Observatory. The work includes a detailed contamination study, effective area analysis, the pointing knowledge requirement and the use of filters to minimise optical background.
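For orientation, the Fano-limited energy resolution that sets the baseline for the spectral-resolution discussion above can be estimated with the standard silicon CCD formula. The sketch below is a back-of-envelope calculation, not the thesis's Modified Fano Factor analysis; the silicon pair-creation energy, Fano factor and read-noise value are assumed typical numbers.

import math

def fwhm_ev(energy_ev, readout_noise_e=3.0, w=3.65, fano=0.115):
    # Mean number of signal electrons generated in silicon by a photon of energy_ev.
    n = energy_ev / w
    # Total variance in electrons: Fano-limited generation noise plus readout noise.
    var_electrons = fano * n + readout_noise_e ** 2
    # Convert back to energy and to FWHM (2.355 sigma).
    return 2.355 * w * math.sqrt(var_electrons)

print(fwhm_ev(5898.0))   # Mn K-alpha: roughly 120 eV FWHM assuming ~3 electrons read noise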
45

Ad hoc electrode arrangements for electrical tomography

Murphy, Stephen January 2008 (has links)
This thesis explores new methods of applying Electrical Impedance Tomography (EIT) using unconventional electrode arrangements and novel measurement collection strategies. It focuses on opportunities for improving the quality of 3D electrical tomography by employing an increased number of measurements, facilitated by moving electrodes. Industrial process applications of EIT are targeted to focus the research, and it is a requirement that any electrode motion complies with the process in order to maintain the non-intrusive nature of EIT.
46

Design and development of a mechanical millimetre wave imaging scanning system

Andres, Carlos Callejero January 2010 (has links)
This thesis describes the design of a very compact, real-time, passive millimetre wave imager. The most relevant scanning techniques and designs are explained. Two possible configurations have been studied, simulated and analysed with OSLO and also with Matlab. The design requires an array to provide real-time frame rates; the array's curvature has been optimized with GRASP, an antenna design program. The system has the following advantages: the imager is compact, using polarisation-rotation techniques to fold the antenna optics; the two rotating components produce a linear scan pattern with a single receiver; and the system is capable of real-time operation using an array. A novel method of mm-wave illumination has been developed and tested at 35 GHz. Several illumination experiments have been undertaken to increase the temperature of the object compared to its surrounding background and consequently increase the contrast. An opto-mechanical millimetre-wave imager has been used to facilitate these experiments. This prototype, called the "Nasa Imager", was a second unit developed at Reading University by Alfa Imaging Ltd. under a NASA grant. This system displays images at a rate of 20 seconds per image, with a spatial resolution of 7.5 mrad. The Nasa Imager has also been used to take images of different materials, potential threats or barriers at 35 and 94 GHz. The transmission and reflection properties of some of these materials have been measured at the University of Navarra. Comparing the results from the mm-wave image analysis with those from the threat characterization, it is observed that there is a high correlation between the two.
47

Shot descriptors for video temporal decomposition

Sidiropoulos, Panagiotis January 2012 (has links)
Video temporal decomposition is an essential element of a variety of video processing applications, from semantic indexing and classification to non-linear browsing, video summarization and video retrieval. The decomposition is traditionally conducted using shots as the video structural units. However, while shots are video segments that can be explicitly defined, they lack semantic meaning. On the other hand, scenes, which are generally defined as the elementary semantic video units, are expected to generate more meaningful video representations and to enhance the performance of video processing applications that employ temporal decomposition. However, before replacing shot with scene segmentation, the latter needs to reach the high performance levels of the former. This thesis aims to provide directions towards this goal, first by identifying some of the main current limitations of video scene segmentation and next by suggesting ways to overcome them. More specifically, four main limitations have been identified. Firstly, the ambiguity in the definition of what a scene is, which is an inherent domain characteristic: the general definition of the scene as the elementary semantic unit finds various interpretations depending on the video genre, the application, etc. Next, the semantic gap between what makes two shots belong to the same scene and the available scene descriptors. Indeed, scenes are formed by links between pairs of neighbouring shots that are similar in content, and this shot content similarity cannot be efficiently modelled by the low-level descriptors typically used by the community for this purpose. Additionally, the limited scalability of existing scene segmentation algorithms: as a matter of fact, it seems to be difficult to generalize and efficiently tune scene segmentation approaches not only for videos of multiple genres but also for a small number of videos from the same genre. Finally, the lack of a uni-dimensional evaluation measure that would efficiently gauge the performance of an automatic scene segmentation system. This thesis includes the development of a novel approach to evaluating video temporal decomposition algorithms, which is not only effective in evaluating scene segmentation techniques and in helping to optimize their parameters, but also satisfies a number of qualitative prerequisites that previous measures do not. Furthermore, the novel measure is proven to be a metric, a property that can be used to alleviate the effects of the scene definition ambiguity. Subsequently, a scheme that fully exploits the scene discrimination potential of shot descriptors deriving from both the visual and the audio modality is presented, followed by the introduction of a number of novel shot descriptors. These employ high-level features automatically extracted from the visual and the auditory channels, which are shown to contribute towards improved segmentation of videos into scenes. Finally, conclusions and future work complete this thesis.
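The shot-linking idea described in this abstract (neighbouring shots grouped into one scene when their content is similar) can be illustrated with a toy baseline. The cosine-similarity comparison of generic descriptor vectors and the fixed threshold below are illustrative assumptions, not the thesis's multimodal, high-level-feature approach.

import numpy as np

def group_shots_into_scenes(shot_descriptors, threshold=0.7):
    # shot_descriptors: array of shape (n_shots, dim), one feature vector per shot.
    # Returns a list of scenes, each a list of consecutive shot indices.
    scenes, current = [], [0]
    for i in range(1, len(shot_descriptors)):
        a, b = shot_descriptors[i - 1], shot_descriptors[i]
        cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if cos_sim >= threshold:
            current.append(i)          # similar content: same scene
        else:
            scenes.append(current)     # dissimilar: start a new scene
            current = [i]
    scenes.append(current)
    return scenes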
48

Hybrid predictive, wavelet and arithmetic (HPWA) still image coding system

Radi, Naeem M. M. January 2012 (has links)
The vast spread of the use of mobile computing devices and social media has prompted the need for higher compression ratios than those currently offered by JPEG and JPEG2000. Furthermore, the increase in the processing power of computing devices has made it possible to implement more sophisticated image coding algorithms or hybrid techniques that would not have been possible in the past. A novel lossy image compression technique called Hybrid Predictive Wavelet and Arithmetic (HPWA) coding is proposed. This technique can achieve higher compression ratios than JPEG and JPEG2000 without noticeable loss in image quality, and a lossless image coding technique can be derived from the proposed lossy technique. HPWA uses predictive coding as a front end to Discrete Wavelet Transform (DWT) coding: transform coding is enhanced by first performing predictive coding on the image and then passing the output to the DWT stage. An important point to note is that predictive coding removes inter-pixel redundancy, while the DWT removes coding redundancy. The proposed HPWA system consists of four stages. Stage 1 (Predictive Coding): the aim of using predictive coding in the HPWA system as a front end to DWT coding differs from the original aim of predictive image coding as a complete compression system, but the principle is the same: to obtain the smallest mean square error (or the largest signal-to-noise ratio) of the prediction error data. A neural-network-based nonlinear predictor is used to predict the pixel values in the image, since nonlinear predictors have proved more efficient in predictive image coding. Stage 2 (Quantisation): the quantiser is usually used to code the prediction error data in a compressed manner, which reduces the size of the prediction error file. A novel variable-length, truly adaptive quantiser which outperforms the popular Lloyd-Max non-uniform quantiser was developed as part of this project; the improved results come at the expense of a relatively small amount of extra data that has to be saved or transmitted with the image. Stage 3 (Discrete Wavelet Transform): a standard DWT algorithm is used to transform the prediction error data (or optionally the quantised prediction error data) into frequency coefficients. It has been noted that up to 3 levels of decomposition (compression ratio of 64:1) give a very good compression rate without significant loss of accuracy when transforming the original image data using the DWT; however, when the prediction error data is transformed, up to 5 or 6 levels of decomposition (compression ratios of 1024:1 or 4096:1) give a very good compression rate without significant loss of accuracy. Stage 4 (Arithmetic Coding): the fourth and final stage in the HPWA system is Arithmetic Coding (AC), a lossless technique. Arithmetic coding is used to code the transform coefficients of the most significant frequency sub-band, the mean value of the coefficients in each of the neglected sub-bands, and the original image values of the two rows and three columns that are not coded in the first stage, Predictive Coding. The complete HPWA system was benchmarked against JPEG2000 using 10 standard grey-level test images. JPEG2000 performed better at decomposition levels 1 and 2, i.e. for compression ratios of 4:1 and 16:1. At decomposition level 3 (compression ratio 64:1), the HPWA system performed slightly better than JPEG2000. However, JPEG2000 failed completely at decomposition level 4 (compression ratio 256:1) and beyond, while HPWA continued to perform well at decomposition levels 4 and 5, and even at level 6 for larger images. The average improvement offered by the proposed HPWA system over JPEG2000 in terms of the PSNR at decomposition levels 4 and 5 is 5.05 dB and 13.75 dB, respectively.
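The four-stage pipeline described in this abstract can be sketched roughly as follows. This is a simplified stand-in: a left-neighbour predictor and a uniform quantiser replace the thesis's neural-network predictor and adaptive quantiser, the arithmetic coding stage is omitted, and the PyWavelets library is an assumed choice for the DWT stage.

import numpy as np
import pywt

def hpwa_sketch(image, wavelet="haar", level=3, step=4):
    img = image.astype(float)
    # Stage 1 -- predictive coding: a trivial left-neighbour predictor as a placeholder.
    prediction = np.zeros_like(img)
    prediction[:, 1:] = img[:, :-1]
    residual = img - prediction
    # Stage 2 -- quantisation of the prediction error (uniform step, as a placeholder).
    quantised = np.round(residual / step) * step
    # Stage 3 -- DWT of the quantised prediction error data.
    coeffs = pywt.wavedec2(quantised, wavelet, level=level)
    # Stage 4 -- entropy coding of the retained coefficients would follow here (arithmetic coder not shown).
    return coeffs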
49

Sinogram recovery algorithms for hard-field tomography with low number of path integrals

Constantino, Eugenio Pereira Araujo January 2009 (has links)
Imaging from limited data is a common practice in many industrial tomography applications, where sensor design often assumes an irregular approach with a low number of measurements. In hard-field tomography, where access restrictions forbid continuous scanning around the subject, the measurements represent an undersampled set of data, resulting in a sparse sinogram and a reconstructed image with severe artefacts.
50

Lossless compression for on-board satellite imaging

Atek, S. January 2004 (has links)
No description available.
