
Novel Pixel-Level and Subpixel-Level Registration Algorithms for Multi-Modal Imagery Data

Elbakary, Mohamed Ibrahim January 2005 (has links)
Image registration is an important pre-processing operation that must be performed before many image exploitation and processing functions such as data fusion and super-resolution reconstruction. Given two image frames, obtained from the same sensor or from different sensors, the registration problem involves determining the transformation that most nearly maps (or aligns) one image frame onto the other. Typically, image registration requires intensive computational effort, and the developed techniques are scene dependent. Furthermore, the problems of multi-modal image registration (i.e., registering images acquired from dissimilar sensors) and sub-pixel image registration (i.e., registering two images with sub-pixel accuracy) are highly challenging, and no satisfactory solutions exist. This dissertation introduces novel techniques to solve the image registration problem at both the pixel level and the sub-pixel level. For pixel-level registration, a procedure is offered that is not scene dependent and provides the same level of accuracy when registering images acquired from different types of sensors. The new technique is based on obtaining the local frequency content of an image and using this local frequency representation to extract control points for establishing correspondence. To extract the local frequency representation of an image, a computationally efficient scheme is presented that minimizes the latency of a Gabor filter bank by exploiting certain biological considerations. The dissertation also extends the use of local frequency to solve the sub-pixel image registration problem. The new algorithm is based on using the scaled local frequency representation of the images to be registered, with computationally inexpensive scaling of the local frequency of the images prior to correlation matching. Finally, this dissertation provides a novel approach to the problem of multi-modal image registration. The principal idea behind this approach is to employ Computer Aided Design (CAD) models of man-made objects in the scene to permit extraction of regions of interest (ROIs) whose local frequency representations are computed for extraction of stable matching points. Detailed performance evaluation results from an extensive set of experiments using diverse types of images are presented to highlight the strong points of the proposed registration algorithms.
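The local-frequency idea at the core of these algorithms can be illustrated in a few lines. The sketch below (a loose illustration, not the dissertation's actual filter-bank design) estimates local frequency as the phase gradient of a complex Gabor response; the kernel size, sigma, and the `local_frequency` helper name are all assumptions made for the example.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma=2.0, size=9):
    """Complex Gabor kernel tuned to a spatial frequency (cycles/pixel)
    and orientation; sigma and size are illustrative choices."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * rot)

def local_frequency(image, freq=0.25, theta=0.0):
    """Local frequency estimate: the spatial gradient of the phase of the
    complex Gabor response, in radians per pixel along the filter axis."""
    response = convolve2d(image, gabor_kernel(freq, theta), mode="same")
    phase = np.unwrap(np.angle(response), axis=1)
    return np.gradient(phase, axis=1)
```

For a pure sinusoid of frequency f, the estimate at interior pixels comes out near 2*pi*f; peaks of this representation are the kind of structure a control-point extractor could latch onto.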

An investigation of the potential of multi-modality imaging in three dimensional thick tissue microscopy

Jones, Michael Greystock January 1997 (has links)
No description available.

Remote Sensing Region Based Image Fusion Using the Contourlet Transform

Ibrahim, Soad 27 January 2012 (has links)
Remote sensing imaging is a tool for collecting information about the Earth's surface, such as soil, vegetation and water. Recent progress in electronics, telecommunications and sensor development has resulted in the launch of many satellites over the past three decades. Different sensors in remote sensing systems capture a variety of images with differing characteristics. Image fusion has been used to integrate two or more images and produce output images with better accuracy. This research provides a new technique for image fusion using the contourlet transform in combination with the YCbCr color space. The output images preserve both the spectral and spatial characteristics of the input images and are better suited for human and machine interpretation. The technique solves some problems (i.e., the ghosting effect and blocking artifacts) which traditional image fusion techniques fail to address. The proposed technique is tested on both classical and remote sensing images, and quality metrics are used to evaluate the results. The results show a significant enhancement in the quality of the output images: more fine details are successfully captured, and the original chromaticity information is preserved. The proposed technique eliminates blocking artifacts in the output images. A new metric is also presented to measure blocking artifacts in the fused image. The results show that increasing the number of contourlet decomposition levels does not degrade the quality of the output image; in particular, the output images do not lose their chromaticity information as the number of contourlet decomposition levels increases. The proposed technique is tested on a variety of remote sensing images with large resolution ratios (i.e., 1:8, 1:16 and 1:32). The proposed technique is robust and suitable for many image applications. The detection of concealed objects is one such application, where the technique is tested for its ability to fuse images with different features. The results of the Contourlet-YCbCr fusion technique are compared with conventional fusion methods; the proposed technique is more capable of detecting hidden objects while preserving the original color components of the input image.
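Contourlet implementations are not part of standard numerical libraries, but the color-space half of the approach is easy to sketch. The fragment below (a simplified stand-in, with the contourlet detail merge reduced to a direct luma substitution) shows why fusing in YCbCr preserves chromaticity: only the Y channel is touched, so Cb and Cr pass through unchanged. The BT.601 conversion coefficients are standard; the function names are invented for the example.

```python
import numpy as np

# ITU-R BT.601 full-range RGB <-> YCbCr conversion
def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycbcr):
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)

def fuse(ms_rgb, pan):
    """Replace the luma channel with the high-detail source, leaving the
    chromaticity (Cb, Cr) of the multispectral input untouched."""
    ycbcr = rgb_to_ycbcr(ms_rgb)
    ycbcr[..., 0] = pan  # stand-in for the contourlet detail merge
    return ycbcr_to_rgb(ycbcr)
```

Because the forward and inverse matrices are exact inverses, the Cb/Cr planes of the fused result match those of the input to floating-point precision.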

Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

Zheng, Sicong 01 December 2010 (has links)
This thesis is motivated by the potential and promise of image fusion technologies in multi-sensor imaging systems and applications. With a specific focus on pixel-level image fusion, the step performed after image registration, we develop a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. In this thesis, we propose and present image fusion algorithms with low computational cost, based upon spatial mixture analysis. The segment weighted average image fusion combines several low-spatial-resolution data sources from different sensors to create a high-resolution, large-size fused image. This research includes a segment-based step built on a stepwise divide-and-combine process. In the second stage, linear interpolation optimization is used to sharpen the image resolution. These image fusion algorithms are implemented on top of the graphical user interface we developed. Multiple-sensor image fusion is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. Using quantitative measures such as mutual information, we obtain quantifiable experimental results. We also use image morphing to generate fused image sequences that simulate the results of image fusion. While deploying our pixel-level image fusion approaches, we observed several challenges with popular image fusion methods: although their high computational cost and complex processing steps provide accurate fused results, they also make such algorithms hard to deploy in systems and applications that require real-time feedback, high flexibility and low computational capability.
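A minimal pixel-level weighted-average fusion rule along these lines can be sketched as follows; the local-variance weighting is an assumption standing in for the thesis's segment-based spatial mixture analysis, and all names are illustrative.

```python
import numpy as np

def local_variance(img, k=3):
    """Per-pixel variance over a k x k neighborhood (reflect-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return windows.var(axis=(-1, -2))

def weighted_average_fusion(images, k=3, eps=1e-8):
    """Fuse co-registered images pixel by pixel: sources with more local
    detail (higher neighborhood variance) receive larger weights."""
    weights = np.stack([local_variance(im, k) + eps for im in images])
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * np.stack(images)).sum(axis=0)
```

The rule is cheap (one pass, no transforms), which is the property the thesis emphasizes for real-time deployment; fusing an image with itself returns the image unchanged, a quick sanity check.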

Extending Depth of Field via Multifocus Fusion

Hariharan, Harishwaran 01 December 2011 (has links)
In digital imaging systems, due to the nature of the optics involved, the depth of field is constricted within the field of view: parts of the scene are in focus while others are defocused. Here, a framework of versatile, data-driven, application-independent methods to extend the depth of field in digital imaging systems is presented. The principal contributions of this effort are the use of focal connectivity, the direct use of curvelets, and features extracted by Empirical Mode Decomposition, namely Intrinsic Mode Images, for multifocus fusion. The input images are decomposed into focally connected components, peripheral and medial coefficients, and intrinsic mode images, depending on the approach, and fusion is performed on the extracted focal information by relevant schemes that allow emphasis of the focused regions from each input image. The fused image unifies information from all focal planes while maintaining the verisimilitude of the scene. The final output is an image in which all focal volumes of the scene are in focus, as if acquired by a pinhole camera with an infinite depth of field. To validate the fusion performance of our method, we compare our results with those of region-based and multiscale-decomposition-based fusion techniques. Several illustrative examples, supported by in-depth objective comparisons, are shown, and various practical recommendations are made.
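A bare-bones multifocus fusion rule in the same spirit (a per-pixel focus measure followed by winner-takes-all selection) can be sketched as below; the Laplacian focus measure is a common stand-in, not the curvelet or EMD machinery the dissertation actually uses.

```python
import numpy as np

def focus_measure(img):
    """Absolute-Laplacian focus measure: large where the image is sharp,
    near zero in defocused (locally smooth) regions."""
    return np.abs(4 * img
                  - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                  - np.roll(img, 1, 1) - np.roll(img, -1, 1))

def multifocus_fuse(images):
    """Pick, per pixel, the source image that is most in focus there."""
    stack = np.stack(images)
    fm = np.stack([focus_measure(im) for im in images])
    choice = fm.argmax(axis=0)
    return np.take_along_axis(stack, choice[None], axis=0)[0]
```

Given two inputs each sharp in a complementary half of the frame, the fused result recovers the sharp content everywhere away from the transition boundary.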

Enhancing Multispectral Imagery of Ancient Documents

Griffiths, Trace A 01 May 2011 (has links)
Multispectral imaging (MSI) provides a wealth of imagery data that, together with modern signal processing techniques, facilitates the enhancement of document images. In this thesis, four topic areas are reviewed and applied to ancient documents. They are image fusion, matched filters, bleed-through removal, and shadow removal. These four areas of focus provide useful tools for papyrologists studying the digital imagery of documents. The results presented form a strong case for the utility of MSI data over the use of a single image captured at any given wavelength of light.
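Of the four tools, the matched filter is the most compact to sketch. The fragment below is the textbook background-whitened matched filter applied to an MSI cube; the regularization constant and the function signature are assumptions made for the example, not taken from the thesis.

```python
import numpy as np

def matched_filter(cube, target):
    """Classic matched filter: project each pixel's spectrum onto the
    background-whitened target signature.
    cube: (H, W, B) image stack; target: (B,) signature.
    Returns a (H, W) score map, normalized so a pure target pixel ~ 1."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    mu = X.mean(axis=0)                       # background mean spectrum
    cov = np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(B)  # regularized
    w = np.linalg.solve(cov, target - mu)     # whitened filter weights
    w /= (target - mu) @ w                    # normalize response
    return ((X - mu) @ w).reshape(H, W)
```

On synthetic data, a pixel carrying the target signature scores near 1 while background pixels cluster around 0, which is the contrast a papyrologist would threshold when hunting faint ink in multispectral captures.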

Multi-camera Human Tracking on Realtime 3D Immersive Surveillance System

Hsieh, Meng-da 23 June 2010 (has links)
Conventional surveillance systems present video to a user from more than one camera on a single display. Such a display allows the user to observe different parts of the scene, or to observe the same part of the scene from different viewpoints. Each video is usually labeled by a fixed textual annotation displayed under the video segment to identify the image. With the growing number of surveillance cameras and the expansion of the surveillance area, the conventional split-screen display approach cannot provide an intuitive correspondence between the images acquired and the areas under surveillance. Such a system has a number of inherent flaws: low correlation between split videos, difficulty in tracking new activities, low resolution of surveillance videos, and difficulty in achieving total surveillance. To remedy these defects, the "Immersive Surveillance for Total Situational Awareness" system uses computer graphics techniques to construct 3D models of buildings on 2D satellite images; users can construct the floor platform by defining the information of each floor or building and the position of each camera. This information is combined to construct a 3D surveillance scene, and the images acquired by the surveillance cameras are pasted into the constructed 3D model to provide an intuitive visual presentation. Users can also walk through the scene following a fixed-frequency, self-defined model to perform virtual surveillance. The Multi-camera Human Tracking on Realtime 3D Immersive Surveillance System builds on "Immersive Surveillance for Total Situational Awareness" in three parts. 1. Salient object detection: the system converts videos to corresponding image sequences and analyzes the videos provided by each camera. To filter out the foreground pixels, the background model of each image is calculated by a pixel-stability-based background update algorithm. 2. Nighttime image fusion: a fuzzy enhancement method enhances the dark areas of nighttime images while maintaining the saturation information; the salient object detection algorithm is then applied to extract salient objects from the dark areas. The system divides the fusion results into three parts (wall, ceiling, and floor) and pastes them as materials onto the corresponding parts of the 3D scene. 3. Multi-camera human tracking: connected component labeling filters out small areas and saves each block's information. RGB-weight percentage information in each block and a 5-state status (Enter, Leave, Match, Occlusion, Fraction) are used to draw the trajectory of each person in every camera's field of view on the 3D surveillance scene. Finally, the cameras are fused together to complete multi-camera real-time people tracking. With this system, every person can be tracked in the 3D immersive surveillance scene without having to watch each of thousands of camera views.
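The pixel-stability background update in step 1 can be sketched roughly as follows; the tolerance and patience parameters, and the class itself, are invented for illustration rather than taken from the system described above.

```python
import numpy as np

class StableBackground:
    """Sketch of a pixel-stability background model: a pixel value is
    folded into the background only after it has stayed near its current
    candidate value for `patience` consecutive frames."""
    def __init__(self, first_frame, tol=10.0, patience=5):
        self.bg = first_frame.astype(float)
        self.candidate = first_frame.astype(float)
        self.stable = np.zeros(first_frame.shape, dtype=int)
        self.tol, self.patience = tol, patience

    def update(self, frame):
        """Ingest one frame; return the boolean foreground mask."""
        frame = frame.astype(float)
        near = np.abs(frame - self.candidate) < self.tol
        self.stable = np.where(near, self.stable + 1, 0)   # stability count
        self.candidate = np.where(near, self.candidate, frame)
        commit = self.stable >= self.patience              # stable long enough
        self.bg = np.where(commit, self.candidate, self.bg)
        return np.abs(frame - self.bg) >= self.tol
```

A newly appearing object is flagged as foreground immediately, but if it stays put for `patience` frames it is absorbed into the background, the behavior that keeps parked objects from being tracked forever.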

Complementary imaging for pavement cracking measurements

Zhao, Zuyun 03 February 2015 (has links)
Cracking is a major pavement distress that jeopardizes road serviceability and traffic safety. Automated pavement distress survey (APDS) systems have been developed using digital imaging technology to replace human surveys for more timely and accurate inspections. Most APDS systems require special lighting devices to illuminate the pavement and prevent shadows of roadside objects from distorting cracks in the image. Most of these artificial lighting devices are laser based, which are either hazardous to unprotected people or require dedicated power supplies on the vehicle. This study aims to develop a new imaging system that can scan the pavement surface at highway speed and determine the severity level of pavement cracking without using any artificial lighting. The new system consists of dual line-scan cameras installed side by side to scan the same pavement area as the vehicle moves. The cameras are controlled with different exposure settings so that both sunlit and shadowed areas are visible in two separate images. The paired images contain complementary details useful for reconstructing an image in which the shadows are eliminated. This work presents (1) the design of the dual line-scan camera system for a high-speed pavement imaging system that does not require artificial lighting, (2) a new calibration method for line-scan cameras to rectify and register paired images, which needs no mechanical assistance for dynamic scanning, (3) a customized image-fusion algorithm that merges the multi-exposure images into one shadow-free image for crack detection, and (4) the results of field tests on a selected road over a long period.
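The multi-exposure merge in contribution (3) can be sketched with a simple well-exposedness weighting; the triangular weight and the clipping thresholds below are assumptions for the example, not the paper's actual fusion rule.

```python
import numpy as np

def fuse_exposures(short_exp, long_exp, low=0.05, high=0.95):
    """Blend two registered exposures of the same pavement strip (values
    in [0, 1]): each pixel is drawn mostly from the exposure where it sits
    well inside the sensor's dynamic range."""
    def well_exposed(img):
        # Weight peaks mid-range and falls toward zero near clipping
        # at either end (underexposed shadow or saturated sunlit pixel).
        return np.clip(np.minimum(img - low, high - img), 1e-6, None)
    w_s, w_l = well_exposed(short_exp), well_exposed(long_exp)
    return (w_s * short_exp + w_l * long_exp) / (w_s + w_l)
```

A shadowed pixel that is crushed to near-black in the short exposure is taken almost entirely from the long exposure, and vice versa for saturated sunlit pixels, which is exactly the complementarity the dual-camera design exploits.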

Probabilistic Methods for Discrete Labeling Problems in Digital Image Processing and Analysis

Shen, Rui Unknown Date
No description available.

Spectral edge image fusion: theory and applications

Connah, David, Drew, M.S., Finlayson, G. January 2014 (has links)
This paper describes a novel approach to the fusion of multidimensional images for colour displays. The goal of the method is to generate an output image whose gradient matches that of the input as closely as possible. It achieves this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is subsequently reintegrated to generate an output. Constraints on the output colours are provided by an initial RGB rendering to produce ‘naturalistic’ colours: we provide a theorem for projecting higher-D contrast onto the initial colour gradients such that they remain close to the original gradients whilst maintaining exact high-D contrast. The solution to this constrained optimisation is closed-form, allowing for a very simple and hence fast and efficient algorithm. Our approach is generic in that it can map any N-D image data to any M-D output, and can be used in a variety of applications using the same basic algorithm. In this paper we focus on the problem of mapping N-D inputs to 3-D colour outputs. We present results in three applications: hyperspectral remote sensing, fusion of colour and near-infrared images, and colour visualisation of MRI Diffusion-Tensor imaging.
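The quantity the method controls, the per-pixel structure tensor of the image gradient, is easy to compute directly. The sketch below builds the 2x2 tensor for an N-channel image and measures the tensor mismatch between a high-dimensional input and a low-dimensional rendering, the gap the spectral-edge mapping drives to zero; the function names are illustrative, not from the paper.

```python
import numpy as np

def structure_tensor(img):
    """Per-pixel 2x2 structure tensor J^T J of a multichannel image.
    img: (H, W, C); returns (H, W, 2, 2)."""
    gy, gx = np.gradient(img, axis=(0, 1))   # per-channel gradients
    J = np.stack([gx, gy], axis=-1)          # Jacobian, (H, W, C, 2)
    return np.einsum('hwci,hwcj->hwij', J, J)

def contrast_gap(high_d, low_d):
    """Frobenius norm of the per-pixel tensor mismatch between an N-D
    input and an M-D rendering: the residual the method minimizes."""
    return np.linalg.norm(structure_tensor(high_d) - structure_tensor(low_d),
                          axis=(-2, -1))
```

As a sanity check, duplicating the channels of an RGB image doubles its structure tensor, so a rendering scaled by sqrt(2) reproduces that high-dimensional contrast exactly and the gap vanishes.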
