11. Automatic Detection of Elongated Objects in X-Ray Images of Luggage / Liu, Wenye III, 20 October 1997
This thesis presents part of the research at Virginia Tech on developing a prototype automatic luggage scanner for explosive detection; it deals with the automatic detection of elongated objects (detonators) in x-ray images using matched filtering, the Hough transform, and information fusion techniques. An algorithm was developed for detonator detection in x-ray images, and software implementing it was written for both UNIX and PC platforms. A variety of template matching techniques were evaluated, and the filtering parameters (template size, template model, thresholding value, etc.) were optimized. A variation of matched filtering was found to be reasonably effective, while a Gabor-filtering method was found not to be suitable for this problem. The software, developed for both single and multiple orientations, was tested on x-ray images generated on AS&E and Fiscan inspection systems and was found to work well for a variety of images. The effects of object overlap, luggage position on the conveyor, and detonator orientation were also investigated using the single-orientation algorithm. The effectiveness of the software depended on the extent of overlap as well as on the objects with which the detonator overlapped. The software worked well regardless of the position of the luggage bag on the conveyor and tolerated a moderate amount of orientation change. / Master of Science
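The abstract describes the approach only at a high level; as an illustrative Python sketch of the matched-filtering step it mentions (rotated elongated templates, normalized cross-correlation, thresholding), and not the thesis software itself, one might write something like the following. The template shape, angle set, and threshold are assumed values.

import numpy as np
import cv2

def bar_template(size=40, width=6):
    """Bright vertical bar on a dark background (crude elongated-object model)."""
    t = np.zeros((size, size), dtype=np.float32)
    c = size // 2
    t[:, c - width // 2 : c + width // 2] = 1.0
    return t

def detect_elongated(image, angles=range(0, 180, 15), threshold=0.6):
    """Correlate the image with rotated bar templates and keep strong responses.
    Returns a list of (score, angle, (row, col)) peaks above the threshold."""
    image = image.astype(np.float32)
    base = bar_template()
    centre = (base.shape[1] / 2.0, base.shape[0] / 2.0)
    detections = []
    for angle in angles:
        rot_mat = cv2.getRotationMatrix2D(centre, float(angle), 1.0)
        template = cv2.warpAffine(base, rot_mat, base.shape[::-1])
        response = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, max_loc = cv2.minMaxLoc(response)
        if score >= threshold:
            detections.append((score, angle, (max_loc[1], max_loc[0])))
    return detections

if __name__ == "__main__":
    # Synthetic luggage-like image: weak background noise plus one bright vertical bar.
    img = 0.2 * np.random.rand(256, 256).astype(np.float32)
    img[100:140, 118:126] = 1.0
    print(detect_elongated(img))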
12. Spectral edge image fusion: theory and applications / Connah, David; Drew, M.S.; Finlayson, G., January 2014
This paper describes a novel approach to the fusion of multidimensional images for colour displays. The goal of the method is to generate an output image whose gradient matches that of the input as closely as possible. It achieves this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is subsequently reintegrated to generate an output. Constraints on the output colours are provided by an initial RGB rendering to produce ‘naturalistic’ colours: we provide a theorem for projecting higher-D contrast onto the initial colour gradients such that they remain close to the original gradients whilst maintaining exact high-D contrast. The solution to this constrained optimisation is closed-form, allowing for a very simple and hence fast and efficient algorithm. Our approach is generic in that it can map any N-D image data to any M-D output, and can be used in a variety of applications using the same basic algorithm. In this paper we focus on the problem of mapping N-D inputs to 3-D colour outputs. We present results in three applications: hyperspectral remote sensing, fusion of colour and near-infrared images, and colour visualisation of MRI Diffusion-Tensor imaging.
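As a hedged illustration of the central quantity in this method (a sketch under our own assumptions, not the authors' implementation), the Python snippet below computes the per-pixel 2x2 structure tensor of an N-channel image, i.e. the sum over channels of the outer products of the per-channel gradients, whose contrast the spectral-edge method projects onto a 3-channel output.

import numpy as np

def structure_tensor(image):
    """Per-pixel 2x2 structure tensor of an H x W x N image.
    J = sum_k grad(I_k) grad(I_k)^T, returned as (Jxx, Jxy, Jyy) maps."""
    gy, gx = np.gradient(image.astype(float), axis=(0, 1))
    jxx = np.sum(gx * gx, axis=-1)
    jxy = np.sum(gx * gy, axis=-1)
    jyy = np.sum(gy * gy, axis=-1)
    return jxx, jxy, jyy

def contrast_map(image):
    """Scalar contrast: largest eigenvalue of the structure tensor at each pixel."""
    jxx, jxy, jyy = structure_tensor(image)
    trace, det = jxx + jyy, jxx * jyy - jxy ** 2
    return 0.5 * (trace + np.sqrt(np.maximum(trace ** 2 - 4 * det, 0.0)))

# Example: compare the contrast of a 31-band hyperspectral cube with a naive RGB rendering.
hyper = np.random.rand(64, 64, 31)
rgb = hyper[..., [25, 15, 5]]   # crude band selection standing in for the initial rendering
print(contrast_map(hyper).mean(), contrast_map(rgb).mean())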
13. EXTREME LOW-LIGHT IMAGING OF DYNAMIC HDR SCENES USING DEEP LEARNING METHODS / Yiheng Chi, 02 August 2024
Imaging in low light is difficult because few photons arrive at the sensor in a given time interval. Increasing the exposure time is not always an option, as images will be blurry if the scene is dynamic. If scenes or objects are moving, one can capture multiple frames with short exposure times and fuse them using carefully designed algorithms; however, aligning the pixels in adjacent frames is challenging due to the high photon shot noise and sensor read noise at low light. If the dynamic range of the scene is high, one further needs to blend multiple exposures from the frames. This blending requires removing spatially varying noise under various lighting conditions, while today's high dynamic range (HDR) fusion algorithms usually assume well-illuminated scenes. The low-light HDR imaging problem therefore remains unsolved.

To address these dynamic low-light imaging problems, the research in this dissertation explores both conventional CMOS image sensors and a new type of image sensor, the quanta image sensor (QIS), develops models of the imaging conditions of interest, and proposes new image reconstruction algorithms based on deep neural networks together with new training protocols to assist the learning. The research aims to reconstruct dynamic HDR scenes at a light level of 1 photon per pixel (ppp) or less than 1 lux illuminance.
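Since the text above is an abstract, the following is only an assumed illustration of the imaging regime it refers to: simulating a short-exposure raw frame at about 1 photon per pixel with Poisson shot noise and Gaussian read noise, the kind of data a reconstruction network would be trained on. All parameter values are hypothetical.

import numpy as np

def simulate_low_light(clean, ppp=1.0, read_noise=0.25, rng=None):
    """Simulate a short-exposure raw frame at a given photon level.
    clean      : H x W scene radiance in [0, 1]
    ppp        : mean photons per pixel at the scene's average intensity
    read_noise : std of Gaussian sensor read noise, in electrons"""
    rng = np.random.default_rng() if rng is None else rng
    flux = clean * ppp / max(clean.mean(), 1e-8)               # scale so the mean is `ppp`
    photons = rng.poisson(flux).astype(float)                  # photon shot noise
    return photons + rng.normal(0.0, read_noise, clean.shape)  # add read noise

# A burst of K noisy frames of the same (static) scene; a learned method would
# align and fuse such frames, and blend exposures for HDR content.
clean = np.clip(np.random.rand(128, 128), 0, 1)
burst = np.stack([simulate_low_light(clean, ppp=1.0) for _ in range(8)])
print(burst.shape, burst.mean())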
14. Fusion of images from dissimilar sensor systems / Chow, Khin Choong, 12 1900
Approved for public release; distribution is unlimited. / Different sensors exploit different regions of the electromagnetic spectrum; therefore, a multi-sensor image fusion system can take full advantage of the complementary capabilities of the individual sensors in the suite to produce information that cannot be obtained by viewing the images separately. In this thesis, a framework for the multiresolution fusion of night vision device and thermal infrared imagery is presented. It encompasses a wavelet-based approach that supports both pixel-level and region-based fusion, and aims to maximize scene content by incorporating spectral information from both source images. In pixel-level fusion, source images are decomposed into different scales, and salient directional features are extracted and selectively fused together by comparing the corresponding wavelet coefficients. To increase the degree of subject relevance in the fusion process, a region-based approach is proposed that uses a multiresolution segmentation algorithm to partition the image domain at different scales. The regions' characteristics are then determined and used to guide the fusion process. The experimental results obtained demonstrate the feasibility of the approach. Potential applications of this development include improvements in night piloting (navigation and target discrimination), law enforcement, etc. / Civilian, Republic of Singapore
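One common instance of the pixel-level rule described above, selecting per wavelet coefficient the source with the larger magnitude and averaging the approximation bands, is sketched below in Python using PyWavelets; it is a generic illustration rather than the thesis implementation, and the wavelet and decomposition depth are assumed.

import numpy as np
import pywt

def fuse_pixel_level(visible, thermal, wavelet="db2", levels=3):
    """Fuse two registered grayscale images with a max-absolute-coefficient rule.
    Approximation bands are averaged; detail bands keep, per coefficient,
    whichever source has the larger magnitude (the more salient edge)."""
    ca = pywt.wavedec2(visible.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(thermal.astype(float), wavelet, level=levels)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = [(ca[0] + cb[0]) / 2.0]                      # average the approximations
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append((pick(ha, hb), pick(va, vb), pick(da, db)))
    return pywt.waverec2(fused, wavelet)

# Toy example with two random stand-ins for registered night-vision and thermal frames.
night_vision = np.random.rand(128, 128)
thermal_ir = np.random.rand(128, 128)
print(fuse_pixel_level(night_vision, thermal_ir).shape)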
15. Vyhodnocování nádorů pomocí analýz DCE-MRI snímků (Tumor assessment using DCE-MRI image analysis) / Šilhán, Jiří, January 2012
This thesis deals with processing of data obtained by DCE-MRI, which uses magnetic resonance to track the propagation of contrast agents in the bloodstream. The patient is given a contrast agent and then a series of images of the target area is taken. The output is a set of image data and perfusion maps. The work employs a segmentation method based on graph cuts to interactively locate the tumor and evaluates it according to its shape properties. Study of whole data sets is simplified by image fusion methods.
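As an assumed, minimal sketch of interactive graph-cut segmentation (built on the PyMaxflow library rather than the thesis code), the example below separates a bright, contrast-enhancing region from darker tissue using intensity-based terminal weights and a constant smoothness term; in the thesis the foreground/background models would come from user interaction.

import numpy as np
import maxflow  # PyMaxflow package (pip install PyMaxflow) -- an assumed library choice

def graph_cut_segment(img, fg_mean, bg_mean, smoothness=2.0):
    """Binary graph-cut segmentation of a grayscale image in [0, 1].
    Terminal capacities are squared distances to rough foreground/background
    intensities; grid edges add a constant smoothness penalty between neighbours."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(img.shape)
    g.add_grid_edges(nodes, smoothness)            # 4-connected smoothness term
    # A pixel close to fg_mean is cheap to place in the sink segment,
    # which get_grid_segments() reports as True (our foreground).
    g.add_grid_tedges(nodes, (img - fg_mean) ** 2, (img - bg_mean) ** 2)
    g.maxflow()
    return g.get_grid_segments(nodes)

# Toy DCE-MRI-like slice: a bright region of strong contrast uptake on darker tissue.
img = 0.2 * np.random.rand(96, 96)
img[30:60, 30:60] += 0.7
mask = graph_cut_segment(img, fg_mean=0.8, bg_mean=0.1)
print(int(mask.sum()), "pixels labelled as tumour candidate")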
16. Flash Photography Enhancement via Intrinsic Relighting / Eisemann, Elmar; Durand, Frédo, 01 1900
We enhance photographs shot in dark environments by combining a picture taken with the available light and one taken with the flash. We preserve the ambiance of the original lighting and insert the sharpness from the flash image. We use the bilateral filter to decompose the images into detail and large scale. We reconstruct the image using the large scale of the available lighting and the detail of the flash. We detect and correct flash shadows. This combines the advantages of available illumination and flash photography. / Singapore-MIT Alliance (SMA)
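Reading the description above as a recipe, a rough re-implementation of the large-scale/detail split (leaving out the shadow detection and correction step) could look like the following Python sketch; the bilateral-filter parameters and the ratio-based detail layer are assumptions rather than the authors' exact formulation.

import numpy as np
import cv2

def flash_no_flash(ambient, flash, d=9, sigma_color=0.1, sigma_space=15, eps=0.02):
    """Combine an ambient (no-flash) photo with a flash photo of the same scene.
    Large-scale layer: bilateral-filtered ambient image (keeps the ambience).
    Detail layer: flash image divided by its own bilateral-filtered version.
    Inputs are registered float32 images in [0, 1]."""
    ambient_base = cv2.bilateralFilter(ambient, d, sigma_color, sigma_space)
    flash_base = cv2.bilateralFilter(flash, d, sigma_color, sigma_space)
    flash_detail = (flash + eps) / (flash_base + eps)   # multiplicative detail layer
    return np.clip(ambient_base * flash_detail, 0.0, 1.0)

ambient = (0.3 * np.random.rand(240, 320, 3)).astype(np.float32)   # dark, noisy stand-in
flash = np.clip(ambient * 3 + 0.1, 0, 1).astype(np.float32)        # brighter, sharper stand-in
print(flash_no_flash(ambient, flash).shape)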
17. Segmentation of RADARSAT-2 Dual-Polarization Sea Ice Imagery / Yu, Peter, January 2009
The mapping of sea ice is an important task for understanding global climate and for safe shipping. Currently, sea ice maps are created by human analysts with the help of remote sensing imagery, including synthetic aperture radar (SAR) imagery. While the maps are generally correct, they can be somewhat subjective and do not have pixel-level resolution due to the time consuming nature of manual segmentation. Therefore, automated sea ice mapping algorithms such as the multivariate iterative region growing with semantics (MIRGS) sea ice image segmentation algorithm are needed.
MIRGS was designed to work with one-channel single-polarization SAR imagery from the RADARSAT-1 satellite. The launch of RADARSAT-2 has made available two-channel dual-polarization SAR imagery for the purposes of sea ice mapping. Dual-polarization imagery provides more information for distinguishing ice types, and one of the channels is less sensitive to changes in the backscatter caused by the SAR incidence angle parameter. In the past, this change in backscatter due to the incidence angle was a key limitation that prevented automatic segmentation of full SAR scenes.
This thesis investigates techniques to make use of the dual-polarization data in MIRGS. An evaluation of MIRGS with RADARSAT-2 data showed that some detail was lost and that the incidence angle caused errors in segmentation. Several data fusion schemes were investigated to determine whether they could improve performance: gradient generation methods designed to take advantage of dual-polarization data, feature-space fusion using linear and non-linear transforms, and image fusion methods based on wavelet combination rules were implemented and tested. The MIRGS parameters were tuned to find the best settings for segmenting dual-polarization data. Results show that the standard MIRGS algorithm with default parameters provides the highest accuracy, so no changes are necessary for dual-polarization data. A hierarchical segmentation scheme that segments the dual-polarization channels separately was implemented to overcome the incidence-angle errors. The technique is effective but requires more user input than the standard MIRGS algorithm.
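One of the ideas mentioned above, generating a single edge-strength map from both polarization channels so that segmentation can use them jointly, might look roughly like this illustrative Python sketch (not the MIRGS implementation); the combination rules shown are generic assumptions.

import numpy as np

def dual_pol_gradient(hh, hv, mode="rss"):
    """Combine gradient magnitudes of the HH and HV channels into one edge map.
    mode = "rss" : root-sum-of-squares of the two per-channel magnitudes
    mode = "max" : per-pixel maximum (keeps an edge visible in either channel)"""
    def grad_mag(band):
        gy, gx = np.gradient(band.astype(float))
        return np.hypot(gx, gy)
    g_hh, g_hv = grad_mag(hh), grad_mag(hv)
    if mode == "max":
        return np.maximum(g_hh, g_hv)
    return np.hypot(g_hh, g_hv)

# Toy dual-pol scene: an "ice edge" visible mainly in the HV channel.
hh = 0.1 * np.random.rand(200, 200) + 0.5
hv = 0.1 * np.random.rand(200, 200)
hv[:, 100:] += 0.4
print(dual_pol_gradient(hh, hv).max())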
19. An integrated approach to real-time multisensory inspection with an application to food processing / Ding, Yuhua, 26 November 2003
Real-time inspection based on machine vision technologies is being widely used in quality control and cost reduction in a variety of application domains. The high demands on the inspection performance and low cost requirements make the algorithm design a challenging task that requires new and innovative methodologies in image processing and fusion. In this research, an integrated approach that combines novel image processing and fusion techniques is proposed for the efficient design of accurate and real-time machine vision-based inspection algorithms with an application to the food processing problem.
Firstly, a general methodology is introduced for the effective detection of defects and foreign objects that possess certain spectral and shape features. The factors that affect the performance metrics are analyzed, and a recursive segmentation and classification scheme is proposed to improve segmentation accuracy. The developed methodology is applied to real-time fan bone detection in deboned poultry meat, achieving a detection rate of 93% and a false alarm rate of 7% in lab-scale testing on 280 samples.
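A bare skeleton of such a recursive segmentation-and-classification loop is sketched below in Python; the splitting and classification functions here are hypothetical stand-ins, not the methodology developed in the thesis.

import numpy as np

def recursive_inspect(region, segment, classify, max_depth=3, depth=0):
    """Skeleton of a recursive segmentation-and-classification pass.
    segment(region)  -> list of candidate sub-regions (e.g. by spectral thresholding)
    classify(region) -> (label, confidence), label in {"defect", "clean", "ambiguous"}
    Only ambiguous regions are re-segmented, so effort is spent where a first pass
    could not decide."""
    label, confidence = classify(region)
    if label != "ambiguous" or depth >= max_depth:
        return [(label, confidence)]
    results = []
    for sub in segment(region):
        results.extend(recursive_inspect(sub, segment, classify, max_depth, depth + 1))
    return results

# Hypothetical stand-ins: split a region into top/bottom halves; call a region a
# defect if it is bright on average, clean if dark, ambiguous otherwise.
def split_halves(r):
    h = r.shape[0] // 2
    return [r[:h], r[h:]]

def toy_classifier(r):
    m = float(r.mean())
    if m > 0.7:
        return "defect", m
    if m < 0.3:
        return "clean", 1.0 - m
    return "ambiguous", 0.5

meat_image = 0.5 * np.random.rand(64, 64)
meat_image[5:15, :] += 0.5          # a bright band standing in for a bone fragment
print(recursive_inspect(meat_image, split_halves, toy_classifier))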
Secondly, a novel snake-based algorithm is developed for the segmentation of vector-valued images. The snakes are driven by the weighted sum of the optimal forces derived from corresponding energy functionals in each image, where the weights are determined by a novel metric that measures both local contrast and noise power in the individual sensor images. This algorithm is effective in improving segmentation accuracy when imagery from multiple sensors is available to the inspection system. The effectiveness of the developed algorithm is verified using (i) synthesized images, (ii) real medical and aerial images, and (iii) color and x-ray chicken breast images. The results further confirm that the algorithm yields higher segmentation accuracy than monosensory methods and is able to accommodate a certain amount of registration error. This feature-level image fusion technique can be combined with pixel- and decision-level techniques to improve the overall inspection system performance.
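A crude stand-in for the fusion step described above, weighting each sensor's edge-force field by a simple contrast-to-noise score and summing the fields, is sketched below; the thesis derives the optimal forces from energy functionals, which this illustration does not attempt, and the weighting metric here is an assumption.

import numpy as np
from scipy import ndimage

def sensor_weight(img, patch=7):
    """Crude contrast-to-noise score for one sensor image (higher = trust it more)."""
    smooth = ndimage.uniform_filter(img, patch)
    contrast = ndimage.uniform_filter((img - smooth) ** 2, patch).mean()    # mean local variance
    noise = np.median(np.abs(img - ndimage.median_filter(img, 3))) + 1e-8   # robust noise proxy
    return contrast / noise

def fused_edge_force(images, sigma=2.0):
    """Weighted sum of per-sensor edge forces (gradient of the squared edge magnitude)."""
    weights = np.array([sensor_weight(im) for im in images])
    weights = weights / weights.sum()
    fy = np.zeros_like(images[0], dtype=float)
    fx = np.zeros_like(images[0], dtype=float)
    for w, im in zip(weights, images):
        edge = ndimage.gaussian_gradient_magnitude(im.astype(float), sigma) ** 2
        gy, gx = np.gradient(edge)      # force points toward strong edges
        fy += w * gy
        fx += w * gx
    return fy, fx, weights

color_band = np.random.rand(128, 128)
xray_band = 0.5 * np.random.rand(128, 128)
fy, fx, w = fused_edge_force([color_band, xray_band])
print(w)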
20. FORMULATION OF DETECTION STRATEGIES IN IMAGES / Fadhil, Ahmed Freidoon, 01 May 2014
This dissertation focuses on two distinct but related problems involving detection in multiple images. The first problem concerns the accurate detection of runways by fusing Synthetic Vision System (SVS) and Enhanced Vision System (EVS) images. A novel procedure is developed to accurately detect runways and horizons and to enhance runway surrounding areas by fusing EVS and SVS images of the runway while an aircraft is landing. Because the EVS and SVS frames are not aligned, a registration step is introduced to align the EVS and SVS images prior to fusion. The most notable feature of the registration procedure is that it is guided by the information extracted from the weather-invariant SVS images. Four fusion rules based on combining Discrete Wavelet Transform (DWT) sub-bands are implemented and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and on image pairs containing simulated EVS images with varying levels of turbulence. The subjective and objective evaluations reveal that runways and horizons can be detected accurately even in poor visibility conditions. Furthermore, it is demonstrated that different aspects of the EVS and SVS images can be emphasized by using different DWT fusion rules. Another notable feature is that the entire procedure is autonomous throughout the landing sequence irrespective of the weather conditions. Given the excellent fusion results and the autonomous operation, the fusion procedure is promising for incorporation into head-up displays (HUDs) to assist pilots in safely landing aircraft in varying weather conditions.

The second problem focuses on the blind detection of hidden messages that are embedded in images using various steganography methods. A new steganalysis strategy is introduced to blindly detect hidden messages that have been embedded in JPEG images using various steganography techniques. The key contribution is the formulation of a multi-domain feature extraction, ranking, and selection strategy to improve steganalysis performance. The multi-domain features are statistical measures extracted from DWT, multi-wavelet (MWT), and slantlet (SLT) transforms. Feature ranking and selection is based on evaluating the performance of each feature independently and combining the best uncorrelated features. The resulting feature set is used in conjunction with discriminant analysis and support vector classifiers to detect the presence or absence of hidden messages in images. Numerous experiments are conducted to demonstrate the improved performance of the new steganalysis strategy over existing steganalysis methods.
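A minimal sketch of the feature-extraction and classification pipeline for the second problem, restricted to DWT statistics and a support vector classifier (the MWT and SLT features and the ranking/selection steps are omitted), is given below in Python; the data and the "embedding" used here are synthetic placeholders, not real cover/stego images.

import numpy as np
import pywt
from scipy import stats
from sklearn.svm import SVC

def dwt_statistics(img, wavelet="db4", levels=3):
    """Mean, variance, skewness, and kurtosis of every DWT subband of a grayscale image."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    bands = [coeffs[0]] + [b for level in coeffs[1:] for b in level]
    feats = []
    for band in bands:
        v = band.ravel()
        feats += [v.mean(), v.var(), stats.skew(v), stats.kurtosis(v)]
    return np.array(feats)

# Hypothetical training setup: rows of X are feature vectors of cover/stego images.
rng = np.random.default_rng(0)
covers = [rng.random((64, 64)) for _ in range(20)]
stegos = [c + rng.normal(0, 0.01, c.shape) for c in covers]   # crude stand-in for embedding noise
X = np.array([dwt_statistics(im) for im in covers + stegos])
y = np.array([0] * len(covers) + [1] * len(stegos))
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))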