1
Multi-perspective, Multi-modal Image Registration and Fusion. Belkhouche, Mohammed Yassine, 08 1900
Multi-modal image fusion is an active research area with many civilian and military applications. Fusion is the strategic combination of information collected by sensors of different types or at different locations in order to obtain a better understanding of an observed scene or situation. Multi-modal images cannot be fused unless the modalities are first spatially aligned. In this research, I consider two important problems: multi-modal, multi-perspective image registration, and decision-level fusion of multi-modal images, in particular LiDAR and visual imagery. Multi-modal image registration is a difficult task because features extracted from each modality carry different semantic interpretations. The problem is decoupled into three sub-problems: identification and extraction of common features, determination of corresponding points, and estimation of the registration transformation parameters. Traditional registration methods use low-level features such as lines and corners; establishing correspondences from these features requires an extensive optimization search. Many methods rely on a global positioning system (GPS) and a calibrated camera to obtain an initial estimate of the camera parameters. The advantages of this work over previous approaches are twofold. First, I use high-level features, which significantly reduce the search space of the optimization process. Second, the determination of corresponding points is modeled as an assignment problem between a small number of objects. Fusing LiDAR and visual images is beneficial because of the rich, complementary characteristics of the two modalities: LiDAR data contain 3D information, while images contain visual appearance information. Developing a fusion technique that exploits the characteristics of both modalities is therefore important, and I establish a decision-level fusion technique using manifold models.
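A minimal sketch of the correspondence-and-registration idea described above, assuming descriptor-based features and a 2D similarity transform; the feature representation, cost definition, and transform model are illustrative assumptions, not the method of the thesis itself.

```python
# Sketch: matching a small number of high-level objects as an assignment
# problem, then estimating a 2D similarity transform from the matches.
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_and_register(lidar_feats, image_feats):
    """lidar_feats: (N, d), image_feats: (M, d) feature arrays whose first
    two columns are assumed to be planimetric coordinates (x, y)."""
    # Cost matrix: descriptor distance between every LiDAR/image feature pair.
    cost = np.linalg.norm(lidar_feats[:, None, :] - image_feats[None, :, :], axis=2)

    # Optimal one-to-one assignment (Hungarian algorithm); tractable because
    # only a small number of high-level objects is involved.
    rows, cols = linear_sum_assignment(cost)

    src = lidar_feats[rows, :2]   # matched LiDAR points (x, y)
    dst = image_feats[cols, :2]   # matched image points (x, y)

    # Least-squares similarity transform dst ~ s * R @ src + t (Umeyama-style).
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.ones(len(S))
    if np.linalg.det(U @ Vt) < 0:     # guard against reflections
        d[-1] = -1
    R = (U * d) @ Vt
    s = (S * d).sum() / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```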
2
The Study of Knowledge-Based Lidar Data Filtering and Terrain Recovery. Tsai, Tsung-shao, 04 February 2010
There is an increasing need for three-dimensional terrain description in applications such as catchment-area development and forest fire control and restoration. Three-dimensional information plays an indispensable role in these applications, and the acquisition of digital elevation models (DEMs) is their first step.
LiDAR is a recent development in remote sensing with great potential for providing high-resolution, accurate three-dimensional point clouds that describe the terrain surface. The acquired LiDAR data represent the surfaces from which the laser pulses are reflected, including both the bare terrain and objects above the ground; these objects must be removed to derive a DEM. Many LiDAR data-filtering studies are based on surface, block, and slope algorithms. These methods can filter out most features above the terrain; however, in certain situations they have proved unsatisfactory.
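As an illustration of the slope-based family of filters mentioned above, the following is a minimal sketch; the neighbourhood radius and slope threshold are illustrative assumptions, not values from this study.

```python
# Sketch: keep a point as ground if its slope to the lowest neighbour within
# a given radius does not exceed a threshold (rise over run).
import numpy as np
from scipy.spatial import cKDTree


def slope_filter(points, radius=2.0, max_slope=0.3):
    """points: (N, 3) array of LiDAR returns (x, y, z). Returns ground points."""
    tree = cKDTree(points[:, :2])
    ground = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p[:2], r=radius)
        neigh = points[idx]
        j = np.argmin(neigh[:, 2])            # lowest return in the neighbourhood
        zmin = neigh[j, 2]
        run = max(np.linalg.norm(neigh[j, :2] - p[:2]), 1e-6)
        ground[i] = (p[2] - zmin) / run <= max_slope
    return points[ground]
```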
Each of these algorithms describes the terrain surface from a different point of view, and appropriately combining their advantages yields a more complete way to derive DEMs. A knowledge-based system is developed to solve specific problems according to appropriately formulated domain knowledge. Huang (2007) proposed a knowledge-based classification system for urban feature classification using LiDAR data and high-resolution aerial imagery, achieving 93% classification accuracy. As a follow-up to Huang's study, this research proposes knowledge-based LiDAR filtering (KBLF). KBLF integrates knowledge rules derived from experts in ground-feature extraction from LiDAR data to improve both terrain description and ground-feature classification. The enhanced filtering capability of KBLF yields better-quality reference ground points, from which terrain heights and DEMs are recovered using Inverse Distance Weighting (IDW) and Nearest Neighbor (NN) interpolation.
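A hedged sketch of the terrain-recovery step, interpolating filtered ground returns onto a regular DEM grid with IDW and NN; the grid spacing, neighbour count, and distance power are illustrative assumptions, not parameters from the study.

```python
# Sketch: recover a DEM from filtered ground points with IDW and NN.
import numpy as np
from scipy.spatial import cKDTree


def recover_dem(ground_xyz, cell=1.0, k=8, power=2.0):
    """ground_xyz: (N, 3) filtered ground points. Returns (idw_dem, nn_dem)."""
    xy, z = ground_xyz[:, :2], ground_xyz[:, 2]
    (xmin, ymin), (xmax, ymax) = xy.min(0), xy.max(0)
    gx = np.arange(xmin, xmax + cell, cell)
    gy = np.arange(ymin, ymax + cell, cell)
    XX, YY = np.meshgrid(gx, gy)
    nodes = np.column_stack([XX.ravel(), YY.ravel()])

    tree = cKDTree(xy)
    dist, idx = tree.query(nodes, k=k)

    # Nearest Neighbour DEM: elevation of the closest ground return.
    nn_dem = z[idx[:, 0]].reshape(XX.shape)

    # IDW DEM: weights fall off with distance**power; clamp distances so a
    # grid node coinciding with a ground point does not divide by zero.
    w = 1.0 / np.maximum(dist, 1e-12) ** power
    idw_dem = ((w * z[idx]).sum(1) / w.sum(1)).reshape(XX.shape)
    return idw_dem, nn_dem
```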