
Multi-perspective, Multi-modal Image Registration and Fusion

Belkhouche, Mohammed Yassine, 08 1900
Multi-modal image fusion is an active research area with many civilian and military applications. Fusion is the strategic combination of information collected by sensors of different types or at different locations in order to obtain a better understanding of an observed scene or situation. Multi-modal images cannot be fused unless the modalities are spatially aligned. In this research, I consider two important problems: multi-modal, multi-perspective image registration, and decision-level fusion of multi-modal images, in particular LiDAR and visual imagery. Multi-modal image registration is a difficult task because features extracted from each modality carry different semantic interpretations. The problem is decoupled into three sub-problems: (1) identifying and extracting common features, (2) determining corresponding points, and (3) estimating the registration transformation parameters. Traditional registration methods use low-level features such as lines and corners; using these features requires an extensive optimization search to determine the corresponding points. Many methods use a global positioning system (GPS) and a calibrated camera to obtain an initial estimate of the camera parameters. The advantages of this work over previous work are twofold. First, I use high-level features, which significantly reduce the search space of the optimization process. Second, the determination of corresponding points is modeled as an assignment problem between a small number of objects. Fusing LiDAR and visual images, in turn, is beneficial because of the rich, complementary characteristics of the two modalities: LiDAR data contain 3D information, while images contain visual information. Developing a fusion technique that exploits the characteristics of both modalities is therefore important. I establish a decision-level fusion technique using manifold models.
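As an illustration of the second point, correspondence between two small feature sets can be cast as an assignment problem and, for a handful of objects, solved exhaustively. The descriptors, cost metric, and function below are hypothetical examples, not code from the thesis:

```python
# Hedged sketch (not the thesis implementation): corresponding-point
# determination modeled as an assignment problem between small sets of
# high-level feature descriptors. Euclidean distance is an assumed cost.
from itertools import permutations
import math

def match_features(feats_a, feats_b):
    """Exhaustively solve the assignment problem for two equal-sized,
    small feature sets: pair each feature in A with one in B so that
    the total descriptor distance is minimized."""
    best_cost, best_pairs = float("inf"), None
    for perm in permutations(range(len(feats_b))):
        cost = sum(math.dist(feats_a[i], feats_b[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_pairs = cost, list(enumerate(perm))
    return best_pairs, best_cost

# Toy example with three hypothetical 2D descriptors per modality.
a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
b = [(0.1, 0.9), (0.05, 0.05), (1.1, -0.1)]
pairs, cost = match_features(a, b)  # pairs: [(0, 1), (1, 2), (2, 0)]
```

Brute force is tractable only because the sets are small, which is exactly the point of using high-level features: with few objects to match, no extensive optimization search is needed.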

Automated Tree Crown Discrimination Using Three-Dimensional Shape Signatures Derived from LiDAR Point Clouds

Sadeghinaeenifard, Fariba, 05 1900
Discrimination of different tree crowns based on their 3D shapes is essential for a wide range of forestry applications and, due to its complexity, is a significant challenge. This study presents a modified 3D shape descriptor for the perception of different tree crown shapes in discrete-return LiDAR point clouds. The proposed methodology comprises five main components: definition of a local coordinate system; learning of salient points; generation of simulated LiDAR point clouds of geometrical shapes; shape-signature generation (from simulated LiDAR points as the reference shape signature and from actual LiDAR point clouds as the evaluated shape signature); and, finally, similarity assessment of the shape signatures to extract the shape of a real tree. The first component is a proposed strategy for defining a local coordinate system for each tree in order to normalize its 3D point cloud. In the second component, a learning approach categorizes all 3D points into two ranks to identify interesting, or salient, points on each tree. The third component covers the generation of simulated LiDAR point clouds for two geometrical shapes, a hemisphere and a half-ellipsoid; the operator then extracts 3D LiDAR point clouds of actual trees, either deciduous or evergreen. In the fourth component, a longitude-latitude transformation is applied to the simulated and actual LiDAR point clouds to generate 3D shape signatures of the tree crowns. A critical step is the transformation of LiDAR points from their exact positions to longitude and latitude positions (distinct from geographic longitude-latitude coordinates), labeled by their pre-assigned ranks. Natural neighbor interpolation then converts the point maps to raster datasets. The shape signatures generated from the simulated and actual LiDAR points are called the reference and evaluated shape signatures, respectively.
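One way to picture the longitude-latitude transformation of the fourth component is as an angular mapping of each crown point about the crown's local origin. The abstract stresses that the thesis's transformation differs from geographic coordinates, so the spherical formula, function name, and degree convention below are illustrative assumptions, not the author's method:

```python
# Illustrative sketch only: a plausible longitude-latitude mapping for
# crown points after normalization to a local coordinate system. The
# thesis's actual (non-geographic) transformation is not reproduced here.
import math

def lon_lat(point, center):
    """Map a 3D crown point to (longitude, latitude) angles in degrees
    about the crown center of its local coordinate system."""
    dx, dy, dz = (p - c for p, c in zip(point, center))
    lon = math.degrees(math.atan2(dy, dx))                # azimuth angle
    r = math.sqrt(dx * dx + dy * dy + dz * dz)            # radial distance
    lat = math.degrees(math.asin(dz / r)) if r else 0.0   # elevation angle
    return lon, lat

# A point level with the crown center, directly along the local x-axis:
print(lon_lat((1.0, 0.0, 0.0), (0.0, 0.0, 0.0)))  # → (0.0, 0.0)
```

Mapping each ranked point to such angular coordinates, then interpolating, is what turns an irregular point cloud into a raster signature that two crowns can share regardless of size.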
Lastly, the fifth component determines the similarity between the evaluated and reference shape signatures to extract the shape of each examined tree. The entire process is automated with ArcGIS toolboxes through Python programming, allowing further evaluation on more tree crowns in different study areas. Results from LiDAR points captured for 43 trees in the City of Surrey, British Columbia (Canada) suggest that the modified shape descriptor is a promising method for separating different shapes of tree crowns in LiDAR point cloud data. Experimental results also indicate that the modified longitude-latitude shape descriptor satisfies all the desired properties of a suitable shape descriptor proposed in computer science, along with leaf-off/leaf-on invariance, which makes the process independent of the acquisition date of the LiDAR data. In summary, the modified longitude-latitude shape descriptor is a promising method for discriminating different shapes of tree crowns using LiDAR point cloud data.
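The fifth component's similarity assessment could, for example, use normalized cross-correlation between the rasterized signatures; the abstract does not name the metric, so the metric, function, and toy rasters below are assumptions for illustration:

```python
# Hypothetical illustration: similarity between an evaluated and a
# reference shape signature. Normalized cross-correlation is one common
# choice; the thesis's actual metric is not specified in the abstract.
import math

def signature_similarity(sig_a, sig_b):
    """Normalized cross-correlation between two equal-sized rasterized
    shape signatures (lists of rows); 1.0 means identical shape."""
    flat_a = [v for row in sig_a for v in row]
    flat_b = [v for row in sig_b for v in row]
    mean_a = sum(flat_a) / len(flat_a)
    mean_b = sum(flat_b) / len(flat_b)
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(flat_a, flat_b))
    den = math.sqrt(sum((a - mean_a) ** 2 for a in flat_a)
                    * sum((b - mean_b) ** 2 for b in flat_b))
    return num / den if den else 0.0

# Toy rasters: an evaluated signature that is a scaled copy of the
# reference scores as a perfect shape match.
hemisphere_ref = [[0, 1, 0], [1, 2, 1], [0, 1, 0]]
tree_eval = [[0, 2, 0], [2, 4, 2], [0, 2, 0]]
score = signature_similarity(hemisphere_ref, tree_eval)  # ≈ 1.0
```

Because the correlation is normalized, a crown matches whichever reference shape (hemisphere or half-ellipsoid) it most resembles, independent of overall crown size.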
