
Mapping individual trees from airborne multi-sensor imagery

Airborne multi-sensor imaging is increasingly used to examine vegetation properties. The advantage of using multiple types of sensor is that each detects a different feature of the vegetation, so that collectively they provide a detailed understanding of ecological pattern. Specifically, Light Detection And Ranging (LiDAR) devices produce detailed point clouds of where laser pulses have been backscattered from surfaces, giving information on vegetation structure; hyperspectral sensors measure reflectances within narrow wavebands, providing spectrally detailed information about the optical properties of targets; while aerial photographs provide imagery at high spatial resolution, capturing fine detail that cannot be identified in hyperspectral or LiDAR intensity images. By combining these sensors, effective techniques can be developed for mapping species and inferring leaf physiological processes at the individual tree crown (ITC) level. Although multi-sensor approaches have revolutionised ecological research, their application to mapping individual tree crowns is limited by two major technical issues: (a) multi-sensor imaging requires all images from the different sensors to be co-aligned, but differing sensor characteristics result in scale, rotation or translation mismatches between the images, making their correction a prerequisite of ITC mapping; and (b) reconstructing individual tree crowns from the unstructured raw data requires an accurate tree delineation algorithm. This thesis develops a systematic way to resolve these technical issues using state-of-the-art computer vision algorithms. A variational method, called NGF-Curv, was developed to co-align hyperspectral imagery, LiDAR data and aerial photographs. The NGF-Curv algorithm deals efficiently with very complex topographic and lens distortions, improving the accuracy of co-alignment compared with established image registration methods for airborne data. A graph cut method, named MCNCP-RNC, was developed to reconstruct individual tree crowns from the fully integrated multi-sensor imagery. Because it detects trees in 3D, MCNCP-RNC is not influenced by interpolation artefacts, and it delineates individual tree crowns using both hyperspectral imagery and LiDAR. Based on these algorithms, we developed a new workflow to detect species at pixel and ITC levels in a temperate deciduous forest in the UK. In addition, we modified the workflow to monitor the physiological responses of two oak species along environmental gradients in a Mediterranean woodland in Spain. The results show that our scheme can delineate individual tree crowns, identify species and monitor the physiological responses of canopy leaves.
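The abstract does not spell out the registration functional behind NGF-Curv, but variational methods built on normalised gradient fields (NGF) typically align images by minimising an edge-alignment distance between the gradient fields of the two images, which is well suited to co-registering modalities with very different intensity statistics (e.g. a LiDAR intensity raster and an aerial photograph). The sketch below is a minimal, illustrative implementation of that distance only; the function name, the `eps` regularisation parameter and the use of NumPy are assumptions for illustration, not the thesis implementation.

```python
# Minimal sketch (not the thesis code): the normalised gradient field (NGF)
# distance that NGF-type registration methods minimise. All names and the
# eps parameter are illustrative assumptions.
import numpy as np

def ngf_distance(reference, template, eps=1e-2):
    """Edge-alignment distance between two 2D images of the same shape.

    The distance is small wherever the image gradients point in the same
    (or exactly opposite) direction, so it does not depend on the images
    sharing comparable intensity values.
    """
    # Regularised gradient fields of both images
    gr = np.gradient(reference.astype(float))
    gt = np.gradient(template.astype(float))
    norm_r = np.sqrt(gr[0] ** 2 + gr[1] ** 2 + eps ** 2)
    norm_t = np.sqrt(gt[0] ** 2 + gt[1] ** 2 + eps ** 2)

    # Cosine of the angle between the two gradient fields at each pixel
    dot = (gr[0] * gt[0] + gr[1] * gt[1]) / (norm_r * norm_t)

    # NGF distance: 1 - cos^2, summed over the image domain
    return np.sum(1.0 - dot ** 2)
```

In an NGF-based registration, a distance of this kind would be minimised over a parametric or non-parametric transformation of the template image, with a regulariser (curvature regularisation, in the case of NGF-Curv) penalising non-smooth deformations.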

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:723521
Date: January 2016
Creators: Lee, Juheon
Contributors: Coomes, David; Schönlieb, Carola-Bibiane
Publisher: University of Cambridge
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: https://www.repository.cam.ac.uk/handle/1810/266686
