1. Mapping individual trees from airborne multi-sensor imagery. Lee, Juheon, January 2016.
Airborne multi-sensor imaging is increasingly used to examine vegetation properties. The advantage of using multiple types of sensor is that each detects a different feature of the vegetation, so that collectively they provide a detailed understanding of the ecological pattern. Specifically, Light Detection And Ranging (LiDAR) devices produce detailed point clouds of where laser pulses have been backscattered from surfaces, giving information on vegetation structure; hyperspectral sensors measure reflectances within narrow wavebands, providing spectrally detailed information about the optical properties of targets; and aerial photographs provide high-spatial-resolution imagery, capturing fine details that cannot be identified from hyperspectral or LiDAR intensity images. Using a combination of these sensors, effective techniques can be developed for mapping species and inferring leaf physiological processes at the individual tree crown (ITC) level.

Although multi-sensor approaches have revolutionised ecological research, their application to mapping individual tree crowns is limited by two major technical issues: (a) multi-sensor imaging requires all images taken by different sensors to be co-aligned, but differing sensor characteristics result in scale, rotation or translation mismatches between the images, making correction a prerequisite of individual tree crown mapping; and (b) reconstructing individual tree crowns from the unstructured raw data requires an accurate tree delineation algorithm.

This thesis develops a systematic way to resolve these technical issues using state-of-the-art computer vision algorithms. A variational method, called NGF-Curv, was developed to co-align hyperspectral imagery, LiDAR and aerial photographs. The NGF-Curv algorithm can handle very complex topographic and lens distortions efficiently, improving the accuracy of co-alignment compared with established image registration methods for airborne data. A graph-cut method, named MCNCP-RNC, was developed to reconstruct individual tree crowns from the fully integrated multi-sensor imagery. MCNCP-RNC is not influenced by interpolation artefacts because it detects trees in 3D, and it delineates individual tree crowns using both hyperspectral imagery and LiDAR. Based on these algorithms, we developed a new workflow to detect species at the pixel and ITC levels in a temperate deciduous forest in the UK. In addition, we modified the workflow to monitor the physiological responses of two oak species along environmental gradients in a Mediterranean woodland in Spain. The results show that the scheme can delineate individual tree crowns, identify species and monitor the physiological responses of canopy leaves.
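For readers unfamiliar with tree delineation, the sketch below illustrates the simpler raster-based baseline that methods such as MCNCP-RNC improve upon: picking candidate tree tops as local maxima of a canopy height model (CHM). It is not the thesis's algorithm, which works directly on the 3D point cloud and fuses hyperspectral data; the CHM array, window size and height threshold here are illustrative assumptions.

```python
# Minimal illustrative sketch: local-maxima tree-top detection on a rasterised
# canopy height model. NOT the MCNCP-RNC graph-cut method described above.
import numpy as np
from scipy import ndimage

def detect_tree_tops(chm: np.ndarray, window: int = 5, min_height: float = 2.0):
    """Return (row, col) coordinates of candidate tree tops in a CHM."""
    # A pixel is a candidate tree top if it equals the maximum of its local window
    local_max = ndimage.maximum_filter(chm, size=window) == chm
    # Discard low vegetation and ground below the (assumed) height threshold
    candidates = local_max & (chm >= min_height)
    return np.argwhere(candidates)

# Synthetic CHM with two Gaussian "crowns" on flat ground, for demonstration only
yy, xx = np.mgrid[0:100, 0:100]
chm = 15 * np.exp(-((yy - 30) ** 2 + (xx - 40) ** 2) / 50.0) \
    + 12 * np.exp(-((yy - 70) ** 2 + (xx - 65) ** 2) / 80.0)
print(detect_tree_tops(chm, window=9))  # expected: the two crown apex pixels
```

A raster baseline like this is sensitive to how the point cloud was interpolated into the CHM, which is exactly the interpolation-artefact problem the thesis's 3D approach avoids.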
2. CNN-Based Methods for Tree Species Detection in UAV Images. Sievers, Olle, January 2022.
Unmanned aerial vehicles (UAVs) with high-resolution cameras are common in today's society. Industries such as the forestry industry use drones to get a fast overview of tree populations. More advanced sensors, such as near-infrared or depth sensors, can increase the amount of information that UAV images provide about the forest, such as tree counts or forest health. However, the fast-expanding field of deep learning could help expand the information that can be acquired using only RGB cameras. Three deep learning models, Faster R-CNN, RetinaNet, and YOLOR, were compared to investigate this. It was also investigated whether initializing the models using transfer learning from the MS COCO dataset could increase their performance. The datasets used were the Swedish Forest Agency (2021) Forest Damages-Spruce Bark Beetle 1.0 dataset from the National Forest Data Lab, together with drone images provided by IT-Bolaget Per & Per. The deep learning models were trained to detect five different tree classes: spruce, pine, birch, aspen, and other. The results show the potential of deep learning for detecting tree species in UAV images.
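For context, the snippet below is a minimal sketch of the transfer-learning setup compared in this thesis: a detector initialized with MS COCO weights whose classification head is replaced for the five tree classes. It uses torchvision's Faster R-CNN as a stand-in; the thesis's actual training configuration, and the RetinaNet and YOLOR counterparts, are not specified here, and the class ids are hypothetical.

```python
# Sketch only: COCO-pretrained Faster R-CNN adapted to five tree classes.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 1 + 5  # background + {spruce, pine, birch, aspen, other}

# Load a Faster R-CNN pre-trained on MS COCO (the transfer-learning initialization)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the 91-class COCO head for one matching the tree classes
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Dummy training step input: one RGB UAV tile with one labelled bounding box
model.train()
images = [torch.rand(3, 512, 512)]
targets = [{
    "boxes": torch.tensor([[50.0, 60.0, 180.0, 200.0]]),  # x1, y1, x2, y2 in pixels
    "labels": torch.tensor([1]),                           # 1 = spruce (hypothetical id)
}]
loss_dict = model(images, targets)  # returns classification / box-regression losses
print({k: float(v) for k, v in loss_dict.items()})
```

Initializing from COCO weights and replacing only the prediction head is the standard way to reuse generic object-detection features when the target dataset, as here, is comparatively small.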