1.
DEEP NEURAL NETWORKS AND TRANSFER LEARNING FOR CROP PHENOTYPING USING MULTI-MODALITY REMOTE SENSING AND ENVIRONMENTAL DATA
Taojun Wang (15360640), 27 April 2023
High-throughput phenotyping has emerged as a powerful approach to expediting crop breeding programs. Modern remote sensing systems, including manned aircraft, unmanned aerial vehicles (UAVs), and terrestrial platforms, carry multiple sensors, such as RGB cameras, multispectral, hyperspectral, and infrared thermal sensors, and light detection and ranging (LiDAR) scanners, and are now widely used in high-throughput phenotyping. These systems collect high spatial, spectral, and temporal resolution data on phenotypic traits such as plant height, canopy cover, and leaf area. Enhancing the capability to use such remote sensing data for automated phenotyping is crucial to advancing crop breeding. This dissertation develops deep learning and transfer learning methodologies for crop phenotyping using multi-modality remote sensing and environmental data. The techniques address two main areas: multi-temporal, across-field biomass prediction and multi-scale remote sensing data fusion.
Biomass is a plant characteristic that correlates strongly with biofuel production and is influenced by both genetic and environmental factors. Previous studies have shown that deep learning-based models can effectively predict end-of-season biomass for a single year and field. This dissertation develops transfer learning methodologies for multi-year, across-field biomass prediction. Feature importance analysis was performed to identify and remove redundant features. The proposed model can incorporate high-dimensional genetic marker data along with other features representing phenotypic information, environmental conditions, or management practices. It can also predict end-of-season biomass from mid-season remote sensing and environmental data to provide early rankings. The framework was evaluated on experimental trials conducted from 2017 to 2021 at the Agronomy Center for Research and Education (ACRE) at Purdue University. The proposed transfer learning techniques effectively selected the most informative training samples in the target domain, resulting in significant improvements in end-of-season yield prediction and ranking. Furthermore, the importance of input remote sensing features was assessed at different growth stages.
Remote sensing technology enables multi-scale, multi-temporal data acquisition. However, fully exploiting the acquired data requires data fusion techniques that leverage the strengths of different sensors and platforms. In this dissertation, a generative adversarial network (GAN)-based multiscale RGB-guided model and domain adaptation framework were developed to enhance the spatial resolution of multispectral images. The model was trained on a limited number of high spatial resolution images from a wheel-based platform and then applied to low spatial resolution images acquired by UAV and airborne platforms. The strategy was tested in two distinct scenarios, sorghum plant breeding and urban areas, to evaluate its effectiveness.
2.
Extending Depth of Field via Multifocus Fusion
Hariharan, Harishwaran, 01 December 2011
In digital imaging systems, the nature of the optics constricts the depth of field within the field of view: parts of the scene are in focus while others are defocused. This dissertation presents a framework of versatile, data-driven, application-independent methods for extending the depth of field in digital imaging systems. The principal contributions are the use of focal connectivity, the direct use of curvelets, and the use of features extracted by Empirical Mode Decomposition, namely intrinsic mode images, for multifocus fusion. Depending on the approach, the input images are decomposed into focally connected components, peripheral and medial coefficients, or intrinsic mode images, and fusion is performed on the extracted focal information by schemes that emphasize the focused regions of each input image. The fused image unifies information from all focal planes while maintaining the verisimilitude of the scene. The final output is an image in which all focal volumes of the scene are in focus, as if acquired by a pinhole camera with an effectively unlimited depth of field. To validate the fusion performance of the method, the results are compared with those of region-based and multiscale decomposition-based fusion techniques. Several illustrative examples, supported by in-depth objective comparisons, are shown, and practical recommendations are made.
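For orientation, a much simpler baseline than the focal-connectivity, curvelet, and intrinsic-mode-image methods described above is per-pixel selection driven by a local focus measure. The sketch below uses local Laplacian energy as that measure; it is a generic multifocus fusion baseline, not the method of this dissertation, and the function name is hypothetical.

```python
# Baseline multifocus fusion: pick, per pixel, the input whose local
# neighborhood has the highest Laplacian energy (i.e., is sharpest).
import numpy as np
from scipy import ndimage

def fuse_multifocus(images: list, window: int = 9) -> np.ndarray:
    """Fuse grayscale images of the same scene captured at different
    focal settings into a single all-in-focus image."""
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    # Focus measure: local energy of the Laplacian response per input.
    energy = np.stack([
        ndimage.uniform_filter(ndimage.laplace(img) ** 2, size=window)
        for img in stack
    ])
    choice = energy.argmax(axis=0)  # index of the sharpest input per pixel
    return np.take_along_axis(stack, choice[None], axis=0)[0]
```

Such per-pixel selection can introduce seams at focus boundaries, which is one motivation for the connectivity- and decomposition-based schemes the dissertation develops.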