1

3D shape estimation of negative obstacles using LiDAR point cloud data

Lebakula, Viswadeep 10 December 2021 (has links)
Obstacle detection and avoidance play a crucial role in the autonomous navigation of unmanned ground vehicles (UGVs). Information about obstacles decreases as the distance between the UGV and the obstacles increases, and it decreases much more rapidly for negative obstacles than for positive ones. UGV navigation is more challenging in off-road environments because negative obstacles (e.g., potholes, ditches, and trenches) are more likely there than in on-road environments. One approach is simply to avoid any candidate path containing a negative obstacle, but in off-road environments this is not always possible; in such cases, the local path planner may need to choose the candidate path whose negative obstacle causes the least damage to the vehicle. To better handle such scenarios, this research introduces a novel approach to 3D shape estimation of negative obstacles using LiDAR point cloud data. The dimensions (width, diameter, and depth), location (center), and curvature of a negative obstacle are calculated from its estimated shape. The presented approach can estimate the shape of different kinds of negative obstacles, such as holes and trenches, as well as large and complicated ones. The approach was tested on different terrain types using the Mississippi Autonomous Vehicle Simulation (MAVS).
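As a rough illustration of the quantities this abstract describes (depth, diameter, and center), here is a hedged Python sketch, not the dissertation's actual algorithm: it characterizes a pothole-like negative obstacle from the points that fall below an assumed, already-fitted ground plane. The function name, thresholds, and flat-ground assumption are all hypothetical.

```python
import numpy as np

def estimate_negative_obstacle(points, ground_z=0.0, depth_thresh=-0.15):
    """Hypothetical sketch: characterize a pothole-like negative obstacle,
    assuming the ground plane z = ground_z has already been fitted.

    points: (N, 3) array of x, y, z LiDAR coordinates.
    Returns center (x, y), approximate diameter, and maximum depth.
    """
    # Points noticeably below the ground plane are obstacle candidates.
    below = points[points[:, 2] < ground_z + depth_thresh]
    if below.size == 0:
        return None  # no negative obstacle detected

    center = below[:, :2].mean(axis=0)       # (x, y) centroid of the cavity
    depth = ground_z - below[:, 2].min()     # deepest point below ground
    # Approximate diameter as the largest horizontal extent of the region.
    extent = below[:, :2].max(axis=0) - below[:, :2].min(axis=0)
    return {"center": center, "diameter": float(extent.max()),
            "depth": float(depth)}

# Synthetic example: a 0.4 m-deep, ~1 m-wide hole centered near (5, 0).
rng = np.random.default_rng(0)
ground = rng.uniform([-10, -10, -0.02], [10, 10, 0.02], (2000, 3))
hole = rng.uniform([4.5, -0.5, -0.4], [5.5, 0.5, -0.2], (200, 3))
print(estimate_negative_obstacle(np.vstack([ground, hole])))
```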
2

Classification of Man-made Urban Structures from Lidar Point Clouds with Applications to Extrusion-based 3-D City Models

Thomas, Anita 19 May 2015 (has links)
No description available.
3

Wavelet-enhanced 2D and 3D Lightweight Perception Systems for autonomous driving

Alaba, Simegnew Yihunie 10 May 2024 (has links) (PDF)
Autonomous driving requires lightweight and robust perception systems that can rapidly and accurately interpret the complex driving environment. This dissertation investigates the discrete wavelet transform (DWT), the inverse DWT, convolutional neural networks (CNNs), and transformers as foundational elements for lightweight perception architectures for autonomous vehicles. The inherent properties of the DWT, including its invertibility, sparsity, time-frequency localization, and ability to capture multi-scale information, provide a useful inductive bias, while transformers capture long-range dependencies between features. By harnessing these attributes, novel wavelet-enhanced deep learning architectures are introduced. The first contribution is a lightweight backbone network suitable for real-time processing; it balances processing speed and accuracy, outperforming established models such as ResNet-50 and VGG16 in accuracy while remaining computationally efficient. Moreover, a multiresolution attention mechanism is introduced for CNNs to enhance feature extraction, directing the network's focus toward crucial features while suppressing less significant ones. Likewise, a wavelet-based vision transformer is proposed that uses the convolution theorem in the frequency domain to mitigate the computational burden that multi-head self-attention places on vision transformers. Furthermore, a wavelet-multiresolution-analysis-based 3D object detection model exploits the DWT's invertibility, ensuring comprehensive capture of environmental information. Lastly, a multimodal fusion model is presented: every sensor has limitations, and no single sensor suits all applications, so fusion is used to exploit the complementary strengths of different sensors. Using a transformer to capture long-range feature dependencies, this model effectively fuses the depth cues from LiDAR with the rich texture derived from cameras, integrating backbone networks and transformers to achieve lightweight and competitive results for 3D object detection. The proposed model also applies network optimization methods, including pruning, quantization, and quantization-aware training, to minimize computational load while maintaining performance. The experimental results across various datasets for classification networks, attention mechanisms, 3D object detection, and multimodal fusion indicate a promising direction for developing lightweight and robust perception systems for robotics, particularly autonomous driving.
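To make the DWT properties the abstract relies on concrete, here is a minimal PyWavelets sketch, illustrative only and not the dissertation's architecture: it shows the invertible, multi-scale decomposition that such wavelet-enhanced models build on. The Haar wavelet and the array size are arbitrary choices.

```python
import numpy as np
import pywt  # PyWavelets

# A 2D DWT splits a feature map into one low-frequency approximation (LL)
# and three high-frequency detail bands (LH, HL, HH), each at half the
# spatial resolution -- an information-preserving alternative to pooling.
x = np.random.rand(64, 64).astype(np.float32)   # stand-in feature map
ll, (lh, hl, hh) = pywt.dwt2(x, "haar")
print(ll.shape)  # (32, 32): multi-scale downsampling without discarding detail

# Invertibility: the inverse DWT reconstructs the input exactly, which is
# the property a detection model can exploit to avoid losing information.
x_rec = pywt.idwt2((ll, (lh, hl, hh)), "haar")
print(np.allclose(x, x_rec, atol=1e-6))  # True
```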
4

DEEP NEURAL NETWORKS AND TRANSFER LEARNING FOR CROP PHENOTYPING USING MULTI-MODALITY REMOTE SENSING AND ENVIRONMENTAL DATA

Taojun Wang (15360640) 27 April 2023 (has links)
High-throughput phenotyping has emerged as a powerful approach to expedite crop breeding programs. Modern remote sensing systems, including manned aircraft, unmanned aerial vehicles (UAVs), and terrestrial platforms equipped with multiple sensors, such as RGB cameras, multispectral, hyperspectral, and infrared thermal sensors, as well as light detection and ranging (LiDAR) scanners, are now widely used in high-throughput phenotyping. These systems can collect data at high spatial, spectral, and temporal resolution on phenotypic traits such as plant height, canopy cover, and leaf area. Enhancing the ability to use such remote sensing data for automated phenotyping is crucial to advancing crop breeding. This dissertation focuses on developing deep learning and transfer learning methodologies for crop phenotyping using multi-modality remote sensing and environmental data, addressing two main areas: multi-temporal, across-field biomass prediction and multi-scale remote sensing data fusion.

Biomass is a plant characteristic that strongly correlates with biofuel production but is also influenced by genetic and environmental factors. Previous studies have shown that deep learning-based models are effective in predicting end-of-season biomass for a single year and field. This dissertation develops transfer learning methodologies for multi-year, across-field biomass prediction. Feature importance analysis was performed to identify and remove redundant features. The proposed model can incorporate high-dimensional genetic marker data along with other features representing phenotypic information, environmental conditions, or management practices, and it can predict end-of-season biomass from mid-season remote sensing and environmental data to provide early rankings. The framework was evaluated using experimental trials conducted from 2017 to 2021 at the Agronomy Center for Research and Education (ACRE) at Purdue University. The proposed transfer learning techniques effectively selected the most informative training samples in the target domain, resulting in significant improvements in end-of-season yield prediction and ranking. Furthermore, the importance of input remote sensing features was assessed at different growth stages.

Remote sensing technology enables multi-scale, multi-temporal data acquisition, but fully exploiting the acquired data requires fusion techniques that leverage the strengths of different sensors and platforms. In this dissertation, a generative adversarial network (GAN) based multiscale RGB-guided model and domain adaptation framework were developed to enhance the spatial resolution of multispectral images. The model was trained on limited high spatial resolution images from a wheel-based platform and then applied to lower resolution images acquired by UAV and airborne platforms. The strategy was evaluated in two distinct scenarios, sorghum plant breeding and urban areas.
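As a hedged sketch of the pretrain-then-fine-tune pattern behind across-field transfer (not the dissertation's model, features, or selection criterion), the following PyTorch snippet pretrains a small regressor on abundant source field-years and fine-tunes it on a subset of target-domain samples. The feature count, network shape, and error-based selection rule are all illustrative assumptions standing in for the informative-sample selection described above.

```python
import torch
import torch.nn as nn

# Hypothetical setup: 30 tabular features per plot (remote sensing +
# environment + management), end-of-season biomass as the scalar target.
model = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Linear(64, 1))

def fit(model, x, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x).squeeze(-1), y)
        loss.backward()
        opt.step()

# 1) Pretrain on abundant source data (earlier years / other fields).
x_src, y_src = torch.randn(2000, 30), torch.randn(2000)
fit(model, x_src, y_src, epochs=200, lr=1e-3)

# 2) Pick the target samples where the source model errs most -- a crude
#    stand-in for the informative-sample selection used in the dissertation.
x_tgt, y_tgt = torch.randn(300, 30), torch.randn(300)
with torch.no_grad():
    err = (model(x_tgt).squeeze(-1) - y_tgt).abs()
idx = err.topk(50).indices

# 3) Fine-tune on the selected target samples at a lower learning rate.
fit(model, x_tgt[idx], y_tgt[idx], epochs=100, lr=1e-4)
```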
5

Development of a Laser-Guided Variable-Rate Sprayer with Improved Canopy Estimations for Greenhouse Spray Applications

Nair, Uchit January 2020 (has links)
No description available.
6

LiDAR Point Cloud De-noising for Adverse Weather

Bergius, Johan, Holmblad, Jesper January 2022 (has links)
Light Detection And Ranging (LiDAR) is a prominent research topic today, primarily because of its importance to autonomous vehicles. LiDAR sensors can capture and identify objects in the 3D environment; however, they perform poorly under adverse weather conditions. Noise in LiDAR scans can be divided into random and pseudo-random noise. Random noise can be modeled and mitigated by statistical means; the same approach works on pseudo-random noise but is less effective, and Deep Neural Nets (DNNs) are better suited for it. The main goal of this thesis is to investigate how snow can be detected in LiDAR point clouds and filtered out. The dataset used is the Winter Adverse Driving dataSet (WADS). The supervised filtering experiments compare statistical filtering against segmentation-based neural networks, evaluated on recall, precision, and F1 score, and are extended with an ensemble approach. The supervised results indicate that neural networks have an advantage over statistical filters; the best result was obtained from the 3D convolution network with an F1 score of 94.58%. Our ensemble approaches improved the F1 score but did not lead to more snow being removed, so we conclude that an ensemble is a sub-optimal way to increase prediction performance and has the drawback of added complexity. We also investigate an unsupervised approach, in which networks are evaluated on their ability to find noisy data and correct it: correcting the LiDAR data means predicting new values for detected noise instead of just removing it. The correctness of such predictions is evaluated manually, assisted by metrics such as PSNR and SSIM. None of the unsupervised networks produced an acceptable result. The reasons behind this negative result are investigated and presented in our conclusion, along with a model design that avoids the identified flaws.
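As a hedged sketch of the statistical-filtering baseline mentioned above, in the spirit of dynamic-radius outlier removal rather than the thesis's exact filter, the following Python snippet flags isolated points as snow. The neighbor count and radius growth rate are illustrative assumptions, not the thesis's settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def snow_filter(points, k=5, alpha=0.05, min_radius=0.1):
    """Simplified dynamic-radius outlier filter (hypothetical parameters).

    Snow returns tend to be sparse and isolated, so a point whose k-th
    nearest neighbor lies farther than a range-dependent radius is
    flagged as snow. Returns a boolean mask: True = keep (not snow).
    """
    tree = cKDTree(points)
    # Distance to the k-th nearest neighbor (k+1 includes the point itself).
    dists, _ = tree.query(points, k=k + 1)
    kth = dists[:, -1]
    # LiDAR scans grow sparser with range, so the allowed neighbor
    # radius grows proportionally with distance from the sensor.
    ranges = np.linalg.norm(points[:, :2], axis=1)
    radius = np.maximum(min_radius, alpha * ranges)
    return kth <= radius

# Usage on an (N, 3) scan:
#   keep = snow_filter(scan_xyz)
#   denoised = scan_xyz[keep]
```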
