  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A system that learns to recognize 3-D objects

Gabrielides, Gabriel. January 1988.
A system that learns to recognize 3-D objects from single and multiple views is presented. It consists of three parts: a simulator of 3-D figures, a learner, and a recognizer. The 3-D figure simulator generates and plots line drawings of certain 3-D objects. A series of transformations produces a number of 2-D images of a 3-D object, which are treated as different views and form the basic input to the other two parts. The learner works in three stages using the method of learning from examples. In the first stage, an elementary-concept learner learns the basic entities that make up a line drawing. In the second stage, a multiple-view learner learns the definitions of 3-D objects that are to be recognized from multiple views. In the third stage, a single-view learner learns how to recognize the same objects from single views. The recognizer is presented with line drawings representing 3-D scenes. A single-view recognizer segments the input into faces of possible 3-D objects and attempts to match the segmented scene against a set of single-view definitions of 3-D objects. The result of the recognition may include several alternative answers, corresponding to different 3-D objects. A unique answer can be obtained by making assumptions about hidden elements (e.g., faces) of an object and using a multiple-view recognizer. Both single-view and multiple-view recognition are based on the structural relations of the elements that make up a 3-D object. Some analytical elements (e.g., angles) of the objects are also calculated in order to determine point containment and convexity. The system performs well on polyhedra with triangular and quadrilateral faces. A discussion of the system's performance and suggestions for further development are given at the end. The simulator and the part of the recognizer that performs the analytical calculations are written in C; the learner and the rest of the recognizer are written in Prolog.
2

Integration of Camera and LiDAR Units onboard Mobile Mapping Systems for Deriving Accurate, Comprehensive Products

Tian Zhou. 08 August 2024.
Modern mobile mapping systems (MMSs), such as Uncrewed Aerial Vehicles (UAVs), backpack systems, Unmanned Ground Vehicles (UGVs), and wheel-based systems, equipped with imaging/ranging modalities and navigation units (i.e., an integrated Global Navigation Satellite System/Inertial Navigation System, GNSS/INS), have emerged as promising platforms due to their ability to conduct mapping at fine spatial/temporal resolution and reasonable cost. The integration of camera and LiDAR data acquired by these MMSs can yield an accurate and comprehensive description of the object space, owing to the complementary characteristics of the two modalities. Meaningful integration of multi-temporal data/products from different modalities onboard single or multiple systems is contingent on their positional quality. The objective of this dissertation is to develop strategies that enable the derivation of accurately georeferenced data from LiDAR and camera units onboard UAVs and backpack systems across diverse mapping environments. To do so, accurate system calibration parameters, including each sensor's interior orientation parameters (IOP) and the mounting parameters relating the sensors to the INS's Inertial Measurement Unit (IMU) body frame, as well as trajectory information, must be derived.

In this dissertation, to resolve issues arising from the unstable IOP of a consumer-grade camera onboard a GNSS/INS-assisted UAV, a LiDAR-aided camera IOP refinement strategy is first proposed. Additionally, for the more general case where system calibration is required for both camera and LiDAR units onboard single or multiple GNSS/INS-assisted UAVs, an automated, tightly coupled camera/LiDAR integration workflow based on simultaneous system calibration and trajectory refinement is developed.
While UAVs typically operate under open-sky conditions, in-canopy mapping with backpack systems for forest inventory applications is significantly affected by GNSS signal outages induced by canopy cover. To derive accurate trajectory information in such scenarios, a system-driven strategy for trajectory enhancement and mounting-parameter refinement of UAV and backpack LiDAR systems in forest applications is developed. Furthermore, because this approach requires an initial trajectory with limited drift error, the Simultaneous Localization and Mapping (SLAM) technique is adopted to derive the trajectory information directly. Specifically, a comprehensive forest-feature-based (i.e., tree trunks and ground) LiDAR SLAM framework using 3D LiDAR mounted on backpack systems is developed. The proposed strategies are tested on multiple datasets from UAV and backpack mobile mapping systems. Experimental results verify that the proposed approaches successfully derive accurate system calibration parameters and trajectory information, and consequently produce well-aligned multi-system, multi-temporal, multi-sensor data with high relative/absolute accuracy.
