1

COUNTING SORGHUM LEAVES FROM RGB IMAGES BY PANOPTIC SEGMENTATION

Ian Ostermann (15321589), 19 April 2023
Meeting the nutritional requirements of an increasing population in a changing climate is the foremost concern of agricultural research in recent years. One answer to some of the many questions posed by this existential threat is breeding crops that produce food more efficiently with respect to land and water use. A key aspect of this optimization is geometric features of plant physiology, such as canopy architecture, which, while rooted in the actual 3D structure of the organism, do not necessarily require such a representation to measure. Although deep learning is a powerful tool for phenotyping questions that do not require an explicit intermediate 3D representation, training a network traditionally requires a large number of hand-segmented ground-truth images. To bypass the enormous time and expense of hand-labeling datasets, we utilized a procedural sorghum image pipeline from another student in our group that produces images similar enough to the ground-truth images from the phenotyping facility that the network can be applied directly to real data while training only on automatically generated data. The synthetic data was used to train a deep segmentation network to identify which pixels correspond to which leaves. The segmentations were then processed to find the number of leaves identified in each image for the leaf-counting task in high-throughput phenotyping. Overall, our method performs comparably to human annotation accuracy, correctly predicting within a 90% confidence interval of the true leaf count in 97% of images while being faster and cheaper. This adds another expensive-to-collect phenotypic trait to the list of those that can be collected automatically.
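The abstract describes turning per-leaf segmentations into a count. The thesis's own post-processing code is not shown here; the sketch below is a hypothetical version of that step, assuming the network emits an instance map in which each positive integer ID marks one predicted leaf (the function name, array layout, and `min_pixels` noise threshold are all assumptions):

```python
import numpy as np

def count_leaves(instance_map: np.ndarray, min_pixels: int = 50) -> int:
    """Count leaf instances in a panoptic/instance segmentation output.

    `instance_map` is assumed to be an HxW array in which 0 marks
    background/stem and each positive integer labels one predicted leaf.
    Fragments smaller than `min_pixels` are discarded as noise.
    """
    ids, counts = np.unique(instance_map, return_counts=True)
    keep = (ids != 0) & (counts >= min_pixels)
    return int(keep.sum())

# Example: a toy 4x4 map with two leaves (IDs 1 and 2).
toy = np.array([[0, 1, 1, 0],
                [0, 1, 2, 2],
                [0, 0, 2, 2],
                [0, 0, 0, 0]])
print(count_leaves(toy, min_pixels=2))  # -> 2
```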
2

VISUAL SLAM IN DYNAMIC ENVIRONMENTS USING PANOPTIC SEGMENTATION

GABRIEL FISCHER ABATI, 10 August 2023
Mobile robots have become popular in recent years due to their ability to operate autonomously and accomplish tasks that would otherwise be too dangerous, repetitive, or tedious for humans. To achieve full autonomy in navigation, the robot must have a map of its surroundings and an estimate of its location within this map. The Simultaneous Localization and Mapping (SLAM) problem is concerned with determining both the map and the localization concurrently from sensor measurements. Visual SLAM estimates the location and map of a mobile robot using only visual information captured by cameras. Using cameras for sensing provides a significant advantage, as they enable computer vision tasks that offer high-level information about the scene, including object detection, segmentation, and recognition. There are several visual SLAM systems in the literature with high accuracy and performance, but the majority of them are not robust in dynamic scenarios. The ones that deal with dynamic content usually rely on deep learning methods to detect and filter dynamic objects; however, these methods cannot handle unknown objects. This work presents a new visual SLAM system that is robust to dynamic environments, even in the presence of unknown moving objects. It uses panoptic segmentation to filter dynamic objects from the scene during the state estimation process. The proposed methodology is based on ORB-SLAM3, a state-of-the-art SLAM system for static environments. The implementation was tested on real-world datasets and compared with several systems from the literature, including DynaSLAM, DS-SLAM, and SaD-SLAM. The proposed system also surpasses ORB-SLAM3 on a custom dataset composed of dynamic environments with unknown moving objects.
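As a rough illustration of the masking idea described above (not the thesis's actual implementation), the sketch below discards feature keypoints that land on panoptic classes deemed dynamic before they reach pose estimation; the class list, names, and data layout are hypothetical:

```python
import numpy as np

# Panoptic classes treated as potentially dynamic (hypothetical list).
DYNAMIC_CLASSES = {"person", "car", "bicycle", "dog"}

def filter_keypoints(keypoints, panoptic_labels, class_names):
    """Drop keypoints that fall on pixels labelled as dynamic objects.

    keypoints       : iterable of (row, col) pixel coordinates
    panoptic_labels : (H, W) array of semantic class IDs per pixel
    class_names     : dict mapping class ID -> class name
    Returns only keypoints on static scene content, which would then be
    passed to the usual tracking/pose-estimation stage.
    """
    keep = []
    for r, c in keypoints:
        cls = class_names.get(int(panoptic_labels[r, c]), "unknown")
        if cls not in DYNAMIC_CLASSES:
            keep.append((r, c))
    return np.array(keep)

# Example: one keypoint on a person (class 1), one on a wall (class 0).
labels = np.zeros((4, 4), dtype=int)
labels[2, 2] = 1
names = {0: "wall", 1: "person"}
print(filter_keypoints([(0, 0), (2, 2)], labels, names))  # keeps (0, 0) only
```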
3

Depth-Aware Deep Learning Networks for Object Detection and Image Segmentation

Dickens, James, 01 September 2021
The rise of convolutional neural networks (CNNs) in computer vision has occurred in tandem with advances in depth-sensing technology. Depth cameras yield two-dimensional arrays that store, at each pixel, the distance from the sensor to objects and surfaces in the scene; aligned with a regular color image, they produce so-called RGBD images. Inspired by prior models in the literature, this work develops a suite of RGBD CNN models to tackle the challenging tasks of object detection, instance segmentation, and semantic segmentation. Prominent architectures for object detection and image segmentation are modified to incorporate dual-backbone approaches that input RGB and depth images, combining features from both modalities through novel fusion modules. For each task, the models developed are competitive with state-of-the-art RGBD architectures. In particular, the proposed RGBD object detection approach achieves 53.5% mAP on the SUN RGBD 19-class object detection benchmark, while the proposed RGBD semantic segmentation architecture yields 69.4% accuracy on the SUN RGBD 37-class semantic segmentation benchmark. An original 13-class RGBD instance segmentation benchmark is introduced for the SUN RGBD dataset, for which the proposed model achieves 38.4% mAP. Additionally, an original depth-aware panoptic segmentation model is developed, trained, and tested on new benchmarks conceived for the NYUDv2 and SUN RGBD datasets. These benchmarks offer researchers a baseline for RGBD panoptic segmentation on these datasets, where the novel depth-aware model outperforms a comparable RGB counterpart.
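The abstract does not detail the fusion modules' internals, so the following is only a minimal sketch of one common dual-backbone fusion pattern, channel-wise concatenation followed by a 1x1 convolution, written in PyTorch with assumed channel counts:

```python
import torch
import torch.nn as nn

class ConcatFusion(nn.Module):
    """Fuse same-resolution RGB and depth feature maps.

    This is not the thesis's module; it shows the generic pattern of
    concatenating the two modalities' features and mixing them with a
    1x1 convolution before passing them to the detection/segmentation head.
    """
    def __init__(self, rgb_ch: int, depth_ch: int, out_ch: int):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(rgb_ch + depth_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_rgb: torch.Tensor, f_depth: torch.Tensor) -> torch.Tensor:
        return self.mix(torch.cat([f_rgb, f_depth], dim=1))

# Example: fuse 256-channel RGB and 256-channel depth feature maps.
fusion = ConcatFusion(256, 256, 256)
out = fusion(torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32))
print(out.shape)  # torch.Size([1, 256, 32, 32])
```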
4

Maximizing the performance of point cloud 4D panoptic segmentation using AutoML technique

Ma, Teng, January 2022
Environment perception is crucial to autonomous driving. Panoptic segmentation and object tracking are two challenging tasks, and their combination, 4D panoptic segmentation, has recently drawn researchers' attention. In this work, we implement 4D panoptic LiDAR segmentation (4D-PLS) on Volvo datasets and provide a pipeline for data preparation, model building, and model optimization. The main contributions of this work are: (1) building the Volvo datasets; (2) adopting a 4D-PLS model improved by hyperparameter optimization (HPO). We annotate point cloud data collected from Volvo CE and take a supervised learning approach, employing a deep neural network (DNN) to extract features from the point cloud data. On the basis of the 4D-PLS model, we employ Bayesian optimization to find the best hyperparameters for our data, improving model performance within a small training budget.
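The abstract names Bayesian optimization for HPO but not a specific library. Below is a minimal sketch using scikit-optimize's `gp_minimize`; the search space, parameter ranges, and toy objective are placeholders standing in for a short 4D-PLS training-and-validation run:

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real, Integer

# Hypothetical search space; the thesis's actual hyperparameters and
# ranges are not given in the abstract.
space = [
    Real(1e-4, 1e-1, prior="log-uniform", name="learning_rate"),
    Integer(2, 16, name="batch_size"),
]

def objective(params):
    """Score one hyperparameter setting; to be minimized.

    In the real pipeline this would launch a short 4D-PLS training run
    and return, e.g., negative LSTQ on a validation split. A toy
    analytic stand-in keeps the sketch self-contained.
    """
    lr, batch_size = params
    return (np.log10(lr) + 2.5) ** 2 + 0.01 * (batch_size - 8) ** 2

# Gaussian-process-based Bayesian optimization within a small budget.
result = gp_minimize(objective, space, n_calls=15, random_state=0)
print("best hyperparameters:", result.x, "objective:", result.fun)
```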
5

Point Cloud Data Augmentation for 4D Panoptic Segmentation

Jin, Wangkang, January 2022
4D panoptic segmentation is an emerging topic in autonomous driving that jointly tackles 3D semantic segmentation, 3D instance segmentation, and 3D multi-object tracking on point cloud data. However, the difficulty of data collection limits the size of existing point cloud datasets, so data augmentation is employed to expand the amount of available data for better generalization and prediction ability. In this thesis, we built a new point cloud dataset, the VCE dataset, from scratch. We also adopted a neural network model for the 4D panoptic segmentation task and proposed a simple geometric augmentation method based on a translation operation. Compared to the baseline model, better results were obtained after augmentation, with an increase of 2.15% in LSTQ.
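A translation-based augmentation like the one described is simple to sketch; the function below is a hypothetical version, assuming xyz point arrays and a made-up shift magnitude (the thesis does not state the parameters it used):

```python
import numpy as np

def translate_cloud(points: np.ndarray,
                    max_shift: float = 2.0,
                    rng=None) -> np.ndarray:
    """Apply a random rigid translation to a point cloud.

    points : (N, 3) array of xyz coordinates. Per-point semantic and
    instance labels need no change, since a rigid shift of the whole
    scene preserves them. `max_shift` (metres) is an assumed default.
    """
    if rng is None:
        rng = np.random.default_rng()
    offset = rng.uniform(-max_shift, max_shift, size=3)
    return points + offset

# Example: augment a toy cloud of 5 points.
cloud = np.random.default_rng(0).normal(size=(5, 3))
augmented = translate_cloud(cloud, max_shift=1.0)
print(augmented.shape)  # (5, 3)
```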
