
Off-road Driving with Deteriorated Road Conditions for Autonomous Driving Systems

Ekström, Eric January 2022 (has links)
Recent studies on the robustness of machine learning systems show that today's autonomous vehicles struggle with very basic visual disturbances such as rain or snow. There is also a lack of training data that includes off-road scenes or scenes with different forms of deformation to the road surface. The purpose of this thesis is to address the lack of off-road scenes in current datasets for training autonomous vehicles, and the issue of visual disturbances, by building a simulated 3D environment for generating training scenarios and training data for specific environments. The synthesised scenes are implemented using modern OpenGL, and we propose parameterized methods to synthesise rutting and the formation of potholes on road surfaces, as well as rain and fog.

The generated datasets are tested through semantic segmentation using state-of-the-art pretrained neural networks. The results show that the neural networks accurately identify the road surface in clear weather as long as the road surface is mostly coherent, while the synthesised rain and fog decrease the performance of the networks significantly.

Generating training data with the method presented in this thesis, and incorporating it into the data used to train neural networks for autonomous driving systems, could improve performance in certain scenarios: specifically, in driving scenes with heavy road deformations and in scenes with low visibility. Further research is needed to conclude that the data is useful, but the results generated in this thesis are promising.
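The abstract does not reproduce the thesis's shader code, but exponential depth attenuation is one common parameterized form of the fog it describes, and can be sketched in a few lines. This is a minimal illustration with a hypothetical `apply_fog` helper, not the thesis's actual OpenGL implementation; the `density` parameter stands in for whatever fog parameter the thesis exposes:

```python
import math

def apply_fog(pixel_rgb, fog_rgb, depth, density):
    """Blend a scene pixel toward the fog colour with exponential
    attenuation, as commonly done per-fragment in shaders."""
    f = math.exp(-density * depth)          # visibility factor in (0, 1]
    f = max(0.0, min(1.0, f))               # clamp for safety
    return tuple(f * c + (1.0 - f) * fc
                 for c, fc in zip(pixel_rgb, fog_rgb))
```

At zero depth the pixel is untouched; as depth grows, every channel converges to the fog colour, which is what degrades segmentation contrast in the generated scenes.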

Generic instance segmentation for object-oriented bin-picking

Grard, Matthieu 20 May 2019 (has links)
Referred to as robotic random bin-picking, a fast-expanding industrial task consists in robotizing the unloading of many object instances piled up in bulk, one at a time, for further processing such as kitting or part assembly. However, explicit object models are not always available in many bin-picking applications, especially in the food and automotive industries.
Furthermore, object instances are often subject to intra-class variations, for example due to elastic deformations. Object pose estimation techniques, which require an explicit model and assume rigid transformations, are therefore not suitable in such contexts. The alternative approach, which consists in detecting grasps without an explicit notion of object, proves hardly efficient when the object geometry makes bulk instances prone to occlusion and entanglement. These approaches also typically rely on a multi-view scene reconstruction that may be unfeasible due to transparent and shiny textures, or that critically reduces the time frame for image processing in high-throughput robotic applications.

In collaboration with Siléane, a French company in industrial robotics, we thus aim at developing a learning-based solution for localizing the most affordable instance of a pile from a single image, in open loop, without explicit object models. In the context of industrial bin-picking, our contribution is two-fold.

First, we propose a novel fully convolutional network (FCN) for jointly delineating instances and inferring the spatial layout at their boundaries. Indeed, the state-of-the-art methods for such a task rely on two independent streams for boundaries and occlusions respectively, whereas occlusions often cause boundaries. Specifically, the mainstream approach, which consists in isolating instances in boxes before detecting boundaries and occlusions, fails in bin-picking scenarios as a rectangular region often includes several instances. By contrast, our box proposal-free architecture recovers fine instance boundaries, augmented with their occluding side, from a unified scene representation. As a result, the proposed network outperforms the two-stream baselines on synthetic data and public real-world datasets.

Second, as FCNs require large training datasets that are not available in bin-picking applications, we propose a simulation-based pipeline for generating training images using physics and rendering engines. Specifically, piles of instances are simulated and rendered with their ground-truth annotations from sets of texture images and meshes to which multiple random deformations are applied. We show that the proposed synthetic data is plausible for real-world applications in the sense that it enables the learning of deep representations transferable to real data. Through extensive experiments on a real-world robotic setup, our synthetically trained network outperforms the industrial baseline while achieving real-time performance. The proposed approach thus establishes a new baseline for model-free object-oriented bin-picking.
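The training target this abstract describes — instance boundaries augmented with their occluding side — can be illustrated with a toy ground-truth generator. This is a pure-Python sketch under assumed names (`boundary_labels`, an instance-id grid, and an `order` map where a smaller rank means nearer the camera); the thesis's actual FCN and annotation pipeline are not reproduced here:

```python
def boundary_labels(inst, order):
    """For a 2-D instance-id map, mark pixels that touch a different
    instance (4-connectivity) and record whether this pixel's instance
    occludes that neighbour, per `order` (smaller rank = nearer camera).
    Returns (boundary, occluding) boolean grids of the same shape."""
    h, w = len(inst), len(inst[0])
    boundary = [[False] * w for _ in range(h)]
    occluding = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and inst[ny][nx] != inst[y][x]:
                    boundary[y][x] = True          # inter-instance boundary
                    if order[inst[y][x]] < order[inst[ny][nx]]:
                        occluding[y][x] = True     # this side is on top
    return boundary, occluding
```

The point of the toy example is that the two labels are coupled: the occluding-side bit only exists where a boundary does, which is why the thesis argues against predicting boundaries and occlusions in independent streams.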

Infrared Sensor Modeling for Object Detection in Autonomous Vehicles Using Post-Process Material in Unreal Engine

Vemulapalli, Sri Sai Teja 05 December 2024 (has links)
Recent advancements in autonomous vehicle research and development have significantly enhanced their capabilities, largely due to innovations in sensor technology. Infrared (IR) sensing, in particular, has rapidly advanced to the point where sensors have been miniaturized and made commercially viable for integration into autonomous vehicles. While hardware advancements are notable, large-scale deployment requires rigorous testing to ensure reliability and safety.

Autonomous vehicle technology heavily relies on machine learning (ML) and object detection algorithms, which necessitate precisely annotated image data for effective training. Although IR sensors are commercially available, their acquisition at scale remains economically challenging, making it difficult to generate the necessary volume of data. Furthermore, annotating this data is resource-intensive, requiring significant human effort.

This research addresses these challenges by proposing a cost-effective and scalable solution: developing a virtual IR model within a virtual simulation environment using the hyper-realistic graphics engine Unreal Engine. While some proprietary solutions offer virtual IR sensing simulations, there is a significant gap in open-source options that are economically accessible to most researchers.

The proposed IR camera model is designed using Unreal Engine's blueprint scripting and open-source object models, creating a virtual simulation environment capable of generating auto-annotated IR images at a rate of 60 frames per second. These images are then used to train a YOLO object detection model, which is subsequently applied to open-source real infrared images, simulating the actual use of IR camera-based object detection in autonomous vehicles. The resulting model demonstrates promising potential in providing a user-friendly, open-source virtual IR camera that can generate annotated images suitable for training object detection models.
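Auto-annotation for YOLO training, as described above, ultimately amounts to emitting one normalised label line per object. A minimal sketch of that conversion, with a hypothetical `to_yolo_line` helper and pixel-space boxes assumed to come from the simulator (the thesis's Unreal Engine blueprint pipeline itself is not shown):

```python
def to_yolo_line(class_id, box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) into a
    YOLO label line: 'class x_center y_center width height', with all
    four geometry values normalised to [0, 1] by the image size."""
    x0, y0, x1, y1 = box
    xc = (x0 + x1) / 2.0 / img_w    # normalised box centre
    yc = (y0 + y1) / 2.0 / img_h
    w = (x1 - x0) / img_w           # normalised box extent
    h = (y1 - y0) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```

Because the simulator already knows every object's screen-space bounds, writing these lines alongside each rendered frame is what makes the 60 fps output "auto-annotated" with no human labelling effort.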
