1. 3D Object Detection for Advanced Driver Assistance Systems. Demilew, Selameab, 29 June 2021.
Robust and timely perception of the environment is an essential requirement of all autonomous and semi-autonomous systems. This necessity has been the main factor behind the rapid growth and adoption of LiDAR sensors within the ADAS sensor suite. In this thesis, we develop a fast and accurate 3D object detector that converts raw point clouds collected by LiDARs into sparse occupancy cuboids and detects cars and other road users using deep convolutional neural networks. The proposed pipeline reduces the runtime of PointPillars by 43% and performs on par with other state-of-the-art models. The gain in speed comes not from compromising the network's complexity and learning capacity but from an efficient input encoding procedure. In addition to rigorous profiling on three different platforms, we conduct a comprehensive error analysis and identify the principal sources of error among the predicted attributes.
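The abstract does not spell out the encoding, but a sparse binary occupancy grid can be built with a few lines of NumPy; the sketch below is a minimal illustration under assumed detection ranges and cuboid sizes (the function name and all parameters are ours, not the thesis's).

```python
import numpy as np

def occupancy_cuboids(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0),
                      z_range=(-3.0, 1.0), voxel_size=(0.16, 0.16, 0.1)):
    """Convert an (N, 3+) LiDAR point cloud into a sparse binary occupancy grid.

    Returns the integer indices of occupied cuboids and the full grid shape,
    so the encoding can be fed to a convolutional backbone as a sparse tensor.
    """
    pts = points[:, :3]
    # Keep only points inside the detection range.
    mask = ((pts[:, 0] >= x_range[0]) & (pts[:, 0] < x_range[1]) &
            (pts[:, 1] >= y_range[0]) & (pts[:, 1] < y_range[1]) &
            (pts[:, 2] >= z_range[0]) & (pts[:, 2] < z_range[1]))
    pts = pts[mask]
    origin = np.array([x_range[0], y_range[0], z_range[0]])
    idx = np.floor((pts - origin) / np.array(voxel_size)).astype(np.int64)
    occupied = np.unique(idx, axis=0)          # each occupied cuboid appears once
    grid_shape = tuple(int(np.ceil((hi - lo) / s))
                       for (lo, hi), s in zip((x_range, y_range, z_range), voxel_size))
    return occupied, grid_shape
```

Because only occupancy is stored, the encoding is cheap to compute and independent of the number of points per cell, which is one plausible source of the reported runtime reduction.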
Even though point clouds adequately capture the 3D structure of the physical world, they lack the rich texture information present in color images. In light of this, we explore the possibility of fusing the two modalities with the intent of improving detection accuracy. We present a late fusion strategy that merges the classification head of our LiDAR-based object detector with semantic segmentation maps inferred from images. Extensive experiments on the KITTI 3D object detection benchmark demonstrate the validity of the proposed fusion scheme.
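As a rough illustration of such a late fusion scheme, the following sketch combines the LiDAR detector's classification score with the segmentation probability read at the projected box center; the weighting scheme, class index, and projection-matrix convention are assumptions rather than the thesis's actual formulation.

```python
import numpy as np

def late_fusion_scores(det_scores, box_centers, seg_map, P, car_class=1, alpha=0.7):
    """Fuse LiDAR detector confidence scores with an image semantic segmentation map.

    det_scores  : (N,) classification scores from the LiDAR detector.
    box_centers : (N, 3) predicted box centers in the camera frame.
    seg_map     : (H, W, C) per-pixel class probabilities from the image network.
    P           : (3, 4) camera projection matrix.
    """
    fused = det_scores.copy()
    h, w = seg_map.shape[:2]
    for i, center in enumerate(box_centers):
        uvw = P @ np.append(center, 1.0)           # project the box center into the image
        if uvw[2] <= 0:
            continue                               # behind the camera: keep the LiDAR score
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        if 0 <= v < h and 0 <= u < w:
            seg_prob = seg_map[v, u, car_class]
            fused[i] = alpha * det_scores[i] + (1.0 - alpha) * seg_prob
    return fused
```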
2. Indoor 3D Scene Understanding Using Depth Sensors. Lahoud, Jean, 09 1900.
One of the main goals in computer vision is to achieve a human-like understanding of images. Nevertheless, image understanding has been studied mainly in the 2D image frame, so more information is needed to relate it to the 3D world. With the emergence of 3D sensors (e.g. the Microsoft Kinect), which provide depth along with color information, the task of propagating 2D knowledge into 3D becomes more attainable and enables interaction between a machine (e.g. a robot) and its environment. This dissertation focuses on three aspects of indoor 3D scene understanding: (1) 2D-driven 3D object detection for single-frame scenes with inherent 2D information, (2) 3D object instance segmentation for 3D reconstructed scenes, and (3) using room and floor orientation for automatic labeling of indoor scenes that can be used for self-supervised object segmentation. These methods capture the physical extents of 3D objects, such as their sizes and actual locations within a scene.
3. Wavelet-enhanced 2D and 3D Lightweight Perception Systems for autonomous driving. Alaba, Simegnew Yihunie, 10 May 2024.
Autonomous driving requires lightweight and robust perception systems that can rapidly and accurately interpret the complex driving environment. This dissertation investigates the transformative capacity of the discrete wavelet transform (DWT), the inverse DWT, CNNs, and transformers as foundational elements for developing lightweight perception architectures for autonomous vehicles. The inherent properties of the DWT, including its invertibility, sparsity, time-frequency localization, and ability to capture multi-scale information, provide a useful inductive bias. Similarly, transformers capture long-range dependencies between features. By harnessing these attributes, novel wavelet-enhanced deep learning architectures are introduced. The first contribution is a lightweight backbone network that can be employed for real-time processing. This network balances processing speed and accuracy, outperforming established models like ResNet-50 and VGG16 in terms of accuracy while remaining computationally efficient. Moreover, a multiresolution attention mechanism is introduced for CNNs to enhance feature extraction. This mechanism directs the network's focus toward crucial features while suppressing less significant ones. Likewise, a transformer model is proposed that combines the properties of the DWT with vision transformers. The proposed wavelet-based transformer uses the convolution theorem in the frequency domain to mitigate the computational burden that multi-head self-attention places on vision transformers. Furthermore, a proposed wavelet-multiresolution-analysis-based 3D object detection model exploits the DWT's invertibility, ensuring comprehensive capture of environmental information. Lastly, a multimodal fusion model is presented to use information from multiple sensors. Every sensor has limitations, and no single sensor fits all applications, so multimodal fusion is proposed to exploit the strengths of different sensors. Using a transformer to capture long-range feature dependencies, this model effectively fuses the depth cues from LiDAR with the rich texture derived from cameras. The multimodal fusion model integrates backbone networks and transformers to achieve lightweight and competitive results for 3D object detection. Moreover, the proposed model uses various network optimization methods, including pruning, quantization, and quantization-aware training, to minimize the computational load while maintaining performance. The experimental results across various datasets for classification networks, attention mechanisms, 3D object detection, and multimodal fusion indicate a promising direction for developing a lightweight and robust perception system for robotics, particularly in autonomous driving.
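To illustrate how a DWT can serve as an invertible, lossless downsampling step inside a lightweight backbone, here is a minimal single-level Haar example in PyTorch; the block structure and channel mixing are our assumptions, not the dissertation's architecture.

```python
import torch
import torch.nn as nn

class HaarDownsample(nn.Module):
    """Single-level 2D Haar DWT used as an invertible downsampling step.

    The four sub-bands (LL, LH, HL, HH) are stacked along the channel axis,
    so spatial resolution halves while no information is discarded.
    """
    def forward(self, x):                       # x: (B, C, H, W), H and W even
        a = x[:, :, 0::2, 0::2]
        b = x[:, :, 0::2, 1::2]
        c = x[:, :, 1::2, 0::2]
        d = x[:, :, 1::2, 1::2]
        ll = (a + b + c + d) / 2
        lh = (a + b - c - d) / 2
        hl = (a - b + c - d) / 2
        hh = (a - b - c + d) / 2
        return torch.cat([ll, lh, hl, hh], dim=1)   # (B, 4C, H/2, W/2)

class WaveletBlock(nn.Module):
    """Illustrative backbone stage: Haar downsampling followed by a cheap 1x1 channel mix."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.dwt = HaarDownsample()
        self.mix = nn.Sequential(nn.Conv2d(4 * in_ch, out_ch, kernel_size=1),
                                 nn.BatchNorm2d(out_ch), nn.ReLU())

    def forward(self, x):
        return self.mix(self.dwt(x))
```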
4. Simulation Framework for Driving Data Collection and Object Detection Algorithms to Aid Autonomous Vehicle Emulation of Human Driving Styles. January 2020.
Autonomous Vehicles (AVs), or self-driving cars, are poised to have an enormous impact on the automotive industry and road transportation. While advances have been made towards the development of safe, competent autonomous vehicles, there has been inadequate attention to the control of autonomous vehicles in unanticipated situations, such as imminent crashes. Even if autonomous vehicles follow all safety measures, accidents are inevitable, and humans must trust autonomous vehicles to respond appropriately in such scenarios. It is not plausible to program autonomous vehicles with a set of rules to tackle every possible crash scenario. Instead, a possible approach is to align their decision-making capabilities with the moral priorities, values, and social motivations of trustworthy human drivers. Toward this end, this thesis contributes a simulation framework for collecting, analyzing, and replicating human driving behaviors in a variety of scenarios, including imminent crashes. Four driving scenarios in an urban traffic environment were designed in the CARLA driving simulator platform, in which simulated cars can either drive autonomously or be driven by a user via a steering wheel and pedals. These included three unavoidable crash scenarios, representing classic trolley-problem ethical dilemmas, and a scenario in which a car must be driven through a school zone, in order to examine driver prioritization of reaching a destination versus ensuring safety. Sample human driving data in CARLA was logged from the simulated car's sensors, including the LiDAR, IMU and camera. In order to reproduce human driving behaviors in a simulated vehicle, the AV must be able to identify objects in the environment and evaluate the volume of their bounding boxes for prediction and planning. An object detection method was used that processes LiDAR point cloud data using the PointNet neural network architecture, analyzes RGB images via transfer learning using the Xception convolutional neural network architecture, and fuses the outputs of these two networks. This method was trained and tested on both the KITTI Vision Benchmark Suite dataset and a virtual dataset generated exclusively from CARLA. When applied to the KITTI dataset, the object detection method achieved an average classification accuracy of 96.72% and an average Intersection over Union (IoU) of 0.72, where the IoU metric compares predicted bounding boxes to the ground-truth boxes.
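The IoU figure quoted above follows the standard definition; a minimal sketch of how it is computed for axis-aligned 2D boxes is shown below (the 3D variant adds an overlap term along the height axis).

```python
import numpy as np

def iou_2d(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction that covers most of its ground-truth box scores close to 1.
print(iou_2d((0, 0, 10, 10), (2, 2, 10, 10)))   # 0.64
```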
5. Automotive 3D Object Detection Without Target Domain Annotations. Gustafsson, Fredrik; Linder-Norén, Erik, January 2018.
In this thesis we study a perception problem in the context of autonomous driving. Specifically, we study the computer vision problem of 3D object detection, in which objects are detected from various sensor data and their positions in the 3D world are estimated. We also study the application of Generative Adversarial Networks in domain adaptation techniques, aiming to improve the 3D object detection model's ability to transfer between different domains. The state-of-the-art Frustum-PointNet architecture for LiDAR-based 3D object detection was implemented and found to closely match its reported performance when trained and evaluated on the KITTI dataset. The architecture was also found to transfer reasonably well from the synthetic SYN dataset to KITTI, and is thus believed to be usable in a semi-automatic 3D bounding box annotation process. The Frustum-PointNet architecture was also extended to explicitly utilize image features, which surprisingly degraded its detection performance. Furthermore, an image-only 3D object detection model was designed and implemented, which was found to compare quite favourably with the current state of the art in terms of detection performance. Additionally, the PixelDA approach was adopted and successfully applied to the MNIST to MNIST-M domain adaptation problem, which validated the idea that unsupervised domain adaptation using Generative Adversarial Networks can improve the performance of a task network for a dataset lacking ground truth annotations. However, the approach surprisingly did not significantly improve the performance of the image-based 3D object detection models when trained on the SYN dataset and evaluated on KITTI.
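Frustum-PointNet begins by lifting each 2D detection into a frustum of LiDAR points; a minimal sketch of that step is shown below, assuming points already expressed in the camera frame and a KITTI-style 3x4 projection matrix (the function name and conventions are ours).

```python
import numpy as np

def frustum_points(points, box2d, P):
    """Select the LiDAR points whose image projection falls inside a 2D detection box.

    points : (N, 3) LiDAR points already expressed in the camera frame.
    box2d  : (x1, y1, x2, y2) detection from an image detector.
    P      : (3, 4) camera projection matrix (e.g. KITTI P2).
    """
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])   # homogeneous coordinates
    proj = pts_h @ P.T
    u = proj[:, 0] / proj[:, 2]
    v = proj[:, 1] / proj[:, 2]
    x1, y1, x2, y2 = box2d
    in_front = proj[:, 2] > 0                                    # keep points in front of the camera
    in_box = (u >= x1) & (u <= x2) & (v >= y1) & (v <= y2)
    return points[in_front & in_box]
```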
6. Implementation of an Approach for 3D Vehicle Detection in Monocular Traffic Surveillance Videos. Mishra, Abhinav, 19 February 2021.
Recent advancements in the field of Computer Vision are a by-product of breakthroughs in the domain of Artificial Intelligence. Object detection in monocular images is now realized by an amalgamation of Computer Vision and Deep Learning. While most approaches detect objects as a mere two-dimensional (2D) bounding box, a few exploit a rather traditional representation of the 3D object. Such approaches detect an object either as a 3D bounding box or exploit its shape primitives using active shape models, which results in a wireframe-like detection. Such a wireframe detection is represented as a combination of detected keypoints (or landmarks) of the desired object. Apart from a faithful retrieval of the object's true shape, wireframe-based approaches are relatively robust in handling occlusions. The central task of this thesis was to find such an approach and to implement it in order to evaluate its performance. The objects of interest belong to the vehicle class (cars, minivans, trucks, etc.), and the evaluation data consists of monocular traffic surveillance videos collected by the supervising chair. A wireframe-type detection can aid several facets of traffic analysis through improved estimation (compared to a 2D bounding box) of the detected object's ground-plane position. The thesis encompasses the implementation of the chosen approach, called Occlusion-Net [40], including its design details and a qualitative evaluation on traffic surveillance videos. The implementation reproduces most of the published results across several occlusion categories, except for the truncated car category. Occlusion-Net's erratic detections are mostly caused by incorrect detection of the initial region of interest. It employs three instances of Graph Neural Networks for occlusion reasoning and localization. The thesis also provides a didactic introduction to the field of Machine and Deep Learning, including intuitions of the mathematical concepts required to understand the two disciplines and the implemented approach.

Contents
1 Introduction
2 Technical Background
2.1 AI, Machine Learning and Deep Learning
2.1.1 But what is AI?
2.1.2 Representational composition by Deep Learning
2.2 Essential Mathematics for ML
2.2.1 Linear Algebra
2.2.2 Probability and Statistics
2.2.3 Calculus
2.3 Mathematical Introduction to ML
2.3.1 Ingredients of a Machine Learning Problem
2.3.2 The Perceptron
2.3.3 Feature Transformation
2.3.4 Logistic Regression
2.3.5 Artificial Neural Networks: ANN
2.3.6 Convolutional Neural Network: CNN
2.3.7 Graph Neural Networks
2.4 Specific Topics in Computer Vision
2.5 Previous work
3 Design of Implemented Approach
3.1 Training Dataset
3.2 Keypoint Detection: MaskRCNN
3.3 Occluded Edge Prediction: 2D-KGNN Encoder
3.4 Occluded Keypoint Localization: 2D-KGNN Decoder
3.5 3D Shape Estimation: 3D-KGNN Encoder
4 Implementation
4.1 Open-Source Tools and Libraries
4.1.1 Code Packaging: NVIDIA-Docker
4.1.2 Data Processing Libraries
4.1.3 Libraries for Neural Networks
4.1.4 Computer Vision Library
4.2 Dataset Acquisition and Training
4.2.1 Acquiring Dataset
4.2.2 Training Occlusion-Net
4.3 Refactoring
4.3.1 Error in Docker File
4.3.2 Image Directories as Input
4.3.3 Frame Extraction in Parallel
4.3.4 Video as Input
4.4 Functional Changes
4.4.1 Keypoints In Output
4.4.2 Mismatched BB and Keypoints
4.4.3 Incorrect Class Labels
4.4.4 Bounding Box Overlay
5 Evaluation
5.1 Qualitative Evaluation
5.1.1 Evaluation Across Occlusion Categories
5.1.2 Performance on Moderate and Heavy Vehicles
5.2 Verification of Failure Analysis
5.2.1 Truncated Cars
5.2.2 Overlapping Cars
5.3 Analysis of Missing Frames
5.4 Test Performance
6 Conclusion
7 Future Work
Bibliography
7. Multi-site Organ Detection in CT Images using Deep Learning. Jacobzon, Gustaf, January 2020.
When optimizing a controlled dose in radiotherapy, high-resolution spatial information about healthy organs in close proximity to the malignant cells is necessary in order to mitigate dispersion into these organs-at-risk. This information can be provided by deep volumetric segmentation networks, such as 3D U-Net. However, due to memory limitations of modern graphical processing units, it is not feasible to train a volumetric segmentation network on full image volumes, and subsampling the volume gives a too coarse segmentation. An alternative is to sample a region of interest from the image volume and train an organ-specific network. This approach requires knowledge of which region in the image volume should be sampled, which can be provided by a 3D object detection network. Typically the detection network will also be region-specific, albeit for a larger region such as the thorax, and requires human assistance in choosing the appropriate network for a certain region of the body. Instead, we propose a multi-site object detection network based on YOLOv3, trained on 43 different organs, which may operate on arbitrarily chosen axial patches of the body. Our model identifies the organs present (whole or truncated) in the image volume and can automatically sample a region from the input and feed it to the appropriate volumetric segmentation network. We train our model on four small (as few as 20 images) site-specific datasets in a weakly supervised manner in order to handle the partially unlabeled nature of site-specific datasets. Our model is able to generate organ-specific regions of interest that enclose 92% of the organs present in the test set.
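A minimal sketch of the hand-off from detection to segmentation is shown below: a detected 3D box, with a margin for context, is used to crop an organ-specific sub-volume that a segmentation network could then process. The function name, box layout, and margin are assumptions, not the thesis's implementation.

```python
import numpy as np

def crop_roi(volume, box, margin=8):
    """Crop an organ-specific region of interest from a CT volume.

    volume : (D, H, W) CT image.
    box    : (z1, y1, x1, z2, y2, x2) detected 3D bounding box in voxel indices.
    margin : extra voxels kept around the box so the segmentation network sees context.
    """
    z1, y1, x1, z2, y2, x2 = box
    lo = np.maximum([z1 - margin, y1 - margin, x1 - margin], 0).astype(int)
    hi = np.minimum([z2 + margin, y2 + margin, x2 + margin], volume.shape).astype(int)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```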
8. Data Augmentation for Safe 3D Object Detection for Autonomous Volvo Construction Vehicles. Zhao, Xun, January 2021.
Point cloud data can express the 3D features of objects and is an important data type in the field of 3D object detection. Since point cloud data is more difficult to collect than image data and the scale of existing datasets is smaller, point cloud data augmentation is introduced to allow more features to be discovered in existing data. In this thesis, we propose a novel method to enhance point cloud scenes, based on a generative adversarial network (GAN) that augments individual objects, which are then integrated into existing scenes. Good fidelity and coverage are achieved between the generated and real samples, with a JSD of 0.027, an MMD of 0.00064, and a coverage of 0.376. In addition, we investigated functional data annotation tools and completed the data labeling task. The 3D object detection task is carried out on the point cloud data, and we achieved relatively good detection results with a short processing time of around 22 ms. Quantitative and qualitative analysis is carried out on different models.
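As a hedged illustration of the scene-composition step, the sketch below places a generated object point cloud into an existing LiDAR scene at a chosen pose; a full pipeline would additionally check for collisions with existing objects, align the object to the ground, and update the labels.

```python
import numpy as np

def insert_object(scene_points, object_points, position, yaw):
    """Place a generated object point cloud into an existing LiDAR scene.

    object_points : (M, 3) points of a single object, centered at the origin.
    position      : (x, y, z) target location in the scene.
    yaw           : rotation around the vertical axis, in radians.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    placed = object_points @ rot.T + np.asarray(position)
    # A real augmentation step would also remove scene points occluded by the
    # inserted object and append a matching ground-truth box.
    return np.vstack([scene_points, placed])
```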
9. 3D Object Detection based on Unsupervised Depth Estimation. Manoharan, Shanmugapriyan, 25 January 2022.
Estimating depth and detecting object instances in 3D space are fundamental to autonomous navigation, localization and mapping, robotic object manipulation, and augmented reality. RGB-D images and LiDAR point clouds are the most illustrative formats of depth information. However, depth sensors have many shortcomings, such as low effective spatial resolution and capturing a scene from only a single perspective. This thesis focuses on reconstructing a denser and more comprehensive 3D scene structure from monocular RGB images using depth estimation and 3D object detection.

The first contribution of this thesis is a pipeline for depth estimation based on an unsupervised learning framework. Two architectures are proposed, building on structure-from-motion and 3D geometric constraint methods. The proposed architectures are trained and evaluated using only RGB images, with no ground-truth depth data, and achieve better results than state-of-the-art methods.
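The abstract does not state the training objective, but unsupervised monocular depth networks of this kind are commonly trained with a view-synthesis photometric loss; the PyTorch sketch below assumes a SfMLearner-style setup with a predicted depth map and a relative camera pose, and omits masking, SSIM, and multi-scale terms.

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, K, T):
    """Self-supervised photometric loss for training a monocular depth network.

    target, source : (B, 3, H, W) adjacent video frames.
    depth          : (B, 1, H, W) depth predicted for the target frame.
    K              : (B, 3, 3) camera intrinsics.
    T              : (B, 4, 4) relative pose from target to source (from a pose network).
    """
    B, _, H, W = target.shape
    device = target.device
    # Pixel grid in homogeneous coordinates.
    ys, xs = torch.meshgrid(torch.arange(H, device=device, dtype=torch.float32),
                            torch.arange(W, device=device, dtype=torch.float32),
                            indexing="ij")
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).view(1, 3, -1).expand(B, -1, -1)   # (B, 3, H*W)
    # Back-project to 3D, move into the source camera, and re-project.
    cam = torch.linalg.inv(K) @ pix * depth.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    cam_src = (T @ cam_h)[:, :3]
    proj = K @ cam_src
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    # Normalize to [-1, 1] for grid_sample and reconstruct the target view.
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    warped = F.grid_sample(source, grid, align_corners=True)
    return (warped - target).abs().mean()
```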
The second contribution of this thesis is the application of the estimated depth map, which includes two algorithms: point cloud generation and collision avoidance. The predicted depth map and the RGB image are used to generate point cloud data using the proposed point cloud algorithm. The collision avoidance algorithm predicts the possibility of a collision and provides a warning message by decoding the colors in the estimated depth map. This design is adaptable to different color maps with slight changes and perceives collision information across a sequence of frames.
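Point cloud generation from a predicted depth map is usually a standard pinhole back-projection; the sketch below shows that step, with the intrinsic parameters and variable names assumed rather than taken from the thesis.

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, K):
    """Back-project a predicted depth map into a colored 3D point cloud.

    depth : (H, W) depth in meters.
    rgb   : (H, W, 3) color image aligned with the depth map.
    K     : (3, 3) camera intrinsic matrix.
    """
    H, W = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0                      # drop pixels with no valid depth
    return points[valid], colors[valid]
```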
Our third contribution is a two-stage pipeline to detect 3D objects from a monocular image. The first stage detects 2D objects and crops the corresponding image patches, which are provided as input to the second stage. In the second stage, a 3D regression network is trained to estimate the 3D bounding boxes of the target objects; two architectures are proposed for this regression network. The approach achieves better average precision than the state of the art for fully visible objects or truncation up to 15%, and lower but comparable results for truncation of more than 30% or partly/fully occluded objects.
10. CenterPoint-based 3D Object Detection in ONCE Dataset. Du, Yuwei, January 2022.
High-efficiency point cloud 3D object detection is important for autonomous driving. 3D object detection based on point cloud data is naturally more complex and difficult than the 2D task based on images. Researchers have recently kept working on improving 3D object detection performance in autonomous driving scenarios. In this report, we present our optimized point cloud 3D object detection model based on the CenterPoint method. CenterPoint detects the centers of objects using a keypoint detector on top of a voxel-based backbone and then regresses the other box attributes. Building on this, our modified model features an improved Region Proposal Network (RPN) with an extended receptive field, an added sub-head that produces an IoU-aware confidence score, and box-ensemble inference strategies that yield more accurate predictions. These model enhancements, together with class-balanced data pre-processing, lead to a competitive accuracy of 72.02 mAP on the ONCE validation split and 79.09 mAP on the ONCE test split. Our model took fifth place in the ICCV 2021 Workshop SSLAD Track 3D Object Detection Challenge.
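The exact way the IoU-aware score is combined with the classification score is not given in the abstract; a common formulation from IoU-aware detectors is sketched below, where alpha is an assumed hyperparameter balancing the two terms.

```python
import numpy as np

def rescore(cls_score, iou_score, alpha=0.5):
    """Combine the classification confidence with the predicted IoU from the
    IoU-aware sub-head, so that poorly localized boxes are down-weighted.
    """
    return cls_score ** (1.0 - alpha) * iou_score ** alpha

# A detection with high class confidence but a low predicted IoU is demoted.
print(rescore(np.array([0.9, 0.9]), np.array([0.9, 0.3])))   # [0.9, 0.52]
```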