481

Imaging and Object Detection under Extreme Lighting Conditions and Real World Adversarial Attacks

Xiangyu Qu (16385259) 22 June 2023 (has links)
Imaging and computer vision systems deployed in real-world environments face the challenge of accommodating a wide range of lighting conditions. However, the cost, the demand for high resolution, and the miniaturization of imaging devices impose physical constraints on sensor design, limiting both the dynamic range and effective aperture size of each pixel. Consequently, conventional CMOS sensors fail to deliver satisfactory capture in high dynamic range scenes or under photon-limited conditions, thereby impacting the performance of downstream vision tasks. In this thesis, we address two key problems: 1) exploring the utilization of spatial multiplexing, specifically spatially varying exposure tiling, to extend sensor dynamic range and optimize scene capture, and 2) developing techniques to enhance the robustness of object detection systems under photon-limited conditions.

In addition to challenges imposed by natural environments, real-world vision systems are susceptible to adversarial attacks in the form of artificially added digital content. Therefore, this thesis presents a comprehensive pipeline for constructing a robust and scalable system to counter such attacks.
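A minimal sketch of the spatially varying exposure (SVE) idea described above, assuming a 2x2 exposure tiling and a naive saturation-aware fusion; the exposure ratios and the per-tile averaging scheme are illustrative assumptions, not the thesis' actual design:

```python
import numpy as np

def sve_capture(radiance, exposures=(1.0, 4.0, 16.0, 64.0), full_well=1.0):
    """Simulate one spatially-varying-exposure frame: each pixel in a 2x2
    tile gets its own exposure gain, then saturates at full-well capacity."""
    h, w = radiance.shape
    tile = np.array(exposures).reshape(2, 2)
    gain = np.tile(tile, (h // 2 + 1, w // 2 + 1))[:h, :w]
    return np.clip(radiance * gain, 0.0, full_well), gain

def fuse_hdr(frame, gain, full_well=1.0):
    """Naive fusion: divide out the gain and average the unsaturated
    measurements inside each 2x2 tile, yielding one radiance per tile."""
    h2, w2 = frame.shape[0] // 2, frame.shape[1] // 2
    f = frame[: h2 * 2, : w2 * 2].reshape(h2, 2, w2, 2)
    g = gain[: h2 * 2, : w2 * 2].reshape(h2, 2, w2, 2)
    valid = f < 0.98 * full_well          # saturated pixels carry no signal
    est = np.where(valid, f / g, 0.0)
    n = valid.sum(axis=(1, 3)).clip(min=1)
    return est.sum(axis=(1, 3)) / n

scene = np.random.lognormal(sigma=2.0, size=(64, 64)) * 0.01  # HDR-ish scene
frame, gain = sve_capture(scene)
hdr = fuse_hdr(frame, gain)               # half-resolution HDR estimate
```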
482

Anchor-free object detection in surveillance applications

Magnusson, Peter January 2023 (has links)
Computer vision object detection is the task of detecting and identifying objects present in an image or a video sequence. Models based on artificial convolutional neural networks are commonly used as detector models. Object detection precision and inference efficiency are crucial for surveillance-based applications. Reducing the complexity of the detector model and of the post-processing computations promotes increased inference efficiency. Modern object detectors for surveillance applications usually make use of a regression algorithm and bounding-box priors, referred to as anchor boxes, to compute bounding-box proposals, and the proposal selection algorithm contributes to the computational cost at inference. In this study, an anchor-free, low-complexity deep learning detector model was implemented in a surveillance applications setting and evaluated against a state-of-the-art anchor-based object detector as a reference baseline. A key-point-based detector model (CenterNet), which predicts object centers as Gaussian distributions, was selected for the evaluation against the baseline. The surveillance-adapted anchor-free detector exhibited a factor 2.4 lower complexity than the baseline detector. Further, a significant shift toward shorter post-processing times was demonstrated at inference for the anchor-free surveillance-adapted CenterNet detector, whose modal post-processing time was a factor 0.6 of the baseline detector's. Furthermore, the surveillance-adapted CenterNet model was shown to outperform the baseline in detection precision for several classes relevant to surveillance applications and for objects of smaller spatial scale.
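The key mechanism behind a key-point-based detector such as CenterNet can be sketched in a few lines: object centers become Gaussian peaks on a per-class heatmap, and detection reduces to finding local maxima, with no anchor boxes and no non-maximum suppression. The sizes and positions below are made-up examples:

```python
import numpy as np

def draw_center_heatmap(heatmap, cx, cy, sigma):
    """Splat a Gaussian peak at an object's center (cx, cy); overlapping
    objects keep the element-wise maximum, as in the CenterNet paper."""
    h, w = heatmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    np.maximum(heatmap, g, out=heatmap)
    return heatmap

heatmap = np.zeros((128, 128))
draw_center_heatmap(heatmap, cx=40, cy=60, sigma=3.0)  # one object
draw_center_heatmap(heatmap, cx=45, cy=62, sigma=2.0)  # a nearby, smaller one
# At inference, local maxima above a threshold become detections; box width,
# height, and offset are regressed at the peak locations.
```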
483

Utveckling av stöd för synskadade med hjälp av AI och datorseende : Designprinciper för icke-visuella gränssnitt / Developing aids for the visually impaired using AI and computer vision: Design principles for non-visual interfaces

Schill, William, Berngarn, Philip January 2022 (has links)
This study aims to examine and identify appropriate design principles for interactive systems with non-visual interfaces. By developing an aid for the visually impaired with the help of AI and computer vision, it is possible to identify and evaluate important design principles. Theories within interactive systems, design principles, AI, and computer vision have been collected in order to develop an artifact and to understand existing design principles. Design Science Research Methodology has been used to develop an aid that detects objects in real time. Through an iterative process, the method identified and evaluated requirements for the artifact, resulting in a design proposal. To identify the requirements, qualitative data was collected from five people with visual impairment by conducting semi-structured interviews. Finally, the connection between the requirements identified in the interviews and the existing design principles for interactive systems with graphical user interfaces is presented. A proposal for further research within the area is also discussed.
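To make the interaction model concrete, a sketch of the kind of non-visual feedback loop such an artifact implies is shown below; `detect_objects` and `speak` are hypothetical stand-ins for a real-time detector and a text-to-speech engine, not the artifact described in the study:

```python
import time

def detect_objects(frame):
    """Placeholder: a real system would run a vision model on the frame."""
    return [("chair", 0.91), ("door", 0.84)]

def speak(text):
    """Placeholder for a text-to-speech backend."""
    print(f"[TTS] {text}")

announced = set()
for _ in range(10):                      # stand-in for a camera loop
    frame = None                         # placeholder: grab a camera frame
    for label, conf in detect_objects(frame):
        if conf > 0.8 and label not in announced:
            speak(f"{label} ahead")      # non-visual output channel
            announced.add(label)
    time.sleep(1.0)                      # pace audio so it stays intelligible
```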
484

Optical Inspection for Soldering Fault Detection in a PCB Assembly using Convolutional Neural Networks

Bilal Akhtar, Muhammad January 2019 (has links)
Convolutional Neural Network (CNN) has been established as a powerful tool to automate various computer vision tasks without requiring any a priori knowledge. Printed Circuit Board (PCB) manufacturers want to improve their product quality by employing vision-based automatic optical inspection (AOI) systems in PCB assembly manufacturing. An AOI system employs classic computer vision and image processing techniques to detect various manufacturing faults in a PCB assembly. Recently, CNNs have been used successfully at various stages of automatic optical inspection. However, none has used a 2D image of a PCB assembly directly as input to a CNN. Currently, all available systems are specific to a PCB assembly and require many preprocessing steps or a complex illumination system to improve the accuracy. This master thesis attempts to design an effective soldering fault detection system using a CNN applied to images of a PCB assembly, with the Raspberry Pi PCB assembly as the case in point.

Soldering fault detection is treated as an object detection problem. YOLO (short for "You Only Look Once") is a state-of-the-art fast object detection CNN. Although it is designed for object detection on images from publicly available datasets, we use YOLO as a benchmark to define the performance metrics for the proposed CNN. Besides accuracy, the effectiveness of a trained CNN also depends on memory requirements and inference time. The accuracy of a CNN increases with each added convolutional layer, at the expense of increased memory requirements and inference time. The prediction layer of the proposed CNN is inspired by the YOLO algorithm, while the feature extraction layer is customized to our application and combines classical CNN components with a residual connection, an inception module, and a bottleneck layer.

Experimental results show that state-of-the-art object detection algorithms are not efficient when used on a new and different dataset. Our proposed CNN detection algorithm predicts more accurately than the YOLO algorithm, with an increase in average precision of 3.0%; is less complex, requiring 50% fewer parameters; and infers in half the time taken by YOLO. The experimental results also show that a CNN can be an effective means of performing AOI, provided there is a sufficiently large dataset available for training.
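A hedged sketch of a feature-extraction block combining the three ingredients named above (a bottleneck layer, inception-style parallel branches, and a residual connection); the channel sizes and branch choices are illustrative assumptions, not the thesis' configuration:

```python
import torch
import torch.nn as nn

class InceptionResidualBlock(nn.Module):
    def __init__(self, channels=64, bottleneck=32):
        super().__init__()
        # Bottleneck: squeeze channels before the expensive convolutions.
        self.squeeze = nn.Conv2d(channels, bottleneck, kernel_size=1)
        # Inception-style parallel branches with different receptive fields.
        self.branch3 = nn.Conv2d(bottleneck, bottleneck, 3, padding=1)
        self.branch5 = nn.Conv2d(bottleneck, bottleneck, 5, padding=2)
        self.expand = nn.Conv2d(2 * bottleneck, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.act(self.squeeze(x))
        y = torch.cat([self.branch3(y), self.branch5(y)], dim=1)
        return self.act(x + self.expand(y))  # residual connection

block = InceptionResidualBlock()
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```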
485

Multimodální zpracování dat a mapování v robotice založené na strojovém učení / Machine Learning-Based Multimodal Data Processing and Mapping in Robotics

Ligocki, Adam January 2021 (has links)
This dissertation deals with the application of neural networks for object detection on multimodal data in robotics. It targets three areas in total: dataset creation, multimodal data processing, and neural network training. The most important part of the work is the design of a method for creating large-scale annotated datasets without time-consuming human intervention. The method uses neural networks trained on RGB images; by combining data from several sensors into a model of the surroundings, it maps annotations from RGB images onto other data domains, such as thermal images or point clouds. Using this method, the author created a dataset of several hundred thousand annotated images and used it to train a neural network that subsequently outperformed models trained on smaller, human-annotated datasets. The author further investigates the robustness of object detection across several data domains under various weather conditions. The thesis also describes the complete multimodal data processing chain that the author created during his doctoral studies. This includes the development of a unique sensory rig equipped with a range of sensors commonly used in robotics. The author also describes the process of creating the large, publicly available Brno Urban Dataset. Finally, the author describes the software developed during his studies and how it is used for data processing within his work (Atlas Fusion and the Robotic Template Library).
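The annotation-transfer idea can be illustrated with a simplified sketch: 2D boxes predicted on RGB images are carried over to another domain, here a point cloud, through the camera projection. The intrinsics and the box are made-up example values, and the thesis' actual method builds a full model of the surroundings rather than this single-frame shortcut:

```python
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed pinhole camera intrinsics

def label_points(points_cam, boxes):
    """points_cam: (N, 3) points in the camera frame.
    boxes: list of (label, x_min, y_min, x_max, y_max) from an RGB detector.
    Returns one label (or None) per point."""
    uvw = points_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]          # perspective division
    labels = [None] * len(points_cam)
    for label, x0, y0, x1, y1 in boxes:
        inside = ((uv[:, 0] >= x0) & (uv[:, 0] <= x1) &
                  (uv[:, 1] >= y0) & (uv[:, 1] <= y1) &
                  (points_cam[:, 2] > 0))  # only points in front of the camera
        for i in np.where(inside)[0]:
            labels[i] = label
    return labels

pts = np.random.uniform([-5, -2, 1], [5, 2, 30], size=(1000, 3))
hits = label_points(pts, [("car", 250, 180, 420, 300)])
print(sum(l is not None for l in hits), "points inherited a label")
```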
486

Instance Segmentation on depth images using Swin Transformer for improved accuracy on indoor images / Instans-segmentering på bilder med djupinformation för förbättrad prestanda på inomhusbilder

Hagberg, Alfred, Musse, Mustaf Abdullahi January 2022 (has links)
The Simultaneous Localisation And Mapping (SLAM) problem is a fundamental open problem in autonomous mobile robotics. One of the most researched techniques for enhancing SLAM methods is instance segmentation. In this thesis, we implement an instance segmentation system using a Swin Transformer combined with two state-of-the-art instance segmentation methods, Cascade Mask R-CNN and Mask R-CNN. Instance segmentation is a technique that simultaneously solves the problems of object detection and semantic segmentation. We show that depth information enhances the average precision (AP) by approximately 7%. We also show that the Swin Transformer backbone model works well with depth images. Our results further show that Cascade Mask R-CNN outperforms Mask R-CNN. However, the results should be interpreted with caution given the small size of the NYU-Depth V2 dataset. Most instance segmentation research uses the COCO dataset, which has a hundred times more images than NYU-Depth V2 but lacks depth information.
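One common way to let a transformer backbone consume depth is sketched below under the assumption of simple early fusion: depth is appended as a fourth input channel and the patch-embedding stem is widened accordingly. This illustrates the general technique, not the exact fusion used in the thesis:

```python
import torch
import torch.nn as nn

class RGBDStem(nn.Module):
    """Patch-embedding-style stem that accepts RGB-D (4-channel) input."""
    def __init__(self, embed_dim=96, patch=4):
        super().__init__()
        self.proj = nn.Conv2d(4, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, rgb, depth):
        # depth: (B, 1, H, W), normalised to a range similar to RGB.
        x = torch.cat([rgb, depth], dim=1)  # (B, 4, H, W)
        return self.proj(x)                 # patch tokens for the backbone

stem = RGBDStem()
rgb = torch.randn(2, 3, 224, 224)
depth = torch.randn(2, 1, 224, 224)
print(stem(rgb, depth).shape)  # torch.Size([2, 96, 56, 56])
```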
487

Proposal networks in object detection / Förslagsnätverk för objektdetektering

Grossman, Mikael January 2019 (has links)
Locating and extracting useful data from images is a task that has been revolutionized in the last decade as computing power has risen to a level where deep neural networks can be used with success. A type of neural network that uses the convolution operation, called a convolutional neural network (CNN), is suited to image-related tasks. Using the convolution operation creates opportunities for the network to learn its own filters, which previously had to be hand-engineered. For locating objects in an image, the state-of-the-art Faster R-CNN model predicts objects in two parts. First, the region proposal network (RPN) extracts regions of the picture where an object is likely to be found. Second, a detector verifies the likelihood of an object being in that region. For this thesis, we review the current literature on artificial neural networks, object detection methods, and proposal methods, and present our new way of generating proposals. By replacing the RPN with our network, the multiscale proposal network (MPN), we increase the average precision (AP) by 12% and reduce the computation time per image by 10%.
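For context, a generic proposal head of the kind the RPN implements can be sketched as follows: per-location objectness scores plus box regression over a shared feature map. This shows the baseline idea the thesis improves on, not the proposed MPN itself; channel and anchor counts are assumptions:

```python
import torch
import torch.nn as nn

class ProposalHead(nn.Module):
    def __init__(self, in_channels=256, num_anchors=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, in_channels, 3, padding=1)
        self.objectness = nn.Conv2d(in_channels, num_anchors, 1)
        self.box_deltas = nn.Conv2d(in_channels, num_anchors * 4, 1)

    def forward(self, feature_map):
        x = torch.relu(self.conv(feature_map))
        # One objectness score and four box offsets per anchor per location;
        # top-scoring boxes become the region proposals fed to the detector.
        return self.objectness(x), self.box_deltas(x)

head = ProposalHead()
scores, deltas = head(torch.randn(1, 256, 38, 50))
print(scores.shape, deltas.shape)  # (1, 3, 38, 50) (1, 12, 38, 50)
```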
488

3D YOLO: End-to-End 3D Object Detection Using Point Clouds / 3D YOLO: Objektdetektering i 3D med LiDAR-data

Al Hakim, Ezeddin January 2018 (has links)
For safe and reliable driving, it is essential that an autonomous vehicle can accurately perceive the surrounding environment. Modern sensor technologies used for perception, such as LiDAR and RADAR, deliver a large set of 3D measurement points known as a point cloud. There is a huge need to interpret point cloud data to detect other road users, such as vehicles and pedestrians. Many research studies have proposed image-based models for 2D object detection. This thesis takes it a step further and aims to develop a LiDAR-based 3D object detection model that operates in real time, with emphasis on autonomous driving scenarios. We propose 3D YOLO, an extension of YOLO (You Only Look Once), one of the fastest state-of-the-art 2D object detectors for images. The proposed model takes point cloud data as input and outputs 3D bounding boxes with class scores in real time. Most existing 3D object detectors use hand-crafted features, while our model follows the end-to-end learning fashion, which removes manual feature engineering. The 3D YOLO pipeline consists of two networks: (a) the Feature Learning Network, an artificial neural network that transforms the input point cloud into a new feature space, and (b) 3DNet, a novel convolutional neural network architecture based on YOLO that learns the shape description of the objects. Our experiments on the KITTI dataset show that 3D YOLO has high accuracy and outperforms the state-of-the-art LiDAR-based models in efficiency. This makes it a suitable candidate for deployment in autonomous vehicles.
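The first stage of such a pipeline, grouping raw LiDAR points into voxels before any learned feature extraction, can be sketched as below; the grid extents, voxel size, and per-voxel point cap are illustrative assumptions borrowed from common practice rather than the thesis' settings:

```python
import numpy as np

def voxelize(points, voxel_size=(0.2, 0.2, 0.4),
             pc_range=(0, -40, -3, 70, 40, 1), max_pts=35):
    """points: (N, 3) x, y, z. Returns a dict voxel index -> list of points."""
    mins = np.array(pc_range[:3], dtype=float)
    maxs = np.array(pc_range[3:], dtype=float)
    size = np.array(voxel_size)
    keep = np.all((points >= mins) & (points < maxs), axis=1)
    idx = ((points[keep] - mins) / size).astype(int)
    voxels = {}
    for coord, p in zip(map(tuple, idx), points[keep]):
        bucket = voxels.setdefault(coord, [])
        if len(bucket) < max_pts:     # cap points per voxel, as in VoxelNet
            bucket.append(p)
    return voxels

cloud = np.random.uniform([0, -40, -3], [70, 40, 1], size=(10000, 3))
voxels = voxelize(cloud)
print(len(voxels), "occupied voxels")   # sparse input for the feature network
```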
489

OBJECT DETECTION USING VISION TRANSFORMED EFFICIENTDET

Shreyanil Kar (16285265) 30 August 2023 (has links)
This research presents a novel approach for object detection by integrating Vision Transformers (ViT) into the EfficientDet architecture. The field of computer vision, encompassing artificial intelligence, focuses on the interpretation and analysis of visual data. Recent advancements in deep learning, particularly convolutional neural networks (CNNs), have significantly improved the accuracy and efficiency of computer vision systems. Object detection, a widely studied application within computer vision, involves the identification and localization of objects in images.

The ViT backbone, renowned for its success in image classification and natural language processing tasks, employs self-attention mechanisms to capture global dependencies in input images. However, ViT's capability to capture fine-grained details and context information is limited. To address this limitation, the integration of ViT into the EfficientDet architecture is proposed. EfficientDet is recognized for its efficiency and accuracy in object detection. By combining the strengths of ViT and EfficientDet, the proposed integration enhances the network's ability to capture fine-grained details and context information. It leverages ViT's global dependency modeling alongside EfficientDet's efficient object detection framework, resulting in highly accurate and efficient performance. Noteworthy object detection frameworks utilized in the industry, such as RetinaNet, EfficientNet, and EfficientDet, primarily employ convolution.

Experimental evaluations were conducted using the PASCAL VOC 2007 and 2012 datasets, widely acknowledged benchmarks for object detection. The integrated ViT-EfficientDet model achieved an impressive mean Average Precision (mAP) score of 86.27% when tested on the PASCAL VOC 2007 dataset, demonstrating its superior accuracy. These results underscore the potential of the proposed integration for real-world applications.

In conclusion, the research introduces a novel integration of Vision Transformers into the EfficientDet architecture, yielding significant improvements in object detection performance. By combining ViT's ability to capture global dependencies with EfficientDet's efficiency and accuracy, the proposed approach offers enhanced object detection capabilities. Future research directions may explore additional datasets and evaluate the performance of the proposed framework across various computer vision tasks.
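The global-dependency modeling attributed to the ViT backbone comes from self-attention over patch tokens, sketched minimally below; the embedding dimension and patch count are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, dim=192):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, tokens):          # tokens: (B, N, dim) patch embeddings
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.out(attn @ v)       # every token mixes global context

attn = SelfAttention()
patches = torch.randn(1, 196, 192)      # e.g. 14x14 patches from one image
print(attn(patches).shape)              # torch.Size([1, 196, 192])
```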
490

CenterPoint-based 3D Object Detection in ONCE Dataset

Du, Yuwei January 2022 (has links)
High-efficiency point cloud 3D object detection is important for autonomous driving. 3D object detection based on point cloud data is inherently more complex and difficult than the 2D task based on images, and researchers continue to work on improving 3D object detection performance in autonomous driving scenarios. In this report, we present our optimized point cloud 3D object detection model based on the CenterPoint method. CenterPoint detects centers of objects using a keypoint detector on top of a voxel-based backbone, then regresses the remaining attributes. On this basis, our modified model features an improved Region Proposal Network (RPN) with an extended receptive field, an added sub-head that produces an IoU-aware confidence score, and box ensemble inference strategies that yield more accurate predictions. These model enhancements, together with class-balanced data pre-processing, lead to a competitive accuracy of 72.02 mAP on the ONCE Validation Split and 79.09 mAP on the ONCE Test Split. Our model took fifth place in the ICCV 2021 Workshop SSLAD Track 3D Object Detection Challenge.
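The effect of an IoU-aware confidence score can be sketched with a simple rescoring rule: the classification score is blended with the predicted localisation quality before boxes are ranked. The geometric blend and the exponent below are assumptions for illustration, not the thesis' exact formulation:

```python
import numpy as np

def rescore(cls_scores, pred_ious, alpha=0.5):
    """Geometric blend of classification confidence and predicted IoU;
    boxes that are confidently classified but poorly localised drop."""
    return cls_scores ** (1 - alpha) * pred_ious ** alpha

cls_scores = np.array([0.95, 0.90, 0.60])
pred_ious = np.array([0.40, 0.85, 0.90])  # from the added IoU-aware sub-head
print(rescore(cls_scores, pred_ious))      # the second box now ranks first
```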
