71.
Automatic classification of fish and bubbles at pixel-level precision in multi-frequency acoustic echograms using U-Net convolutional neural networks
Slonimer, Alex, 05 April 2022
Multi-frequency backscatter acoustic profilers (echosounders) are used to measure biological and physical phenomena in the ocean in ways that are not possible with optical methods. Echosounders are commonly used on ocean observatories and by commercial fisheries but require significant manual effort to classify species of interest within the collected echograms. The work presented in this thesis tackles the challenging task of automating the identification of fish and other phenomena in echosounder data, with specific application to aggregations of juvenile salmon, schools of herring, and bubbles of air that have been mixed into the water.
U-Net convolutional neural networks (CNNs) are used to accomplish this task by identifying classes at the pixel level. The data considered here were collected in Okisollo Channel on the coast of British Columbia, Canada, using an Acoustic Zooplankton and Fish Profiler at four frequencies (67.5, 125, 200, and 455 kHz). The entrainment of air bubbles and the behaviour of fish are both governed by the surrounding physical environment. To improve the classification, simulated channels for water depth and solar elevation angle (a proxy for sunlight) are used to encode the CNNs with information about the environment, providing spatial and temporal context. Manually annotating echograms at the pixel level is challenging, and a custom application was developed to aid in this process. A relatively small set of annotations was created and is used to train the CNNs. During training, the echogram data are divided into randomly spaced square tiles to encode the models with robust features, and into overlapping tiles for added redundancy during classification. This is done without removing noise from the data, thus ensuring broad applicability. The approach proves highly successful: the best-performing U-Net model produces F1 scores of 93.0%, 87.3% and 86.5% for the herring, salmon, and bubble classes, respectively. These models also achieve promising results when applied to echogram data with coarser resolution.
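As a sketch of the overlapping-tile classification step described above, a trained model could be swept over a multi-channel echogram as follows. The tile size, stride, class count, and averaging rule are assumptions for illustration, not the thesis' exact settings, and `model` stands in for the trained U-Net:

```python
import numpy as np

def classify_echogram(echogram, model, tile=256, stride=128, n_classes=4):
    """Average class probabilities from overlapping square tiles over a
    (C, H, W) echogram, where the C channels stack the four acoustic
    frequencies plus the simulated depth and solar-angle channels.
    Assumes H and W are at least `tile`."""
    C, H, W = echogram.shape
    probs = np.zeros((n_classes, H, W), dtype=np.float32)
    hits = np.zeros((H, W), dtype=np.float32)
    # include the last row/column position so image edges are covered
    ys = sorted(set(list(range(0, H - tile + 1, stride)) + [H - tile]))
    xs = sorted(set(list(range(0, W - tile + 1, stride)) + [W - tile]))
    for y in ys:
        for x in xs:
            patch = echogram[:, y:y + tile, x:x + tile]
            probs[:, y:y + tile, x:x + tile] += model(patch)  # (n_classes, tile, tile)
            hits[y:y + tile, x:x + tile] += 1.0
    return (probs / hits).argmax(axis=0)  # per-pixel class map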
One goal in fisheries acoustics is to detect distinct schools of fish. Following the initial pixel-level classification, the results from the best-performing U-Net model are fed through a heuristic module, inspired by traditional fisheries methods, that links connected components of identified fish (school candidates) into distinct school objects. The results are compared to the outputs of a recent study that relied on a Mask R-CNN architecture to apply instance segmentation for classifying fish schools. The U-Net/heuristic hybrid technique is demonstrated to improve on the Mask R-CNN approach by a small amount for the classification of herring schools, and by a large amount for aggregations of juvenile salmon (an improvement in mean average precision from 24.7% to 56.1%).
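A minimal version of such a heuristic linking step might look like the following. The gap-bridging distance and minimum school size are illustrative guesses, not the thesis' actual criteria:

```python
import numpy as np
from scipy import ndimage

def link_school_candidates(fish_mask, min_pixels=25, max_gap=3):
    """Group pixels classified as fish into distinct school objects.
    Dilation bridges small gaps so nearby candidates merge into one
    school; tiny components are discarded as noise."""
    bridged = ndimage.binary_dilation(fish_mask, iterations=max_gap)
    labels, n = ndimage.label(bridged)
    labels *= fish_mask.astype(labels.dtype)  # keep original fish pixels only
    schools = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_pixels:
            schools.append({"bbox": (ys.min(), xs.min(), ys.max(), xs.max()),
                            "n_pixels": int(ys.size)})
    return schools
```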
72.
Learning of semantic classes for aerial image analysis
Randrianarivo, Hicham, 15 December 2016
This work concerns the interpretation of the content of very high resolution panchromatic optical aerial images. Two methods are proposed for classifying the content of such images: one detects instances of the different object categories, and the other semantically segments superpixels extracted from the image using a contextual model of the relations between superpixels. The object detection method learns a mixture of appearance models for the object category of interest and then fuses the hypotheses returned by the individual models. We propose a two-stage procedure that clusters the training samples into visual subcategories based on the available metadata and the appearance of the samples. This clustering phase yields appearance models that are each specialised in recognising a subset of the dataset and whose fusion generalises detection to the whole object class. The performance of the resulting detector is evaluated on several very high resolution aerial image datasets, at different resolutions and from several places around the world. The contextual semantic segmentation method combines the visual description of a superpixel extracted from the image with contextual information gathered between a superpixel and its neighbours. The context between superpixels is represented as a graph whose nodes are the visual representations of the superpixels and whose edges are the contextual relations between neighbours.
Finally, we predict the category of a superpixel using the predictions made by its neighbours under the contextual model, making the predictions more reliable. The method was tested on a very high resolution aerial image dataset.
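A bare-bones version of this contextual graph can be sketched as below. The thesis learns a graphical model over the edges; this stand-in simply mixes each superpixel's class scores with the mean of its neighbours' scores, and the mixing weight and all names are assumptions:

```python
import numpy as np
from collections import defaultdict

def adjacency_edges(labels):
    """Undirected edges between 4-connected superpixels in a label map."""
    edges = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])):
        diff = a != b
        for u, v in zip(a[diff].ravel(), b[diff].ravel()):
            edges.add((min(u, v), max(u, v)))
    return edges

def refine_with_neighbours(scores, edges, alpha=0.5):
    """Blend each superpixel's class scores (rows of an (n_sp, n_classes)
    array) with its neighbours' mean scores, then take the argmax."""
    neigh = defaultdict(list)
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    out = scores.copy()
    for i, nbrs in neigh.items():
        out[i] = (1 - alpha) * scores[i] + alpha * scores[list(nbrs)].mean(axis=0)
    return out.argmax(axis=1)
```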
73.
Semantic Segmentation Using Deep Learning Neural Architectures
Sarpangala, Kishan, January 2019
No description available.
74.
Semantic Segmentation of RGB images for feature extraction in Real Time
Elavarthi, Pradyumna, January 2019
No description available.
75.
The World in 3D: Geospatial Segmentation and Reconstruction
Robín Karlsson, David, January 2022
Deep learning has proven a powerful tool for image analysis during the past two decades. With the rise of high-resolution overhead imagery, an opportunity for automatic geospatial 3D-recreation has presented itself. This master thesis researches the possibility of 3D-recreation through deep learning based image analysis of overhead imagery. The goal is a model capable of making predictions for three different tasks: heightmaps, boundary proximity heatmaps and semantic segmentations. A new neural network is designed with the novel feature of supplying the predictions from one task to another, with the goal of improving performance; a sketch of this chaining idea follows below. A number of strategies are employed to ensure the model generalizes to unseen data. The model is trained using satellite and aerial imagery from a variety of cities around the planet and is meticulously evaluated using four common performance metrics. For datasets with no ground-truth data, the results were assessed visually. This thesis concludes that it is possible to create a deep learning network capable of making predictions for the three tasks with varying success, performing best for heightmaps and worst for semantic segmentation. It was observed that supplying estimations from one task to another can both improve and decrease performance. Analysis of which image features matter for the three tasks was conclusive for some images and inconclusive for others. Lastly, validation proved that a number of random transformations during the training process helped the model generalize to unseen data.
The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
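The task-chaining idea can be sketched in PyTorch as follows. The layer shapes and wiring order (heightmap conditioning the boundary heatmap, both conditioning the segmentation) are illustrative assumptions, not the thesis' actual architecture:

```python
import torch
import torch.nn as nn

class ChainedHeads(nn.Module):
    """Shared features feed a heightmap head; the heightmap is appended to
    the features for the boundary head; both predictions are appended for
    the semantic head. All names and channel sizes are hypothetical."""
    def __init__(self, feat_ch=64, n_classes=6):
        super().__init__()
        self.height = nn.Conv2d(feat_ch, 1, 3, padding=1)
        self.boundary = nn.Conv2d(feat_ch + 1, 1, 3, padding=1)
        self.semantic = nn.Conv2d(feat_ch + 2, n_classes, 3, padding=1)

    def forward(self, feats):
        h = self.height(feats)                                   # heightmap
        b = torch.sigmoid(self.boundary(torch.cat([feats, h], dim=1)))
        s = self.semantic(torch.cat([feats, h, b], dim=1))       # class logits
        return h, b, s
```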
76.
Interpretability of a Deep Learning Model for Semantic Segmentation: Example of Remote Sensing Application
Janik, Adrianna, January 2019
Understanding a black-box model is a major problem in domains that rely on model predictions in critical tasks. If solved, it can help evaluate the trustworthiness of a model. This thesis proposes a user-centric approach to black-box interpretability. It addresses the problem in a semantic segmentation setting with an example humanitarian remote sensing application for building detection. The question that drove this work was: can existing methods for explaining black-box classifiers be used for a deep learning semantic segmentation model? We approached this problem with exploratory qualitative research involving a case study and human evaluation. The study showed that it is possible to explain a segmentation model with methods adapted from classifiers, but not without cost: the specificity of the model is likely to be lost in the process, which may require introducing artificial classes or fragmenting the image into superpixels. Other approaches are necessary to mitigate this drawback. The main contribution of this work is an interactive visualisation approach for exploring the latent space learned by a deep segmenter, U-Net, evaluated with a user study involving 45 respondents. We developed an artefact (accessible online) to evaluate the approach with the survey; it presents an example of this approach with a real-world satellite image dataset. In the evaluation study, the majority of users had a computer science background (80%), including a large percentage with a machine learning specialisation (44.4% of all respondents). Users found that the model distinguishes rurality from urbanization (58% of users), and an external quantitative comparison of the building density of each city against its location in the latent space confirmed the latter. The representation was found faithful to the underlying model (62% of users). Preliminary results show the utility of the pursued approach in the application domain; the limited possibility of presenting a complex model visually requires further investigation.
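One plausible minimal recipe for such a latent-space view is sketched below, assuming per-image U-Net bottleneck activations are pooled and projected with PCA; the thesis' actual interactive pipeline may differ:

```python
import numpy as np
from sklearn.decomposition import PCA

def embed_bottlenecks(bottleneck_feats):
    """Map per-image bottleneck activations, shaped (n_images, C, h, w),
    to 2D coordinates for plotting: global average pooling gives one
    C-dimensional vector per image, then PCA projects to the plane."""
    pooled = bottleneck_feats.mean(axis=(2, 3))         # (n_images, C)
    coords = PCA(n_components=2).fit_transform(pooled)  # (n_images, 2)
    return coords
```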
77.
2D object detection and semantic segmentation in the Carla simulator
Wang, Chen, January 2020
The subject of self-driving car technology has drawn growing interest in recent years. Many companies, such as Baidu and Tesla, have already introduced automatic driving features in their newest cars for driving in specific areas. However, there are still many challenges on the way to fully autonomous cars. Tesla vehicles have caused several severe accidents while autonomous driving functions were engaged, which makes the public doubt self-driving car technology. Therefore, it is necessary to use simulator environments to help verify and perfect algorithms for the perception, planning, and decision-making of autonomous vehicles before implementing them in real-world cars. This project aims to build a benchmark for implementing the whole self-driving car system in software. An autonomous driving system has three main components: perception, planning, and control. This thesis focuses on two sub-tasks of the perception part: 2D object detection and semantic segmentation. All experiments are tested in a simulator environment called CARLA (CAR Learning to Act), an open-source platform for autonomous car research. The Carla simulator is developed on top of a game engine (Unreal Engine 4) and has a server-client architecture, which provides a flexible Python API. 2D object detection uses the You Only Look Once (YOLOv4) algorithm, which incorporates recent deep learning tricks, in both its network structure and its data augmentation, to strengthen the network's ability to learn objects; YOLOv4 achieves higher accuracy and shorter inference time compared with other popular object detection algorithms. Semantic segmentation uses ESPNetv2 (Efficient networks for Computer Vision), a lightweight and power-efficient network that matches the performance of other semantic segmentation algorithms while using fewer network parameters and FLOPS. In this project, YOLOv4 and ESPNetv2 are implemented in the Carla simulator, and the two modules work together to help the autonomous car understand the world. A minimal-distance-awareness application is implemented in the Carla simulator to estimate the distance to vehicles ahead; this application can be used as a basic function for collision avoidance. Experiments are run on a single Nvidia GPU (RTX 2060) on an Ubuntu 18.0 system.
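For the minimal-distance application, one simple way to estimate range from a 2D detection is the pinhole-camera relation sketched below. The assumed vehicle height, camera parameters, and the formula itself are illustrative, not the thesis' exact method:

```python
import math

def distance_from_bbox(bbox_height_px, image_width_px=800,
                       fov_deg=90.0, vehicle_height_m=1.5):
    """Rough range to a detected vehicle from its bounding-box height.
    CARLA RGB cameras are parameterised by image size and horizontal FOV;
    with square pixels the focal length in pixels is shared by both axes.
    The 1.5 m vehicle height is an assumption."""
    focal_px = image_width_px / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
    return vehicle_height_m * focal_px / bbox_height_px
```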
78.
Network Orientation and Segmentation Refinement Using Machine Learning
Nilsson, Michael; Kentson, Jonatan, January 2023
Network mapping is used to extract the coordinates of a network's components from an image, and machine learning algorithms have demonstrated their efficacy in advancing network mapping across various domains, including the mapping of road networks and blood vessel networks. However, accurately mapping road networks remains a challenge owing to the difficulty of identifying and separating roads under occlusion caused by trees, as well as in complex environments such as parking lots and complicated intersections. Segmenting blood vessel networks, such as those in the retina, is likewise non-trivial because of their complex shape and thin appearance. Therefore, the aim of this thesis was to investigate two deep learning approaches to improve network mapping: refining existing road network probability maps, and estimating road network orientations. Additionally, the thesis explores the possibility of using a machine learning model trained on road network probability maps to refine retina network segmentations. In the first approach, U-Net models with a binary output channel were implemented to refine existing network probability maps. In the second approach, ResNet models with a regression output were implemented to estimate the orientation of roads within a network. The refinement models were evaluated using F1-score and MCC-score, while the orientation models were evaluated using angle loss, angle difference, F1-score, and MCC-score. Refining road segmentations yielded an increase of 0.102 in MCC-score over the baseline (0.701). However, when the segmentation refinement model was applied to retina images, its output achieved an MCC-score of merely 0.226; nevertheless, the model demonstrated the capability to identify and refine the segmentation of large blood vessels. The estimation of road network orientation achieved an average error of 10.50 degrees and successfully distinguished roads from the background with an MCC-score of 0.805. In conclusion, this thesis shows that a deep learning-based approach to road segmentation refinement is beneficial, especially where occlusions are present. The refinement of retina image segmentations using a model trained on roads produced unsatisfactory results, likely due to the difference in scale between road widths and vessel sizes; further experiments with adjusted image scales are likely needed to achieve better results. Moreover, the orientation model demonstrated promising results in estimating the orientation of road pixels and effectively differentiating road from non-road pixels.
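Regressing road orientation has a wrap-around subtlety: a road at angle θ and at θ + 180° is the same road. A common trick, sketched here as an assumption about how such a model could be trained and scored (the thesis' exact angle loss is not specified here), is to regress (sin 2θ, cos 2θ) and to measure the smallest undirected angle difference:

```python
import numpy as np

def orientation_target(theta_deg):
    """Encode an undirected road orientation as (sin 2θ, cos 2θ) so that
    θ and θ + 180° give identical regression targets."""
    t = 2.0 * np.radians(theta_deg)
    return np.stack([np.sin(t), np.cos(t)], axis=-1)

def angle_difference_deg(pred_deg, true_deg):
    """Smallest difference between two undirected orientations, in [0, 90]."""
    d = np.abs(np.asarray(pred_deg) - np.asarray(true_deg)) % 180.0
    return np.minimum(d, 180.0 - d)
```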
79.
Multitask Deep Learning models for real-time deployment in embedded systems
Martí Rabadán, Miquel, January 2017
Multitask Learning (MTL) was conceived as an approach to improve the generalization ability of machine learning models. When applied to neural networks, multitask models take advantage of sharing resources to reduce the total inference time, memory footprint and model size. We propose MTL as a way to speed up deep learning models for applications in which multiple tasks need to be solved simultaneously, which is particularly useful in embedded, real-time systems such as the ones found in autonomous cars or UAVs. In order to study this approach, we apply MTL to a Computer Vision problem in which both Object Detection and Semantic Segmentation tasks are solved, based on the Single Shot Multibox Detector and Fully Convolutional Networks with skip connections respectively, using a ResNet-50 as the base network. We train multitask models on two different datasets: Pascal VOC, which is used to validate the decisions made, and a combination of datasets of aerial-view images captured from UAVs. Finally, we analyse the challenges that appear during the process of training multitask networks and try to overcome them. These challenges hinder the capacity of our multitask models to reach the performance of the best single-task models trained without the limitations imposed by applying MTL. Nevertheless, multitask networks benefit from sharing resources and are 1.6x faster, lighter and use less memory compared to deploying the single-task models in parallel, which becomes essential when running them on a Jetson TX1 SoC, as the parallel approach does not fit into memory. We conclude that MTL has the potential to give superior performance as far as the object detection and semantic segmentation tasks are concerned, in exchange for a more complex training process that requires overcoming challenges not present in the training of single-task models.
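The resource-sharing argument is concrete: one backbone forward pass feeds both heads. A minimal PyTorch sketch follows, under the assumption of single-scale placeholder heads; the thesis' actual SSD head is multi-scale and its FCN head uses skip connections:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MultitaskNet(nn.Module):
    """One ResNet-50 trunk shared by a detection head and a segmentation
    head, so the trunk's cost is paid once per image."""
    def __init__(self, n_classes=21, n_anchors=6):
        super().__init__()
        trunk = resnet50(weights=None)
        # drop the average pool and fc layers; output is (B, 2048, H/32, W/32)
        self.backbone = nn.Sequential(*list(trunk.children())[:-2])
        # per-location box offsets and class scores for each anchor
        self.det_head = nn.Conv2d(2048, n_anchors * (4 + n_classes), 3, padding=1)
        # 1x1 classifier upsampled back to input resolution
        self.seg_head = nn.Sequential(
            nn.Conv2d(2048, n_classes, 1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        f = self.backbone(x)
        return self.det_head(f), self.seg_head(f)
```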
80.
Semantic Stixels fusing LIDAR for Scene Perception
Forsberg, Olof, January 2018
Autonomous driving is the concept of a vehicle that operates in traffic without instructions from a driver. A major challenge for such a system is to provide a comprehensive, accurate and compact scene model based on information from sensors. For such a model to be comprehensive, it must provide the 3D position and semantics of the relevant surroundings to enable safe traffic behavior; this creates a foundation for making substantiated driving decisions. The model must also be compact to enable efficient processing, allowing driving decisions to be made in real time. In this thesis, rectangular objects (the Stixel World) are used to represent the surroundings of a vehicle and provide a scene model, with LIDAR and semantic segmentation fused in the computation of these rectangles. This method indicates that a dense and compact scene model can be provided even from sparse LIDAR data by use of semantic segmentation.
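A toy rendition of the stixel representation is sketched below: each narrow column band of a semantic segmentation (with projected LIDAR depth) is collapsed into vertical runs of one class, one rectangle per run. The thesis' actual computation optimises over LIDAR and CNN evidence; this only illustrates the data structure:

```python
import numpy as np

def columns_to_stixels(semantic, depth, col_width=5):
    """Collapse each column band of an (H, W) semantic map (non-negative
    integer class labels) into vertical stixels annotated with the median
    of the corresponding depth values."""
    H, W = semantic.shape
    stixels = []
    for x0 in range(0, W - col_width + 1, col_width):
        col_sem = semantic[:, x0:x0 + col_width]
        col_dep = depth[:, x0:x0 + col_width]
        # majority class per row inside the band
        row_cls = np.array([np.bincount(r).argmax() for r in col_sem])
        top = 0
        for y in range(1, H + 1):
            if y == H or row_cls[y] != row_cls[top]:
                stixels.append({"x": x0, "width": col_width,
                                "top": top, "bottom": y,
                                "cls": int(row_cls[top]),
                                "depth": float(np.median(col_dep[top:y]))})
                top = y
    return stixels
```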