51.
Adversarial approaches to remote sensing image analysis. Bejiga, Mesay Belete, 17 April 2020.
The recent advances in generative modeling, in particular the unsupervised learning of data distributions, are attributed to the invention of models with new learning algorithms. Among the methods proposed, generative adversarial networks (GANs) have proven to be among the most effective approaches for estimating data distributions. The core idea of GANs is the adversarial training of two deep neural networks, called the generator and the discriminator, to learn an implicit approximation of the true data distribution. The distribution is approximated through the weights of the generator network, and interaction with the distribution takes place through sampling. GANs have been found useful in applications such as image-to-image translation, in-painting, and text-to-image synthesis. In this thesis, we propose to capitalize on the power of GANs for different remote sensing problems.
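To make the adversarial training loop concrete, here is a minimal PyTorch sketch that fits a generator to a one-dimensional Gaussian; the network sizes and the target distribution are illustrative assumptions, not the thesis setup.

```python
import torch
import torch.nn as nn

# Generator maps noise to samples; discriminator scores real vs. generated.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 3 + 2 * torch.randn(64, 1)     # "true" data: N(3, 2^2)
    fake = G(torch.randn(64, 8))          # sampling = pushing noise through G
    # Discriminator step: tell real from fake.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()
    # Generator step: fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 3
```

After training, the learned distribution lives only in G's weights, which is exactly the "interaction through sampling" point made above.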
The first problem is a new research track for the remote sensing community that aims to generate remote sensing images from text descriptions. More specifically, we focus on exploiting ancient text descriptions of geographical areas, inherited from previous civilizations, and converting them into equivalent remote sensing images. The proposed method is composed of a text encoder and an image synthesis module. The text encoder is tasked with converting a text description into a vector. To this end, we explore two encoding schemes: a multilabel encoder and a doc2vec encoder. The multilabel encoder takes into account the presence or absence of objects in the encoding process, whereas the doc2vec method encodes additional information available in the text. The encoded vectors are then used as conditional information to a GAN network and guide the synthesis process. To evaluate the efficacy of the proposed method, we collected satellite images and ancient text descriptions for training. The qualitative and quantitative results obtained suggest that the doc2vec encoder-based model yields better images in terms of semantic agreement with the input description. In addition, we present open research areas that we believe are important to further advance this new research area.
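As an illustration of the doc2vec encoding step, the following sketch uses gensim's Doc2Vec on a few invented descriptions; the texts, vector size, and training settings are assumptions for demonstration only.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy stand-ins for ancient text descriptions (invented for illustration).
descriptions = [
    "a walled city on a hill surrounded by olive groves",
    "a river delta with scattered fishing villages and marshland",
    "a desert plain crossed by a caravan road and a dry riverbed",
]
corpus = [TaggedDocument(words=d.split(), tags=[i]) for i, d in enumerate(descriptions)]
model = Doc2Vec(corpus, vector_size=64, min_count=1, epochs=50)

# The inferred vector would serve as the conditioning input to the GAN.
cond = model.infer_vector("a fortified town near the coast".split())
print(cond.shape)  # (64,)
```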
The second problem we want to address is semi-supervised domain adaptation. The goal of domain adaptation is to learn a generic classifier for multiple related problems, thereby reducing the cost of labeling. To that end, we propose two methods. The first method uses GANs in the context of image-to-image translation to adapt source domain images into target domain images and trains a classifier using the adapted images. We evaluated the proposed method on two remote sensing datasets. Though we have not explored this avenue extensively due to computational challenges, the results obtained show that the proposed method is promising and worth exploring in the future. The second domain adaptation strategy borrows the adversarial property of GANs to learn a new representation space where the domain discrepancy is negligible and the new features are discriminative enough. The method is composed of feature extractor, class predictor, and domain classifier blocks. Contrary to traditional methods that perform representation and classifier learning in separate stages, this method combines both into a single stage, thereby learning a new representation of the input data that is domain invariant and discriminative. After training, the classifier is used to predict both source and target domain labels. We apply this method to large-scale land cover classification and cross-sensor hyperspectral classification problems. Experimental results show that the proposed method provides a performance gain of up to 40%, indicating the efficacy of the method.
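The feature extractor / class predictor / domain classifier design corresponds to adversarial representation learning in the DANN style, where a gradient reversal layer lets a single backward pass make features both discriminative and domain-invariant. Below is a hedged PyTorch sketch of that mechanism; layer sizes, data, and the GradReverse name are illustrative, not the thesis code.

```python
import torch
import torch.nn as nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity on the forward pass; reverses and scales gradients on the way back."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

feat = nn.Sequential(nn.Linear(10, 32), nn.ReLU())        # feature extractor
clf = nn.Linear(32, 2)                                    # class predictor
dom = nn.Linear(32, 2)                                    # domain classifier
opt = torch.optim.Adam([*feat.parameters(), *clf.parameters(), *dom.parameters()])
ce = nn.CrossEntropyLoss()

xs, ys = torch.randn(64, 10), torch.randint(0, 2, (64,))  # labeled source batch
xt = torch.randn(64, 10) + 0.5                            # unlabeled target batch

for step in range(100):
    opt.zero_grad()
    fs, ft = feat(xs), feat(xt)
    loss_cls = ce(clf(fs), ys)                            # stay discriminative...
    f_all = torch.cat([fs, ft])
    d_lab = torch.cat([torch.zeros(64), torch.ones(64)]).long()
    loss_dom = ce(dom(GradReverse.apply(f_all, 1.0)), d_lab)  # ...and domain-invariant
    (loss_cls + loss_dom).backward()                      # single-stage joint update
    opt.step()
```

The reversed gradient is what collapses the usual two-stage pipeline into one: the domain classifier tries to separate domains while the extractor is pushed in the opposite direction.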
52.
Human action recognition in the real world: handling domain shift in open-set, source-free and multi-source scenarios. Zara, Giacomo, 24 January 2025.
Human behavior understanding as an application of artificial intelligence and deep learning has quickly gained popularity over the past few years, due to the crucial role it plays in trending fields such as human-robot interaction, autonomous driving, drone footage, sports and video surveillance. The variety of scenarios and conditions in which modern computer vision algorithms are expected to operate, along with the significant cost of collecting and annotating data, has encouraged the community to devise solutions for adapting models to visual and semantic domains potentially very different from those characterizing their training data. Formally, this task goes by the name of Domain Adaptation (DA), and a significant amount of effort has recently been devoted to it. While image-based content has been widely addressed in this scope, the field of videos remains significantly less explored. This can be attributed to video data posing a considerably harder challenge than images at every step involved: videos are more expensive to collect, store and annotate, and, even more importantly, harder to model and interpret. This latter aspect is mainly rooted in the additional complexity introduced by the temporal dimension, which poses the challenge of understanding and modeling dynamics that evolve simultaneously through space and time, potentially in a radically different fashion across domains. While the underlying theoretical problem is relevant and sound, it is usually addressed in its base formulation, which is characterized by a set of assumptions that rarely hold in real-world scenarios; these include, for instance, full knowledge of the categories present in the target domain, as well as direct access to the data used to train the models. Driven by the purpose of dropping these assumptions, this thesis proposes a selection of different perspectives on the problem of domain adaptation for video action recognition, contextualizing it in challenging and realistic settings and proposing a different methodological approach to each of them. All proposed methods are carefully described and motivated, and their effectiveness is thoroughly showcased through an extensive experimental protocol, obtaining state-of-the-art results on the relevant benchmarks.
53.
Online Unsupervised Domain Adaptation. Panagiotakopoulos, Theodoros, January 2022.
Deep learning models have seen great application in demanding tasks such as machine translation and autonomous driving. However, building such models has proved challenging, both from a computational perspective and due to the requirement for a plethora of annotated data. Moreover, when confronted with new situations or data distributions (the target domain), such models may perform inadequately. Examples include transitioning from one city to another, different weather conditions, or changes in sunlight. Unsupervised Domain Adaptation (UDA) exploits unlabelled data, which is easy to obtain, to adapt models to new conditions or data distributions. Inspired by the fact that environmental changes happen gradually, we focus on online UDA. Instead of directly adjusting a model to a demanding condition, we constantly perform minor adaptations to every slight change in the data, creating a soft transition from the current domain to the target one. To perform gradual adaptation, we applied state-of-the-art semantic segmentation approaches to increasing rain intensities (25, 50, 75, 100, and 200 mm of rain). We demonstrate that deep learning models can adapt substantially better to hard domains when exploiting intermediate ones. Moreover, we introduce a model-switching mechanism that allows adjusting back to the source domain, after adaptation, without dropping performance.
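The gradual-adaptation idea can be illustrated with a toy self-training loop over a sequence of increasingly shifted domains; the synthetic two-class data and logistic regression below are stand-ins for the thesis's segmentation models and rain intensities, not its actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(shift, n=500):
    # Two classes whose features drift with `shift` (a stand-in for rain intensity).
    x0 = rng.normal(0.0 + shift, 1.0, (n, 2))
    x1 = rng.normal(3.0 + shift, 1.0, (n, 2))
    return np.vstack([x0, x1]), np.r_[np.zeros(n), np.ones(n)]

Xs, ys = make_domain(0.0)                 # labeled source (clear weather)
clf = LogisticRegression().fit(Xs, ys)
for shift in [2.0, 4.0, 6.0, 8.0]:        # intermediate domains, e.g. 25..200 mm
    Xt, yt = make_domain(shift)
    pseudo = clf.predict(Xt)              # self-label the slightly shifted domain
    clf = LogisticRegression().fit(Xt, pseudo)
    print(f"shift={shift}: acc={clf.score(Xt, yt):.2f}")
```

Jumping straight from shift 0.0 to 8.0 would produce far noisier pseudo-labels; walking through the intermediate shifts is what keeps each adaptation step small and safe.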
54.
Improving Image Classification using Domain Adaptation for Autonomous Driving: A Master Thesis in Collaboration with Scania. Westlund, Mikael, January 2023.
Autonomous driving is a rapidly changing industry and has recently become a heavily focused research topic for vehicle-producing companies and research organizations. These autonomous vehicles are typically equipped with sensors such as Light Detection and Ranging (LiDAR) in order to perceive their surroundings. The problem of detecting and classifying surrounding objects from the sensor data can be solved using different types of algorithms, and machine learning solutions have recently been investigated. One problem with the machine learning approach is that the models usually require a substantial amount of labeled data, and labeling LiDAR data is a time-consuming process. A promising solution to this problem is to utilize Domain Adaptation (DA) methods. DA methods can use labeled camera data, which is easier to label, in conjunction with unlabeled LiDAR data to improve the performance of machine learning models on LiDAR data. This thesis investigates and compares different DA methods that can be used for classification of LiDAR data. Two image classification datasets with data of humans and vehicles were created: one containing camera images and the other containing LiDAR intensity images. The datasets were used to train and test three methods: (1) a baseline method, which simply uses labeled camera images to train a model; (2) Correlation Alignment (CORAL), a DA method that aligns the covariance of camera features toward LiDAR features; and (3) Deep Adaptation Network (DAN), a DA method that includes a maximum mean discrepancy computation between camera and LiDAR features within the objective function of the model. These methods were then evaluated based on the resulting confusion matrices, accuracy, recall, precision and F1-score on LiDAR data. The results showed that DAN was the best of the three methods, reaching an accuracy of 87% while the baseline and CORAL only reached 65% and 73%, respectively. The strong performance of DAN showed that there is potential for using DA methods within the field of autonomous vehicles.
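For reference, classic CORAL admits a closed-form feature alignment: whiten the source covariance, then re-color with the target covariance. Below is a small numpy sketch under that formulation; the random placeholder features are an assumption, since the thesis operates on learned image features.

```python
import numpy as np

def _sym_pow(m, p):
    # Matrix power of a symmetric positive-definite matrix via eigendecomposition.
    w, v = np.linalg.eigh(m)
    return (v * (w ** p)) @ v.T

def coral(source, target, eps=1e-5):
    """Whiten source features, then re-color them with the target covariance."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)
    return source @ _sym_pow(cs, -0.5) @ _sym_pow(ct, 0.5)

camera_feats = np.random.randn(200, 16)        # placeholder camera features
lidar_feats = np.random.randn(300, 16) * 2.0   # placeholder LiDAR features
aligned = coral(camera_feats, lidar_feats)     # train the classifier on `aligned`
```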
55.
Gaze tracking using Recurrent Neural Networks: Hardware agnostic gaze estimation using temporal features, synthetic data and a geometric model. Malmberg, Fredrik, January 2022.
Vision is an important tool for us humans, and significant effort has been put into creating solutions that let us measure how we use it. The most common technique for measuring gaze direction is to use specialised hardware such as infrared eye trackers. Recently, several Convolutional Neural Network (CNN) based architectures have been suggested, yielding impressive results on single Red Green Blue (RGB) images. However, limited research has been done on whether using several sequential images can improve tracking performance. Expanding this research to include low-frequency and low-quality RGB images can further open up the possibility of improving tracking performance for models using off-the-shelf hardware such as web cameras or smartphone cameras. GazeCapture is a well-known dataset for training RGB-based CNN models, but it lacks sequences of images and natural eye movements. In this thesis, a geometric gaze estimation model is introduced and synthetic data is generated using Unity to create sequences of images with both RGB input data and ground-truth Point of Gaze (POG). To make these images appear more natural, domain adaptation is performed using a CycleGAN. The data is then used to train several different models to evaluate whether temporal information can increase accuracy. Even though the improvement of a Gated Recurrent Unit (GRU) based temporal model over simple sequence averaging is limited, the network achieves smoother tracking than a single-image model while still offering faster updates over a saccade (eye movement) compared to averaging. This indicates that temporal features could improve accuracy. There are several promising areas of future research that could further improve performance, such as using real sequential data or further improving the domain adaptation of synthetic data.
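A minimal PyTorch sketch of a GRU-based temporal gaze model of the kind evaluated here: per-frame feature vectors pass through a GRU and the last hidden state is regressed to a 2-D point of gaze. The dimensions and the per-frame feature representation are assumptions, not the thesis architecture.

```python
import torch
import torch.nn as nn

class GazeGRU(nn.Module):
    """Maps a sequence of per-frame feature vectors to a 2-D point of gaze."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):             # x: (batch, frames, feat_dim)
        out, _ = self.gru(x)
        return self.head(out[:, -1])  # gaze estimate from the last time step

model = GazeGRU()
frames = torch.randn(4, 10, 128)      # 4 sequences of 10 frame embeddings each
print(model(frames).shape)            # torch.Size([4, 2])
```

Unlike sequence averaging, the recurrent state can re-weight recent frames, which is why such a model can follow a saccade faster while still smoothing fixation noise.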
56.
Automatic Analysis of Facial Actions: Learning from Transductive, Supervised and Unsupervised Frameworks. Chu, Wen-Sheng, 01 January 2017.
Automatic analysis of facial actions (AFA) can reveal a person's emotion, intention, and physical state, and makes possible a wide range of applications. To enable reliable, valid, and efficient AFA, this thesis investigates automatic analysis of facial actions through transductive, supervised and unsupervised learning. Supervised learning for AFA is challenging, in part, because of individual differences among persons in face shape and appearance and variation in video acquisition and context. To improve generalizability across persons, we propose a transductive framework, Selective Transfer Machine (STM), which personalizes generic classifiers through joint sample reweighting and classifier learning. By personalizing classifiers, STM offers improved generalization to unknown persons. As an extension, we develop a variant of STM for use when partially labeled data are available. Additional challenges for supervised learning include learning an optimal representation for classification, variation in base rates of action units (AUs), correlation between AUs, and temporal consistency. While these challenges could be partly accommodated with an SVM or STM, a more powerful alternative is afforded by an end-to-end supervised framework (i.e., deep learning). We propose a convolutional network with long short-term memory (LSTM) and multi-label sampling strategies. We compared SVM, STM and deep learning approaches with respect to AU occurrence and intensity in and between the BP4D+ [282] and GFT [93] databases, which consist of around 0.6 million annotated frames. Annotated video is not always possible or desirable, so we introduce an unsupervised branch-and-bound framework to discover correlated facial actions in unannotated video; we term this approach Common Event Discovery (CED). We evaluate CED on video and motion capture data. CED achieved moderate convergence with supervised approaches and enabled discovery of novel patterns occult to supervised approaches.
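STM solves sample reweighting and classifier learning jointly, which takes more machinery than fits here. The sketch below instead shows a simpler two-step stand-in for the same personalization idea: estimate density-ratio weights with a train-vs-test logistic regression, then feed them to a weighted SVM. All data, dimensions, and the weighting scheme are synthetic placeholders, not the STM algorithm itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Generic training pool vs. one "new person" whose feature distribution is shifted.
X_pool = rng.normal(0.0, 1.0, (1000, 5))
y_pool = (X_pool[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)
X_person = rng.normal(0.8, 1.0, (200, 5))   # unlabeled frames of the test person

# A probabilistic train-vs-test classifier yields density-ratio weights.
dom = LogisticRegression().fit(
    np.vstack([X_pool, X_person]),
    np.r_[np.zeros(len(X_pool)), np.ones(len(X_person))],
)
p = dom.predict_proba(X_pool)[:, 1]
weights = p / (1.0 - p)                     # w(x) ~ p_person(x) / p_pool(x)

# Pool samples that look like the new person dominate the personalized classifier.
personalized = SVC().fit(X_pool, y_pool, sample_weight=weights)
```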
57.
Data Mining Techniques to Understand Textual Data. Zhou, Wubai, 04 October 2017.
More than ever, online information delivery and storage rely heavily on text. Billions of texts are produced every day in the form of documents, news, logs, search queries, ad keywords, tags, tweets, messenger conversations, social network posts, etc. Text understanding is a fundamental and essential task involving broad research topics, and it contributes to many applications such as text summarization, search engines, recommendation systems, online advertising, and conversational bots. However, understanding text is never a trivial task for computers, especially for noisy and ambiguous text such as logs and search queries. This dissertation focuses on textual understanding tasks derived from two domains, disaster management and IT service management, that mainly use textual data as an information carrier.
Improving situation awareness in disaster management and alleviating the human effort involved in IT service management dictate more intelligent and efficient solutions to understand the textual data acting as the main information carrier in the two domains. From the perspective of data mining, four directions are identified: (1) intelligently generate a storyline summarizing the evolution of a hurricane from a relevant online corpus; (2) automatically recommend resolutions according to the textual symptom description in a ticket; (3) gradually adapt the resolution recommendation system for time-correlated features derived from text; (4) efficiently learn distributed representations for short and noisy ticket symptom descriptions and resolutions. Provided with different types of textual data, the data mining techniques proposed along these four research directions successfully address our tasks of understanding and extracting valuable knowledge from textual data.
My dissertation addresses the research topics outlined above. Concretely, I focus on designing and developing data mining methodologies to better understand textual information, including (1) a storyline generation method for efficient summarization of natural hurricanes based on a crawled online corpus; (2) a recommendation framework for automated ticket resolution in IT service management; (3) an adaptive recommendation system for time-varying, temporally correlated features derived from text; and (4) a deep neural ranking model that not only successfully recommends resolutions but also efficiently outputs distributed representations for ticket descriptions and resolutions.
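As a minimal illustration of direction (2), here is a TF-IDF nearest-neighbor recommender over historical tickets; the tickets and resolutions are invented examples, and the dissertation's deep ranking model would replace the cosine-similarity scoring shown here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented historical tickets with their recorded resolutions.
tickets = [
    "database connection timed out on node 3",
    "disk full on /var partition",
    "user cannot reset password",
]
resolutions = [
    "restart the db listener and check firewall rules",
    "rotate logs and extend the /var volume",
    "unlock the account and resend the reset link",
]

vec = TfidfVectorizer()
index = vec.fit_transform(tickets)

def recommend(symptom, k=1):
    # Rank historical tickets by similarity and return their resolutions.
    sims = cosine_similarity(vec.transform([symptom]), index)[0]
    return [resolutions[i] for i in sims.argsort()[::-1][:k]]

print(recommend("the /var disk is almost full"))
```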
58.
Towards robust steganalysis: binary classifiers and large, heterogeneous data. Lubenko, Ivans, January 2013.
The security of a steganography system is defined by our ability to detect it. It is no surprise, then, that steganography and steganalysis both depend heavily on the accuracy and robustness of our detectors. This is especially true when real-world data is considered, due to its heterogeneity. The difficulty of such data manifests itself in a penalty that has periodically been reported to affect the performance of detectors built on binary classifiers; this is known as cover source mismatch. It remains unclear how the performance drop associated with cover source mismatch can be mitigated, or even measured. In this thesis we present a robust methodology for empirically measuring its effects on the detection accuracy of steganalysis classifiers. Some basic machine-learning methods, which have their origin in domain adaptation, are proposed to counter it. Specifically, we test two hypotheses through an empirical investigation: first, that linear classifiers are more robust than non-linear classifiers to cover source mismatch in real-world data, and second, that linear classifiers are so robust that, given sufficiently large mismatched training data, they can equal the performance of any classifier trained on small matched data. With the help of theory we draw several nontrivial conclusions from our results. The penalty from cover source mismatch may, in fact, be a combination of two types of error: estimation error and adaptation error. We show that relatedness between training and test data, as well as the choice of classifier, both have an impact on adaptation error, which, as we argue, ultimately defines a detector's robustness. This provides a novel framework for reasoning about what is required to improve the robustness of steganalysis detectors. Whilst our empirical results may be viewed as a first step towards this goal, we show that our approach provides clear advantages over earlier methods. To our knowledge this is the first study of this scale and structure.
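The mismatch penalty described here can be measured empirically as the gap between matched and mismatched test accuracy. A toy sketch follows, with synthetic "cover source" features; the shift parameter and the fixed embedding signature are illustrative assumptions, not real steganographic features.

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC

rng = np.random.default_rng(2)

def cover_source(mean, n=2000):
    # Cover/stego feature pairs; `mean` stands in for the cover-source properties.
    X = rng.normal(mean, 1.0, (n, 20))
    y = rng.integers(0, 2, n)
    X[y == 1] += 0.4            # a small, fixed embedding signature
    return X, y

X_a, y_a = cover_source(0.0)    # training covers from source A
X_a2, y_a2 = cover_source(0.0)  # matched test set
X_b, y_b = cover_source(1.5)    # mismatched test set (different cover source)

for clf in (LinearSVC(max_iter=20000), SVC(kernel="rbf")):
    clf.fit(X_a, y_a)
    matched, mismatched = clf.score(X_a2, y_a2), clf.score(X_b, y_b)
    print(f"{type(clf).__name__}: penalty = {matched - mismatched:.3f}")
```

Repeating this over many source pairs gives exactly the kind of empirical measurement of the mismatch penalty, and of classifier robustness, that the thesis argues for.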
59.
New iterative approaches with theoretical guarantees for unsupervised domain adaptation. Peyrache, Jean-Philippe, 11 July 2014.
Over the past few years, machine learning has attracted increasing interest in domains as varied as image recognition and medical data analysis. However, a limitation of the classical PAC framework has recently been highlighted, leading to the emergence of a new research axis: Domain Adaptation (DA), in which the learning data are considered as coming from a distribution (the source) different from the one (the target) from which the test data are generated. The first theoretical works concluded that good performance on the target domain can be obtained by minimizing at the same time the source error and a divergence term between the two distributions. Three main categories of approaches derive from this idea: by reweighting, by reprojection, and by self-labeling. In this thesis, we propose two contributions. The first one is a reprojection approach based on boosting theory and designed for numerical data. It offers interesting theoretical guarantees and also seems able to obtain good generalization performance. Our second contribution consists, on the one hand, of a framework filling the gap left by the lack of theoretical results for self-labeling methods, by introducing necessary conditions ensuring the good behavior of this kind of algorithm. On the other hand, we propose within this framework a new approach using the theory of (epsilon, gamma, tau)-good similarity functions to get around the limitations imposed by kernel theory in the context of structured data.
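A bare-bones version of the self-labeling loop analyzed in the second contribution is sketched below, where a confidence threshold plays the role of a condition admitting only trustworthy target pseudo-labels into training; the data, threshold, and iteration count are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
Xs = rng.normal(0.0, 1.0, (300, 2))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)       # labeled source
Xt = rng.normal(0.0, 1.0, (300, 2)) + 0.7        # shifted, unlabeled target

X, y = Xs, ys
for _ in range(5):
    clf = SVC(probability=True).fit(X, y)
    proba = clf.predict_proba(Xt)
    confident = proba.max(axis=1) > 0.9          # gate on prediction confidence
    X = np.vstack([Xs, Xt[confident]])           # grow the training set with
    y = np.r_[ys, proba.argmax(axis=1)[confident]]  # self-labeled target points
```

Whether such a loop helps or drifts depends on how reliable the admitted pseudo-labels are, which is precisely the kind of condition the proposed framework formalizes.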
60.
Object classification using support vector machines in high resolution sonar seabed imagery. Rousselle, Denis, 28 November 2016.
This thesis aims to improve the classification of underwater objects in high resolution sonar images. In particular, we seek to distinguish mines from harmless objects among a collection of mine-like objects. Our research was guided by two classical constraints of mine warfare: first, the lack of data and, second, the need for readable classification decisions. In this context, we built a database as representative as possible and simulated objects in order to complete it. The lack of examples led us to use a compact representation originally used by the face recognition community: Structural Binary Gradient Patterns (SBGP). To the same end, we derived a method of semi-supervised domain adaptation, based on optimal transport, that can be easily interpreted. Finally, we developed a new classification algorithm, the Ensemble of Exemplar-Maximum Excluding Ball (EE-MEB), which is suitable for small datasets and has an easily interpretable decision function.
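The optimal-transport adaptation step can be sketched with a plain-numpy entropic (Sinkhorn) solver followed by a barycentric mapping of source samples onto the target domain. The regularization value and the random placeholder features are assumptions, and the thesis's semi-supervised variant additionally exploits labels.

```python
import numpy as np

def sinkhorn_transport(Xs, Xt, reg=0.1, iters=500):
    """Entropic OT between uniform empirical measures, then barycentric mapping."""
    C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)  # squared euclidean cost
    C = C / C.max()                                       # normalize for stability
    K = np.exp(-C / reg)
    a = np.full(len(Xs), 1.0 / len(Xs))
    b = np.full(len(Xt), 1.0 / len(Xt))
    v = np.ones(len(Xt))
    for _ in range(iters):                                # Sinkhorn fixed point
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                       # transport plan
    return (P / P.sum(axis=1, keepdims=True)) @ Xt        # map source onto target

Xs = np.random.randn(100, 8)            # e.g. SBGP descriptors, source domain
Xt = np.random.randn(120, 8) + 1.0      # shifted target-domain descriptors
Xs_mapped = sinkhorn_transport(Xs, Xt)  # train the SVM on the mapped source
```

The transport plan P is itself readable, showing which target examples each source sample maps to, which aligns with the interpretability constraint stated above.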