About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
211

Using Mask R-CNN for Instance Segmentation of Eyeglass Lenses / Användning av Mask R-CNN för instanssegmentering av glasögonlinser

Norrman, Marcus; Shihab, Saad. January 2021
This thesis investigates the performance of Mask R-CNN when utilizing transfer learning on a small dataset. The aim was to instance-segment eyeglass lenses as accurately as possible from self-portrait images. Five models were trained; the key difference between them was the types of eyeglasses they were trained on. The eyeglasses were grouped into three types: fully rimmed, semi-rimless, and rimless. 1550 images were used for training, validation, and testing. The models' performance was evaluated using TensorBoard training data and mean Intersection over Union (mIoU) scores. No major performance differences were found among the four models that grouped all three types of glasses into one class; their mIoU scores range from 0.913 to 0.94. The model with a separate class for each group of glasses performed worse, with an mIoU of 0.85. The thesis shows that strong instance-segmentation results can be achieved on a limited dataset by taking advantage of transfer learning.
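The mIoU metric reported above can be sketched for binary instance masks as follows. This is an illustrative reconstruction, not the thesis's own evaluation code; the function names and array shapes are assumptions.

```python
import numpy as np

def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU of two boolean instance masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(pred, truth).sum()
    return float(inter / union)

def mean_iou(pairs) -> float:
    """Mean IoU over an iterable of (predicted, ground-truth) mask pairs."""
    return float(np.mean([mask_iou(p, t) for p, t in pairs]))
```

A model scoring 0.94 under this metric overlaps its ground-truth masks almost entirely, which is why small differences in mIoU here still indicate visibly different segmentation quality.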
212

On the Efficiency of Transfer Learning in a Fighter Pilot Behavior Modelling Context / Effektiviteten av överföringsinlärning vid beteendemodellering av stridspiloter

Sandström, Viktor. January 2021
Creating realistic models of human fighter-pilot behavior is made possible by recent deep learning techniques. However, these techniques typically depend on large datasets, which in many settings are unavailable or expensive to produce. Transfer learning is an active research field whose idea is to leverage the knowledge gained from studying a problem for which large amounts of training data are readily available (the source task) when considering a different, related problem (the target task). Given a successful transfer, less data or less training is required to reach high-quality results on the target task. The first part of this thesis develops a fighter-pilot model using behavior cloning, a method that reduces an imitation learning problem to standard supervised learning. The resulting model, called a policy, can imitate a human pilot controlling a fighter jet in the military combat simulator Virtual BattleSpace 3. In this simulator, the forces acting on the aircraft can be modelled using one of several flight dynamic models (FDMs). In the second part, the efficiency of transfer learning is measured. The built-in FDM is replaced with one that responds to control inputs very differently, and two policies are then trained on successively larger amounts of data. One policy is trained using only the new FDM, whereas the other exploits the knowledge gained in the first part of the thesis through a technique called fine-tuning. The results indicate that a model already capable of handling one FDM adapts to a different FDM with less data than a previously untrained policy.
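Behavior cloning and fine-tuning as described above can be sketched with a toy linear policy trained by gradient descent; passing pretrained weights as the starting point is the fine-tuning step. This is a minimal illustration under assumed shapes and hyperparameters, not the thesis's actual model.

```python
import numpy as np

def train_policy(states, actions, w=None, lr=0.05, epochs=200):
    """Behavior cloning: fit a linear policy a = s @ w by gradient
    descent on the squared error between predicted and demonstrated
    actions. Passing pretrained weights `w` implements fine-tuning."""
    n, d = states.shape
    if w is None:
        w = np.zeros((d, actions.shape[1]))  # untrained policy
    else:
        w = w.copy()  # start from the pretrained (source-task) policy
    for _ in range(epochs):
        grad = states.T @ (states @ w - actions) / n
        w -= lr * grad
    return w
```

Usage mirrors the thesis's two-part setup: pretrain on demonstrations under the first FDM, then fine-tune with `train_policy(new_states, new_actions, w=w_pretrained, epochs=...)` on far fewer samples from the second FDM.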
213

Labelling factual information in legal cases using fine-tuned BERT models

Wenestam, Arvid. January 2021
Labelling factual information at the token level in legal cases requires legal expertise and is time-consuming. This thesis proposes a transfer learning approach that fine-tunes pre-trained state-of-the-art BERT models to perform this labelling task. It investigates whether a model pre-trained solely on a legal corpus outperforms a BERT trained on a generic corpus, and how the models behave as the number of cases in the training sample varies. The work showed that the models' metric scores are stable and on par when using 40-60 professionally annotated cases instead of the full sample of 100 cases. It also found that the generic BERT model is a strong baseline and that pre-training solely on a legal corpus is not crucial for this task.
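The "metric scores" compared above are token-level classification metrics. A minimal sketch of per-class precision, recall, and F1 over token labels is shown below; the label name "FACT" is a hypothetical placeholder, as the thesis's actual label set is not given here.

```python
def token_f1(pred_labels, true_labels, target="FACT"):
    """Token-level precision, recall, and F1 for one label class,
    computed over aligned lists of predicted and gold labels."""
    tp = sum(p == target and t == target for p, t in zip(pred_labels, true_labels))
    fp = sum(p == target and t != target for p, t in zip(pred_labels, true_labels))
    fn = sum(p != target and t == target for p, t in zip(pred_labels, true_labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Tracking these scores while shrinking the annotated training set from 100 to 40-60 cases is how a "stable and on par" claim like the one above can be verified.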
214

Transfer Learning for Automatic Author Profiling with BERT Transformers and GloVe Embeddings

From, Viktor. January 2022
Historically, author profiling has been used in forensic linguistics, but only in recent decades has the method made its way into computer science and machine learning, where determining author-profiling characteristics is by now well established. This thesis investigates whether previous results can be improved upon with modern frameworks, using datasets that have seen limited use. The purpose was to apply pre-trained transformers or embeddings together with transfer learning, and to examine whether general author-profiling characteristics of anonymous users on internet forums or in social media conversations can be determined. The datasets used were PAN15 and PANDORA, which contain text data from various authors paired with ground-truth labels such as gender, age, and the Big Five/OCEAN personality traits. Transfer learning with BERT and GloVe was used as a starting point to decrease the training time on the new task. PAN15, a Twitter dataset, did not contain enough data for training a model on its own and was augmented with PANDORA, a Reddit-based dataset. Ultimately, BERT with a stacked approach obtained the best performance, achieving 86-91% accuracy for each label on unseen data.
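The "stacked approach" mentioned above can be illustrated at its simplest as combining the class-probability outputs of two base models. True stacking trains a meta-classifier on the base models' outputs; the weighted average below is a deliberately minimal stand-in, and the variable names are assumptions.

```python
import numpy as np

def stacked_predict(prob_bert, prob_glove, weight=0.5):
    """Blend per-class probabilities from a BERT-based and a
    GloVe-based model, then take the argmax per sample.
    Arrays have shape (samples, classes)."""
    blended = weight * np.asarray(prob_bert) + (1 - weight) * np.asarray(prob_glove)
    return blended.argmax(axis=1)
```

In a fuller stacked setup, `weight` would be replaced by a learned meta-model fitted on held-out predictions rather than a fixed blend.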
215

Adaptive Anomaly Detection for Large IoT Datasets with Machine Learning and Transfer Learning

Negus, Andra Stefania. January 2020
As more IoT devices enter the market, it becomes increasingly important to develop reliable and adaptive ways of dealing with the data they generate, addressing data quality and reliability. Such solutions could benefit both the device producers and their customers, who could receive faster and better customer support. This project's goal is therefore twofold: first, to identify faulty data points generated by such devices; second, to evaluate whether the knowledge gained from available, known sensors and appliances is transferable to other sensors on similar devices. This would make it possible to evaluate the behaviour of new appliances as soon as they are first switched on, rather than only after sufficient data has been collected. The project uses time-series data from three appliances: a washing machine, a washer-dryer, and a refrigerator. Two solutions are developed and tested: one for categorical and another for numerical variables. Categorical variables are analysed using the Average Value Frequency method and the pure frequency of state transitions. Owing to the limited number of possible states, the pure frequency proves to be the better solution, and the knowledge gained is transferred from the source device to the target one with moderate success. Numerical variables are analysed using a One-class Support Vector Machine pipeline, with very promising results. Further, learning and forgetting mechanisms are developed to let the pipelines adapt to changes in appliance behaviour patterns, including a decay function for the numerical-variables solution. Interestingly, the different weights for the source and target have little to no impact on the quality of the classification.
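The state-transition frequency method with a forgetting mechanism, as described above, can be sketched as follows: keep decayed counts of observed transitions and flag transitions whose relative frequency falls below a threshold. The class name, decay rate, and threshold are illustrative assumptions, not values from the thesis.

```python
from collections import defaultdict

class TransitionMonitor:
    """Flag rare state transitions; counts decay over time so the
    monitor can forget outdated appliance behaviour."""
    def __init__(self, decay=0.99, threshold=0.05):
        self.counts = defaultdict(float)
        self.total = 0.0
        self.decay = decay
        self.threshold = threshold
        self.prev = None

    def observe(self, state):
        """Return True if the transition into `state` looks anomalous."""
        anomalous = False
        if self.prev is not None:
            key = (self.prev, state)
            # decay old evidence before adding the new observation
            for k in self.counts:
                self.counts[k] *= self.decay
            self.total = self.total * self.decay + 1.0
            self.counts[key] += 1.0
            anomalous = self.counts[key] / self.total < self.threshold
        self.prev = state
        return anomalous
```

Because frequencies start high when few transitions have been seen, early observations are not flagged spuriously; only transitions that stay rare relative to the decayed total are reported.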
216

Vers des interfaces cérébrales adaptées aux utilisateurs : interaction robuste et apprentissage statistique basé sur la géométrie riemannienne / Toward user-adapted brain computer interfaces : robust interaction and machine learning based on riemannian geometry

Kalunga, Emmanuel. 30 August 2017
In the last two decades, interest in Brain-Computer Interfaces (BCI) has grown tremendously, with a number of research laboratories working on the topic. Since the Brain-Computer Interface Project of Vidal in 1973, where BCI was introduced for rehabilitative and assistive purposes, the use of BCI has been extended to applications such as neurofeedback and entertainment. The credit for this progress belongs to an improved understanding of electroencephalography (EEG), improvements in its measurement techniques, and increased computational power. Despite its potential, BCI technology has yet to reach maturity and be used outside laboratories. Several challenges need to be addressed before BCI systems can be used to their full potential. This work examines some of these challenges in depth, namely the specificity of BCI systems to users' physical abilities, the robustness of EEG representation and machine learning, and the adequacy of training data. The aim is to provide a BCI system that can adapt to individual users in terms of their physical abilities or disabilities and the variability in recorded brain signals.
To this end, two main avenues are explored. The first, which can be regarded as a high-level adjustment, is a change in BCI paradigms: creating new paradigms that increase performance, ease the discomfort of using BCI systems, and adapt to the user's needs. The second, regarded as a low-level solution, is the refinement of signal processing and machine learning techniques to enhance EEG signal quality, pattern recognition, and classification.
On the one hand, a new methodology in the context of assistive robotics is defined: a hybrid approach where a physical interface is complemented by a BCI for human-machine interaction. This hybrid system makes use of users' residual motor abilities and offers BCI as an optional choice: the user chooses when to rely on BCI and can alternate between the muscular and brain-mediated interfaces at the appropriate time.
On the other hand, for the refinement of signal processing and machine learning techniques, this work uses a Riemannian framework. A major limitation in this field is the poor spatial resolution of EEG, due to the volume-conductance effect: the skull bones act as a non-linear low-pass filter, mixing the brain source signals and thus reducing the signal-to-noise ratio. Consequently, spatial filtering methods have been developed or adapted; most of them (e.g. Common Spatial Pattern, xDAWN, and Canonical Correlation Analysis) are based on covariance matrix estimation. Covariance matrices are key to representing the information contained in the EEG signal and constitute an important feature for its classification. In most existing machine learning algorithms, covariance matrices are treated as elements of a Euclidean space. However, being Symmetric and Positive-Definite (SPD), covariance matrices lie on a curved space identified as a Riemannian manifold. Using covariance matrices as features for the classification of EEG signals, and handling them with the tools provided by Riemannian geometry, provides a robust framework for EEG representation and learning.
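The core geometric tool in the Riemannian framework described above is the affine-invariant distance between SPD covariance matrices, d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F. A minimal sketch via eigendecomposition is shown below; it illustrates the metric only, not the thesis's full classification pipeline.

```python
import numpy as np

def spd_inv_sqrt(a):
    """Inverse matrix square root of an SPD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(a)
    return (vecs / np.sqrt(vals)) @ vecs.T

def riemann_dist(a, b):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, computed from the
    eigenvalues of the whitened matrix."""
    w = spd_inv_sqrt(a)
    m = w @ b @ w          # congruence transform; SPD, so eigh applies
    vals = np.linalg.eigvalsh(m)
    return float(np.sqrt(np.sum(np.log(vals) ** 2)))
```

Classifiers such as minimum-distance-to-mean operate directly on this distance, which respects the curved SPD geometry instead of treating covariance matrices as flat Euclidean vectors.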
217

Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment

Gaikwad, Akash S. 12 1900
Indiana University-Purdue University Indianapolis (IUPUI) / In recent years, deep learning models have become popular in real-time embedded applications, but hardware deployment is complicated by limited resources such as memory, computational power, and energy. Recent research in deep learning focuses on reducing the model size of Convolutional Neural Networks (CNNs) through compression techniques such as architectural compression, pruning, quantization, and encoding (e.g., Huffman encoding). Network pruning is one of the promising techniques for solving these problems. This thesis proposes methods to prune a convolutional neural network (SqueezeNet) without introducing sparsity into the pruned model, decreasing the model size without a significant drop in accuracy. Three methods are proposed: (1) pruning based on the Taylor expansion of the change in the cost function, Delta C; (2) pruning based on the L2 norm of the activation maps; and (3) pruning based on a combination of methods 1 and 2. These criteria are used to rank the convolution kernels and prune the lowest-ranked filters, after which the SqueezeNet model is fine-tuned by backpropagation. Transfer learning is used to train SqueezeNet on the CIFAR-10 dataset. Results show that the proposed approach reduces the SqueezeNet model size by 72% without a significant drop in accuracy (the optimal pruning-efficiency result). They also show that pruning based on the combination of the Taylor expansion of the cost function and the L2 norm of the activation maps achieves better pruning efficiency than either criterion alone, and that most of the pruned kernels come from the mid- and high-level layers. The pruned model was deployed on BlueBox 2.0 using RTMaps software, and its performance was evaluated.
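The second ranking criterion above, the L2 norm of the activation maps, can be sketched as: score each filter by the average L2 norm of its activations over a batch, then drop the lowest-scoring fraction of filters. This is an illustrative sketch with assumed tensor layouts, not the thesis's implementation.

```python
import numpy as np

def rank_filters_l2(activations):
    """Rank conv filters by the mean L2 norm of their activation maps.
    `activations` has shape (batch, filters, H, W)."""
    norms = np.sqrt((activations ** 2).sum(axis=(2, 3)))  # (batch, filters)
    return norms.mean(axis=0)                             # one score per filter

def prune_lowest(weights, scores, fraction=0.5):
    """Drop the lowest-scoring fraction of filters from a conv weight
    tensor of shape (filters, in_channels, kH, kW)."""
    keep = np.argsort(scores)[int(len(scores) * fraction):]
    return weights[np.sort(keep)]
```

After a pruning pass like this, the smaller network is fine-tuned by backpropagation to recover accuracy, as the abstract describes; the Taylor-expansion criterion would simply supply a different `scores` vector to the same `prune_lowest` step.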
218

Exploiting Multilingualism and Transfer Learning for Low Resource Machine Translation / 低リソース機械翻訳における多言語性と転移学習の活用

Prasanna, Raj Noel Dabre. 26 March 2018
Kyoto University, Doctor of Informatics (doctoral thesis 甲第21210号), Graduate School of Informatics, Department of Intelligence Science and Technology. Examination committee: Prof. 黒橋 禎夫 (chief examiner), Prof. 河原 達也, Prof. 森 信介.
219

Integrative approaches to single cell RNA sequencing analysis

Johnson, Travis Steele. 21 September 2020
No description available.
220

MULTI-SOURCE AND SOURCE-PRIVATE CROSS-DOMAIN LEARNING FOR VISUAL RECOGNITION

Qucheng Peng (12426570). 12 July 2022
Domain adaptation is one of the most active directions for addressing the annotation-scarcity problem in deep learning, but general domain adaptation does not match practical industrial scenarios. This thesis focuses on two such concerns.

First, labeled data are generally collected from multiple domains; in other words, multi-source adaptation is the more common situation. Simply extending single-source approaches to the multi-source case can cause sub-optimal inference, so specialized multi-source adaptation methods are essential. The main challenge in the multi-source scenario is a more complex divergence situation: not only does the divergence between the target and each source play a role, but the divergences among distinct sources matter as well. The significance of maintaining consistency among multiple sources did not receive enough attention in previous work. This thesis proposes an Enhanced Consistency Multi-Source Adaptation (EC-MSA) framework that addresses it from three perspectives. First, it mitigates feature-level discrepancy by cross-domain conditional alignment, narrowing the divergence between each source and the target domain class-wise. Second, it enhances multi-source consistency via dual mix-up, diminishing the disagreements among different sources. Third, it deploys a target-distilling mechanism to handle the uncertainty of target predictions, providing high-quality pseudo-labeled target samples to benefit the previous two aspects. Extensive experiments on several common benchmark datasets demonstrate that the model outperforms state-of-the-art methods.

Second, data privacy and security are necessary in practice: raw data should stay stored locally while a satisfactory model can still be obtained, greatly decreasing the risk of data leakage. It is therefore natural to combine the federated learning paradigm with domain adaptation. Under this source-private setting, the main challenge is to expose information from the source domain to the target domain while keeping the communication process safe. This thesis proposes Fourier Transform-Assisted Federated Domain Adaptation (FTA-FDA) to alleviate these difficulties in two ways. It applies the Fast Fourier Transform to the raw data and transfers only the amplitude spectra during communication; frequency-space interpolations between the two domains are then conducted, minimizing the discrepancies while keeping the raw data safe. Moreover, it performs prototype alignment using the model weights together with target features, reducing the discrepancy at the class level. Experiments on Office-31 demonstrate the effectiveness and competitiveness of the approach, and further analyses show that the algorithm helps protect privacy and security.
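The frequency-space interpolation step described above can be sketched as: take the FFT of a source image, interpolate its amplitude spectrum toward a shared target amplitude, keep the source phase, and invert. This is a minimal FTA-FDA-style illustration under assumed names and shapes, not the thesis's code.

```python
import numpy as np

def mix_amplitude(source_img, target_amp, alpha=0.5):
    """Interpolate the source image's FFT amplitude toward a target
    amplitude spectrum while keeping the source phase. Only amplitude
    spectra need to be communicated, so raw pixels stay local."""
    spec = np.fft.fft2(source_img)
    amp, phase = np.abs(spec), np.angle(spec)
    mixed_amp = (1 - alpha) * amp + alpha * target_amp
    mixed = mixed_amp * np.exp(1j * phase)
    return np.real(np.fft.ifft2(mixed))
```

With `alpha=0` the source image is recovered unchanged; increasing `alpha` moves the image's low-level style toward the target domain while its phase (content structure) is preserved.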
