121

Zlepšování systému pro automatické hraní hry Starcraft II v prostředí PySC2 / Improving Bots Playing Starcraft II Game in PySC2 Environment

Krušina, Jan January 2018 (has links)
The aim of this thesis is to create an automated system for playing the real-time strategy game Starcraft II. Learning from replays via supervised learning and reinforcement learning techniques is used to improve the bot's behavior. The proposed system should be capable of playing the whole game using the PySC2 framework for machine learning. The performance of the bot is evaluated against the built-in scripted AI in the game.
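No code accompanies this record; as a rough illustration of the environment the abstract refers to, a minimal PySC2 agent loop might look as follows (the agent class, map, opponent and interface settings are assumptions, not the author's system):

```python
# Minimal PySC2 agent loop: an illustrative sketch, not the thesis' system.
from absl import app
from pysc2.agents import base_agent
from pysc2.env import run_loop, sc2_env
from pysc2.lib import actions, features


class NoOpAgent(base_agent.BaseAgent):
    """Trivial agent that issues no-ops; a learned policy would go in step()."""

    def step(self, obs):
        super(NoOpAgent, self).step(obs)
        # obs.observation holds the feature layers; a real bot would pick an
        # action from obs.observation.available_actions here.
        return actions.FUNCTIONS.no_op()


def main(unused_argv):
    agent = NoOpAgent()
    with sc2_env.SC2Env(
        map_name="Simple64",                      # illustrative map choice
        players=[sc2_env.Agent(sc2_env.Race.terran),
                 sc2_env.Bot(sc2_env.Race.random, sc2_env.Difficulty.easy)],
        agent_interface_format=features.AgentInterfaceFormat(
            feature_dimensions=features.Dimensions(screen=84, minimap=64)),
        step_mul=8,
        game_steps_per_episode=0,                 # run until the game ends
    ) as env:
        run_loop.run_loop([agent], env, max_episodes=1)


if __name__ == "__main__":
    app.run(main)
```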
122

Automatické hodnocení anglické výslovnosti nerodilých mluvčích / Automatic Pronunciation Evaluation of Non-Native English Speakers

Gazdík, Peter January 2019 (has links)
Computer-Assisted Pronunciation Training (CAPT) is becoming more and more popular these days. However, the accuracy of existing CAPT systems is still quite low. This diploma thesis therefore focuses on improving existing methods for automatic pronunciation evaluation at the segmental level. The first part describes common techniques for this task. We then propose a system based on two approaches. Finally, the performed experiments show a significant improvement over the reference system.
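The abstract does not name the two approaches; purely as a hedged illustration of segment-level scoring in CAPT, the classic Goodness of Pronunciation (GOP) measure can be sketched as below (the posterior array, phone inventory and decision threshold are stand-ins, not the thesis' system):

```python
import numpy as np

def gop_score(posteriors, canonical_phone, start, end):
    """Goodness of Pronunciation for one segment.

    posteriors: (num_frames, num_phones) frame-level phone posteriors
                from an acoustic model (assumed given).
    canonical_phone: index of the phone the speaker intended to produce.
    start, end: frame boundaries of the segment (from a forced alignment).
    """
    seg = posteriors[start:end]                       # frames of this segment
    target = np.log(seg[:, canonical_phone] + 1e-10)  # log p(canonical | frame)
    best = np.log(seg.max(axis=1) + 1e-10)            # log p of best competing phone
    # Classic GOP: average log-ratio of the canonical phone vs. the most likely one.
    return float(np.mean(target - best))

# Illustrative use: flag a segment as mispronounced below a tuned threshold.
rng = np.random.default_rng(0)
post = rng.dirichlet(np.ones(40), size=120)           # fake posteriors, 40 phones
score = gop_score(post, canonical_phone=7, start=30, end=55)
is_mispronounced = score < -1.0                       # threshold is illustrative
```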
123

Hluboké neuronové sítě / Deep Neural Networks

Habrnál, Matěj January 2014 (has links)
The thesis addresses the topic of Deep Neural Networks, in particular the Deep Learning methods used to initialize the weights and to drive the learning process itself within Deep Neural Networks. Attention is also given to the basic theory of classical Neural Networks, which is important for a comprehensive understanding of the issue. The aim of this work is to determine the optimal set of optional parameters of these algorithms on image recognition tasks of varying complexity, through experiments with an application built around Deep Neural Networks. Furthermore, an evaluation and analysis of the results and lessons learned from the experimentation with classical and Deep Neural Networks are integrated into the thesis.
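The record includes no code; purely as an illustration of one classic weight-initialization scheme from this era of Deep Learning, greedy layer-wise autoencoder pretraining can be sketched in PyTorch as follows (layer sizes and hyperparameters are placeholders, not the thesis' setup):

```python
import torch
from torch import nn

def pretrain_layerwise(layer_sizes, data, epochs=5, lr=1e-3):
    """Greedy layer-wise autoencoder pretraining, one classic initialization
    scheme from the pre-2015 deep-learning literature; returns initialized
    encoder layers that can then be fine-tuned end-to-end with backpropagation."""
    layers, inputs = [], data
    for in_dim, out_dim in zip(layer_sizes[:-1], layer_sizes[1:]):
        enc = nn.Linear(in_dim, out_dim)
        dec = nn.Linear(out_dim, in_dim)
        opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
        for _ in range(epochs):
            recon = dec(torch.sigmoid(enc(inputs)))   # reconstruct this layer's input
            loss = nn.functional.mse_loss(recon, inputs)
            opt.zero_grad(); loss.backward(); opt.step()
        layers.append(enc)
        inputs = torch.sigmoid(enc(inputs)).detach()  # feed codes to the next layer
    return layers

# Illustrative: pretrain a 784-256-64 encoder on random stand-in "images".
x = torch.rand(512, 784)
encoder_layers = pretrain_layerwise([784, 256, 64], x)
```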
124

Novel Instances and Applications of Shared Knowledge in Computer Vision and Machine Learning Systems

Synakowski, Stuart R. January 2021 (has links)
No description available.
125

Improving the Robustness of Deep Neural Networks against Adversarial Examples via Adversarial Training with Maximal Coding Rate Reduction / Förbättra Robustheten hos Djupa Neurala Nätverk mot Exempel på en Motpart genom Utbildning för motståndare med Maximal Minskning av Kodningshastigheten

Chu, Hsiang-Yu January 2022 (has links)
Deep learning is one of the hottest scientific topics at the moment. Deep convolutional networks can solve various complex tasks in the field of image processing. However, adversarial attacks have been shown to be able to fool deep learning models. An adversarial attack is carried out by applying specially designed perturbations to the input image of a deep learning model. The perturbations are almost visually indistinguishable to human eyes, but can fool classifiers into making wrong predictions. In this thesis, adversarial attacks and methods to improve the robustness of deep learning models against adversarial examples were studied. Five different adversarial attack algorithms were implemented. These attack algorithms included white-box and black-box attacks, targeted and non-targeted attacks, and image-specific and universal attacks. The adversarial attacks generated adversarial examples that resulted in a significant drop in classification accuracy. Adversarial training is one commonly used strategy to improve the robustness of deep learning models against adversarial examples. It has been shown that adversarial training can provide an additional regularization benefit beyond that provided by using dropout. Adversarial training is performed by incorporating adversarial examples into the training process. Traditionally, cross-entropy loss is used as the loss function during this process. In order to improve the robustness of deep learning models against adversarial examples, this thesis proposes two new methods of adversarial training that apply the principle of Maximal Coding Rate Reduction. The Maximal Coding Rate Reduction loss function maximizes the coding rate difference between the whole data set and the sum over each individual class. We evaluated the performance of different adversarial training methods by comparing clean accuracy, adversarial accuracy and local Lipschitzness. Adversarial training with the Maximal Coding Rate Reduction loss function was shown to yield a more robust network than the traditional adversarial training method.
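As a hedged sketch of the two ingredients named in the abstract, the coding-rate objective (following the published Maximal Coding Rate Reduction formulation rather than the thesis' exact code) and a one-step FGSM perturbation for adversarial training might be written as follows; the feature/loss plumbing is schematic:

```python
import torch

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z Z^T), where Z is (d, n) features."""
    d, n = Z.shape
    I = torch.eye(d, device=Z.device)
    return 0.5 * torch.logdet(I + (d / (n * eps ** 2)) * Z @ Z.T)

def mcr2_loss(features, labels, num_classes, eps=0.5):
    """Negative coding-rate difference: minimizing this maximizes
    R(whole batch) - sum_j (n_j / n) * R(class j), computed on the
    model's (e.g. normalized) feature representation."""
    Z = features.T                                  # (d, n)
    n = Z.shape[1]
    rate_whole = coding_rate(Z, eps)
    rate_classes = 0.0
    for j in range(num_classes):
        Zj = Z[:, labels == j]
        if Zj.shape[1] == 0:
            continue
        rate_classes = rate_classes + (Zj.shape[1] / n) * coding_rate(Zj, eps)
    return -(rate_whole - rate_classes)

def fgsm(model, x, y, loss_fn, alpha=8 / 255):
    """One-step FGSM perturbation used to build adversarial training batches."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + alpha * grad.sign()).clamp(0, 1).detach()
```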
126

The Role of Temporal Fine Structure in Everyday Hearing

Agudemu Borjigin (12468234) 28 April 2022 (has links)
This thesis aims to investigate how one fundamental component of the inner-ear (cochlear) response to all sounds, the temporal fine structure (TFS), is used by the auditory system in everyday hearing. Although it is well known that neurons in the cochlea encode the TFS through exquisite phase locking, how this initial/peripheral temporal code contributes to everyday hearing and how its degradation contributes to perceptual deficits are foundational questions in auditory neuroscience and clinical audiology that remain unresolved despite extensive prior research. This is largely because the conventional approach to studying the role of TFS involves performing perceptual experiments with acoustic manipulations of stimuli (such as sub-band vocoding), rather than direct physiological or behavioral measurements of TFS coding, and hence is intrinsically limited. The present thesis addresses these gaps in three parts: 1) developing assays that can quantify TFS coding at the individual level, 2) comparing individual differences in TFS coding to differences in speech-in-noise perception across a range of real-world listening conditions, and 3) developing deep neural network (DNN) models of speech separation/enhancement to complement the individual-difference approach. By comparing behavioral and electroencephalogram (EEG)-based measures, Part 1 of this work identified a robust test battery that measures TFS processing in individual humans. Using this battery, Part 2 subdivided a large sample of listeners (N=200) into groups with “good” and “poor” TFS sensitivity. A comparison of speech-in-noise scores under a range of listening conditions between the groups revealed that good TFS coding reduces the negative impact of reverberation on speech intelligibility, and leads to reduced reaction times suggesting lessened listening effort. These results raise the possibility that cochlear implant (CI) sound coding strategies could be improved by attempting to provide usable TFS information, and that these individualized TFS assays can also help predict listening outcomes in reverberant, real-world listening environments. Finally, the DNN models (Part 3) introduced significant improvements in speech quality and intelligibility, as evidenced by all acoustic evaluation metrics and test results from CI listeners (N=8). These models can be incorporated as “front-end” noise-reduction algorithms in hearing assistive devices, as well as complement other approaches by serving as a research tool to help generate and rapidly sub-select the most viable hypotheses about the role of TFS coding in complex listening scenarios.
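The envelope/TFS split referred to here is commonly operationalized with the Hilbert transform; a short stand-alone sketch of that decomposition (not code from the thesis, and the test signal is made up) is:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_and_tfs(signal):
    """Split a (sub-band) signal into its Hilbert envelope and temporal fine
    structure: s(t) = env(t) * cos(phi(t)), with the TFS carried by cos(phi(t))."""
    analytic = hilbert(signal)
    envelope = np.abs(analytic)            # slow amplitude modulation
    tfs = np.cos(np.angle(analytic))       # fast carrier / fine structure
    return envelope, tfs

# Illustrative: a 500 Hz tone with a 4 Hz amplitude modulation.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)
env, tfs = envelope_and_tfs(x)
```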
127

[en] DEEP LEARNING NEURAL NETWORKS FOR THE IDENTIFICATION OF AROUSALS RELATED TO RESPIRATORY EVENTS USING POLYSOMNOGRAPHIC EEG SIGNALS / [pt] REDES NEURAIS DE APRENDIZADO PROFUNDO PARA A IDENTIFICAÇÃO DE DESPERTARES RELACIONADOS A EVENTOS RESPIRATÓRIOS USANDO SINAIS EEG POLISSONOGRÁFICOS

MARIA LEANDRA GUATEQUE JARAMILLO 31 May 2021 (has links)
[en] For the diagnosis of sleep disorders, one of the most commonly used tests is polysomnography (PSG), in which a variety of physiological signals are recorded. The PSG recording is reviewed by a sleep specialist, a process that can take a long time and may lead to misinterpretation. This work develops and compares the performance of four classification systems based on deep learning neural networks, more specifically convolutional neural networks (CNN) and Long-Short Term Memory (LSTM) recurrent networks, for the identification of Respiratory Effort-Related Arousals (RERA) and of arousal events related to apnea/hypopnea. For this research, only six electroencephalogram (EEG) channels from 994 overnight polysomnography records of the PhysioNet CinC Challenge 2018 database were used, and class weights and Focal Loss were employed to deal with class imbalance. Accuracy, AUROC and AUPRC were used as performance metrics for evaluating each system. The best results on the test set were obtained with the CNN1 model, with an Accuracy, AUROC and AUPRC of 0.8404, 0.8885 and 0.8141 respectively, and the RCNN2 model, with an Accuracy, AUROC and AUPRC of 0.8214, 0.8915 and 0.8097 respectively. The remaining results confirmed that deep learning neural networks handle EEG time-series data better than traditional machine learning algorithms, and that the use of techniques such as class weights and Focal Loss improves system performance.
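Focal Loss is a standard, published remedy for class imbalance; a hedged PyTorch sketch of a binary focal loss of the kind described (hyperparameters are illustrative, not the thesis' settings) is:

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), applied per sample.
    logits: raw model outputs; targets: 0/1 labels (arousal vs. background)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = targets * p + (1 - targets) * (1 - p)
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Illustrative call on a batch of EEG windows scored by a CNN.
logits = torch.randn(8)
labels = torch.tensor([1., 0., 0., 1., 0., 0., 0., 1.])
loss = binary_focal_loss(logits, labels)
```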
128

Transformer Offline Reinforcement Learning for Downlink Link Adaptation

Mo, Alexander January 2023 (has links)
Recent advancements in Transformers have unlocked a new relational analysis technique for Reinforcement Learning (RL). This thesis investigates such models for DownLink Link Adaptation (DLLA). Radio resource management methods such as DLLA form a critical facet of radio-access networks, where intricate optimization problems are continuously resolved under strict latency constraints on the order of milliseconds. Although previous work has showcased improved downlink throughput with an online RL approach, the time dependence of DLLA obstructs its wider adoption. Consequently, this thesis ventures into uncharted territory by extending the DLLA framework with sequence modelling to fit the Transformer architecture. The objective of this thesis is to assess the efficacy of an autoregressive, sequence-modelling-based offline RL Transformer model for DLLA using a Decision Transformer. Experimentally, the thesis demonstrates that the attention mechanism models the environment dynamics effectively. However, the Decision Transformer framework falls short of the baseline in performance, calling for a different Transformer model.
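A Decision Transformer conditions each predicted action on a return-to-go alongside past states and actions; a small sketch of that sequence construction (the rewards and the tokenization comment are schematic, not the thesis' implementation) is:

```python
import numpy as np

def returns_to_go(rewards, gamma=1.0):
    """R_t = sum over k >= t of gamma^(k-t) * r_k; the return target fed to the
    Decision Transformer at every timestep of an offline trajectory."""
    rtg = np.zeros_like(rewards, dtype=float)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

# Illustrative offline trajectory: per-TTI throughput rewards from a logged
# link-adaptation policy (values made up for the example).
rewards = np.array([0.0, 1.2, 0.8, 0.0, 2.1])
rtg = returns_to_go(rewards)
# The model then sees interleaved (return-to-go, state, action) tokens:
# (rtg[0], s0, a0, rtg[1], s1, a1, ...) and is trained to predict each a_t.
```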
129

Conditional generative modeling for images, 3D animations, and video

Voleti, Vikram 07 1900 (has links)
Generative modeling for computer vision has shown immense progress in the last few years, revolutionizing the way we perceive, understand, and manipulate visual data. This rapidly evolving field has witnessed advancements in image generation, 3D animation, and video prediction that unlock diverse applications across multiple fields including entertainment, design, healthcare, and education. As the demand for sophisticated computer vision systems continues to grow, this dissertation attempts to drive innovation in the field by exploring novel formulations of conditional generative models, and innovative applications in images, 3D animations, and video. Our research focuses on architectures that offer reversible transformations of noise and visual data, and on the application of encoder-decoder architectures for generative tasks and 3D content manipulation. In all instances, we incorporate conditional information to enhance the synthesis of visual data, improving the efficiency of the generation process as well as the generated content. Prior successful generative techniques that are reversible between noise and data include normalizing flows and denoising diffusion models. The continuous variant of normalizing flows is powered by Neural Ordinary Differential Equations (Neural ODEs) and has shown some success in modeling the real-image distribution; however, it often involves a huge number of parameters and a long training time. Denoising diffusion models have recently gained huge popularity for their generalization capabilities, especially in text-to-image applications. In this dissertation, we introduce the use of Neural ODEs to model video dynamics using an encoder-decoder architecture, demonstrating their ability to predict future video frames despite being trained solely to reconstruct current frames. In our next contribution, we propose a conditional variant of continuous normalizing flows that enables higher-resolution image generation based on lower-resolution input. This allows us to achieve image quality comparable to regular normalizing flows, while significantly reducing the number of parameters and training time. Our next contribution focuses on a flexible encoder-decoder architecture for accurate estimation and editing of full 3D human pose. We present a comprehensive pipeline that takes human images as input, automatically aligns a user-specified 3D human/non-human character with the pose of the human, and facilitates pose editing based on partial input information. We then proceed to use denoising diffusion models for image and video generation. Regular diffusion models use a Gaussian process to add noise to clean images. In our next contribution, we derive the relevant mathematical details for denoising diffusion models that use non-isotropic Gaussian processes, present non-isotropic noise, and show that the quality of generated images is comparable with the original formulation. In our final contribution, we devise a novel framework building on denoising diffusion models that is capable of solving all three video tasks of prediction, generation, and interpolation. We perform ablation studies using this framework, and show state-of-the-art results on multiple datasets. Our contributions have been published as articles at peer-reviewed venues. Overall, our research aims to make a meaningful contribution to the pursuit of more efficient and flexible generative models, with the potential to shape the future of computer vision.
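The denoising diffusion models discussed here share a closed-form forward corruption; a compact sketch of sampling x_t from x_0 under a standard isotropic Gaussian schedule (the non-isotropic generalization studied in the thesis is not reproduced here, and the schedule values are the usual defaults) is:

```python
import torch

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and the cumulative alpha-bar used in closed form."""
    betas = torch.linspace(beta_start, beta_end, T)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    return betas, alpha_bar

def q_sample(x0, t, alpha_bar):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    noise = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, 1, 1, 1)          # broadcast over image dims
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise
    return xt, noise                              # noise is the training target

# Illustrative: corrupt a batch of 64x64 frames at random timesteps.
x0 = torch.rand(4, 3, 64, 64)
_, alpha_bar = make_schedule()
t = torch.randint(0, 1000, (4,))
xt, eps = q_sample(x0, t, alpha_bar)
```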
130

Road Segmentation and Optimal Route Prediction using Deep Neural Networks and Graphs / Vägsegmentering och förutsägelse av optimala rutter genom djupa neurala nätverk och grafer

Ossmark, Viktor January 2021 (has links)
Observing the earth from above is a great way of understanding our world better. From space, many complex patterns and relationships on the ground can be identified through high-quality satellite data. The quality and availability of this data, in combination with recent advancements in various deep learning techniques, allow us to find these patterns more effectively than ever. In this thesis, we analyze satellite imagery using deep neural networks in an attempt to find road networks in different cities around the world. Once we have located networks of roads in the cities, we represent them as graphs and deploy Dijkstra's shortest path algorithm to find optimal routes within these networks. Being able to efficiently use satellite imagery for near real-time road detection and optimal route prediction has many possible applications, especially from a humanitarian and commercial point of view. For example, in the humanitarian realm, the frequency of natural disasters is unfortunately increasing due to climate change, and the need for emergency real-time mapping for relief organisations in the case of a severe flood or similar is growing. The state-of-the-art deep neural network models that are implemented, compared and contrasted for this task are mainly based on the U-net and ResNet architectures. Before introducing these architectures, the reader is given a comprehensive introduction and theoretical background on deep neural networks to distinctly formulate the mathematical groundwork. The final results demonstrate overall strong model performance across different metrics and data sets, with the highest obtained IoU score being approximately 0.7 for the segmentation task. For some models we can also see a high degree of similarity between the predicted optimal paths and the ground-truth optimal paths.
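Once a segmentation mask has been converted into a graph, the routing step is plain Dijkstra; a self-contained sketch on a small weighted graph (the graph itself is illustrative, not data from the thesis) is:

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path on a weighted graph given as {node: [(neighbor, cost), ...]}."""
    dist = {source: 0.0}
    prev = {}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    # Reconstruct the route by walking predecessors back from the target.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], dist[target]

# Illustrative intersection graph extracted from a segmented road mask.
roads = {"A": [("B", 2.0), ("C", 5.0)], "B": [("C", 1.0), ("D", 4.0)],
         "C": [("D", 1.0)], "D": []}
route, cost = dijkstra(roads, "A", "D")   # -> (['A', 'B', 'C', 'D'], 4.0)
```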
