81

Deep Learning in the Web Browser for Wind Speed Forecasting using TensorFlow.js / Djupinlärning i Webbläsaren för Vindhastighetsprognoser med TensorFlow.js

Moazez Gharebagh, Sara January 2023 (has links)
Deep Learning is a powerful and rapidly advancing technology that has shown promising results within the field of weather forecasting. Implementing and using deep learning models can, however, be challenging due to their complexity. One approach to potentially overcome these challenges is to run deep learning models directly in the web browser. This approach introduces several advantages, including accessibility, data privacy, and the ability to access device sensors. The ability to run deep learning models in the web browser thus opens new possibilities for research and development in areas such as weather forecasting. In this thesis, two deep learning models that run in the web browser are implemented using JavaScript and TensorFlow.js to predict wind speed in the near future. Specifically, the application of Long Short-Term Memory and Gated Recurrent Units models is investigated. The results demonstrate that both the Long Short-Term Memory and Gated Recurrent Units models achieve similar performance and are able to generate predictions that closely align with the expected patterns when the variations in the data are less significant. The best performing Long Short-Term Memory model achieved a mean squared error of 0.432, a root mean squared error of 0.657 and a mean absolute error of 0.459. The best performing Gated Recurrent Units model achieved a mean squared error of 0.435, a root mean squared error of 0.660 and a mean absolute error of 0.461.
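The thesis itself builds these models in the browser with TensorFlow.js. To keep all code sketches in this section in a single language, the following is a minimal Keras sketch of the same kind of next-step wind-speed forecaster; the layer types map one-to-one onto the tf.layers API of TensorFlow.js. The window length, layer sizes and toy data are illustrative assumptions, not the thesis configuration.

```python
# Minimal sketch of an LSTM wind-speed forecaster (Keras API; the thesis uses the
# equivalent tf.layers API in TensorFlow.js). Hyperparameters are illustrative.
import numpy as np
import tensorflow as tf

WINDOW = 24   # assumed: 24 past wind-speed readings per input window
HORIZON = 1   # predict the next reading

def make_windows(series, window=WINDOW):
    """Slice a 1-D series into (samples, window, 1) inputs and next-step targets."""
    x = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return x[..., None].astype("float32"), y.astype("float32")

# Toy data standing in for a normalized wind-speed series.
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)
x, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(32),   # swap for tf.keras.layers.GRU(32) for the GRU variant
    tf.keras.layers.Dense(HORIZON),
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError(), "mae"])
model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(x, y, verbose=0))  # [mse, rmse, mae], the metrics reported above
```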
82

Pulse Repetition Interval Modulation Classification using Machine Learning / Maskininlärning för klassificering av modulationstyp för pulsrepetitionsintervall

Norgren, Eric January 2019 (has links)
Radar signals are used for estimating the location, speed and direction of an object. Some radars emit pulses, while others emit a continuous wave. Both types of radar emit signals according to some pattern; a pulse radar, for example, emits pulses with a specific time interval between them. This time interval may be stable, change linearly, or follow some other pattern. The interval between two emitted pulses is often referred to as the pulse repetition interval (PRI), and the pattern that defines the PRI is referred to as the modulation. Classifying which PRI modulation is used in a radar signal is a crucial component of identifying who is emitting the signal. Incorrectly classifying the modulation can lead to an incorrect guess of the identity of the agent emitting the signal and can, as a consequence, be fatal. This work investigates how a long short-term memory (LSTM) neural network performs compared to a state-of-the-art feature-extraction neural network (FE-MLP) approach for the task of classifying PRI modulation. The results indicate that the proposed LSTM model performs consistently better than the FE-MLP approach across all tested noise levels. The downside of the proposed LSTM model is that it is significantly more complex than the FE-MLP approach. Future work could investigate whether the LSTM model is too complex to use in a real-world setting where computing power may be limited. Additionally, the LSTM model can, in a trivial manner, be modified to support more modulations than those tested in this work. Hence, future work could also evaluate how the proposed LSTM model performs when support for more modulations is added.
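As a rough illustration of the classification setup described above, the sketch below trains an LSTM on sequences of PRI values. The modulation classes, sequence length and synthetic noise model are assumptions made only to keep the example self-contained; they are not the thesis's actual data generation.

```python
# Sketch of an LSTM classifier for PRI modulation. The class set, sequence length and
# noise model here are illustrative assumptions, not the thesis setup.
import numpy as np
import tensorflow as tf

SEQ_LEN, N_CLASSES = 64, 3  # assumed: 64 consecutive PRIs; stable / linearly sliding / jittered

def synth_pri(kind, n=SEQ_LEN, base=1e-3):
    t = np.arange(n)
    if kind == 0:   pri = np.full(n, base)                        # stable PRI
    elif kind == 1: pri = base * (1 + 0.002 * t)                  # linearly sliding PRI
    else:           pri = base * (1 + 0.05 * np.random.randn(n))  # jittered PRI
    return pri + 1e-5 * np.random.randn(n)                        # measurement noise

x = np.stack([synth_pri(k % N_CLASSES) for k in range(3000)])[..., None].astype("float32")
y = np.array([k % N_CLASSES for k in range(3000)])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
```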
83

Link blockage modelling for channel state prediction in high-frequencies using deep learning / Länkblockeringsmodellering för förutsägelse av kanaltillstånd i höga frekvenser med djupinlärning

Chari, Shreya Krishnama January 2020 (has links)
With access to generous spectrum and the development of high-gain antenna arrays, wireless communication in higher frequency bands providing multi-gigabit short-range wireless access has become a reality. Directional antennas have been shown to reduce losses due to interfering signals but are still exposed to blockage events. These events impede overall user connectivity and throughput, and a mobile blocker such as a moving vehicle amplifies the blockage effect. Modelling blockage effects helps in understanding these events in depth and in maintaining user connectivity. This thesis proposes the use of a four-state channel model to describe blockage events in high-frequency communication. Two deep learning architectures are then designed and evaluated for two possible tasks: the prediction of the signal strength and the classification of the channel state. Evaluations based on simulated traces show high accuracy and suggest that the proposed models have the potential to be extended for deployment in real systems.
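A minimal sketch of the channel-state classification task described above, assuming the four states and the input window labelled in the comments; the actual state definitions and features come from the thesis's simulator and are not reproduced here.

```python
# Sketch of channel-state classification: a recurrent net maps a short window of
# received-power samples to one of four channel states. State names, window length
# and the toy traces are assumptions used only to make the example self-contained.
import numpy as np
import tensorflow as tf

STATES = ["LOS", "pre-blockage", "blocked", "recovery"]  # assumed four-state model
WINDOW = 50                                              # assumed samples per decision

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 1)),   # received signal strength over the window
    tf.keras.layers.GRU(32),
    tf.keras.layers.Dense(len(STATES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

def toy_trace(state):
    """Toy received-power trace standing in for simulator output."""
    base = -60 + np.random.randn(WINDOW)
    if state == 1: base -= np.linspace(0, 5, WINDOW)    # power starting to drop
    if state == 2: base -= np.linspace(0, 20, WINDOW)   # deep fade while blocked
    if state == 3: base -= np.linspace(20, 0, WINDOW)   # power coming back
    return base

y = np.random.randint(0, len(STATES), size=2000)
x = np.stack([toy_trace(s) for s in y])[..., None].astype("float32")
model.fit(x, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
```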
84

Použití rekurentních neuronových sítí pro automatické rozpoznávání řečníka, jazyka a pohlaví / Neural networks for automatic speaker, language, and sex identification

Do, Ngoc January 2016 (has links)
Title: Neural networks for automatic speaker, language, and sex identification Author: Bich-Ngoc Do Department: Institute of Formal and Applied Linguistics Supervisor: Ing. Mgr. Filip Jurek, Ph.D., Institute of Formal and Applied Linguistics and Dr. Marco Wiering, Faculty of Mathematics and Natural Sciences, University of Groningen Abstract: Speaker recognition is a challenging task with applications in many areas, such as access control or forensic science. In recent years, the deep learning paradigm and its branch, deep neural networks, have emerged as powerful machine learning techniques and achieved state-of-the-art results in many fields of natural language processing and speech technology. The aim of this work is therefore to explore the capability of a deep neural network model, recurrent neural networks, in speaker recognition. Our proposed systems are evaluated on the TIMIT corpus using a speaker identification task. In comparison with other systems under the same test conditions, our systems could not surpass the reference ones due to the sparsity of validation data. In general, our experiments show that the best system configuration is a combination of MFCCs with their dynamic features and a recurrent neural network model. We also experiment with recurrent neural networks and convolutional neural...
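The reported best configuration combines MFCCs with their dynamic (delta) features and a recurrent model. The sketch below shows one plausible version of that pipeline using librosa and Keras; the frame parameters, the recurrent layer size and the use of a plain RNN cell are assumptions, with only the TIMIT speaker count (630) taken as given.

```python
# Sketch of the reported best configuration: MFCCs plus delta features fed to a recurrent
# speaker classifier. Frame parameters and layer sizes are assumptions.
import numpy as np
import librosa
import tensorflow as tf

def mfcc_with_deltas(signal, sr=16000, n_mfcc=13):
    """Return frames of MFCCs stacked with first- and second-order deltas: (frames, 39)."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    d1 = librosa.feature.delta(mfcc)
    d2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, d1, d2]).T.astype("float32")

N_SPEAKERS = 630   # total number of speakers in TIMIT
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 39)),   # variable-length utterances
    tf.keras.layers.SimpleRNN(256),            # a plain recurrent layer; the thesis explores RNN variants
    tf.keras.layers.Dense(N_SPEAKERS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Usage on one toy utterance (white noise standing in for TIMIT audio):
feats = mfcc_with_deltas(np.random.randn(16000).astype("float32"))
print(model.predict(feats[None, ...]).shape)   # (1, 630) speaker posterior
```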
85

Identification of Online Users' Social Status via Mining User-Generated Data

Zhao, Tao 05 September 2019 (has links)
No description available.
86

Relation Classification using Semantically-Enhanced Syntactic Dependency Paths : Combining Semantic and Syntactic Dependencies for Relation Classification using Long Short-Term Memory Networks

Capshaw, Riley January 2018 (has links)
Many approaches to solving tasks in the field of Natural Language Processing (NLP) use syntactic dependency trees (SDTs) as a feature to represent the latent nonlinear structure within sentences. Recently, work in parsing sentences to graph-based structures which encode semantic relationships between words—called semantic dependency graphs (SDGs)—has gained interest. This thesis seeks to explore the use of SDGs in place of and alongside SDTs within a relation classification system based on long short-term memory (LSTM) neural networks. Two methods for handling the information in these graphs are presented and compared between two SDG formalisms. Three new relation extraction system architectures have been created based on these methods and are compared to a recent state-of-the-art LSTM-based system, showing comparable results when semantic dependencies are used to enhance syntactic dependencies, but with significantly fewer training parameters.
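A hedged sketch of the core architecture such a system typically uses: the token sequence along the dependency path between the two entity mentions is embedded, encoded by a bidirectional LSTM and classified. The vocabulary size, path length and relation inventory below are assumptions, not the thesis's settings.

```python
# Sketch of relation classification over a dependency path: the word/label sequence along
# the path between the two entity mentions is embedded, encoded by an LSTM, and classified.
# Vocabulary, path encoding and the relation inventory are illustrative assumptions.
import tensorflow as tf

VOCAB, N_RELATIONS, MAX_PATH = 20_000, 19, 12   # e.g. the 19 SemEval-2010 Task 8 classes

path_in = tf.keras.Input(shape=(MAX_PATH,), dtype="int32")       # token ids along the path
h = tf.keras.layers.Embedding(VOCAB, 100, mask_zero=True)(path_in)
h = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(100))(h)  # read the path in both directions
out = tf.keras.layers.Dense(N_RELATIONS, activation="softmax")(h)
model = tf.keras.Model(path_in, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()

# A semantically-enhanced variant would feed a second, parallel sequence of semantic-dependency
# (SDG) labels and concatenate the two encodings before the softmax layer.
```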
87

Detekce osob a hodnocení jejich pohlaví a věku v obrazových datech / Detection of persons and evaluation of gender and age in image data

Dobiš, Lukáš January 2020 (has links)
This master's thesis deals with the automatic recognition of people in image data, using convolutional neural networks to locate faces and subsequently analysing the extracted data. The face analysis determines a person's gender, emotion and age. The thesis describes the convolutional network architectures used for each subtask. The age-estimation network is trained with new weights, which are then frozen, and LSTM layers are inserted into its architecture. These layers are fine-tuned separately and tested on a new dataset created for this purpose. The test results show improved age prediction. A solution for fast, robust and modular detection of faces and other human features from a single image or video is presented as a combination of interconnected convolutional networks. These are implemented as a script and subsequently explained. Their speed is sufficient for further face analysis on live image data.
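A sketch of the age-estimation modification described above, under the assumption that the frozen pretrained CNN processes a short sequence of face crops whose per-frame features are then passed through the inserted LSTM layers. The backbone choice, sequence length and layer sizes are hypothetical.

```python
# Sketch of the described age-estimation setup: a pretrained CNN backbone is frozen and
# LSTM layers are added on top so age is predicted from a short sequence of face crops.
# Backbone, sequence length and layer sizes are assumptions, not the thesis's exact network.
import tensorflow as tf

SEQ, H, W = 8, 224, 224   # assumed: 8 consecutive face crops per person

backbone = tf.keras.applications.ResNet50(include_top=False, pooling="avg",
                                          input_shape=(H, W, 3))
backbone.trainable = False                                   # frozen convolutional weights

frames = tf.keras.Input(shape=(SEQ, H, W, 3))
feats = tf.keras.layers.TimeDistributed(backbone)(frames)    # (batch, SEQ, 2048) per-frame features
x = tf.keras.layers.LSTM(128)(feats)                         # the inserted, separately trained layers
age = tf.keras.layers.Dense(1, activation="relu")(x)         # regressed age in years
model = tf.keras.Model(frames, age)
model.compile(optimizer="adam", loss="mae")
model.summary()
```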
88

Modeling functional brain activity of human working memory using deep recurrent neural networks

Sainath, Pravish 12 1900 (has links)
In cognitive systems, the role of working memory is crucial for visual reasoning and decision making. Tremendous progress has been made in understanding the mechanisms of human and animal working memory, as well as in formulating different frameworks of memory-augmented artificial neural networks. The overall objective of our project is to train artificial neural network models that are capable of consolidating memory over a short period of time to solve a memory task, and to relate them to the brain activity of humans who solved the same task. The project is interdisciplinary in nature, trying to bridge aspects of Artificial Intelligence (deep learning) and Neuroscience. The cognitive task used is the N-back task, a very popular one in Cognitive Neuroscience, in which subjects are presented with a sequence of images, each of which needs to be identified as to whether it was already seen or not. The functional imaging (fMRI) dataset used was collected as part of the Courtois NeuroMod Project. We study multiple variants of recurrent neural network models that learn to remember input images across timesteps. These trained neural networks, optimized for the memory task, are ultimately used to generate feature representations for the stimulus images seen by the human subjects during their recordings while solving the task. The representations derived from these neural networks are then used to create an encoding model to predict the fMRI BOLD activity of the subjects. We then examine the relationship between the neural network model and brain activity by analyzing the predictive ability of the model in different areas of the brain that are involved in working memory. This work presents a way of using artificial neural networks to model the behavior and information processing of the brain's working memory, and of using brain imaging data captured from human subjects during the N-back task to potentially understand some memory mechanisms of the brain in relation to these artificial neural network models.
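The encoding-model step can be illustrated with a simple ridge regression from per-stimulus network features to voxel-wise BOLD responses, as sketched below; all shapes, the train/test split and the regularization strength are assumptions, and random arrays stand in for the actual data.

```python
# Sketch of the encoding-model step: a ridge regression maps the network's features for each
# stimulus image to the voxel-wise BOLD response recorded while the subject saw that image.
# Shapes, split and regularization strength are assumptions; random arrays replace real data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_stimuli, n_features, n_voxels = 500, 256, 1000
features = np.random.randn(n_stimuli, n_features)   # stand-in for RNN activations per stimulus
bold = np.random.randn(n_stimuli, n_voxels)         # stand-in for preprocessed fMRI responses

f_tr, f_te, b_tr, b_te = train_test_split(features, bold, test_size=0.2, random_state=0)
encoder = Ridge(alpha=10.0).fit(f_tr, b_tr)
pred = encoder.predict(f_te)

# Voxel-wise prediction accuracy: correlation between predicted and measured BOLD,
# which can then be examined within working-memory regions of interest.
r = np.array([np.corrcoef(pred[:, v], b_te[:, v])[0, 1] for v in range(n_voxels)])
print("median voxel correlation:", np.median(r))
```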
89

Anomaly Detection and Security Deep Learning Methods Under Adversarial Situation

Miguel Villarreal-Vasquez (9034049) 27 June 2020 (has links)
Advances in Artificial Intelligence (AI), or more precisely in Neural Networks (NNs), and fast processing technologies (e.g., Graphics Processing Units, or GPUs) in recent years have positioned NNs as one of the main machine learning algorithms used to solve a diversity of problems in both academia and industry. While they have proved effective in solving many tasks, the lack of security guarantees and of understanding of their internal processing hinders their wide adoption in general and cybersecurity-related applications. In this dissertation, we present the findings of a comprehensive study aimed at enabling the adoption of state-of-the-art NN algorithms in the development of enterprise solutions. Specifically, this dissertation focuses on (1) the development of defensive mechanisms to protect NNs against adversarial attacks and (2) the application of NN models for anomaly detection in enterprise networks.

In this state of affairs, this work makes the following contributions. First, we performed a thorough study of the different adversarial attacks against NNs. We concentrate on the attacks referred to as trojan attacks and introduce a novel model hardening method that removes any trojan (i.e., misbehavior) inserted into NN models at training time. We carefully evaluate our method and establish the correct metrics to test the efficiency of defensive methods against these types of attacks: (1) accuracy with benign data, (2) attack success rate, and (3) accuracy with adversarial data. Prior work evaluates solutions using only the first two metrics, which do not suffice to guarantee robustness against untargeted attacks. Our method is compared with the state of the art, and the results show that it outperforms it. Second, we propose a novel approach to detect anomalies using LSTM-based models. Our method analyzes, at runtime, the event sequences generated by the Endpoint Detection and Response (EDR) system of a renowned security company and efficiently detects uncommon patterns. The new detection method is compared with the EDR system; the results show that our method achieves a higher detection rate. Finally, we present a Moving Target Defense technique that smartly reacts upon the detection of anomalies so as to also mitigate the detected attacks. The technique efficiently replaces the entire stack of virtual nodes, making ongoing attacks on the system ineffective.
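A minimal sketch of the idea behind the second contribution: an LSTM trained to predict the next event in benign EDR traces, with low-probability events scored as anomalies. The event vocabulary, window size and scoring are assumptions, not the dissertation's implementation.

```python
# Sketch of LSTM-based anomaly detection over event sequences: the model is trained to
# predict the next event id, and a sequence is flagged when observed events receive low
# probability. Event vocabulary, window size and threshold are illustrative assumptions.
import numpy as np
import tensorflow as tf

N_EVENTS, WINDOW = 200, 20     # assumed event-type vocabulary and context length

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW,), dtype="int32"),
    tf.keras.layers.Embedding(N_EVENTS, 32),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_EVENTS, activation="softmax"),   # distribution over the next event
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Training data: sliding windows over benign event traces (toy random traces here).
trace = np.random.randint(0, N_EVENTS, size=10_000)
x = np.stack([trace[i:i + WINDOW] for i in range(len(trace) - WINDOW)])
y = trace[WINDOW:]
model.fit(x, y, epochs=1, batch_size=128, verbose=0)

def anomaly_score(window, next_event):
    """Negative log-probability of the observed next event; high values mark uncommon patterns."""
    probs = model.predict(window[None, :], verbose=0)[0]
    return -np.log(probs[next_event] + 1e-9)
```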
90

Comparing decentralized learning to Federated Learning when training Deep Neural Networks under churn

Vikström, Johan January 2021 (has links)
Decentralized Machine Learning could address some problematic facets of Federated Learning: there is no central server acting as an arbiter of who or what may benefit from the Machine Learning models created from the vast amount of data that has become available in recent years. It could also increase the reliability and scalability of Machine Learning systems, thereby increasing the benefit of having more data accessible. Gossip Learning is such a protocol, but it has primarily been designed with linear models in mind. How does Gossip Learning perform when training Deep Neural Networks? Could it be a viable alternative to Federated Learning? In this thesis, we implement Gossip Learning using two different model merging strategies. We also design and implement two extensions to this protocol with the goal of achieving higher performance when training under churn. The training methods are compared on two tasks: image classification on the Federated Extended MNIST dataset and time-series forecasting on the NN5 dataset. Additionally, we run an experiment where learners churn, alternating between being available and unavailable. We find that Gossip Learning performs slightly better in settings where learners do not churn, but is vastly outperformed in the setting where they do.
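The simplest model merging strategy in this family is parameter averaging; the sketch below shows one gossip step under that assumption, leaving out networking, model ageing and churn handling, so it is not the thesis's exact protocol.

```python
# Sketch of one Gossip Learning round with the simplest merging strategy, parameter
# averaging: a peer that receives a neighbour's model averages the weights into its own
# and then keeps training locally. Networking, model age weighting and churn handling
# are omitted; this is an assumption-laden illustration, not the thesis's protocol.
import tensorflow as tf

def make_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28 * 28,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(62, activation="softmax"),   # Federated Extended MNIST has 62 classes
    ])

def merge_average(local, received):
    """Replace the local weights with the element-wise mean of both models' weights."""
    merged = [(w_l + w_r) / 2.0
              for w_l, w_r in zip(local.get_weights(), received.get_weights())]
    local.set_weights(merged)

peer_a, peer_b = make_model(), make_model()
merge_average(peer_a, peer_b)   # peer A "gossips" with peer B
# Peer A would now run a few local SGD steps on its own data before sending its model onward.
```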
