101 |
Passive RFID Module with LSTM Recurrent Neural Network Activity Classification Algorithm for Ambient Assisted Living. Oguntala, George A., Hu, Yim Fun, Alabdullah, Ali A.S., Abd-Alhameed, Raed, Ali, Muhammad, Luong, D.K. 23 March 2021 (has links)
Human activity recognition from sensor data is a critical research topic for remote health monitoring and ambient assisted living (AAL). In AAL, sensors are integrated into everyday objects to support the target's capabilities through digital environments that are sensitive, responsive and adaptive to human activities. Emerging technological paradigms to support AAL within the home or community setting offer people the prospect of more individually focused care and improved quality of living. In the present work, an ambient human activity classification framework is proposed that uses information from the received signal strength indicator (RSSI) of passive RFID tags to obtain detailed activity profiling. Key indices of position, orientation, mobility and degree of activity, which are critical for guiding reliable clinical management decisions, are measured with four volunteers to simulate the research objective. A two-layer, fully connected sequence long short-term memory recurrent neural network (LSTM RNN) model is employed. The LSTM RNN extracts RSSI features from the sensor data and classifies the sampled activities using a softmax output layer. The performance of the LSTM model is evaluated for different data sizes, and the hyper-parameters of the RNN are tuned to their optimal states, resulting in an accuracy of 98.18%. The proposed framework is well suited to smart health and smart homes, which offer a pervasive sensing environment for the elderly and for persons with disabilities or chronic illness.
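Below is a minimal Keras sketch of the kind of two-layer LSTM classifier with a softmax output described above; the window length, tag count, number of activity classes and layer sizes are illustrative assumptions, and random placeholder data stands in for the RSSI recordings.

```python
# Minimal sketch of a two-layer LSTM activity classifier over RSSI windows,
# loosely following the architecture described in the abstract. The window
# length, number of tags and activity classes are illustrative assumptions.
import numpy as np
import tensorflow as tf

TIME_STEPS = 50      # RSSI samples per window (assumed)
NUM_TAGS = 4         # passive RFID tags observed per sample (assumed)
NUM_ACTIVITIES = 6   # activity classes to recognise (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIME_STEPS, NUM_TAGS)),
    tf.keras.layers.LSTM(64, return_sequences=True),   # first LSTM layer
    tf.keras.layers.LSTM(64),                           # second LSTM layer
    tf.keras.layers.Dense(NUM_ACTIVITIES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on windows of RSSI values with integer activity labels (placeholder data).
X = np.random.randn(256, TIME_STEPS, NUM_TAGS).astype("float32")
y = np.random.randint(0, NUM_ACTIVITIES, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```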
|
102 |
Исследование методов обработки естественного языка для классификации медицинских текстов разной длины : магистерская диссертация / Study of methods of natural language processing for classification of medical texts of different lengths. Маяцкая, Е. A., Mayatskaya, E. A. January 2024 (has links)
The object of the study is the classification of medical text sequences of different lengths. The subject of the study is methods for creating vector representations of text data, as well as algorithms capable of processing data without restrictions on sequence length. The goal of this master's thesis is to study natural language processing methods for the classification of medical texts of different lengths. Research methods: analysis, mathematical modeling, synthesis, comparison, experiment. The results of the work are a review of existing methods for processing long texts, a collected dataset of more than 18,000 medical texts, and a developed approach that processes long texts and accelerates the transformer model when encoding texts of different lengths. In the analysis, the developed approach achieved the best classification results and inference time compared to the other methods considered in the work. / Объект исследования – классификация медицинских текстовых последовательностей разной длины. Предметом исследования являются методы по созданию векторного представления текстовых данных, а также алгоритмы способные обрабатывать данные без ограничения на длину последовательности. Цель выпускной квалификационной работы магистра – исследование методов обработки естественного языка для классификации медицинских текстов разной длины. Методы исследования: анализ, математическое моделирование, синтез, сравнение, эксперимент. Результатом работы является: обзор существующих методов, позволяющих обрабатывать длинные тексты; собранный набор данных, включающий более 18 000 медицинских текстов; разработанный подход, позволяющий обрабатывать длинные тексты и ускоряющий модель трансформера при кодировке текстов разной длины. По итогам анализа разработанный подход достиг наилучших результатов классификации и времени инференса по сравнению с рассматриваемыми в работе методами.
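The abstract does not detail the thesis's own long-text method, so the sketch below only illustrates one generic strategy for classifying texts longer than a transformer's input limit: split each document into chunks, encode each chunk, and mean-pool the chunk embeddings. The function encode_chunk is a hypothetical stand-in for a real encoder, and all sizes are assumptions.

```python
# Generic sketch (not necessarily the thesis's method) of chunk-and-pool
# classification features for long documents. encode_chunk() is a hypothetical
# stand-in for any transformer or sentence encoder.
from typing import List
import numpy as np

MAX_TOKENS = 512   # typical transformer input limit (assumption)
EMBED_DIM = 768    # embedding size of the assumed encoder

def encode_chunk(tokens: List[str]) -> np.ndarray:
    """Hypothetical chunk encoder; a real system would call a transformer here."""
    rng = np.random.default_rng(abs(hash(" ".join(tokens))) % (2**32))
    return rng.standard_normal(EMBED_DIM)

def embed_long_text(text: str) -> np.ndarray:
    tokens = text.split()  # naive whitespace tokenisation for illustration
    chunks = [tokens[i:i + MAX_TOKENS] for i in range(0, len(tokens), MAX_TOKENS)]
    chunk_vecs = np.stack([encode_chunk(c) for c in chunks])
    return chunk_vecs.mean(axis=0)  # mean-pool the chunk embeddings

# Sorting documents by length before batching ("bucketing") is another generic
# way to speed up encoding of mixed-length texts.
docs = ["short note about a patient", "a much longer clinical report " * 200]
features = np.stack([embed_long_text(d) for d in docs])
print(features.shape)  # (2, 768)
```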
|
103 |
Outlier detection with ensembled LSTM auto-encoders on PCA transformed financial data / Avvikelse-detektering med ensemble LSTM auto-encoders på PCA-transformerad finansiell data. Stark, Love January 2021 (has links)
Financial institutions today generate large amounts of data that can contain information worth investigating to further the institution's economic growth. There is particular interest in analysing data points that deviate from the normal day-to-day activity. Finding these outliers is not an easy task, however, and is impossible to do manually due to the massive amounts of data generated daily. Previous work has explored the use of machine learning to find outliers in such financial datasets, and previous studies have shown that the pre-processing of the data usually accounts for a large part of the information loss. This work studies whether there is a proper balance in how the pre-processing is carried out, retaining as much information as possible while keeping the data simple enough for the machine learning models. The dataset consisted of foreign exchange transactions supplied by the host company and was pre-processed using Principal Component Analysis (PCA). The main purpose of this work is to test whether an ensemble of Long Short-Term Memory Recurrent Neural Networks (LSTM), configured as autoencoders, can be used to detect outliers in the data and whether the ensemble is more accurate than a single LSTM autoencoder. Previous studies have shown that ensembles of autoencoders can be more accurate than a single autoencoder, especially when SkipCells are implemented (a configuration that skips over LSTM cells to make the models more varied). A data point is considered an outlier if the LSTM model has trouble recreating it properly, i.e. a pattern that is hard to reconstruct, making it available for further manual investigation. The results show that the ensembled LSTM model was more accurate than a single LSTM model at reconstructing the dataset and, by our definition of an outlier, more accurate at outlier detection. The pre-processing experiments reveal different methods of obtaining an optimal number of components, one of which is to study the retained variance and accuracy of the PCA transformation against model performance for a given number of components. One conclusion of the work is that ensembled LSTM networks can prove very powerful, but that alternatives to the pre-processing, such as categorical embedding instead of PCA, should be explored. / Finansinstitut genererar idag en stor mängd data, data som kan innehålla intressant information värd att undersöka för att främja den ekonomiska tillväxten för nämnda institution. Det finns ett intresse för att analysera dessa informationspunkter, särskilt om de är avvikande från det normala dagliga arbetet. Att upptäcka dessa avvikelser är dock inte en lätt uppgift och ej möjligt att göra manuellt på grund av de stora mängderna data som genereras dagligen. Tidigare arbete för att lösa detta har undersökt användningen av maskininlärning för att upptäcka avvikelser i finansiell data. Tidigare studier har visat på att förbehandlingen av datan vanligtvis står för en stor del i förlust av information från datan. Detta arbete syftar till att studera om det finns en korrekt balans i hur förbehandlingen utförs för att behålla den högsta mängden information samtidigt som datan inte förblir för komplex för maskininlärnings-modellerna. Det dataset som användes bestod av valutatransaktioner som tillhandahölls av värdföretaget och förbehandlades genom användning av Principal Component Analysis (PCA). Huvudsyftet med detta arbete är att undersöka om en ensemble av Long Short-Term Memory Recurrent Neural Networks (LSTM), konfigurerad som autoenkodare, kan användas för att upptäcka avvikelser i data och om ensemblen är mer precis i sina predikteringar än en ensam LSTM-autoenkodare. Tidigare studier har visat att en ensemble av autoenkodare kan visa sig vara mer precisa än en singel autokodare, särskilt när SkipCells har implementerats (en konfiguration som hoppar över vissa av LSTM-cellerna för att göra modellerna mer varierade). En datapunkt kommer att betraktas som en avvikelse om LSTM-modellen har problem med att återskapa den väl, dvs ett mönster som nätverket har svårt att återskapa, vilket gör datapunkten tillgänglig för vidare undersökningar. Resultaten visar att en ensemble av LSTM-modeller predikterade mer precist än en singel LSTM-modell när det gäller att återskapa datasetet, och då enligt vår definition av avvikelser, mer precis avvikelse detektering. Resultaten från förbehandlingen visar olika metoder för att uppnå ett optimalt antal komponenter för dina data genom att studera bibehållen varians och precision för PCA-transformation jämfört med modellprestanda. En av slutsatserna från arbetet är att en ensemble av LSTM-nätverk kan visa sig vara mycket kraftfulla, men att alternativ till förbehandling bör undersökas, såsom categorical embedding istället för PCA.
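A minimal sketch of the described pipeline follows, under assumed shapes and thresholds: PCA reduction, a small ensemble of LSTM autoencoders, and an outlier score based on the mean reconstruction error across the ensemble. SkipCells are not modelled here, and random placeholder data stands in for the FX transactions.

```python
# Sketch: PCA-reduce the features, train an ensemble of LSTM autoencoders, and
# flag points whose mean reconstruction error exceeds a simple threshold.
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

SEQ_LEN, N_FEATURES, N_COMPONENTS, N_MODELS = 10, 20, 8, 3  # assumed sizes

def build_autoencoder() -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SEQ_LEN, N_COMPONENTS)),
        tf.keras.layers.LSTM(16),                              # encoder
        tf.keras.layers.RepeatVector(SEQ_LEN),
        tf.keras.layers.LSTM(16, return_sequences=True),       # decoder
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(N_COMPONENTS)),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Placeholder transaction features; a real run would use the FX data instead.
raw = np.random.randn(500 * SEQ_LEN, N_FEATURES)
reduced = PCA(n_components=N_COMPONENTS).fit_transform(raw)
windows = reduced.reshape(500, SEQ_LEN, N_COMPONENTS).astype("float32")

ensemble = [build_autoencoder() for _ in range(N_MODELS)]
for m in ensemble:
    m.fit(windows, windows, epochs=2, batch_size=32, verbose=0)

# Outlier score: reconstruction error averaged over the ensemble.
errors = np.mean(
    [np.mean((m.predict(windows, verbose=0) - windows) ** 2, axis=(1, 2)) for m in ensemble],
    axis=0,
)
threshold = errors.mean() + 3 * errors.std()   # assumed cut-off
outliers = np.where(errors > threshold)[0]
```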
|
104 |
Deep Learning in the Web Browser for Wind Speed Forecasting using TensorFlow.js / Djupinlärning i Webbläsaren för Vindhastighetsprognoser med TensorFlow.js. Moazez Gharebagh, Sara January 2023 (has links)
Deep Learning is a powerful and rapidly advancing technology that has shown promising results within the field of weather forecasting. Implementing and using deep learning models can however be challenging due to their complexity. One approach to potentially overcome these challenges is to run deep learning models directly in the web browser. This approach introduces several advantages, including accessibility, data privacy, and the ability to access device sensors. The ability to run deep learning models in the web browser thus opens new possibilities for research and development in areas such as weather forecasting. In this thesis, two deep learning models that run in the web browser are implemented using JavaScript and TensorFlow.js to predict wind speed in the near future. Specifically, the application of Long Short-Term Memory and Gated Recurrent Units models is investigated. The results demonstrate that the Long Short-Term Memory and Gated Recurrent Units models achieve similar performance and are able to generate predictions that closely align with the expected patterns when the variations in the data are less significant. The best performing Long Short-Term Memory model achieved a mean squared error of 0.432, a root mean squared error of 0.657 and a mean absolute error of 0.459. The best performing Gated Recurrent Units model achieved a mean squared error of 0.435, a root mean squared error of 0.660 and a mean absolute error of 0.461. / Djupinlärning är en kraftfull teknik som genomgår snabb utveckling och har uppnått lovande resultat inom väderprognoser. Att implementera och använda djupinlärningsmodeller kan dock vara utmanande på grund av deras komplexitet. Ett möjligt sätt att möta utmaningarna med djupinlärning är att köra djupinlärningsmodeller direkt i webbläsaren. Detta sätt medför flera fördelar, inklusive tillgänglighet, dataintegritet och möjligheten att använda enhetens egna sensorer. Att kunna köra djupinlärningsmodeller i webbläsaren bidrar därför med möjligheter för forskning och utveckling inom områden såsom väderprognoser. I denna studie implementeras två djupinlärningsmodeller med JavaScript och TensorFlow.js som körs i webbläsaren för att prediktera vindhastighet i en nära framtid. Specifikt undersöks tillämpningen av modellerna Long Short-Term Memory och Gated Recurrent Units. Resultaten visar att både Long Short-Term Memory och Gated Recurrent Units modellerna presterar lika bra och kan generera prediktioner som är nära förväntade mönster när variationen i datat är mindre signifikant. Den Long Short-Term Memory modell som presterade bäst uppnådde en mean squared error på 0.432, en root mean squared error på 0.657 och en mean absolute error på 0.459. Den Gated Recurrent Units modell som presterade bäst uppnådde en mean squared error på 0.435, en root mean squared error på 0.660 och en mean absolute error på 0.461.
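A Keras analogue of the comparison described above is sketched below (the thesis itself runs in the browser with TensorFlow.js); the window length, layer sizes and the placeholder wind-speed series are assumptions.

```python
# Compare an LSTM and a GRU on short wind-speed windows, reporting MSE, RMSE, MAE.
import numpy as np
import tensorflow as tf

WINDOW = 24  # past samples used to predict the next value (assumed)

def build(cell):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, 1)),
        cell(32),                      # recurrent layer: LSTM or GRU
        tf.keras.layers.Dense(1),      # next-step wind speed
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Placeholder wind-speed series; a real run would use observed data.
series = np.abs(np.random.randn(2000)).astype("float32")
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]

for name, cell in [("LSTM", tf.keras.layers.LSTM), ("GRU", tf.keras.layers.GRU)]:
    model = build(cell)
    model.fit(X, y, epochs=2, batch_size=64, verbose=0)
    pred = model.predict(X, verbose=0).ravel()
    mse = np.mean((pred - y) ** 2)
    print(name, "MSE", mse, "RMSE", np.sqrt(mse), "MAE", np.mean(np.abs(pred - y)))
```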
|
105 |
MahlerNet : Unbounded Orchestral Music with Neural Networks / Orkestermusik utan begränsning med neurala nätverk. Lousseief, Elias January 2019 (has links)
Modelling music with mathematical and statistical methods in general, and with neural networks in particular, has a long history and has been explored extensively in recent decades. Exactly when the first attempt at strictly systematic music took place is hard to say; some would place it in the days of Mozart, others even earlier, but it is safe to say that algorithmic composition has a long history. Even though composers have always worked with structure and rules, implicitly or explicitly, rule-following at a stricter level was investigated thoroughly in the middle of the 20th century, when the first mathematics-based music-writing computer programs were also implemented. This work in computer science focuses on musical composition with computers, also known as algorithmic composition, using machine learning and neural networks. It consists of two parts: an in-depth literature survey of the last decades in the field, from which inspiration and experience are drawn, and the construction of MahlerNet, a neural network based on the previous architectures MusicVAE, BALSTM, PerformanceRNN and BachProp that is capable of modelling polyphonic symbolic music with up to 23 instruments. MahlerNet is a new architecture with a custom preprocessor that uses musical heuristics to normalize and filter the MIDI input and output files into the data representation it processes. MahlerNet and its preprocessor were written entirely for this project, and the model produces music that clearly shows characteristics reminiscent of the data it was trained on, with some long-term structure, albeit not in the form of motives and themes. / Matematik och statistik i allmänhet, och maskininlärning och neurala nätverk i synnerhet, har sedan långt tillbaka använts för att modellera musik med en utveckling som kulminerat under de senaste decennierna. Exakt vid vilken historisk tidpunkt som musikalisk komposition för första gången tillämpades med strikt systematiska regler är svårt att säga; vissa skulle hävda att det skedde under Mozarts dagar, andra att det skedde redan långt tidigare. Oavsett vilket, innebär det att systematisk komposition är en företeelse med lång historia. Även om kompositörer i alla tider följt strukturer och regler, medvetet eller ej, som en del av kompositionsprocessen började man under 1900-talets mitt att göra detta i högre utsträckning och det var också då som de första programmen för musikalisk komposition, baserade på matematik, kom till. Den här uppsatsen i datateknik behandlar hur musik historiskt har komponerats med hjälp av datorer, ett område som också är känt som algoritmisk komposition. Uppsatsens fokus ligger på användning av maskininlärning och neurala nätverk och består av två delar: en litteraturstudie som i hög detalj behandlar utvecklingen under de senaste decennierna från vilken tas inspiration och erfarenheter för att konstruera MahlerNet, ett neuralt nätverk baserat på de tidigare modellerna MusicVAE, BALSTM, PerformanceRNN och BachProp. MahlerNet kan modellera polyfon musik med upp till 23 instrument och är en ny arkitektur som kommer tillsammans med en egen preprocessor som använder heuristiker från musikteori för att normalisera och filtrera data i MIDI-format till en intern representation. MahlerNet, och dess preprocessor, är helt och hållet implementerade för detta arbete och kan komponera musik som tydligt uppvisar egenskaper från den musik som nätverket tränats på. En viss kontinuitet finns i den skapade musiken även om det inte är i form av konkreta teman och motiv.
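As a toy illustration of MIDI preprocessing for sequence models, the sketch below flattens a multi-instrument MIDI file into time-ordered note events using the pretty_midi library; MahlerNet's actual preprocessor and data representation are considerably more elaborate and are not reproduced here, and the file path is hypothetical.

```python
# Toy sketch: read a multi-instrument MIDI file and flatten it into
# time-ordered (onset, duration, pitch, program) tuples.
import pretty_midi

def midi_to_events(path: str):
    pm = pretty_midi.PrettyMIDI(path)
    events = []
    for inst in pm.instruments:
        if inst.is_drum:
            continue                   # keep pitched instruments only
        for note in inst.notes:
            events.append((note.start, note.end - note.start, note.pitch, inst.program))
    events.sort(key=lambda e: e[0])    # order notes by onset time
    return events

# Example usage (the path is hypothetical):
# events = midi_to_events("symphony.mid")
# print(events[:5])
```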
|
106 |
Dynamic Student Embeddings for a Stable Time Dimension in Knowledge Tracing. Tump, Clara January 2020 (has links)
Knowledge tracing is concerned with tracking a student's knowledge as she/he engages with exercises in an (online) learning platform. A commonly used state-of-the-art knowledge tracing model is Deep Knowledge Tracing (DKT), which models the time dimension as a sequence of completed exercises per student by using a Long Short-Term Memory Neural Network (LSTM). However, a common problem in this sequence-based model is too much instability in the time dimension of the modelled knowledge of a student. In other words, the student's knowledge of a skill changes too quickly and unreliably. We propose dynamic student embeddings as a stable method for encoding the time dimension of knowledge tracing systems. In this method the time dimension is encoded in time slices of a fixed size, while the model's loss function is designed to smoothly align subsequent time slices. We compare the dynamic student embeddings to DKT on a large-scale real-world dataset, and we show that dynamic student embeddings provide more stable knowledge tracing while retaining good performance. / Kunskapsspårning handlar om att modellera en students kunskaper då den arbetar med uppgifter i en (online) lärplattform. En vanlig state-of-the-art kunskapsspårningsmodell är Deep Knowledge Tracing (DKT) vilken modellerar tidsdimensionen som en sekvens av avslutade uppgifter per student med hjälp av ett neuronnät kallat Long Short-Term Memory Neural Network (LSTM). Ett vanligt problem i dessa sekvensbaserade modeller är emellertid en för stor instabilitet i tidsdimensionen för studentens modellerade kunskap. Med andra ord, studentens kunskaper förändras för snabbt och otillförlitligt. Vi föreslår därför Dynamiska Studentvektorer som en stabil metod för kodning av tidsdimensionen för kunskapsspårningssystem. I denna metod kodas tidsdimensionen i tidsskivor av fix storlek, medan modellens förlustfunktion är utformad för att smidigt justera efterföljande tidsskivor. I denna uppsats jämför vi Dynamiska Studentvektorer med DKT på ett storskaligt verklighetsbaserat dataset, och visar att Dynamiska Studentvektorer tillhandahåller en stabilare kunskapsspårning samtidigt som prestandan bibehålls.
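One possible reading of "the loss function smoothly aligns subsequent time slices" is a penalty on the distance between consecutive per-slice student embeddings, sketched below; the exact loss used in the thesis is not reproduced, and all sizes and the weight are assumptions.

```python
# Sketch: per-(student, time slice) embeddings plus a smoothness penalty that
# discourages abrupt jumps between consecutive slices.
import tensorflow as tf

N_STUDENTS, N_SLICES, DIM, SMOOTH_WEIGHT = 100, 8, 16, 0.1  # assumed sizes

# One embedding per (student, time slice).
slice_embeddings = tf.Variable(tf.random.normal([N_STUDENTS, N_SLICES, DIM]))

def smoothness_penalty(emb: tf.Tensor) -> tf.Tensor:
    # Squared distance between each slice and the next, averaged over students.
    diffs = emb[:, 1:, :] - emb[:, :-1, :]
    return tf.reduce_mean(tf.reduce_sum(diffs ** 2, axis=-1))

def total_loss(prediction_loss: tf.Tensor) -> tf.Tensor:
    # Combine the correctness-prediction loss with the alignment penalty.
    return prediction_loss + SMOOTH_WEIGHT * smoothness_penalty(slice_embeddings)

# Example: combine with whatever prediction loss the knowledge tracing model produces.
print(float(total_loss(tf.constant(0.7))))
```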
|
107 |
Réseaux de neurones à relaxation entraînés par critère d'autoencodeur débruitant / Relaxation neural networks trained with a denoising autoencoder criterion. Savard, François 08 1900 (has links)
L’apprentissage machine est un vaste domaine où l’on cherche à apprendre les paramètres de modèles à partir de données concrètes. Ce sera pour effectuer des tâches demandant des aptitudes attribuées à l’intelligence humaine, comme la capacité à traiter des données de haute dimensionnalité présentant beaucoup de variations. Les réseaux de neurones artificiels sont un exemple de tels modèles. Dans certains réseaux de neurones dits profonds, des concepts “abstraits” sont appris automatiquement.

Les travaux présentés ici prennent leur inspiration de réseaux de neurones profonds, de réseaux récurrents et de neuroscience du système visuel. Nos tâches de test sont la classification et le débruitement d’images quasi binaires. On permettra une rétroaction où des représentations de haut niveau (plus “abstraites”) influencent des représentations à bas niveau. Cette influence s’effectuera au cours de ce qu’on nomme relaxation, des itérations où les différents niveaux (ou couches) du modèle s’interinfluencent. Nous présentons deux familles d’architectures, l’une, l’architecture complètement connectée, pouvant en principe traiter des données générales et une autre, l’architecture convolutionnelle, plus spécifiquement adaptée aux images. Dans tous les cas, les données utilisées sont des images, principalement des images de chiffres manuscrits.

Dans un type d’expérience, nous cherchons à reconstruire des données qui ont été corrompues. On a pu y observer le phénomène d’influence décrit précédemment en comparant le résultat avec et sans la relaxation. On note aussi certains gains numériques et visuels en terme de performance de reconstruction en ajoutant l’influence des couches supérieures. Dans un autre type de tâche, la classification, peu de gains ont été observés. On a tout de même pu constater que dans certains cas la relaxation aiderait à apprendre des représentations utiles pour classifier des images corrompues. L’architecture convolutionnelle développée, plus incertaine au départ, permet malgré tout d’obtenir des reconstructions numériquement et visuellement semblables à celles obtenues avec l’autre architecture, même si sa connectivité est contrainte. / Machine learning is a vast field where we seek to learn parameters for models from concrete data. The goal will be to execute various tasks requiring abilities normally associated more with human intelligence than with a computer program, such as the ability to process high dimensional data containing a lot of variations. Artificial neural networks are a large class of such models. In some neural networks said to be deep, we can observe that high level (or “abstract”) concepts are automatically learned.

The work we present here takes its inspiration from deep neural networks, from recurrent networks and also from neuroscience of the visual system. Our test tasks are classification and denoising for near binary images. We aim to take advantage of a feedback mechanism through which high-level representations, that is to say relatively abstract concepts, can influence lower-level representations. This influence will happen during what we call relaxation, which is iterations where the different levels (or layers) of the model can influence each other. We will present two families of architectures based on this mechanism. One, the fully connected architecture, can in principle accept generic data. The other, the convolutional one, is specifically made for images. Both were trained on images, though, and mostly images of written characters.

In one type of experiment, we want to reconstruct data that has been corrupted. In these tasks, we have observed the feedback influence phenomenon previously described by comparing the results we obtained with and without relaxation. We also note some numerical and visual improvement in terms of reconstruction performance when we add upper layers’ influence. In another type of task, classification, little gain has been noted. Still, in one setting where we tried to classify noisy data with a representation trained without prior class information, relaxation did seem to improve results significantly. The convolutional architecture, a bit more risky at first, was shown to produce numerical and visual results in reconstruction that are near those obtained with the fully connected version, even though the connectivity is much more constrained.
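A schematic numpy sketch of the relaxation idea follows: lower- and higher-level representations repeatedly update each other through bottom-up and top-down connections for a fixed number of iterations. The weight shapes, update rule and sigmoid choice are illustrative assumptions, not the thesis model.

```python
# Schematic relaxation between two layers with top-down feedback.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
D_IN, D_H1, D_H2, N_ITERS = 784, 256, 64, 5   # assumed sizes

W1 = rng.normal(scale=0.05, size=(D_IN, D_H1))   # input  -> layer 1 (bottom-up)
W2 = rng.normal(scale=0.05, size=(D_H1, D_H2))   # layer1 -> layer 2 (bottom-up)

def relax(x: np.ndarray) -> np.ndarray:
    """Iteratively settle layer activities, with feedback from h2 down to h1."""
    h1 = sigmoid(x @ W1)
    h2 = sigmoid(h1 @ W2)
    for _ in range(N_ITERS):
        # Layer 1 mixes its bottom-up drive with top-down feedback through W2.T.
        h1 = sigmoid(x @ W1 + h2 @ W2.T)
        h2 = sigmoid(h1 @ W2)
    return h1  # settled low-level representation

x_noisy = rng.random(D_IN)                   # stand-in for a corrupted binary image
x_recon = sigmoid(relax(x_noisy) @ W1.T)     # decode back to input space
```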
|
108 |
On Recurrent and Deep Neural Networks. Pascanu, Razvan 05 1900 (has links)
L'apprentissage profond est un domaine de recherche en forte croissance en apprentissage automatique qui est parvenu à des résultats impressionnants dans différentes tâches allant de la classification d'images à la parole, en passant par la modélisation du langage. Les réseaux de neurones récurrents, une sous-classe d'architecture profonde, s'avèrent particulièrement prometteurs. Les réseaux récurrents peuvent capter la structure temporelle dans les données. Ils ont potentiellement la capacité d'apprendre des corrélations entre des événements éloignés dans le temps et d'emmagasiner indéfiniment des informations dans leur mémoire interne. Dans ce travail, nous tentons d'abord de comprendre pourquoi la profondeur est utile. Similairement à d'autres travaux de la littérature, nos résultats démontrent que les modèles profonds peuvent être plus efficaces pour représenter certaines familles de fonctions comparativement aux modèles peu profonds. Contrairement à ces travaux, nous effectuons notre analyse théorique sur des réseaux profonds acycliques munis de fonctions d'activation linéaires par parties, puisque ce type de modèle est actuellement l'état de l'art dans différentes tâches de classification. La deuxième partie de cette thèse porte sur le processus d'apprentissage. Nous analysons quelques techniques d'optimisation proposées récemment, telles l'optimisation Hessian free, la descente de gradient naturel et la descente des sous-espaces de Krylov. Nous proposons le cadre théorique des méthodes à région de confiance généralisées et nous montrons que plusieurs de ces algorithmes développés récemment peuvent être vus dans cette perspective. Nous argumentons que certains membres de cette famille d'approches peuvent être mieux adaptés que d'autres à l'optimisation non convexe. La dernière partie de ce document se concentre sur les réseaux de neurones récurrents. Nous étudions d'abord le concept de mémoire et tentons de répondre aux questions suivantes: Les réseaux récurrents peuvent-ils démontrer une mémoire sans limite? Ce comportement peut-il être appris? Nous montrons que cela est possible si des indices sont fournis durant l'apprentissage. Ensuite, nous explorons deux problèmes spécifiques à l'entraînement des réseaux récurrents, à savoir la dissipation et l'explosion du gradient. Notre analyse se termine par une solution au problème d'explosion du gradient qui implique de borner la norme du gradient. Nous proposons également un terme de régularisation conçu spécifiquement pour réduire le problème de dissipation du gradient. Sur un ensemble de données synthétique, nous montrons empiriquement que ces mécanismes peuvent permettre aux réseaux récurrents d'apprendre de façon autonome à mémoriser des informations pour une période de temps indéfinie. Finalement, nous explorons la notion de profondeur dans les réseaux de neurones récurrents. Comparativement aux réseaux acycliques, la définition de profondeur dans les réseaux récurrents est souvent ambiguë. Nous proposons différentes façons d'ajouter de la profondeur dans les réseaux récurrents et nous évaluons empiriquement ces propositions. / Deep Learning is a quickly growing area of research in machine learning, providing impressive results on different tasks ranging from image classification to speech and language modelling. In particular, a subclass of deep models, recurrent neural networks, promise even more. Recurrent models can capture the temporal structure in the data. 
They can learn correlations between events that might be far apart in time and, potentially, store information for unbounded amounts of time in their innate memory. In this work we first focus on understanding why depth is useful. Similar to other published work, our results prove that deep models can be more efficient at expressing certain families of functions compared to shallow models. Different from other work, we carry out our theoretical analysis on deep feedforward networks with piecewise linear activation functions, the kind of models that have obtained state-of-the-art results on different classification tasks. The second part of the thesis looks at the learning process. We analyse a few recently proposed optimization techniques, including Hessian Free Optimization, natural gradient descent and Krylov Subspace Descent. We propose the framework of generalized trust region methods and show that many of these recently proposed algorithms can be viewed from this perspective. We argue that certain members of this family of approaches might be better suited for non-convex optimization than others. The last part of the document focuses on recurrent neural networks. We start by looking at the concept of memory. The questions we attempt to answer are: Can recurrent models exhibit unbounded memory? Can this behaviour be learnt? We show this to be true if hints are provided during learning. We explore, afterwards, two specific difficulties of training recurrent models, namely the vanishing gradients and exploding gradients problems. Our analysis concludes with a heuristic solution for the exploding gradients that involves clipping the norm of the gradients. We also propose a specific regularization term meant to address the vanishing gradients problem. On a toy dataset, employing these mechanisms, we provide anecdotal evidence that the recurrent model might be able to learn, without hints, to exhibit some sort of unbounded memory. Finally we explore the concept of depth for recurrent neural networks. Compared to feedforward models, for recurrent models the meaning of depth can be ambiguous. We provide several ways in which a recurrent model can be made deep and empirically evaluate these proposals.
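The exploding-gradient heuristic mentioned above, clipping the norm of the gradients, can be sketched in a few lines of numpy; deep learning frameworks ship equivalents such as tf.clip_by_global_norm.

```python
# Gradient norm clipping: if the global gradient norm exceeds a threshold,
# rescale the gradients so their norm equals the threshold.
import numpy as np

def clip_gradient_norm(grads, threshold: float):
    """Rescale a list of gradient arrays so their global L2 norm is at most threshold."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm > threshold:
        scale = threshold / global_norm
        grads = [g * scale for g in grads]
    return grads, global_norm

# Example: a gradient whose norm "explodes" gets scaled back to the threshold.
grads = [np.array([30.0, 40.0])]               # norm 50
clipped, norm = clip_gradient_norm(grads, threshold=5.0)
print(norm, clipped[0])                         # 50.0 [3. 4.]
```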
|
109 |
Modélisation de l'interprétation des pianistes & applications d'auto-encodeurs sur des modèles temporels / Modelling pianists' interpretation & applications of auto-encoders to temporal models. Lauly, Stanislas 04 1900 (has links)
Ce mémoire traite d'abord du problème de la modélisation de l'interprétation des pianistes à l'aide de l'apprentissage machine. Il s'occupe ensuite de présenter de nouveaux modèles temporels qui utilisent des auto-encodeurs pour améliorer l'apprentissage de séquences.
Dans un premier temps, nous présentons le travail préalablement fait dans le domaine de la modélisation de l'expressivité musicale, notamment les modèles statistiques du professeur Widmer. Nous parlons ensuite de notre ensemble de données, unique au monde, qu'il a été nécessaire de créer pour accomplir notre tâche. Cet ensemble est composé de 13 pianistes différents enregistrés sur le fameux piano Bösendorfer 290SE. Enfin, nous expliquons en détail les résultats de l'apprentissage de réseaux de neurones et de réseaux de neurones récurrents. Ceux-ci sont appliqués sur les données mentionnées pour apprendre les variations expressives propres à un style de musique.
Dans un deuxième temps, ce mémoire aborde la découverte de modèles statistiques expérimentaux qui impliquent l'utilisation d'auto-encodeurs sur des réseaux de neurones récurrents. Pour pouvoir tester la limite de leur capacité d'apprentissage, nous utilisons deux ensembles de données artificielles développées à l'Université de Toronto. / This thesis first addresses the problem of modeling pianists' interpretations using machine learning, and then presents new temporal models that use auto-encoders to improve sequence learning.
We present previous work in the field of modeling musical expression, including Professor Widmer's statistical models. We then discuss our unique dataset created specifically for our task. This dataset is composed of 13 different pianists recorded on the famous Bösendorfer 290SE piano. Finally, we present the learning results of neural networks and recurrent neural networks in detail. These algorithms are applied to the dataset to learn expressive variations specific to a style of music.
We also present novel statistical models involving the use of auto-encoders in recurrent neural networks. To test the limits of these algorithms' ability to learn, we use two artificial datasets developed at the University of Toronto.
|
110 |
Métodos neuronais para a solução da equação algébrica de Riccati e o LQR / Neural methods for the solution of the algebraic Riccati equation and the LQR. SILVA, Fabio Nogueira da 20 June 2008 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPQ) / Fundação de Amparo à Pesquisa e ao Desenvolvimento Científico e Tecnológico do Maranhão (FAPEMA) / We present in this work the results of two neural network methods for solving the algebraic Riccati equation (ARE), which is used in many applications, mainly in the Linear Quadratic Regulator (LQR) and in H2 and H∞ control. First, the real symmetric form of the ARE is shown, together with two methods based on neural computation: a feedforward neural network (FNN) that defines an error function from the ARE, and a recurrent neural network (RNN) that converts a constrained optimization problem, restricted to the state-space model, into an unconstrained convex optimization problem by defining an energy function of the ARE and the Cholesky factor. A proposal for choosing the learning parameters of the RNN used to solve the ARE is made by mapping a surface of the parameter variations, so that the neural network can be tuned for better performance. Computational experiments with perturbations of the plant matrices of the tested systems were carried out to analyse the behaviour of the presented methodologies, which are based on homotopy methods: a good initial condition is chosen and the results are compared to the Schur method. Two 6th-order systems were used, a doubly fed induction generator (DFIG) and an aircraft plant. The results show that the RNN is a good alternative compared with the FNN and the Schur method. / Apresenta-se nesta dissertação os resultados a respeito de dois métodos neuronais para a resolução da equação algébrica de Riccati(EAR), que tem varias aplicações, sendo principalmente usada pelos Regulador Linear Quadrático(LQR), controle H2 e controle H∞. É apresentado a EAR real e simétrica e dois métodos baseados em uma rede neuronal direta (RND) que tem a função de erro associada a EAR e uma rede neuronal recorrente (RNR) que converte um problema de otimização restrita ao modelo de espaço de estados em outro de otimização convexa em função da EAR e do fator de Cholesky de modo a usufruir das propriedades de convexidade e condições de otimalidade. Uma proposta para a escolha dos parâmetros da RNR usada para solucionar a EAR por meio da geração de superfícies com a variação paramétrica da RNR, podendo assim melhor sintonizar a rede neuronal para um melhor desempenho. Experimentos computacionais relacionados a perturbações nos sistemas foram realizados para analisar o comportamento das metodologias apresentadas, tendo como base o princípio dos métodos homotópicos, com uma boa condição inicial, a partir de uma ponto de operação estável e comparamos os resultados com o método de Schur. Foram usadas as plantas de dois sistemas: uma representando a dinâmica de uma aeronave e outra de um motor de indução eólico duplamente alimentado(DFIG), ambos sistemas de 6a ordem. Os resultados mostram que a RNR é uma boa alternativa se comparado com a RND e com o método de Schur.
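For reference, the sketch below writes out the continuous-time ARE residual, solves a small illustrative ARE with a standard SciPy solver (the kind of baseline the neural methods are compared against), and forms the LQR gain K = R^{-1} B^T P. The 2x2 plant is made up and is not the DFIG or aircraft model used in the dissertation.

```python
# ARE residual, reference solution and LQR gain for a small illustrative plant.
import numpy as np
from scipy.linalg import solve_continuous_are

def are_residual(P, A, B, Q, R):
    """A^T P + P A - P B R^{-1} B^T P + Q, which should vanish at the solution."""
    return A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative 2x2 plant
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)        # reference solution
K = np.linalg.solve(R, B.T @ P)             # LQR state-feedback gain
print("residual norm:", np.linalg.norm(are_residual(P, A, B, Q, R)))
print("LQR gain K:", K)
```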
|