11

Online trénování hlubokých neuronových sítí pro klasifikaci / Online training of deep neural networks for classification

Tumpach, Jiří January 2019 (has links)
Deep learning is usually applied to static datasets. If used for classification on data streams, it is not easy to take non-stationarity into account. This thesis presents work in progress on a new method for online deep classification learning in data streams with slow or moderate drift, highly relevant for the application domain of malware detection. The method uses a combination of a multilayer perceptron and a variational autoencoder to achieve constant memory consumption by encoding past data into a generative model. This can make online learning of neural networks more accessible for independent adaptive systems with limited memory. First results for real-world malware stream data are presented, and they look promising.
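As a rough illustration of the generative-replay idea described above (encoding past data into a VAE so the classifier never has to store the stream), the following Python sketch shows one hypothetical online update step. The `vae`/`clf` interfaces and the pseudo-labelling of replayed samples are our assumptions, not the thesis's actual method.

```python
import torch
import torch.nn.functional as F

def online_step(clf, vae, optimizer, x_new, y_new, n_replay=64):
    """One online update: mix the incoming mini-batch with samples
    replayed from a (previously trained) VAE, keeping memory constant."""
    with torch.no_grad():
        z = torch.randn(n_replay, vae.latent_dim)  # sample the latent prior
        x_old = vae.decode(z)                      # generate past-like inputs
        y_old = clf(x_old).argmax(dim=1)           # pseudo-labels; in practice a
                                                   # frozen copy of clf would label these
    x = torch.cat([x_new, x_old])
    y = torch.cat([y_new, y_old])
    loss = F.cross_entropy(clf(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```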
12

ML-Aided Cross-Band Channel Prediction in MIMO Systems

Pérez Gómez, Alejo January 2022 (has links)
Wireless communications technologies have developed at an exponential pace during the last decades. 5G is a prominent example, and Massive MIMO is one of its crucial components. By supporting multiple signal streams, it improves signal reconstruction in terms of mobile traffic size, data rate, latency, and reliability. In this thesis work, we restricted this technology to a SIMO (Single-Input Multiple-Output) setting to explore machine learning models for the so-called Channel Prediction problem. Generally, the algorithms available for Channel Estimation in FDD and TDD deployments incur computational-complexity downsides and require explicit feedback from client devices, which is typically prohibitive. This thesis work focuses on Channel Prediction, employing machine learning and deep learning models to reduce the computational complexity by relying more on statistical modelling/learning. We explored cross-frequency-subband prediction within a TTI (Transmission Time Interval) by proposing three models intended to leverage dependencies of the frequency multipath components along TTIs. The first two are Probabilistic Principal Component Analysis (PPCA) and its Bayesian counterpart, Bayesian Principal Component Analysis (BPCA). We then implemented a deep learning Variational Encoder-Decoder (VED) architecture. All three models cope with the very high-dimensional space of the four datasets used through their intrinsic dimensionality reduction. Averaged over all datasets, the PPCA method was five times better than the VED alternative in terms of MSE.
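For reference, the PPCA model named above has a closed-form maximum-likelihood fit (Tipping and Bishop). The sketch below is a generic Python implementation of that fit, not the thesis code; variable names are ours.

```python
import numpy as np

def fit_ppca(X, q):
    """Fit probabilistic PCA to X (n samples, d features) with q latent dims.
    Returns the loading matrix W, noise variance sigma2, and mean mu."""
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)            # d x d sample covariance
    evals, evecs = np.linalg.eigh(S)            # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]  # sort descending
    sigma2 = evals[q:].mean()                   # noise = mean discarded eigenvalue
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2, mu
```

Given the fitted Gaussian model, predicting one subband from another reduces to Gaussian conditioning, which is presumably where the cross-band prediction enters.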
13

Aspects of Modern Queueing Theory

Ruixin Wang (12873017) 15 June 2022 (has links)
Queueing systems are everywhere: in transportation networks, service centers, communication systems, clinics, manufacturing systems, etc. In this dissertation, we contribute to the theory of queueing in two respects. In the first part, we examine the interplay between retrials and strategic arrival behavior in single-class queueing networks. Specifically, we study a variation of the ‘Network Concert Queueing Game,’ wherein a fixed but large number of strategic users arrive at a network of queues where they can be routed to other queues in the network following a fixed routing matrix, or potentially fed back to the end of the queue at which they arrive. Working in a non-atomic setting, we prove the existence of Nash equilibrium arrival and routing profiles in three simple, but non-trivial, network topologies/architectures. In two of them, we also prove the uniqueness of the equilibrium. Our results show that Nash equilibrium decisions on when to arrive and which queue to join in a network are substantially impacted by routing, inducing ‘herding’ behavior under certain conditions on the network architecture. Our theory raises important design implications for capacity-sharing in systems with strategic users, such as ride-sharing and crowdsourcing platforms.

In the second part, we develop a new method of data-driven model calibration or estimation for queueing models. Statistical and theoretical analyses of traffic traces show that doubly stochastic Poisson processes are appropriate models of high-intensity traffic arriving at an array of service systems. On the other hand, statistical estimation of the underlying latent stochastic intensity process driving the traffic model involves a rather complicated nonlinear filtering problem. In this thesis we use deep neural networks to ‘parameterize’ the path measures induced by the stochastic intensity process, and solve this nonlinear filtering problem by maximizing a tight surrogate objective called the evidence lower bound (ELBO). This framework is flexible in the sense that we can also estimate other stochastic processes (e.g., the queue length process) and their related parameters (e.g., the service time distribution). We demonstrate the effectiveness of our results through extensive simulations. We also provide approximation guarantees for the estimation/calibration problem. Working with the Markov chain induced by the Euler-Maruyama discretization of the latent diffusion, we show that (1) there exists a sequence of approximate data-generating distributions that converges to the “ground truth” distribution in total variation distance; (2) the variational gap is strictly positive for the optimal solution to the ELBO. Extending to the non-Markov setting, we identify the variational-gap-minimizing approximate posterior for an arbitrary (known) posterior and, further, prove a lower bound on the optimal ELBO. Recent theoretical results on optimizing the ELBO for related (but ultimately different) models show that when the data-generating distribution equals the ground truth distribution and the variational gap is zero, the probability measures that achieve these conditions also maximize the ELBO. Our results show that this may not be true in all problem settings.
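A minimal sketch of the kind of ELBO objective the second part describes, under our own illustrative assumptions: the log-intensity path is discretized by Euler-Maruyama, the variational posterior is mean-field Gaussian, and arrivals are binned into counts. None of the interfaces below come from the dissertation.

```python
import torch

def elbo(counts, q_mu, q_logvar, drift, diffusion, dt):
    """Single-sample ELBO for a doubly stochastic Poisson model.
    counts: (T,) float tensor of Poisson counts per time bin;
    q_mu, q_logvar: (T,) variational posterior over the log-intensity path;
    drift/diffusion: callables defining the latent SDE (diffusion > 0)."""
    eps = torch.randn_like(q_mu)
    z = q_mu + eps * torch.exp(0.5 * q_logvar)      # reparameterised path sample
    lam = torch.exp(z)                              # positive intensity
    log_lik = torch.distributions.Poisson(lam * dt).log_prob(counts).sum()
    # Euler-Maruyama transition density p(z_{t+1} | z_t); initial prior omitted
    mean = z[:-1] + drift(z[:-1]) * dt
    std = diffusion(z[:-1]) * dt ** 0.5
    log_prior = torch.distributions.Normal(mean, std).log_prob(z[1:]).sum()
    log_q = torch.distributions.Normal(q_mu, torch.exp(0.5 * q_logvar)).log_prob(z).sum()
    return log_lik + log_prior - log_q              # maximise this lower bound
```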
14

A Unified Generative and Discriminative Approach to Automatic Chord Estimation for Music Audio Signals / 音楽音響信号に対する自動コード推定のための生成・識別統合的アプローチ

Wu, Yiming 24 September 2021 (has links)
Kyoto University / New-system doctoral program / Doctor of Informatics / 甲第23540号 / 情博第770号 / 新制||情||131 (University Library) / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief examiner) Associate Professor Kazuyoshi Yoshii, Professor Tatsuya Kawahara, Professor Ko Nishino, Professor Hisashi Kashima / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
15

Nonnegative matrix factorization with applications to sequencing data analysis

Kong, Yixin 25 February 2022 (has links)
A latent factor model for count data is popularly applied when deconvoluting mixed signals in biological data, as exemplified by sequencing data for transcriptome or microbiome studies. Given the availability of pure samples such as single-cell transcriptome data, estimators can achieve much better accuracy by utilizing this extra information. However, such an advantage quickly disappears in the presence of excessive zeros. To correctly account for this phenomenon, we propose a zero-inflated non-negative matrix factorization that models excessive zeros in both mixed and pure samples, and we derive an effective multiplicative parameter-updating rule. In simulation studies, our method yields smaller bias compared to other deconvolution methods. We applied our approach to gene expression from brain tissue as well as fecal microbiome datasets, illustrating the superior performance of the approach. Our method is implemented as a publicly available R package, iNMF. In zero-inflated non-negative matrix factorization (iNMF) for the deconvolution of mixed signals in biological data, pure samples play a significant role by resolving the identifiability issue and improving the accuracy of estimates. One of the main issues in using single-cell data is that the identities (labels) of the cells are not given. Thus, it is crucial to sort these cells into their correct types computationally. We propose a nonlinear latent variable model that can be used for sorting pure samples as well as grouping mixed samples via deep neural networks. The computational difficulty is handled by adopting a method known as variational autoencoding. While doing so, we keep the NMF structure in the decoder neural network, which makes the output of the network interpretable.
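For orientation, the classical multiplicative updates for plain NMF (Lee and Seung, Frobenius loss) look as follows; the thesis derives a modified rule for the zero-inflated model, which is not reproduced here.

```python
import numpy as np

def nmf(X, k, n_iter=200, eps=1e-9):
    """Plain NMF via multiplicative updates: X (n x d) ~ W (n x k) @ H (k x d)."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((n, k)), rng.random((k, d))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H
```

Each update never increases the Frobenius reconstruction error while keeping all entries non-negative, which is what makes the multiplicative form attractive for count data.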
16

Predicting tumour growth-driving interactions from transcriptomic data using machine learning

Stigenberg, Mathilda January 2023 (has links)
The mortality rate is high for cancer patients, and treatments are effective only in a fraction of patients. To be able to cure more patients, new treatments need to be invented. Immunotherapy activates the immune system to fight cancer, and one treatment targets immune checkpoints. If more targets are found, more patients can be treated successfully. In this project, interactions between immune and cancer cells that drive tumour growth were investigated in an attempt to find new potential targets. This was achieved by creating a machine learning model that finds genes expressed in cells involved in tumour-driving interactions. Single-cell RNA sequencing and spatial transcriptomic data from breast cancer patients were utilised, as well as single-cell RNA sequencing data from healthy patients. The tumour growth rate was based on the cumulative expression of G2/M genes. The G2/M-related genes were excluded from the analysis since these were assumed to be cell cycle genes. The machine learning model was based on a supervised variational autoencoder architecture. With this kind of architecture, it was possible to compress the input into a low-dimensional space of genes, called a latent space, which was able to explain the tumour growth rate. The Optuna hyperparameter-optimisation framework was used to find the best combination of hyperparameters for the model. The model had an R² score of 0.93, indicating that the latent space explained 93% of the variance in the growth rate. The latent space consisted of 20 variables. To find out which genes were in this latent space, the correlation between each latent variable and each gene was calculated. The genes that were most positively or negatively correlated were assumed to be represented in the latent space and therefore involved in explaining tumour growth. Furthermore, the correlation between each latent variable and the growth rate was calculated. The up- and downregulated genes in each latent variable were kept and used to identify the pathways associated with the different latent variables. Five of these latent variables were involved in immune responses, and these were therefore investigated further. The genes in these five latent variables were mapped to cell types. One of these latent variables had an upregulated immune response for positively correlated growth, indicating that immune cells were involved in promoting cancer progression. Another latent variable had a downregulated immune response for negatively correlated growth, indicating that if these genes were upregulated instead, the tumour would be thriving. The genes found in these latent variables were analysed further. CD80, CSF1, CSF1R, IL26, IL7, IL34 and the protein NF-kappa-B were notable findings and are known immune modulators. These could possibly be used as markers for pro-tumour immunity. Furthermore, CSF1, CSF1R, IL26, IL34 and the protein NF-kappa-B could potentially be targeted in immunotherapy.
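The correlation screen described above is simple to state concretely; here is a small illustrative NumPy sketch (our own, with made-up names) of correlating each latent variable with each gene:

```python
import numpy as np

def latent_gene_correlation(Z, X, eps=1e-12):
    """Pearson correlation of each latent variable with each gene.
    Z: (cells, latents) latent codes; X: (cells, genes) expression matrix.
    Returns a (latents, genes) correlation matrix."""
    Zc = (Z - Z.mean(axis=0)) / (Z.std(axis=0) + eps)
    Xc = (X - X.mean(axis=0)) / (X.std(axis=0) + eps)
    return (Zc.T @ Xc) / Z.shape[0]
```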
17

Automatic Question Paraphrasing in Swedish with Deep Generative Models / Automatisk frågeparafrasering på svenska med djupa generativa modeller

Lindqvist, Niklas January 2021 (has links)
Paraphrase generation refers to the task of automatically generating a paraphrase given an input sentence or text. Paraphrase generation is a fundamental yet challenging natural language processing (NLP) task and is utilized in a variety of applications such as question answering, information retrieval, conversational systems, etc. In this study, we address the problem of paraphrase generation for questions in Swedish by evaluating two different deep generative models that have shown promising results on paraphrase generation of questions in English. The first model is a Conditional Variational Autoencoder (C-VAE); the second extends the first by introducing a discriminator network, forming a Generative Adversarial Network (GAN) architecture. In addition to these models, a method not based on machine learning was implemented to act as a baseline. The models were evaluated using both quantitative and qualitative measures, including grammatical correctness and equivalence to the source question. The results show that the deep generative models outperformed the baseline across all quantitative metrics. Furthermore, the qualitative evaluation showed that the deep generative models outperformed the baseline at generating grammatically correct sentences, but there was no noticeable difference between the models in terms of equivalence to the source question.
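A schematic of a C-VAE training objective of the sort evaluated here, written as a Python sketch; the encoder/decoder interfaces and shapes are our assumptions rather than the thesis's implementation.

```python
import torch
import torch.nn.functional as F

def cvae_loss(encoder, decoder, src, tgt, beta=1.0):
    """src: source-question representation; tgt: (batch, seq) paraphrase tokens.
    encoder gives q(z | src, tgt); decoder gives p(tgt | z, src)."""
    mu, logvar = encoder(src, tgt)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterise
    logits = decoder(z, src)                                  # (batch, seq, vocab)
    rec = F.cross_entropy(logits.transpose(1, 2), tgt)        # token-level NLL
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    return rec + beta * kl                                    # reconstruction + KL
```

The GAN variant would add a discriminator trained to tell generated paraphrases from real ones, with its adversarial loss added to this objective.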
18

Deep Scenario Generation of Financial Markets / Djup scenario generering av finansiella marknader

Carlsson, Filip, Lindgren, Philip January 2020 (has links)
The goal of this thesis is to explore a new clustering algorithm, VAE-clustering, and to examine whether it can be applied to find differences in the distribution of stock returns, augment the distribution of a current portfolio of stocks, and see how it performs under different market conditions. The VAE-clustering method is, as mentioned, newly introduced and not widely tested, especially not on time series. The first step is therefore to see if and how well the clustering works. We first apply the algorithm to a dataset containing monthly time series of the power demand in Italy. The purpose of this part is to focus on how well the method works technically. Once the model works well and generates proper results on the Italian power demand data, we move forward and apply the model to stock return data. In the latter application we are unable to find meaningful clusters and are therefore unable to move forward towards the goal of the thesis. The results show that the VAE-clustering method is applicable to time series. The power demand has clear differences from season to season, and the model can successfully identify those differences. When it comes to the financial data, we hoped that the model would be able to find different market regimes based on time periods. The model is, however, not able to distinguish different time periods from each other. We therefore conclude that the VAE-clustering method is applicable to time series data, but that the structure and setting of the financial data in this thesis make it too hard to find meaningful clusters. The major finding is that the VAE-clustering method can be applied to time series. We highly encourage further research into whether the method can be successfully used on financial data in settings different from those tested in this thesis.
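As a simplified stand-in for the VAE-clustering pipeline (the actual algorithm couples the clustering to the generative model rather than running k-means afterwards), the following sketch conveys the basic idea under our assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def vae_cluster(encode, windows, n_clusters=4):
    """encode: maps (n, window_len) series windows to latent means, e.g. a
    trained VAE encoder; returns a cluster label per window."""
    Z = np.asarray(encode(windows))             # (n, latent_dim) latent codes
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z)
```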
19

Neural Ordinary Differential Equations for Anomaly Detection / Neurala Ordinära Differentialekvationer för Anomalidetektion

Hlöðver Friðriksson, Jón, Ågren, Erik January 2021 (has links)
Today, a large amount of time series data is being produced by a variety of devices such as smart speakers, cell phones and vehicles. This data can be used to make inferences and predictions. Neural network based methods are among the most popular ways to model time series data. The field of neural networks is constantly expanding, and new methods and model variants are frequently introduced. In 2018, a new family of neural networks was introduced: Neural Ordinary Differential Equations (Neural ODEs). Neural ODEs have shown great potential in modelling the dynamics of temporal data. Here we present an investigation into using Neural Ordinary Differential Equations for anomaly detection. We tested two model variants, LSTM-ODE and latent-ODE. The former model utilises a neural ODE to model the continuous-time hidden state in between observations of an LSTM model; the latter is a variational autoencoder that uses the LSTM-ODE as encoder and a Neural ODE as decoder. Both models are suited for modelling sparsely and irregularly sampled time series data. Here, we test their ability to detect anomalies under varying sparsity and irregularity of the data. The models are compared to a Gaussian mixture model, a vanilla LSTM model and an LSTM variational autoencoder. Experimental results using the Human Activity Recognition dataset showed that the Neural ODE-based models obtained a better ability to detect anomalies than their LSTM-based counterparts. However, the computational training cost of the Neural ODE models was considerably higher than for the models that only utilise the LSTM architecture. The Neural ODE-based methods also consumed more memory than their LSTM counterparts.
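The ODE-RNN idea behind the LSTM-ODE variant (evolve the hidden state continuously between irregular observations, then apply a discrete update at each observation) can be sketched as follows, assuming the third-party `torchdiffeq` package for the ODE solver; this is an illustration, not the thesis code.

```python
import torch
from torchdiffeq import odeint  # assumed dependency: pip install torchdiffeq

class ODEFunc(torch.nn.Module):
    """Parameterises dh/dt = f(h) for the hidden-state dynamics."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(hidden_dim, hidden_dim), torch.nn.Tanh(),
            torch.nn.Linear(hidden_dim, hidden_dim))

    def forward(self, t, h):
        return self.net(h)

def ode_rnn_step(func, cell, h, x, t0, t1):
    """Evolve h continuously from t0 to t1, then fold in observation x."""
    ts = torch.tensor([t0, t1])
    h = odeint(func, h, ts)[-1]   # continuous evolution between observations
    return cell(x, h)             # discrete recurrent update, e.g. a GRUCell

# cell = torch.nn.GRUCell(input_dim, hidden_dim); h = torch.zeros(batch, hidden_dim)
```

Anomalies would then be scored from the model's reconstruction or prediction error at each observation.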
20

Deep generative models for natural language processing

Miao, Yishu January 2017 (has links)
Deep generative models are essential to Natural Language Processing (NLP) due to their outstanding ability to use unlabelled data, to incorporate abundant linguistic features, and to learn interpretable dependencies among data. As the structure becomes deeper and more complex, having an effective and efficient inference method becomes increasingly important. In this thesis, neural variational inference is applied to carry out inference for deep generative models. While traditional variational methods derive an analytic approximation for the intractable distributions over latent variables, here we construct an inference network conditioned on the discrete text input to provide the variational distribution. Powerful neural networks are able to approximate complicated non-linear distributions and open up possibilities for more interesting and complicated generative models. Therefore, we develop the potential of neural variational inference and apply it to a variety of models for NLP with continuous or discrete latent variables. This thesis is divided into three parts. Part I introduces a generic variational inference framework for generative and conditional models of text. For continuous or discrete latent variables, we apply a continuous reparameterisation trick or the REINFORCE algorithm to build low-variance gradient estimators. To further explore Bayesian non-parametrics in deep neural networks, we propose a family of neural networks that parameterise categorical distributions with continuous latent variables. Using the stick-breaking construction, an unbounded categorical distribution is incorporated into our deep generative models, which can be optimised by stochastic gradient back-propagation with a continuous reparameterisation. Part II explores continuous latent variable models for NLP. Chapter 3 discusses the Neural Variational Document Model (NVDM): an unsupervised generative model of text which aims to extract a continuous semantic latent variable for each document. In Chapter 4, the neural topic models modify the neural document models by parameterising categorical distributions with continuous latent variables, where the topics are explicitly modelled by discrete latent variables. The models are further extended to neural unbounded topic models with the help of the stick-breaking construction, and a truncation-free variational inference method is proposed based on a Recurrent Stick-breaking construction (RSB). Chapter 5 describes the Neural Answer Selection Model (NASM) for learning a latent stochastic attention mechanism to model the semantics of question-answer pairs and predict their relatedness. Part III discusses discrete latent variable models. Chapter 6 introduces latent sentence compression models. The Auto-encoding Sentence Compression Model (ASC), as a discrete variational auto-encoder, generates a sentence by a sequence of discrete latent variables representing explicit words. The Forced Attention Sentence Compression Model (FSC) incorporates a combined pointer network biased towards the usage of words from the source sentence, which significantly improves the performance when jointly trained with the ASC model in a semi-supervised learning fashion. Chapter 7 describes the Latent Intention Dialogue Models (LIDM) that employ a discrete latent variable to learn underlying dialogue intentions. Additionally, the latent intentions can be interpreted as actions guiding the generation of machine responses, which could be further refined autonomously by reinforcement learning. Finally, Chapter 8 summarizes our findings and directions for future work.
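The stick-breaking construction mentioned above is compact enough to show directly; this minimal NumPy sketch (ours, for illustration) draws the first k weights of a GEM(alpha) stick-breaking distribution:

```python
import numpy as np

def stick_breaking(alpha, k, seed=0):
    """First k weights of a stick-breaking (GEM) process with concentration alpha."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=k)            # fraction broken off at each step
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining                        # weights; the tail mass stays unassigned
```

In the neural models described here, the Beta draws are replaced by network-parameterised breaking proportions so the unbounded categorical can be trained by back-propagation.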
