
Kernel Coherence Encoders

Sun, Fangzheng 23 April 2018 (has links)
In this thesis, we introduce a novel model based on the idea of autoencoders. Unlike a classic autoencoder, which reconstructs its own inputs through a neural network, our model is closer to Kernel Canonical Correlation Analysis (KCCA) and reconstructs input data from another data set, where the two data sets are assumed to have some, perhaps non-linear, dependence. Our model extends traditional KCCA in that the non-linearity of the data is learned by optimizing a kernel function through a neural network. In one of the novelties of this thesis, we do not optimize our kernel against a prediction error metric, as is classical in autoencoders. Rather, we optimize our kernel to maximize the "coherence" of the underlying low-dimensional hidden layers. This idea makes our method faithful to the classic interpretation of linear Canonical Correlation Analysis (CCA). As far as we are aware, our method, which we call a Kernel Coherence Encoder (KCE), is the only extant approach that uses the flexibility of a neural network while maintaining the theoretical properties of classic KCCA. In another novelty of our approach, we leverage a modified version of classic coherence that is far more stable in the presence of high-dimensional data, addressing computational and robustness issues in the implementation of a coherence-based deep learning KCCA.
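The abstract contrasts the KCE with classical linear CCA. As a grounding sketch only (not the thesis's kernel method), classical CCA fits in a few lines of numpy: whiten each view, then take the SVD of the cross-covariance; the singular values are the canonical correlations that coherence-style objectives generalize. The paired data sets here are synthetic.

```python
import numpy as np

def linear_cca(X, Y, reg=1e-6):
    """Top canonical correlation between paired data sets X and Y.

    Classical CCA: whiten each view, then take the SVD of the
    cross-covariance; singular values are the canonical correlations.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is SPD)
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    s = np.linalg.svd(M, compute_uv=False)
    return s[0]  # top canonical correlation, in [0, 1]

rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))                       # shared latent source
X = z @ rng.normal(size=(1, 3)) + 0.1 * rng.normal(size=(500, 3))
Y = z @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(500, 4))
print(round(linear_cca(X, Y), 2))                   # close to 1: views are dependent
```

The KCE replaces the fixed linear maps with a learned kernel and a coherence objective; this sketch only shows the quantity being generalized.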

Bayesian Optimization and Semiparametric Models with Applications to Assistive Technology

Snoek, Jasper Roland 14 January 2014 (has links)
Advances in machine learning are having a profound impact on disciplines spanning the sciences. Assistive technology and health informatics are fields for which minor improvements achieved through leveraging more advanced machine learning algorithms can translate to major real-world impact. However, successful application of machine learning currently requires broad domain knowledge to determine which model is appropriate for a given task, and model-specific expertise to configure a model to a problem of interest. A major motivation for this thesis was: how can we make machine learning more accessible to assistive technology and health informatics researchers? Naturally, a complementary goal is to make machine learning more accessible in general. Specifically, in this thesis we explore how to automate the role of a machine learning expert by automatically adapting models and adjusting parameters to a given task of interest. This thesis consists of a number of contributions towards solving this challenging open problem in machine learning, and these are empirically validated on four real-world applications. Through an interesting theoretical link between two seemingly disparate latent variable models, we create a hybrid model that allows one to flexibly interpolate over a parametric unsupervised neural network, a classification neural network, and a non-parametric Gaussian process. We demonstrate empirically that this non-parametrically guided autoencoder allows one to learn a latent representation that is more useful for a given task of interest. We establish methods for automatically configuring machine learning model hyperparameters using Bayesian optimization. We develop Bayesian methods for integrating over parameters, explore the use of different priors over functions, and develop methods to run experiments in parallel.
We demonstrate empirically that these methods find better hyperparameters on recent benchmark problems spanning machine learning in significantly fewer experiments than the methods employed by the problems' authors. We further establish methods for incorporating parameter-dependent variable cost into the optimization procedure. These methods find better hyperparameters at lower cost, such as time, or within bounded cost, such as before a deadline. Additionally, we develop a constrained Bayesian optimization variant and demonstrate its superiority over the standard procedure in the presence of unknown constraints.
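The Bayesian optimization loop described above — fit a probabilistic surrogate, maximize an acquisition function, evaluate, repeat — can be sketched in plain numpy. This is an illustrative toy (a fixed-lengthscale Gaussian process with expected improvement, minimizing a 1-D quadratic stand-in for a validation loss), not the thesis's implementation:

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(A, B, ls=0.3):
    # Squared-exponential kernel between 1-D point sets
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-6):
    # Standard GP regression equations with a small jitter term
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_new)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y_obs
    var = 1.0 - np.sum(Ks * (Kinv @ Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # For minimization: E[max(best - f, 0)] under the GP posterior
    sd = np.sqrt(var)
    z = (best - mu) / sd
    Phi = np.array([0.5 * (1 + erf(v / sqrt(2))) for v in z])
    phi = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mu) * Phi + sd * phi

f = lambda x: (x - 0.7) ** 2              # toy "validation loss" to minimize
x_obs = np.array([0.1, 0.5, 0.9])
y_obs = f(x_obs)
grid = np.linspace(0, 1, 201)
for _ in range(10):                       # BO loop: fit GP, maximize EI, evaluate
    mu, var = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, f(x_next))
print(round(float(x_obs[np.argmin(y_obs)]), 2))   # ≈ 0.7, the true minimizer
```

The thesis's contributions (integrating over GP hyperparameters, parallel experiments, cost awareness, constraints) all slot into this same loop.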

Anomaly detection in SCADA systems using machine learning

Fiah, Eric Kudjoe 12 May 2023 (has links) (PDF)
In this thesis, different machine learning (ML) algorithms were used to detect anomalies in a dataset from a gas pipeline SCADA system generated by Mississippi State University's SCADA laboratory. The work was divided into two parts: binary classification and categorized classification. In the binary classification, two attack types were considered: command injection and response injection. Eight ML classifiers were used and their results compared; the LightGBM and decision tree classifiers performed better than the other algorithms. In the categorized classification task, seven attack types in the dataset were analyzed using six different ML classifiers. The light gradient-boosting machine (LightGBM) outperformed all the other classifiers in detecting every attack type. Another aspect of the categorized classification was the use of an autoencoder to improve the performance of all the classifiers used. The last part of this thesis used SHAP plots to explain the features that accounted for each attack type in the dataset.
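As a much-simplified illustration of the binary attack-detection setup (not the LightGBM or decision-tree models actually used, and with invented stand-in features), a decision stump on two hypothetical SCADA measurements shows how a single learned threshold can separate command-injection-like traffic from normal traffic:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical SCADA features: [pipeline pressure, command rate].
# Normal traffic vs. a command-injection-like attack that inflates the command rate.
normal = rng.normal([50.0, 2.0], [3.0, 0.5], size=(300, 2))
attack = rng.normal([50.0, 9.0], [3.0, 1.0], size=(300, 2))
X = np.vstack([normal, attack])
y = np.array([0] * 300 + [1] * 300)       # 0 = normal, 1 = attack

def fit_stump(X, y):
    """Exhaustive decision stump: best (feature, threshold) by accuracy."""
    best = (0, 0.0, 0.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            acc = ((X[:, j] > t) == y).mean()
            if acc > best[2]:
                best = (j, t, acc)
    return best

j, t, acc = fit_stump(X, y)
print(j, acc > 0.95)                      # the stump picks the command-rate feature
```

A real gradient-boosted model is an ensemble of many such splits; the SHAP analysis mentioned above attributes predictions back to features like these.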

Football Trajectory Modeling Using Masked Autoencoders : Using Masked Autoencoder for Anomaly Detection and Correction for Football Trajectories

Tor, Sandra January 2023 (has links)
Football trajectory modeling is a powerful tool for predicting and evaluating the movement of a football and its dynamics. Masked autoencoders are scalable self-supervised learners used for representation learning of partially observable data. They have been shown to provide successful results in pre-training for computer vision and natural language processing tasks, but their use on multivariate time-series data has not been researched to the same extent. This thesis investigates the potential of using masked autoencoders for multivariate time-series modeling of football trajectory data, in collaboration with Tracab. Two versions of the masked autoencoder network, with alterations, are tested, implemented for use with multivariate time-series data. The resulting models are used to detect anomalies in the football trajectory and to propose corrections based on the reconstruction. The results are evaluated, discussed, and compared against the tracked and manually corrected values of the ball trajectory. The performance of the different frameworks is compared and the overall anomaly detection capabilities are discussed. The results suggested that even though the regular autoencoder had a smaller average reconstruction error during training and testing, using masked autoencoders improved anomaly detection performance. However, neither the regular autoencoder nor the masked autoencoder managed to propose plausible trajectories to correct anomalies in the data. This thesis promotes further research in the field of using masked autoencoders for time series and trajectory modeling.
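The detect-by-reconstruction idea can be sketched without a neural network: hide part of a trajectory, reconstruct it from the visible context, and flag positions with large residuals. This toy replaces the masked autoencoder with linear interpolation over a synthetic ball-height curve, so it illustrates only the masking-and-scoring logic, not the thesis's models:

```python
import numpy as np

def mask_and_reconstruct(traj, start, width):
    """Hide traj[start:start+width] and refill it by linear interpolation,
    mimicking the reconstruct-from-partial-observations step of a masked
    autoencoder (the network is replaced by interpolation here)."""
    rec = traj.copy()
    rec[start:start + width] = np.interp(
        np.arange(start, start + width),
        [start - 1, start + width],
        [traj[start - 1], traj[start + width]],
    )
    return rec

t = np.linspace(0, 1, 100)
ball_z = 5 * t * (1 - t)                  # smooth parabolic flight
glitch = ball_z.copy()
glitch[50] += 2.0                         # tracking error: a jump in height

def anomaly_score(traj, width=3):
    # Reconstruction residual at each maskable position
    return np.array([abs(traj[s] - mask_and_reconstruct(traj, s, width)[s])
                     for s in range(1, len(traj) - width)])

scores = anomaly_score(glitch)
print(np.argmax(scores) + 1)              # the glitch frame, 50
```

A masked autoencoder plays the role of the interpolator but can learn ball dynamics far richer than a straight line, which is what makes reconstruction-based correction plausible in the first place.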

Optimizing Neural Network Models for Healthcare and Federated Learning

Verardo, Giacomo January 2024 (has links)
Neural networks (NNs) have demonstrated considerable capabilities in tackling tasks in a diverse set of fields, including natural language processing, image classification, and regression. In recent years, the amount of data available to train deep learning (DL) models has increased tremendously, requiring larger and larger models to learn the underlying patterns in the data. Inference time, communication cost in the distributed case, required storage resources, and required computational capability have increased in proportion to model size, making NNs less suitable for two cases: i) tasks requiring low inference time (e.g., real-time monitoring) and ii) training on low-powered devices. These two cases, which have become crucial in the last decade due to the pervasiveness of low-powered devices and NN models, are addressed in this licentiate thesis. As the first contribution, we analyze the distributed case with multiple low-powered devices in a federated scenario. Cross-device federated learning (FL) is a branch of machine learning (ML) where multiple participants train a common global model without sharing data in a centralized location. In this thesis, a novel technique named Coded Federated Dropout (CFD) is proposed to carefully split the global model into sub-models, increasing communication efficiency and reducing the burden on the devices with only a slight increase in training time. We showcase our results for an example image classification task. As the second contribution, we consider the anomaly detection task on electrocardiogram (ECG) recordings and show that including prior knowledge in NN models drastically reduces model size, inference time, and storage resources for multiple state-of-the-art NNs. In particular, this thesis focuses on autoencoders (AEs), a subclass of NNs suitable for anomaly detection, and proposes a novel approach, called FMM-Head, which incorporates basic knowledge of the ECG waveform shape into an AE.
The evaluation shows that we improve the AUROC of baseline models while guaranteeing under-100ms inference time, thus enabling real-time monitoring of ECG recordings from hospitalized patients. Finally, several potential future works are presented. The inclusion of prior knowledge can be further exploited in the ECG imaging (ECGI) case, where hundreds of ECG sensors are used to reconstruct the 3D electrical activity of the heart. For ECGI, reducing the number of sensors employed (i.e., the input space) is also beneficial in terms of reducing model size. Moreover, this thesis advocates additional techniques to integrate ECG anomaly detection in a distributed and federated case. / The research leading to this thesis is based upon work supported by the King Abdullah University of Science and Technology (KAUST) Office of Research Administration (ORA) under Award No. ORA-CRG2021-4699
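The sub-model splitting that federated-dropout-style techniques perform can be sketched in a few lines (the actual CFD coding scheme is not reproduced here, and the local "training step" is a stand-in): each client receives and transmits only its own disjoint slice of a layer's weights, and the server reassembles the updates by index.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, n_clients = 12, 3
W_global = rng.normal(size=(hidden, 4))   # one hidden layer's weight matrix

# Federated-dropout-style split: disjoint subsets of hidden units, so each
# client trains (and transmits) only hidden/n_clients rows of the layer.
perm = rng.permutation(hidden).reshape(n_clients, -1)

updates = np.zeros_like(W_global)
for rows in perm:
    sub = W_global[rows]                  # sub-model sent to this client
    sub_update = -0.1 * sub               # stand-in for a local gradient step
    updates[rows] += sub_update           # server reassembles by row index

W_new = W_global + updates
print(W_new.shape, np.allclose(W_new, 0.9 * W_global))
```

Each client here communicates only a third of the layer, which is the source of the communication savings the abstract describes; the "coded" part of CFD concerns how those subsets are chosen across clients and rounds.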

VISION TRANSFORMERS AND MASKED AUTOENCODERS FOR SEISMIC FACIES SEGMENTATION

DANIEL CESAR BOSCO DE MIRANDA 12 January 2024 (has links)
The development of self-supervised learning techniques has gained a lot of visibility in the field of computer vision, as it allows the pre-training of deep neural networks without the need for annotated data. In some domains, annotations are costly, requiring a great deal of specialized work to label the data. This problem is very common in the oil and gas sector, where there is a vast amount of uninterpreted data. The present work applies the self-supervised learning technique called Masked Autoencoders to pre-train Vision Transformer models on seismic data. To evaluate the pre-training, transfer learning was applied to the seismic facies segmentation problem. In the pre-training phase, four different seismic volumes were used. For segmentation, the Facies-Mark dataset was used and the Segmentation Transformers model was chosen from the literature. To evaluate and compare the performance of the methodology, the segmentation metrics used by the benchmarking work of ALAUDAH (2019) were employed. The metrics obtained in the present work showed a superior result: for the frequency weighted intersection over union (FWIU) metric, for example, we obtained a gain of 7.45 percent relative to the reference work. The results indicate that the methodology is promising for improving computer vision problems on seismic data.
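Since the segmentation results above are reported in frequency weighted intersection over union, a small sketch of how that metric is computed from a confusion matrix may help (the toy two-class confusion matrix is invented, not from the thesis):

```python
import numpy as np

def fw_iou(conf):
    """Frequency-weighted intersection over union from a confusion
    matrix (rows = ground-truth facies class, cols = prediction)."""
    tp = np.diag(conf).astype(float)
    gt = conf.sum(axis=1).astype(float)    # per-class ground-truth pixel counts
    pred = conf.sum(axis=0).astype(float)
    iou = tp / (gt + pred - tp)            # per-class intersection over union
    return (gt / gt.sum()) @ iou           # weight each IoU by class frequency

# Toy 2-class segmentation result
conf = np.array([[80, 20],
                 [10, 90]])
print(round(fw_iou(conf), 3))
```

Weighting by class frequency keeps rare facies from dominating the score, which is why benchmarks on imbalanced seismic data often report FWIU alongside mean IoU.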

Regularizing Vision-Transformers Using Gumbel-Softmax Distributions on Echocardiography Data

Nilsson, Alfred January 2023 (has links)
This thesis introduces a novel approach to model regularization in Vision Transformers (ViTs), a category of deep learning models. It employs stochastic embedded feature selection within the context of echocardiography video analysis, specifically focusing on the EchoNet-Dynamic dataset. The proposed method, termed Gumbel Vision-Transformer (G-ViT), combines ViTs and Concrete Autoencoders (CAE) to enhance the generalization of models predicting left ventricular ejection fraction (LVEF). The model comprises a ViT frame encoder for spatial representation and a transformer sequence model for temporal aspects, forming a Video ViT (V-ViT) architecture that, when used without feature selection, serves as a baseline for LVEF prediction performance. The key contribution lies in the incorporation of stochastic image patch selection in video frames during training. The CAE method is adapted for this purpose, achieving approximately discrete patch selections by sampling from the Gumbel-Softmax distribution, a relaxation of the categorical distribution. The experiments conducted on EchoNet-Dynamic demonstrate a consistent and notable regularization effect. The G-ViT model, trained with learned feature selection, achieves a test R² of 0.66, outperforming random masking baselines and the full-input V-ViT counterpart (R² of 0.63), and showcases improved generalization across multiple evaluation metrics. The G-ViT is compared against recent related work applying ViTs to EchoNet-Dynamic, notably outperforming the application of Swin transformers, UltraSwin, which achieved an R² of 0.59. Moreover, the thesis explores model explainability by visualizing selected patches, providing insights into how the G-ViT utilizes regions known to be crucial for human LVEF prediction. The proposed approach thus extends beyond regularization, offering a unique explainability tool for ViTs.
Efficiency aspects are also considered, revealing that the G-ViT model, trained with a reduced number of input tokens, yields comparable or superior results while significantly reducing GPU memory and floating-point operations. This efficiency improvement holds potential for energy reduction during training.
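The Gumbel-Softmax relaxation at the heart of the G-ViT's patch selection is easy to sketch in numpy: perturb log-probabilities with Gumbel noise, then apply a temperature-scaled softmax. Low temperatures give near-one-hot (approximately discrete) selections while the operation stays differentiable in a real training setup. The patch-selection probabilities below are invented for illustration:

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """One relaxed sample from a categorical with the given logits.
    As tau -> 0 samples approach one-hot; larger tau gives softer mixes."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())                               # stable softmax
    return y / y.sum()

rng = np.random.default_rng(0)
logits = np.log(np.array([0.7, 0.2, 0.1]))   # hypothetical patch probabilities
hard_counts = np.zeros(3)
for _ in range(2000):
    hard_counts[np.argmax(gumbel_softmax(logits, tau=0.1, rng=rng))] += 1
print(hard_counts / 2000)    # argmax frequencies track the probabilities
```

The argmax of the perturbed logits is an exact categorical sample (the Gumbel-max trick); the softmax relaxation is what lets gradients flow to the selection probabilities during training.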

Deep Time: Deep Learning Extensions to Time Series Factor Analysis with Applications to Uncertainty Quantification in Economic and Financial Modeling

Miller, Dawson Jon 12 September 2022 (has links)
This thesis establishes methods to quantify and explain uncertainty through high-order moments in time series data, along with first-principles-based improvements on the standard autoencoder and variational autoencoder. While the first-principles improvements on the standard variational autoencoder provide additional means of explainability, we ultimately look to non-variational methods for quantifying uncertainty under the autoencoder framework. We utilize Shannon's differential entropy to accomplish the task of uncertainty quantification in a general nonlinear and non-Gaussian setting. Together with previously established connections between autoencoders and principal component analysis, we motivate the focus on differential entropy as a proper abstraction of principal component analysis to this more general framework, where nonlinear and non-Gaussian characteristics in the data are permitted. Furthermore, we are able to establish explicit connections between high-order moments in the data and those in the latent space, which induce a natural latent space decomposition and, by extension, an explanation of the estimated uncertainty. The proposed methods are intended to be utilized in economic and financial factor models in state space form, building on recent developments in the application of neural networks to factor models with applications to financial and economic time series analysis. Finally, we demonstrate the efficacy of the proposed methods on high frequency hourly foreign exchange rates, macroeconomic signals, and synthetically generated autoregressive data sets. / Master of Science /
Autoencoders and variational autoencoders are called latent variable neural networks since they can estimate a representation of the data that has lower dimension than the original data. These neural network architectures have a fundamental connection to a classical latent variable method called principal component analysis, which performs a similar task of dimension reduction but under more restrictive assumptions than autoencoders and variational autoencoders. In contrast to principal component analysis, a common ailment of neural networks is the lack of explainability, which accounts for the colloquial term black-box models. While the improvements on the standard autoencoders and variational autoencoders help with the problem of explainability, we ultimately look to alternative probabilistic methods for quantifying uncertainty. To accomplish this task, we focus on Shannon's differential entropy, which is entropy applied to continuous domains such as time series data. Entropy is intricately connected to the notion of uncertainty, since it depends on the amount of randomness in the data. Together with previously established connections between autoencoders and principal component analysis, we motivate the focus on differential entropy as a proper abstraction of principal component analysis to a general framework that does not require the restrictive assumptions of principal component analysis. Furthermore, we are able to establish explicit connections between high-order moments in the data and the estimated latent variables (i.e., the reduced-dimension representation of the data). Estimating high-order moments allows for a more accurate estimation of the true distribution of the data. By connecting the estimated high-order moments in the data to the latent variables, we obtain a natural decomposition of the uncertainty surrounding the latent variables, which allows for increased explainability of the proposed autoencoder.
The methods introduced in this thesis are intended to be utilized in a class of economic and financial models called factor models, which are frequently used in policy and investment analysis. A factor model is another type of latent variable model, which in addition to estimating a reduced dimension representation of the data, provides a means to forecast future observations. Finally, we demonstrate the efficacy of the proposed methods on high frequency hourly foreign exchange rates, macroeconomic signals, and synthetically generated autoregressive data sets. The results support the superiority of the entropy-based autoencoder to the standard variational autoencoder both in capability and computational expense.
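The role of Shannon's differential entropy as an uncertainty measure can be illustrated on the one distribution with a clean closed form, the Gaussian, alongside a simple histogram plug-in estimate. This illustrates only the quantity itself, not the thesis's latent-space estimator:

```python
import numpy as np

def gaussian_diff_entropy(sigma):
    """Closed-form Shannon differential entropy of N(mu, sigma^2), in nats."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)

def histogram_entropy(x, bins=60):
    """Plug-in estimate: discretize, then sum -p*log(p) plus the bin-width
    correction that turns discrete entropy into differential entropy."""
    counts, edges = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    width = edges[1] - edges[0]
    p = p[p > 0]
    return -(p * np.log(p)).sum() + np.log(width)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=100_000)
print(round(gaussian_diff_entropy(2.0), 3))   # 0.5*ln(2*pi*e*4) ≈ 2.112
print(round(histogram_entropy(x), 2))         # the estimate lands nearby
```

Larger sigma means more randomness and higher entropy, which is the sense in which differential entropy quantifies uncertainty; the thesis generalizes this to nonlinear, non-Gaussian latent representations.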

Data-driven Definition of Cell Types Based on Single-cell Gene Expression Data

Glaros, Anastasios January 2016 (has links)
No description available.
