1 |
vU-net: edge detection in time-lapse fluorescence live cell images based on convolutional neural networks. Zhang, Xitong. 23 April 2018.
Time-lapse fluorescence live cell imaging has been widely used to study various dynamic processes in cell biology. As the initial step of image analysis, it is important to localize and segment cell edges with high accuracy. However, fluorescence live-cell images usually suffer from low contrast, noise, and uneven illumination in comparison to immunofluorescence images. Deep convolutional neural networks, which learn features directly from training images, have been applied successfully to natural image analysis problems. However, the limited amount of training samples prevents their routine application in fluorescence live-cell image analysis. In this thesis, by exploiting the temporal coherence in time-lapse movies together with a VGG-16 [1] pre-trained model, we demonstrate that a deep neural network can be trained on a limited number of image frames to segment entire time-lapse movies. We propose a novel framework, vU-net, which combines the advantages of VGG-16 [1] for feature extraction and U-net [2] for feature reconstruction. Moreover, we design an auxiliary convolutional block at the end of the architecture to enhance edge detection. We evaluate our framework using the Dice coefficient and the distance between the predicted edge and the ground truth on high-resolution image datasets of an adhesion marker, paxillin, acquired by a Total Internal Reflection Fluorescence (TIRF) microscope. Our results demonstrate that, on difficult datasets: (i) the testing Dice coefficient of vU-net is 3.2% higher than that of U-net with the same amount of training images; (ii) vU-net can match the best prediction results of U-net with one third of the training images U-net requires; (iii) vU-net produces more robust predictions than U-net. Therefore, vU-net can be applied more practically to challenging live cell movies than U-net, since it requires only a small training set while achieving accurate segmentation.
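For reference, the Dice coefficient used as the first evaluation metric can be computed as in the following minimal NumPy sketch; the function name and the toy masks are illustrative and not taken from the thesis.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2 * |A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 edge masks: predicted edge vs. ground truth
pred = np.array([[0, 1, 1, 0]] * 4)
gt = np.array([[0, 1, 0, 0]] * 4)
print(dice_coefficient(pred, gt))  # ~0.667
```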
2 |
Deep GCNs with Random Partition and Generalized Aggregator. Xiong, Chenxin. 25 November 2020.
Graph Convolutional Networks (GCNs) draw significant attention due to their power of representation learning on graphs. Recent works developed frameworks to train deep GCNs and showed impressive results in tasks like point cloud classification and segmentation, and protein interaction prediction. For large-scale graphs, however, full-batch training of GCNs remains challenging, especially as GCNs go deeper. By analyzing ClusterGCN, a clustering-based mini-batch training algorithm, we propose random partition, a more efficient and effective way to implement mini-batch training. Besides, selecting different permutation-invariant functions (such as max, mean, or add) for aggregating neighbors' information can lead to very different results. We therefore propose to alleviate this sensitivity by introducing a novel generalized aggregation function. In this thesis, I analyze the drawbacks of ClusterGCN and discuss its limitations. I further compare the performance of ClusterGCN with random partition; the experimental results show that simple random partition outperforms ClusterGCN by a clear margin on the node property prediction task. Among the techniques commonly used to make GCNs go deeper, I demonstrate a better way of applying residual connections (pre-activation) to stack more layers. Last, I present the complete work of training deeper GCNs with generalized aggregators and report promising results over several datasets from the Open Graph Benchmark (OGB).
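A minimal sketch of the random-partition idea: instead of clustering the graph as ClusterGCN does, nodes are randomly assigned to mini-batches and the GCN is trained on each induced subgraph. The dense adjacency matrix, toy graph, and function names below are illustrative simplifications, not the thesis implementation.

```python
import numpy as np

def random_partition(num_nodes, num_parts, rng=None):
    """Randomly assign node indices to `num_parts` mini-batches."""
    rng = rng or np.random.default_rng()
    return np.array_split(rng.permutation(num_nodes), num_parts)

def induced_subgraph(adj, node_ids):
    """Adjacency sub-matrix induced by `node_ids` (dense here for clarity)."""
    return adj[np.ix_(node_ids, node_ids)]

# Toy graph with 8 nodes, split into 2 mini-batches each epoch
adj = (np.random.rand(8, 8) > 0.7).astype(float)
for batch_nodes in random_partition(num_nodes=8, num_parts=2):
    sub_adj = induced_subgraph(adj, batch_nodes)
    # ... run the GCN forward/backward pass on (sub_adj, features[batch_nodes]) ...
    print(batch_nodes, sub_adj.shape)
```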
3 |
Improving Text Classification Using Graph-based Methods. Karajeh, Ola Abdel-Raheem Mohammed. 05 June 2024.
Text classification is a fundamental natural language processing task. However, in real-world applications, class distributions are usually skewed, e.g., due to inherent class imbalance. In addition, the task difficulty changes based on the underlying language. When rich morphological structure and high ambiguity are exhibited, natural language understanding can become challenging. For example, Arabic, ranked the fifth most widely used language, has a rich morphological structure and high ambiguity that result from Arabic orthography. Thus, Arabic natural language processing is challenging. Several studies employ Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs), but Graph Convolutional Networks (GCNs) have not yet been investigated for the task. Sequence-based models can successfully capture semantics in local consecutive text sequences. On the other hand, graph-based models can preserve global co-occurrences that capture non-consecutive and long-distance semantics. A text representation approach that combines local and global information can enhance performance in practical class imbalance text classification scenarios. Yet, multi-view graph-based text representations have received limited attention.
In this research, we first introduce the Multi-view Minority Class Text Graph Convolutional Network (MMCT-GCN), a transductive multi-view text classification model that captures textual graph representations for the minority class alongside sequence-based text representations. Experimental results show that MMCT-GCN obtains consistent improvements over baselines. Second, we develop an Arabic Bidirectional Encoder Representations from Transformers (BERT) Graph Convolutional Network (AraBERT-GCN), a hybrid model that combines large-scale pre-trained models, which encode the local context and semantics, with graph-based features capable of extracting global word co-occurrences and non-consecutive extended semantics within only one or two hops. Experimental results show that AraBERT-GCN outperforms the state-of-the-art (SOTA) on our Arabic text datasets. Finally, we propose an Arabic Multidimensional Edge Graph Convolutional Network (AraMEGraph) designed for text classification that encapsulates richer and context-aware representations of word and phrase relationships, thus mitigating the impact of the complexity and ambiguity of the Arabic language. / Doctor of Philosophy / The text classification task is an important step in understanding natural language. However, this task has many challenges, such as uneven data distributions and language difficulty. For example, Arabic is the fifth most spoken language. It has many different word forms and meanings, which can make it harder to understand. Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs) are widely utilized for text classification. However, another kind of network called the graph convolutional network (GCN) has yet to be explored for this task. Graph-based models keep track of how words are connected, even if they are not right next to each other in a sentence. This helps with better understanding the meaning of words. On the other hand, sequence-based models do well in understanding the meaning of words that are right next to each other. Mixing both types of information in text understanding can work better, especially when dealing with unevenly distributed data. In this research, we introduce a new text classification method called the Multi-view Minority Class Text Graph Convolutional Network (MMCT-GCN). This model looks at text from different angles and combines information from graphs and sequence-based models. Our experiments show that this model performs better than other ones proposed in the literature. Additionally, we propose an Arabic BERT Graph Convolutional Network (AraBERT-GCN). It combines pre-trained models that understand words in context with graph features that look at how words relate to each other globally. This helps AraBERT-GCN do better than other models when working with Arabic text. Finally, we develop a special network called the Arabic Multidimensional Edge Graph Convolutional Network (AraMEGraph) for Arabic text. It is designed to better understand Arabic and classify text more accurately. We do this by adding special edge features with multiple dimensions to help the network learn the relationships between words and phrases.
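A minimal sketch of a single graph convolution layer of the kind these hybrid models build on, applied to a word co-occurrence graph whose node features could be contextual (e.g., BERT) embeddings; the class name, dimensions, and toy graph are illustrative assumptions, not the thesis implementation.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One GCN layer: H' = relu(D^-1/2 (A + I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
        return torch.relu(a_norm @ self.linear(h))

# Toy usage: 5 word nodes with 768-d contextual embeddings (e.g., from a BERT encoder)
adj = torch.randint(0, 2, (5, 5)).float()
adj = ((adj + adj.t()) > 0).float()                   # symmetric co-occurrence graph
word_feats = torch.randn(5, 768)
layer = GCNLayer(768, 128)
print(layer(word_feats, adj).shape)                   # torch.Size([5, 128])
```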
4 |
Bayesian Optimization for Neural Architecture Search using Graph Kernels. Krishnaswami Sreedhar, Bharathwaj. January 2020.
Neural architecture search is a popular method for automating architecture design. Bayesian optimization is a widely used approach for hyper-parameter optimization and can estimate a function with limited samples. However, Bayesian optimization methods are not preferred for architecture search, as they expect vector inputs while graphs are high-dimensional data. This thesis presents a Bayesian approach with Gaussian priors that uses graph kernels specifically targeted to work in the higher-dimensional graph space. We implemented three different graph kernels and show that, on the NAS-Bench-101 dataset, an untrained graph convolutional network kernel significantly outperforms previous methods in terms of the best network found and the number of samples required to find it. We follow the AutoML guidelines to make this work reproducible. / Neural architecture search is a popular method for automating architecture design. Bayesian optimization is a common approach for hyper-parameter optimization and can estimate a function with limited samples. However, Bayesian optimization methods are not preferred for architecture search, since they expect vector inputs while graphs are high-dimensional data. This thesis presents a Bayesian approach with Gaussian priors that uses graph kernels specifically focused on working in the higher-dimensional graph space. We implemented three different graph kernels and show that, on the NAS-Bench-101 data, even an untrained graph convolutional network kernel outperforms previous methods in terms of the best network found and the number of samples required to find it. We follow the AutoML guidelines to make this work reproducible.
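A minimal sketch of the Bayesian optimization loop over a discrete pool of candidate architectures, assuming a graph kernel has already been evaluated between all architectures; the zero-mean Gaussian process, the expected-improvement acquisition, and the function names are illustrative simplifications rather than the thesis implementation.

```python
import numpy as np
from scipy.stats import norm

def gp_posterior(K_train, K_cross, k_diag, y, noise=1e-6):
    """GP posterior mean/variance from precomputed graph-kernel blocks (zero prior mean)."""
    K = K_train + noise * np.eye(len(y))
    alpha = np.linalg.solve(K, y)
    mean = K_cross.T @ alpha                                  # k(*, X) K^-1 y
    v = np.linalg.solve(K, K_cross)
    var = np.maximum(k_diag - np.sum(K_cross * v, axis=0), 1e-12)
    return mean, var

def expected_improvement(mean, var, best_y):
    """EI acquisition for maximizing validation accuracy."""
    std = np.sqrt(var)
    z = (mean - best_y) / std
    return (mean - best_y) * norm.cdf(z) + std * norm.pdf(z)

# Tiny synthetic demo: K[i, j] is a (hypothetical) graph kernel between architectures;
# 3 candidates already evaluated, 2 remaining. At each BO step the argmax-EI candidate
# would be trained, scored, and added to the evaluated set.
K = np.array([[1.0, 0.5, 0.2, 0.3, 0.1],
              [0.5, 1.0, 0.4, 0.2, 0.3],
              [0.2, 0.4, 1.0, 0.6, 0.5],
              [0.3, 0.2, 0.6, 1.0, 0.4],
              [0.1, 0.3, 0.5, 0.4, 1.0]])
y = np.array([0.90, 0.92, 0.88])                              # validation accuracies
mean, var = gp_posterior(K[:3, :3], K[:3, 3:], np.diag(K)[3:], y)
print(expected_improvement(mean, var, y.max()))
```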
5 |
Komprese obrazu pomocí neuronových sítí / Image Compression with Neural Networks. Teuer, Lukáš. January 2018.
This document describes image compression using different types of neural networks. Properties of neural networks such as convolutional and recurrent architectures are also discussed. The document contains a detailed description of various neural network architectures and their inner workings. In addition, experiments are carried out on various neural network structures and parameters in order to find the most appropriate properties for image compression. New concepts for image compression using neural networks are also proposed and immediately tested. Finally, a network is designed from the best concepts and parts discovered during experimentation.
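A minimal sketch of a convolutional autoencoder, the kind of baseline such compression experiments typically start from; the layer sizes and latent dimensionality are illustrative assumptions and do not correspond to the architectures studied in the thesis.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Encoder compresses the image to a small latent map; decoder reconstructs it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # H -> H/2
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),   # H/2 -> H/4 (compressed code)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.rand(1, 3, 64, 64)
recon = model(x)
loss = nn.functional.mse_loss(recon, x)   # reconstruction objective
print(recon.shape, loss.item())
```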
6 |
Applications of Graph Convolutional Networks and DeepGCNs in Point Cloud Part Segmentation and Upsampling. Abualshour, Abdulellah. 18 April 2020.
Graph convolutional networks (GCNs) have shown promising results in learning from point cloud data. Applications of GCNs include point cloud classification, point cloud segmentation, point cloud upsampling, and more. Recently, the introduction of Deep Graph Convolutional Networks (DeepGCNs) allowed GCNs to go deeper, resulting in better graph learning while avoiding the vanishing gradient problem. By adapting impactful methods from convolutional neural networks (CNNs) such as residual connections, dense connections, and dilated convolutions, DeepGCNs allowed GCNs to learn better from non-Euclidean data. In addition, deep learning methods have proved very effective for point cloud upsampling. Unlike traditional optimization-based methods, deep learning-based approaches to point cloud upsampling rely on neither priors nor hand-crafted features to learn how to upsample point clouds. In this thesis, I discuss the impact and show the performance results of DeepGCNs in point cloud part segmentation on the PartNet dataset. I also illustrate the significance of using GCNs as upsampling modules in point cloud upsampling by introducing two novel upsampling modules: Multi-branch GCN and Clone GCN. I show quantitatively and qualitatively the performance of our novel and versatile upsampling modules when evaluated on a newly proposed standardized dataset, PU600, which is the largest and most diverse point cloud upsampling dataset currently in the literature.
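A minimal sketch of the building blocks such point cloud GCNs rest on: a k-nearest-neighbor graph over the points and one graph convolution with max aggregation and a residual connection (one of the DeepGCN ideas mentioned above). The layer design, names, and dimensions are illustrative assumptions, not the thesis modules.

```python
import torch
import torch.nn as nn

def knn_graph(points, k=8):
    """Indices of the k nearest neighbors for each point in an (N, 3) tensor."""
    dist = torch.cdist(points, points)                        # pairwise Euclidean distances
    return dist.topk(k + 1, largest=False).indices[:, 1:]     # drop self

class ResGCNLayer(nn.Module):
    """Max aggregation over neighbors with a residual (skip) connection."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, feats, neighbors):
        nbr_feats = feats[neighbors]                          # (N, k, dim)
        center = feats.unsqueeze(1).expand_as(nbr_feats)
        msg = self.mlp(torch.cat([center, nbr_feats - center], dim=-1))
        return feats + msg.max(dim=1).values                  # residual connection

points = torch.rand(128, 3)
feats = torch.rand(128, 32)
layer = ResGCNLayer(32)
print(layer(feats, knn_graph(points, k=8)).shape)             # torch.Size([128, 32])
```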
7 |
Synthesis of sequential data. Viklund, Joel. January 2021.
Good generative models for short time series data exist and have been applied for both data augmentation and privacy protection purposes in the past. A common theme for existing generative models is that they all use a recurrent neural network (RNN) architecture, which limits the models regarding the length of the sequences. In real-world problems, we might have to deal with data containing longer sequences, and it is such data we attempt to synthesize in this thesis. By combining the recently successful TimeGAN framework with a temporal convolutional network component architecture, we generate synthetic sequential data for two toy data sets: sequential MNIST and multivariate sine waves. The results strongly indicate, although relying solely on visual inspection, that the model manages to capture long temporal dynamics as well as relations between different features for the multivariate sine waves data set. In order to make our model applicable to real-world data sets, we suggest two improvements. Firstly, the validation of the generated data should not rely only on visual inspection, but should also ensure that the synthetic data has the same statistical distribution as the real data. Secondly, depending on the task, the model should be refined so that the synthetic samples look even more realistic.
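A minimal sketch of the dilated causal convolution that a temporal convolutional network component is built from, showing how stacked dilations let the receptive field cover long sequences; the channel counts and dilation schedule are illustrative assumptions, not the thesis model.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """Dilated causal 1-D convolution: the output at time t sees only inputs up to t."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                               # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))         # left-pad only, preserving causality
        return self.conv(x)

# Stack blocks with exponentially growing dilation to cover long temporal contexts
tcn = nn.Sequential(
    CausalConv1d(1, 16, dilation=1), nn.ReLU(),
    CausalConv1d(16, 16, dilation=2), nn.ReLU(),
    CausalConv1d(16, 16, dilation=4), nn.ReLU(),
)
x = torch.randn(8, 1, 200)    # batch of 8 univariate sequences of length 200
print(tcn(x).shape)           # torch.Size([8, 16, 200])
```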
8 |
Milling Tool Condition Monitoring Using Acoustic Signals and Machine Learning. Cooper, Clayton Alan. January 2019.
No description available.
9 |
Radar-based Machine Learning Approaches for Classification of Rehabilitation Exercises. Sosa Gomez, Jose Maria. 06 1900.
Muscular rehabilitation is essential for recovery from injury or surgery, restoring strength, flexibility, and range of motion to the affected joints and muscles. It can also improve posture and performance by strengthening weak areas, reducing the risk of injury, and helping manage chronic conditions like arthritis, osteoporosis, or chronic pain. Currently, only physical therapy addresses these problems, and the treatment is delivered in person at a specific location, such as a hospital or a clinic. Other works have proposed mounting surface electromyography sensors to recognize muscle activation patterns, placing sensors on the wrist and forearm to detect muscle fatigue, or using cameras for video-call sessions. Regrettably, such approaches put the patient's privacy or comfort at risk.

Our proposed solution is radar- and machine learning-based monitoring and classification of rehabilitation exercises. This RF-based system can accurately monitor and classify exercises that are part of the treatment for a specific need, in the privacy of the patient's home. The proposed solution uses the RF reflections of the body and the environment, feeding these signals to a machine learning algorithm that classifies the exercise the person executes. The solution could be used anywhere in the home by any patient with minimal setup effort. Our results, obtained from four subjects in their own homes, show that the trained model can classify with an accuracy of 87% to 97%.
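A minimal sketch of the classification stage, assuming each recorded exercise repetition has already been reduced to a fixed-length feature vector derived from the radar return; the synthetic data, feature dimensionality, and random-forest classifier are illustrative stand-ins rather than the pipeline used in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each exercise repetition is assumed to be summarized as a fixed-length feature
# vector (e.g., statistics of the radar reflection signal over time).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # 200 repetitions, 64 features each (synthetic stand-in)
y = rng.integers(0, 5, size=200)        # 5 exercise classes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation accuracy
print(scores.mean())
```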
10 |
Generating Comprehensible Equations from Unknown Discrete Dynamical Systems Using Neural Networks. Maroli, John Michael. January 2019.
No description available.