1

Deep Learning on Graph-structured Data

Lee, John Boaz T., 11 November 2019
In recent years, deep learning has made a significant impact in various fields, helping to push the state of the art forward in many application domains. Convolutional Neural Networks (CNNs) have been applied successfully to tasks such as visual object detection, image super-resolution, and video action recognition, while Long Short-Term Memory (LSTM) and Transformer networks have been used to solve a variety of challenging tasks in natural language processing. However, these popular deep learning architectures (i.e., CNNs, LSTMs, and Transformers) can only handle data that can be represented as grids or sequences. Due to this limitation, many existing deep learning approaches do not generalize to problem domains where the data is represented as graphs – social networks in social network analysis or molecular graphs in chemoinformatics, for instance. The goal of this thesis is to help bridge this gap by studying deep learning solutions that can handle graph data naturally. In particular, we explore deep learning-based approaches in the following areas. 1. Graph Attention. In the real world, graphs can be both large – with many complex patterns – and noisy, which can pose a problem for effective graph mining. An effective way to deal with this issue is to use an attention-based deep learning model. An attention mechanism allows the model to focus on task-relevant parts of the graph, which helps it make better decisions. We introduce a model for graph classification that uses an attention-guided walk to bias exploration towards more task-relevant parts of the graph. For the task of node classification, we study a different model, one with an attention mechanism that allows each node to select the most task-relevant neighborhood to integrate information from. 2. Graph Representation Learning. Graph representation learning seeks to learn a mapping that embeds nodes, and even entire graphs, as points in a low-dimensional continuous space. The mapping is optimized so that the geometric distance between objects in the embedding space reflects some notion of similarity based on the structure of the original graph(s). We study the problem of learning time-respecting embeddings for nodes in a dynamic network. 3. Brain Network Discovery. One of the fundamental tasks in functional brain analysis is brain network discovery. The brain is a complex structure made up of various brain regions, many of which interact with each other. The objective of brain network discovery is two-fold. First, we wish to partition voxels – from a functional Magnetic Resonance Imaging scan – into functionally and spatially cohesive regions (i.e., nodes). Second, we want to identify the relationships (i.e., edges) between the discovered regions. We introduce a deep learning model that learns to construct a group-cohesive partition of voxels from the scans of multiple individuals in the same group. We then introduce a second model that can recover a hierarchical set of brain regions, allowing us to examine the functional organization of the brain at different levels of granularity. Finally, we propose a model for the problem of unified and group-contrasting edge discovery, which aims to discover discriminative brain networks that help us better distinguish between samples from different classes.
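To make the kind of attention mechanism described above concrete, the sketch below implements a single GAT-style graph-attention layer over a dense adjacency matrix. It is a generic illustration rather than code from the thesis; the class name, dimensions, and toy graph are assumptions.

```python
# A minimal, dense-adjacency sketch of a GAT-style graph-attention layer.
# Illustrative only; names and sizes are assumptions, not the thesis code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphAttentionLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # shared linear transform
        self.a = nn.Linear(2 * out_dim, 1, bias=False)   # attention scoring vector

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) binary adjacency with self-loops
        h = self.W(x)                                    # (N, out_dim)
        N = h.size(0)
        # Pairwise concatenation [h_i || h_j] for every node pair
        h_i = h.unsqueeze(1).expand(N, N, -1)
        h_j = h.unsqueeze(0).expand(N, N, -1)
        e = F.leaky_relu(self.a(torch.cat([h_i, h_j], dim=-1)).squeeze(-1))  # (N, N)
        # Mask non-neighbors so attention is restricted to the local neighborhood
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=-1)                 # attention over neighbors
        return F.elu(alpha @ h)                          # aggregated node embeddings

# Usage on a toy 4-node graph
x = torch.randn(4, 8)
adj = torch.eye(4) + torch.tensor(
    [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], dtype=torch.float)
out = SimpleGraphAttentionLayer(8, 16)(x, adj)           # (4, 16)
```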
2

De novo genome-scale prediction of protein-protein interaction networks using ontology-based background knowledge

Niu, Kexin, 18 July 2022
Proteins and their functions play essential roles in many biological processes, so the study of protein-protein interactions (PPIs) is of considerable importance. PPI network data are of great scientific value; however, they are incomplete, and experimental identification is time-consuming and costly. Available computational methods perform well at PPI prediction for model organisms but poorly for novel organisms. Because interaction data for a novel organism are incomplete, it is challenging to train a model for it; moreover, millions to billions of candidate interactions would need to be verified, which is extremely compute-intensive. We aim to improve the performance of predicting whether a pair of proteins will interact, using only the two sequences as input, and to efficiently predict an entire PPI network from a proteome of sequences. We hypothesize that information about the cellular locations where proteins are active and about the proteins' 3D structures can significantly improve prediction performance. To overcome the lack of experimental data, we use structures predicted by AlphaFold2 and cellular locations predicted by DeepGoPlus. We reason that proteins belonging to disjoint biological components have very little chance of interacting; we manually chose several disjoint pairs of components and confirmed this assumption against experimental PPIs. We then generate new non-interacting pairs from these disjoint classes to update the D-SCRIPT dataset; as a result, AUPR improves by 10% compared to the original D-SCRIPT dataset. In addition, for de novo PPI network prediction we pre-filter the negatives instead of enumerating all potential pairs; for E. coli, this lets us bypass around a million negative interactions. To combine structure and sequence information, we build a graph for each protein, and a graph convolutional network with Self-Attention Graph Pooling in a Siamese architecture learns from these graphs for PPI prediction. This improves AUPR by around 20% compared to our baseline model, D-SCRIPT.
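As a rough illustration of the final model described above, the sketch below pairs a shared graph-convolution encoder and Self-Attention Graph Pooling in a Siamese arrangement, assuming PyTorch Geometric. The class names, layer sizes, and the way protein graphs are passed in are assumptions, not the author's implementation.

```python
# Sketch of a Siamese GCN + SAGPooling encoder for PPI prediction (assumed design).
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, SAGPooling, global_mean_pool

class ProteinGraphEncoder(nn.Module):
    """Embeds one protein graph (residue nodes, contact edges) into a vector."""
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.pool = SAGPooling(hidden_dim, ratio=0.5)    # keep the most informative nodes
        self.conv2 = GCNConv(hidden_dim, hidden_dim)

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h, edge_index, _, batch, _, _ = self.pool(h, edge_index, batch=batch)
        h = torch.relu(self.conv2(h, edge_index))
        return global_mean_pool(h, batch)                # one vector per protein graph

class SiamesePPIModel(nn.Module):
    """Shares one encoder across both proteins and scores the pair."""
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.encoder = ProteinGraphEncoder(in_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1))

    def forward(self, g1, g2):
        # g1, g2: torch_geometric.data.Batch objects with .x, .edge_index, .batch
        z1 = self.encoder(g1.x, g1.edge_index, g1.batch)
        z2 = self.encoder(g2.x, g2.edge_index, g2.batch)
        return torch.sigmoid(self.classifier(torch.cat([z1, z2], dim=-1)))  # P(interaction)
```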
3

A Graph Attention plus Reinforcement Learning Method for Antenna Tilt Optimization

Ma, Tengfei, January 2021
Remote Electrical Tilt optimization is an effective method to obtain optimal Key Performance Indicators (KPIs) by remotely controlling a base station antenna's vertical tilt. Improving the KPIs means improving the antennas' cooperation, since the KPIs measure the quality of cooperation between the antenna being optimized and its neighboring antennas. Reinforcement Learning (RL) is an appropriate method for learning an antenna tilt control policy, since an RL agent can learn an optimal epsilon-greedy tilt optimization policy by observing the environment and learning from state-action pairs. However, existing models produce tilt modification strategies by interpreting only the features of the antenna to be optimized, which cannot fully characterize the mobile cellular network formed by that antenna and its neighbors. Incorporating the features of the neighboring antennas into the model is therefore an important step towards a better optimization strategy. This work introduces a Graph Attention Network to model the neighboring antennas' impact on the antenna being optimized through the attention mechanism. By operating on graph-structured data, it generates a low-dimensional embedding vector with more expressive power to represent the state of the antenna being optimized in the RL framework. This new model, the Graph Attention Q-Network (GAQ), is based on DQN and aims to achieve higher performance than the Deep Q-Network (DQN) baseline, evaluated with the same metric, KPI improvement. Since GAQ has a richer perception of the environment than the vanilla DQN model, it outperforms DQN, obtaining a fourteen percent performance improvement over the baseline. In addition, GAQ performs fourteen percent better than DQN in terms of convergence efficiency.
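As a hedged illustration of the idea, the sketch below shows how a Q-network could attend over neighboring antennas' features before scoring discrete tilt actions, together with standard epsilon-greedy action selection. It is not the thesis implementation; the feature sizes, the three-action space, and all names are assumptions.

```python
# Illustrative Graph-Attention Q-Network sketch: neighbor antenna features are
# aggregated with attention into the target antenna's state, then a Q-head
# scores tilt actions (down / keep / up). All names and sizes are assumptions.
import random
import torch
import torch.nn as nn

class GraphAttentionQNetwork(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int, n_actions: int = 3):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden_dim)
        self.attn = nn.Linear(2 * hidden_dim, 1)         # scores each neighbor vs. the target
        self.q_head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, n_actions))

    def forward(self, target_feat: torch.Tensor, neighbor_feats: torch.Tensor) -> torch.Tensor:
        # target_feat: (feat_dim,); neighbor_feats: (K, feat_dim) for K neighbor antennas
        t = torch.relu(self.proj(target_feat))           # (hidden,)
        n = torch.relu(self.proj(neighbor_feats))        # (K, hidden)
        scores = self.attn(torch.cat([t.expand_as(n), n], dim=-1)).squeeze(-1)  # (K,)
        alpha = torch.softmax(scores, dim=0)
        context = (alpha.unsqueeze(-1) * n).sum(dim=0)   # attention-weighted neighborhood
        return self.q_head(torch.cat([t, context], dim=-1))  # one Q-value per tilt action

def epsilon_greedy_action(q_net, target_feat, neighbor_feats, epsilon=0.1) -> int:
    """Standard epsilon-greedy policy over the predicted Q-values (3 tilt actions)."""
    if random.random() < epsilon:
        return random.randrange(3)
    with torch.no_grad():
        return int(q_net(target_feat, neighbor_feats).argmax().item())
```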
4

Handling Occlusion using Trajectory Prediction in Autonomous Vehicles / Ocklusionshantering med hjälp av banprediktion för självkörande fordon

Ljung, Mattias; Nagy, Bence, January 2022
Occlusion is a frequently occurring challenge in vision systems for autonomous driving. The density of objects in the vehicle's field of view may be so high that some objects are only visible intermittently. It is therefore beneficial to investigate ways to predict the paths of objects under occlusion. In this thesis, we investigate whether trajectory prediction methods can be used to solve the occlusion prediction problem. We investigate two types of approaches, one based on motion models and one based on machine learning models, and we further investigate whether the two can be fused to produce an even more reliable model. We evaluate our models on a pedestrian trajectory prediction dataset, an autonomous driving dataset, and a subset of the autonomous driving dataset that only includes validation examples of occlusion. The comparison of the approaches shows that pure motion-model-based methods perform the worst of the three. Machine learning-based models perform better, but they require additional computing resources for training. The fused method performs best on both the driving dataset and the occlusion data. Our results also indicate that trajectory prediction methods, both motion-model-based and learning-based, can accurately predict the path of occluded objects up to at least 3 seconds ahead in the autonomous driving scenario.
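As a minimal sketch of the motion-model side of such a system (not the authors' implementation), the constant-velocity Kalman filter below keeps predicting an object's position through frames where detections are missing due to occlusion. The time step, noise levels, and frame rate are illustrative assumptions.

```python
# Constant-velocity Kalman tracker: predict() runs every frame, update() only
# when the object is actually detected, so occluded spans are bridged by the
# motion model alone. Matrices and noise values are illustrative.
import numpy as np

class ConstantVelocityTracker:
    def __init__(self, x0, y0, dt=0.1):
        self.x = np.array([x0, y0, 0.0, 0.0])                 # state: [px, py, vx, vy]
        self.P = np.eye(4)                                    # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt  # constant-velocity transition
        self.H = np.eye(2, 4)                                 # we only observe position
        self.Q = 0.01 * np.eye(4)                             # process noise
        self.R = 0.1 * np.eye(2)                              # measurement noise

    def predict(self):
        """Called every frame; this is all we can do while the object is occluded."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Called only when the object is detected again."""
        y = np.asarray(z) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Predict 3 seconds (30 frames at an assumed 10 Hz) ahead through an occlusion
tracker = ConstantVelocityTracker(0.0, 0.0)
tracker.update([0.5, 0.1])
occluded_path = [tracker.predict() for _ in range(30)]
```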
5

Reducing Power Consumption For Signal Computation in Radio Access Networks : Optimization With Linear Programming and Graph Attention Networks / Reducering av energiförbrukning för signalberäkning i radioaccessnätverk : Optimering med linjär programmering och graf uppmärksamhets nätverk

Nordberg, Martin, January 2023
Mobile data usage is ever-increasing: according to Ericsson's 2022 mobile data traffic outlook report, global mobile data traffic, including fixed wireless access, reached 115 exabytes per month at the end of 2022 and is projected to grow to 453 exabytes per month by the end of 2028. To meet the increasing demand, the radio access networks (RAN) used for mobile communication are continuously being improved, with the current generation enabling greater virtualization of the network through the Cloud RAN (C-RAN) architecture. This facilitates the use of commercial off-the-shelf (COTS) servers in the network, replacing specialized hardware servers and making it easier to scale network capacity up or down with traffic demand. This thesis looks at how we can efficiently identify the servers needed to meet traffic demand in a network consisting of both COTS servers and specialized hardware servers, while trying to reduce the network's energy consumption. We model the problem as a network where the antennas and radio heads are connected to the core network through a C-RAN layer and a specialized hardware layer. The network is represented as a graph whose nodes are the servers. Using this problem model as a base, we generate problem instances with varying topologies, server profiles, and traffic demands. To decide how the traffic should be passed through the network, we test two methods: a mixed integer linear programming (MILP) method focused on energy minimization, and a graph attention network (GAT) predictor combined with the energy-minimization MILP. To help evaluate the results, we also create three other methods: a MILP model that spreads the traffic as evenly as possible, a random predictor combined with the energy-minimization MILP, and a greedy method. Our results show that the energy-optimization MILP method produces optimal solutions but suffers from slow computation time compared to the other methods. The GAT model shows promising results in predicting which servers should be included in a network, making it possible to reduce the problem size and solve it faster with MILP. The mean energy cost of the solutions created with the combined GAT/MILP method was 4% higher than with MILP alone, but the time gain was substantial for problems of similar size to those the GAT was trained on: the combined GAT/MILP method was 85% faster than using only MILP. For networks of almost double the size of those the GAT model was trained on, the combined method's solutions had a mean energy cost increase of 7% while still showing a strong speedup, being 93% faster than using only MILP.
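To illustrate the flavor of the energy-minimizing MILP (an assumed toy formulation, not the thesis model), the sketch below uses PuLP to decide which servers to switch on and how to split a traffic demand so that idle power plus load-proportional power is minimized. The server names, capacities, and power figures are made up.

```python
# Toy energy-minimization MILP: binary on/off per server, continuous load split.
import pulp

servers = ["cots1", "cots2", "hw1"]
capacity = {"cots1": 40, "cots2": 40, "hw1": 100}      # traffic units each server can carry
idle_power = {"cots1": 90, "cots2": 90, "hw1": 250}    # watts just for being switched on
per_unit = {"cots1": 2.0, "cots2": 2.0, "hw1": 1.0}    # watts per unit of carried traffic
demand = 70

prob = pulp.LpProblem("ran_energy_min", pulp.LpMinimize)
on = pulp.LpVariable.dicts("on", servers, cat="Binary")
load = pulp.LpVariable.dicts("load", servers, lowBound=0)

# Objective: idle power of active servers plus load-dependent power
prob += pulp.lpSum(idle_power[s] * on[s] + per_unit[s] * load[s] for s in servers)

# A server can only carry traffic if it is on, and the total demand must be met
for s in servers:
    prob += load[s] <= capacity[s] * on[s]
prob += pulp.lpSum(load[s] for s in servers) == demand

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: (int(on[s].value()), load[s].value()) for s in servers})
```

In the thesis setting the same idea scales to many servers and traffic flows, which is why the GAT predictor that prunes servers before solving gives such a large speedup.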
6

Exploring Graph Neural Networks for Clustering and Classification

Tahabi, Fattah Muhammad, 3 February 2023
Indiana University-Purdue University Indianapolis (IUPUI) / Graph Neural Networks (GNNs) have become extremely popular and prominent deep learning techniques for analyzing structural graph data, owing to their ability to solve complex real-world problems. Graphs provide an efficient way to express abstract relational concepts, and modern GNN research overcomes a limitation of classical graph algorithms, which require prior knowledge of the graph structure before they can be applied. GNNs, an impressive framework for representation learning on graphs, have already produced many state-of-the-art techniques for node classification, link prediction, and graph classification tasks. GNNs can learn meaningful representations of graphs that incorporate topological structure, node attributes, and neighborhood aggregation to solve supervised, semi-supervised, and unsupervised graph-based problems. In this study, the usefulness of GNNs is analyzed primarily from two aspects: clustering and classification. We focus on these two techniques because they are among the most popular strategies in data mining for understanding collected data and performing predictive analysis.
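As a small illustration of the two themes (not code from the study), the sketch below uses a two-layer GCN from PyTorch Geometric for node classification and feeds its hidden embeddings to k-means for clustering. The dimensions, toy graph, and cluster count are assumptions.

```python
# Two-layer GCN: logits for node classification, hidden embeddings for clustering.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from sklearn.cluster import KMeans

class TwoLayerGCN(torch.nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, n_classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, n_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))    # hidden node embeddings
        return self.conv2(h, edge_index), h      # class logits + embeddings

# Classification: cross-entropy on labelled nodes; clustering: k-means on embeddings
model = TwoLayerGCN(in_dim=16, hidden_dim=32, n_classes=4)
x = torch.randn(10, 16)                          # toy node features
edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 0]])  # toy edges
logits, emb = model(x, edge_index)
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(emb.detach().numpy())
```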
