About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Exploring the relationship between network topology and braess paradox

Prabhakar, Samuel Giftson 10 May 2024 (has links) (PDF)
The Braess Paradox is a rare phenomenon that occurs only under specific conditions. This project studies the probability of the Braess Paradox occurring in a directed weighted graph as the number of edges increases. The experiment focuses on the occurrence of the Braess Paradox in a directed weighted scale-free network as it is transformed into a directed weighted complete graph. A simulation model sends bots travelling through the network to detect occurrences of the Braess Paradox as directed weighted edges are added. A Graph Neural Network (GNN) is then trained on the data produced by the simulation model.
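As a concrete illustration (not taken from the thesis), the classic four-node Braess network shows how adding an edge can raise everyone's equilibrium travel time; the latency functions and traffic volume below are the standard textbook values:

```python
TOTAL = 4000  # drivers routing from A to D

def equilibrium_without_shortcut():
    # Two symmetric routes, A->B->D and A->C->D; each has one flow-dependent
    # link (time = flow/100) and one constant link (45). At equilibrium the
    # flow splits evenly.
    flow = TOTAL / 2
    return flow / 100 + 45

def equilibrium_with_shortcut():
    # Adding a zero-cost edge B->C makes the route A->B->C->D dominant for
    # every individual driver, so all traffic loads both flow-dependent links.
    return TOTAL / 100 + 0 + TOTAL / 100

t_before = equilibrium_without_shortcut()
t_after = equilibrium_with_shortcut()
# The paradox: the extra edge makes travel time worse for everyone.
print(t_before, t_after, t_after > t_before)
```

At equilibrium the shortcut raises every driver's travel time from 65 to 80, even though no one is forced to use it.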
12

Solving Prediction Problems from Temporal Event Data on Networks

Sha, Hao 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Many complex processes can be viewed as sequences of events on a network. In this thesis, we study the interplay between a network and the event sequences on it. We first focus on predicting events on a known network. Examples include modeling retweet cascades, forecasting earthquakes, and tracing the source of a pandemic. Specifically, given the network structure, we solve two types of problems: (1) forecasting future events based on historical events, and (2) identifying the initial event(s) based on later observations of the dynamics. The inverse problem, inferring the unknown network topology or links based on the events, is also of great importance. Examples along this line include constructing influence networks among Twitter users from their tweets, soliciting new members to join an event based on their participation history, and recommending positions to job seekers according to their work experience. Following this direction, we study two types of problems: (1) recovering influence networks, and (2) predicting links between a node and a group of nodes, from event sequences.
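Forecasting future events from historical ones on a network is often modeled with temporal point processes; a minimal univariate Hawkes intensity (a generic sketch, not the model used in the thesis) looks like:

```python
import math

def hawkes_intensity(t, history, mu=0.5, alpha=0.8, beta=1.0):
    """Conditional intensity of a univariate Hawkes process:
    lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta * (t - t_i)).
    Each past event temporarily raises the rate of future events."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

events = [1.0, 2.5, 3.0]  # hypothetical event times
print(hawkes_intensity(4.0, events))   # elevated by the recent events
print(hawkes_intensity(100.0, events)) # decays back toward the baseline mu
```

In multivariate versions of this idea, the excitation terms are restricted to network neighbours, which is what ties the event dynamics to the graph structure.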
13

New Computational Methods for Literature-Based Discovery

Ding, Juncheng 05 1900 (has links)
In this work, we leverage recent developments in computer science to address several challenges in current literature-based discovery (LBD) solutions. First, existing LBD solutions either cannot use semantics or are too computationally complex. To address this, we propose OverlapLDA, a generative model based on topic modeling, which has been shown to be both effective and efficient in extracting semantics from a corpus. We also introduce an inference method for OverlapLDA. We conduct extensive experiments to show the effectiveness and efficiency of OverlapLDA in LBD. Second, we expand LBD to a more complex and realistic setting, in which more than one concept can connect the input concepts, and the connectivity pattern between concepts can be more complex than a chain. Current LBD solutions can hardly complete the LBD task in this new setting. We simplify the hypotheses to concept sets and propose LBDSetNet, based on graph neural networks, to solve this problem. We also introduce different training schemes based on self-supervised learning to train LBDSetNet without relying on comprehensively labeled hypotheses, which are extremely costly to obtain. Our comprehensive experiments show that LBDSetNet outperforms strong baselines on simple hypotheses and addresses complex hypotheses.
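For readers unfamiliar with LBD, the classic chain-shaped (Swanson-style "ABC") discovery pattern the abstract contrasts against can be sketched in a few lines; the tiny corpus and term names below are invented for illustration:

```python
from collections import defaultdict

def abc_candidates(doc_terms, a_term):
    """Closed-discovery ABC sketch: find candidate C terms that never
    co-occur with a_term directly, but are linked to it through some
    shared intermediate B term."""
    cooc = defaultdict(set)  # term -> set of terms it co-occurs with
    for terms in doc_terms:
        for t in terms:
            cooc[t].update(terms - {t})
    b_terms = cooc[a_term]
    candidates = set()
    for b in b_terms:
        candidates |= cooc[b]
    # Remove direct neighbours: a genuine ABC hypothesis is indirect only.
    return candidates - b_terms - {a_term}

docs = [{"fish_oil", "blood_viscosity"},
        {"blood_viscosity", "raynauds"},
        {"fish_oil", "platelet_aggregation"}]
print(abc_candidates(docs, "fish_oil"))  # {'raynauds'}
```

The thesis's setting generalizes exactly this pattern: multiple connecting concepts and connectivity patterns richer than a single A-B-C chain.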
14

Predikce spojení v odvozených sociálních sítích / Link Prediction in Inferred Social Networks

Měkota, Ondřej January 2021 (has links)
Social networks can be helpful for the analysis of the behaviour of people. An existing social network is rarely available, and its nodes and edges have to be inferred from data that is not necessarily graph-shaped. Link prediction can be used either to correct inaccuracies or to forecast links about to appear in the future. In this work, we study the prediction of missing links in a social network inferred from real-world bank data. We review and compare both established and modern approaches to link prediction. Following the advancements of deep learning in recent years, we primarily focus on graph neural networks and their ability to scale to large networks. We propose an adjustment to an existing graph neural network method and show that its performance is either comparable with or outperforms the original method. The comparison is performed on two social networks inferred from the same data. We show that it is relatively hard to outperform the established link prediction methods with graph neural networks.
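Among the established (non-neural) link prediction baselines that abstracts like this one benchmark against, neighbourhood heuristics such as Adamic-Adar are typical; a minimal sketch over a toy adjacency structure (not the thesis's bank data):

```python
import math

def adamic_adar(adj, u, v):
    """Adamic-Adar link score: sum over common neighbours w of 1/log(deg(w)).
    Rare shared neighbours count for more than highly connected ones."""
    common = adj[u] & adj[v]
    return sum(1.0 / math.log(len(adj[w])) for w in common if len(adj[w]) > 1)

# Toy undirected graph as node -> set of neighbours
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
print(adamic_adar(adj, 1, 4))  # 1 and 4 share neighbours 2 and 3
```

Candidate node pairs are ranked by this score; despite its simplicity, the heuristic family remains hard for graph neural networks to beat, which matches the thesis's conclusion.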
15

Security Vetting Of Android Applications Using Graph Based Deep Learning Approaches

Poudel, Prabesh 02 June 2021 (has links)
No description available.
16

SEARCH FOR LEPTON FLAVOUR UNIVERSALITY VIOLATION AT THE CMS EXPERIMENT

Hyeon Seo Yun (17548389) 05 December 2023 (has links)
<p dir="ltr">This thesis presents two studies in search of lepton flavour universality violation, a symmetry predicted to hold by the Standard Model. The first searches for signs of the violation by studying beyond-the-Standard-Model (BSM) physics models involving a same-flavour, opposite-sign dilepton pair and bottom quarks as final states. This study used the dataset collected during 2016, 2017, and 2018, with centre-of-mass energy $\sqrt{s} = 13$ TeV and integrated luminosity of $138\ \mathrm{fb}^{-1}$. In the study, scale factors were derived to correct deviations between simulation and real data, specifically for high-transverse-momentum muons and the top/anti-top quark background. Furthermore, lower limits on the energy scale were calculated, leading to the exclusion of BSM models with energy scale values below the calculated limits.</p><p dir="ltr">The second study also searches for lepton flavour universality violation, but in the specific decay of a tauon into three muons ($\tau \rightarrow 3\mu$). In this study, a graph neural network (GNN) model designed to classify $\tau \rightarrow 3\mu$ events at the CMS detector was converted to high-level synthesis (HLS) code, so that the GNN could be deployed on custom hardware such as field-programmable gate arrays (FPGAs). Moreover, techniques such as pruning and quantization were applied to make the GNN more lightweight, given the strict resource requirements of FPGAs.</p>
17

Robust Representation Learning for Out-of-Distribution Extrapolation in Relational Data

Yangze Zhou (18369795) 17 April 2024 (has links)
<p dir="ltr">Recent advancements in representation learning have significantly enhanced the analysis of relational data across various domains, including social networks, bioinformatics, and recommendation systems. In general, these methods assume that the training and test datasets come from the same distribution, an assumption that often fails in real-world scenarios due to evolving data, privacy constraints, and limited resources. The task of out-of-distribution (OOD) extrapolation emerges when the distribution of test data differs from that of the training data, presenting a significant, yet unresolved challenge within the field. This dissertation focuses on developing robust representations for effective OOD extrapolation, specifically targeting relational data types like graphs and sets. For successful OOD extrapolation, it is essential to first acquire a representation that is adequately expressive for tasks within the distribution. In the first work, we introduce Set Twister, a permutation-invariant set representation that generalizes and enhances the theoretical expressiveness of DeepSets, a simple and widely used permutation-invariant representation for set data, allowing it to capture higher-order dependencies. We showcase its implementation simplicity and computational efficiency, as well as its competitive performance against more complex state-of-the-art graph representations in several graph node classification tasks. Secondly, we address OOD scenarios in graph classification and link prediction tasks, particularly when faced with varying graph sizes. Under causal model assumptions, we derive approximately invariant graph representations that improve extrapolation in the OOD graph classification task. Furthermore, we provide the first theoretical study of the capability of graph neural networks for inductive OOD link prediction and present a novel representation model that produces structural pairwise embeddings, maintaining predictive accuracy for OOD link prediction as the test graph size increases. Finally, we investigate the impact of environmental data as a confounder between input and target variables, proposing a novel approach utilizing an auxiliary dataset to mitigate distribution shifts. This comprehensive study not only advances our understanding of representation learning in OOD contexts but also highlights potential pathways for future research in enhancing model robustness across diverse applications.</p>
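The DeepSets scheme that Set Twister generalizes — embed each element, pool with a sum, then apply a readout — can be sketched minimally; the feature map and readout below are arbitrary toy choices, not the dissertation's learned networks:

```python
def phi(x):
    # Per-element embedding; in practice a learned network, here a toy map.
    return (x, x * x)

def deepsets(xs):
    """DeepSets: rho(sum of phi(x) over the set). Because summation is
    commutative, the output is invariant to the order of the elements."""
    pooled = [sum(col) for col in zip(*map(phi, xs))]
    # rho: a simple readout combining the pooled features.
    return pooled[0] + 0.5 * pooled[1]

print(deepsets([1, 2, 3]) == deepsets([3, 1, 2]))  # True: order does not matter
```

Set Twister's contribution is to keep this permutation invariance while capturing higher-order interactions between elements that a plain sum of independent embeddings cannot express.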
18

Functional distributional semantics : learning linguistically informed representations from a precisely annotated corpus

Emerson, Guy Edward Toh January 2018 (has links)
The aim of distributional semantics is to design computational techniques that can automatically learn the meanings of words from a body of text. The twin challenges are: how do we represent meaning, and how do we learn these representations? The current state of the art is to represent meanings as vectors - but vectors do not correspond to any traditional notion of meaning. In particular, there is no way to talk about 'truth', a crucial concept in logic and formal semantics. In this thesis, I develop a framework for distributional semantics which answers this challenge. The meaning of a word is not represented as a vector, but as a 'function', mapping entities (objects in the world) to probabilities of truth (the probability that the word is true of the entity). Such a function can be interpreted both in the machine learning sense of a classifier, and in the formal semantic sense of a truth-conditional function. This simultaneously allows both the use of machine learning techniques to exploit large datasets, and also the use of formal semantic techniques to manipulate the learnt representations. I define a probabilistic graphical model, which incorporates a probabilistic generalisation of model theory (allowing a strong connection with formal semantics), and which generates semantic dependency graphs (allowing it to be trained on a corpus). This graphical model provides a natural way to model logical inference, semantic composition, and context-dependent meanings, where Bayesian inference plays a crucial role. I demonstrate the feasibility of this approach by training a model on WikiWoods, a parsed version of the English Wikipedia, and evaluating it on three tasks. The results indicate that the model can learn information not captured by vector space models.
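The core idea above — a word meaning as a function from entities to probabilities of truth, readable both as a classifier and as a truth-conditional function — can be sketched with a logistic model over entity features; the features and weights below are invented for illustration and are not the thesis's learned representations:

```python
import math

def word_prob(entity_features, weights, bias):
    """Meaning of a word as a truth-conditional classifier: map an entity's
    features to the probability that the word is true of that entity."""
    z = sum(w * f for w, f in zip(weights, entity_features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical entity features: (has_fur, is_four_legged, barks)
dog_weights, dog_bias = (2.0, 1.0, 3.0), -3.0
fido = (1.0, 1.0, 1.0)  # a prototypical dog
lamp = (0.0, 0.0, 0.0)  # clearly not a dog
print(word_prob(fido, dog_weights, dog_bias))  # close to 1
print(word_prob(lamp, dog_weights, dog_bias))  # close to 0
```

The machine learning reading is "classifier over feature vectors"; the formal semantic reading is "probabilistic truth conditions" — the same function supports both, which is the point of the framework.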
19

Monolith to microservices using deep learning-based community detection / Monolit till mikrotjänster med hjälp av djupinlärningsbaserad klusterdetektion

Bothin, Anton January 2023 (has links)
The microservice architecture is widely considered to be best practice. Yet many companies still work in monolithic systems, which can largely be attributed to the difficult process of updating a system's architecture. The first step in this process is to identify microservices within a monolith. Here, artificial intelligence could be a useful tool for automating the process of microservice identification. The aim of this thesis was to propose a deep learning-based model for the task of microservice identification and to compare this model to previously proposed approaches, with the goal of helping companies in their endeavour to move towards a microservice-based architecture. In particular, the thesis evaluated whether the more complex nature of newer deep learning-based techniques can be utilized to identify better microservices. The proposed model is based on overlapping community detection, where each identified community is considered a microservice candidate. The model was evaluated by looking at cohesion, modularity, and size. Results indicate that the proposed deep learning-based model performs similarly to other state-of-the-art approaches for the task of microservice identification. The results suggest that deep learning indeed helps in finding nontrivial relations within communities, which overall increases the quality of identified microservices. From this it can be concluded that deep learning is a promising technique for the task of microservice identification, and that further research is warranted. / Allmänt anses mikrotjänstarkitekturen vara bästa praxis. Trots det finns det många företag som fortfarande arbetar i monolitiska system. Detta beror till stor del på svårigheterna i processen att byta systemarkitektur. Första steget i denna process är att identifiera mikrotjänster inom en monolit. Här kan artificiell intelligens vara ett användbart verktyg för att automatisera processen att identifiera mikrotjänster. Denna avhandling syftar till att föreslå en djupinlärningsbaserad modell för att identifiera mikrotjänster och att jämföra denna modell med tidigare föreslagna modeller. Målet är att hjälpa företag att övergå till en mikrotjänstbaserad arkitektur. Avhandlingen utvärderar nyare djupinlärningsbaserade tekniker för att se ifall deras mer komplexa struktur kan användas för att identifiera bättre mikrotjänster. Modellen som föreslås är baserad på överlappande klusterdetektion, där varje identifierat kluster betraktas som en mikrotjänstkandidat. Modellen utvärderades genom att titta på sammanhållning, modularitet och storlek. Resultaten indikerar att den föreslagna djupinlärningsbaserade modellen identifierar mikrotjänster av liknande kvalitet som andra state-of-the-art-metoder. Resultaten tyder på att djupinlärning bidrar till att hitta icke-triviala relationer inom kluster, vilket ökar kvaliteten på de identifierade mikrotjänsterna. På grund av detta dras slutsatsen att djupinlärning är en lovande teknik för identifiering av mikrotjänster och att ytterligare forskning bör utföras.
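The modularity metric used in the evaluation above can be computed directly from a candidate partition; a minimal sketch of standard Newman modularity (not the thesis's exact implementation) on two triangles joined by a bridge edge:

```python
def modularity(adj, communities):
    """Newman modularity: Q = sum over communities c of
    (internal_edges_c / m) - (degree_sum_c / (2m))^2,
    where m is the total number of edges in the graph."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    q = 0.0
    for comm in communities:
        internal = sum(1 for u in comm for v in adj[u] if v in comm) / 2
        degree = sum(len(adj[u]) for u in comm)
        q += internal / m - (degree / (2 * m)) ** 2
    return q

# Two triangles {0,1,2} and {3,4,5} joined by the bridge edge 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(modularity(adj, [{0, 1, 2}, {3, 4, 5}]))
```

Higher Q means denser connections inside candidate microservices than between them, which is why it pairs naturally with cohesion and size in the thesis's evaluation.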
20

Characterizing Structure of High Entropy Alloys (HEAs) Using Machine Learning

Reimer, Christoff 13 December 2023 (has links)
The irradiation of crystalline materials in environments such as nuclear reactors leads to the accumulation of micro- and nano-scale defects with a negative impact on material properties such as strength, corrosion resistance, and dimensional stability. Point defects in the crystal lattice, the vacancy and the self-interstitial, form the basis of this damage and are capable of migrating through the lattice to join defect clusters and sinks, or to annihilate one another. Recently, attention has been given to HEAs for fusion and fission components, as some materials of this class have shown resilience to irradiation-induced damage. The ability to predict defect diffusion and accelerate simulations of defect behaviour in HEAs using machine learning (ML) techniques has consequently gathered significant interest. The goal of this work was to produce an unsupervised neural network capable of learning the interatomic dynamics within a specific HEA system from molecular dynamics (MD) data, in order to create a kinetic Monte Carlo (KMC) type predictor of diffusion paths for common point defects such as the vacancy and the self-interstitial. Self-interstitial defect states were identified and extracted from MD datasets using graph isomorphism, and a proof-of-concept model for the HEA environment was used with several interaction setups to demonstrate the feasibility of training a graph convolutional network (GCN) to predict vacancy transition rates in the HEA crystalline environment.
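A KMC-type predictor of the kind described selects each defect hop with probability proportional to its (in this thesis, learned) rate, and advances time by an exponentially distributed waiting step; a generic rejection-free KMC step, with made-up rates, can be sketched as:

```python
import math
import random

def kmc_step(rates, rng=random.random):
    """One rejection-free kinetic Monte Carlo step: choose an event with
    probability proportional to its rate, then draw the waiting time from
    an exponential distribution with the total rate."""
    total = sum(rates)
    r = rng() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng()) / total  # exponential waiting time
    return i, dt

# Hypothetical hop rates for three candidate vacancy jumps
random.seed(0)
event, dt = kmc_step([1.0, 3.0, 0.5])
print(event, dt)
```

In the workflow the abstract describes, the hand-set rates above would be replaced by the GCN's predicted transition rates for each candidate defect jump in the HEA lattice.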
