About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Graph neural networks for prediction of formation energies of crystals / Graf-neuronnät för prediktion av kristallers formationsenergier

Ekström, Filip January 2020
Predicting formation energies of crystals is a common but computationally expensive task. This work therefore investigates how a neural network can be used to predict formation energies at a lower computational cost than conventional methods. The investigated model shows promising results, reaching a mean absolute error (MAE) below 0.05 eV/atom with fewer than 4000 training data points. The model also shows great transferability, reaching an MAE below 0.1 eV/atom with fewer than 100 training points when starting from a pre-trained model. A drawback, however, is that the model relies on descriptions of the crystal structures that include interatomic distances. Since these are not always accurately known, the effect of inaccurate structure descriptions on model performance is investigated. The results show that lower-quality descriptions do worsen the accuracy. The less accurate descriptions can, however, be used to reduce the search space when constructing phase diagrams, and the proposed workflow, which combines conventional density functional theory (DFT) and machine learning, reduces the time needed to create a ternary phase diagram by more than 50% compared to using DFT alone.
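As a rough illustration of the kind of message-passing model described above, the sketch below maps a crystal graph (atom species as nodes, distance-featurized edges) to a per-atom energy. It is a generic PyTorch sketch, not the architecture used in the thesis; the layer sizes, SiLU activations, and random edge features are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of message passing over atom nodes and distance-featurized edges."""
    def __init__(self, node_dim, edge_dim):
        super().__init__()
        self.message = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, node_dim), nn.SiLU())
        self.update = nn.Sequential(nn.Linear(2 * node_dim, node_dim), nn.SiLU())

    def forward(self, h, edge_index, edge_attr):
        src, dst = edge_index                                # (2, E) tensor of node indices
        m = self.message(torch.cat([h[src], h[dst], edge_attr], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)      # sum incoming messages per node
        return self.update(torch.cat([h, agg], dim=-1))

class FormationEnergyGNN(nn.Module):
    """Predicts formation energy per atom from a crystal graph (illustrative sizes)."""
    def __init__(self, n_species=100, node_dim=64, edge_dim=32, n_layers=3):
        super().__init__()
        self.embed = nn.Embedding(n_species, node_dim)
        self.layers = nn.ModuleList(
            [MessagePassingLayer(node_dim, edge_dim) for _ in range(n_layers)])
        self.readout = nn.Linear(node_dim, 1)

    def forward(self, species, edge_index, edge_attr):
        h = self.embed(species)
        for layer in self.layers:
            h = layer(h, edge_index, edge_attr)
        return self.readout(h).mean()                        # mean pooling -> energy per atom

# Toy crystal: 4 atoms, 6 directed edges with stand-in distance features.
species = torch.tensor([14, 14, 8, 8])                       # e.g. Si, Si, O, O atomic numbers
edge_index = torch.tensor([[0, 1, 0, 2, 1, 3],
                           [1, 0, 2, 0, 3, 1]])
edge_attr = torch.randn(6, 32)
model = FormationEnergyGNN()
print(model(species, edge_index, edge_attr))                 # predicted energy per atom
```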
22

Graph Neural Networks for Article Recommendation based on Implicit User Feedback and Content

Bereczki, Márk January 2021
Recommender systems are widely used in websites and applications to help users find relevant content based on their interests. Graph neural networks have achieved state-of-the-art results in the field of recommender systems by working on data represented in the form of a graph. However, most graph-based solutions face challenges regarding computational complexity or the ability to generalize to new users. Therefore, we propose a novel graph-based recommender system by modifying Simple Graph Convolution, an approach for efficient graph node classification, and adding the capability of generalizing to new users. We build our proposed recommender system for recommending the articles of the Peltarion Knowledge Center. By incorporating two data sources, implicit user feedback based on pageview data as well as the content of articles, we propose a hybrid recommender solution. In our experiments, we compare our proposed solution with a matrix factorization approach as well as popularity-based and random baselines, analyse the hyperparameters of our model, and examine its capability to give recommendations to new users who were not part of the training data set. Our model achieves slightly lower but comparable Mean Average Precision and Mean Reciprocal Rank scores relative to the matrix factorization approach, and outperforms the popularity-based and random baselines. The main advantages of our model are its computational efficiency and its ability to give relevant recommendations to new users without retraining, which are key features for real-world use cases.
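Because Simple Graph Convolution removes the nonlinearities of a GNN, its propagation step reduces to repeated multiplication of the feature matrix by a normalized adjacency matrix. The sketch below shows that step on a toy user-article graph built from pageviews, followed by dot-product scoring; the bipartite construction, the choice of K = 2, and the zero-initialized user features are assumptions for illustration, not the thesis's exact design.

```python
import numpy as np

def sgc_propagate(adj, features, k=2):
    """Simple Graph Convolution: multiply features by the symmetrically
    normalized adjacency (with self-loops) k times, with no nonlinearities."""
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    s = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    for _ in range(k):
        features = s @ features
    return features

# Toy user-item interaction graph built from pageviews (assumed layout:
# first n_users rows/cols are users, the rest are articles).
n_users, n_items = 3, 4
adj = np.zeros((n_users + n_items, n_users + n_items))
views = [(0, 3), (0, 4), (1, 4), (2, 5), (2, 6)]          # (user, article) pageviews
for u, i in views:
    adj[u, i] = adj[i, u] = 1.0

# Article content features (e.g. text embeddings); users start from zeros here.
content = np.random.default_rng(0).normal(size=(n_items, 8))
features = np.vstack([np.zeros((n_users, 8)), content])

emb = sgc_propagate(adj, features, k=2)
scores = emb[:n_users] @ emb[n_users:].T                   # user x article scores
print(np.argsort(-scores, axis=1))                         # ranked article indices per user
```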
23

RECOMMENDATION SYSTEMS IN SOCIAL NETWORKS

Behafarid Mohammad Jafari (15348268) 18 May 2023
The dramatic improvement in information and communication technology (ICT) has driven an evolution in learning management systems (LMS). The rapid growth of LMSs has led users to demand more advanced, automated, and intelligent services. CourseNetworking is a next-generation LMS that adopts machine learning to add personalization, gamification, and more dynamics to the system. This work proposes two recommender systems that can help improve CourseNetworking services. The first is a social recommender system that helps CourseNetworking track user interests and give more relevant recommendations. Recently, graph neural network (GNN) techniques have been employed in social recommender systems due to their success in graph representation learning, including on social network graphs. Despite rapid advances in recommender system performance, dealing with the dynamic nature of social network data is one of the key challenges that remains to be addressed. In this research, a novel method is presented that provides social recommendations by incorporating the dynamic property of social network data into a heterogeneous graph, supplementing the graph with time-span nodes that are used to define users' long-term and short-term preferences over time. The second service proposed as an addition to Rumi is a hashtag recommendation system that helps users label their posts quickly, improving the searchability of content. In recent years, several hashtag recommendation methods have been proposed and developed to speed up text processing and quickly identify key phrases. These methods use different approaches and techniques to obtain critical information from large amounts of data. This work investigates the efficiency of unsupervised keyword extraction methods for hashtag recommendation and recommends the best-performing one for use in a hashtag recommender system.
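As a hedged illustration of the unsupervised keyword-extraction baseline the abstract refers to, the sketch below ranks candidate hashtags for a post by a plain TF-IDF score against an existing corpus. The stop-word list, tokenizer, and corpus are invented for the example and do not correspond to the methods actually evaluated in the thesis.

```python
import math
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "is", "on", "with"}

def tokenize(text):
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

def recommend_hashtags(post, corpus, top_k=3):
    """Rank candidate hashtags for `post` by TF-IDF against an existing corpus."""
    docs = [tokenize(d) for d in corpus]
    post_tokens = tokenize(post)
    tf = Counter(post_tokens)
    n_docs = len(docs) + 1
    scores = {}
    for word, count in tf.items():
        df = sum(1 for d in docs if word in d) + 1        # +1: count the post itself
        scores[word] = (count / len(post_tokens)) * math.log(n_docs / df)
    return ["#" + w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:top_k]]

corpus = ["intro to neural networks", "setting up your course page",
          "gamification badges and points"]
print(recommend_hashtags("training neural networks for course recommendations", corpus))
```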
24

RNN-based Graph Neural Network for Credit Load Application leveraging Rejected Customer Cases

Nilsson, Oskar, Lilje, Benjamin January 2023
Machine learning plays a vital role in preventing financial losses within the banking industry, yet many state-of-the-art and industry-standard approaches in the field neglect rejected customers and the information they hold for detecting similar risk behavior. This thesis explores the possibility of including this information during training and of utilizing transactional history through an LSTM to improve the detection of defaults. The model is structured so that an encoder is first trained with or without rejected customers. Virtual distances are then calculated between the accepted customers in the embedding space. These distances are used to create a graph in which each node contains an LSTM network, and a GCN passes messages between connected nodes. The model is validated using two datasets: a public Taiwan dataset and a private Swedish one provided by the collaborating company. The Taiwan dataset used 8000 data points with a 50/50 label split; the Swedish dataset used 4644 with the same split. Multiple metrics were used to evaluate the impact of the rejected customers and of using time-series data instead of static features. For the encoder, reconstruction error measured the difference in performance. When creating the edges, the homogeneity of the neighborhoods and whether a node had a majority of neighbors with its own label were the determining factors; for the classifier, accuracy, F1-score, and the confusion matrix were used to compare results. The results show that the impact of rejected customers on predictive power is minor. Regarding the effect of using time-series information instead of static features, we saw results comparable to XGBoost on the Taiwan dataset and an improvement in predictive power on the Swedish dataset. The results also show that a well-defined virtual distance is critical to the classifier's performance.
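A compressed sketch of the pipeline outlined above follows: pretrained encoder embeddings define a nearest-neighbour graph over customers, an LSTM summarizes each customer's transaction history, and one GCN-style propagation mixes information between connected customers before classification. The k in the k-nearest-neighbour graph, the mean aggregation, and all dimensions are assumptions, not the thesis's configuration.

```python
import torch
import torch.nn as nn

def knn_graph(emb, k=5):
    """Build a symmetric adjacency by connecting each customer to its k nearest
    neighbours in the (pretrained) embedding space."""
    dist = torch.cdist(emb, emb)
    dist.fill_diagonal_(float("inf"))
    idx = dist.topk(k, largest=False).indices            # (N, k)
    adj = torch.zeros(emb.size(0), emb.size(0))
    adj.scatter_(1, idx, 1.0)
    return ((adj + adj.t()) > 0).float()

class CreditGraphModel(nn.Module):
    """LSTM summarizes each customer's transaction history; a single GCN-style
    propagation mixes information between connected customers."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.gcn = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, 2)                  # default / non-default

    def forward(self, sequences, adj):
        _, (h, _) = self.lstm(sequences)                  # h: (1, N, hidden)
        h = h.squeeze(0)
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        h = torch.relu(self.gcn((adj @ h) / deg))         # mean-aggregate neighbours
        return self.head(h)

# Toy run: 10 customers, 12 monthly transaction snapshots, 6 features each.
emb = torch.randn(10, 8)                                  # stand-in for encoder output
adj = knn_graph(emb, k=3)
model = CreditGraphModel(n_features=6)
logits = model(torch.randn(10, 12, 6), adj)
print(logits.shape)                                       # torch.Size([10, 2])
```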
25

Models and Representation Learning Mechanisms for Graph Data

Susheel Suresh (14228138) 15 December 2022
Graph representation learning (GRL) has been increasingly used to model and understand data from a wide variety of complex systems spanning social, technological, bio-chemical and physical domains. GRL consists of two main components: (1) a parametrized encoder that provides representations of graph data, and (2) a learning process to train the encoder parameters. Designing flexible encoders that capture the underlying invariances and characteristics of graph data is crucial to the success of GRL. On the other hand, the learning process drives the quality of the encoder representations, and developing principled learning mechanisms is vital for a number of growing applications in self-supervised, transfer and federated learning settings. To this end, we propose a suite of models and learning algorithms for GRL which form the two main thrusts of this dissertation.

In Thrust I, we propose two novel encoders which build upon a widely popular GRL encoder class called graph neural networks (GNNs). First, we empirically study the prediction performance of current GNN-based encoders when applied to graphs with heterogeneous node mixing patterns, using our proposed notion of local assortativity. We find that GNN performance in node prediction tasks strongly correlates with our local assortativity metric, thereby exposing a limitation of current encoders. We propose to transform the input graph into a computation graph with proximity and structural information as distinct types of edges. We then propose a novel GNN-based encoder that operates on this computation graph and adaptively chooses between structure and proximity information. Empirically, adopting our transformation and encoder framework leads to improved node classification performance compared to baselines on real-world graphs that exhibit diverse mixing.

Secondly, we study the trade-off between expressivity and efficiency of GNNs when applied to temporal graphs for the task of link ranking. We develop an encoder that incorporates a labeling approach designed to allow efficient inference over the candidate set jointly, while provably boosting expressivity. We also propose to optimize a list-wise loss for improved ranking. With extensive evaluation on real-world temporal graphs, we demonstrate its improved performance and efficiency compared to baselines.

In Thrust II, we propose two principled encoder learning mechanisms for challenging and realistic graph data settings. First, we consider a scenario where only limited or even no labelled data is available for GRL. Recent research has converged on graph contrastive learning (GCL), where GNNs are trained to maximize the correspondence between representations of the same graph in its different augmented forms. However, we find that GNNs trained by traditional GCL often risk capturing redundant graph features and thus may be brittle and provide sub-par performance in downstream tasks. We then propose a novel principle, termed adversarial-GCL (AD-GCL), which enables GNNs to avoid capturing redundant information during training by optimizing the adversarial graph augmentation strategies used in GCL. We pair AD-GCL with theoretical explanations and design a practical instantiation based on trainable edge-dropping graph augmentation. We experimentally validate AD-GCL by comparing it with state-of-the-art GCL methods and achieve performance gains in semi-supervised, unsupervised and transfer learning settings using benchmark chemical and biological molecule datasets.

Secondly, we consider a scenario where graph data is siloed across clients for GRL. We focus on two unique challenges encountered when applying distributed training to GRL: (i) client task heterogeneity and (ii) label scarcity. We propose a novel learning framework called federated self-supervised graph learning (FedSGL), which first utilizes a self-supervised objective to train GNNs in a federated fashion across clients; each client then fine-tunes the obtained GNNs based on its local task and available labels. Our framework enables the federated GNN model to extract patterns from the common feature (attribute and graph topology) space without the need for labels or being biased by heterogeneous local tasks. An extensive empirical study of FedSGL on both node and graph classification tasks yields insights into how the level of feature/task heterogeneity, the adopted federated algorithm and the level of label scarcity affect the clients' performance on their tasks.
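The core of AD-GCL is a min-max game: an augmenter learns which edges to drop so as to make the two graph views as hard to match as possible, while the encoder is trained to keep them consistent. The sketch below uses soft edge-keep weights and a standard InfoNCE loss on dense toy graphs, which simplifies the trainable edge-dropping instantiation described above; all module sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """One-layer GNN over a dense batch of graphs, mean-pooled to a graph embedding."""
    def __init__(self, in_dim, hid=32):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid)

    def forward(self, x, adj):                            # x: (B, N, F), adj: (B, N, N)
        return torch.relu(adj @ self.lin(x)).mean(dim=1)

class EdgeDropAugmenter(nn.Module):
    """Learns a keep-probability per edge from its endpoint features."""
    def __init__(self, in_dim, hid=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, hid), nn.ReLU(), nn.Linear(hid, 1))

    def forward(self, x, adj):
        B, N, _ = x.shape
        pairs = torch.cat([x.unsqueeze(2).expand(B, N, N, -1),
                           x.unsqueeze(1).expand(B, N, N, -1)], dim=-1)
        keep = torch.sigmoid(self.mlp(pairs)).squeeze(-1)
        return adj * keep                                 # soft edge dropping

def info_nce(z1, z2, tau=0.2):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    return F.cross_entropy(z1 @ z2.t() / tau, torch.arange(z1.size(0)))

# Toy batch: 8 graphs, 6 nodes, 5 features each.
x, adj = torch.randn(8, 6, 5), (torch.rand(8, 6, 6) > 0.5).float()
enc, aug = Encoder(5), EdgeDropAugmenter(5)
opt_enc = torch.optim.Adam(enc.parameters(), lr=1e-3)
opt_aug = torch.optim.Adam(aug.parameters(), lr=1e-3)

for _ in range(10):
    # Augmenter step: make the two views as hard to match as possible.
    opt_aug.zero_grad()
    loss_aug = -info_nce(enc(x, adj), enc(x, aug(x, adj)))
    loss_aug.backward()
    opt_aug.step()
    # Encoder step: keep the views consistent despite the augmentation.
    opt_enc.zero_grad()
    loss_enc = info_nce(enc(x, adj), enc(x, aug(x, adj)))
    loss_enc.backward()
    opt_enc.step()
```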
26

Reliable graph predictions : Conformal prediction for Graph Neural Networks

Bååw, Albin January 2022
We have seen a rapid increase in the development of deep learning algorithms in recent decades. However, while these algorithms have unlocked new business areas and driven progress in many fields, they are usually limited to Euclidean data. Researchers are increasingly finding that the data used in many real-life applications is better represented as graphs. Examples include high-risk domains such as predicting the side effects of combining medicines using a protein-protein network. In high-risk domains, there is a need for trust and transparency in the results returned by deep learning algorithms. In this work, we explore how to quantify uncertainty in Graph Neural Network predictions using conventional methods for conformal prediction as well as novel methods exploiting graph connectivity information. We evaluate the methods on both static and dynamic graphs and find that neither of the novel methods offers any clear benefit over the conventional ones. However, we see indications that using graph connectivity information can lead to more efficient conformal predictors and lower prediction latency than the conventional methods on large data sets. We propose that future work extend the research on using connectivity information, specifically node embeddings, to boost the performance of conformal predictors on graphs.
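For reference, the "conventional" split conformal procedure mentioned above can be applied directly to the softmax outputs of any node classifier: score calibration nodes by one minus the probability of their true class, take a finite-sample-corrected quantile, and include in each test node's prediction set every class whose score falls below it. The sketch below uses random toy probabilities in place of real GNN outputs.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification.

    Nonconformity score = 1 - softmax probability of the true class on a held-out
    calibration set; the corrected (1 - alpha) quantile of those scores decides
    which classes enter each test node's prediction set (coverage ~ 1 - alpha)."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = np.ceil((n + 1) * (1 - alpha)) / n          # finite-sample correction
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

# Toy example: calibration/test softmax outputs standing in for a GNN node classifier.
rng = np.random.default_rng(1)
cal_probs = rng.dirichlet(alpha=[2, 1, 1], size=200)
cal_labels = rng.integers(0, 3, size=200)
test_probs = rng.dirichlet(alpha=[2, 1, 1], size=5)
for s in conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    print(s)        # indices of classes included in each prediction set
```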
27

Information Extraction from Invoices using Graph Neural Networks / Utvinning av information från fakturor med hjälp av grafiska neurala nätverk

Tan, Tuoyuan January 2023
Information Extraction is a sub-field of Natural Language Processing that aims to extract structured data from unstructured sources. With the progress of digitization, extracting key information such as account number and gross amount from business invoices has become an interesting problem in both industry and academia. Such a process can greatly facilitate online payment, as users do not have to type in key information themselves. In this project, we design and implement an extraction system that combines machine learning and heuristic rules to solve the problem. Invoices are transformed into a graph structure, and Graph Neural Networks are then used to predict the role of each word appearing on an invoice. Rule-based modules output the final extraction results based on information aggregated from the predictions. Different variants of graph models are evaluated, and the best system achieves a correct rate of 90.93%. We also study how the number of stacked graph neural layers influences system performance. An ablation study compares the importance of each extracted feature; the results show that the combination of features from different sources, rather than any single feature, plays the key role in classification. Further experiments reveal the respective contributions of the machine learning and rule-based modules for each label.
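The rule-based stage described above can be illustrated with a small sketch: given hypothetical per-word predictions from the GNN (word, field, confidence), a rule module keeps, for each field, the most confident word that also passes a field-specific sanity-check pattern. The field names, regular expressions, and threshold are assumptions, not the system's actual rules.

```python
import re

# Hypothetical output of the GNN word classifier: (word, predicted field, confidence).
word_predictions = [
    ("Invoice", "other", 0.98), ("1234-5678", "account_number", 0.91),
    ("Total:", "other", 0.95), ("149.00", "gross_amount", 0.88),
    ("149.00", "gross_amount", 0.42), ("SEK", "currency", 0.93),
]

FIELD_PATTERNS = {                     # simple sanity checks per field (illustrative)
    "account_number": re.compile(r"^\d{4}-\d{4}$"),
    "gross_amount":  re.compile(r"^\d+([.,]\d{2})?$"),
    "currency":      re.compile(r"^[A-Z]{3}$"),
}

def extract_fields(predictions, min_confidence=0.5):
    """Keep, per field, the most confident word that also passes the field's pattern."""
    result = {}
    for word, field, conf in predictions:
        pattern = FIELD_PATTERNS.get(field)
        if pattern is None or conf < min_confidence or not pattern.match(word):
            continue
        if field not in result or conf > result[field][1]:
            result[field] = (word, conf)
    return {field: word for field, (word, _) in result.items()}

print(extract_fields(word_predictions))
# {'account_number': '1234-5678', 'gross_amount': '149.00', 'currency': 'SEK'}
```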
28

Link Prediction Using Learnable Topology Augmentation / Länkprediktion med hjälp av en inlärningsbar topologiförstärkning

Leatherman, Tori January 2023
Link prediction is a crucial task in many downstream applications of graph machine learning. Graph Neural Networks (GNNs) are a prominent approach for transductive link prediction, where the aim is to predict missing links or connections only within the existing nodes of a given graph. However, many real-life applications require inductive link prediction for newly arriving nodes with no connections to the original graph. Thus, recent approaches have adopted a Multilayer Perceptron (MLP) for inductive link prediction based solely on node features. In this work, we show that incorporating both connectivity structure and features for the new nodes provides better model expressiveness. To bring such expressiveness to inductive link prediction, we propose LEAP, an encoder that features LEArnable toPology augmentation of the original graph and enables message passing with the newly arriving nodes. To the best of our knowledge, this is the first attempt to provide structural context for newly arriving nodes via learnable augmentation in inductive settings. Extensive experiments on four real-world homogeneous graphs demonstrate that LEAP significantly surpasses state-of-the-art methods in terms of AUC and average precision, with improvements of up to 22% and 17%, respectively. The code and datasets are available on GitHub*.
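A much-simplified sketch of the idea: a newly arriving node is attached to the existing graph through learnable attention weights computed from node features, one message-passing step updates its embedding, and candidate links are scored by dot products. This illustrates learnable topology augmentation in general, not the LEAP architecture itself; the dimensions and scoring function are assumptions.

```python
import torch
import torch.nn as nn

class LearnableAttachment(nn.Module):
    """Attach a new node to existing nodes with learnable attention weights,
    then score candidate links by embedding dot products."""
    def __init__(self, in_dim, hid=32):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid)
        self.att = nn.Linear(2 * hid, 1)

    def forward(self, new_x, graph_x):
        h_new, h_graph = self.proj(new_x), self.proj(graph_x)        # (hid,), (N, hid)
        pairs = torch.cat([h_new.expand_as(h_graph), h_graph], dim=-1)
        weights = torch.softmax(self.att(pairs).squeeze(-1), dim=0)  # learned "edges"
        h_new = h_new + weights @ h_graph                            # one message-passing step
        return torch.sigmoid(h_graph @ h_new)                        # link probabilities

torch.manual_seed(0)
graph_x = torch.randn(20, 16)        # features of existing nodes
new_x = torch.randn(16)              # features of a newly arriving node
model = LearnableAttachment(16)
print(model(new_x, graph_x)[:5])     # predicted link probabilities to the first 5 nodes
```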
29

The Applicability and Scalability of Graph Neural Networks on Combinatorial Optimization / Tillämpning och Skalbarhet av Grafiska Neurala Nätverk på Kombinatorisk Optimering

Hårderup, Peder January 2023
This master's thesis investigates the application of Graph Neural Networks (GNNs) to address scalability challenges in combinatorial optimization, with a primary focus on the minimum Total Dominating set Problem (TDP) and, additionally, the related Carrier Scheduling Problem (CSP) in Internet of Things networks. The research identifies the NP-hard nature of these problems as a fundamental challenge and addresses how to improve predictions on input graphs much larger than those seen during the training phase. Further, the thesis explores the instability of such scaling when leveraging GNNs for TDP and CSP. Two primary measures to counter this scalability problem are proposed and tested: incorporating node degree as an additional feature and modifying the attention mechanism in GNNs. Results indicate that these countermeasures show promise in addressing scalability issues in TDP: including node degree yields overall performance improvements, while the modified attention mechanism presents a more nuanced outcome, with some metrics improved at the cost of others. Applying these methods to CSP yields weak results, underscoring the challenges of scalability in more complex problem domains. The thesis contributes by detecting and addressing scalability challenges in combinatorial optimization using GNNs and provides insights for further research on refining these methods for real-world applications.
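The two countermeasures can be sketched concretely: append (log-scaled) node degree to the input features, and adjust how attention coefficients over neighbours are computed. The snippet below shows both in NumPy with a GAT-style attention score restricted to existing edges; the log scaling and the temperature knob are illustrative assumptions, not the exact modification studied in the thesis.

```python
import numpy as np

def add_degree_feature(features, adj):
    """Append (log-scaled) node degree as an extra input feature so the model can
    tell apart graphs far larger than those seen during training."""
    deg = adj.sum(axis=1, keepdims=True)
    return np.hstack([features, np.log1p(deg)])

def attention_scores(h, adj, a_src, a_dst, temperature=1.0):
    """GAT-style attention restricted to existing edges; a lower temperature
    sharpens the weighting (one possible modification of the mechanism)."""
    logits = (h @ a_src)[:, None] + (h @ a_dst)[None, :]    # e_ij = a_s.h_i + a_d.h_j
    logits = np.where(adj > 0, logits / temperature, -np.inf)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    e = np.where(adj > 0, e, 0.0)
    return e / e.sum(axis=1, keepdims=True).clip(min=1e-9)

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) > 0.6).astype(float)
np.fill_diagonal(adj, 1.0)                                  # keep self-loops
x = add_degree_feature(rng.normal(size=(6, 4)), adj)
att = attention_scores(x, adj, a_src=rng.normal(size=5), a_dst=rng.normal(size=5))
print(att.round(2))                                         # rows sum to 1 over neighbours
```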
30

Engineering Coordination Cages With Generative AI / Konstruktion av Koordinationsburar med Generativ AI

Ahmad, Jin January 2024
Deep learning methods applied to chemistry can speed up the discovery of novel compounds and facilitate the design of highly complex structures that are valid and have important societal applications. Here, we present a pioneering exploration of the use of Generative Artificial Intelligence (GenAI) to design coordination cages within the field of supramolecular chemistry. Specifically, the study leverages GraphINVENT, a graph-based deep generative model, to facilitate the automated generation of tetrahedral coordination cages. Through a combination of computational tools and cheminformatics, the research aims to extend the capabilities of GenAI, traditionally applied in simpler chemical contexts, to the complex and nuanced arena of coordination cages. The approach involves a variety of training strategies, including initial pre-training on a large dataset (GDB-13) followed by transfer learning targeted at generating specific coordination cage structures. Data augmentation techniques were also applied to enrich training but did not yield successful outcomes. Several other strategies were employed, including focusing on single metal ion structures to enhance model familiarity with Fe-based cages and extending the training datasets with diverse molecular examples from the ChEMBL database. Despite these strategies, the models struggled to capture the complex interactions required for successful cage generation, indicating potential limitations in both the diversity of the training datasets and the model's architectural capacity to handle the intricate chemistry of coordination cages. However, training on the organic ligands (linkers) yielded successful results, emphasizing the benefits of focusing on smaller building blocks. The lessons learned from this project are substantial. Firstly, the knowledge acquired about generative models and the complex world of supramolecular chemistry has provided a unique opportunity to understand the challenges and possibilities of applying GenAI to such a complicated field. The results have highlighted the need for further refinement of data handling and model training techniques, paving the way for more advanced applications in the future. Finally, this project has not only deepened our understanding of the capabilities and limitations of GenAI for coordination cages, but also set a foundation for future research that could eventually lead to breakthroughs in designing novel cage structures. Further study could concentrate on learning from the linkers in future data-driven cage design projects.
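The pre-train-then-transfer strategy described above follows a standard pattern: reuse the representation learned on the large general dataset and update only part of the model on the small target set. The sketch below shows that pattern with a stand-in model; it is not GraphINVENT's API, and the model, data, and frozen layers are purely illustrative.

```python
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in for a graph generative model: embeds a graph state vector and
    predicts the next build action (purely illustrative, not GraphINVENT)."""
    def __init__(self, state_dim=16, n_actions=10):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU())
        self.head = nn.Linear(32, n_actions)

    def forward(self, state):
        return self.head(self.embed(state))

def fine_tune(model, states, actions, freeze_prefix="embed", lr=1e-4, epochs=20):
    """Transfer learning: freeze the representation learned on the large general
    dataset (GDB-13 in the thesis) and update only the output head on the small
    target set (cage or linker structures)."""
    for name, p in model.named_parameters():
        p.requires_grad = not name.startswith(freeze_prefix)
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(states), actions)
        loss.backward()
        opt.step()
    return model

model = TinyGenerator()                      # pretend this was pre-trained on GDB-13
states = torch.randn(64, 16)                 # toy target-domain training data
actions = torch.randint(0, 10, (64,))
fine_tune(model, states, actions)
```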
