1 | Molecular Optimization Using Graph-to-Graph Translation
Sandström, Emil. January 2020.
Drug development is a protracted and expensive process. One of the main challenges in drug discovery is to find molecules with desirable properties. Molecular optimization is the task of modifying precursor molecules to endow them with desirable properties. Recent advances in artificial intelligence have led to deep learning models designed for molecular optimization. These models, which generate new molecules with desirable properties, have the potential to accelerate drug discovery.

In this thesis, I evaluate the current state-of-the-art graph-to-graph translation model for molecular optimization, the HierG2G. I examine the HierG2G's performance using three test cases, where the second test is designed, with the help of chemistry experts, to represent a common molecular optimization task. The third test case evaluates the HierG2G's performance on molecules that are previously unseen by the model. I conclude that, in each of the test cases, the HierG2G can successfully generate structurally similar molecules with desirable properties, given a source molecule and a user-specified desired property change. Further, I benchmark the HierG2G against two well-known string-based models, the seq2seq and the Transformer. My results suggest that the seq2seq is the overall best model for molecular optimization, but due to the varying performance among the models, I encourage a potential user to apply all three models simultaneously.
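As a rough illustration of how the string-based baselines frame the task, the sketch below treats molecular optimization as sequence-to-sequence translation: the encoder reads a tokenized source SMILES prefixed by a token encoding the desired property change, and the decoder emits the optimized SMILES. This is a minimal sketch, not code from the thesis; the vocabulary size, model dimensions, and property-token scheme are all illustrative assumptions.

```python
import torch
import torch.nn as nn

VOCAB_SIZE = 64   # hypothetical SMILES-token vocabulary (plus property-change tokens)
D_MODEL = 128

class MoleculeTranslator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.out = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, src_tokens, tgt_tokens):
        # src_tokens: [desired-property-change token] + source SMILES tokens
        # tgt_tokens: target SMILES tokens (teacher forcing during training)
        h = self.transformer(self.embed(src_tokens), self.embed(tgt_tokens))
        return self.out(h)  # per-position logits over the vocabulary

model = MoleculeTranslator()
src = torch.randint(0, VOCAB_SIZE, (1, 40))  # dummy "property + source molecule" sequence
tgt = torch.randint(0, VOCAB_SIZE, (1, 38))  # dummy target sequence
logits = model(src, tgt)
print(logits.shape)  # torch.Size([1, 38, 64])
```

A seq2seq baseline replaces the Transformer with a recurrent encoder-decoder over the same token sequences; the graph-to-graph HierG2G instead encodes and decodes molecular graphs directly rather than SMILES strings.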
2 | Improving Transformer-Based Molecular Optimization Using Reinforcement Learning
Chang, PoChun. January 2021.
By formulating the task of property-based molecular optimization as a neural machine translation problem, researchers have been able to apply the Transformer model from the field of natural language processing to generate molecules with desirable properties by making small modifications to a given starting molecule. These results verify the capability of Transformer models to capture the connection between properties and structural changes in molecular pairs. However, current research only proposes a Transformer model with fixed parameters that can produce a limited number of optimized molecules. Additionally, the trained Transformer model does not always successfully generate optimized output for every given molecule and desirable property constraint. Before the Transformer model can be pushed into real applications, where different sets of desirable property constraints combined with a wide variety of molecules may need to be optimized, these obstacles must be overcome.

In this work, we present a framework that uses reinforcement learning as a fine-tuning method for the pre-trained Transformer, inducing more varied output and leveraging the prior knowledge of the model on challenging data points. Our results show that, depending on the definition of the scoring function, the Transformer model can generate a much larger number of optimized molecules for a data point that is challenging for the pre-trained model. We also examine the relation between the sampling size and the efficiency of the framework in yielding desirable outputs, to demonstrate the optimal configuration for future users. Furthermore, we had chemists inspect the generated molecules and found that the reinforcement learning fine-tuning causes a catastrophic forgetting problem that leads our model to generate unstable molecules. We demonstrate two strategies that successfully reduce the effect of catastrophic forgetting, maintaining the prior knowledge and applying a rule-based scoring component, as a reference for future research.
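A hedged sketch of this kind of reinforcement-learning fine-tuning loop follows; it is not the thesis code. A frozen copy of the pre-trained model acts as a "prior", the fine-tuned "agent" is pushed toward high-scoring samples by a REINFORCE-style augmented-likelihood loss, and keeping the agent's likelihoods close to the prior's is one way to maintain prior knowledge and limit catastrophic forgetting. The toy generator, the scoring function, the sigma weight, and the sampling size are all illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn

VOCAB, HIDDEN, MAX_LEN = 32, 64, 20  # toy sizes, not the thesis configuration

class ToyGenerator(nn.Module):
    """Stand-in autoregressive generator over token sequences (e.g., SMILES)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.gru = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def sample(self, n):
        # Sample n sequences token by token; also return their log-likelihoods.
        tokens = torch.zeros(n, 1, dtype=torch.long)  # start token = 0
        h, logp = None, 0.0
        for _ in range(MAX_LEN):
            x, h = self.gru(self.embed(tokens[:, -1:]), h)
            dist = torch.distributions.Categorical(logits=self.out(x[:, -1]))
            nxt = dist.sample()
            logp = logp + dist.log_prob(nxt)
            tokens = torch.cat([tokens, nxt.unsqueeze(1)], dim=1)
        return tokens, logp

    def log_likelihood(self, tokens):
        # Log-likelihood of given sequences under this model.
        x, _ = self.gru(self.embed(tokens[:, :-1]))
        dist = torch.distributions.Categorical(logits=self.out(x))
        return dist.log_prob(tokens[:, 1:]).sum(dim=1)

def score(seq):
    # Placeholder scoring function. A real one would reward the desired
    # property change and could include a rule-based stability component
    # (the second forgetting-mitigation strategy mentioned above).
    return float((seq % 2 == 0).float().mean())

prior = ToyGenerator()
agent = copy.deepcopy(prior)      # fine-tuning starts from the pre-trained model
opt = torch.optim.Adam(agent.parameters(), lr=1e-4)
sigma = 10.0                      # assumed weight on the reward signal

for step in range(3):
    seqs, agent_logp = agent.sample(16)  # sampling size trades compute for yield
    with torch.no_grad():
        prior_logp = prior.log_likelihood(seqs)
        rewards = torch.tensor([score(s) for s in seqs])
    # Augmented-likelihood loss: move toward high-scoring molecules without
    # drifting far from the prior, which mitigates catastrophic forgetting.
    loss = ((prior_logp + sigma * rewards - agent_logp) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: mean reward {rewards.mean():.3f}")
```

In this formulation the trade-off described in the abstract is explicit: a larger sampling size yields more candidate molecules per update at higher cost, and the prior term and any rule-based component of `score` are the two levers against unstable outputs.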