
Beating 25 Molecular Design Algorithms: Georgia Tech, University of Toronto, and Others Propose Large-Language-Model-Driven Molecular Design


Author | Georgia Institute of Technology Haorui Wang

Editor | ScienceAI

Framed as an optimization problem, molecular discovery poses significant computational challenges because the optimization objectives are typically non-differentiable. Evolutionary algorithms (EAs) are often used to optimize black-box objectives in molecular discovery, traversing chemical space through random mutations and crossovers, but this incurs a large number of expensive objective evaluations.

In this work, researchers from Georgia Tech, the University of Toronto, and Cornell University collaborated to propose Molecular Language Enhanced Evolutionary Optimization (MOLLEO), which significantly improves the molecular optimization capabilities of evolutionary algorithms by integrating pre-trained large language models (LLMs) with chemical knowledge into evolutionary algorithms.

The study, titled "Efficient Evolutionary Search Over Chemical Space with Large Language Models", was posted to the arXiv preprint server on June 23.


Paper link: https://arxiv.org/abs/2406.16976

The huge computational challenge of molecular discovery

Molecular discovery is a complex iterative process of designing, synthesizing, evaluating, and improving candidate molecules, with a wide range of real-world applications including drug design, materials design, energy, and disease treatment. The process is often slow and laborious: design conditions are complex, and assessing molecular properties usually requires expensive procedures (e.g., wet-lab experiments, bioassays, and computational simulations), so even approximate computational evaluation consumes significant resources.

Therefore, the development of efficient molecular search, prediction, and generation algorithms has become a research hotspot in the field of chemistry to accelerate the discovery process. In particular, machine learning-driven approaches have played an important role in rapidly identifying and proposing promising molecular candidates.

Given the importance of the problem, molecular optimization has received a great deal of attention: more than 20 molecular design algorithms have been developed and tested (among which combinatorial optimization methods such as genetic algorithms and reinforcement learning lead generative models and continuous optimization algorithms), as detailed in a recent Nature review. One of the most effective families of methods is evolutionary algorithms (EAs), which are well suited to black-box objective optimization in molecular discovery because they require no gradient evaluations.

However, a major drawback of these algorithms is that they generate candidate structures at random, without exploiting task-specific information, and therefore require a large number of objective-function evaluations. Because property evaluation is expensive, the goal of molecular optimization is not only to find the molecular structures with the best desired properties but also to minimize the number of objective-function evaluations (equivalently, to improve search efficiency).
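To make the search-efficiency issue concrete, here is a minimal sketch of the kind of evolutionary loop involved, where every call to the scoring function is one expensive objective evaluation; the operators, population size, and budget are illustrative placeholders, not the paper's actual Graph-GA settings:

```python
import random

def evolve(initial_pool, score, crossover, mutate,
           generations=10, population_size=50, offspring_size=100):
    """Generic black-box EA loop. Each call to `score` is one expensive
    objective evaluation -- the quantity molecular optimization tries to minimize."""
    population = sorted(initial_pool, key=score, reverse=True)[:population_size]
    for _ in range(generations):
        offspring = []
        for _ in range(offspring_size):
            p1, p2 = random.sample(population, 2)
            # Random in a plain EA; chemistry-aware (LLM-guided) in MOLLEO.
            offspring.append(mutate(crossover(p1, p2)))
        # Survivor selection: keep the fittest of parents + offspring.
        population = sorted(population + offspring,
                            key=score, reverse=True)[:population_size]
    return population
```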

Recently, LLMs have demonstrated some fundamental abilities in a number of chemistry-related tasks, such as predicting molecular properties, retrieving optimal molecules, automating chemical experiments, and generating molecules with target properties. Since LLMs are trained on a large-scale textual corpus that encompasses a wide range of tasks, they demonstrate general-purpose language comprehension and basic chemistry knowledge, making them an interesting tool for chemical discovery tasks.

However, many LLM-based approaches rely on in-context learning and prompt engineering, which can be problematic when designing molecules against strict numerical goals, as LLMs may struggle to satisfy precise numerical constraints or optimize specific numerical objectives. In addition, methods that rely solely on LLM prompting may generate physically implausible molecules, or invalid SMILES strings that cannot be decoded into chemical structures.
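As an illustration of the validity issue, a standard way to check whether a generated SMILES string decodes into a chemical structure is RDKit's parser (a minimal sketch, not part of the paper's pipeline):

```python
from rdkit import Chem

def is_valid_smiles(smiles: str) -> bool:
    """True only if the SMILES string parses into a real molecular graph."""
    return Chem.MolFromSmiles(smiles) is not None

print(is_valid_smiles("CCO"))   # True: ethanol
print(is_valid_smiles("C1CC"))  # False: unclosed ring
```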

Molecular Language-Enhanced Evolutionary Optimization

In this study, we propose Molecular Language Enhanced Evolutionary Optimization (MOLLEO), which integrates LLMs into EA to improve the quality of generated candidates and accelerate the optimization process. MOLLEO uses LLMs as genetic operators to generate new candidates through crossover or mutation. For the first time, we show how LLMs can be integrated into an EA framework for molecular generation.

In this study, we considered three language models with different strengths: GPT-4, BioT5, and MoleculeSTM. We integrate each LLM into different crossover and mutation procedures and justify our design choices with ablation studies.

We demonstrated the superior performance of MOLLEO through experiments on multiple black-box optimization tasks, including single-objective and multi-objective optimization. On all tasks, including the more challenging protein-ligand docking tasks, MOLLEO outperformed the baseline EA and 25 other strong baseline methods. In addition, we showed that MOLLEO can further improve on the best JNK3 inhibitor molecules in the ZINC 250K database.

Our MOLLEO framework builds on a simple evolutionary algorithm, Graph-GA, and enhances it by integrating chemistry-aware LLMs into its genetic operations.

We begin with an overview of the problem statement, emphasizing the need to minimize costly objective evaluations in black-box optimization. MOLLEO uses LLMs such as GPT-4, BioT5, and MoleculeSTM to generate novel candidate molecules guided by textual descriptions of the objective.

Specifically, in the crossover step, instead of randomly recombining the two parent molecules, we use the LLM to propose the offspring most likely to maximize the target fitness function. In the mutation step, the operator mutates the fittest members of the current population based on the objective description. However, we noticed that LLMs do not always generate candidates with higher fitness than the input molecules, so we introduce selection pressure that filters edited molecules based on structural similarity, as sketched below.
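The following sketch shows one way an LLM-backed crossover plus a similarity-based selection filter could look; the prompt wording, the `llm` client, and the 0.3 Tanimoto cutoff are illustrative assumptions, not the paper's exact implementation:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def llm_crossover(llm, parent1: str, parent2: str, task_description: str) -> str:
    # Hypothetical LLM call: ask for a child molecule guided by the objective.
    prompt = (f"Task: {task_description}\n"
              f"Parent molecules (SMILES): {parent1}, {parent2}\n"
              "Propose one new SMILES string that recombines their fragments "
              "and is more likely to maximize the objective. Reply with SMILES only.")
    return llm(prompt).strip()

def passes_selection_pressure(child: str, parent: str, min_sim: float = 0.3) -> bool:
    """Filter LLM-edited molecules by structural similarity to the parent,
    since LLM edits are not guaranteed to improve fitness."""
    m_child, m_parent = Chem.MolFromSmiles(child), Chem.MolFromSmiles(parent)
    if m_child is None:
        return False  # invalid SMILES are discarded outright
    fp_child = AllChem.GetMorganFingerprintAsBitVect(m_child, 2, nBits=2048)
    fp_parent = AllChem.GetMorganFingerprintAsBitVect(m_parent, 2, nBits=2048)
    return DataStructs.TanimotoSimilarity(fp_child, fp_parent) >= min_sim
```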

Experimental results

We evaluated MOLLEO on 18 tasks. Tasks are selected from PMO and TDC benchmarks and databases and can be grouped into the following categories:

  1. Structure-based optimization: optimizing molecules against a target structure, including isomer generation from a target molecular formula (isomers_c9h10n2o2pf2cl) and two tasks based on matching or avoiding scaffold and substructure motifs (deco_hop, scaffold_hop).
  2. Name-based optimization: includes finding compounds similar to known drugs (mestranol_similarity, thiothixene_rediscovery) and three multi-property optimization (MPO) tasks that rediscover a drug (e.g., Perindopril, Ranolazine, Sitagliptin) while optimizing other properties such as hydrophobicity (LogP) and permeability (TPSA). Although these tasks mainly involve rediscovering existing drugs rather than designing new molecules, they probe the basic chemical optimization capabilities of LLMs.
  3. Property optimization: includes a simple property-optimization task, QED, which measures the drug-likeness of a molecule. We then focus on three PMO tasks measuring activity against the proteins DRD2 (dopamine receptor D2), GSK3β (glycogen synthase kinase-3β), and JNK3 (c-Jun N-terminal kinase-3). We also include three protein-ligand docking tasks (structure-based drug design) from TDC, which are closer to real-world drug design than simple physicochemical properties; a sketch of oracles for the simpler properties follows this list.
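For the simpler physicochemical objectives above, RDKit exposes ready-made calculators; here is a minimal sketch of such property oracles (illustrative, not the benchmarks' exact oracle code):

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

def property_oracles(smiles: str) -> dict:
    """Compute the drug-likeness and permeability-style properties above."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"invalid SMILES: {smiles}")
    return {
        "QED": QED.qed(mol),               # drug-likeness, in [0, 1]
        "LogP": Descriptors.MolLogP(mol),  # hydrophobicity
        "TPSA": Descriptors.TPSA(mol),     # topological polar surface area
    }

print(property_oracles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```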

To evaluate our method, we follow the PMO benchmark protocol and report the area under the curve of the top-k average property value versus the number of objective-function calls (top-k AUC), which accounts for both the objective values reached and the computational budget.
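In essence, the metric tracks the mean of the k best scores found so far as a function of oracle calls and averages that curve over the budget, so methods that find good molecules early score higher. A minimal sketch (the PMO benchmark's exact normalization may differ in detail):

```python
def top_k_auc(scores_in_call_order, k=10, budget=10000):
    """Average of the running top-k mean over the evaluation budget."""
    running_top_k, total = [], 0.0
    for s in scores_in_call_order[:budget]:
        running_top_k = sorted(running_top_k + [s], reverse=True)[:k]
        total += sum(running_top_k) / len(running_top_k)
    return total / budget  # assumes the full call budget is spent
```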

As baselines, we used the top-performing models from the PMO benchmark, including the reinforcement-learning-based REINVENT, the basic evolutionary algorithm Graph-GA, and Gaussian-process Bayesian optimization (GP BO).


Figure: Top-10 AUC on single-objective tasks. (Source: Paper)

We conducted single-objective optimization experiments on 12 PMO tasks; the results are shown above, where we report the top-10 AUC score for each task and the overall ranking of each model. The results show that using any of the large language models as a genetic operator improves performance over the default Graph-GA and all other baseline models.

GPT-4 outperformed all models on 9 of the 12 tasks, demonstrating its effectiveness and promise as a general-purpose LLM for molecular generation. BioT5 achieved the second-best results among the tested models, with a total score close to GPT-4's, indicating that smaller models trained and fine-tuned on domain knowledge also hold promise within MOLLEO.

MoleculeSTM is a small CLIP-style model fine-tuned to align molecules' natural-language descriptions with their chemical structures. Within the evolutionary algorithm, we apply gradient descent guided by the same natural-language description to generate new molecules, and this variant likewise outperforms the other baseline methods.


Figure: Population fitness on the JNK3 inhibition task as the number of iterations increases. (Source: Paper)

To verify the effectiveness of integrating LLMs into the EA framework, we show the score distribution of the initial random molecular pool on the JNK3 task. We then performed one round of LLM editing on every molecule in the pool and plotted the JNK3 score distribution of the edited molecules.

The results show that every LLM-edited distribution shifts slightly toward higher scores, suggesting that the LLMs do provide useful modifications. However, the overall objective scores remain low, so a single editing step is insufficient, and iterative optimization with the evolutionary algorithm is necessary.


Figure: Average docking score of the top 10 molecules when docked against the DRD3, EGFR, or adenosine A2A receptor proteins. (Source: Paper)

In addition to the 12 single-objective optimization tasks in PMO, we also tested MOLLEO on more challenging protein-ligand docking tasks, which are closer to real-world molecule generation scenarios. The figure above plots the average docking scores of the top ten molecules from MOLLEO and Graph-GA against the number of objective-function calls.

The results show that the molecules generated by our method almost always achieved better docking scores than the baseline model and converged faster across all three proteins. Of the three language models we used, BioT5 performed best. In practice, better docking scores and faster convergence can reduce the number of bioassays needed to screen molecules, making the process more cost- and time-efficient.


Figure: Sum and hypervolume scores on the multi-objective tasks. (Source: Paper)


Figure: Visualization of the Pareto-optimal sets of Graph-GA and MOLLEO on multi-objective tasks. (Source: Paper)

For multi-objective optimization, we consider two metrics: the top-10 AUC of the sum of all objective scores, and the hypervolume of the Pareto-optimal set. We report multi-objective results on three tasks. Tasks 1 and 2 are inspired by drug-discovery goals and optimize three objectives simultaneously: maximizing a molecule's QED, minimizing its synthetic accessibility (SA) score (i.e., making it easier to synthesize), and maximizing its binding score against JNK3 (task 1) or GSK3β (task 2). Task 3 is harder, requiring five objectives at once: maximizing the QED and JNK3 binding scores while minimizing the GSK3β, DRD2, and SA scores.
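The summed score is straightforward; the hypervolume measures how much of the objective space the Pareto-optimal set dominates relative to a reference point. A minimal two-objective sketch of both ideas (the tasks above use three and five objectives, which require a general hypervolume algorithm; this is for illustration only):

```python
import numpy as np

def pareto_front(points: np.ndarray) -> np.ndarray:
    """Non-dominated subset, assuming every objective is maximized."""
    keep = [i for i, p in enumerate(points)
            if not any((q >= p).all() and (q > p).any()
                       for j, q in enumerate(points) if j != i)]
    return points[keep]

def hypervolume_2d(front: np.ndarray, ref=(0.0, 0.0)) -> float:
    """Area dominated by a 2-D maximization front relative to `ref`."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front.tolist(), reverse=True):  # descending in x
        if y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

pts = np.array([[0.9, 0.2], [0.6, 0.6], [0.2, 0.9], [0.4, 0.4]])
print(hypervolume_2d(pareto_front(pts)))  # 0.48; (0.4, 0.4) is dominated
```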

We found that MOLLEO (GPT-4) consistently outperformed the Graph-GA baseline on both the hypervolume and sum metrics across all three tasks. In the figure, we visualize the Pareto-optimal sets of our method and Graph-GA (in objective space) for Task 1 and Task 2. When multiple objectives are introduced, the performance of the open-source language models degrades; we speculate that this stems from their inability to handle long, information-dense contexts.


Figure: Initializing MOLLEO with the best molecules in ZINC 250K. (Source: Paper)

The ultimate goal of an evolutionary algorithm is to improve on the initial molecular pool and discover new molecules. To probe MOLLEO's capacity for such exploration, we initialized the molecular pool with the best molecules in ZINC 250K and then optimized with MOLLEO and Graph-GA. Experimental results on the JNK3 task show that our algorithm consistently outperforms the Graph-GA baseline and can improve on the best molecules found in existing datasets.

In addition, we note that BioT5's training set is the ZINC20 database (containing 1.4 billion compounds) and MoleculeSTM's training set is drawn from the PubChem database (about 250,000 molecules). We checked whether the final molecules generated by each model on the JNK3 task appeared in the corresponding dataset and found no overlap, indicating that the models can generate new molecules absent from their training sets.
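A check of this kind typically canonicalizes SMILES before comparison, so that different string spellings of the same molecule still match; a minimal sketch (comparing against the full billion-scale ZINC20 would in practice need a hashed index rather than an in-memory set):

```python
from rdkit import Chem

def canonical(smiles: str):
    """Canonical SMILES, or None if the string does not parse."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

def novel_molecules(generated, training_smiles):
    """Generated molecules whose canonical form never appears in training."""
    seen = {canonical(s) for s in training_smiles}
    return [s for s in generated
            if (c := canonical(s)) is not None and c not in seen]
```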

It can be applied to drug discovery, materials, and biomolecule design

Molecular discovery and design is a rich field with numerous practical applications, many of which are beyond the scope of the current study but still relevant to the proposed framework. By combining LLMs with EAs and steering the search through plain-text task descriptions, MOLLEO provides a flexible algorithmic framework that could in the future be applied to drug discovery, expensive computer simulations, and the design of materials or large biomolecules.

In future work, we will further focus on how to improve the quality of the generated molecules, including their target values and speed of discovery. As LLMs continue to advance, we expect the performance of the MOLLEO framework to continue to improve, making it a promising tool for generative chemistry applications.
