Poster
Enhancing Ligand Validity and Affinity in Structure-Based Drug Design with Multi-Reward Optimization
Seungbeom Lee · Munsun Jo · Jungseul Ok · Dongwoo Kim
West Exhibition Hall B2-B3 #W-124
Deep learning-based structure-based drug design (SBDD) aims to generate ligand molecules with desirable properties for protein targets. While existing models have demonstrated competitive performance in generating ligand molecules, they primarily focus on learning the chemical distribution of the training data and often lack effective steerability to ensure the desired chemical quality of the generated molecules. To address this issue, we propose a multi-reward optimization framework that jointly fine-tunes generative models on multiple attributes, such as binding affinity, validity, and drug-likeness. Specifically, we derive direct preference optimization for a Bayesian flow network, used as a backbone for molecule generation, and integrate a reward normalization scheme to accommodate multiple objectives. Experimental results show that our method generates more realistic ligands than baseline models while achieving higher binding affinity, expanding the Pareto front empirically observed in previous studies.
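The paper derives direct preference optimization for a Bayesian flow network, which is not reproduced here. As an illustration only, the sketch below shows one plausible way to normalize several per-molecule rewards (e.g., binding affinity, validity, drug-likeness) onto a common scale, combine them into a single preference score, and plug the resulting preference pairs into a standard DPO-style loss. The function names, the z-score normalization, the equal weighting, and the pairing logic are all assumptions for the sake of the example, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F


def normalize_rewards(rewards: torch.Tensor) -> torch.Tensor:
    """Z-score normalize each reward column across the batch so that objectives
    on different scales (e.g., docking score vs. QED) contribute comparably."""
    mean = rewards.mean(dim=0, keepdim=True)
    std = rewards.std(dim=0, keepdim=True).clamp_min(1e-8)
    return (rewards - mean) / std


def combined_reward(rewards: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Weighted sum of the normalized per-objective rewards (assumed aggregation)."""
    return (normalize_rewards(rewards) * weights).sum(dim=-1)


def dpo_loss(logp_win, logp_lose, ref_logp_win, ref_logp_lose, beta: float = 0.1):
    """Generic DPO objective on preference pairs: increase the fine-tuned model's
    log-likelihood ratio for the sample with the higher combined reward."""
    margin = (logp_win - ref_logp_win) - (logp_lose - ref_logp_lose)
    return -F.logsigmoid(beta * margin).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy rewards for 8 generated ligands; columns stand in for
    # [binding affinity, validity, drug-likeness] (assumed objectives).
    rewards = torch.randn(8, 3)
    weights = torch.ones(3)
    scores = combined_reward(rewards, weights)

    # Rank one pair: the higher-scoring ligand is the "winner" for DPO.
    winner, loser = (0, 1) if scores[0] > scores[1] else (1, 0)
    print(f"preferred sample: {winner}, dispreferred sample: {loser}")
```

In this sketch, the normalization step is what lets heterogeneous objectives be traded off with a single weight vector; any scalarization or pairing scheme could be substituted without changing the surrounding DPO machinery.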
Designing new drugs often involves finding molecules that can effectively bind to disease-related proteins. While AI models have made progress in generating such molecules, they typically focus on reproducing the chemical distribution of their training data and lack effective steerability toward the desired chemical properties of the generated molecules. We propose a new method that allows users to steer the model toward generating molecules with several desirable features at once, such as strong binding affinity and drug-like properties. Our approach fine-tunes the model with multiple objectives, enabling the generation of more realistic and effective drug candidates. Experimental results demonstrate that our approach outperforms baseline models, offering a more practical solution for drug discovery tasks.