Equivariant Pretrained Transformer for Unified Geometric Learning on Multi-Domain 3D Molecules

Bibliographic Details
Title: Equivariant Pretrained Transformer for Unified Geometric Learning on Multi-Domain 3D Molecules
Authors: Jiao, Rui; Kong, Xiangzhe; Yu, Ziyang; Huang, Wenbing; Liu, Yang
Publication Year: 2024
Collection: Computer Science; Physics (Other)
Subject Terms: Computer Science - Machine Learning, Physics - Chemical Physics
Description: Pretraining on large collections of unlabeled 3D molecules has demonstrated superior performance in various scientific applications. However, prior efforts typically focus on pretraining models within a specific domain, either proteins or small molecules, missing the opportunity to leverage cross-domain knowledge. To mitigate this gap, we introduce the Equivariant Pretrained Transformer (EPT), a novel pretraining framework designed to harmonize the geometric learning of small molecules and proteins. Specifically, EPT unifies the geometric modeling of multi-domain molecules via a block-enhanced representation that lets each atom attend to a broader context. Built upon the Transformer framework, EPT is further equipped with E(3) equivariance to facilitate accurate representation of 3D structures. Another key innovation of EPT is its block-level pretraining task, which allows for joint pretraining on datasets comprising both small molecules and proteins. Experimental evaluations on a diverse set of benchmarks, including ligand binding affinity prediction, molecular property prediction, and protein property prediction, show that EPT significantly outperforms previous SOTA methods on affinity prediction and achieves the best or comparable performance relative to existing domain-specific pretraining models on the other tasks.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2402.12714
Accession Number: edsarx.2402.12714
Database: arXiv
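
The abstract names E(3) equivariance as a key ingredient of EPT. As a minimal illustration only (this is not the paper's actual architecture), the sketch below shows an EGNN-style equivariant layer in PyTorch: attention weights are computed from E(3)-invariant inputs (features and pairwise distances), and coordinates are updated along relative-position directions, so the layer commutes with rotations, reflections, and translations. All names here (`EquivariantLayer`, `phi_e`, `phi_h`, `phi_x`) and the layer widths are hypothetical.

```python
# Minimal sketch of an E(3)-equivariant update (hypothetical, not EPT's code).
# Invariant features h mix via attention over invariant pair inputs; coordinates
# x move along relative-position directions scaled by invariant gates.
import torch
import torch.nn as nn

class EquivariantLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Pairwise score from invariant inputs only (features + squared distance)
        self.phi_e = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU(), nn.Linear(dim, 1))
        self.phi_h = nn.Linear(2 * dim, dim)  # invariant feature update
        self.phi_x = nn.Linear(dim, 1)        # invariant scalar gate for coordinates

    def forward(self, h: torch.Tensor, x: torch.Tensor):
        # h: (n, dim) invariant atom features; x: (n, 3) coordinates
        rel = x.unsqueeze(1) - x.unsqueeze(0)          # (n, n, 3) relative positions
        dist2 = (rel ** 2).sum(-1, keepdim=True)       # (n, n, 1) E(3)-invariant
        n = h.size(0)
        hi = h.unsqueeze(1).expand(-1, n, -1)          # (n, n, dim)
        hj = h.unsqueeze(0).expand(n, -1, -1)          # (n, n, dim)
        e = self.phi_e(torch.cat([hi, hj, dist2], dim=-1))        # (n, n, 1)
        w = torch.softmax(e.squeeze(-1), dim=-1).unsqueeze(-1)    # attention weights
        m = (w * hj).sum(dim=1)                                   # aggregated message
        h_new = h + self.phi_h(torch.cat([h, m], dim=-1))
        # Coordinate update: invariant gates times equivariant directions, so
        # rotating/reflecting/translating x transforms x_new identically.
        x_new = x + (w * self.phi_x(hj) * rel / (dist2.sqrt() + 1e-8)).sum(dim=1)
        return h_new, x_new

# Usage: layer = EquivariantLayer(dim=64); h2, x2 = layer(torch.randn(10, 64), torch.randn(10, 3))
```

A quick sanity check of the equivariance claim: applying a random rotation R and translation t to x before the layer should produce x_new transformed by the same R and t, while h_new is unchanged.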