AutoVP: An Automated Visual Prompting Framework and Benchmark
Hsi-ai Tsao, Lei Hsiung, et al.
ICLR 2024
Recent advances in language modeling have had a tremendous impact on how we handle sequential data in science. Language architectures have been a hotbed of innovation in natural language processing over the last decade, and have since gained prominence in modeling proteins and chemical processes, elucidating structural relationships from textual/sequential data. Surprisingly, some of these relationships refer to three-dimensional structural features, raising important questions about the dimensionality of the information encoded in sequential data. Here, we demonstrate that the unsupervised application of a language model to a language representation of bio-catalyzed chemical reactions can capture the signal underlying substrate–binding-site atomic interactions. This allows us to identify the three-dimensional position of the binding site in unknown protein sequences. The language representation comprises a reaction SMILES (simplified molecular-input line-entry system) for the substrate and products, together with the amino acid sequence of the enzyme. With no supervision, this approach recovers 52.13% of the binding site when co-crystallized substrate-enzyme structures are taken as ground truth, substantially outperforming other attention-based models.
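To make the approach concrete, here is a minimal, illustrative sketch (not the paper's code) of the core idea: score enzyme residues by the attention exchanged with substrate SMILES tokens in a self-attention layer over the concatenated reaction representation, and take the top-ranked residues as the predicted binding site. The toy tokenizer, random stand-in embeddings, single attention head, and the exact scoring rule are all assumptions for illustration; a trained chemical/protein language model would supply the embeddings and attention maps.

```python
# Hedged sketch: rank enzyme residues by attention linking them to
# substrate SMILES tokens. Shapes, tokenization, and scoring are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Toy input: substrate SMILES tokens followed by enzyme residue tokens.
substrate_tokens = list("CC(=O)O")            # e.g. acetic acid (hypothetical)
enzyme_residues = list("MKTAYIAKQRQISFVKSH")  # hypothetical sequence
tokens = substrate_tokens + enzyme_residues
n_sub, n_tot = len(substrate_tokens), len(tokens)

d = 16  # embedding dimension (assumed)
# Random stand-in embeddings; a trained language model would supply these.
x = rng.normal(size=(n_tot, d))

# Single scaled dot-product self-attention head.
Wq, Wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
q, k = x @ Wq, x @ Wk
scores = q @ k.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)      # each row sums to 1

# Score each residue by the total attention flowing between it and the
# substrate tokens (both directions), then take the top-k residues as
# the predicted binding site.
sub_to_res = attn[:n_sub, n_sub:].sum(axis=0)
res_to_sub = attn[n_sub:, :n_sub].sum(axis=1)
residue_score = sub_to_res + res_to_sub

top_k = 5
predicted_site = np.argsort(residue_score)[-top_k:][::-1]
print("predicted binding-site residue indices:", predicted_site.tolist())
```

In an evaluation like the one the abstract describes, the predicted residue indices would be compared against residues observed in contact with the substrate in co-crystallized structures.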
Vinamra Baghel, Ayush Jain, et al.
INFORMS 2023
Marianna Rapsomaniki, Jannis Born, et al.
AMLD EPFL 2024
Clément L. Canonne, Gautam Kamath, et al.
NeurIPS 2020