Dilated Convolution for Time Series Learning
Wang Zhang, Subhro Das, et al.
ICASSP 2025
Large language models, commonly known as LLMs, are showing promise in tackling some of the most complex tasks in AI. In this perspective, we review the wider field of foundation models—of which LLMs are a component—and their application to the field of materials discovery. In addition to the current state of the art—including applications to property prediction, synthesis planning and molecular generation—we also look to the future, and posit how new methods of data capture, and indeed new modalities of data, will influence the direction of this emerging field.
Susan L. Spraragen
International Conference on Design and Emotion 2010
Zhikun Yuen, Paula Branco, et al.
DSAA 2023
John R. Kender, Rick Kjeldsen
IEEE Transactions on Pattern Analysis and Machine Intelligence