Paul Soulos, Aleksandar Terzic, et al. "Recurrent Transformers Trade-off Parallelism for Length Generalization on Regular Languages." NeurIPS 2024.
Eduardo Almeida Soares, Nathaniel Park, et al. "A Large Encoder-Decoder Polymer-Based Foundation Model." NeurIPS 2024.
Ismail Erbas, Vikas Pandey, et al. "Compressing Recurrent Neural Networks for FPGA-accelerated Implementation in Fluorescence Lifetime Imaging." NeurIPS 2024.
Vadim Elisseev, Max Esposito, et al. "Towards Using Large Language Models and Deep Reinforcement Learning for Inertial Fusion Energy." NeurIPS 2024.
Kumudu Geethan Karunaratne, Michael Hersche, et al. "On the role of noise in factorizers for disentangling distributed representations." NeurIPS 2024.
Victor Shirasuna, Eduardo Almeida Soares, et al. "Agnostic Causality-Driven Enhancement of Chemical Foundation Models on Downstream Tasks." NeurIPS 2024.
Imran Nasim, Joao Lucas de Sousa Almeida. "Fine-Tuned MLP-Mixers as data-driven Numerical Surrogates?" NeurIPS 2024.
Viviane T. Silva, Alexandre Rademaker, et al. "Automated, LLM enabled extraction of synthesis details for reticular materials from scientific literature." NeurIPS 2024.
Jonas Zausinger, Lars Pennig, et al. "Regress, Don't Guess – A Regression-like Loss on Number Tokens for Language Models." NeurIPS 2024.