Towards a Statistical Theory of Learning to Learn In-context with Transformers. Youssef Mroueh. NeurIPS 2023.
Forcing Generative Models to Degenerate Ones: The Power of Data Poisoning Attacks. Shuli Jiang, Swanand Ravindra Kadhe, et al. NeurIPS 2023.
Beyond Chemical Language: A Multimodal Approach to Enhance Molecular Property Prediction. Eduardo Almeida Soares, Emilio Ashton Vital Brazil, et al. NeurIPS 2023.
Capturing Formulation Design of Battery Electrolytes with Chemical Large Language Model. Eduardo Almeida Soares, Vidushi Sharma, et al. NeurIPS 2023.
A Framework for Toxic PFAS Replacement based on GFlowNet and Chemical Foundation Model. Eduardo Almeida Soares, Flaviu Cipcigan, et al. NeurIPS 2023.
FairSISA: Ensemble Post-Processing to Improve Fairness of Unlearning in LLMs. Swanand Ravindra Kadhe, Anisa Halimi, et al. NeurIPS 2023.
Weakly Supervised Detection of Hallucinations in LLM Activations. Miriam Rateike, Celia Cintas, et al. NeurIPS 2023.
Using Foundation Models to Promote Digitization and Reproducibility in Scientific Experimentation. Amol Thakkar, Andrea Giovannini, et al. NeurIPS 2023.