Time-LLM: Time Series Forecasting by Reprogramming Large Language Models. Ming Jin, Shiyu Wang, et al. ICLR 2024.
Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective. Ming-yu Chung, Sheng-yen Chou, et al. ICLR 2024.
Ring-A-Bell! How Reliable are Concept Removal Methods For Diffusion Models? Yu-Lin Tsai, Chia-yi Hsu, et al. ICLR 2024.
It's Never Too Late: Fusing Acoustic Information into Large Language Models for Automatic Speech Recognition. Chen Chen, Ruizhe Li, et al. ICLR 2024.
Large Language Models are Efficient Learners of Noise-Robust Speech Recognition. Yuchen Hu, Chen Chen, et al. ICLR 2024.
The Devil is in the Neurons: Interpreting and Mitigating Social Biases in Language Models. Yan Liu, Yu Liu, et al. ICLR 2024.
Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! Xiangyu Qi, Yi Zeng, et al. ICLR 2024.
AutoVP: An Automated Visual Prompting Framework and Benchmark. Hsi-ai Tsao, Lei Hsiung, et al. ICLR 2024.
MulBERRY: Enabling Bit-Error Robustness for Energy-Efficient Multi-Agent Autonomous Systems. Zishen Wan, Nandhini Chandramoorthy, et al. ASPLOS 2024.