Megh Thakkar, Quentin Fournier, et al.
ACL 2024
Recent works have demonstrated the effectiveness of self-alignment, in which a large language model is aligned to follow general instructions using instructional data generated from the model itself, starting from a handful of human-written seeds. Instead of general alignment, in this work, we focus on self-alignment for expert domain specialization (e.g., biomedicine, finance). As a preliminary, we quantitatively show the marginal effect that generic instruction-following training has on performance in downstream expert domains. To remedy this, we propose self-specialization - allowing for effective model specialization while achieving cross-task generalization by leveraging only a few labeled seeds. Self-specialization offers a data- and parameter-efficient way of "carving out" an expert model out of a generalist pre-trained LLM. Exploring a variety of popular open large models as a base for specialization, our experimental results in both biomedical and financial domains show that our self-specialized models outperform their base models by a large margin, and even larger models that are generally instruction-tuned or that have been adapted to the target domain by other means.
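To make the self-specialization recipe described in this abstract concrete, the sketch below shows one way a few human-written domain seeds could be used to prompt the base model into synthesizing new instruction-response pairs for a target domain. This is a minimal illustration under assumptions of our own: the model name, the biomedical seed tasks, the prompt template, and the sampling settings are placeholders, not the authors' actual pipeline, and the generated data would then be filtered and used for parameter-efficient fine-tuning (e.g., LoRA adapters) of the same base model.

```python
# Minimal sketch of a self-specialization data-generation loop, assuming a
# Hugging Face causal LM as the generalist base model. Seeds, prompt template,
# and model name are illustrative placeholders, not the paper's exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # placeholder generalist base model
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# A handful of human-written domain seeds (hypothetical biomedical examples).
seeds = [
    {"instruction": "List common side effects of metformin.",
     "response": "Nausea, diarrhea, abdominal discomfort, and vitamin B12 deficiency."},
    {"instruction": "Define 'myocardial infarction' in one sentence.",
     "response": "Death of heart muscle tissue caused by prolonged loss of blood supply."},
]

def build_prompt(seed_tasks):
    """Few-shot prompt asking the base model to propose a new domain task."""
    demos = "\n\n".join(
        f"Instruction: {s['instruction']}\nResponse: {s['response']}" for s in seed_tasks
    )
    return demos + "\n\nInstruction:"

def synthesize(n_tasks=5, max_new_tokens=128):
    """Generate new instruction-response pairs from the base model itself."""
    prompt = build_prompt(seeds)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    prompt_len = inputs["input_ids"].shape[1]
    pairs = []
    for _ in range(n_tasks):
        out = model.generate(**inputs, max_new_tokens=max_new_tokens,
                             do_sample=True, temperature=0.9)
        # Keep only the newly generated continuation, i.e. the synthetic task.
        pairs.append(tokenizer.decode(out[0][prompt_len:], skip_special_tokens=True))
    return pairs

if __name__ == "__main__":
    # The resulting pairs would be filtered and used for parameter-efficient
    # tuning (e.g., LoRA) to "carve out" the domain expert from the base model.
    for pair in synthesize():
        print(pair, "\n---")
```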
Yidi Wu, Thomas Bohnstingl, et al.
ICML 2025
Tahira Naseem, GX Xu, et al.
ACL 2024
Robert Farrell, Rajarshi Das, et al.
AAAI-SS 2010