On the Expressiveness and Length Generalization of Selective State-Space Models on Regular Languages
Abstract
Selective state-space models (SSMs) are an emerging alternative to the Transformer, offering the unique advantage of parallel training and sequential inference. While these models have shown promising performance on a variety of tasks, their formal expressiveness and length generalization properties remain insufficiently explored. In this work, we provide insight into the workings of selective SSMs by analyzing their expressiveness and length generalization performance on regular language tasks, i.e., finite-state automaton (FSA) emulation. We address the limitations of modern SSM-based architectures by introducing the Selective Dense State-Space Model (\name), the first selective SSM that exhibits perfect length generalization on a variety of regular language tasks using a single layer. It utilizes a dictionary of dense transition matrices, a softmax selection mechanism that creates a convex combination of the dictionary matrices at each time step, and a readout consisting of layer normalization followed by a linear map. We then investigate the expressiveness of variants of diagonal selective SSMs by considering their empirical performance on commutative and non-commutative automata. We explain most of the experimental findings with theoretical considerations.
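To make the architecture described above concrete, the following is a minimal sketch of one recurrent step of such a layer: a softmax over selection logits produces convex weights, the weighted combination of dense dictionary matrices drives the state update, and the output is read out via layer normalization followed by a linear map. All names (sd_ssm_forward, W_sel, B_in, W_out), the one-hot input encoding, and the additive input term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, V, C = 8, 4, 3, 5          # state dim, dictionary size, vocab size, output classes

A = rng.normal(size=(K, n, n)) / np.sqrt(n)   # dictionary of dense transition matrices
W_sel = rng.normal(size=(V, K))               # selection logits from the input symbol
B_in = rng.normal(size=(V, n))                # input-to-state map (assumed additive)
W_out = rng.normal(size=(n, C))               # linear readout applied after layer norm

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def layer_norm(h, eps=1e-5):
    return (h - h.mean()) / np.sqrt(h.var() + eps)

def sd_ssm_forward(tokens):
    """Run one selective dense SSM layer over a symbol sequence; return per-step logits."""
    h = np.zeros(n)
    outputs = []
    for t in tokens:
        x = np.eye(V)[t]                       # one-hot encoding of the input symbol
        s = softmax(x @ W_sel)                 # convex weights over the dictionary
        A_t = np.tensordot(s, A, axes=1)       # convex combination of dense matrices
        h = A_t @ h + x @ B_in                 # selective state update
        outputs.append(layer_norm(h) @ W_out)  # readout: layer norm + linear map
    return np.stack(outputs)

logits = sd_ssm_forward([0, 2, 1, 1, 0])
print(logits.shape)                            # (5, 5)
```

In this sketch the transition matrix applied at each step depends only on the current input symbol, which is what allows the recurrence to emulate an input-driven finite-state automaton while remaining parallelizable in training via an associative scan.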