Tim Erdmann, Stefan Zecevic, et al.
ACS Spring 2024
Given the lack of word delimiters in written Japanese, word segmentation is generally considered a crucial first step in processing Japanese texts. Typical Japanese segmentation algorithms rely either on a lexicon and syntactic analysis or on pre-segmented data; both resources are labor-intensive to produce, and the lexicon-syntactic techniques are vulnerable to the unknown-word problem. In contrast, we introduce a novel, more robust statistical method that uses unsegmented training data. Despite its simplicity, the algorithm yields performance on long kanji sequences comparable to, and sometimes surpassing, that of state-of-the-art morphological analyzers over a variety of error metrics. The algorithm also outperforms another mostly-unsupervised statistical algorithm previously proposed for Chinese. Additionally, we present a two-level annotation scheme for Japanese that incorporates multiple segmentation granularities, and introduce two novel evaluation metrics, both based on the notion of a compatible bracket, that can account for multiple granularities simultaneously.
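The abstract does not spell out the procedure, but the general flavor of mostly-unsupervised statistical segmentation from raw text can be illustrated as follows: collect character n-gram counts from an unsegmented corpus and posit a word boundary wherever n-grams lying entirely on one side of a candidate cut are more frequent than n-grams straddling it. The sketch below is only an illustration of that idea, not the authors' exact algorithm; the function names, the n-gram orders, and the 0.5 threshold are all assumptions.

```python
from collections import Counter

def ngram_counts(corpus: str, n_values=(2, 3, 4)) -> Counter:
    """Collect character n-gram counts from raw, unsegmented text."""
    counts = Counter()
    for n in n_values:
        for i in range(len(corpus) - n + 1):
            counts[corpus[i:i + n]] += 1
    return counts

def boundary_evidence(seq: str, k: int, counts: Counter, n_values=(2, 3, 4)) -> float:
    """Fraction of n-gram comparisons suggesting a boundary between seq[k-1] and seq[k]:
    n-grams entirely to the left or right of k should be more frequent than
    n-grams that straddle the candidate boundary."""
    votes, total = 0, 0
    for n in n_values:
        left = seq[k - n:k]    # n-gram ending just before the boundary
        right = seq[k:k + n]   # n-gram starting at the boundary
        if len(left) < n or len(right) < n:
            continue  # too close to the string edge for this order
        # n-grams that straddle the candidate boundary
        straddling = [seq[k - j:k - j + n] for j in range(1, n)]
        for s in straddling:
            for side in (left, right):
                total += 1
                if counts[side] > counts[s]:
                    votes += 1
    return votes / total if total else 0.0

def segment(seq: str, counts: Counter, threshold: float = 0.5) -> list[str]:
    """Cut the sequence wherever the boundary evidence exceeds the threshold."""
    cuts = [0] + [k for k in range(1, len(seq))
                  if boundary_evidence(seq, k, counts) > threshold] + [len(seq)]
    return [seq[a:b] for a, b in zip(cuts, cuts[1:])]

# Hypothetical usage: build counts from a raw corpus, then segment a kanji sequence.
# counts = ngram_counts(open("raw_corpus.txt", encoding="utf-8").read())
# print(segment("日本語形態素解析", counts))
```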