

Poster in Workshop: Tokenization Workshop (TokShop)

Conditional Unigram Tokenization with Parallel Data

Gianluca Vico · Jindřich Libovický

Keywords: [ language modeling ] [ tokenization ] [ natural language processing ] [ machine translation ]

Fri 18 Jul 1:50 p.m. PDT — 3 p.m. PDT

Abstract:

We introduce conditional unigram tokenization, a novel approach that extends unigram tokenization by conditioning target token probabilities on source-language tokens from parallel data. Given a fixed source tokenizer, our method learns a target tokenizer that maximizes cross-lingual semantic alignment. We evaluate our tokenizer on four language pairs across different families and resource levels, examining intrinsic properties and downstream performance on machine translation and language modeling. While our conditional tokenizer maintains comparable statistical properties to standard unigram tokenizers, results are mixed: we observe no improvements in machine translation quality, but find consistent perplexity reductions in language modeling. We hypothesize that the quadratic scaling of conditional probability estimation with respect to the vocabulary size creates a data efficiency bottleneck. Our findings suggest that alternative parameterizations may be necessary for practical cross-lingual tokenization.
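
The abstract describes the method but not its estimation step. The sketch below is a minimal, hypothetical reading of how conditional target-token probabilities P(t | s) could be estimated from sentence-aligned parallel data; it is not the authors' implementation. The uniform soft alignment, the substring candidate set, and all names (`estimate_conditional_probs`, `substring_candidates`) are illustrative assumptions. The size of the co-occurrence table it builds makes the quadratic vocabulary scaling mentioned above concrete.

```python
from collections import defaultdict
from itertools import product


def substring_candidates(sentence, max_len=8):
    """Enumerate all substrings up to max_len characters, the usual
    seed vocabulary for unigram tokenizer training."""
    return [sentence[i:j]
            for i in range(len(sentence))
            for j in range(i + 1, min(i + max_len, len(sentence)) + 1)]


def estimate_conditional_probs(parallel_corpus, source_tokenize, target_candidates):
    """Estimate P(target_token | source_token) from sentence-aligned pairs.

    parallel_corpus   -- iterable of (source_sentence, target_sentence)
    source_tokenize   -- the fixed source tokenizer (sentence -> tokens)
    target_candidates -- proposes candidate target tokens per sentence
    """
    cooc = defaultdict(float)        # (src_token, tgt_token) -> soft count
    src_totals = defaultdict(float)  # src_token -> total soft count

    for src_sent, tgt_sent in parallel_corpus:
        src_tokens = source_tokenize(src_sent)
        tgt_tokens = target_candidates(tgt_sent)
        if not src_tokens or not tgt_tokens:
            continue
        # Uniform soft alignment: every source token is paired with every
        # candidate target token from the same sentence pair. The resulting
        # table can grow to |V_src| * |V_tgt| entries -- the quadratic
        # scaling the abstract identifies as a data-efficiency bottleneck.
        weight = 1.0 / (len(src_tokens) * len(tgt_tokens))
        for s, t in product(src_tokens, tgt_tokens):
            cooc[(s, t)] += weight
            src_totals[s] += weight

    return {(s, t): c / src_totals[s] for (s, t), c in cooc.items()}


# Toy usage: whitespace source tokenizer, substring target candidates.
corpus = [("ein Haus", "a house"), ("ein Hund", "a dog")]
probs = estimate_conditional_probs(corpus, str.split, substring_candidates)
print(probs[("Haus", "house")])
```

Under this reading, the conditional table replaces the single unigram probability vector of standard unigram training, which is what makes its parameter count grow with the product of the two vocabulary sizes rather than with one of them.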
