Does tokenizing numbers into single digits outperform three-digit or BPE tokenization for arithmetic tasks? We explore various tokenization methods in our upcoming blog (releasing next week 👀)!
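For a concrete picture of what these schemes do to a number, here’s a minimal pure-Python sketch (illustrative only, not the actual tokenizers we benchmark in the report):

```python
import re

number = "1234567"

# Single-digit tokenization: every digit becomes its own token
single_digit = list(number)                   # ['1', '2', '3', '4', '5', '6', '7']

# Three-digit tokenization (left-to-right chunking)
three_digit = re.findall(r"\d{1,3}", number)  # ['123', '456', '7']

# A BPE tokenizer instead learns merges from data, so the same number can be
# split into arbitrary, frequency-driven chunks that ignore place value.
print(single_digit, three_digit)
```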
🔹 Bringing objectivity to comparisons
Existing comparisons of number tokenization methods often ignore differences in the models’ compute budgets: a larger tokenizer vocabulary means a larger embedding matrix and therefore more parameters, so the bigger models can simply do more “learning”, making performance comparisons less objective.
We addressed this by keeping architectures consistent but adjusting the number of hidden layers to produce roughly equal parameter counts.
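As a rough back-of-the-envelope (with made-up sizes, not our actual configs), here’s why a bigger vocabulary inflates the parameter count and how trimming layers compensates:

```python
# Back-of-the-envelope with made-up sizes (not our actual configs):
# embedding parameters grow with vocab size, so a bigger number vocabulary
# is offset by dropping hidden layers until the totals roughly match.
d_model = 2048
params_per_layer = 12 * d_model ** 2              # rough estimate for one transformer block

def total_params(vocab_size: int, n_layers: int) -> int:
    embedding = vocab_size * d_model              # input embedding matrix
    return embedding + n_layers * params_per_layer

print(f"{total_params(32_000, 24) / 1e6:.0f}M")   # smaller vocab, 24 layers -> ~1273M
print(f"{total_params(48_000, 23) / 1e6:.0f}M")   # bigger vocab,  23 layers -> ~1256M
```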
🔹 Key results
We trained models on the same data mix and evaluated them on a range of arithmetic tasks (varying the number of digits, the operations, and floats vs. ints):
- When splitting evals by operator, single-digit tokenization consistently outperformed the other methods.
- Right-to-left tokenization (which I covered in a previous post) matched or exceeded left-to-right approaches in all tasks.
All in all, single-digit tokenization comes out on top, and, echoing our previous post’s finding, R2L works better than L2R tokenization (quick illustration below), though that gap is smaller than the one between single-digit and the rest!
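In case you missed that earlier post, here’s a minimal sketch of the difference between L2R and R2L three-digit chunking (illustrative only):

```python
import re

number = "1234567"

# Left-to-right: chunk from the front, so the last group is the ragged one
l2r = re.findall(r"\d{1,3}", number)                                 # ['123', '456', '7']

# Right-to-left: chunk from the back, so groups line up with place value
# (the way we'd write 1,234,567)
r2l = [c[::-1] for c in re.findall(r"\d{1,3}", number[::-1])][::-1]  # ['1', '234', '567']
print(l2r, r2l)
```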
The wait is almost over 🤗, the full report is coming next week - stay tuned!