An extension experiment to the Cohere paper "One Tokenizer to Rule Them All": a training run for the universal tokenizer finetune. You can support my work here: