mmBERT: The 3-Trillion-Token Encoder Outperforming XLM-R in Multilingual NLP


Meet mmBERT: The 3-Trillion-Token Encoder That Overtakes XLM-R After Six Years

In one sentence: Johns Hopkins' 307M-parameter mmBERT trains on 3T tokens across 1,833 languages, needs only 100B tokens in its final stage to "grow" roughly 1,700 low-resource languages, and still runs 2–4× faster than XLM-R while beating it on every benchmark that matters.

What this article answers in plain English

Why was a new multilingual encoder overdue?
How does "annealed language learning" squeeze 1,833 languages into the last training stage? (A toy sketch of the idea follows below.)
What tricks (inverse masking, model merging, FlashAttention2) make mmBERT both faster and stronger?
How …
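Before diving in, here is a minimal sketch of what "annealed language learning" and the inverse masking schedule amount to in practice. The phase names, token counts, mask rates, temperatures, and corpus sizes below are illustrative assumptions for this article, not the exact values used to train mmBERT: the point is only that lowering the sampling temperature flattens the language distribution (boosting low-resource languages late in training) while the mask rate shrinks as the model matures.

```python
import random

# Illustrative three-phase schedule (assumed values, not mmBERT's exact recipe):
# the mask rate shrinks and the language-sampling temperature cools over time,
# so the decay phase sees easier masking but far broader language coverage.
PHASES = [
    {"name": "pre-training", "tokens": 2.3e12, "mask_rate": 0.30, "temperature": 0.7, "n_languages": 60},
    {"name": "mid-training", "tokens": 6.0e11, "mask_rate": 0.15, "temperature": 0.5, "n_languages": 110},
    {"name": "decay",        "tokens": 1.0e11, "mask_rate": 0.05, "temperature": 0.3, "n_languages": 1833},
]


def language_sampling_probs(corpus_sizes: dict, temperature: float) -> dict:
    """Temperature-scaled multinomial sampling: p_i is proportional to size_i ** temperature.
    A lower temperature flattens the distribution, so low-resource languages
    are drawn more often than their raw corpus share would suggest."""
    scaled = {lang: size ** temperature for lang, size in corpus_sizes.items()}
    total = sum(scaled.values())
    return {lang: w / total for lang, w in scaled.items()}


def sample_batch_languages(corpus_sizes: dict, temperature: float, batch_size: int = 8, seed: int = 0):
    """Draw which language each sequence in a batch is taken from."""
    rng = random.Random(seed)
    probs = language_sampling_probs(corpus_sizes, temperature)
    langs, weights = zip(*probs.items())
    return rng.choices(langs, weights=weights, k=batch_size)


if __name__ == "__main__":
    # Toy corpus sizes (tokens) for a high-, mid-, and low-resource language (hypothetical).
    corpus = {"eng": 1e12, "swh": 1e9, "fao": 1e7}
    for phase in PHASES:
        probs = language_sampling_probs(corpus, phase["temperature"])
        print(f'{phase["name"]:>13}: mask_rate={phase["mask_rate"]:.2f}, '
              f'P(low-resource "fao")={probs["fao"]:.4f}')
```

Running the sketch shows the low-resource language's sampling probability rising phase by phase as the temperature drops, which is the intuition behind adding ~1,700 languages only in the final 100B-token stage.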