## Why RL for Large Language Models Keeps Crashing — and the 7 Engineering Tweaks That Finally Made a 30B MoE Stable After 300k GPU Hours

> What makes policy-gradient RL for LLMs explode, and how do we stop it?

Token-level objectives are only a first-order approximation of the true sequence reward. When the training-inference gap or policy staleness grows, the approximation breaks. Importance sampling, clipping, and Routing Replay keep both gaps small and training stable.

### 0. One-glance cheat-sheet

| Scenario | Must-have knobs | Typical failure signal | Proven combo in paper |
| --- | --- | --- | --- |
| Pure on-policy (N=1) | Importance sampling (IS) | KL(μ‖π) ↑, entropy ↓ | MiniRL w/ … |
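To make the excerpt's claim concrete, here is a minimal sketch of a token-level clipped importance-sampling surrogate in the PPO style. It is not the paper's exact objective; the function name, tensor arguments, and the `eps` clipping range are illustrative assumptions.

```python
import torch

def clipped_is_loss(logp_new, logp_old, advantages, eps=0.2):
    """PPO-style clipped importance-sampling surrogate, per token (illustrative sketch).

    logp_new:   log-probs of sampled tokens under the current policy pi
    logp_old:   log-probs of the same tokens under the behaviour policy mu
                (e.g. the inference engine that generated the rollout)
    advantages: per-token (or broadcast per-sequence) advantage estimates
    eps:        clipping range; bounds how far the ratio pi/mu can push an update
    """
    ratio = torch.exp(logp_new - logp_old)                    # importance weight pi/mu
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * advantages
    # Elementwise minimum makes the objective pessimistic: stale or mismatched
    # policies (large ratios) cannot inflate the estimated gain.
    return -torch.min(unclipped, clipped).mean()
```

The ratio corrects for the training-inference gap, while the clip keeps a stale behaviour policy from dominating the gradient; both mechanisms correspond to the "two gaps" the excerpt describes.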
## From GPT-2 to Kimi 2: A Visual Guide to 2025's Leading Large Language Model Architectures

If you already use large language models but still get lost in technical jargon, this post is for you. In one long read you'll learn:

- Why DeepSeek-V3's 671 B parameters run cheaper than Llama 3's 405 B
- How sliding-window attention lets a 27 B model run on a Mac Mini
- Which open-weight model to download for your next side project

### Table of Contents

1. Seven Years of the Same Backbone—What Actually Changed?
2. DeepSeek-V3 / R1: MLA + MoE, the Memory-Saving Duo
3. OLMo 2: Moving RMSNorm One …
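As a taste of the sliding-window point in the list above, the memory saving comes from restricting each token to attend only to its last `W` predecessors, so the KV cache holds at most `W` entries per layer rather than the full context. A minimal mask sketch, with the window size chosen arbitrarily for illustration:

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean attention mask: query position i may attend to key positions j
    with i - window < j <= i (causal + sliding window).

    With this mask the KV cache only ever needs the last `window` keys/values
    per layer, so attention memory scales with `window`, not `seq_len`.
    """
    i = torch.arange(seq_len).unsqueeze(1)   # query positions, shape (L, 1)
    j = torch.arange(seq_len).unsqueeze(0)   # key positions,   shape (1, L)
    return (j <= i) & (j > i - window)

# Example: 8 tokens, window of 4 — each row has at most 4 allowed positions.
print(sliding_window_mask(8, 4).int())
```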