Unmasking AI Distillation Attacks: The Industrial-Scale Theft of Frontier Models

Core Question Answered: What exactly are "distillation attacks" on large language models, why do they pose a critical national security threat beyond mere intellectual property theft, and how can AI laboratories defend against this covert, industrial-scale capability extraction?

As the race for Artificial General Intelligence accelerates, competition among frontier AI laboratories has intensified. Behind the impressive benchmark scores and public releases, however, a silent war of "capability extraction" is underway. Recent security investigations have identified three industrial-scale "distillation attack" campaigns, revealing how certain AI labs use fraudulent tactics to …
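Before going further, it helps to see the core mechanic a distillation attack industrializes: systematically querying a "teacher" (the victim frontier model) and saving its outputs as supervised training data for a cheaper "student" model. The sketch below is a minimal illustration under stated assumptions; `query_teacher` is a hypothetical stand-in for a frontier model's paid API, not any vendor's actual interface.

```python
def query_teacher(prompt: str) -> str:
    """Hypothetical stand-in for a call to a frontier model's API.

    In a real attack this would be a paid API call, often routed through
    many accounts to evade rate limits and usage auditing.
    """
    return f"teacher answer to: {prompt}"


def harvest_dataset(prompts: list[str]) -> list[tuple[str, str]]:
    """Collect (prompt, response) pairs at scale.

    Each pair later becomes one supervised fine-tuning example
    for the attacker's own "student" model.
    """
    return [(p, query_teacher(p)) for p in prompts]


# A real campaign uses millions of prompts; three suffice to show the shape.
prompts = ["Explain RSA.", "Summarize attention.", "Write quicksort."]
dataset = harvest_dataset(prompts)
```

The "industrial-scale" qualifier in the investigations refers to running this loop across huge prompt sets and many fraudulent accounts, which is what distinguishes a distillation attack from ordinary, terms-of-service-compliant API use.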