Uni-MoE-2.0-Omni: The Open-Source MoE Model Mastering Text, Images, Audio & Video

高效码农

Core question: Is there a single open-source large model that can both understand and generate text, images, speech, and video without stacking multiple pipelines?

One-sentence answer: Uni-MoE-2.0-Omni uses a dynamic-capacity Mixture-of-Experts (MoE) architecture built on Qwen2.5-7B, trained with 75B multimodal tokens, to deliver state-of-the-art performance on 85 benchmarks while keeping all code and weights publicly available.

Quick Scan (30 seconds)

| What you get | Why it matters |
| --- | --- |
| Unified tokenizer for audio, image, video, text | One sequence → one forward pass → no external fusion |
| Dynamic MoE layer | … |
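To make the "dynamic MoE layer" idea concrete, here is a minimal sketch of top-k expert routing in plain NumPy. This is an illustration of the general MoE mechanism only, not the actual Uni-MoE-2.0-Omni implementation; the function name, shapes, and toy experts are all assumptions made for the example.

```python
import numpy as np

def top_k_moe(x, expert_weights, gate_weights, k=2):
    """Illustrative top-k MoE routing (not the real Uni-MoE-2.0 code).

    x              : (d,) token embedding
    expert_weights : list of (d, d) matrices, one per expert
    gate_weights   : (num_experts, d) gating matrix
    """
    logits = gate_weights @ x                 # score every expert for this token
    top = np.argsort(logits)[-k:]             # indices of the k highest-scoring experts
    exp = np.exp(logits[top] - logits[top].max())
    probs = exp / exp.sum()                   # softmax over the selected experts only
    # run just the chosen experts and mix their outputs by gate probability
    return sum(p * (expert_weights[i] @ x) for p, i in zip(probs, top))

# toy demo: 3 experts that simply scale the input by 1x, 2x, 3x
experts = [np.eye(4) * c for c in (1.0, 2.0, 3.0)]
gates = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.],
                  [0., 0., 1., 0.]])
x = np.array([0., 1., 2., 0.])
out = top_k_moe(x, experts, gates, k=2)       # routes to experts 1 and 2
```

With k=2 only two of the three experts run per token, which is the core efficiency argument for MoE: capacity grows with the number of experts while per-token compute stays roughly constant.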