In one sentence: the cheapest, fastest, and most dialect-rich Chinese text-to-speech engine you can actually use in production today.

After reading you will be able to:

① make a Beijing uncle read today's hot news in 3 lines of code;
② batch-produce 1,000 short-video voice-overs in 17 different timbres overnight;
③ keep first-packet latency under 100 ms for live streaming.

## 0. Try Before You Read: A 30-Second Blind Test

I fed the same 60-word latte copy to GPT-4o-Audio, MiniMax, and Qwen3-TTS-Flash. Twenty volunteers guessed which sounded most human:

| Engine | Votes for "Most Natural" | Ear-note |
| --- | --- | --- |
| Qwen3-TTS-Flash | 14 | Smooth erhua, breathing feels real |

…
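The overnight batch job in ② boils down to round-robin assignment of scripts over a voice pool. A minimal sketch, assuming a stand-in `synthesize` function in place of the real Qwen3-TTS-Flash API call (the function name and `voice_NN` IDs are illustrative, not the engine's actual identifiers):

```python
from itertools import cycle

# 17 illustrative timbre IDs -- real voice names depend on the TTS provider.
VOICES = [f"voice_{i:02d}" for i in range(17)]

def synthesize(text: str, voice: str) -> bytes:
    """Stand-in for the real TTS call (e.g., an HTTP request to the engine)."""
    return f"[{voice}] {text}".encode("utf-8")

def batch_voiceovers(scripts: list[str]) -> list[tuple[str, bytes]]:
    """Round-robin the scripts over the voice pool so all 17 timbres get used."""
    jobs = []
    for script, voice in zip(scripts, cycle(VOICES)):
        jobs.append((voice, synthesize(script, voice)))
    return jobs

results = batch_voiceovers([f"Short-video script #{n}" for n in range(1000)])
print(len(results), results[0][0])  # 1000 jobs; voices repeat every 17 scripts
```

In a real run you would replace `synthesize` with the provider's SDK call and fan the jobs out with a thread pool or task queue; the voice-assignment logic stays the same.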
## Author / Team / Institution

- **Authors:** Yixuan Zhou, Guoyang Zeng, Xin Liu, Xiang Li, Renjie Yu, Ziyang Wang, Runchuan Ye, Weiyue Sun, Jiancheng Gui, Kehan Li, Zhiyong Wu, Zhiyong Liu.
- **Team/Institution:** Developed by ModelBest and THUHCSI, under the OpenBMB project.
- **Role:** Researchers and developers in text-to-speech systems.
- **Authority Backing:** The model is open-sourced under the Apache-2.0 license, with acknowledgments to foundational works such as DiTAR, MiniCPM-4, CosyVoice, and DAC. No external peer reviews or third-party reports are provided in the input files.

## Abstract

VoxCPM represents a shift in text-to-speech (TTS) technology by eliminating discrete tokenization and operating directly in continuous speech space. …
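The tokenization-free claim is easiest to see with a toy contrast: a discrete pipeline snaps each acoustic frame to its nearest codebook entry, losing detail, while a continuous pipeline passes the raw vector through unchanged. This is a conceptual sketch only, not VoxCPM's actual architecture; the codebook and frame values are made up:

```python
# Toy contrast: discrete-token pipeline vs. continuous-space pipeline.
# Conceptual illustration only -- not VoxCPM's actual model.

CODEBOOK = [0.0, 0.25, 0.5, 0.75, 1.0]  # tiny 5-entry "speech token" codebook

def quantize(frame: float) -> float:
    """Discrete route: snap the frame to the nearest codebook entry."""
    return min(CODEBOOK, key=lambda c: abs(c - frame))

frames = [0.12, 0.61, 0.88]               # continuous acoustic features
discrete = [quantize(f) for f in frames]  # information is lost at this step
continuous = frames                       # tokenization-free: passed through as-is

error = sum(abs(d - f) for d, f in zip(discrete, frames))
print(discrete, round(error, 2))
```

The accumulated quantization error is exactly what a continuous-space model avoids paying; the trade-off is that modeling raw vectors is harder than predicting indices from a finite vocabulary.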
# Marvis: The New Era of Real-Time Voice Cloning and Streaming Speech Synthesis

## Introduction

In today's rapidly evolving artificial-intelligence landscape, speech-synthesis technology is transforming how we interact with machines at an unprecedented pace. From virtual assistants to content creation and accessibility services, high-quality speech synthesis plays an increasingly vital role. However, traditional voice-cloning models often require extensive audio samples and lack real-time streaming capabilities, limiting their adoption on mobile devices and in personal applications.

Marvis emerges as the solution to these challenges: a conversational speech model designed specifically to break through these limitations. …
# LLaMA-Omni2: Achieving Real-Time Speech Synthesis with a Low-Latency Modular Architecture

Researchers from the Institute of Computing Technology, Chinese Academy of Sciences, have unveiled LLaMA-Omni2, a speech-language model (SpeechLM) that enables seamless real-time voice interactions. By pairing a modular design with autoregressive streaming speech synthesis, the model generates text and speech in sync, with latency reduced to milliseconds. This article explores its technical innovations, performance benchmarks, and practical applications.

## Technical Architecture: How Modular Design Enables Real-Time Speech Generation

LLaMA-Omni2's architecture combines speech processing and language understanding through four core components:

### 1. Speech Encoder: Transforming Audio to Acoustic Tokens

Built on Whisper-large-v3, this …
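The modular encoder → LLM → streaming-synthesis flow can be sketched as a toy pipeline in which each small chunk of the LLM's reply is turned into audio immediately, instead of waiting for the full response. The component names, the echo LLM, and the two-word chunking rule below are illustrative assumptions, not LLaMA-Omni2's actual interfaces:

```python
from typing import Iterator

# Toy sketch of a modular streaming SpeechLM pipeline (illustrative names only).

def speech_encoder(audio: bytes) -> str:
    """Stand-in for a Whisper-style encoder: audio in, text-like features out."""
    return audio.decode("utf-8")

def llm_stream(prompt: str) -> Iterator[str]:
    """Stand-in LLM that yields its reply word by word (autoregressive style)."""
    for word in f"echo: {prompt}".split():
        yield word

def tts_stream(words: Iterator[str], chunk_size: int = 2) -> Iterator[bytes]:
    """Synthesize every `chunk_size` words, so audio starts before the reply ends."""
    buf: list[str] = []
    for w in words:
        buf.append(w)
        if len(buf) == chunk_size:
            yield " ".join(buf).encode("utf-8")  # stand-in for a vocoder call
            buf = []
    if buf:
        yield " ".join(buf).encode("utf-8")

# End-to-end: user speech -> text -> streamed reply -> streamed audio chunks.
text = speech_encoder(b"hello there model")
chunks = list(tts_stream(llm_stream(text)))
print(len(chunks), chunks[0])
```

The latency win comes from the generator chain: the first audio chunk is emitted as soon as the first `chunk_size` words exist, which is the essence of the streaming design described above.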