On-Device Language Models: How MiniCPM4 Achieves 128K Context AI on Mobile Devices

14 hours ago 高效码农

MiniCPM4: Run Powerful Language Models on Your Phone or Laptop

Achieve 128K context processing with 78% less training data, using 0.5B/8B parameter models optimized for edge devices.

Why We Need On-Device Language Models

While cloud-based AI models like ChatGPT dominate the landscape, edge devices (smartphones, laptops, IoT systems) have remained largely excluded due to computational constraints. Traditional large language models face three fundamental barriers (see the sketch below):

- Compute Overload: Processing a 128K context requires calculating all pairwise token relationships.
- Memory Constraints: Loading an 8B-parameter model demands ~32 GB of RAM.
- Training Costs: Standard models require 36 trillion training tokens.

The MiniCPM Team’s breakthrough solution, MiniCPM4, shatters these …
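To make the first two barriers concrete, here is a minimal back-of-the-envelope sketch (not from the article; the constants are the figures quoted above) showing why fp32 weights for an 8B-parameter model land at ~32 GB and why a 128K context is expensive for full self-attention:

```python
# Back-of-the-envelope sketch (assumed numbers from the excerpt above,
# not the MiniCPM Team's code) of the memory and compute barriers.

PARAMS = 8e9        # 8B parameters
BYTES_FP32 = 4      # bytes per fp32 weight
CONTEXT = 128_000   # 128K-token context

# Weight memory: 8e9 params x 4 bytes ~= 32 GB, matching the ~32 GB figure.
weight_gb = PARAMS * BYTES_FP32 / 1e9
print(f"fp32 weights: ~{weight_gb:.0f} GB")

# Full self-attention relates every token to every other token, so the
# number of pairwise relationships grows quadratically with context length.
token_pairs = CONTEXT ** 2
print(f"token pairs at 128K context: {token_pairs:.2e}")
```

Running this prints roughly "32 GB" and "1.64e+10" pairs, which is why quantization and sparse attention are the usual levers for edge deployment.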

IBM’s Bamba Model: Merging Transformers and SSMs to Break AI Efficiency Barriers

1 month ago 高效码农

The rise of large language models (LLMs) like ChatGPT has made the Transformer architecture a household name. Yet as conversations grow longer, Transformers face a critical roadblock: escalating latency and computational costs. To tackle this, IBM Research partnered with Carnegie Mellon University, Princeton University, and other leading institutions to launch Bamba, an open-source hybrid model that combines the expressive power of Transformers with the runtime efficiency of state-space models (SSMs). This breakthrough promises to redefine AI efficiency. Let’s dive into how Bamba works and why it matters.

The Transformer Dilemma: Why Long Conversations Slow Down AI

1.1 The Power of …
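The Transformer/SSM contrast at the heart of Bamba can be shown with a toy sketch. This is a minimal illustration under stated assumptions, not Bamba's actual implementation: `attention_scores` and the toy diagonal recurrence in `ssm_scan` are hypothetical helpers showing why attention cost grows quadratically with sequence length while an SSM carries a fixed-size state in one linear pass:

```python
import numpy as np

def attention_scores(x: np.ndarray) -> np.ndarray:
    """Full self-attention similarity: an (n, n) matrix, quadratic in n."""
    return x @ x.T

def ssm_scan(x: np.ndarray, a: float = 0.9, b: float = 0.1) -> np.ndarray:
    """Toy diagonal SSM: h_t = a*h_{t-1} + b*x_t (illustrative, not Bamba)."""
    h = np.zeros(x.shape[1])
    outputs = []
    for x_t in x:            # O(n): one step per token
        h = a * h + b * x_t  # hidden state stays fixed-size as n grows
        outputs.append(h.copy())
    return np.stack(outputs)

x = np.random.randn(16, 8)        # 16 tokens, 8 features
print(attention_scores(x).shape)  # (16, 16) -> cost scales with n**2
print(ssm_scan(x).shape)          # (16, 8)  -> cost scales with n
```

A hybrid like Bamba interleaves both kinds of layers, keeping attention's expressiveness where it matters while letting SSM layers handle long-range context at linear cost.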