Core Cognition Deficits in AI: 2025 Study Reveals Critical Gaps in Multi-Modal Language Models

15 hours ago 高效码农

Core Cognition Deficits in Multi-Modal Language Models: A 2025 Guide

TL;DR

- 2025 research shows that Multi-Modal Language Models (MLLMs) underperform humans on core cognition tasks.
- Top models such as GPT-4o show large gaps in low-level cognitive abilities (e.g., object permanence: humans at 88.80% accuracy vs. GPT-4o at 57.14%).
- Models exhibit a "reversed cognitive development trajectory," excelling at advanced tasks while struggling with basic ones.
- Scaling model parameters improves high-level performance but barely affects low-level abilities.
- "Concept Hacking" validation found that 73% of models rely on shortcut learning, producing a cognitive-illusion effect. For example, in a perspective-taking task, one large commercial model scored 76% on the control task but dropped sharply to 28% on the manipulated task.

Understanding Core Cognition Assessment

Assessing core cognition in MLLMs requires a systematic approach. The CoreCognition benchmark evaluates 12 key abilities across different cognitive stages: Sensory-Motor …
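The "Concept Hacking" check described above can be sketched in a few lines: score a model on a matched pair of control and manipulated task variants, and flag it when accuracy collapses once the shortcut cue is removed. This is a minimal illustrative sketch, not the CoreCognition implementation; the function names and the 25-point threshold are assumptions, while the 76% / 28% figures come from the article.

```python
def shortcut_gap(control_acc: float, manipulated_acc: float) -> float:
    """Accuracy drop between the control and manipulated task variants (0..1 scale)."""
    return control_acc - manipulated_acc


def relies_on_shortcuts(control_acc: float, manipulated_acc: float,
                        threshold: float = 0.25) -> bool:
    """Flag a model whose accuracy collapses once the shortcut cue is removed.

    The 0.25 threshold is an illustrative assumption, not a value from the study.
    """
    return shortcut_gap(control_acc, manipulated_acc) > threshold


# Perspective-taking numbers from the article: 76% control vs. 28% manipulated.
print(round(shortcut_gap(0.76, 0.28), 2))   # 0.48
print(relies_on_shortcuts(0.76, 0.28))      # True
```

A genuine grasp of the underlying concept predicts similar accuracy on both variants, so a large gap is evidence of shortcut learning rather than real capability.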