SpatialTree: How Spatial Abilities Hierarchically Develop in Multimodal LLMs

Have you ever wondered how AI perceives the size of objects, judges distances, or predicts movement when looking at an image? In cognitive science, human spatial ability develops progressively, from basic perception through complex reasoning to real-world interaction. Yet for multimodal large language models (MLLMs), this hierarchical structure has long been poorly understood, with most research focusing on isolated tasks rather than the bigger picture. Today, we'll explore SpatialTree, a cognitive-science-inspired framework that organizes AI's spatial abilities into four distinct layers. It also introduces the first capability-centric hierarchical benchmark, allowing us to …
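To make "capability-centric hierarchical benchmark" concrete, here is a minimal sketch of how per-task scores could roll up into per-layer scores in such a framework. The four layer names, the task names, and the accuracies below are all hypothetical placeholders for illustration, not SpatialTree's actual taxonomy or results.

```python
from statistics import mean

# Hypothetical four-layer capability taxonomy. Layer and task names are
# illustrative placeholders only; the real SpatialTree layers may differ.
HIERARCHY = {
    "perception":    ["object_size", "relative_position"],
    "understanding": ["depth_ordering", "occlusion"],
    "reasoning":     ["distance_estimation", "motion_prediction"],
    "interaction":   ["navigation", "path_planning"],
}

def layer_scores(task_accuracy: dict[str, float]) -> dict[str, float]:
    """Aggregate per-task accuracies into one score per capability layer."""
    return {
        layer: mean(task_accuracy[task] for task in tasks)
        for layer, tasks in HIERARCHY.items()
    }

# Dummy accuracies purely to demonstrate the aggregation; not real results.
demo = {
    "object_size": 0.92, "relative_position": 0.88,
    "depth_ordering": 0.75, "occlusion": 0.71,
    "distance_estimation": 0.54, "motion_prediction": 0.49,
    "navigation": 0.33, "path_planning": 0.28,
}
print(layer_scores(demo))
```

The point of grouping tasks this way is that a monotone drop in score from lower to higher layers would support the claim that higher-level spatial abilities build on lower-level ones, which is exactly what a capability-centric (rather than task-centric) benchmark is designed to reveal.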
Ollama Launches New Multimodal Engine: Redefining the Boundaries of AI Cognition

[Image: Ollama Multimodal Engine Visualization]

Introduction: When AI Learns to “See” and “Think”

The AI field is undergoing a quiet revolution. Following breakthroughs in text processing, next-generation systems are breaking free from single-modality constraints. Ollama, a pioneer in open-source AI deployment, has unveiled its new multimodal engine, systematically integrating visual understanding and spatial reasoning into locally deployed AI. This leap not only enables machines to “see” images but also marks a crucial step toward comprehensive cognitive systems.

I. Practical Analysis of Multimodal Models

1.1 Geospatial Intelligence: Meta Llama 4 in …
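To ground "integrating visual understanding into locally deployed AI" in practice, here is a minimal sketch of sending an image to a locally running Ollama server through its documented /api/generate endpoint. It assumes Ollama is serving on the default port 11434 and that a vision-capable model has already been pulled; the model name "llava", the file "scene.png", and the prompt are illustrative placeholders.

```python
import base64
import requests

# Ollama's /api/generate endpoint accepts images as base64-encoded strings
# alongside a text prompt. "scene.png" is a placeholder path.
with open("scene.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llava",  # any vision-capable model you have pulled
        "prompt": "Describe the spatial layout of the objects in this image.",
        "images": [image_b64],  # list of base64-encoded images
        "stream": False,        # ask for a single JSON response
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the model's text answer
```

Because everything runs against localhost, the image never leaves the machine, which is the practical appeal of a local multimodal engine over a hosted vision API.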