MiroFish: A Simple, Universal Swarm Intelligence Engine That Lets You Simulate Almost Anything
Meta Description / Featured Snippet Candidate (50–80 words)
MiroFish is an open-source multi-agent AI prediction engine (v0.1.0) that turns real-world seed data—news, policy drafts, novels, financial signals—into a high-fidelity digital parallel world. Thousands of autonomous agents with personalities, long-term memory, and realistic behavior interact freely, generating emergent group dynamics. Users inject variables from a “god view,” run simulations, and receive structured prediction reports plus an interactive digital society. Built on the OASIS framework; runs best on macOS with the qwen-plus LLM.
Have you ever wanted to see how a breaking news story might unfold over months, how different choices could change a novel’s ending, or how a new policy might ripple through public opinion—without real-world risk?
Traditional forecasting tools (statistics, single large language models, spreadsheets) struggle with open-ended social dynamics, long-term emergence, and cascading human-like decisions. MiroFish takes a radically different approach: it actually simulates thousands of digital people living out the scenario in a shared world.
What Exactly Does MiroFish Do?
You provide:
- A “seed” — any piece of real-world or fictional material (a news article, research report, story excerpt, policy text, earnings call transcript…)
- A natural-language question — “What happens to public sentiment in three months?” “How does the story end if the protagonist chooses path B?” “How might retail investors and institutions react next week?”
MiroFish automatically:
- Extracts entities, relationships, events, and context → builds a knowledge graph (GraphRAG style)
- Generates hundreds to thousands of digital agents, each with a distinct personality, background, attitudes, memory, and decision logic
- Launches a parallel simulation where agents talk, argue, influence each other, form opinions, and change behaviors — exactly like a miniature society
- Observes key metrics over simulated time
- Produces a detailed, structured prediction report
- Lets you chat with any agent or the dedicated ReportAgent for deeper explanation or “what-if” follow-ups
In short: instead of one model guessing what might happen, MiroFish lets a whole synthetic society live through it and shows you the collective outcome.
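The seed-to-agents flow above can be sketched in a few lines of plain Python. This is a conceptual illustration only, not MiroFish’s actual API: every name and helper here is hypothetical, and the real system uses LLM-driven extraction and persona generation rather than these toy stand-ins.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the seed -> agents flow. NOT MiroFish's real API;
# all names here are illustrative stand-ins.

@dataclass
class Agent:
    name: str
    persona: str                                 # distinct personality / background
    memory: list = field(default_factory=list)   # long-term memory entries
    opinion: float = 0.0                         # -1 (negative) .. +1 (positive)

def extract_entities(seed: str) -> list[str]:
    """Stage-1 stand-in: a real system builds a full knowledge graph."""
    return [w.strip(".,") for w in seed.split() if w[0].isupper()]

def spawn_agents(entities: list[str], n: int) -> list[Agent]:
    """Stage-2 stand-in: generate n personas anchored to extracted entities."""
    return [
        Agent(name=f"agent_{i}", persona=f"cares about {entities[i % len(entities)]}")
        for i in range(n)
    ]

seed = "Midtown University announces a controversial Tuition Reform Plan."
agents = spawn_agents(extract_entities(seed), n=5)
print(len(agents), agents[0].persona)  # → 5 cares about Midtown
```

The point of the sketch: the seed determines *what* the agents care about, while the agent count and persona diversity are independent knobs you turn per run.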
Core Workflow – Five Main Stages
| Stage | What Happens | Key Technologies / Components |
|---|---|---|
| 1. Knowledge Graph Construction | Parse seed → extract entities/relations/events → inject persistent memory | GraphRAG + Zep long-term memory |
| 2. Digital World Setup | Refine relations → batch-generate agent personas → set environment rules | Environment config agent + persona generator |
| 3. Parallel Simulation | Agents act autonomously → social interactions → emergent behavior | Dual-platform sim (powered by OASIS framework) |
| 4. Report Generation | ReportAgent analyzes simulation trace → synthesizes insights & visuals | Tool-equipped ReportAgent |
| 5. Interactive Exploration | Chat with agents or ReportAgent → probe deeper or change variables | Real-time dialogue interface |
This pipeline is what makes MiroFish different from pure chat-based role-play or small-scale agent demos—it is explicitly designed for longer-horizon, population-level emergence.
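To see why Stage 3 produces population-level emergence rather than one scripted storyline, a classic bounded-confidence opinion model is a useful toy analogy: each agent repeatedly samples a peer and, if their views are close enough, moves toward that peer. This is purely illustrative — MiroFish’s agents are LLM-driven, not numeric — but the mechanism (local interactions producing global clusters nobody planned) is the same idea.

```python
import random

def simulate(opinions: list[float], rounds: int, tolerance: float = 0.4,
             rate: float = 0.3, seed: int = 0) -> list[float]:
    """Bounded-confidence toy model: agents drift toward like-minded peers."""
    rng = random.Random(seed)
    ops = list(opinions)
    for _ in range(rounds):
        i, j = rng.randrange(len(ops)), rng.randrange(len(ops))
        # Agents only move toward peers whose views are "close enough":
        if i != j and abs(ops[i] - ops[j]) < tolerance:
            ops[i] += rate * (ops[j] - ops[i])
    return ops

random.seed(1)
start = [random.uniform(-1, 1) for _ in range(50)]
end = simulate(start, rounds=2000)
# Opinion clusters form: the spread shrinks even though no single agent
# (and no central controller) ever decided that it should.
print(max(start) - min(start), ">=", max(end) - min(end))
```

Each update is a convex combination of two existing opinions, so the overall spread can only shrink; which clusters survive depends on the random interaction history, which is exactly the kind of outcome a single forward-predicting model cannot enumerate.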
System Requirements & Quick-Start Guide (v0.1.0 – January 2026)
Officially developed & tested on macOS. Windows compatibility is under investigation; Linux support is not yet documented.
| Component | Requirement | Check Command | Notes |
|---|---|---|---|
| Operating System | macOS (primary) | — | Windows = experimental |
| Node.js | ≥ 18 | `node -v` | Needed for frontend & build tools |
| Python | 3.11.x – 3.12.x | `python --version` | Avoid 3.10 or 3.13 |
| Package Manager | uv (preferred) or pip | `uv --version` | uv creates a virtual env automatically |
| RAM (practical) | ≥ 32 GB recommended for 1,000+ agents | — | 16 GB possible for small experiments |
| LLM Cost | Token-based; can be significant | — | ~40 simulation rounds = moderate cost |
5-Minute Setup
1. Clone the repo

```shell
git clone https://github.com/666ghj/MiroFish.git
cd MiroFish
```

2. Create and fill `.env`

```shell
cp .env.example .env
```

Must-have variables (example using Alibaba Bailian qwen-plus):

```
LLM_API_KEY=sk-xxxxxxxxxxxxxxxxxxxx
LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
LLM_MODEL_NAME=qwen-plus
ZEP_API_KEY=your_zep_cloud_key_here   # optional but strongly recommended
```

3. Install everything

```shell
npm run setup:all   # one-command full install
```

Or step-by-step:

```shell
npm run setup
npm run setup:backend
```

4. Launch

```shell
npm run dev   # frontend + backend together
```

Then open http://localhost:3000.
What Real Users Are Already Exploring
Even at v0.1.0, several realistic scenarios have been demonstrated:
- Public-opinion evolution — feed in a university-related controversy report → predict the sentiment trajectory over 90 days
- Literary “what-if” branches — load the first 80 chapters of Dream of the Red Chamber → simulate alternate endings
- Financial sentiment simulation — input breaking negative news + analyst reports → forecast retail/institutional/media reactions
- Policy impact preview — upload draft regulation text → observe acceptance & behavioral shifts across demographics
Project Vision & Realistic Limitations
Vision (straight from the README):

- Macro: a zero-risk rehearsal space for policymakers, PR teams, and strategists
- Micro: a fun, creative sandbox for writers, hobbyists, and thought-experiment lovers
Current realistic boundaries (Jan 2026):
- Agent scale: 500–2,000 agents run comfortably; beyond that, memory use and token cost rise sharply
- Time horizon: validated up to several months; multi-year sims are untested
- Best model: qwen-plus (the cost/performance sweet spot); other OpenAI-compatible LLMs usually work
- Platform: Mac is rock-solid; Windows = proceed with caution
- Memory: the Zep Cloud free tier covers light usage; you can skip Zep for short, playful runs (but coherence suffers)
Frequently Asked Questions
How is this different from just asking a big LLM “what would happen if…”?
Single LLMs produce coherent but usually linear, “authoritative” narratives. MiroFish runs many independent minds that disagree, persuade, over-react, form coalitions—creating genuine surprises and crowd-level patterns that are very hard to fake with one model.
Rough cost for one run?
Depends heavily on agent count & rounds. Example: 800–1,200 agents × 30–50 rounds on qwen-plus ≈ 5.
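A rough way to budget a run yourself is tokens = agents × rounds × tokens-per-turn, times your provider’s per-token price. The numbers below are placeholder assumptions for illustration only — the per-turn token count and the blended rate are not MiroFish figures, so substitute your own provider’s real pricing.

```python
# Back-of-envelope cost model for one simulation run.
# Every default below is an ASSUMED placeholder, not a MiroFish number.

def estimate_cost(agents: int, rounds: int,
                  tokens_per_agent_turn: int = 600,      # prompt + completion (assumed)
                  price_per_1k_tokens: float = 0.0005):  # assumed blended rate
    total_tokens = agents * rounds * tokens_per_agent_turn
    return total_tokens, total_tokens / 1000 * price_per_1k_tokens

tokens, cost = estimate_cost(agents=1000, rounds=40)
print(f"{tokens:,} tokens -> {cost:.2f} (in your provider's currency)")
```

Note the multiplication: doubling either the agent count or the round count doubles the bill, which is why the limitations section caps practical runs at a few thousand agents.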
Can I run everything locally / offline?
Not yet out-of-the-box. You need an OpenAI-compatible LLM endpoint. Local models via Ollama, vLLM, LM Studio etc. should work if they expose a compatible API—but expect slower speed & possibly lower coherence.
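For example, Ollama serves an OpenAI-compatible API at `http://localhost:11434/v1`, so a plausible `.env` for a local experiment might look like the fragment below. The model name is just an example, and whether MiroFish validates the key field is an assumption here; most OpenAI-compatible clients simply require it to be non-empty.

```
LLM_API_KEY=ollama                      # Ollama ignores the value; most clients just need it set
LLM_BASE_URL=http://localhost:11434/v1  # Ollama's OpenAI-compatible endpoint
LLM_MODEL_NAME=llama3                   # example; use any model you have pulled locally
```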
How believable are the results?
Think of MiroFish as an advanced “conditional world generator” rather than a crystal ball. Its strength lies in showing plausible pathways, unexpected emergent behaviors, and second-order effects—not in giving precise probabilities.
Who should try it right now?
- Multi-agent / swarm-intelligence enthusiasts who want a full, runnable system
- Fiction writers testing plot branches at population scale
- Analysts mapping long-tail opinion dynamics
- Anyone who loves asking “what if everyone behaved like this…?”
MiroFish remains early-stage (v0.1.0 released late 2025, still actively developed). The community is small but growing, with an open QQ group for Chinese-speaking users and email contact (mirofish@shanda.com) for collaboration or internship opportunities.
If you’ve ever wanted to watch the future—or at least one very detailed, agent-driven version of it—unfold inside your computer before you act in reality, this might be the most direct open-source tool available today.
