OpenClaw 2026.4.20 Release: Smarter, Safer, and More Stable AI Agent
Have you ever experienced laggy conversations, messed-up cost tracking, or broken features when connecting an AI agent to platforms like WhatsApp, Telegram, or Discord? If you’re looking for an open-source, self-hosted solution, OpenClaw is worth your attention.
On April 20, 2026, OpenClaw released version v2026.4.20. This release doesn’t introduce flashy new features. Instead, it fixes a batch of long-standing pain points: session management, cost calculation, security hardening, and channel stability. Below, I’ll walk you through the key improvements in plain language.
What Is OpenClaw?
Simply put, OpenClaw is a chatbot agent framework. It lets you connect an AI model (like GPT-5, Claude, or Kimi) to multiple messaging apps, so the AI can auto-reply, run scheduled tasks, and even control your browser or terminal. Think of it as an open-source smart assistant that you configure through text files.
This update focuses on:
- Better onboarding and setup experience
- Stronger system prompts
- More accurate cost estimation
- Session storage that won’t crash your gateway
- Faster plugin and test loading
- More reliable cron jobs
- Deep integration with Moonshot / Kimi models
- Per‑group system prompts for BlueBubbles (iMessage bridge)
- Many security and permission fixes
Let’s dive in.
1. Onboarding & Wizard: No More Blank Screens
Many open-source tools show a wall of warning text but dim the important parts. The new version redesigns the security disclaimer in the setup wizard:
- A single yellow banner highlights security risks
- Normal brightness for the main body text – easier to scan
- Bulleted checklists so you don’t miss a step
Also, when the wizard loads the model catalog for the first time, you’ll see a loading spinner instead of a blank page. That way you know it’s working, not frozen.
If you’re unsure where to enter an API key, the input field now shows a placeholder example like `sk-xxxx`, so you know the expected format.
2. AI Brain Upgrade: Stronger System Prompts
The system prompt tells the AI who it is and how to behave. This release strengthens the default prompt and the GPT-5‑specific overlay. Improvements include:
- Clearer completion bias – the AI knows what “finished” means
- Live‑state checks – avoids asking for information it already has
- Weak‑result recovery – if the answer is bad, it can retry
- Verification before the final answer – reduces hallucinations
For you, this means fewer “I don’t understand” loops and less made‑up information.
3. Model Cost Tracking: No More Double Counting
If you use token‑billed APIs (OpenAI, Moonshot, etc.), OpenClaw estimates costs per conversation. But the previous version had a serious flaw: the same conversation’s cost was added repeatedly, sometimes showing dozens of times the actual expense.
This release fixes that:
- Takes a snapshot of `estimatedCostUsd` per run – no more compounding
- Supports tiered pricing from cached catalogs and configured models
- Includes built‑in cost estimates for Moonshot’s Kimi K2.6 and K2.5
Now the cost you see is the real, one‑time cost.
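To see why snapshotting fixes the compounding bug, here is a minimal sketch (the types and function names are illustrative, not OpenClaw’s actual internals): instead of re-adding the run’s full running total on every update, we remember the last snapshot and add only the delta since then.

```typescript
// Hypothetical sketch of snapshot-based cost accounting.
// Bug: adding run.estimatedCostUsd on every update compounds the cost.
// Fix: remember what was already credited and add only the difference.
interface RunCost {
  estimatedCostUsd: number; // running total reported for this run
  lastSnapshotUsd: number;  // amount already credited to the session
}

function applyCostUpdate(sessionTotalUsd: number, run: RunCost): number {
  const delta = run.estimatedCostUsd - run.lastSnapshotUsd;
  run.lastSnapshotUsd = run.estimatedCostUsd; // take the snapshot
  return sessionTotalUsd + delta;
}
```

With three updates reporting running totals of 0.01, 0.03, and 0.05 USD, the session total ends at 0.05 – not 0.09, as the old compounding behaviour would produce.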
4. Session Management: Goodbye Memory Overflows
OpenClaw stores chat history locally. If left unchecked, session files can grow huge and cause the gateway to run out of memory (OOM) on startup. This release does two things:
- Enables entry caps and age‑based pruning by default – old messages are automatically deleted.
- Prunes oversized stores at load time – previously, pruning only happened during writes, but reading a giant file would crash first. Now the gateway checks and truncates before loading.
This is especially helpful for long‑running cron jobs or active gateways – no more mysterious OOM crashes.
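Conceptually, the pruning combines an entry cap with an age cutoff. A minimal sketch (names and shapes are illustrative, not OpenClaw’s actual code):

```typescript
// Illustrative sketch of entry-cap + age-based session pruning.
interface SessionEntry {
  timestamp: number; // milliseconds since epoch
  text: string;
}

function pruneEntries(
  entries: SessionEntry[],
  maxEntries: number,
  maxAgeMs: number,
  now: number
): SessionEntry[] {
  const cutoff = now - maxAgeMs;
  // Age-based prune: drop anything older than the cutoff.
  const fresh = entries.filter((e) => e.timestamp >= cutoff);
  // Entry cap: keep only the newest N entries.
  return fresh.slice(-maxEntries);
}
```

Running this check at load time, before the full store is materialised in memory, is what prevents the startup OOM described above.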
5. Plugins & Tests: Faster and More Stable
If you develop or use third‑party plugins, here’s good news:
- Plugin loader reuse – when the same plugin is loaded multiple times in the same context, OpenClaw reuses the alias and config resolution. Test suites run noticeably faster.
- Detached task lifecycle – plugins can now register “detached tasks” with their own lifecycle and cancellation logic, without poking into core internals. This gives complex plugins (e.g., long‑running background jobs) a standard interface.
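To make the detached-task idea concrete, here is a hedged sketch of what such an interface could look like. The names (`DetachedTask`, `TaskRegistry`) are hypothetical and do not reflect OpenClaw’s actual plugin API; the point is the shape – a task owns its work, the host owns cancellation.

```typescript
// Hypothetical shape of a detached-task registration API.
interface DetachedTask {
  id: string;
  // The task observes cancellation through the AbortSignal.
  run(signal: AbortSignal): Promise<void>;
}

class TaskRegistry {
  private tasks = new Map<string, AbortController>();

  register(task: DetachedTask): void {
    const controller = new AbortController();
    this.tasks.set(task.id, controller);
    // Fire and forget: the task runs detached; the registry only
    // tracks it so it can be cancelled and cleaned up later.
    void task.run(controller.signal).finally(() => this.tasks.delete(task.id));
  }

  cancel(id: string): boolean {
    const controller = this.tasks.get(id);
    if (!controller) return false;
    controller.abort();
    return true;
  }
}
```

A long-running background job would loop until `signal.aborted` becomes true, which is the standard cooperative-cancellation pattern in Node.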
6. Cron Jobs: Separate State from Definition
Cron jobs run scheduled tasks (e.g., send the weather every morning). Previously, job definitions (`jobs.json`) and runtime state (last run time, failures) lived in the same file. If you tracked `jobs.json` with Git, every execution caused meaningless changes.
The new version splits runtime execution state into a separate `jobs-state.json` file. `jobs.json` stays stable – perfect for version control. `jobs-state.json` can be ignored. A small but thoughtful improvement.
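A hedged sketch of how the split might look on disk (the field names are illustrative; only the two file names come from the release notes). The stable definition file:

```json
{
  "jobs": [
    {
      "id": "morning-weather",
      "schedule": "0 7 * * *",
      "prompt": "Send today's weather"
    }
  ]
}
```

And the churning runtime state, now isolated in `jobs-state.json`:

```json
{
  "morning-weather": {
    "lastRunAt": "2026-04-20T07:00:02Z",
    "consecutiveFailures": 0
  }
}
```

With this layout you commit the first file and add the second to `.gitignore`.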
7. Moonshot / Kimi Support: More Native Experience
If you use Moonshot’s Kimi models, this update makes things smoother:
- Default web search, media understanding, etc. now point to `kimi-k2.6`, while `kimi-k2.5` remains for compatibility.
- Supports `thinking.keep = "all"` on `kimi-k2.6` (keeps full reasoning chains); other models, or requests with a pinned `tool_choice`, automatically strip this.
- Defaults Kimi thinking to off – prevents old `/think on` states from silently re‑enabling verbose reasoning.
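As a config sketch only – the exact key paths and nesting are my assumption; the release notes confirm only the model name and the `thinking.keep` setting – opting a `kimi-k2.6` setup into full reasoning chains might look like:

```json
{
  "model": "kimi-k2.6",
  "thinking": {
    "keep": "all"
  }
}
```

On any other model, or on a request with a pinned `tool_choice`, OpenClaw strips this setting automatically, so it is safe to leave in a shared config.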
8. BlueBubbles Groups: Per‑Group System Prompts
BlueBubbles is an open‑source bridge that lets Android/Web users access iMessage. This release adds per‑group system prompts for BlueBubbles.
For example, in a tech group you can tell the AI “reply in English with code references”; in a family group, “use a casual tone”. The config supports `"*"` as a wildcard fallback, and on every turn the appropriate prompt is injected.
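The lookup rule is simple: exact group match first, then the `"*"` wildcard. A minimal sketch (the config shape is an assumption; only the wildcard behaviour comes from the release notes):

```typescript
// Illustrative per-group prompt resolution with a "*" wildcard fallback.
type GroupPrompts = Record<string, string>;

function resolvePrompt(prompts: GroupPrompts, groupId: string): string | undefined {
  // Exact match wins; otherwise fall back to the wildcard entry, if any.
  return prompts[groupId] ?? prompts["*"];
}
```

So a config of `{ "tech-group": "Reply in English with code references", "*": "Use a casual tone" }` gives the tech group its specific prompt and every other group the casual default.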
9. Logging: Faster Control‑Character Filtering
When displaying logs in a terminal, control characters (like color codes) must be filtered to avoid garbled output. The old version used a slow loop. The new version uses a single regex pass – much more efficient. You won’t notice it day‑to‑day, but under high load it reduces CPU usage.
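A single-pass version of this filter can be written as one regex replace. This is an illustrative sketch – the exact pattern OpenClaw uses is not in the release notes – covering ANSI colour/escape sequences plus stray C0 control characters, while leaving tabs and newlines alone:

```typescript
// Strip ANSI escape sequences (e.g. "\x1b[31m") and stray control
// characters in one regex pass. Tabs (\x09) and newlines (\x0a) are
// deliberately excluded so normal log layout survives.
const ANSI_AND_CONTROL = /\x1b\[[0-9;]*[A-Za-z]|[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]/g;

function stripControlChars(line: string): string {
  return line.replace(ANSI_AND_CONTROL, "");
}
```

A single `replace` with a global regex runs the scan in native code, which is why it beats a character-by-character loop under heavy log volume.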
10. QA & CI: Stricter Automation
For developers, `openclaw qa suite` and `openclaw qa telegram` now fail by default (non‑zero exit code) when scenarios fail. If you only want to collect artifacts without breaking the CI pipeline, add `--allow-failures`. This matches what CI automation expects.
11. Mattermost: Stream Replies in Real Time
Mattermost is an enterprise messaging app. The new Mattermost plugin now supports streaming responses: thinking steps, tool calls, and partial replies appear as a single draft post that finalises in place. Users no longer have to wait for the entire generation to finish before seeing progress.
Important Fixes: What You Should Know
This release includes many fixes. Here are the ones that matter most to regular users.
🛡️ Security & Permissions
| Issue | Fix |
|---|---|
| A malicious `.env` file could inject `OPENCLAW_*` vars and override critical config | Block any `OPENCLAW_*` keys from untrusted workspace `.env` files |
| A non‑admin paired device could list all devices and approve/reject other pairing requests | Restrict paired‑device sessions to their own pairing entries |
| The AI could use the gateway tool to modify sandbox, plugin trust, and other sensitive settings | Extend the config mutation guard to block operator‑trusted paths |
| WebSocket broadcasts leaked chat content to pairing‑scoped sessions | Require `operator.read` (or higher) for chat, agent, and tool‑result event frames |
| `MINIMAX_API_HOST` could be injected to hijack routing | Remove env‑driven URL routing; enforce secure config |
🔁 Sessions & Cost
| Issue | Fix |
|---|---|
| After `/new` or `/reset`, the session stayed pinned to an auto‑selected model | Clear auto‑sourced model, provider, and auth overrides; keep explicit user choices |
| The same run’s cost was compounded up to dozens of times | Snapshot `estimatedCostUsd` like the token counters – no more double counting |
| The session store could grow unbounded and cause OOM | Prune oversized stores at load time; enable the entry cap and age prune by default |
🤖 Agent & Model Behavior
| Issue | Fix |
|---|---|
| OpenAI Codex’s image‑generation tool was incorrectly exposed on native vision turns | Avoid re‑exposing the tool when inbound images are present |
| `/think off` still sent `reasoning.effort: "none"` to GPT models, causing failures | Omit disabled reasoning payloads entirely |
| Some non‑frontier models returned empty error turns and ended the session early | Retry silently – give the model a second chance |
| Switching from one model to another kept the old max thinking setting | Remap the stored max to the new model’s largest supported mode |
📱 Channel‑Specific Fixes
Telegram
- Status reactions now respect `removeAckAfterReply` – cleared or restored as configured.
- The default polling watchdog increased from 90s to 120s; configurable via `pollingStallThresholdMs`.
- The setup wizard no longer accepts `@username` for allowlists (unresolvable) – it requires a numeric user ID.
Discord
- `/think` autocomplete only shows `adaptive` for models that actually support it (Anthropic) – GPT models no longer show incorrect options.
- Partial channel metadata (a missing name or topic) no longer crashes slash commands or model pickers.
BlueBubbles
- The text send timeout increased from 10s to 30s; configurable via `sendTimeoutMs`. This fixes silent failures on macOS 26, where iMessage sends stall.
- A unified HTTP client (`BlueBubblesClient`) resolves the SSRF policy once – unblocking image attachments and reactions on localhost/private IPs.
- When the AI uses an unsupported emoji (e.g., 👀) as a reaction, it falls back to `love` instead of failing.
- Prefers iMessage over SMS unless explicitly prefixed with `sms:` – no silent downgrade.
Matrix
- `dm.allowFrom` and `groupAllowFrom` now hot‑reload – no channel restart needed.
- Slash commands prefixed with a bot mention (`@bot:server /new`) are correctly recognised.
Slack
- Fixes the “unresolved SecretRef” error for accounts using `file` or `exec` secret sources.
⏰ Cron Jobs
- When `delivery.mode: "none"` is set, a runner report of `delivered: false` is no longer treated as a failure or error.
- Recurring Telegram announcements no longer silently skip later sends due to reused session IDs.
- The `last` target (send to the most recent chat) no longer gets written into the persistent job config.
- PowerShell‑style `--tools` allow‑lists (space‑separated) are parsed correctly.
🌐 Gateway & Pairing
- Loopback clients (localhost) are correctly identified as local – no more “pairing required” errors.
- Pairing failures now return specific reasons (scope upgrade needed, device not approved, etc.) and a request ID.
- `openclaw doctor --fix` detects and repairs pending pairing requests and scope drift.
🧰 Other Practical Fixes
- YOLO mode (`security=full` + `ask=off`) – heredoc forms like `node <<'NODE'` are no longer blocked.
- Ollama local discovery – the default `baseUrl` and `models` now work; previously, validation would reject a minimal config.
- Browser tool – `profile="user"` without a `target` auto‑routes to a connected browser node.
- Active Memory – when memory recall fails, it logs a warning and continues without memory context instead of failing the whole turn.
Frequently Asked Questions (FAQ)
What is OpenClaw, and how is it different from using ChatGPT directly?
OpenClaw is a self‑hosted AI agent framework. You run it on your own server, connect it to multiple chat platforms (Telegram, Discord, Matrix, etc.), and give it scheduled tasks and local tools (browser, terminal). It’s more flexible but requires some technical comfort (editing config files).
I upgraded to v2026.4.20 and some plugins stopped working. What should I do?
Check whether the plugin depends on internal APIs. This release relaxed the requirement that a context engine’s `info.id` must match the slot ID it’s registered under – that change broke some third‑party plugins. Try reinstalling the plugin or contact its author for an update.
Why does my session cost still look high?
The bug that compounded costs is fixed, but actual API consumption hasn’t changed. If costs seem off, make sure your session history isn’t artificially inflated (e.g., containing many already‑billed turns). Use `/reset` to start a fresh session.
What is YOLO mode? Is it safe?
YOLO mode means `security=full` and `ask=off` – the AI can execute shell commands or code without asking for approval each time. It’s useful for high‑trust automation, and this release fixed heredoc execution in YOLO mode. But YOLO mode is risky – the AI could run destructive commands. Use with caution.
BlueBubbles keeps failing to send messages. What can I do?
If you’re on macOS 26 (Tahoe), ensure your BlueBubbles server supports the Private API. The new release prefers `private-api` and raises the timeout to 30s. If it still fails, try setting `channels.bluebubbles.sendTimeoutMs` to a higher value (e.g., `60000`) in your config.
I get “PAIRING_REQUIRED” even though my device is already paired. Why?
Your device may be requesting a broader scope or higher role than was originally approved. For example, you paired with `read` scope but the client now tries a write operation. The new version returns a specific reason and a `requestId`. Use the Control UI or `openclaw devices` to re‑approve with the required permissions.
My cron job no longer sends messages, but the log says “delivered: false”.
Check your `delivery.mode`. If it’s set to `none`, the job does not attempt to send any message – `delivered: false` is expected and ignored. To send messages, change `delivery.mode` to `announce` or specify an explicit `to` target.
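A hedged sketch of a job definition that actually sends its output (the `delivery.mode` and `to` fields come from the release notes; the surrounding structure and the target format are illustrative):

```json
{
  "id": "morning-weather",
  "schedule": "0 7 * * *",
  "prompt": "Send today's weather",
  "delivery": {
    "mode": "announce",
    "to": "telegram:123456789"
  }
}
```

If you only want the job to run silently (e.g., for side effects like updating memory), keep `"mode": "none"` and ignore the `delivered: false` reports.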
How to Upgrade
Docker:

```shell
docker pull openclaw/openclaw:latest
```

npm global install:

```shell
npm update -g openclaw
```

From source:

```shell
git pull origin main
npm install
npm run build
```
After upgrading, run `openclaw doctor --fix` to check your config and pairing status.
Final Thoughts
OpenClaw 2026.4.20 doesn’t bring flashy new UI or trendy AI features. It’s a solid house‑repair release: fixing leaking session storage, recalibrating cost meters, reinforcing security locks, and making each channel plugin more reliable.
If you’re already using OpenClaw, this is a worthwhile upgrade. If you’re still on the fence, start with a simple Telegram channel – spend half an hour configuring it and see how flexible a self‑hosted AI agent can be.
For developers, plugin improvements (task lifecycle, loader reuse) mean you can build extensions with more confidence, without worrying about breaking changes deep in the core.
And finally, thanks to everyone who reported issues and contributed patches – it’s this kind of careful work that makes open‑source projects truly mature.
This article is based entirely on the official OpenClaw v2026.4.20 release notes. No external knowledge has been added. All fixes and features are described as they behave in that version.
