OpenClaw 2026.4.22 Release: More Models, Smoother Conversations, and Reliable Operation
If you’re using OpenClaw to build your own conversational AI agents or automate workflows, this release is worth your attention. The April 22, 2026 update brings substantial improvements: full xAI integration, more flexible model management, more reliable message delivery, and deep optimizations for many popular services and plugins.
Below I’ll walk through the most important changes in plain English, helping you quickly decide which parts matter most for your use case.
What problems does this release mainly solve?
OpenClaw acts as a central hub connecting multiple chat channels, large language models, and automation tools. Its core job is to route messages and commands reliably and efficiently. This release does three major things:
- Expand capabilities – add new AI providers (like xAI’s Grok family) and support more media types (image generation, speech recognition and synthesis).
- Improve daily usability – fix numerous bugs that caused message duplication, session interruptions, and configuration loss, making long-running instances more stable.
- Make configuration and management more flexible – add new CLI commands, improve model addition, and optimize config merging logic to reduce restarts.
Let’s go through the details section by section.
1. New AI provider: full xAI support
This release officially adds xAI as a provider, and not just for text chat. You can now directly call Grok models for:
| Capability | Details |
|---|---|
| Image generation | grok-imagine-image and grok-imagine-image-pro, with reference‑image editing support |
| Text-to-speech (TTS) | Six live voices, output formats MP3, WAV, PCM, G.711 |
| Speech-to-text (STT) | grok-stt audio transcription and real‑time transcription for voice call streaming |
If you previously used other providers for TTS or STT in OpenClaw, you now have an additional option from xAI. The image generation feature is particularly useful – you can ask Grok to create or edit images directly in a conversation without switching tools.
2. Real‑time transcription for voice calls is strengthened
Real‑time STT is a cross‑provider improvement in this release. Beyond the existing OpenAI and xAI paths, Deepgram, ElevenLabs, and Mistral now also support real‑time transcription for voice call streams. In simple terms, when OpenClaw receives a live audio stream (e.g., from a web‑based or phone call), these providers can output text as they receive the audio, instead of waiting for the entire audio to finish.
ElevenLabs also gains Scribe v2 batch audio transcription for processing uploaded audio files such as voicemail or meeting recordings.
3. TUI adds a local embedded mode
For users who prefer not to use the web control panel, the TUI (terminal user interface) now includes a local embedded mode. What it does:
- You can chat directly in the terminal without starting the Gateway.
- Plugin approval rules still apply, so security controls are not bypassed.
This mode is especially useful for quick dialog testing on a server, or when working in an environment without a graphical interface.
4. Model management: you can now add models dynamically
Previously, to use a new model in OpenClaw you often had to edit configuration files and restart the Gateway. This release brings two important command improvements:
- `/models add <provider> <modelId>` – register a new model directly from chat without a restart. The model becomes available immediately.
- Redesigned `/models` command – it now works mainly as a “model browser” for seeing available models, with clearer guidance and copy-friendly command examples.
Additionally, when you re-authenticate an OAuth provider (like OpenAI Codex) with `openclaw models auth login`, the system merges the newly obtained default models instead of wiping out existing configurations from other providers. Replacement only happens if you explicitly set `replaceDefaultModels`. This avoids losing frequently used model aliases due to repeated logins.
5. WhatsApp improvements
For WhatsApp channel users, two thoughtful changes:
- Configurable reply quoting – a new `replyToMode` option lets you control whether and how OpenClaw quotes the original message when replying. This makes context clearer in group chats.
- Per-group and per-direct system prompts – you can set different `systemPrompt` values for different WhatsApp accounts, different groups, and even different direct contacts. A wildcard `"*"` fallback is supported, and account-level overrides fully replace root configurations (matching the existing `requireMention` behavior).
Example: you can make OpenClaw act as a “tech assistant” in work groups and use a more relaxed tone in family groups – all without complex conditional logic.
6. Session management: manage sessions like an inbox
The sessions_list command now includes mailbox‑style filters. You can now:
- Filter by label
- Filter by agent name
- Search by keyword
- View automatically derived titles and last-message previews
This is extremely helpful when you maintain hundreds of simultaneous conversations (e.g., customer service systems or community bots). Quickly find relevant sessions without scrolling through all records.
7. Control UI personalisation and layout optimisation
The web control panel updates focus on improving personal user experience:
- Set a browser-local identity (name and avatar). This identity shares the same rendering path as assistant and agent avatars, making the interface feel more consistent.
- Quick Settings, agent fallback chips, and narrow-screen layouts are optimised so personalisation does not waste space and controls are not clipped.
In short: the interface is cleaner, and you can see who is who – especially useful when multiple people share the same OpenClaw instance’s control panel.
8. Diagnostics and troubleshooting: no more blind restarts
A diagnostic export feature has been added. When you encounter a problem and need to file a bug report, you can export:
- Sanitised logs
- Status and health checks
- Sanitised configuration
- Stability snapshots
This greatly reduces the back‑and‑forth on GitHub issues. Also, the gateway now records “payload‑free stability” by default – it logs only system stability, not message content, balancing debugging needs with privacy.
9. Deep integration with Tencent and Amazon Bedrock
Tencent
A bundled Tencent Cloud provider plugin is added with TokenHub onboarding, documentation, hy3-preview model catalog entries, and tiered pricing metadata. If you use Tencent’s AI services, you can now call them directly from OpenClaw without hand‑crafting API calls.
Amazon Bedrock: Claude Opus 4.7 and Mantle channel
- Claude Opus 4.7 is supported through Mantle’s Anthropic Messages route, using provider-owned bearer-auth streaming. This means you don’t have to treat AWS bearer tokens as Anthropic API keys.
- Mantle now refreshes IAM bearer tokens at runtime instead of baking a discovery-time token into the provider config. Long-running sessions no longer break due to expired tokens.
- For newly discovered models, the actual context window size is used rather than conservative old defaults. This improves compaction and overflow handling.
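The runtime token-refresh change follows a familiar pattern: cache the bearer token together with its expiry and mint a fresh one shortly before it lapses, instead of reusing a token obtained once at discovery time. A minimal sketch of that pattern follows; the `fetch` callback signature and the 60-second safety margin are illustrative assumptions, not OpenClaw’s actual internals.

```python
import time
from typing import Callable, Optional, Tuple


class BearerTokenCache:
    """Caches a bearer token and refreshes it shortly before expiry."""

    def __init__(self, fetch: Callable[[], Tuple[str, float]], margin_s: float = 60.0):
        # fetch() returns (token, expires_at_unix_seconds).
        self._fetch = fetch
        self._margin = margin_s
        self._token: Optional[str] = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh when the token is missing or within the safety margin
        # of expiry, rather than baking in a one-time discovery token.
        if self._token is None or time.time() >= self._expires_at - self._margin:
            self._token, self._expires_at = self._fetch()
        return self._token
```

A long-running session would call `get()` before each request, so an expiring token is replaced transparently instead of breaking the stream.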
10. Unified handling of GPT‑5 models
If you use GPT‑5 family models, this release gives them consistent behavior across providers. OpenClaw moves the GPT‑5 prompt overlay logic into the shared provider runtime. That means whether you call GPT‑5 through OpenAI, OpenRouter, OpenCode, Codex, or another compatible provider, you get the same instruction processing and heartbeat guidance.
A global setting `agents.defaults.promptOverlays.gpt5.personality` controls the friendly-style toggle, while the OpenAI plugin setting remains as a fallback.
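For reference, the dotted key path above maps onto a nested config structure along these lines. This is only an illustration of the path: the surrounding file format and whether the toggle takes a boolean (assumed here) depend on your installation.

```json
{
  "agents": {
    "defaults": {
      "promptOverlays": {
        "gpt5": { "personality": true }
      }
    }
  }
}
```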
11. No more automatic copying of Codex OAuth credentials
Previously, OpenClaw might copy OAuth material from ~/.codex during first‑time setup. This release removes that import path. Configuring Codex now requires browser login or device pairing. This avoids authentication confusion caused by accidentally copying stale credentials.
12. CLI and plugin performance and stability improvements
- Doctor command 74% faster – plugin paths are lazy-loaded and plugin `dist/*` runtime entries are preferred, significantly reducing `doctor --non-interactive` startup time.
- Plugin loading 82–90% faster – on supported runtimes, Jiti is preferred for loading built plugin dist modules, while TypeScript stays on the transform path.
- More reliable plugin updates – when an installed plugin version already matches the registry target, reinstall and config rewrites are skipped. `plugins install` now points users to `plugins update` or `--force` instead of falling back to an old hook-pack path.
13. Security and configuration fixes (highlights)
This release fixes several issues that could affect data security and runtime stability. Here are a few important ones:
| Issue type | Fix |
|---|---|
| Sandbox escape risk | Fixed file reading vulnerability where symlink swapping in parent directories could read host files. Now pins an already‑opened descriptor and re‑checks identity. |
| Unauthorised config read | When gateway auth is enabled, unauthenticated users can no longer access `__openclaw/control-ui-config.json`. |
| Workspace .env overrides blocked | Matrix, Mattermost, IRC, and Synology channels no longer allow workspace `.env` files to override connection settings – prevents cloned repos from hijacking traffic. |
| WebSocket pairing hardened | If forwarded headers indicate proxied traffic, the loopback shared‑secret auto‑pairing path is disabled. Also, deleting a paired device now cleans up pending pairing requests. |
| Log and diagnostic sanitisation | Diagnostic exports automatically filter sensitive info. Default stability recording does not include message payloads. |
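The sandbox fix in the table above relies on a standard POSIX technique: open the file once, then operate only on that descriptor and verify the path still resolves to the same inode. A simplified sketch of the idea (not OpenClaw’s actual code, and omitting the full per-component directory walk a production fix would do):

```python
import os


def read_pinned(path: str) -> bytes:
    """Read a file via a pinned descriptor, re-checking its identity.

    The open descriptor 'pins' the file: even if a parent directory is
    swapped for a symlink after the open, reads go to the file we
    validated, not to wherever the path points now.
    """
    fd = os.open(path, os.O_RDONLY | os.O_NOFOLLOW)  # refuse a symlink leaf
    try:
        before = os.fstat(fd)
        # Re-check identity: the path must still resolve to the inode we
        # opened, otherwise something was swapped underneath us.
        now = os.stat(path)
        if (before.st_dev, before.st_ino) != (now.st_dev, now.st_ino):
            raise PermissionError(f"{path} changed identity after open")
        with os.fdopen(fd, "rb") as f:
            fd = -1  # ownership moved to the file object
            return f.read()
    finally:
        if fd != -1:
            os.close(fd)
```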
14. Annoying bug fixes (selected high‑frequency scenarios)
Beyond the major items above, many smaller fixes matter for daily use:
- Duplicate message sending – a long-standing WhatsApp bug caused pending queues to be re-driven after a reconnect, sometimes sending the same message 7–12 times. An in-memory “active delivery claim” now prevents re-driving the same queue entry during concurrent reconnects.
- Slack streaming reply loss – when a Slack Connect stream is rejected before the SDK flushes its local buffer, OpenClaw now falls back to normal Slack replies. Short replies no longer disappear or falsely report success.
- Discord `/new` crash in threads – fixed an issue where accessing `parentId` in partial thread channels would throw an error. Similar `/vc` command crashes in partial threads are also fixed.
- Telegram long-polling 409 conflict – when `getUpdates` returns a 409 conflict, the HTTP transport is rebuilt instead of looping on a stale keep-alive socket.
- Ollama `/think off` not working – OpenClaw’s thinking control is now correctly passed to Ollama’s `/api/chat` requests. Disabling thinking no longer causes infinite idle time.
- Codex ACP session overwriting local auth – the bundled Codex ACP now uses an isolated `CODEX_HOME` and no longer writes incomplete ChatGPT bridge files into the user’s real Codex CLI auth directory.
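The “active delivery claim” mentioned in the WhatsApp fix is essentially a mutual-exclusion set: exactly one reconnect loop may drive a given queue entry at a time. A minimal sketch of that idea (class and method names are invented for illustration, not OpenClaw’s internals):

```python
import threading


class DeliveryClaims:
    """In-memory claims that stop two reconnect loops from re-driving
    the same pending queue entry concurrently."""

    def __init__(self) -> None:
        self._active: set[str] = set()
        self._lock = threading.Lock()

    def try_claim(self, entry_id: str) -> bool:
        # Returns True for exactly one caller per entry; later callers
        # (e.g. a second reconnect racing the first) get False and skip.
        with self._lock:
            if entry_id in self._active:
                return False
            self._active.add(entry_id)
            return True

    def release(self, entry_id: str) -> None:
        # Called once the send succeeds or permanently fails, so a
        # genuine retry later can claim the entry again.
        with self._lock:
            self._active.discard(entry_id)
```

A queue drain loop would call `try_claim` before sending each entry and `release` afterwards, so a reconnect that fires mid-drain cannot deliver the same message again.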
Frequently Asked Questions (FAQ)
Q: Should I upgrade immediately?
If you use xAI, Tencent, or WhatsApp, strongly consider upgrading. Also upgrade if you experience any of the bugs mentioned above (e.g., duplicate messages, slow plugin loading, lost model configuration).
Q: Will I lose configuration after upgrading?
Normally, no. But note: the legacy Codex OAuth credential import path has been removed. If you previously relied on automatic copying from `~/.codex`, you’ll need to run browser login or device pairing once after upgrade. Also, `plugins.allow` and similar settings now merge correctly and won’t be accidentally emptied.
Q: How do I use the new TUI local embedded mode?
Run `openclaw tui --local` in your terminal (check official docs for the exact command). The Gateway does not need to be pre-started, but plugin approval rules remain active.
Q: How do I set independent system prompts for WhatsApp?
In the config file, under `channels.whatsapp.accounts.<id>.groups` or `channels.whatsapp.accounts.<id>.direct`, set `systemPrompt` for each group or direct contact. Use `"*"` as the key to set a default fallback.
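Those key paths map onto a nested structure roughly like the fragment below. This only illustrates the paths named in the answer: the account id, group id, and prompt strings are made up, and the exact file format depends on your installation.

```json
{
  "channels": {
    "whatsapp": {
      "accounts": {
        "work-account": {
          "groups": {
            "engineering-group-id": { "systemPrompt": "You are a concise tech assistant." },
            "*": { "systemPrompt": "You are a helpful default assistant." }
          },
          "direct": {
            "*": { "systemPrompt": "Keep replies short and friendly." }
          }
        }
      }
    }
  }
}
```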
Q: I use local llama.cpp – what’s in this release for me?
Token usage from streaming responses can now be recovered from `timings.prompt_n` / `timings.predicted_n` metadata. Also, several local backends (vLLM, SGLang, LM Studio, etc.) are marked as streaming-usage compatible, so token statistics no longer appear unknown or stale.
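The recovery amounts to mapping llama.cpp’s `timings` object (reported in the final streamed chunk) onto OpenAI-style usage counts. A sketch of that mapping; the field names come from the release notes, while the dict shapes are assumptions:

```python
def usage_from_timings(final_chunk: dict) -> dict:
    """Recover token usage from a llama.cpp streaming final chunk.

    llama.cpp's server reports token counts in a 'timings' object rather
    than an OpenAI-style 'usage' block, so we map one onto the other.
    """
    t = final_chunk.get("timings") or {}
    prompt = t.get("prompt_n")
    completion = t.get("predicted_n")
    if prompt is None or completion is None:
        return {}  # usage genuinely unknown; don't fabricate zeros
    return {
        "prompt_tokens": prompt,
        "completion_tokens": completion,
        "total_tokens": prompt + completion,
    }
```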
Q: I use Amazon Bedrock Mantle – my sessions often break after a few hours.
This is fixed. Mantle now refreshes IAM bearer tokens at runtime instead of using a one‑time token created at discovery. Long‑running sessions should stay alive after upgrading.
Summary
OpenClaw 2026.4.22 is a solid release worth upgrading to. It doesn’t introduce radical architectural changes, but it systematically addresses real‑world pain points: model management flexibility, duplicate message sending, crashes on certain channels, and unexpected configuration loss. The addition of xAI and enhanced voice capabilities give developers more options.
If you maintain a bot or automation system built on OpenClaw, this release will make day‑to‑day operations noticeably smoother. We recommend testing your core workflows in a staging environment before planning the upgrade.
This article is based on the official changelog of OpenClaw v2026.4.22. All features and fixes described come from that release. Actual behaviour depends on your specific configuration and environment.