OpenClaw 2026.3.28 Release: A New Era for Multi-Model Autonomy and Human-AI Safety
This article aims to answer the core question: What are the most significant architectural changes in OpenClaw 2026.3.28, and how do they improve the reliability, security, and cross-platform capabilities of AI agents?
The release of OpenClaw 2026.3.28 represents a pivotal shift from simple model integration toward a robust, “human-in-the-loop” autonomous system. By migrating major providers like Qwen to standardized enterprise APIs, introducing asynchronous tool approval workflows, and optimizing agent memory compaction, this update addresses the core challenges of modern AI orchestration: stability, safety, and seamless platform integration. Whether you are managing complex developer workspaces via ACP or deploying agents across messaging platforms like Telegram and Matrix, this version provides the necessary infrastructure to scale AI operations with confidence.
1. The Great Qwen Migration: Why Model Studio is Now Mandatory
This section aims to answer the core question: Why has the Qwen portal authentication been removed, and what steps must users take to maintain service continuity?
Summary: OpenClaw has officially deprecated the qwen-portal-auth for portal.qwen.ai in favor of the Model Studio API, ensuring a more stable and enterprise-ready connection for Alibaba’s Qwen models.
1.1 Transitioning to Model Studio API
The previous OAuth integration for the Qwen portal has been completely removed. This architectural decision was made to align with industry standards for API reliability. Users must now migrate their authentication to Model Studio to continue using Qwen models.
Actionable Steps:
To update your authentication, run the following command in your terminal:
openclaw onboard --auth-choice modelstudio-api-key
This migration ensures that your agent sessions are no longer tied to session-based portal tokens, which were prone to frequent expiration and instability.
1.2 Strict Configuration Validation (Config/Doctor)
As part of the effort to clean up technical debt, the Config/Doctor utility has undergone a major policy change.
- No more legacy rewrites: Automatic migrations for configurations older than two months have been dropped.
- Validation failure: Very old legacy keys now trigger a validation failure instead of being silently rewritten.
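The fail-fast policy can be illustrated with a short sketch. This is not OpenClaw's actual implementation, and the key names are hypothetical; it only demonstrates the behavioral shift from silent rewriting to raising a validation error.

```python
# Sketch of the fail-fast validation policy: legacy keys now raise
# instead of being silently rewritten. Key names are illustrative only.
LEGACY_KEYS = {"qwen_portal_token", "claude_cli_logs"}  # hypothetical legacy keys

def validate_config(config: dict) -> dict:
    """Return the config unchanged, or raise if legacy keys are present."""
    stale = LEGACY_KEYS & config.keys()
    if stale:
        raise ValueError(
            f"Unsupported legacy keys: {sorted(stale)}. "
            "Run the onboarding flow to regenerate your config."
        )
    return config
```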
Author’s Reflection: Technical debt is the silent killer of modular systems. By forcing a clean break from configurations older than 60 days, the OpenClaw team is prioritizing system predictability over infinite backward compatibility. It’s a bold move that ensures every user is running on a modernized, secure JSON schema.
2. xAI and MiniMax: Deepening the Generative Web
This section aims to answer the core question: How do the new integrations with xAI and MiniMax enhance the search and creative capabilities of OpenClaw agents?
Summary: The integration of xAI’s Responses API and MiniMax’s image-01 model brings native web search and advanced image editing directly into the OpenClaw ecosystem.
2.1 xAI (Grok) with Native Search
The bundled xAI provider has moved to the Responses API, introducing x_search as a first-class citizen.
- Automatic Activation: The xAI plugin now auto-enables itself if you have a valid web-search and tool configuration, eliminating the need for manual plugin toggles.
- Onboarding Integration: During openclaw onboard or openclaw configure --section web, users can now use a dedicated model picker to set up x_search using a shared xAI key.
2.2 MiniMax Multi-Modal Evolution
MiniMax has streamlined its offerings in this release, focusing on the high-performance M2.7 and the new image-01 model.
New Capabilities with image-01:
- Text-to-Image: Generate high-fidelity images from text prompts.
- Image-to-Image Editing: Modify existing images with precise control.
- Aspect Ratio Control: Fine-tune the dimensions of the output to match specific platform requirements.
| Model | Status | Capabilities |
|---|---|---|
| M2.7 | Active | High-performance text processing |
| image-01 | New | Generation & Editing with Aspect Ratio control |
| M2, M2.1, M2.5, VL-01 | Removed | Deprecated legacy models |
3. Guarding the Loop: Asynchronous Tool Approval
This section aims to answer the core question: How can developers prevent AI agents from performing unintended actions in sensitive environments?
Summary: The introduction of the requireApproval async hook allows for a “Human-in-the-loop” workflow, where sensitive tool executions are paused until explicitly authorized by a user.
3.1 The requireApproval Workflow
In the before_tool_call hook, plugins can now invoke requireApproval. This effectively “freezes” the agent’s execution path. The user is then notified via several possible channels to provide consent.
Supported Approval Channels:
- Execution Overlay: A dedicated UI mask for web-based control.
- Telegram/Discord Buttons: Interactive UI elements within the chat interface.
- The /approve Command: A universal command that can be typed in any channel to authorize the pending action.
Technical Implementation Logic:
The /approve command is now intelligent enough to handle both execution-level approvals and plugin-specific approvals, with an automatic fallback mechanism to ensure no authorization request is left hanging.
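The pause-and-resume mechanics can be sketched with a small asyncio simulation. The plugin API shape below is an assumption for illustration only; just the before_tool_call hook and the requireApproval concept come from the release notes, and the sensitive tool name is hypothetical.

```python
import asyncio

# Minimal simulation of an async approval gate: the tool call is paused
# until a human resolves the pending future via an approval channel.
class ApprovalGate:
    def __init__(self):
        self._pending: dict[str, asyncio.Future] = {}

    async def require_approval(self, request_id: str) -> bool:
        """Freeze execution until an approval (True) or denial (False) arrives."""
        fut = asyncio.get_running_loop().create_future()
        self._pending[request_id] = fut
        return await fut  # resumes when resolve() is called

    def resolve(self, request_id: str, approved: bool) -> None:
        """Called by /approve or a chat button press."""
        self._pending.pop(request_id).set_result(approved)

async def before_tool_call(gate: ApprovalGate, tool: str) -> str:
    if tool == "delete_database":  # hypothetical sensitive tool
        if not await gate.require_approval("req-1"):
            return "blocked"
    return "executed"
```

The key property is that the agent's coroutine is suspended, not polling: no tokens or compute are spent while the request waits for a human.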
Author’s Reflection: Trust is the prerequisite for delegation. We often talk about “AI Safety” in the context of global ethics, but on a practical engineering level, safety is about whether the bot can delete a database without asking.
requireApproval is the bridge that allows us to give agents powerful tools while maintaining absolute control.
4. Transforming Chat into Workspace: ACP Binds and Messaging Evolution
This section aims to answer the core question: How does the new ACP Bind feature change the way users interact with specialized agents like Codex in their daily messaging apps?
Summary: The new current-conversation ACP bind allows users to turn a standard chat in Discord, iMessage, or BlueBubbles into a fully functional Codex-backed workspace without the friction of creating sub-threads.
4.1 The Power of /acp spawn
Users can now use the following command to bind an agent to their current thread:
/acp spawn codex --bind here
This creates a seamless transition between a “Chat Surface” (the UI you see) and a “Runtime Workspace” (where the AI performs its tasks).
4.2 Unified File Handling
OpenClaw is standardizing how files are sent across different platforms. The release introduces a canonical upload-file action that works across:
- Slack: Supports filename, title, and comment overrides.
- Microsoft Teams & Google Chat: Now included in the unified upload flow.
- BlueBubbles: Gains the upload-file action while maintaining the sendAttachment alias for backward compatibility.
4.3 Matrix: Native Voice and E2EE Security
The Matrix integration receives two significant upgrades:
- Native Voice Bubbles: Auto-TTS replies are now sent as native Matrix voice bubbles instead of generic file attachments, providing a much better mobile experience.
- E2EE Metadata Security: Image thumbnails in encrypted rooms are now encrypted with thumbnail_file to prevent plaintext leakage via thumbnail_url.
5. Architecting for Stability: Rate Limits and Memory Compaction
This section aims to answer the core question: How does OpenClaw 2026.3.28 handle high-traffic scenarios and large context windows to prevent agent crashes?
Summary: By implementing model-specific rate-limit cooldowns and proactive memory compaction, OpenClaw ensures that agents remain responsive even when under heavy load or dealing with massive amounts of data.
5.1 Model-Specific Cooldowns (The 429 Fix)
Previously, a single “Rate Limit Exceeded” (HTTP 429) error could block an entire authentication profile. This release introduces a more granular approach.
- Scoped Cooldowns: Cooldowns are now scoped per model. If your GPT-4 limit is hit, your Claude 3.5 model remains unaffected.
- The Stepped Ladder: Instead of an aggressive 1-hour ban, the system now uses a ladder: 30s -> 1m -> 5m.
- User Feedback: A countdown message is displayed when all models are rate-limited, so the user knows exactly when the agent will be back online.
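The scoped ladder described above can be sketched in a few lines. The actual implementation is not published; this only mirrors the documented behavior (per-model scope, 30s -> 1m -> 5m steps).

```python
# Sketch of per-model rate-limit cooldowns with a stepped ladder.
LADDER = [30, 60, 300]  # seconds: 30s -> 1m -> 5m, capped at the last step

class CooldownTracker:
    def __init__(self):
        self._strikes: dict[str, int] = {}
        self._until: dict[str, float] = {}

    def record_429(self, model: str, now: float) -> int:
        """Record a rate-limit hit for one model and return the cooldown applied."""
        strikes = self._strikes.get(model, 0)
        cooldown = LADDER[min(strikes, len(LADDER) - 1)]
        self._strikes[model] = strikes + 1
        self._until[model] = now + cooldown
        return cooldown

    def available(self, model: str, now: float) -> bool:
        """Other models stay available: cooldowns never leak across keys."""
        return now >= self._until.get(model, 0.0)
```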
5.2 Proactive Context Compaction
To prevent “repeated oversized requests,” the system now triggers a timeout recovery compaction before retrying a high-context LLM call. This ensures that the agent doesn’t waste time and tokens on a request that is destined to time out again.
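The compact-before-retry flow can be sketched as below. The compaction threshold and the placeholder summary are assumptions; real compaction would summarize older turns with a model rather than a static marker.

```python
# Sketch of timeout-recovery compaction: shrink the context before
# retrying instead of resending the same oversized request.
def compact(messages: list[str], keep_last: int = 4) -> list[str]:
    """Replace older turns with a placeholder summary; keep recent turns."""
    if len(messages) <= keep_last:
        return messages
    return ["[summary of earlier conversation]"] + messages[-keep_last:]

def call_with_recovery(call, messages):
    try:
        return call(messages)
    except TimeoutError:
        return call(compact(messages))  # retry with the compacted context
```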
5.3 Agent Status Accuracy
The /status command and the session_status UI now use provider-aware context window lookups. For instance, it will correctly report the 1.0M context window for Anthropic 4.6 models instead of defaulting to a lower shared-cache minimum.
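A provider-aware lookup with a conservative fallback is straightforward to sketch. All model names and window sizes below are illustrative except the 1.0M figure quoted above for Anthropic 4.6.

```python
# Sketch of a provider-aware context-window lookup. The shared-cache
# minimum and the non-Anthropic entries are assumed values.
PROVIDER_WINDOWS = {
    ("anthropic", "claude-4.6"): 1_000_000,
    ("openai", "gpt-4"): 128_000,  # assumed for illustration
}
SHARED_CACHE_MINIMUM = 32_000  # assumed fallback

def context_window(provider: str, model: str) -> int:
    """Prefer the provider-specific figure; fall back to the shared minimum."""
    return PROVIDER_WINDOWS.get((provider, model), SHARED_CACHE_MINIMUM)
```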
6. Platform-Specific Refinements: Engineering Solutions to Edge Cases
This section aims to answer the core question: What specific bugs were addressed to improve the experience on WhatsApp, Telegram, and Discord?
Summary: Critical fixes were implemented to prevent infinite loops, improve message formatting, and stabilize gateway connections.
6.1 WhatsApp: Ending the Echo Loop
A significant fix addresses the “Self-chat DM mode” loop. Previously, a bot’s outbound reply would be re-processed as a new inbound message, leading to an infinite conversation between the bot and itself. This loop has been successfully terminated.
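The guard behind this fix amounts to filtering inbound events by author. The event fields below are assumptions for illustration, not the actual WhatsApp payload shape.

```python
# Sketch of the self-echo guard: drop inbound events whose author is the
# bot itself, so outbound replies are never re-processed as new input.
def should_process(event: dict, bot_id: str) -> bool:
    """Ignore the bot's own outbound messages echoed back as inbound."""
    return event.get("sender_id") != bot_id

inbound = {"sender_id": "user-7", "text": "hello"}
echo = {"sender_id": "bot-1", "text": "hi there"}
```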
6.2 Telegram: Verified HTML Splitting
Standard text estimation often broke long messages in the middle of a word or an HTML tag.
- The Solution: OpenClaw now uses a verified HTML-length search.
- The Result: Messages are split at word boundaries, ensuring that code blocks and formatted text remain readable.
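The word-boundary half of this fix can be sketched as follows. Telegram's message limit is 4096 characters; the HTML-tag-aware length verification described above is omitted here for brevity, so this sketch is safe only for plain text.

```python
# Sketch of word-boundary splitting for long messages: never break a
# message in the middle of a word. HTML-tag awareness is omitted.
def split_at_word_boundaries(text: str, limit: int = 4096) -> list[str]:
    parts, current = [], ""
    for word in text.split(" "):
        candidate = word if not current else current + " " + word
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                parts.append(current)
            current = word  # a single word longer than limit is emitted as-is
    if current:
        parts.append(current)
    return parts
```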
6.3 Discord: Stabilized Gateway Recovery
Discord recovery has been hardened by draining stale gateway sockets and clearing cached resume states before a forced reconnect. This prevents the “poisoned resume state” loop that previously caused gateway crashes.
| Platform | Key Fix | Benefit |
|---|---|---|
| WhatsApp | Infinite echo loop fix | Prevents resource exhaustion in self-chats |
| Telegram | Word-boundary HTML splitting | Ensures readable formatting for long messages |
| Discord | Stale socket drainage | Drastically improves reconnection reliability |
| iMessage | Metadata stripping | Stops internal tags (e.g., [[reply_to]]) from leaking to users |
7. DevOps and CLI: Streamlining the Deployment Experience
This section aims to answer the core question: How can administrators and developers better manage OpenClaw configurations and containerized environments?
Summary: New CLI tools and simplified Podman setups make it easier to deploy and audit OpenClaw instances.
7.1 Configuration Auditing with JSON Schema
Developers can now generate a JSON schema for their configuration file by running:
openclaw config schema
This is an invaluable tool for validating openclaw.json files and ensuring they adhere to the latest structure required by the 2026.3.28 release.
7.2 Optimized Podman Rootless Mode
The Podman setup has been simplified around the current rootless user.
- No Dedicated Service User: A dedicated openclaw service user is no longer required.
- Host-CLI Workflow: Documentation now highlights the openclaw --container <name> ... workflow, making it more intuitive for Linux administrators.
7.3 Advanced Debugging
The new --cli-backend-logs flag replaces the legacy --claude-cli-logs. It provides a generic interface to view inference logs for Claude, Codex, and Gemini CLI backends in one place.
8. Conclusion and Practical Checklist
OpenClaw 2026.3.28 is a “stability-first” release. It moves the platform away from the fragility of session-based web portal hacks and toward the robustness of enterprise-grade API integrations and human-governed automation. By emphasizing granular rate-limiting, proactive memory management, and cross-platform consistency, this version sets a new benchmark for AI agent frameworks.
Practical Checklist for Upgrading
- [ ] Update Qwen: Obtain a Model Studio API key and run openclaw onboard --auth-choice modelstudio-api-key.
- [ ] Validate Config: Run openclaw doctor to identify any legacy keys that are no longer supported.
- [ ] Test Approvals: If using custom plugins, implement the requireApproval hook for sensitive tool calls.
- [ ] Check CLI Schema: Run openclaw config schema to update your local IDE's validation rules for openclaw.json.
- [ ] Monitor Status: Use /status to verify that your model context windows (especially for Anthropic and Gemini) are correctly reported.
One-page Summary
| Category | High-Level Change | Impact |
|---|---|---|
| Providers | Qwen to Model Studio, MiniMax image-01 added | High – Breaking change for Qwen users. |
| Security | Async requireApproval hook added | High – Critical for enterprise/sensitive tool use. |
| ACP/Workspaces | --bind here for iMessage/Discord | Medium – Better UX for persistent workspaces. |
| Stability | Scoped 429 cooldowns & context compaction | Medium – Reduced downtime during peak usage. |
| Messaging | Native Matrix voice, Telegram HTML splitting | Medium – Higher quality multi-platform output. |
FAQ: Frequently Asked Questions
Q1: What happens if I don’t migrate my Qwen account to Model Studio?
A: Your Qwen-based agents will stop functioning. The old qwen-portal-auth has been completely removed from the codebase.
Q2: Can I still use the old MiniMax models like M2 or VL-01?
A: No. The model catalog has been trimmed to focus on M2.7 and image-01. Older models are no longer supported.
Q3: Does the /approve command work in a group chat?
A: Yes. The /approve command works on any channel and is designed to handle both system-level and plugin-level authorization requests.
Q4: How does the new rate-limiting work if I have multiple models?
A: Rate limits are now scoped by model. If GPT-4 is on a 5-minute cooldown due to a 429 error, you can still use Claude or Gemini without interruption.
Q5: Why did my Discord bot stop crashing during reconnections?
A: v2026.3.28 includes a fix that drains stale gateway sockets and clears the “poisoned resume state” that previously caused infinite reconnection loops.
Q6: Can I generate a JSON schema to validate my config file?
A: Yes, run openclaw config schema. This will print the generated schema for your openclaw.json.
Q7: Is the Podman setup still requiring a dedicated service user?
A: No. The setup has been simplified for the current rootless user, and the launch helper is now installed under ~/.local/bin.
