OpenClaw v2026.3.31: Stronger Security, Smarter Tasks, and a Broader Platform Ecosystem

What is the core question this article aims to answer? What are the most important changes in the newly released OpenClaw v2026.3.31 that developers and operators must know about? How does it make agent execution safer, background tasks more manageable, and cross-platform integration smoother?

OpenClaw, the open-source agent framework, released version v2026.3.31 on March 31, 2026. This is not a minor update; it’s a major release that touches the core of the system—from how security is enforced to how background work is managed and how the platform connects with the outside world. If you use OpenClaw to build automation workflows or integrate AI into your team’s operations, this update includes changes that you need to understand and, in some cases, actively adapt to.

This article will walk you through the key updates, based strictly on the official release notes. Instead of just listing technical terms, we’ll explain the context behind each change using practical scenarios. We’ll also include some personal reflections to help you decide how these updates might affect your own projects.


1. Security Model Overhaul: Trust Is No Longer Automatic

What is the core question this section aims to answer? In version v2026.3.31, how has OpenClaw redefined what “trust” means? Which actions that used to be automatic now require explicit permission?

Security is a central theme in this release. Several of the breaking changes follow one core principle: make implicit trust explicit and require clear approval for every high-risk action. This reflects a deeper understanding of AI agent security—when an AI can execute system-level commands, you need strict safeguards.

1.1 Node Execution: A Cleaner Command Path

Scenario: Imagine you’re managing multiple remote servers (called “nodes”) through the OpenClaw gateway. Before, the nodes.run shell wrapper could be used both from the command line and as a tool for the agent, which created confusion about its role. Now, node shell execution is unified under exec host=node, while nodes invoke is reserved for node-specific features like media, location, and notifications.

This separation makes the purpose of each command much clearer. Reflection: This kind of “single responsibility” separation reduces the mental overhead when debugging. If a command fails, you can quickly tell if the issue is with the general execution environment or a node-specific capability, rather than trying to figure out which overlapping function is causing the problem.
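To make the split concrete, here is a minimal sketch of how such a dispatcher could separate the two paths. This is illustrative only—the function names, the feature list, and the return values are assumptions, not OpenClaw's actual implementation; only the exec/nodes-invoke split itself comes from the release notes.

```python
# Illustrative sketch of the new command split: generic shell execution
# goes through "exec" (with a host selector), while "nodes invoke" is
# reserved for node-specific features and never runs arbitrary shell.

NODE_FEATURES = {"media", "location", "notifications"}  # assumed set

def dispatch(command: str, **kwargs) -> str:
    """Route a request to the right subsystem based on the new split."""
    if command == "exec":
        # Unified shell execution; host may be "auto", "sandbox", or a node id.
        host = kwargs.get("host", "auto")
        return f"shell-exec on {host}"
    if command == "nodes.invoke":
        # Node-specific capabilities only -- no shell passthrough here.
        feature = kwargs["feature"]
        if feature not in NODE_FEATURES:
            raise ValueError(f"not a node feature: {feature}")
        return f"node-feature {feature}"
    raise ValueError(f"unknown command: {command}")
```

With a split like this, a failing shell command and a failing media call surface through visibly different paths, which is exactly the debugging benefit described above.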

1.2 Skill and Plugin Installation: “Fail Closed” by Default

Scenario: In previous versions, if you tried to install a plugin with potentially unsafe code, the system might warn you but still allow the installation. This was like leaving a small door open in a firewall. Starting with v2026.3.31, that door is locked by default. During installation, a “dangerous code” scan runs. If a critical-level issue is found, the installation fails outright unless you explicitly add the --dangerously-force-unsafe-install flag.

This is a significant security improvement. It forces the user to make a deliberate, conscious decision before installing a plugin that may not be safe. For teams or organizations, this can act as a policy enforcement point, ensuring that every new extension goes through a proper review.
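The fail-closed gate can be sketched in a few lines. The critical-severity trigger and the --dangerously-force-unsafe-install flag come from the release notes; the function shape and the findings format are assumptions for illustration.

```python
# Hedged sketch of the fail-closed install gate described above.

def can_install(scan_findings: list, force_unsafe: bool = False) -> bool:
    """Return True if installation may proceed.

    Installation fails outright on any critical finding unless the caller
    explicitly passed --dangerously-force-unsafe-install (force_unsafe=True).
    """
    has_critical = any(f.get("severity") == "critical" for f in scan_findings)
    if has_critical and not force_unsafe:
        return False  # fail closed: blocking is the default
    return True
```

Note the asymmetry: warnings still install, but a critical finding requires a deliberate override, which is what makes this usable as a policy enforcement point.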

1.3 Gateway Authentication: Fine-Grained Proxy Trust

Scenario: The trusted-proxy configuration is now stricter. If a proxy configuration mixes different shared tokens, the system will reject it. Also, callers from the local loopback address (127.0.0.1) no longer get a free pass—they must provide the correct token for authentication.

This closes a potential security hole. Previously, any local process or script on the same server could potentially impersonate the gateway. Now, even local calls need credentials. This greatly improves security in complex environments like containerized deployments or multi-tenant servers.
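A sketch of the two rules—no mixed proxy tokens, no loopback exemption—might look like this. The data shapes and function names are illustrative assumptions; only the behaviors are from the release notes.

```python
# Illustrative sketch: loopback callers no longer get a free pass, and a
# trusted-proxy config that mixes shared tokens is rejected outright.

import hmac

def authorize(remote_addr: str, presented_token: str, expected_token: str) -> bool:
    """Every caller, including 127.0.0.1, must present the correct token."""
    # Note: no special case for remote_addr == "127.0.0.1" anymore.
    return hmac.compare_digest(presented_token, expected_token)

def validate_trusted_proxies(proxies: list) -> None:
    """Reject a proxy configuration that mixes different shared tokens."""
    tokens = {p["token"] for p in proxies}
    if len(tokens) > 1:
        raise ValueError("trusted-proxy entries must share a single token")
```

Using a constant-time comparison such as hmac.compare_digest for token checks is standard practice; whether OpenClaw does exactly this internally is not stated in the notes.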

1.4 Node Commands and Events: Reducing the Trusted Surface

Scenario: Here’s another thoughtful security adjustment. A successful node “pairing” means the device is recognized by the gateway. But the “commands” declared on that device are not automatically exposed. They remain disabled until the node pairing itself is explicitly approved. Similarly, runs that originate from a node now have a reduced set of tools available. This means that if a compromised node tries to execute high-privilege commands through its declared commands or by starting a run, the system will block it.

Personal insight: This is a good example of “defense in depth.” It acknowledges that “device is trusted” is not the same as “all features on that device are trusted.” By decoupling device connection from device capability, OpenClaw gives administrators a valuable intermediate state: you can let a device connect to the gateway but hold off on granting it permission to run sensitive commands until you’ve had a chance to verify its behavior.
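The "paired but not yet approved" intermediate state can be modeled as two independent flags. Names here are illustrative, not OpenClaw's real API; the two-stage behavior is what the release notes describe.

```python
# Sketch of the decoupled trust model: pairing recognizes the device,
# but its declared commands stay disabled until explicitly approved.

from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    declared_commands: set = field(default_factory=set)
    paired: bool = False      # stage 1: device recognized by the gateway
    approved: bool = False    # stage 2: commands explicitly enabled

    def exposed_commands(self) -> set:
        # Declared commands remain hidden until the pairing is approved.
        if self.paired and self.approved:
            return set(self.declared_commands)
        return set()
```

The useful property is that a compromised node in the paired-only state presents no command surface at all, rather than a partially restricted one.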


2. Background Tasks: From Temporary Execution to Persistent Workflows

What is the core question this section aims to answer? How has OpenClaw’s “Tasks” system evolved in v2026.3.31? How does it move from simple background execution to a system that can track and manage complex workflows?

If security forms the foundation of this release, the re-architecture of the background task system is its core. Several contributors worked together to transform the task system from something that was originally tied to the Agent Communication Protocol (ACP) into a full-fledged “control plane” for OpenClaw.

2.1 A Unified Registry for Tasks and Flows

Scenario: In the past, a cron-triggered job, a sub-agent task, and a CLI background command all had different lifecycles and management methods. v2026.3.31 unifies them all under a single SQLite-backed ledger.

Now, regardless of where a task comes from, it has a unified “record.” You can view and manage all these tasks using new commands like openclaw flows list|show|cancel, and you can clearly see how a single “Flow” is composed of multiple “Tasks.” This lays the groundwork for building complex, long-running automations. Practical example: Imagine an automation that processes user-uploaded videos. It might have steps like “receive file,” “transcode,” “add watermark,” and “notify user.” Now you can track the entire flow, see where it’s stuck, and retry only the failed step.
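As a rough mental model of the SQLite-backed ledger, consider the sketch below. The schema and column names are invented for illustration—the release notes say only that tasks and flows share a single SQLite ledger, not what the real schema looks like.

```python
# Minimal sketch of a flow/task ledger backed by SQLite: one flows table,
# one tasks table, and a query that shows a flow's steps in order.

import sqlite3

def make_ledger() -> sqlite3.Connection:
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE flows (
            id INTEGER PRIMARY KEY, origin TEXT, status TEXT);
        CREATE TABLE tasks (
            id INTEGER PRIMARY KEY,
            flow_id INTEGER REFERENCES flows(id),
            step TEXT, status TEXT);
    """)
    return db

def flow_tasks(db: sqlite3.Connection, flow_id: int) -> list:
    """Return (step, status) pairs for one flow, in creation order."""
    rows = db.execute(
        "SELECT step, status FROM tasks WHERE flow_id = ? ORDER BY id",
        (flow_id,))
    return list(rows)
```

This is the kind of query a command like openclaw flows show <id> could answer: which steps exist, and where the flow is stuck.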

2.2 Intelligent Linkage Between Tasks and Flows

Scenario: When a “single-task flow” (for example, a single agent response triggered by a user message) gets blocked—say, it’s waiting for user input—the system persists its state. When the block is cleared (the user replies), the task can resume cleanly within the same flow, rather than creating a new, contextless task.

This solves a major pain point in asynchronous interactions. Reflection: In human-AI collaboration, “being blocked” is normal. The agent needs to wait for information, confirmation, or an external system. The old task model treated “waiting” as “ending,” making it hard to continue the interaction. The new model sees “blocked” as a valid state of a flow, allowing complex, multi-turn interactions to be modeled correctly.
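The shift from "waiting means ending" to "blocked is a valid state" is easiest to see as a small state machine. The state names and transition table below are assumptions for illustration.

```python
# Sketch of the new lifecycle: "blocked" is a persisted, resumable state
# of a flow, not a terminal one.

ALLOWED = {
    "pending": {"running"},
    "running": {"blocked", "done", "failed"},
    "blocked": {"running"},   # the key change: blocked flows can resume
    "done": set(),
    "failed": set(),
}

def transition(state: str, new_state: str) -> str:
    """Move a flow to a new state, rejecting illegal transitions."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

Under the old model, "blocked" would effectively have been "done"; under the new one, the blocked-to-running edge is what lets a user reply re-enter the same flow.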

2.3 Context Routing: Bringing Results Back to the Right “Room”

Scenario: Imagine an ACP sub-task that does a complex data analysis in the background. Under the old model, the results of that sub-task might only appear in its own logs. In the new version, the sub-task’s results are routed back to the session that started the parent task—maybe a specific Discord channel or a web chat session.

This means your background tasks are no longer isolated. They can “wake up” and “reach back” to the original conversation thread to report results. This is essential for building stateful, context-aware agent applications. For example, a user in Slack asks, “Can you check last quarter’s sales numbers and send me the report?” The agent can start a background task to query the database. When the task finishes, it automatically sends the report file back to that same Slack channel.
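The routing behavior can be sketched as: record the originating session when the parent task starts, then deliver the result to that session's channel when the sub-task finishes. Everything here—the dict shapes, the registry, the outbox—is an illustrative assumption.

```python
# Sketch of context routing: a finished sub-task's result is delivered
# back to the session that started the parent task.

def route_result(task: dict, sessions: dict, outbox: list) -> None:
    """Deliver a finished task's result to its originating session."""
    session_id = task["origin_session"]   # recorded when the parent started
    channel = sessions[session_id]        # e.g. a Discord channel or web chat
    outbox.append((channel, task["result"]))
```

The essential design point is that the origin session travels with the task record, so the result never has to guess where to go.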


3. Expanding the Platform Ecosystem: More Channels, More Capabilities

What is the core question this section aims to answer? What new communication platforms are supported in this release? What new capabilities have existing platforms like Matrix, LINE, Slack, and WhatsApp gained?

OpenClaw’s “channel” plugin system continues to grow. From enterprise collaboration tools to instant messaging platforms, the coverage is broader and the features are deeper.

3.1 New Member: QQ Bot

Scenario: For developers in the Chinese ecosystem, this is exciting news. OpenClaw now natively supports QQ bots! This isn’t just about sending and receiving text. The new channels.qq-bot plugin supports multiple accounts, secure credential management via SecretRef, slash commands, reminders, and even sending and receiving media (images, videos, etc.).

This means you can use OpenClaw’s powerful AI to build a QQ bot that can listen, speak, see, set reminders, and run commands. Use case: In a technical community group, you could deploy an OpenClaw QQ bot to answer common questions, run code via slash commands (like /ping), post scheduled announcements, or even describe images sent by members.

3.2 Matrix: A Professional Group Chat Experience

Scenario: Matrix users now get three significant upgrades. First, you can configure channels.matrix.historyLimit to provide historical context for group triggers, helping the AI better understand the conversation background. Second, channels.matrix.proxy lets you route Matrix traffic through an HTTP(S) proxy for secure connectivity in complex networks. Finally, and perhaps most importantly, “draft streaming” means AI replies can now stream word by word within a single message bubble, just like ChatGPT, instead of sending one new message per chunk.

Use case: Run a support bot for an open-source community on Matrix. When a user asks for help in a channel with a long history, the bot can read the recent conversation to understand the context. Its response will stream smoothly without spamming the channel. You can also route all Matrix traffic through a corporate proxy for compliance.
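For orientation, here is a hypothetical shape for those Matrix settings, shown as a plain dict. The key names historyLimit and proxy come from the release notes; the surrounding file layout, the proxy URL, and the accessor function are assumptions.

```python
# Hypothetical layout of the Matrix channel settings mentioned above.

matrix_config = {
    "channels": {
        "matrix": {
            "historyLimit": 20,  # messages of group history given as context
            "proxy": "http://proxy.example.internal:3128",  # assumed URL
        }
    }
}

def history_limit(cfg: dict, default: int = 0) -> int:
    """Read channels.matrix.historyLimit with a safe fallback."""
    return cfg.get("channels", {}).get("matrix", {}).get("historyLimit", default)
```

Consult the official configuration reference for the exact file format before copying these keys into a real deployment.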

3.3 LINE & WhatsApp: More Natural Interactions

Scenario: LINE and WhatsApp users will enjoy richer ways to interact. LINE now supports sending images, videos, and audio files, so your agent can be more than a text responder—it can become a multimedia content distributor. WhatsApp gets a very human-friendly feature: emoji reactions. The agent can now react to incoming messages with an emoji, like using a ❤️ to acknowledge a cute cat photo instead of typing “I like this picture.”

Personal reflection: This small WhatsApp reaction feature is a great example of good human-computer interaction. In human conversations, non-verbal feedback (nods, smiles, emojis) is a big part of communication. Allowing the AI to respond in such a low-cost, high-affinity way makes the interaction feel much more natural and engaging.

3.4 Slack: Closed-Loop Approval Workflows

Scenario: In enterprise settings, running sensitive commands often requires approval. Previously, approval prompts might be sent to the web UI or terminal, disrupting the Slack workflow. Now, Slack’s “exec approval” feature works natively within Slack. When an agent tries to run a command that needs approval, the request is sent to a specified Slack channel or user. The approver can approve or deny it using interactive buttons directly in Slack.

This enables a true “single pane of glass” workflow. Team members don’t need to leave Slack to manage AI agent actions, which greatly improves adoption and security in enterprise environments.
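The approval loop reduces to a small pending/approved/denied object: execution stays blocked until an approver makes a positive decision. Class and method names below are illustrative, not OpenClaw's API.

```python
# Sketch of the in-chat exec approval loop: a pending request blocks
# execution until an approver explicitly approves it.

class ExecApproval:
    def __init__(self, command: str, approver_channel: str):
        self.command = command
        self.approver_channel = approver_channel  # e.g. a Slack channel
        self.decision = None  # None = pending, True = approved, False = denied

    def decide(self, approved: bool) -> None:
        self.decision = approved

    def may_run(self) -> bool:
        # Pending and denied both block; only an explicit approval unblocks.
        return self.decision is True
```

The fail-closed default (pending blocks just like denied) mirrors the security posture of the rest of the release.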

3.5 Other Platform Highlights

  • Discord/Voice: Voice message transcripts now respect channel and member allowlists, preventing unauthorized users from triggering the agent via voice channels.
  • Microsoft Teams: Added a Graph-backed action to fetch channel member information, making it easier to query user details in automations.
  • Nostr: Inbound direct messages now require signature verification, preventing forged events that could otherwise create pairing requests or trigger replies.

4. Core Capability Upgrades: MCP, Agent Execution, and Tooling

What is the core question this section aims to answer? How does v2026.3.31 make OpenClaw’s “brain” (the LLM) and its “hands and feet” (tools) work together more intelligently and reliably?

4.1 Model Context Protocol (MCP) Matures

Scenario: MCP is the bridge connecting the LLM to external tools. This release brings significant enhancements. Tool names are now automatically prefixed (serverName__toolName), eliminating naming conflicts between different MCP servers. You can now configure remote HTTP/SSE servers via URL, with connection timeouts, allowing MCP to connect to cloud-based tools. There’s also support for selecting streamable-http transport, laying the groundwork for streaming-capable tools.

Use case: You can configure a remote MCP server that connects to your company’s internal database, offering tools like database__query and database__analyze. An OpenClaw agent can use these tools as if they were local, with all authentication and network configuration handled in the mcp.servers URL config.
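The prefixing rule itself is simple enough to sketch directly. The serverName__toolName format is from the release notes; the registry function around it is illustrative.

```python
# Sketch of the serverName__toolName prefixing that prevents collisions
# between MCP servers exposing tools with the same name.

def prefixed_tools(servers: dict) -> dict:
    """Map each prefixed tool name to (server, original tool name)."""
    registry = {}
    for server, tools in servers.items():
        for tool in tools:
            registry[f"{server}__{tool}"] = (server, tool)
    return registry
```

Two servers can now both expose a tool called query without ambiguity, because the agent only ever sees database__query and search__query.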

4.2 Fine-Grained Control Over Agent Execution

Scenario: The /exec command is one of OpenClaw’s most powerful—and potentially dangerous—tools. This release refines its behavior. The default execution target is now host=auto. It checks if a sandbox environment exists and runs there if it does; otherwise, it falls back to the host machine. The system also blocks request-scoped environment variables (like HTTP_PROXY, DOCKER_HOST) from affecting host execution, preventing malicious requests from redirecting network traffic or altering system configurations.

Personal insight: This highlights a core tension in AI security: we want the agent to be powerful enough to get things done, but we must strictly limit what it can do. By “sanitizing” environment variables, OpenClaw ensures that even if a user tricks the agent into executing a malicious command (through prompt injection, for example), that command can’t easily “escape” to change system configurations or attack the internal network. This is a very pragmatic defense.
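Both behaviors—the host=auto fallback and the environment sanitization—can be sketched as below. HTTP_PROXY and DOCKER_HOST are named in the release notes; the full blocklist and the function names are illustrative assumptions.

```python
# Sketch of the refined exec behavior: prefer a sandbox when one exists,
# and strip request-scoped variables before anything runs on the host.

# Illustrative blocklist; the notes name HTTP_PROXY and DOCKER_HOST.
BLOCKED_REQUEST_VARS = {"HTTP_PROXY", "HTTPS_PROXY", "DOCKER_HOST"}

def resolve_host(sandbox_available: bool) -> str:
    """host=auto: run in the sandbox if configured, else fall back to host."""
    return "sandbox" if sandbox_available else "host"

def sanitized_env(request_env: dict) -> dict:
    """Drop request-scoped variables that could redirect host traffic."""
    return {k: v for k, v in request_env.items()
            if k not in BLOCKED_REQUEST_VARS}
```

Sanitizing at the exec boundary means even a successful prompt injection cannot smuggle a proxy or Docker socket override into host execution.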

4.3 Agent Experience and Reliability Fixes

  • BTW Question Fix: The /btw side-question command now forces the provider to disable reasoning modes (like Anthropic’s adaptive thinking), resolving issues where side questions would fail to generate a response.
  • System Prompt Interpolation: Fixed a bug where agent.name wasn’t being properly inserted into the system prompt for embedded runtimes, ensuring the agent correctly identifies itself.
  • OpenAI Responses API Compatibility: Fixed tool schema normalization issues, allowing more clients that use the Responses API (like Codex) to register and use tools correctly.
  • TTS Diagnostics: Added structured diagnostics and fallback analytics for Text-to-Speech providers, making it easier to troubleshoot voice service issues.

5. Summary and Action Guide

What is the core question this section aims to answer? What should existing users focus on when upgrading to v2026.3.31? What are some actionable steps to try right away?

OpenClaw v2026.3.31 is a major release that strengthens security, reshapes background task management, and expands platform capabilities. It sets a solid foundation for building more complex, secure, and reliable AI-driven automations.

Practical Summary / Action Checklist

  1. Before You Upgrade:

    • Check your plugin and skill installation scripts: If they contain potentially unsafe code, you’ll now need to add the --dangerously-force-unsafe-install flag.
    • Review your gateway configuration: Pay special attention to the trusted-proxy settings and authentication tokens. Ensure that local callers have the correct token configured.
    • Re-evaluate node permissions: If you previously relied on node pairing to automatically enable node commands, you’ll now need to explicitly approve the node for command execution in the gateway.
  2. After You Upgrade, Try These:

    • Explore the new task management: Run openclaw flows list to see how your background tasks are now unified. Use openclaw flows show <id> to inspect the structure of a complex workflow.
    • Configure a remote MCP server: If you have cloud-based tools, try configuring them as an MCP server via URL and auth headers.
    • Enable exec approvals in Slack: Let your team approve sensitive actions without leaving their chat app.
    • Turn on streaming and history for your Matrix channel: Experience a more modern chatbot interaction.
    • Send a media file on LINE: Test the new multimedia sending capabilities.
    • React to a WhatsApp message: Try having the agent use emoji reactions for a more natural interaction.
  3. For Developers:

    • Plugin SDK Warnings: The legacy openclaw/plugin-sdk/* compatibility paths are deprecated. Migrate to the currently documented entry points.
    • ACP Security Changes: Tool authorization is no longer based on simple name overrides but on semantic approval classes. Adjust your code accordingly.

One-Page Summary

| Area | Key Change | Impact on Users | Action Required |
| --- | --- | --- | --- |
| Security | Plugins/skills fail-closed by default; node commands need extra approval; local auth requires token | Risky installs need a flag; node management workflow changes; all calls need credentials | Update automation scripts; audit gateway auth config |
| Background Tasks | Unified SQLite task ledger; tasks linked to flows; results route back to original sessions | All background work is trackable; complex flows are manageable; async results can return | Use new openclaw flows commands; design "blocked" and "resume" logic for long tasks |
| Platform Integration | New QQ Bot; Matrix proxy/history/streaming; LINE/WhatsApp feature enhancements | Stronger Chinese ecosystem support; Matrix experience upgrade; more natural interactions | Try configuring new platforms; enable new features like historyLimit or proxy in config |
| Core Capabilities | Remote MCP servers; refined exec behavior; OpenAI compatibility fixes | Can connect to remote tools; execution is safer; better third-party client compatibility | Configure remote MCP servers; understand new exec default behavior (host=auto) |

Frequently Asked Questions (FAQ)

  1. Q: After upgrading, a plugin I was using won’t install. What happened?
    A: The new version blocks installation of plugins with high-risk code by default. Check the installation log. If you trust the source, you can force the install with openclaw plugins install --dangerously-force-unsafe-install <plugin-name>.

  2. Q: My node is paired, but I can’t use its commands. Why?
    A: Pairing now only acknowledges the device. To expose the commands declared on that node, you need to explicitly approve the node in the gateway. Check your gateway management interface or API.

  3. Q: Where does the /exec command run now? On the host or in a sandbox?
    A: The default is host=auto. It will first look for a configured sandbox environment and run there if found. If not, it runs on the host. You can also explicitly set host=sandbox or host=node.

  4. Q: How do I set up a QQ bot for OpenClaw?
    A: The channels.qq-bot plugin is bundled. You’ll need to provide your QQ Bot credentials in the configuration file (using SecretRef for security) and set up the appropriate accounts and permissions. The bot will then respond to slash commands and other interactions.

  5. Q: My background task seems “dead” when it’s waiting for user input. Is it broken?
    A: No, it’s now in a “blocked” state. The new task system persists this state. When the user provides the needed information, the task will automatically resume within the same flow. You can check its status with openclaw flows show <flow-id>.

  6. Q: Does this version support OpenAI’s Responses API?
    A: Yes. v2026.3.31 fixes tool compatibility with the Responses API and supports passing the text.verbosity configuration. Clients using the Responses API can now call the OpenClaw gateway as reliably as those using the Chat Completions API.

  7. Q: Do I need to configure anything to get streaming replies on Matrix?
    A: Yes, this is a feature you need to enable in your configuration. Look for the streaming reply settings under channels.matrix. Once enabled, agent replies will update incrementally in a single message rather than sending multiple messages.

  8. Q: How does the WhatsApp reactions feature work?
    A: The agent can decide to use it. When the model determines that a non-textual response is appropriate—for example, to acknowledge a photo—it can call an internal tool to add an emoji reaction to a specific message. No special user configuration is usually required.

OpenClaw v2026.3.31 marks a significant step forward for the project, moving it toward a more secure, robust, and professional AI agent framework. Whether you’re a solo developer or part of a large enterprise, taking the time to understand and plan for this upgrade will help you fully leverage its new features and security enhancements.