Claude Managed Agents Explained: What Anthropic’s Cloud-Based AI Agent Service Means for Enterprise Automation

Anthropic recently released a new product called Claude Managed Agents. In simple terms: you tell Anthropic what kind of AI agent you want, and they run it in the cloud for you. They handle all the infrastructure, and you pay only for what you use.

This announcement has caused quite a bit of discussion among developers. Some say it could challenge many startups that are building AI agent infrastructure. Meanwhile, Anthropic’s annual recurring revenue just passed $30 billion – three times what it was last December. Most of that growth comes from enterprise customers. Wall Street is starting to pay attention. The Wall Street Journal reports that investors are becoming more cautious about traditional SaaS companies, worried that products like this could make some existing software services unnecessary.

So what exactly is this product? How is it different from Claude Code, which developers already know? And how does it work under the hood? Let’s walk through it step by step.


What Is Claude Managed Agents? And How Is It Different From Claude Code?

If you have used Claude Code, you already know how an AI agent works: you give it a task, and it plans its own steps, calls tools, writes code, edits files, and gets things done one step at a time.

The difference is in the environment where it runs.

Claude Code runs on your own computer. It is a command-line tool for individual developers. When you shut down your computer, it stops.

Claude Managed Agents runs on Anthropic’s cloud. It is an API service designed for businesses. It can run 24 hours a day, 7 days a week. Even if your network disconnects, it does not lose progress. You can embed agent capabilities directly into your own product.

For example, Notion does exactly this: users inside Notion assign tasks to a Claude agent. The agent works in the background and hands back the results. The user never has to leave Notion.

[Image: Notion integration example]

Here is a quick comparison table to help you see the differences clearly:

| Feature | Claude Code | Claude Managed Agents |
| --- | --- | --- |
| Where it runs | Your local computer | Anthropic’s cloud |
| Who it is for | Individual developers | Businesses and product teams |
| How you use it | Command-line tool | API service |
| Continuous operation | Only while your device is on | 24/7, with recovery after disconnection |
| Typical use case | Personal coding assistance | Embedding into products, team workflows, automated processes |

Common Ways to Use Managed Agents

  • Event‑triggered – A system finds a bug and automatically sends an agent to fix it and open a pull request. No human needs to get involved.
  • Scheduled – Every morning, an agent automatically generates a GitHub activity summary or a team work report.
  • Fire‑and‑forget – You give an agent a task in Slack. It comes back later with a completed spreadsheet, presentation, or even a small application.
  • Long‑running tasks – Deep research or code refactoring that takes several hours to finish.
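These trigger styles can be sketched as a single dispatch layer. Everything below is illustrative: `run_agent` is a stand-in for whatever call actually launches a Managed Agent session, not a real API.

```python
from dataclasses import dataclass, field

# Stand-in for whatever call actually launches a Managed Agent session.
def run_agent(task: str) -> str:
    return f"completed: {task}"

@dataclass
class AgentDispatcher:
    log: list = field(default_factory=list)

    def on_event(self, event: str) -> str:
        # Event-triggered: a monitoring system reports a bug, an agent fixes it.
        return self._launch(f"fix bug from event {event}")

    def on_schedule(self, report: str) -> str:
        # Scheduled: e.g. a daily activity summary, run every morning.
        return self._launch(f"generate {report}")

    def fire_and_forget(self, task: str) -> None:
        # Fire-and-forget: kick off work now, collect the deliverable later.
        self._launch(task)

    def _launch(self, task: str) -> str:
        result = run_agent(task)
        self.log.append(result)
        return result

dispatcher = AgentDispatcher()
dispatcher.on_event("sentry-1234")
dispatcher.on_schedule("daily GitHub summary")
dispatcher.fire_and_forget("build a small spreadsheet app")
```

The point of the sketch is that the trigger is just plumbing; the agent session launched at the end is the same in all three cases.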

Why Would a Business Need This? Can’t You Just Build It Yourself?

You can absolutely build it yourself. But it is expensive and slow.

A production‑ready agent needs much more than just “calling an API”. Here is what you have to set up:

  • A sandbox environment – An isolated, secure space where the AI can run code and edit files without affecting your real systems.
  • Credential management – Storing and using various access keys safely.
  • State recovery – The ability to resume a task from where it stopped if something fails.
  • Access control – Fine‑grained rules about what the AI can and cannot do.
  • End‑to‑end tracing – Logging every step for debugging and auditing.

Many enterprise customers previously needed an entire engineering team just to handle these things. With Managed Agents, those features come ready to use. Engineers can focus on what makes their product truly special.
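Of the items on that list, state recovery is the easiest to underestimate. A minimal checkpoint-and-resume sketch of the idea (the structure is illustrative, not Anthropic’s implementation):

```python
def run_task(steps, checkpoint):
    """Execute steps in order, persisting progress so a crash can resume mid-task."""
    start = checkpoint.get("next_step", 0)
    for i in range(start, len(steps)):
        steps[i]()                       # do the work for this step
        checkpoint["next_step"] = i + 1  # persist progress after each step

done = []
steps = [lambda: done.append("clone repo"),
         lambda: done.append("write patch"),
         lambda: done.append("open pull request")]

ckpt = {"next_step": 1}  # pretend step 0 finished before a crash
run_task(steps, ckpt)    # resumes at step 1, never repeats step 0
```

In production the checkpoint would live in durable storage rather than a dict, but the contract is the same: a failed session resumes from where it stopped instead of starting over.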

But saving engineering time is not the only benefit.

There is a thoughtful observation here: building your own agent harness is often a losing game, because as the model evolves, the workarounds you carefully designed for old model limitations lose their purpose. The company that makes the model knows its limitations best. It can design a harness that fits the model’s specific characteristics – and then sell that harness to you. That is exactly what Managed Agents is.

Anthropic’s engineering blog gives a concrete example. Claude Sonnet 4.5 would become “anxious” near its context window limit and would end tasks too early. So Anthropic added context resets to the scheduling framework to work around this. But when Claude Opus 4.5 came out, that problem disappeared. The previous fix became unnecessary baggage.

If you build your own scheduling framework, you have to update it every time the model changes. If you let Anthropic handle it, they optimize it for you – and strictly speaking, what they optimize is what they sell to you.

[Image: Architecture evolution diagram]

Who Is Already Using It? And How?

Notion

Notion lets users assign tasks like coding, making presentations, or organizing spreadsheets directly inside their workspace. Dozens of tasks can run in parallel, and whole teams can collaborate on the same output. Notion’s product manager said users can hand off open‑ended, complex tasks without ever leaving Notion.

Sentry

Sentry built a fully automated workflow that goes from “finding a bug” to “submitting a fix.” Their AI debugging tool, Seer, identifies the root cause. Then Claude writes the patch and opens a pull request. The engineering director said the system went live in just a few weeks – and they saved all the operational overhead of maintaining their own infrastructure.

Atlassian

Atlassian integrated Managed Agents into Jira. Developers can now assign tasks to a Claude agent directly inside Jira.

Asana

Asana created “AI Teammates” – AI collaborators inside project management that can take on tasks and deliver completed work.

General Legal

This legal tech company has the most interesting use case. Their agent can temporarily write custom tools to look up data based on a user’s question. Before, they had to anticipate every possible user question and build a dedicated retrieval tool in advance. Now the agent generates what it needs on the fly. The CTO said development time was cut by a factor of ten.

Rakuten

Rakuten deployed specialized agents across engineering, product, sales, marketing, and finance. Each agent went live within one week. The agents receive tasks through Slack and Teams, and they hand back actual deliverables – spreadsheets, presentations, applications.


How It Works Under the Hood: Separating the Brain From the Hands

Anthropic’s engineering team wrote a technical blog post explaining the architecture behind Managed Agents.

In the beginning, they put everything into a single container: the AI’s reasoning loop, the code execution environment, and the conversation history. Everything together. The advantage was simplicity. The disadvantage was putting all eggs in one basket – if the container failed, the entire session was lost, and you could not replace any part independently.

Then they made a critical split.

[Image: Brain and hands separation diagram]
  • The “brain” – Claude and its scheduling framework. This is responsible for thinking and making decisions.
  • The “hands” – The sandbox and various tools. These are responsible for executing specific actions.
  • The “memory” – Independent session logs that record everything that happened.

These three parts do not depend on each other. If one fails, the other two keep working.

What This Split Actually Achieves

Faster response times

Not every task needs a full sandbox environment. Now, the sandbox starts only when the AI really needs to run code. The median first‑response latency dropped by about 60%. In extreme cases, it dropped by more than 90%.
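The latency win comes from lazy initialization: don’t pay the cost of booting a sandbox until a tool call actually needs one. A rough sketch of the idea (the class and its internals are hypothetical):

```python
class Session:
    def __init__(self):
        self._sandbox = None  # not started yet: text-only turns stay cheap

    @property
    def sandbox(self):
        # Boot the sandbox only on first use.
        if self._sandbox is None:
            self._sandbox = {"started": True}  # stand-in for container startup
        return self._sandbox

    def reply(self, text: str) -> str:
        return f"echo: {text}"           # no sandbox needed for a plain reply

    def run_code(self, code: str):
        return (self.sandbox, code)      # first call here triggers the boot

s = Session()
s.reply("hello")                 # sandbox still not started
s.run_code("print(1)")           # sandbox boots here, on demand
```

A session that never runs code never pays the startup cost at all, which is where the large tail-latency improvements come from.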

Better security

The code generated by the AI runs inside the sandbox. But the credentials that access external systems live in a secure vault outside the sandbox. The two are physically separated.

For example, when accessing a Git repository, the system clones the code during initialization. The AI can use normal git push and git pull commands, but the access token itself is never visible to the AI. For services like Slack or Jira, the system uses the MCP protocol. Requests go through a proxy layer. The proxy fetches the credentials from the vault and calls the service. The AI never touches the credentials.
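The proxy pattern can be sketched in a few lines. The vault lookup and the service call are stand-ins for real infrastructure; the point is that the agent hands the proxy a request naming a service, and only the proxy ever sees the token.

```python
VAULT = {"slack": "xoxb-secret-token"}  # secure store, outside the sandbox

def call_service(service: str, payload: str, token: str) -> dict:
    # Stand-in for the real HTTP call made on the agent's behalf.
    return {"service": service, "payload": payload,
            "authed": token in VAULT.values()}

def proxy(request_from_agent: dict) -> dict:
    """The agent's request names a service; the proxy injects credentials."""
    service = request_from_agent["service"]
    token = VAULT[service]  # fetched server-side; never enters the sandbox
    return call_service(service, request_from_agent["payload"], token)

# The agent-side request contains no secrets at all.
resp = proxy({"service": "slack", "payload": "post message"})
```

Even if the AI-generated code inside the sandbox is compromised, there is no credential in that environment to steal.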

[Image: Security isolation diagram]

Flexibility

The brain does not care what the “hands” actually are. The engineering blog has a memorable line: the scheduling framework does not know whether the sandbox is a container, a mobile phone, or a Pokémon emulator. As long as something follows the interface – “give it a name and some input, and it returns a string” – it works.

This also means that multiple brains can share the same hands. One brain can even hand off a task to another brain. This lays the foundation for multi‑agent collaboration.
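That interface – a name plus some input in, a string out – is small enough to write down. Here is an illustrative sketch: two very different sets of “hands” satisfy the same contract, and the “brain” cannot tell them apart.

```python
from typing import Callable

# The entire contract between brain and hands: (tool name, input) -> string.
Hands = Callable[[str, str], str]

def container_hands(name: str, arg: str) -> str:
    return f"[container] ran {name}({arg})"

def emulator_hands(name: str, arg: str) -> str:
    return f"[emulator] pressed {name} with {arg}"

def brain(hands: Hands) -> str:
    # The brain knows only the interface, never the implementation behind it.
    return hands("read_file", "notes.txt")

print(brain(container_hands))  # [container] ran read_file(notes.txt)
print(brain(emulator_hands))   # [emulator] pressed read_file with notes.txt
```

Because the coupling is this thin, swapping the execution environment – or pointing two brains at the same hands – requires no change to the brain at all.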


What Does It Cost?

Pricing has three parts:

  • Token usage – Charged at Anthropic’s standard API rates.
  • Runtime fee – $0.08 per session hour. Idle time is not charged.
  • Web search – $10 per thousand searches.

A note on cost: $0.08 per hour does not sound like much. But if an agent runs a complex task for several hours, plus token consumption, the total can be significant. Businesses should estimate their expected usage carefully.
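A back-of-the-envelope estimate makes the point. The runtime rate comes from the pricing above; the token prices in the example are placeholders, not Anthropic’s actual rates – substitute the current rates for the model you use.

```python
def session_cost(active_hours: float, input_tokens: int, output_tokens: int,
                 in_rate_per_mtok: float, out_rate_per_mtok: float,
                 runtime_rate: float = 0.08) -> float:
    """Runtime fee plus token usage; idle hours are excluded by design."""
    runtime = active_hours * runtime_rate
    tokens = (input_tokens / 1e6) * in_rate_per_mtok \
           + (output_tokens / 1e6) * out_rate_per_mtok
    return runtime + tokens

# A 4-hour refactoring task with 2M input / 0.5M output tokens,
# at illustrative rates of $3 and $15 per million tokens:
cost = session_cost(4, 2_000_000, 500_000, 3.0, 15.0)
print(round(cost, 2))  # 13.82 -- tokens dominate, not the runtime fee
```

Note how the $0.32 runtime fee is dwarfed by $13.50 of token usage: for long agentic tasks, token consumption is usually the number to watch.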


What Are the Limitations?

Managed Agents is not a magic solution. Here are a few things to keep in mind.

Some features are still in preview

Multi‑agent collaboration, advanced memory tools, and self‑evaluation iteration (where the agent judges its own work quality and improves repeatedly) are not yet fully available. You need to request access to use them.

Vendor lock‑in

Choosing Managed Agents means your agent infrastructure is tied to Anthropic’s ecosystem. If you want to switch models or platforms in the future, the migration cost is real.

Context management remains difficult

Even though session logs are stored independently, deciding what information to keep and what to discard during long‑running tasks involves irreversible decisions. This is an ongoing challenge. Anthropic’s current approach is to separate context storage from context management. Storage ensures nothing is lost. Management strategies evolve as the models improve.
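The storage-versus-management split can be sketched simply: an append-only log keeps everything, while a swappable strategy decides what the model actually sees. All names here are illustrative.

```python
full_log = []  # storage: append-only, nothing is ever discarded

def record(entry: str) -> None:
    full_log.append(entry)

def working_context(strategy) -> list:
    # Management: a replaceable policy applied over the same lossless log.
    return strategy(full_log)

keep_last_3 = lambda log: log[-3:]                             # recency window
keep_errors = lambda log: [e for e in log if e.startswith("error")]

for e in ["step 1 ok", "step 2 ok", "error: test failed",
          "step 3 ok", "step 4 ok"]:
    record(e)

print(working_context(keep_last_3))  # a small recent window for the model
print(working_context(keep_errors))  # or only the failures, for debugging
```

Because the log is lossless, a better context-selection strategy can be applied retroactively when the models improve, without having thrown anything away.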

Predictability of costs

As mentioned above, complex tasks that run for hours can become expensive. The per‑session‑hour pricing model requires careful evaluation of task types and durations.


How Do You Get Started?

If you are an enterprise developer who wants to embed AI agent capabilities into your own product, Managed Agents could save you months of infrastructure work.

Supported programming languages (with SDKs): Python, TypeScript, Java, Go, Ruby, PHP.

If you already use Claude Code, update to the latest version and type:

/claude-api managed-agents-onboarding

Then follow the prompts.


Frequently Asked Questions

How long can a single agent session run?

Theoretically, it can run for a very long time. The system is designed to recover from disconnections and preserve state. The actual limit depends on the task complexity and how context is managed.

Is idle time really free?

Yes. You are billed only for the hours in which the agent is actively working. Idle time is not billed.

Can the agent access my company’s internal systems?

Yes, but through the secure credential management system. The AI itself never sees any keys or tokens. All external access goes through the proxy layer.

Can multiple agents collaborate on one task?

This feature is currently in research preview and not fully available. However, the architecture already supports the foundation for multi‑agent collaboration. You need to request access for now.

If the model is upgraded, will my agent automatically benefit?

Yes. Because Anthropic maintains the scheduling framework, they will optimize it for new models. You do not need to change any code yourself.

Can I deploy the agent inside my own cloud environment?

Managed Agents is a hosted service on Anthropic’s cloud. If you have strict compliance or data sovereignty requirements, you will need to evaluate whether this works for you.


The “AWS Moment” for AI Agent Infrastructure

This looks like the same path AWS took years ago. First came computing power. Then they packaged the runtime environment too. Ten years ago, businesses debated “should we move to the cloud?” Now they debate “should we build agent infrastructure ourselves or use a managed service?” History suggests most businesses will eventually choose managed services – because infrastructure has never been a core competitive advantage. OpenAI has also released its own agent platform, Frontier. Competition in this space is just getting started.

From a technical perspective, the “brain‑and‑hands separation” is worth paying attention to. It allows each part of the system to evolve independently. The model upgrades? Swap the brain. Need new tools? Add a pair of hands. Change the storage scheme? Replace the memory layer. The engineering blog makes a good analogy: the operating system’s read() system call does not care whether the underlying hardware is a disk from the 1970s or a modern solid‑state drive. Once the abstraction layer is stable, you can change the implementation underneath freely.

From a user perspective, if you are an enterprise developer wanting to add AI agent capabilities to your product, Managed Agents might save you months of infrastructure work. If you are already a Claude Code user, update to the latest version and start exploring. If you are a regular user of software products, the short‑term effect you will likely notice is this: more and more SaaS products will have AI agents working for you in the background – and many of those agents will probably be running on Managed Agents.

As for whether AI agent infrastructure will eventually be dominated by a few big cloud companies like traditional cloud computing – that question does not have a clear answer yet. But one thing is certain. The infrastructure barrier is coming down. However, defining good tasks, designing good workflows, and building trust so that AI can touch core business data – these are problems that Managed Agents cannot solve for you. Every business will still need to think through those problems on its own.