Acontext: From Storage to Self-Learning, Building More Reliable AI Agent Systems
In the rapidly evolving landscape of AI agent technology, developers are increasingly focused on a core challenge: how to make agents complete tasks more stably and efficiently while continuously accumulating experience to achieve self-improvement. Acontext, a contextual data platform, is designed to address these pain points. It not only stores agents’ conversations and artifacts but also monitors task progress, collects user feedback, and transforms experience into long-term skills through learning—ultimately helping you build more scalable agent products.
I. What is Acontext?
Put simply, Acontext is a contextual data platform built around the workflow of AI agents. It primarily fulfills four key functions:
- Storage: Saves agent conversation threads (including multimodal messages) and various artifacts (files, data, etc.);
- Observation: Tracks task status, progress, and user preferences through background agents, recording every step of the agent’s operations;
- Self-Learning: Extracts the experience (Standard Operating Procedures, or SOPs) accumulated by agents during task execution and stores it in long-term memory, making agents “smarter” with use;
- Visualization: Provides a local dashboard for intuitive viewing of message logs, task statuses, artifact content, and learned skills.

Acontext’s core logic: The closed loop of storage, observation, and learning
Why do we need such a platform? In practical development, AI agents often face challenges like forgetting previous steps mid-task, repeating the same mistakes, or failing to adjust behavior based on user preferences. Acontext addresses these issues by building a “storage-observation-learning” closed loop, improving success rates, reducing unnecessary operational steps, and ultimately delivering greater value to users.
II. Core Concepts of Acontext: Understanding the Platform’s “Building Blocks”
To use Acontext effectively, it’s essential to understand its key components. These building blocks work together to support the platform’s full functionality:
1. Session: The Agent’s “Conversation Notebook”
A Session is a conversation thread—essentially the agent’s “notebook”—that records all interactions between users and the agent. It supports the storage of multimodal messages (text, images, etc.) and can unify conversation data regardless of whether you use OpenAI, Anthropic, or other models.
For example, when a user asks an agent to “write a landing page for the iPhone 15 Pro Max,” every exchange—from the user’s initial request and the agent’s proposed plan to subsequent communications—is fully documented in a Session.
2. Task Agent: The Background “Task Tracker”
Every Session is automatically associated with a Task Agent, which acts as a background “task tracker.” It extracts tasks from conversations, monitors progress, and records user preferences.
For instance, if an agent outlines a plan like “1. Search for the latest iPhone 15 Pro Max news; 2. Initialize a Next.js project; 3. Deploy the page,” the Task Agent will automatically identify these three tasks, update their statuses in real time (pending, in progress, completed), and note any special user requirements (e.g., “Collect news and report before coding”).
3. Disk: The Agent’s “File Cabinet”
Disk is a dedicated “file cabinet” for storing agent artifacts. You can organize and access these artifacts (documents, code, spreadsheets, etc.) using file paths, just like managing local files.
For example, a “todo-list.md” or “design-sketch.png” generated by the agent during a task can be stored in a specific path (e.g., “/project/iphone-landing/”) on Disk, and retrieved directly via that path when needed later.
4. Space: The Agent’s “Skill Knowledge Base”
Space is a Notion-like structured system that serves as the agent’s “skill knowledge base,” storing skills (SOPs) extracted from experience. It organizes content into folders, pages, and blocks for easy retrieval and use by the agent.
For example, skills related to “GitHub operations” might be structured as follows:
/
└── github/ (Folder)
    ├── GTM (Page)
    │   ├── find_trending_repos (SOP Block)
    │   └── find_contributor_emails (SOP Block)
    └── basic_ops (Page)
        ├── create_repo (SOP Block)
        └── delete_repo (SOP Block)
Each SOP block contains specific operational guidelines. For instance, a skill for “starring a GitHub repo” might look like this:
{
  "use_when": "star a repo on github.com",
  "preferences": "use personal account. star but not fork",
  "tool_sops": [
    {"tool_name": "goto", "action": "goto github.com"},
    {"tool_name": "click", "action": "find login button if any. login first"},
    ...
  ]
}
5. Experience Agent: The “Skill Refiner”
The Experience Agent is the “behind-the-scenes assistant” for Space. It extracts SOPs from completed tasks, evaluates skill complexity (whether it’s worth saving), and stores qualified skills in Space—all without manual intervention, running automatically in the background.
How Do These Components Work Together?
Their relationship can be summarized in a simple flow diagram:
┌──────┐    ┌────────────┐    ┌──────────────┐    ┌───────────────┐
│ User │◄──►│ Your Agent │◄──►│   Session    │    │ Artifact Disk │
└──────┘    └─────▲──────┘    └──────┬───────┘    └───────────────┘
                  │                  │
                  │         ┌────────▼────────┐
                  │         │ Observed Tasks  │
                  │         └────────┬────────┘
                  │                  │
                  │         ┌────────▼────────┐
                  │         │  Space (Learn)  │
                  │         └────────┬────────┘
                  │                  │
                  └──────────────────┘
                    Skill-Guided Agent
In plain language:
- Interactions between users and the agent are recorded in a Session;
- The Task Agent extracts tasks from the Session and tracks progress;
- Files generated by the agent are stored in Disk;
- After a task is completed, the Experience Agent extracts skills from the process and saves them to Space;
- When the agent executes similar tasks in the future, it retrieves relevant skills from Space to guide operations.
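The loop can be sketched with plain Python data structures. The dicts below are illustrative stand-ins for Acontext's Session, Task, Disk, and Space records, not the SDK's actual objects:

```python
# 1. Storage: the Session records every user/agent exchange.
session = {"id": "s1", "messages": [
    {"role": "user", "content": "Write a landing page for the iPhone 15 Pro Max"},
    {"role": "assistant", "content": "Plan: 1. research 2. build 3. deploy"},
]}

# 2. Observation: the Task Agent derives tasks and tracks their status.
tasks = [
    {"description": "Research iPhone 15 Pro Max news", "status": "success"},
    {"description": "Initialize a Next.js project", "status": "pending"},
]

# 3. Storage: artifacts produced along the way land on a Disk, keyed by path.
disk = {"/project/todo.md": b"# Plan\n- research\n- build\n- deploy"}

# 4. Learning: completed tasks are distilled into skills kept in a Space.
space = [{"use_when": t["description"], "tool_sops": []}
         for t in tasks if t["status"] == "success"]

# 5. Reuse: future sessions search the Space before acting.
hits = [s for s in space if "news" in s["use_when"]]
print(len(hits))  # one skill learned from the completed research task
```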
III. How to Get Started with Acontext?
The first step to using Acontext is setting up a local environment. The process is straightforward, following these steps:
1. Install Acontext CLI
Acontext provides a command-line interface (CLI) to simplify project initialization and backend management. Open your terminal and run the following command to install it:
curl -fsSL https://install.acontext.io | sh
2. Prepare Required Dependencies
Before starting the Acontext backend, ensure your computer has two essential tools installed:
- Docker: Used to run Acontext’s background services (Download Link);
- OpenAI API Key: Acontext calls large language models (LLMs) to process tasks. We recommend gpt-5.1 or gpt-4.1 (available via the OpenAI website).
3. Start the Acontext Backend
Once dependencies are installed, run the following command in your terminal to start the Acontext service:
acontext docker up
After successful startup, you can access the following addresses:
- Acontext API Base URL: http://localhost:8029/api/v1 (for SDK calls);
- Acontext Dashboard: http://localhost:3000/ (for visual data viewing).

The dashboard displays key metrics like agent success rates and task progress
IV. Practical Usage of Acontext
Acontext offers SDKs for both Python and TypeScript, catering to developers with different tech stacks. Below, we use Python as an example to detail how to leverage Acontext for storage, observation, and learning.
Step 1: Install the SDK and Initialize the Client
First, install the Python SDK:
pip install acontext
Next, initialize the client to connect to your local Acontext service:
from acontext import AcontextClient

client = AcontextClient(
    base_url="http://localhost:8029/api/v1",
    api_key="sk-ac-your-root-api-bearer-token"  # Default API key
)

# Test the connection
client.ping()  # Returns confirmation if successful
Note: Acontext also provides an asynchronous client for high-concurrency scenarios. For details, refer to the official documentation.
Step 2: Storage Functionality: Saving Conversations and Artifacts
Acontext’s storage capabilities are divided into two main parts: conversation message storage and artifact storage.
1. Saving Conversation Messages
Whether you use OpenAI, Anthropic, or other models, you can store conversation messages in a Session. For example:
# Create a new Session (conversation thread)
session = client.sessions.create()

# Prepare conversation messages (compatible with OpenAI SDK format)
messages = [
    {"role": "user", "content": "I need to write a landing page for the iPhone 15 Pro Max"},
    {
        "role": "assistant",
        "content": "Sure, here’s my plan:\n1. Search for the latest iPhone 15 Pro Max news\n2. Initialize a Next.js project\n3. Deploy the page to a website",
    },
]

# Save messages to the Session one by one
for msg in messages:
    client.sessions.send_message(session_id=session.id, blob=msg, format="openai")
In addition to text, Acontext supports storage of multimodal messages (images, audio, etc.). For details, refer to the multimodal message documentation.
2. Loading Historical Messages
When continuing a conversation, you can load historical messages from the Session to use as context for new requests:
# Retrieve saved messages from the Session
response = client.sessions.get_messages(session.id)
history_messages = response.items

# Add a new user message
history_messages.append({"role": "user", "content": "How are you progressing?"})

# Call the OpenAI model to generate a response (install the OpenAI SDK first: pip install openai)
import openai

openai_client = openai.OpenAI(api_key="your-openai-api-key")
reply = openai_client.chat.completions.create(
    model="gpt-4.1",
    messages=history_messages
)

# Save the new reply to the Session
client.sessions.send_message(
    session_id=session.id,
    blob=reply.choices[0].message
)
You can view all saved conversations in the “Message Viewer” on the dashboard:

View conversation history for a Session in the dashboard
3. Storing and Managing Artifacts
Files generated by the agent during tasks (documents, code, data, etc.) can be managed via the Disk functionality:
from acontext import FileUpload

# Create a new Disk (file cabinet)
disk = client.disks.create()

# Prepare a file (e.g., a to-do list)
file = FileUpload(
    filename="todo.md",
    content=b"# Sprint Plan\n\n## Goals\n- Complete user authentication\n- Fix critical bugs"
)

# Store the file in a specific path on Disk (e.g., "/todo/")
artifact = client.disks.artifacts.upsert(
    disk.id,
    file=file,
    file_path="/todo/"
)

# List files in the specified path
print(client.disks.artifacts.list(
    disk.id,
    path="/todo/"
))

# Retrieve file content and download URL
result = client.disks.artifacts.get(
    disk.id,
    file_path="/todo/",
    filename="todo.md",
    with_public_url=True,  # Get download URL
    with_content=True      # Get file content
)
print(f"File Content: {result.content.raw}")
print(f"Download URL: {result.public_url}")
In the dashboard’s “Artifact Viewer,” you can browse and manage all stored files intuitively:

View and manage stored artifacts in the dashboard
Step 3: Observation Functionality: Tracking Task Progress and User Preferences
Acontext automatically observes the agent’s task execution process with no additional configuration. When you send messages to a Session, the Task Agent extracts tasks, tracks statuses, and records user preferences in the background.
Below is a complete example demonstrating how to retrieve observed task information via the SDK:
from acontext import AcontextClient

# Initialize the client
client = AcontextClient(
    base_url="http://localhost:8029/api/v1",
    api_key="sk-ac-your-root-api-bearer-token"
)

# Create a Session
session = client.sessions.create()

# Simulate a conversation containing tasks
messages = [
    {"role": "user", "content": "I need to write a landing page for the iPhone 15 Pro Max"},
    {
        "role": "assistant",
        "content": "Sure, here’s my plan:\n1. Search for the latest iPhone 15 Pro Max news\n2. Initialize a Next.js project\n3. Deploy the page to a website",
    },
    {
        "role": "user",
        "content": "That sounds good. Collect the news and report back to me before starting any coding.",
    },
    {
        "role": "assistant",
        "content": "No problem—I’ll collect the news, report back, and then begin coding.",
        "tool_calls": [
            {
                "id": "call_001",
                "type": "function",
                "function": {
                    "name": "search_news",
                    "arguments": "{\"query\": \"latest iPhone 15 Pro Max news\"}"
                }
            }
        ]
    },
]

# Send messages to the Session
for msg in messages:
    client.sessions.send_message(session_id=session.id, blob=msg, format="openai")

# Wait for the Task Agent to finish task extraction (for development only; not needed in production)
client.sessions.flush(session.id)

# Retrieve extracted tasks
tasks_response = client.sessions.get_tasks(session.id)
for task in tasks_response.items:
    print(f"\nTask #{task.order}:")
    print(f"  Title: {task.data['task_description']}")
    print(f"  Status: {task.status}")

    # Display progress updates
    if "progresses" in task.data:
        print(f"  Progress Updates: {len(task.data['progresses'])}")
        for progress in task.data["progresses"]:
            print(f"    - {progress}")

    # Display user preferences
    if "user_preferences" in task.data:
        print("  User Preferences:")
        for pref in task.data["user_preferences"]:
            print(f"    - {pref}")
When you run this code, you’ll see output similar to the following:
Task #1:
  Title: Search for the latest iPhone 15 Pro Max news and report findings to the user before starting any landing page coding.
  Status: success
  Progress Updates: 2
    - I confirmed that the first step will be reporting before moving on to landing page development.
    - I have already collected all iPhone 15 Pro Max information and reported to the user, waiting for approval to proceed.
  User Preferences:
    - The user expects a report on the latest iPhone 15 Pro Max news before any coding work on the landing page.

Task #2:
  Title: Initialize a Next.js project for the iPhone 15 Pro Max landing page.
  Status: pending

Task #3:
  Title: Deploy the completed landing page to the website.
  Status: pending
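For monitoring, it can be handy to roll task records up into a per-status count, as a quick health check on a Session. The helper below is a hypothetical convenience function over plain task dicts, not part of the Acontext SDK; it only assumes each task exposes a status field like the sample output above.

```python
from collections import Counter

def summarize_tasks(tasks: list[dict]) -> dict:
    """Count tasks per status (pending / in progress / success).
    Illustrative helper, not an Acontext API."""
    return dict(Counter(t["status"] for t in tasks))

tasks = [
    {"title": "Search for the latest iPhone 15 Pro Max news", "status": "success"},
    {"title": "Initialize a Next.js project", "status": "pending"},
    {"title": "Deploy the completed landing page", "status": "pending"},
]
print(summarize_tasks(tasks))  # {'success': 1, 'pending': 2}
```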
In the dashboard’s “Task Viewer,” you can view task statuses and progress more intuitively:

View task statuses and progress in the dashboard
Step 4: Self-Learning Functionality: Helping Agents Accumulate Skills
One of Acontext’s core values is enabling agents to learn skills from experience. This process is automated via Space, running in the background with no manual effort required.
1. Create a Space and Associate It with a Session
To enable learning, first create a Space (skill knowledge base) and associate a Session with it:
# Create a Space (skill knowledge base)
space = client.spaces.create()
print(f"Created Space ID: {space.id}")

# Create a Session linked to this Space
session = client.sessions.create(space_id=space.id)

# Send the agent’s work content to the Session (conversations, tool calls, etc.)
# ... (same as the message-sending steps above)
2. How Does the Learning Process Work?
After a task is completed, Acontext automatically learns following this workflow:
graph LR
    A[Task Completed] --> B[Task Extraction]
    B --> C{Space Connected?}
    C -->|Yes| D[Queue for Learning]
    C -->|No| E[Skip Learning]
    D --> F[Extract SOP]
    F --> G{Hard Enough?}
    G -->|No - Too Simple| H[Skip Learning]
    G -->|Yes - Complex| I[Store as Skill Block]
    I --> J[Available for Future Sessions]
In short:
- Only Sessions linked to a Space are eligible for learning;
- The Experience Agent only stores skills from tasks that are sufficiently complex (i.e., valuable to learn);
- The entire process has a 10-30 second delay and requires no manual intervention.
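The "Hard Enough?" gate in the diagram can be pictured as a complexity filter. The actual criterion is internal to the Experience Agent; the toy heuristic below (counting tool calls against a threshold) only illustrates the shape of that decision, and every name in it is an assumption:

```python
def worth_learning(task: dict, min_tool_calls: int = 3) -> bool:
    """Toy stand-in for the Experience Agent's 'hard enough?' gate:
    skip trivially short tasks, keep multi-step ones. The real
    heuristic is internal to Acontext; this only sketches the flow."""
    return len(task.get("tool_calls", [])) >= min_tool_calls

simple = {"description": "open a URL",
          "tool_calls": [{"name": "goto"}]}
complex_task = {"description": "star a repo", "tool_calls": [
    {"name": "goto"}, {"name": "click"}, {"name": "type"}, {"name": "click"}]}

print(worth_learning(simple))        # False: too simple, skipped
print(worth_learning(complex_task))  # True: stored as a skill block
```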
3. Search for Skills in Space
When the agent needs to execute a new task, it can search for relevant skills in Space to use as operational guidelines:
# Search for skills related to "implementing user authentication" in Space
result = client.spaces.experience_search(
    space_id=space.id,
    query="I need to implement authentication",
    mode="fast"  # Fast mode: matches skills based on embeddings
)

# Print the retrieved skills
for skill in result:
    print(f"Use When: {skill['use_when']}")
    print(f"Preferences: {skill['preferences']}")
    print("Tool SOPs:")
    for step in skill['tool_sops']:
        print(f"  - {step['tool_name']}: {step['action']}")
    print("\n")
Acontext supports two search modes:
- fast: Quickly matches relevant skills based on embeddings, ideal for scenarios requiring real-time responses;
- agentic: The Experience Agent deeply explores Space to cover all relevant skills, suitable for complex tasks.
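Once retrieved, skills typically need to be folded back into the agent's prompt. One plausible approach, sketched below, is to format them into a system message before the next LLM call. The `skills_to_system_prompt` function is a hypothetical helper, not an SDK API; it only assumes each skill dict carries the keys shown earlier (use_when, preferences, tool_sops):

```python
def skills_to_system_prompt(skills: list[dict]) -> str:
    """Format retrieved skills into a system-prompt section the agent
    can follow. Illustrative helper, not part of the Acontext SDK."""
    parts = ["Relevant skills from past experience:"]
    for skill in skills:
        parts.append(f"\nWhen: {skill['use_when']}")
        if skill.get("preferences"):
            parts.append(f"Preferences: {skill['preferences']}")
        for step in skill.get("tool_sops", []):
            parts.append(f"  {step['tool_name']}: {step['action']}")
    return "\n".join(parts)

# Stub search result in the shape described above
skills = [{
    "use_when": "implement user authentication",
    "preferences": "use JWT sessions",
    "tool_sops": [{"tool_name": "bash", "action": "install the auth library"}],
}]
prompt = skills_to_system_prompt(skills)
print(prompt)
```

The resulting string can be prepended as a `{"role": "system", ...}` message ahead of the conversation history when calling the model.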
You can browse all skills in Space via the dashboard’s “Skill Viewer”:

View skills in Space via the dashboard
Quick Start: Use Template Projects
To quickly experience Acontext’s functionality, you can use the official template projects. Run one of the following commands based on your tech stack:
| Tech Stack | Command |
|---|---|
| Python + OpenAI SDK | acontext create my-proj --template-path "python/openai-basic" |
| TypeScript + OpenAI SDK | acontext create my-proj --template-path "typescript/openai-basic" |
| Python + OpenAI Agent SDK | acontext create my-proj --template-path "python/openai-agent-basic" |
| Python + Agno | acontext create my-proj --template-path "python/agno-basic" |
| TypeScript + vercel/ai-sdk | acontext create my-proj --template-path "typescript/vercel-ai-basic" |
For more templates, refer to the Acontext-Examples repository.
V. Frequently Asked Questions (FAQ)
1. What hardware specifications are required to run Acontext?
Acontext runs on Docker and has modest hardware requirements. A standard personal computer (with 4GB+ RAM) is sufficient for local development. For processing large numbers of sessions or complex tasks, we recommend 8GB+ RAM to ensure smooth performance.
2. Does Acontext support LLMs other than OpenAI?
Currently, Acontext is optimized for OpenAI’s gpt-5.1 or gpt-4.1, but its message storage format supports multiple models (e.g., Anthropic). Support for additional models will be expanded in future updates.
3. What data can be viewed in the local dashboard?
The dashboard allows you to view:
- Conversation history for all Sessions;
- Task statuses, progress, and user preferences;
- Artifacts (files) stored in Disk;
- Skills (SOPs) in Space;
- Key metrics like agent success rates and task completion times.
4. How can I ensure the security of stored conversations and artifacts?
Acontext runs locally by default, with all data stored on your computer and never uploaded to the cloud. For team collaboration, you can deploy Acontext to a private server. For details, refer to the deployment documentation.
5. Will skills in Space keep accumulating indefinitely, leading to clutter?
The Experience Agent automatically filters valuable skills—simple or repetitive tasks are not stored. Additionally, Space supports file manager-style organization (folders, pages), and you can manually organize or delete unwanted skills.
6. Can Acontext be integrated into existing agent systems?
Yes. Acontext provides Python and TypeScript SDKs, enabling integration with existing systems via simple API calls—no need to rebuild your entire agent logic. For details, refer to the integration guide.
VI. Conclusion
Acontext delivers a complete contextual management and self-improvement solution for AI agents through its “storage-observation-learning” closed loop. It not only documents every step of the agent’s operations but also extracts actionable experience, making agents more efficient over time.
Whether you’re building enterprise-grade agent products that require long-term operation or optimizing small personal development tools, Acontext saves time and effort by reducing redundant work and improving task success rates.
To experience Acontext, follow the installation and setup steps outlined in this article, or visit the official documentation for more details. Join the Discord community to connect with other developers, share experiences, and stay updated on the latest releases.
Let Acontext serve as your AI agent’s “memory hub” and “learning assistant”—together, build more reliable and intelligent agent systems.

