Trellis: The Architectural Framework for AI Coding – Making Claude Code & Cursor Controlled, Collaborative, and Persistent

When using Claude Code or Cursor for AI-assisted development, have you ever faced this dilemma: Yesterday you taught the AI your project’s coding standards, but today, in a new session, it has forgotten everything? Or, when handling complex features, does the randomness of AI-generated code force you to conduct repetitive code reviews and corrections?

This section answers the core question: Compared to using Cursor or Claude Code directly, what fundamental efficiency and quality pain points does introducing the Trellis framework solve?

Trellis is not just a tool; it is a one-stop AI development framework designed specifically for Claude Code and Cursor. Its core mission is to transform the AI from an “intern who occasionally has a bright idea” into a “senior engineer who strictly follows specifications, possesses continuous memory, and scales efficiently.” Through mechanisms like automatic injection, spec library management, parallel sessions, and persistent memory, Trellis resolves the three major chronic issues of AI programming: high randomness, context fragmentation, and collaboration difficulties.

Why Trellis? Decoding the Five Core Values

Before diving into technical details, we need to be clear about exactly which capabilities Trellis uses to reshape AI-assisted development. This is not a simple efficiency boost; it is a systematic refactoring of the development workflow.

1. Automatic Injection: Write Once, Apply Forever

This section answers the core question: How can we avoid repeatedly explaining project specifications to the AI every time a new session starts?

In traditional AI development, we often have to repeatedly emphasize in the prompt: “Please use TypeScript interfaces,” “Component naming must use PascalCase.” This is tedious and prone to omission. Trellis’s “Automatic Injection” function uses Hook mechanisms to automatically inject specifications and workflows into the AI’s context at the start of every conversation. This means you only need to write the specifications once in the configuration file, and all subsequent sessions will be forced to follow these rules, completely eliminating the randomness where “the AI follows standards when it’s in a good mood and writes randomly when it’s not.”
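To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of hook wiring involved. `trellis init` generates the real configuration for you; the `settings.json` shape below follows Claude Code's hook format, and the spec content and file paths are purely illustrative.

```shell
# Hypothetical sketch: register a SessionStart hook that injects a spec
# file into every new session. `trellis init` wires this up for you.
mkdir -p .claude .trellis
echo "Components must use TypeScript Props interfaces." > .trellis/workflow.md
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "cat .trellis/workflow.md" }
        ]
      }
    ]
  }
}
EOF
# Whatever the hook prints to stdout is added to the AI's session context.
```

Because the hook fires on every session start, the rules reach the AI whether or not you remember to repeat them.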

2. Self-Updating Spec Library: An AI That Learns the More You Use It

This section answers the core question: How can AI evolve its cognitive level as the project progresses?

Trellis introduces the concept of a “Spec Library,” where all best practices, architectural decisions, and code styles are stored in files within the spec directory. As the project iterates, these files can be continuously updated. More importantly, these files are maintained with AI assistance—you just tell the AI, “We are switching to Zustand instead of Redux,” and it will automatically update the spec files. The more you use it, the more knowledge is accumulated, and the deeper the AI’s understanding of the project becomes, creating a positive feedback loop of knowledge.

3. Parallel Sessions: Multi-Threaded Development with Physical Isolation

This section answers the core question: How can we let AI handle multiple complex development tasks simultaneously without blocking the main branch development?

Usually, when developing multiple features in a project simultaneously, we often have to process them serially due to file conflicts or chaotic contexts. With the /trellis:parallel command, Trellis utilizes Git’s worktree technology to launch independent session windows for each task. These windows run in physically isolated directories, with independent Agents working in each session. They do not interfere with each other, and each automatically submits a PR upon completion. This multi-threaded parallel capability greatly unleashes the computing potential of AI.

4. Team Sharing: Elevating the Baseline for Everyone

This section answers the core question: How can we quickly replicate the experience of senior developers to all team members’ AI assistants?

In a team, often only a few “experts” can write high-quality architectural designs. Trellis allows the team to share specification files in the .trellis directory. Once a senior engineer establishes a set of excellent specifications, other team members only need to initialize Trellis, and their AI assistants will immediately inherit this top-tier standard. This is no longer simple code sharing; it is “cognitive” sharing, effectively raising the average level of AI coding across the entire team.

5. Session Persistence: Project Memory Across Sessions

This section answers the core question: How does AI remember project context from days or weeks ago to avoid repetitive communication?

The context window is limited, but project memory should be infinite. Trellis uses a session persistence mechanism to write summaries of each conversation into journal files and build an index. When you start next time, the AI will automatically read recent logs and Git information, instantly restoring its memory of the project state. You don’t need to struggle to tell the AI “what I changed yesterday”; it already knows.

| Feature Dimension | Core Pain Point Solved | Direct Value Brought |
| --- | --- | --- |
| Automatic Injection | Repetitive spec explanation, high AI randomness | Ensures mandatory consistency in code style, reduces review costs |
| Self-Updating Spec Library | AI cognition stuck at initial state, unable to iterate | Accumulates project knowledge, AI gets smarter with use, architectural decisions are traceable |
| Parallel Sessions | Low efficiency of single-threaded development, multi-task conflicts | True multi-task parallelism, multi-agent collaboration to accelerate delivery |
| Team Sharing | Uneven member skill levels, difficult to unify standards | Quickly replicate expert experience, improve overall team code quality |
| Session Persistence | Context loss, high cross-session communication costs | AI possesses long-term memory, zero cost for project handover and resumption |

Quick Start: Building the AI Development Framework in Three Steps

After understanding the core value, let’s experience the power of Trellis through actual operation. The process is extremely concise, aiming to get you from installation to launch in minutes.

This section answers the core question: How can I integrate Trellis into my existing development environment with the fewest commands possible?

Step 1: Global Installation

First, you need to install the latest version of Trellis globally via npm. This ensures that you can invoke the trellis command from any directory.

npm install -g @mindfoldhq/trellis@latest

Step 2: Project Initialization

Navigate to your project root directory and run the initialization command. The key parameter here is -u your-name: it is not just a label; it creates a dedicated personal workspace for you.

trellis init -u your-name

Upon execution, Trellis generates two core directory structures in the project root: .trellis and .claude. The name passed to -u determines your workspace path, .trellis/workspace/your-name/. This is your private territory for storing personal session records and task states, ensuring no conflicts even during team collaboration.

Step 3: Launch Claude Code

Once initialization is complete, you simply launch Claude Code as usual. At this point, Trellis’s background Hooks are already working, silently loading your specifications and workflows as the session starts.

# Launch Claude Code and get to work
claude

At this point, you are not directly “using” Trellis commands, but you are already under the protective umbrella of specifications built by Trellis. Every generation by the AI is now constrained by the architecture you have defined.

Deep Dive into Architecture: How Trellis Works and Its Directory Structure

To truly master Trellis, knowing how to install it is not enough; we need to understand the architectural design behind it. Trellis adopts a layered and modular file structure, a design that borrows from modern microservices and orchestration systems.

This section answers the core question: How does Trellis manage AI behavior, specifications, and memory through directory structures and configuration files?

Project Structure Panorama

When you run trellis init, the following key structures are added to the project. These are not just files; they are the “cerebral cortex” and “central nervous system” of the AI.

.trellis/
├── workflow.md              # Workflow guide (automatically injected on startup)
├── worktree.yaml            # Multi-agent configuration (for /trellis:parallel)
├── spec/                    # Specification library
│   ├── frontend/            #   Frontend specifications
│   ├── backend/             #   Backend specifications
│   └── guides/              #   Decision-making and analysis frameworks
├── workspace/{name}/        # Personal workspace
├── tasks/                   # Task management (progress tracking, etc.)
└── scripts/                 # Utility scripts

.claude/
├── settings.json            # Hook configuration
├── agents/                  # Agent definitions
│   ├── dispatch.md          #   Dispatcher Agent (pure routing, no spec reading)
│   ├── implement.md         #   Implementation Agent
│   ├── check.md             #   Reviewer Agent
│   └── research.md          #   Research Agent
├── commands/                # Slash commands
└── hooks/                   # Hook scripts
    ├── session-start.py     #   Inject context on startup
    ├── inject-subagent-context.py  #   Inject specs to sub-agents
    └── ralph-loop.py               #   Quality control loop

Core Component Breakdown

1. .trellis Directory: Knowledge Base and Workspace

  • workflow.md: This is the first file the AI sees. It defines the high-level workflow of the project, such as “research first, then implement, finally self-test.”
  • spec/: This is the most critical asset. Unlike stuffing all rules into a single CLAUDE.md file, Trellis stratifies specifications. Frontend specs, backend specs, and decision guides each have their own role. This layered architecture achieves “context compression”—the AI only needs to read specifications relevant to the current task, saving precious token space.
  • workspace/{name}/: This is your personal office. It stores session summaries and states, enabling physical isolation during multi-user collaboration.

2. .claude Directory: Agent Orchestration

  • agents/: Trellis subdivides the AI’s roles.
    • dispatch.md: The dispatcher, responsible for task distribution, without needing deep understanding of business specifications.
    • implement.md: The implementer, responsible for writing the actual code, relying heavily on the specifications in spec/.
    • check.md: The reviewer, responsible for code quality checks.
    • research.md: The researcher, responsible for technology selection and information gathering.
  • hooks/: This is Trellis’s “nervous system.” Through Python scripts (like session-start.py), Trellis can insert custom logic at various stages of AI content generation. For example, ralph-loop.py is a quality-control loop that ensures generated code meets the standards.

Reflection: The Significance of Layered Architecture

In the past, we were used to writing a giant .cursorrules or CLAUDE.md, piling all rules together. This is like dumping all of a library’s books on the floor: finding one means rummaging through the entire pile. Trellis’s layered structure is like a well-classified library. When the AI needs to write a frontend component, it only “borrows books” from spec/frontend; when it needs to do technical research, it only calls the research.md Agent. This design not only improves efficiency but, more importantly, makes the system scalable.

Typical Use Cases and Live Demos

Theory combined with practice—let’s see how Trellis performs in real development scenarios.

Use Case 1: Teach Your AI – Establishing Specifications

This section answers the core question: How can I make the AI strictly follow specific code styles and architectural patterns via Trellis, rather than acting on whim?

Suppose you are developing a React project and have defined the following specification: “Components must use TypeScript Props interfaces, naming must use PascalCase, and functional writing with Hooks is mandatory.”

In Trellis, you don’t need to chatter at the AI every time. You just write these rules into the relevant files in the .trellis/spec/frontend/ directory. Trellis will “feed” this content to the AI via the session-start.py Hook at the start of every session.
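As a concrete example (the file name and wording are illustrative, not a fixed schema), those rules could live in a spec file like this:

```shell
# Illustrative spec file; Trellis injects files like this automatically
# at session start, so the rules never need to be repeated in prompts.
mkdir -p .trellis/spec/frontend
cat > .trellis/spec/frontend/components.md <<'EOF'
# Component Conventions
- Every component declares a TypeScript Props interface.
- Component names use PascalCase.
- Use function components with Hooks; no class components.
EOF
```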

Actual Result:
When you ask the AI to “add a user list component,” the code it generates will automatically include:

// AI-generated code example (assumes a `User` type defined elsewhere)
import React from "react";

interface UserListProps {
  users: User[];
}

export const UserList: React.FC<UserListProps> = ({ users }) => {
  // ... Hooks logic
  return <div>...</div>;
};

Without you repeating yourself, the AI complies automatically. This capability of “teach once, apply forever” greatly reduces mental burden.

Use Case 2: Parallel Development – Concurrent Multi-Task Processing

This section answers the core question: How can I leverage AI to advance multiple independent feature developments simultaneously within the same project?

When facing three independent feature module development tasks, traditional methods might require completing them one by one. In Trellis, you can use the /trellis:parallel command.

Operation Flow:

  1. Configure .trellis/worktree.yaml to define the tasks that need to run in parallel.
  2. Execute the command, and Trellis will create an independent Git worktree for each task in the background.
  3. Each worktree runs an independent session window. The dispatcher Agent (dispatch) commands specific sub-agents (implement, check) to work in their respective isolated environments.
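As a rough sketch of step 1, the task list might look like the following. The field names here are hypothetical; consult the Trellis documentation for the real worktree.yaml schema. Only the general shape matters: one entry per parallel task.

```shell
# Hypothetical worktree.yaml sketch (field names are illustrative,
# not the documented schema): one entry per parallel task.
mkdir -p .trellis
cat > .trellis/worktree.yaml <<'EOF'
tasks:
  - name: login-module
    branch: feature/login-module
  - name: payment-module
    branch: feature/payment-module
EOF
```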

Value Manifestation:
This means that while the AI is writing code for the “Login Module,” another AI instance is running unit tests in a “Payment Module” isolated directory. The two are completely physically isolated with no file conflicts. Once a feature is developed, it submits a PR directly, without waiting for other tasks to end. For teams chasing project schedules, this is a leap in productivity.
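The isolation itself comes from standard Git worktrees. The following self-contained demo (plain git in a throwaway directory, no Trellis required) shows the mechanism Trellis automates: each task gets its own physical directory and branch.

```shell
# Plain-git demonstration of the isolation behind /trellis:parallel.
cd "$(mktemp -d)"
git init -q myapp && cd myapp
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q -m "initial commit"

# One physically separate working directory (and branch) per task:
git worktree add ../myapp-login   -b feature/login-module
git worktree add ../myapp-payment -b feature/payment-module
git worktree list   # main checkout plus the two isolated trees
```

Agents working in ../myapp-login and ../myapp-payment can never touch the same files, which is exactly why no coordination between them is needed until PR time.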

Use Case 3: Custom Workflows – One-Click Context Loading

This section answers the core question: How can I quickly prepare a complex context environment through custom commands so the AI can immediately enter working state?

Before starting frontend development, one usually needs to go through a series of tedious steps: reviewing component specifications, checking recent Git changes, looking up testing patterns, and reading through shared hooks.

Trellis allows you to define slash commands like /trellis:before-frontend-dev. When you type this command, pre-defined scripts automatically execute all the above operations, instantly filling the AI’s context.
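In Claude Code, custom slash commands are Markdown files under .claude/commands/, where the file body becomes the prompt that runs. A hypothetical sketch of such a command follows; the real file generated by Trellis will differ.

```shell
# Hypothetical custom command definition for /trellis:before-frontend-dev.
mkdir -p .claude/commands/trellis
cat > .claude/commands/trellis/before-frontend-dev.md <<'EOF'
Before writing any frontend code:
1. Read the component conventions in .trellis/spec/frontend/.
2. Run `git log --oneline -10` to review recent changes.
3. List the shared hooks under src/hooks/ and reuse them where possible.
EOF
```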

Reflection: Workflow as Code
This embodies the idea that “everything is code.” We solidify the preparation “ritual” into code. New developers joining the project don’t need to figure out which documents to check before writing code; they just run one command, and all the necessary context is laid out before the AI. This not only improves efficiency but also lowers the barrier to entry.

Deep Dive into Common Questions

In the process of getting to know Trellis, developers often have some doubts. Here we select a few of the most core questions for in-depth answers.

Why Use Trellis Instead of Cursor Skills?

Core Answer: Trellis solves the “determinism” problem, whereas Skills solve the “possibility” problem.

Skills are optional—the AI may or may not decide to use a given Skill, depending on its “mood,” which leads to unstable code quality. Trellis enforces specifications through its underlying Hook injection mechanism. It locks randomness in a cage, ensuring that every piece of generated code meets the standards, so quality does not degrade over time or as the AI’s attention drifts.

Are Spec Files Hand-Written or AI-Written?

Core Answer: Most of the time, let the AI write them, but key architectural insights must be written by you.

Trellis advocates “human-AI collaboration” for document maintenance. For routine style choices (like “use Zustand, not Redux”), you can simply tell the AI, and it will generate and update the spec files. However, complex business logic, specific architectural constraints, and pitfalls the team has already stepped on are things the AI cannot conjure out of thin air; they must be written by senior developers. Being able to teach the team’s hard-won experience to the AI is precisely what keeps you from being replaced by it.

How is This Different from CLAUDE.md / AGENTS.md / .cursorrules?

Core Answer: Trellis uses a layered architecture to replace “monolithic” files, achieving efficient context compression.

Traditional practice is to stuff all rules into one file. The AI has to consume a large number of tokens to filter out irrelevant information every time it reads, and it easily leads to context overflow. Trellis adopts a layered architecture, loading only relevant spec files based on the current task type (e.g., frontend, backend, research). This refined control ensures information accuracy while greatly saving costs.
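A toy illustration of the idea (the paths follow the structure shown earlier, but the selection logic here is simplified shell, not Trellis's actual implementation):

```shell
# Conceptual sketch of layered context loading: inject only the spec
# subtree that matches the current task type, not one monolithic file.
mkdir -p .trellis/spec/frontend .trellis/spec/backend
echo "Use PascalCase for components." > .trellis/spec/frontend/components.md
echo "Handlers return typed errors."  > .trellis/spec/backend/errors.md

TASK_TYPE=frontend
cat .trellis/spec/"$TASK_TYPE"/*.md   # only frontend rules enter the context
```

For a frontend task, the backend rules never consume a single token of the context window.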

Will Multi-Person Collaboration Cause Conflicts?

Core Answer: No, because everyone has their own independent physical space.

Trellis considered team collaboration from the start of its design. Through the .trellis/workspace/{name}/ directory, each developer has their own dedicated workspace. Your session records and temporary files are stored here, completely isolated from others. Everyone shares the specifications in spec/, but the operating environment is personalized and independent.

How Does the AI Know Previous Conversation Content?

Core Answer: Through the persistent Journal mechanism and indexing system.

At the end of each conversation, you can run /trellis:record-session. The AI will automatically write the summary of this session into .trellis/workspace/{name}/journal-N.md and build an index in index.md. When you start /trellis:start next time, the AI will automatically read recent journals and Git information. This means the AI doesn’t just “see” the code; it also “reads” and understands your previous thinking and decision-making process. By the way, these journal files can even be submitted directly as your work daily report—killing two birds with one stone.
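The resulting layout is simple enough to sketch by hand. The file contents below are illustrative; in practice /trellis:record-session writes these files for you.

```shell
# Illustrative persistence layout: per-user journals plus an index.
mkdir -p .trellis/workspace/your-name
cat > .trellis/workspace/your-name/journal-1.md <<'EOF'
## Session summary
- Switched state management from Redux to Zustand; updated spec/frontend.
- Next step: migrate UserList to the new store.
EOF
echo "- journal-1.md: Zustand migration" > .trellis/workspace/your-name/index.md
```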

Roadmap and Future Outlook

Trellis already possesses powerful core functions, but its vision goes far beyond that. According to the project roadmap, we will see the following exciting evolution in the future:

  • Better Code Review: The automated review process will become more complete, covering not just static checks but also deep verification of logical consistency.
  • Skill Packs: Pre-configured workflow packs will be launched in the future. Just like installing npm packages, you will be able to one-click install a “React Native Best Practices Pack” or “Go Microservices Architecture Pack,” ready to use.
  • Broader Tool Support: Besides Claude Code and Cursor, Trellis plans to support more tools such as OpenCode and Codex, building a unified AI development standard.
  • Stronger Session Continuity: Automatic saving and vectorized retrieval of the full session history, bringing the AI’s memory closer to human levels.
  • Visual Parallel Sessions: A UI to view the progress, logs, and output of each Agent in real time, making the parallel development process transparent and controllable.

Conclusion and Action Guide

Trellis is not just a collection of simple scripts; it is a set of engineering methodologies for the AI programming era. It upgrades the AI from a “chat toy” to an “engineering tool” and solidifies “personal tricks” into “team assets.”

Practical Summary / Action Checklist

  1. Install the Framework:

    npm install -g @mindfoldhq/trellis@latest
    
  2. Initialize the Project:

    trellis init -u your-name
    
  3. Write Specifications: Define your code styles and architectural decisions in the .trellis/spec/ directory.
  4. Start Development: Run claude and experience the automatic injection of standard compliance.
  5. Try Parallelism: Configure worktree.yaml and use /trellis:parallel to enable multi-task concurrency.
  6. Record Sessions: Use /trellis:record-session at the end of work to save context memory.

One-page Summary

  • Core Problem: High randomness in AI coding, easy context loss, and inconsistent team collaboration standards.
  • Solution: The Trellis framework.
  • Key Means: Hook automatic injection, layered Spec library, Git Worktree parallel sessions, Workspace physical isolation, Journal persistence.
  • Final Effect: AI becomes controllable, memorable, and collaborative; development efficiency significantly improves; code quality is uniformly stable.
  • Target Audience: Teams pursuing high-quality code, independent developers using Claude Code/Cursor, technical leads managing complex AI workflows.

Frequently Asked Questions (FAQ)

  1. Does Trellis support editors other than Claude Code and Cursor?
    Currently, it focuses on Claude Code and Cursor, but the roadmap has planned support for OpenCode and Codex.

  2. Can I introduce Trellis midway into an existing project?
    Yes, the Trellis initialization process is non-intrusive. It only adds .trellis and .claude directories and will not break the existing code structure.

  3. What if team specifications conflict?
    Files in the .trellis/spec/ directory can be version controlled (e.g., via Git). When specifications conflict, they can be resolved through the code review process, and the merged specifications automatically sync to all members.

  4. Will using Trellis significantly increase Token consumption?
    Trellis adopts a layered loading strategy, injecting only specifications relevant to the current task. This avoids loading irrelevant large files, so compared to brute-force stuffing a huge Prompt file, it actually helps save Tokens.

  5. How is privacy protected in the personal workspace (workspace/{name})?
    It is generally recommended to add this directory to .gitignore, so personal session records and temporary state are never committed to the shared repository and remain stored locally.

  6. Does the /trellis:parallel command have requirements for the Git version?
    Since it relies on the Git worktree feature, it is recommended to use a newer version of Git (2.17+) for the best experience.

  7. Is Trellis open source?
    Yes, Trellis is open source on GitHub under the FSL license. Community contributions and feedback are welcome.