5 Agent Skill Design Patterns Every ADK Developer Should Know

Most developers building AI agents spend their energy on the wrong problem. They obsess over SKILL.md formatting — getting the YAML syntax right, structuring directories correctly, following the spec to the letter. But with more than 30 agent tools — including Claude Code, Gemini CLI, and Cursor — now standardized on the same directory layout, the formatting problem is essentially solved.

The real challenge is what comes next: how do you design the logic inside a Skill?

A Skill that wraps FastAPI conventions operates completely differently from a four-step documentation pipeline — even though their SKILL.md files look identical from the outside. The spec tells you how to package a Skill. It tells you nothing about how to structure what’s inside it.

By studying how Skills are built across the ecosystem — from Anthropic’s repositories to Vercel and Google’s internal guidelines — five design patterns emerge that show up again and again. Each one solves a different class of problem, and each one comes with a working SKILL.md example you can drop into an ADK project today.


Why Skill Design Patterns Matter

Think of a SKILL.md file as a job description for your AI agent. The format specifies how to package the role. The design pattern specifies how the agent should actually do the work.

Without clear internal structure, Skills tend to fail in predictable ways:

  • Output format changes between runs for no apparent reason
  • The agent skips critical steps and jumps straight to a result
  • The context window fills up with irrelevant information
  • Complex workflows become impossible to maintain or extend

These five patterns address each of those failure modes directly.


Pattern 1: The Tool Wrapper

What problem does it solve?

When your agent needs to work with a specific library or framework, you have two options: hardcode all the conventions into your system prompt, or load them dynamically when they’re actually needed. The first approach makes your prompt bloated and brittle. The second approach is exactly what the Tool Wrapper pattern implements.

The SKILL.md file listens for specific library keywords in the user’s prompt, dynamically loads your internal documentation from the references/ directory, and treats those rules as absolute truth. This is the mechanism that lets you distribute your team’s internal coding standards directly into every developer’s workflow — without anyone having to memorize them.


How it works, step by step:

  1. Detect relevant keywords in the user’s prompt (library names, framework references)
  2. Dynamically load internal documentation from references/
  3. Apply those rules as the authoritative standard for the current task

Code example: FastAPI expert Skill

# skills/api-expert/SKILL.md
---
name: api-expert
description: FastAPI development best practices and conventions. Use when building, reviewing, or debugging FastAPI applications, REST APIs, or Pydantic models.
metadata:
  pattern: tool-wrapper
  domain: fastapi
---

You are an expert in FastAPI development. Apply these conventions to the user's code or question.

## Core Conventions

Load 'references/conventions.md' for the complete list of FastAPI best practices.

## When Reviewing Code
1. Load the conventions reference
2. Check the user's code against each convention
3. For each violation, cite the specific rule and suggest the fix

## When Writing Code
1. Load the conventions reference
2. Follow every convention exactly
3. Add type annotations to all function signatures
4. Use Annotated style for dependency injection
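The trigger-and-load mechanism is simple enough to sketch in plain Python. This is an illustrative sketch, not ADK's actual matching logic; in real tools the Skill is selected from its description, and the keyword set, paths, and function names below are assumptions for demonstration only.

```python
from pathlib import Path

# Hypothetical trigger keywords for the api-expert Skill (an assumption;
# real agent tools match against the Skill's description field).
TRIGGER_KEYWORDS = {"fastapi", "pydantic", "rest api"}

def skill_matches(prompt: str) -> bool:
    """Return True if the prompt mentions any trigger keyword."""
    lowered = prompt.lower()
    return any(keyword in lowered for keyword in TRIGGER_KEYWORDS)

def load_conventions(skill_dir: str) -> str:
    """Load the reference doc only when the Skill actually triggers."""
    return Path(skill_dir, "references", "conventions.md").read_text()

def build_context(prompt: str, skill_dir: str = "skills/api-expert") -> str:
    """Inject the conventions into context only for matching prompts."""
    if not skill_matches(prompt):
        return prompt  # no context tokens spent on irrelevant docs
    return f"{load_conventions(skill_dir)}\n\n{prompt}"
```

The point of the sketch is the conditional: the conventions file costs zero context tokens until a prompt actually mentions the library.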

When to use it

  • Packaging internal framework conventions for your team
  • Giving your agent instant expertise in a specific technology stack
  • Distributing coding standards across your entire development workflow

Pattern 2: The Generator

What problem does it solve?

Ask an agent to produce the same document twice. The structure comes out differently each time. The Generator pattern fixes this by turning output creation into a fill-in-the-blank process rather than a freeform generation task.

The pattern uses two optional directories: assets/ holds your output template, and references/ holds your style guide. The Skill instructions act as a project manager — load the template, read the style guide, ask the user for any missing information, then populate the document section by section.

The key insight here is that the SKILL.md file itself contains neither the document structure nor the writing rules. It simply coordinates the retrieval of those files and forces the agent to apply them in sequence.


How it works, step by step:

  1. Load the output template from assets/
  2. Load the style guide from references/
  3. Identify and request any missing variables from the user
  4. Populate the template section by section
  5. Return the completed document

Code example: Technical report generator

# skills/report-generator/SKILL.md
---
name: report-generator
description: Generates structured technical reports in Markdown. Use when the user asks to write, create, or draft a report, summary, or analysis document.
metadata:
  pattern: generator
  output-format: markdown
---

You are a technical report generator. Follow these steps exactly:

Step 1: Load 'references/style-guide.md' for tone and formatting rules.

Step 2: Load 'assets/report-template.md' for the required output structure.

Step 3: Ask the user for any missing information needed to fill the template:
- Topic or subject
- Key findings or data points
- Target audience (technical, executive, general)

Step 4: Fill the template following the style guide rules. Every section in the template must be present in the output.

Step 5: Return the completed report as a single Markdown document.
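Steps 3 and 4 of the Skill reduce to a fill-in-the-blank loop. Here is a minimal Python sketch of that logic; the template string and variable names are invented stand-ins for whatever assets/report-template.md actually contains.

```python
import re

# Hypothetical template mirroring assets/report-template.md (an assumption).
TEMPLATE = """# {title}

## Audience
{audience}

## Key Findings
{findings}
"""

def missing_variables(template: str, provided: dict) -> list:
    """Return the placeholders the user has not supplied yet (Step 3)."""
    placeholders = re.findall(r"\{(\w+)\}", template)
    return [name for name in placeholders if name not in provided]

def fill_template(template: str, provided: dict) -> str:
    """Populate the template only once every variable is present (Step 4)."""
    gaps = missing_variables(template, provided)
    if gaps:
        raise ValueError(f"Ask the user for: {', '.join(gaps)}")
    return template.format(**provided)
```

Because generation is reduced to filling named slots, two runs with the same inputs produce the same structure every time, which is exactly the guarantee freeform generation can't give you.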

When to use it

  • Generating consistently structured API documentation
  • Standardizing commit message formats across a team
  • Scaffolding project architecture documents
  • Producing batch reports that must follow a uniform structure

Pattern 3: The Reviewer

What problem does it solve?

Most code review prompts grow longer over time as you add more rules — and they still miss things. The Reviewer pattern addresses this by separating what to check from how to check it.

Instead of stuffing every code smell, style violation, and security concern into your system prompt, you store a modular checklist in references/review-checklist.md. The Skill instructions stay static. The review criteria load dynamically.

The practical power of this separation: swap your Python style checklist for an OWASP security checklist, and you instantly have a completely different, specialized security audit — using exactly the same Skill infrastructure.


How it works, step by step:

  1. Load the review checklist from references/
  2. Read and understand the user’s code before critiquing it
  3. Apply each checklist rule, classifying violations by severity
  4. Produce a structured report grouped by severity level

Code example: Python code reviewer

# skills/code-reviewer/SKILL.md
---
name: code-reviewer
description: Reviews Python code for quality, style, and common bugs. Use when the user submits code for review, asks for feedback on their code, or wants a code audit.
metadata:
  pattern: reviewer
  severity-levels: error,warning,info
---

You are a Python code reviewer. Follow this review protocol exactly:

Step 1: Load 'references/review-checklist.md' for the complete review criteria.

Step 2: Read the user's code carefully. Understand its purpose before critiquing.

Step 3: Apply each rule from the checklist to the code. For every violation found:
- Note the line number (or approximate location)
- Classify severity: error (must fix), warning (should fix), info (consider)
- Explain WHY it's a problem, not just WHAT is wrong
- Suggest a specific fix with corrected code

Step 4: Produce a structured review with these sections:
- **Summary**: What the code does, overall quality assessment
- **Findings**: Grouped by severity (errors first, then warnings, then info)
- **Score**: Rate 1-10 with brief justification
- **Top 3 Recommendations**: The most impactful improvements

Understanding the three severity levels

  • error: must fix. Block the merge or deployment.
  • warning: should fix. Resolve before committing.
  • info: worth considering. Address in the next iteration.
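The grouping logic behind the Findings section follows directly from these levels. A minimal sketch, with invented example findings; the tuple shape is an assumption, not part of the spec:

```python
from collections import OrderedDict

# Severity order for the report: errors first, then warnings, then info.
SEVERITY_ORDER = ["error", "warning", "info"]

def group_findings(findings):
    """Group (severity, message) pairs into ordered report sections."""
    grouped = OrderedDict((level, []) for level in SEVERITY_ORDER)
    for severity, message in findings:
        grouped[severity].append(message)
    return grouped

# Invented example findings for illustration.
findings = [
    ("warning", "mutable default argument"),
    ("error", "bare except swallows exceptions"),
    ("info", "consider a dataclass here"),
]
report = group_findings(findings)
```

Keeping the severity taxonomy in code (or in the checklist file) rather than in the prompt is what lets you swap checklists without touching the Skill itself.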

When to use it

  • Automating pull request code reviews
  • Running security vulnerability scans before deployment
  • Training junior developers through structured feedback
  • Running modular audits with different standards — performance, security, and style as separate checklists

Pattern 4: Inversion

What problem does it solve?

Agents have a default bias toward action. The moment a user says “help me design a system,” the agent starts generating architecture diagrams — often missing critical context, often producing something that needs to be thrown away and redone.

The Inversion pattern flips this dynamic entirely. Instead of the user driving the prompt and the agent immediately executing, the agent takes the role of interviewer. It asks structured questions in sequence, waits for complete answers, and refuses to synthesize a final output until it has a full picture of the requirements and constraints.

The mechanism that makes this work is explicit, non-negotiable gating instructions — phrases like “DO NOT start building or designing until all phases are complete.” Without that hard gate, the agent will find a way to start generating early.


Code example: Project planner

# skills/project-planner/SKILL.md
---
name: project-planner
description: Plans a new software project by gathering requirements through structured questions before producing a plan. Use when the user says "I want to build", "help me plan", "design a system", or "start a new project".
metadata:
  pattern: inversion
  interaction: multi-turn
---

You are conducting a structured requirements interview. DO NOT start building or designing until all phases are complete.

## Phase 1 — Problem Discovery (ask one question at a time, wait for each answer)

Ask these questions in order. Do not skip any.

- Q1: "What problem does this project solve for its users?"
- Q2: "Who are the primary users? What is their technical level?"
- Q3: "What is the expected scale? (users per day, data volume, request rate)"

## Phase 2 — Technical Constraints (only after Phase 1 is fully answered)

- Q4: "What deployment environment will you use?"
- Q5: "Do you have any technology stack requirements or preferences?"
- Q6: "What are the non-negotiable requirements? (latency, uptime, compliance, budget)"

## Phase 3 — Synthesis (only after all questions are answered)

1. Load 'assets/plan-template.md' for the output format
2. Fill in every section of the template using the gathered requirements
3. Present the completed plan to the user
4. Ask: "Does this plan accurately capture your requirements? What would you change?"
5. Iterate on feedback until the user confirms
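The hard gate in this Skill is essentially a small state machine: ask the next unanswered question, refuse to synthesize until none remain. A Python sketch of that gate, with phase and question names invented to mirror the SKILL.md above:

```python
# Hypothetical phases/questions mirroring the Skill above (an assumption).
PHASES = {
    "discovery": ["problem", "users", "scale"],
    "constraints": ["deployment", "stack", "requirements"],
}

class Interview:
    """One question at a time; no synthesis until every answer is in."""

    def __init__(self):
        self.answers = {}

    def next_question(self):
        """Return the next unanswered (phase, question), or None when done."""
        for phase, questions in PHASES.items():
            for question in questions:
                if question not in self.answers:
                    return (phase, question)
        return None

    def record(self, question, answer):
        # Question keys are unique across phases in this sketch.
        self.answers[question] = answer

    def can_synthesize(self):
        """The hard gate: True only after every question has an answer."""
        return self.next_question() is None

interview = Interview()
first = interview.next_question()
blocked = interview.can_synthesize()  # False while questions remain
for q in ["problem", "users", "scale", "deployment", "stack", "requirements"]:
    interview.record(q, "answered")
```

Note that `next_question` always returns exactly one question, which is the code-level equivalent of the "one question at a time" instruction.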

Why “one question at a time” matters

It might seem like a small detail, but it’s critical. If the agent fires all six questions at once, users typically answer two or three and skip the rest. Information gets lost. The sequential, one-question-at-a-time structure is what guarantees complete context before the agent does anything substantive.

When to use it

  • Gathering software project requirements
  • Organizing information before designing complex systems
  • Any scenario where acting on incomplete context produces bad results

Pattern 5: The Pipeline

What problem does it solve?

For complex, multi-step tasks, you cannot afford to have the agent skip steps or quietly ignore instructions. The Pipeline pattern enforces a strict sequential workflow using hard checkpoints that the agent cannot bypass.

The instructions themselves serve as the workflow definition. By implementing explicit gate conditions — requiring user approval before moving from one phase to the next — the Pipeline ensures the agent cannot shortcut a complex task and present an unvalidated final result.

This pattern also uses all optional directories strategically: different reference files and templates load only at the specific step where they’re needed. This keeps the context window clean throughout the workflow rather than loading everything upfront.


Code example: API documentation pipeline

# skills/doc-pipeline/SKILL.md
---
name: doc-pipeline
description: Generates API documentation from Python source code through a multi-step pipeline. Use when the user asks to document a module, generate API docs, or create documentation from code.
metadata:
  pattern: pipeline
  steps: "4"
---

You are running a documentation generation pipeline. Execute each step in order. Do NOT skip steps or proceed if a step fails.

## Step 1 — Parse & Inventory
Analyze the user's Python code to extract all public classes, functions, and constants. Present the inventory as a checklist. Ask: "Is this the complete public API you want documented?"

## Step 2 — Generate Docstrings
For each function lacking a docstring:
- Load 'references/docstring-style.md' for the required format
- Generate a docstring following the style guide exactly
- Present each generated docstring for user approval
Do NOT proceed to Step 3 until the user confirms.

## Step 3 — Assemble Documentation
Load 'assets/api-doc-template.md' for the output structure. Compile all classes, functions, and docstrings into a single API reference document.

## Step 4 — Quality Check
Review against 'references/quality-checklist.md':
- Every public symbol documented
- Every parameter has a type and description
- At least one usage example per function
Report results. Fix issues before presenting the final document.

Notice the instruction at the end of Step 2: Do NOT proceed to Step 3 until the user confirms. That’s the checkpoint — an explicit human approval gate. It’s what makes the entire pipeline trustworthy rather than just fast.
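The checkpoint semantics can be sketched as a tiny pipeline runner. This is an illustration of the gating idea, not ADK's implementation; the step names and approver callback are invented.

```python
# Illustrative runner: each step may declare an approval checkpoint.
def run_pipeline(steps, approve):
    """Run steps in order; halt at any checkpoint the approver rejects."""
    results = []
    for name, action, needs_approval in steps:
        output = action()
        if needs_approval and not approve(name, output):
            results.append((name, "halted"))
            return results  # hard gate: later steps never run
        results.append((name, output))
    return results

# Invented steps mirroring the doc pipeline above.
steps = [
    ("parse", lambda: "inventory", False),
    ("docstrings", lambda: "drafts", True),   # Step 2's approval gate
    ("assemble", lambda: "document", False),
]

# An approver that rejects the docstring drafts halts the whole pipeline.
halted = run_pipeline(steps, approve=lambda name, output: False)
```

Because the gate returns early, a rejected checkpoint guarantees that unvalidated output never reaches the assembly step.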

When to use it

  • Generating API documentation from source code
  • Multi-step data processing workflows
  • Content generation tasks that require human review at intermediate stages
  • Any complex workflow where sequence integrity is non-negotiable

How to Choose the Right Pattern

Each pattern answers a different core question about what your agent needs to do:

Pattern decision tree:

  • Give the agent expertise in a specific library or framework → Tool Wrapper
  • Produce consistently structured documents every time → Generator
  • Systematically evaluate code or content against fixed criteria → Reviewer
  • Gather complete requirements before taking action → Inversion
  • Execute complex tasks in a strict, checkpointed sequence → Pipeline

Quick selection guide:

  1. Does the task require domain-specific technical knowledge? → Tool Wrapper
  2. Does the output need a consistent, repeatable structure? → Generator
  3. Does the task involve scoring or auditing existing content? → Reviewer
  4. Do you need complete context before the agent can act usefully? → Inversion
  5. Does the task have multiple steps with hard dependencies between them? → Pipeline
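If it helps to see the guide as executable logic, it reduces to a first-match decision chain. A sketch with invented parameter names:

```python
def choose_pattern(
    domain_knowledge=False,
    fixed_structure=False,
    audits_content=False,
    needs_full_context=False,
    sequenced_steps=False,
):
    """Encode the quick selection guide as a first-match chain."""
    if domain_knowledge:
        return "tool-wrapper"
    if fixed_structure:
        return "generator"
    if audits_content:
        return "reviewer"
    if needs_full_context:
        return "inversion"
    if sequenced_steps:
        return "pipeline"
    return None  # a plain prompt may be enough
```

The order matters: it mirrors the numbered questions above, so the first "yes" wins.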

Patterns Compose — Use Them Together

These five patterns are not mutually exclusive. In practice, the most effective Skills combine them:

  • Pipeline + Reviewer: Add a Reviewer step at the end of any pipeline so the agent audits its own output before presenting it
  • Generator + Inversion: Use Inversion at the start to collect all necessary variables before the Generator fills in the template
  • Tool Wrapper + Pipeline: Load domain knowledge dynamically at the specific pipeline step where it’s needed, keeping earlier steps uncluttered

ADK’s SkillToolset and progressive context disclosure mean your agent only spends context tokens on the patterns it actually needs at runtime. The composition is efficient, not expensive.


Frequently Asked Questions

Is learning design patterns necessary if the SKILL.md format is already standardized?

Yes — they solve different problems. The format tells you how to package a Skill. Design patterns tell you how to structure the logic inside it. You need both.

Won’t Inversion make my agent feel slow and frustrating to use?

Only if you apply it to the wrong tasks. Inversion is for situations where acting on incomplete context produces bad results. For simpler tasks with well-defined inputs, Tool Wrapper or Generator will serve you better.

Do Pipeline checkpoints undermine automation?

Intentionally, yes — in the right places. The Pipeline trades some automation for reliability. If your workflow can run fully automatically without human confirmation, you can remove the approval gates. But you’re accepting the risk of errors propagating undetected through later steps.

Do these patterns work across different agent tools — Claude Code, Cursor, Gemini CLI?

Yes. The Agent Skills specification is open-source and natively supported across all of these tools. The patterns operate at the content design level, independent of any specific tool’s implementation.

Can I build my own patterns beyond these five?

Absolutely. These five emerged from studying real-world Skills across the ecosystem — they’re common patterns, not an exhaustive list. As your workflows grow more complex, you’ll likely discover variations and hybrids that fit your specific use cases.


The Core Takeaway

Stop trying to cram complex, fragile instructions into a single system prompt.

These five patterns give you a vocabulary for decomposing workflows into structured, maintainable, reliable components:

  • Tool Wrapper — on-demand domain expertise
  • Generator — consistent, templated output
  • Reviewer — modular evaluation with separated criteria
  • Inversion — context-first, action-second
  • Pipeline — sequenced execution with hard checkpoints

The Agent Skills specification is open-source. ADK supports all of these patterns natively. You already know how to handle the format. Now you know how to design what goes inside it.