# Agent Skills: Transforming Best Practice Playbooks into Reusable Capabilities for AI Coding Agents
Core Question: How can we systematize industry best practices so that AI coding agents can understand, apply, and scale them effortlessly?
The evolution of software development is being accelerated by AI coding agents, but a persistent challenge remains: how do we ensure these agents write code that adheres to the high standards set by years of engineering experience? Vercel has released agent-skills, a collection of capabilities that transforms best practice playbooks into reusable skills for AI coding agents. This project implements the open Agent Skills specification, focusing first on React and Next.js performance, web design review, and claimable deployments. By packaging expert knowledge into a standardized format, developers can now install skills much as they install npm packages, and AI agents can discover and apply them during normal coding flows.
This approach bridges the gap between abstract guidelines and practical application. Instead of relying on ad-hoc prompt engineering for every project, teams can now encode their rules—such as “eliminate waterfalls” or “ensure accessibility compliance”—into structured skills that agents can load on-demand. This article explores the architecture of Agent Skills, the three core skills currently available, the technical requirements for building custom skills, and how to integrate this powerful system into your development workflow.
## The Agent Skills Architecture: An Open Standard for AI Capabilities
What defines the Agent Skills format and why does it matter?
The Agent Skills format is an open standard designed to package capabilities for AI agents. At its core, a skill is simply a folder containing instructions and optional scripts, but the power lies in the specific structure that different AI tools—such as Claude Code, Cursor, and Copilot—can universally understand. This standardization ensures that a skill created today can be consumed by the coding agents of tomorrow, preventing tool lock-in and fostering a collaborative ecosystem.
A typical skill within the vercel-labs/agent-skills repository consists of three primary components:
- **`SKILL.md`**: This is the brain of the skill. It contains natural language instructions that describe exactly what the skill does and how it should behave. It serves as the contract between the developer and the AI agent.
- **`scripts/` directory**: This contains helper commands that the agent can execute to inspect or modify the project. Instead of asking the AI to “write a script to deploy,” the AI simply invokes a pre-tested, reliable script from this directory.
- **`references/` directory (optional)**: This holds additional documentation, examples, or reference material that provides depth without cluttering the main instruction file.
**Reflection:** The elegance of this architecture is its simplicity. By relying on standard file system structures and Markdown, the format lowers the barrier to entry. It doesn’t require learning a complex new DSL or configuration language. Any developer who can write a Markdown file and a bash script can contribute to the ecosystem. This openness is crucial for long-term adoption, as it invites community participation rather than restricting it to a specific vendor’s platform.
### The Role of AGENTS.md in Optimization
A specific optimization employed by the react-best-practices skill involves compiling individual rule files into a single AGENTS.md file. This file is specifically optimized for agents. By aggregating rules into one document, it serves as a comprehensive knowledge source that can be loaded as a single context block during a code review or refactor. This aggregation removes the need for repetitive, ad-hoc prompt engineering per project. The AI doesn’t need to be told “check for X, Y, and Z” every time; it simply loads the AGENTS.md and applies the rules systematically.
## Deep Dive into the Core Skills
What specific engineering challenges do the three core skills address?
The repository currently ships with three main skills that target common frontend workflows: performance optimization, UI/UX compliance, and deployment. Each skill solves a distinct problem that developers face daily, turning subjective checks into objective, automated actions.
### 1. React Best Practices: Encoding 10 Years of Performance Knowledge
Core Question: How can we ensure AI agents systematically enforce performance rules rather than relying on chance?
The react-best-practices skill encodes React and Next.js performance guidance into a structured rule library. It is the distillation of a decade of optimization work from Vercel Engineering, containing more than 40 rules grouped into 8 categories.
The skill is designed to be used when writing new React components, implementing data fetching (client or server-side), reviewing code for performance issues, or optimizing bundle sizes. What makes this skill particularly powerful is the prioritization system. Each rule includes an impact rating, ensuring that agents tackle Critical issues first, before moving on to High or Medium priority items.
Categories covered include:
- **Eliminating waterfalls (Critical):** Network waterfalls occur when requests depend on each other sequentially, slowing down page loads. The skill identifies patterns where a client-side fetch depends on a server-side render, for example, and suggests optimizations like moving data fetching to the server.
- **Bundle size optimization (Critical):** This involves identifying large dependencies, unused code, and opportunities for code splitting to reduce the amount of JavaScript the browser must download.
- **Server-side performance (High):** This covers caching strategies, reducing database queries, and optimizing the initial HTML generation speed.
- **Client-side data fetching (Medium-High):** Guidelines on using libraries like React Query or SWR efficiently, handling loading states, and avoiding redundant fetches.
- **Re-render optimization (Medium):** Identifying components that re-render unnecessarily and suggesting fixes like `React.memo`, `useMemo`, or `useCallback`.
- **Rendering performance (Medium):** Advice on avoiding layout thrashing and using CSS transforms instead of `top`/`left` properties for animations.
- **JavaScript micro-optimizations (Low-Medium):** Fine-tuning code for execution speed, though this is deprioritized compared to architectural changes.
Every rule is expressed with concrete code examples—showing both an “anti-pattern” (the wrong way) and a corrected version. When an AI agent reviews a React component, it doesn’t just guess; it maps the findings directly onto these specific, validated rules.
**Scenario:** Imagine a developer has just built a new product listing page. They ask the agent, “Review this component for performance issues.” The agent loads `react-best-practices`, scans the code, and identifies a client-side `useEffect` fetching data that blocks rendering. It cites the “Eliminating waterfalls” rule and provides the corrected code snippet to move the fetch to the server component. This transforms a vague request into a precise, expert-level fix.
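For illustration, the before-and-after of such a fix might look like the sketch below. The component names, the `/api/products` route, and the placeholder URL are assumptions made for this example; they are not taken from the rule library itself.

```tsx
// Illustrative sketch only -- names, routes, and URLs below are hypothetical.

// Anti-pattern: a client component renders, then fetches, then re-renders,
// producing a render -> fetch -> render waterfall.
'use client';
import { useEffect, useState } from 'react';

interface Product { id: string; name: string; }

export function ProductListClient() {
  const [products, setProducts] = useState<Product[]>([]);
  useEffect(() => {
    fetch('/api/products') // hypothetical API route
      .then((res) => res.json())
      .then(setProducts);
  }, []);
  return <ul>{products.map((p) => <li key={p.id}>{p.name}</li>)}</ul>;
}

// Suggested fix (in a separate file): a server component awaits the data,
// so it arrives with the initial HTML and the client-side waterfall disappears.
export async function ProductListServer() {
  const res = await fetch('https://example.com/api/products'); // placeholder URL
  const products: Product[] = await res.json();
  return <ul>{products.map((p) => <li key={p.id}>{p.name}</li>)}</ul>;
}
```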
### 2. Web Design Guidelines: Automating Comprehensive UI Audits
Core Question: How can we catch subtle UI and accessibility issues that are often missed during manual code reviews?
The web-design-guidelines skill focuses on user interface and user experience quality. It is a massive resource, containing more than 100 rules that span accessibility, focus handling, form behavior, animation, typography, images, performance, navigation, dark mode, touch interaction, and internationalization.
This skill is triggered by prompts like “Review my UI,” “Check accessibility,” or “Audit design.” Its primary value lies in catching the “edge cases” of web development—the details that make an app feel professional and usable for everyone.
During a review, an agent can use these rules to detect:
- **Missing ARIA attributes:** Essential for screen readers to interpret the page correctly.
- **Incorrect label associations:** Ensuring form inputs are properly linked to their labels.
- **Misuse of animation:** Detecting if animations run even when the user has requested reduced motion via system settings (a critical accessibility requirement).
- **Missing alt text or lazy loading:** Issues that affect both accessibility and Core Web Vitals performance scores.
- **Focus management:** Ensuring keyboard users can see where they are on the page (visible focus states).
**Scenario:** A team is preparing to launch a new dashboard. Before the merge request is approved, a developer asks, “Check this page against best practices.” The agent runs the `web-design-guidelines` skill and flags that the “Submit” button in a modal is not focusable when the modal opens, breaking keyboard navigation. It also notes that an image is missing an `alt` attribute. These issues are easy to overlook visually but critical for a segment of the user base.
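As a purely illustrative sketch of one such rule applied in application code, a component might gate its animation behind the user’s reduced-motion preference. The hook and class names here are hypothetical, not taken from the guidelines themselves.

```tsx
// Illustrative only: respect the prefers-reduced-motion setting the skill checks for.
import { useEffect, useState, type ReactNode } from 'react';

function usePrefersReducedMotion(): boolean {
  const [reduced, setReduced] = useState(false);
  useEffect(() => {
    const query = window.matchMedia('(prefers-reduced-motion: reduce)');
    setReduced(query.matches);
    const onChange = (event: MediaQueryListEvent) => setReduced(event.matches);
    query.addEventListener('change', onChange);
    return () => query.removeEventListener('change', onChange);
  }, []);
  return reduced;
}

export function AnimatedBanner({ children }: { children: ReactNode }) {
  const reducedMotion = usePrefersReducedMotion();
  // Skip the decorative slide-in animation when the user has asked for reduced motion.
  return <div className={reducedMotion ? '' : 'animate-slide-in'}>{children}</div>;
}
```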
### 3. Vercel Deploy Claimable: Bridging the Gap Between Code and Production
Core Question: How can we simplify the deployment process to enable instant sharing and iteration?
The vercel-deploy-claimable skill connects the agent review loop directly to deployment. It allows agents to package the current project into a tarball, auto-detect the framework based on package.json, and create a deployment on Vercel instantly.
This skill is particularly valuable for rapid prototyping and collaboration. It supports over 40 frameworks and handles static HTML projects automatically. Crucially, it excludes node_modules and .git from uploads to keep deployment times fast.
The Workflow:
1. The agent packages the project into a tarball.
2. It detects the framework (Next.js, Vite, Astro, etc.).
3. It uploads to the deployment service.
4. It returns two URLs.
The dual-URL output is a key feature of this skill:
- **Preview URL:** The live site that can be immediately shared.
- **Claim URL:** A link that allows a user or team to attach the deployment to their own Vercel account without sharing credentials from the original environment.
**Scenario:** A developer is working on a new feature branch and needs feedback from a product manager who doesn’t run the code locally. The developer types, “Deploy this and give me the link.” The agent executes the skill. Moments later, the developer has a live `https://...vercel.app` link to send to the PM. Once the feature is approved, the developer sends the Claim URL to the DevOps team, who securely transfer the deployment to the production organization. This flow removes the friction of configuring CI/CD pipelines for every experimental branch.
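Conceptually, the packaging step resembles the sketch below. This is not the skill’s actual script; the argument handling and the final upload step are assumptions made for illustration, with only the exclusions (`node_modules`, `.git`) taken from the description above.

```bash
#!/bin/bash
# Illustrative sketch only -- not the skill's real implementation.
set -e

PROJECT_DIR="${1:-.}"            # hypothetical argument: project root
ARCHIVE="/tmp/deploy-$$.tgz"     # temporary tarball

trap 'rm -f "$ARCHIVE"' EXIT     # clean up even if packaging fails

echo "Packaging ${PROJECT_DIR}..." >&2   # status messages go to stderr

# Exclude node_modules and .git, as the skill does, to keep uploads fast.
tar --exclude='./node_modules' --exclude='./.git' \
    -czf "$ARCHIVE" -C "$PROJECT_DIR" .

# A real implementation would now upload the archive and print the
# Preview URL and Claim URL to stdout for the agent to capture.
echo "$ARCHIVE"
```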
## Installation and Integration: Getting Started with Agent Skills
How do you install and wire these skills into your existing coding environment?
Skills are designed to be installed via command line interfaces (CLI) that feel familiar to any developer who has used npm. The installation process is streamlined to scan your environment and place files exactly where they need to be.
### The Installation Commands
The launch announcement highlights a simple installation path using a specialized CLI:
```bash
npx skills i vercel-labs/agent-skills
```
This command fetches the agent-skills repository and prepares it as a skills package.
However, the ecosystem also provides an add-skill CLI, which is more robust and designed to integrate skills directly into specific agents’ configurations. A typical flow looks like this:
```bash
npx add-skill vercel-labs/agent-skills
```
When you run this, the CLI performs an intelligent scan. It checks for installed coding agents by looking for their configuration directories. For example:
- **Claude Code:** Scans for a `.claude` directory.
- **Cursor:** Scans for a `.cursor` directory or a directory under the home folder.
The CLI then automatically installs the chosen skills into the correct `skills` folders for each detected tool. This means you don’t have to manually copy-paste files into obscure directories; the CLI handles the pathing for you.
### Granular Control Over Installation
The add-skill CLI offers flags for non-interactive usage, allowing you to script the installation or control exactly what gets added. For instance, if you only want the React best practices and only for Claude Code, you can run:
```bash
npx add-skill vercel-labs/agent-skills --skill react-best-practices -g -a claude-code -y
```
- `--skill`: Specifies which skill to install.
- `-g`: Indicates a global installation.
- `-a`: Specifies the target agent (e.g., `claude-code`).
- `-y`: Confirms “yes” to all prompts (non-interactive mode).
You can also list the available skills before committing to an installation:
```bash
npx add-skill vercel-labs/agent-skills --list
```
### How Agents Discover Skills
Once installed, skills reside in agent-specific directories such as ~/.claude/skills or .cursor/skills. The agent automatically discovers these files upon startup or during a conversation. It reads the SKILL.md files to understand what capabilities are available.
When you interact with the agent using natural language, such as “Review this component for React performance issues” or “Check this page for accessibility problems,” the agent inspects the loaded skills. It matches your intent against the skill descriptions and routes your request to the appropriate skill. This discovery mechanism is seamless—the user doesn’t need to remember specific command names, just what they want to achieve.
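You can also inspect the installed skills yourself to confirm what the agent will see. The commands below assume the global Claude Code location mentioned above (`~/.claude/skills`); adjust the path for other agents.

```bash
# List the skills the agent can discover at startup.
ls ~/.claude/skills/

# The SKILL.md frontmatter (name and description) is what the agent
# matches your request against; peek at it for any installed skill.
head -n 5 ~/.claude/skills/react-best-practices/SKILL.md
```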
## Technical Guide: Building Your Own Agent Skills
How can you create custom skills to automate your team’s specific workflows?
While the provided skills cover general frontend needs, the true power of the Agent Skills format lies in its extensibility. You can create custom skills to encode your team’s internal logic, deployment pipelines, or proprietary APIs. The repository’s AGENTS.md file provides a comprehensive guide for this process.
#### Directory Structure and Naming Conventions
A skill must adhere to a strict directory structure to be recognized by agents.
```
skills/
  {skill-name}/            # kebab-case directory name (e.g., vercel-deploy)
    SKILL.md               # Required: skill definition
    scripts/               # Required: executable scripts
      {script-name}.sh     # Bash scripts (preferred)
  {skill-name}.zip         # Required: packaged for distribution
```
Naming Conventions:
- **Skill directory:** Must use kebab-case (e.g., `log-monitor`, `api-gateway-deploy`).
- **SKILL.md:** Always uppercase, always this exact filename.
- **Scripts:** Must use kebab-case `.sh` filenames (e.g., `deploy.sh`, `fetch-logs.sh`).
- **Zip file:** Must match the directory name exactly: `{skill-name}.zip`.
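Under these conventions, scaffolding a new skill could look like the following sketch; `log-monitor` is simply the hypothetical example name from the list above.

```bash
# Scaffold a hypothetical "log-monitor" skill following the naming conventions.
mkdir -p skills/log-monitor/scripts
touch skills/log-monitor/SKILL.md
touch skills/log-monitor/scripts/fetch-logs.sh
chmod +x skills/log-monitor/scripts/fetch-logs.sh
```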
#### Drafting the SKILL.md
The SKILL.md is the interface definition for your skill. A standard format is recommended to ensure agents can parse it correctly.
````markdown
---
name: {skill-name}
description: {One sentence describing when to use this skill. Include trigger phrases like "Deploy my app", "Check logs", etc.}
---

# {Skill Title}

{Brief description of what the skill does.}

## How It Works

{Numbered list explaining the skill's workflow}

## Usage

```bash
bash /mnt/skills/user/{skill-name}/scripts/{script}.sh [args]
```

Arguments:

- `arg1` – Description (defaults to X)

Examples:

{Show 2-3 common usage patterns}

## Output

{Show example output users will see}

## Present Results to User

{Template for how Claude should format results when presenting to users}

## Troubleshooting

{Common issues and solutions, especially network/permissions errors}
````
**Reflection:** The `description` field in the frontmatter is arguably the most important line. This is what the AI agent uses to decide *when* to load your skill. If the description is vague (e.g., "Does stuff"), the agent will never use it. If it is specific and includes trigger phrases (e.g., "Run the full regression test suite"), the agent will reliably invoke it when the user asks for a regression test.
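As a hypothetical illustration of that point (the skill and wording below are invented, not taken from the repository), a description with explicit trigger phrases gives the agent something concrete to match against:

```yaml
# Hypothetical frontmatter for an internal "log-monitor" skill.
# "description: Does stuff with logs" would give the agent nothing to route on;
# naming the trigger phrases does.
name: log-monitor
description: Fetch and summarize production logs. Use when the user asks to "Check logs", "Show recent errors", or "Tail the latest deployment".
```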
#### Scripting Requirements and Best Practices
The `scripts` directory contains the logic that actually performs the work. To ensure reliability and predictability when called by an AI, scripts must adhere to specific requirements:
1. **Shebang:** Always use `#!/bin/bash`.
2. **Fail-Fast Behavior:** Use `set -e` at the start of the script. This ensures the script stops immediately if any command fails, preventing cascading errors.
3. **Output Channels:**
* **Status Messages:** Write these to `stderr` using `echo "Message" >&2`. This separates logging from data.
* **Machine-Readable Output:** Write data (JSON, URLs, etc.) to `stdout`. This allows the AI to capture and parse the result cleanly.
4. **Cleanup:** Include a trap to remove temporary files even if the script fails or is interrupted.
5. **Pathing:** Reference the script path explicitly as `/mnt/skills/user/{skill-name}/scripts/{script}.sh`.
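Putting those requirements together, a skill script might look like the sketch below. The skill name, argument, and API endpoint are hypothetical; only the conventions themselves (shebang, `set -e`, the stderr/stdout split, the cleanup trap, and the `/mnt/skills/user/...` path) come from the list above.

```bash
#!/bin/bash
# Hypothetical script for a "log-monitor" skill, illustrating the conventions above.
# Invoked by the agent as:
#   bash /mnt/skills/user/log-monitor/scripts/fetch-logs.sh [hours]
set -e  # fail fast: stop on the first failing command

HOURS="${1:-1}"        # optional argument with a default
TMP_FILE="$(mktemp)"   # temporary working file

# Cleanup runs even if the script fails or is interrupted.
trap 'rm -f "$TMP_FILE"' EXIT

# Status messages go to stderr so they never pollute the machine-readable output.
echo "Fetching logs for the last ${HOURS}h..." >&2

# Placeholder endpoint standing in for your own internal API.
curl -sf "https://logs.example.internal/api/recent?hours=${HOURS}" -o "$TMP_FILE"

# Machine-readable result (JSON) goes to stdout for the agent to parse.
cat "$TMP_FILE"
```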
#### Optimizing for Context Efficiency
AI agents have a limited "context window"—the amount of information they can process at once. To ensure your skills are efficient and don't consume too much context, follow these best practices:
* **Keep SKILL.md under 500 lines:** If you need to include extensive reference material, put it in separate files within the `references/` directory.
* **Write Specific Descriptions:** As mentioned, this helps the agent know exactly when to activate the skill, avoiding loading unnecessary ones.
* **Use Progressive Disclosure:** Link to supporting files that only get read when the specific need arises.
* **Prefer Scripts over Inline Code:** When the script executes, it doesn't consume the AI's context—only the final output does. Putting complex logic in a script is much more context-efficient than asking the AI to write and execute code inline.
* **File References:** References resolve only one level deep, so link directly from `SKILL.md` to supporting files rather than chaining through intermediate documents.
#### Packaging for Distribution
Once you have created or updated a skill, you must package it for distribution:
```bash
cd skills
zip -r {skill-name}.zip {skill-name}/
```
This zip file is what other developers or team members will install.
#### End-User Installation Methods
For users to install your custom skill, you should document the following methods:
**For Claude Code:**

```bash
cp -r skills/{skill-name} ~/.claude/skills/
```

**For claude.ai:**
Add the skill to the project knowledge base or paste the SKILL.md contents directly into the conversation context.

**Network Access:**
If your skill requires network access (e.g., to hit an internal API), instruct users to add the required domains in their settings at claude.ai/settings/capabilities.
## Conclusion: The Future of Human-AI Collaboration
The introduction of Agent Skills represents a maturation in how we interact with AI coding tools. We are moving away from generic prompts and towards structured, reusable expertise. By packaging capabilities as skills, we turn best practices into executable, version-controlled building blocks.
This model allows teams to scale their engineering culture. A senior engineer’s insights into performance optimization can now be encoded into a skill that every junior developer—and every AI agent they work with—has access to. It reduces the cognitive load on developers, allowing them to focus on high-level architecture and business logic while the agents handle the rigorous application of rules.
Whether you are deploying a simple static site or auditing a complex React application, Agent Skills provides a framework to make those workflows faster, safer, and more consistent. As the ecosystem grows, we can expect to see skills covering security auditing, API documentation generation, and database migrations, further blurring the line between human intent and machine execution.
## Practical Summary / Action Checklist
- [ ] **Understand the Format:** Review the directory structure required for a skill (folder, SKILL.md, scripts, references).
- [ ] **Install Core Skills:** Run `npx add-skill vercel-labs/agent-skills` to get the React, Web Design, and Deploy skills.
- [ ] **Verify Integration:** Check your agent’s directory (e.g., `~/.claude/skills`) to ensure the skills are present.
- [ ] **Test a Review:** Ask your agent to “Review this React component for performance issues” to test the `react-best-practices` skill.
- [ ] **Test a Deployment:** Ask your agent to “Deploy this and give me the link” to test the `vercel-deploy-claimable` skill.
- [ ] **Build a Custom Skill:** Identify a repetitive task in your workflow and create a bash script for it.
- [ ] **Write the Manifest:** Create a SKILL.md with a clear description and usage examples for your custom skill.
- [ ] **Optimize Context:** Ensure your scripts output JSON to stdout and logs to stderr to save AI context tokens.
## One-page Summary
| Component | Description |
|---|---|
| Concept | An open format for packaging AI agent capabilities as folders with instructions and scripts. |
| Core Skills | 1. `react-best-practices` (40+ rules)<br>2. `web-design-guidelines` (100+ rules)<br>3. `vercel-deploy-claimable` (instant deployment) |
| Installation | `npx add-skill vercel-labs/agent-skills` or `npx skills i vercel-labs/agent-skills` |
| File Structure | `skills/{skill-name}/SKILL.md` + `scripts/{script}.sh` |
| Key Benefit | Turns ad-hoc prompting into structured, reusable, version-controlled engineering knowledge. |
| Output Handling | User messages to `stderr`; machine-readable data (JSON) to `stdout`. |
| Context Strategy | Lazy loading; full SKILL.md loads only when relevant to the user’s request. |
## Frequently Asked Questions
Q: What tools currently support the Agent Skills format?
A: The format is designed to be open and is currently compatible with Claude Code, Cursor, and Copilot, among other AI coding agents that adhere to the specification.
Q: Can I use Agent Skills without an internet connection?
A: Yes, once the skills are downloaded and installed locally, the AI agent can access the instructions and scripts. However, if the skill itself requires network access (like the deployment skill), you will need an active connection.
Q: How does the “Claimable” deployment feature work?
A: The vercel-deploy-claimable skill generates a special “Claim URL.” When you open this URL while logged into Vercel, it transfers the ownership of the deployment to your account, allowing you to manage it going forward without needing the original deployer’s credentials.
Q: Is there a limit to how many skills I can install?
A: Technically, no. Because only the skill names and descriptions are loaded into context at startup, having many skills installed should not significantly impact performance. The full skill documentation is only loaded when the agent determines it is relevant.
Q: Can skills be written in languages other than Bash?
A: The documentation and best practices provided in the reference files specify Bash scripts (using #!/bin/bash). While agents can theoretically execute other code types, the standardized format currently emphasizes Bash for consistency and reliability in the scripts directory.
Q: How do I update a skill to a new version?
A: You can re-run the installation command (e.g., npx add-skill vercel-labs/agent-skills) to fetch the latest versions. If you are manually maintaining a custom skill, you would update the files in the skill directory and repackage the zip file.
Q: What happens if a script fails during execution?
A: Best practices dictate using set -e in scripts, which causes them to stop immediately upon encountering an error. The agent will see the error message (typically output to stderr) and can report this back to the user, often suggesting troubleshooting steps defined in the SKILL.md.
Q: Are the skills free to use?
A: The repository is licensed under MIT, meaning the skills are free to use, modify, and distribute. However, deploying to Vercel may involve that platform’s specific usage limits or pricing tiers depending on your plan.