Integrating OpenAI Codex into Claude Code: A Complete Dual-Model Programming Workflow

The core question this article answers is: How do you install and configure the OpenAI Codex plugin within Claude Code to enable two distinct AI models to collaborate on programming tasks?

The answer is straightforward: you add the official OpenAI plugin source to Claude Code’s marketplace, install the Codex plugin, verify your local environment and authentication, and then begin delegating coding tasks to Codex in natural language directly inside your existing Claude Code session. The entire process takes five steps, all executed in your terminal.


Why Bother Adding Codex to Claude Code?

The core question this section answers is: Claude Code is already a capable coding assistant, so what is the practical benefit of integrating OpenAI Codex into it?

Claude Code, developed by Anthropic, is a powerful command-line programming assistant. OpenAI Codex CLI, on the other hand, is OpenAI’s official command-line coding tool. It is built on GPT-series models, specifically optimized for code tasks, and supports reading and writing local files, executing terminal commands, and debugging code. The two tools possess different architectural strengths.

OpenAI released a plugin called codex-plugin-cc specifically for Claude Code. Its core value lies in allowing you to delegate programming tasks directly from your Claude Code session to a locally running instance of Codex CLI. This establishes a Claude + Codex dual-model workflow. In this arrangement, Claude acts as the analytical layer—understanding requirements, deciding whether to delegate, and performing post-generation reviews—while Codex acts as the execution layer, handling the actual code generation.


In a practical scenario, you can think of Claude as the project manager and Codex as the developer. Claude analyzes your request, determines if it is suitable for delegation, and dispatches it. If it is not suitable, Claude handles it internally. This division of labor is not strictly necessary for every task, but it provides a different output style and problem-solving approach for certain types of coding challenges.


How the Billing Works

Using the codex:rescue skill triggers OpenAI API calls, which are billed based on actual token consumption. The rate depends on the specific GPT model utilized. This cost is entirely separate from Claude Code’s Anthropic usage quotas. You must have your own OpenAI API Key or account balance. Simply put: Claude’s costs go to Anthropic, and Codex’s costs go to OpenAI. The two bills are calculated independently.

Reflection: The dual-model collaboration sounds appealing, but the separate billing is easy to overlook. If you are an individual developer, I highly recommend setting a hard usage limit in the OpenAI console before running any bulk tasks. This architecture of “two providers, two billing systems” requires deliberate cost control in a production environment. It is easy to get excited about the workflow and forget that every automated delegation is metered.


Prerequisites Before You Begin

The core question this section answers is: What exact conditions must be met before you type the first installation command?

There are three mandatory prerequisites. If any one of these is missing, the setup will fail.

| # | Prerequisite | Purpose |
|---|--------------|---------|
| 1 | Claude Code CLI installed | This is the host environment; the plugin runs inside it |
| 2 | Node.js (v18+) and npm installed | The Codex CLI requires the Node.js runtime to execute |
| 3 | An OpenAI account with available API credits | Codex calls GPT models in the cloud, which requires payment |

The testing environment for this workflow was Windows 10. If you are using macOS or Linux, the file system paths will differ slightly. For example, the plugin cache path will be ~/.claude/plugins/cache/ instead of C:/Users/YourUsername/.claude/plugins/cache/. However, the commands and underlying logic remain identical across all operating systems. The differences are detailed in a later section.

Unique Insight: These three prerequisites actually reveal the architectural nature of the Codex plugin. It is not a simple “cloud feature toggle.” It is a local bridging layer. Claude Code calls a local script named codex-companion.mjs through the plugin system, which in turn spins up the local Codex CLI, which finally uses your API key to call the cloud model. Understanding this chain is crucial for troubleshooting later. If something breaks, you will know exactly which link in the chain to inspect.


The Five-Step Installation and Configuration Process

The core question this section answers is: Starting from scratch, what exactly do you type, what should you see, and what does each step accomplish?

Step 1: Add the Plugin Marketplace Source

Run this command inside your Claude Code session:

/plugin marketplace add openai/codex-plugin-cc

Expected output:

Successfully added marketplace: openai-codex

This step registers OpenAI’s official plugin marketplace source within Claude Code. Without this, the subsequent installation command will not be able to locate the plugin package.

Scenario Context: This is conceptually identical to adding a new software repository to your operating system. Just as you might run add-apt-repository in Ubuntu before you can run apt install, you must first teach Claude Code where to find the OpenAI plugins.

Step 2: Install the Codex Plugin

/plugin install codex@openai-codex

Expected output:

✓ Installed codex. Run /reload-plugins to apply.

The plugin is downloaded and placed into your local cache directory. Notice that the output explicitly tells you the plugin is not active yet.

Step 3: Reload the Plugins

/reload-plugins

Expected output:

Reloaded: 5 plugins · 7 skills · 6 agents · 3 hooks · 1 plugin MCP server · 0 plugin LSP servers

Only after this reload do the Codex-related skills—such as codex:setup and codex:rescue—become officially active. Skipping this step will result in “skill not found” errors when you try to use Codex commands.

Step 4: Verify the Local Codex Environment

/codex:setup

This command automatically runs a diagnostic check against four local environment indicators. A successful output looks like this:

{
  "ready": true,
  "node": { "available": true, "detail": "v24.12.0" },
  "npm":  { "available": true, "detail": "11.6.2" },
  "codex": { "available": true, "detail": "codex-cli 0.117.0; advanced runtime available" },
  "auth": { "available": true, "loggedIn": true, "detail": "authenticated" }
}

The meaning of each field is as follows:

  • ready: A comprehensive boolean check; true only if everything below passes
  • node: Whether Node.js is accessible in your PATH and its version number
  • npm: Whether npm is accessible and its version number
  • codex: Whether the Codex CLI is installed locally and its version
  • auth: Whether your OpenAI authentication is currently valid
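The first three checks boil down to one question: is the binary on your PATH, and does it answer a version query? The sketch below is a hypothetical re-implementation in Python, not the plugin's actual code, and it deliberately omits the auth check (the real skill also verifies your OpenAI login state); it only illustrates what "available" means for the node, npm, and codex fields:

```python
import shutil
import subprocess

def check_tool(name: str) -> dict:
    """Report availability and version of a CLI tool, loosely mirroring
    the node/npm/codex fields in the /codex:setup output."""
    path = shutil.which(name)
    if path is None:
        return {"available": False, "detail": "not found on PATH"}
    result = subprocess.run([name, "--version"], capture_output=True, text=True)
    return {"available": True, "detail": (result.stdout or result.stderr).strip()}

report = {tool: check_tool(tool) for tool in ("node", "npm", "codex")}
# "ready" is true only when every individual check passes, like the plugin's field.
report["ready"] = all(entry["available"] for entry in report.values())
```

If any tool reports `available: false`, you know which link to fix before the plugin will work.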

Scenario Context: Imagine you are setting this up on a fresh machine. The codex field might show available: false. There is no need to panic; the plugin is designed to guide you through the fix.

What to Do If Codex CLI Is Not Installed

When codex is missing, /codex:setup will present an interactive prompt. Select Install Codex (Recommended), and the plugin will automatically execute:

npm install -g @openai/codex

Once this global installation completes, the Codex CLI will be available in any terminal path.

What to Do If You Are Not Authenticated

If the auth field shows loggedIn: false, open your system terminal (not inside Claude Code) and run:

codex login

Follow the prompts to complete the OpenAI account authorization. Then, return to Claude Code and run /codex:setup again to verify that auth.loggedIn has changed to true.

Optional: Enabling the Review Gate

The /codex:setup output includes an optional recommendation:

/codex:setup --enable-review-gate

When activated, this forces a human review prompt every time Codex is about to execute a task. You must explicitly approve the action before it proceeds.

Scenario Context: This feature is ideal for two specific situations. First, if you are highly cautious about automated tools writing directly to your filesystem. Second, if you are working inside a critical production codebase where every automated change needs a safety valve. If you are in a personal experimental project focused on rapid iteration, you can safely leave this disabled, as the default behavior is automatic execution.

Step 5: Live Testing — Delegating a Task to Codex

The core question this step answers is: Now that everything is configured, what does the actual usage experience look like?

Once installation is complete, you do not need to manually type /codex:rescue. Simply describe your task using natural language. Claude will automatically assess the complexity of the task and decide whether to delegate it to Codex.

If you want to bypass Claude’s judgment and force the delegation, you can explicitly state “hand this over to Codex” in your prompt. Claude will immediately trigger the delegation without further analysis.

Example Task:

Write a Python script that reads a CSV file and calculates the mean of each column. Hand this over to Codex.

Codex Output:

Codex generated a complete Python script named csv_stats.py. The script included the following capabilities out of the box:

  • Command-line argument parsing to specify the file path
  • Automatic identification of numeric columns (skipping non-numeric ones)
  • UTF-8/GBK encoding compatibility (essential for Windows environments with Chinese characters)
  • Comprehensive exception handling (file not found, permission denied, encoding errors, empty files)
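For reference, here is a condensed sketch of what a script with those four capabilities might look like. This is a reconstruction, not Codex's verbatim output (only the file name csv_stats.py comes from the session above); it exists to make the capability list concrete:

```python
import argparse
import csv
import sys

def read_rows(path):
    """Load a CSV, trying UTF-8 first and falling back to GBK
    (the encoding commonly seen on Chinese-locale Windows)."""
    for encoding in ("utf-8-sig", "gbk"):
        try:
            with open(path, newline="", encoding=encoding) as f:
                return list(csv.DictReader(f))
        except UnicodeDecodeError:
            continue
    raise ValueError(f"could not decode {path} as UTF-8 or GBK")

def column_means(rows):
    """Return {column: mean} for every column whose non-empty values all parse as numbers."""
    means = {}
    for col in (rows[0] if rows else {}):
        try:
            values = [float(r[col]) for r in rows if r[col] not in ("", None)]
        except ValueError:
            continue  # non-numeric column: skip it
        if values:  # guard against an entirely empty column
            means[col] = sum(values) / len(values)
    return means

def main(argv=None):
    parser = argparse.ArgumentParser(description="Print the mean of each numeric CSV column")
    parser.add_argument("path", help="path to the CSV file")
    args = parser.parse_args(argv)
    try:
        rows = read_rows(args.path)
    except FileNotFoundError:
        sys.exit(f"error: file not found: {args.path}")
    except PermissionError:
        sys.exit(f"error: permission denied: {args.path}")
    means = column_means(rows)
    print(f"Found {len(means)} numeric columns")
    for col, mean in means.items():
        print(f"{col:<12}{mean:>12.4f}")

# Usage: python csv_stats.py data.csv
```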

Execution Result:

File: data.csv
Found 3 numeric columns

Column         Mean
---------------------------
age            34.2500
salary      75000.0000
score           88.5000

Scenario Breakdown: Imagine you are doing data analysis and receive a CSV file, but you have not yet set up your pandas environment. Instead of switching to another tool or writing boilerplate code manually, you simply speak a sentence in Claude Code. Codex generates a reusable script. Once the script is written, you can continue in the same session, asking Claude to review it, optimize it, or even write unit tests for it.


Lessons Learned: During the first test, I specifically observed Claude’s decision-making logic. When I explicitly said “hand this over to Codex,” Claude did not hesitate; it immediately invoked codex:rescue. However, when I only said “help me write a script” without mentioning Codex, Claude sometimes wrote the code itself and sometimes delegated it. This confirms that an automatic judgment mechanism exists, but its current criteria are not entirely transparent to the user. If you have a strong preference for who does the work, specify it clearly in your prompt.


Best Practice: Using Claude to Review Codex Output

The core question this section answers is: Can you trust the code generated by Codex immediately, or does it require an additional layer of verification?

Codex operates as an independent model. Claude does not participate in the code generation process when a task is delegated. This means the quality of Codex’s output depends entirely on the GPT model; Claude has zero control over it. Because of this separation, it is highly recommended to append a single follow-up prompt after every Codex delegation:

Please review the code Codex just generated.

Claude will then perform a multi-dimensional audit of the generated file.

Using the earlier CSV statistics script as an example, Claude’s review covered the following dimensions:

| Review Dimension | Finding |
|------------------|---------|
| Logical Correctness | Mean calculations and numeric column identification were accurate |
| Exception Handling | Comprehensive coverage of missing files, permission issues, encoding mismatches, and empty files |
| Encoding Compatibility | UTF-8 + GBK automatic fallback, well-suited for Windows environments |
| Security | Read-only file operations, no command injection risks present |
| Final Verdict | Safe to use directly |

Scenario Context: Suppose the script Codex generated contained a potential division-by-zero error—for instance, attempting to calculate a mean when an entire column is empty. Claude’s review process is capable of catching this logical flaw and suggesting a fix. This is the true value of the dual-model workflow: it is not about having two models work in isolation, but about using one model as a quality assurance inspector for the other.
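The fix for that hypothetical flaw is a one-line guard. As a minimal sketch (safe_mean is an illustrative name, not something from the generated script):

```python
def safe_mean(values):
    """Mean of a numeric list, or None for an entirely empty column --
    avoiding the ZeroDivisionError of an unguarded sum(values) / len(values)."""
    return sum(values) / len(values) if values else None

safe_mean([70000.0, 80000.0])  # a normal column -> 75000.0
safe_mean([])                  # an empty column -> None instead of a crash
```

A human might miss this edge case in review; a second model auditing the file is exactly the kind of check that catches it.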

While this step is technically optional, making it a habit will effectively mitigate the inconsistencies that sometimes occur with standalone model outputs.

Reflection: This workflow reminds me of the “Four-Eyes Principle” in software engineering, where critical changes must be reviewed by at least two separate individuals. Dual-model collaboration is essentially applying the Four-Eyes Principle to AI-assisted programming. Claude and Codex come from different vendors and use different underlying architectures. The probability of both models making the exact same mistake is significantly lower than a single model failing to catch its own error. At this stage of AI tooling, this might be the most practical quality assurance strategy available.


End-to-End Workflow Breakdown

The core question this section answers is: From the moment you speak a prompt to the moment a code file lands on your disk, what exactly happens under the hood?

The complete execution chain is as follows:

Your natural language request
       ↓
   Claude analyzes the request (auto-determines if delegation is needed)
       ↓
  Invokes the codex:rescue skill
       ↓
  codex-companion.mjs executes (local bridge script)
       ↓
  OpenAI Codex CLI spins up (runs locally)
       ↓
  GPT-series model processes the task (consumes OpenAI credits)
       ↓
  Code file generated locally + results returned to Claude Code

Several key points in this chain are worth examining closely:

  1. Claude is the dispatcher. It decides whether to do the work and who should do it, but it writes no code for delegated tasks.
  2. codex-companion.mjs is the bridge. This is a Node.js script running on your local machine. Its sole job is to pass messages between Claude Code’s plugin system and the Codex CLI.
  3. Codex CLI runs locally. This is not a remote API call to a “Codex service.” It is an actual local process spawned on your machine. All file reads and writes happen locally.
  4. GPT models process in the cloud. The local CLI uses your OpenAI API Key to call the cloud-based model. This is the exact moment token consumption occurs.
  5. Results write directly to your working directory. The generated code file appears in your current project folder. There is no need to copy and paste anything from a chat window.
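The bridge pattern at the heart of this chain (point 2) is worth seeing in miniature. The sketch below is illustrative only: the real codex-companion.mjs is a Node.js script whose interface is internal to the plugin, so the function name, the `codex exec` invocation, and the stdin/stdout protocol here are all assumptions made to show the general shape of "spawn a local CLI, hand it the task, return its output":

```python
import subprocess

def delegate(prompt: str, cli=("codex", "exec")) -> str:
    """Spawn a local CLI as a child process, pass the task text on stdin,
    and return whatever it prints -- the generic shape of a local bridge.
    NOTE: `codex exec` is an assumed invocation, not the plugin's real call."""
    result = subprocess.run(list(cli), input=prompt, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"delegation failed: {result.stderr.strip()}")
    return result.stdout
```

Because everything runs as a local child process, generated files inherit your working directory and permissions, which is exactly why results land directly in your project folder.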

Scenario Breakdown: Assume your project is located at /home/user/myproject/. If you launch Claude Code inside that directory and delegate a task to Codex, the resulting file will appear directly inside /home/user/myproject/. This is the inherent advantage of local execution: file permissions, path contexts, and project structures are automatically aligned without any manual configuration.


Available Skills and Parameters Reference

The core question this section answers is: Once the plugin is installed, what specific commands and parameters are at your disposal?

After installation, Claude Code gains access to the following new skills:

[Screenshot: the full list of skills added by the plugin]

The two most relevant for daily use are:

  • codex:setup: Environment detection and configuration, detailed extensively in the installation steps.
  • codex:rescue: The actual entry point for delegating programming tasks.

The parameters supported by codex:rescue:

[Screenshot: the parameter table for codex:rescue]

In practice, you will rarely need to manually invoke /codex:rescue or pass these parameters directly. Describing your task in natural language and adding “hand this over to Codex” is sufficient for the vast majority of use cases. These parameters exist primarily for advanced users or for the plugin’s internal logic to handle edge cases.


Cross-Platform Considerations

The core question this section answers is: If you do not use Windows, what adjustments do you need to make?

The command examples in this guide are based on a Windows 10 environment. For macOS and Linux users, the only difference lies in file system path formatting:

| Item | Windows Path | macOS / Linux Path |
|------|--------------|--------------------|
| Plugin Cache | C:/Users/YourUsername/.claude/plugins/cache/ | ~/.claude/plugins/cache/ |
| Claude Config Dir | C:/Users/YourUsername/.claude/ | ~/.claude/ |
| Command Syntax | Identical | Identical |
| Node.js / npm Behavior | Identical | Identical |

Every single command—from /plugin to /codex:setup to codex login—operates exactly the same way across all three major operating systems. The only variation is the directory prefix.
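If you ever need to locate the plugin cache from a script of your own, the platform difference disappears entirely behind standard path APIs. For example, in Python:

```python
from pathlib import Path

# Path.home() resolves to C:/Users/<name> on Windows and to /home/<name>
# (or /Users/<name>) on Linux/macOS, so one expression covers all three.
plugin_cache = Path.home() / ".claude" / "plugins" / "cache"
print(plugin_cache)
```

This is the same home-directory resolution strategy that lets the plugin avoid hardcoding absolute paths.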

Unique Insight: This uniformity indicates that the designers of codex-plugin-cc put significant effort into cross-platform compatibility. The plugin does not hardcode any absolute paths; instead, it relies on the path resolution capabilities native to Claude Code itself. For developers who switch between multiple machines and operating systems, this is a highly thoughtful design choice that eliminates a whole category of potential setup errors.


Frequently Asked Questions

Q: What should I do if /codex:setup says Codex is not installed?

Select Install Codex (Recommended) when prompted. The plugin will automatically run npm install -g @openai/codex to complete the global installation. After it finishes, run /codex:setup again to verify.

Q: What should I do if it says I am not authenticated?

Open your system terminal (outside of the Claude Code session) and run codex login. Follow the prompts to authorize your OpenAI account. Then return to Claude Code and run /codex:setup to confirm that auth.loggedIn shows true.

Q: How do I continue a conversation with Codex after a task finishes?

Just describe your new requirement again. If Claude detects a recoverable session context, it will ask whether you want to continue the current thread or start a new one. You can choose based on your needs.

Q: Will using Codex generate charges on my OpenAI account?

Yes. Every invocation of codex:rescue consumes OpenAI API credits, and this is billed entirely separately from your Claude Code Anthropic usage. It is recommended to set a strict usage limit in the OpenAI console to prevent unexpected overages.

Q: How do the paths differ on Mac or Linux?

The plugin cache path is ~/.claude/plugins/cache/. All commands remain completely identical; no further adaptation is necessary.

Q: Can I use Codex CLI directly without Claude Code?

Yes. Codex CLI is a fully independent command-line tool. Once installed via npm install -g @openai/codex, you can use it directly in any terminal. This article specifically covers the workflow of invoking that local CLI through the Claude Code plugin for dual-model collaboration.

Q: Will Claude and Codex conflict with each other?

No. When Claude delegates a task to Codex, Claude does not simultaneously attempt to modify the same files. The relationship during task execution is strictly serial, not a parallel competition for resources.


Actionable Setup Checklist

Here is the minimal path to a fully operational dual-model environment. Execute these steps in order:

□ Confirm Claude Code CLI is installed
□ Confirm Node.js v18+ and npm are installed
□ Confirm your OpenAI account has available API credits
□ Run in Claude Code: /plugin marketplace add openai/codex-plugin-cc
□ Run: /plugin install codex@openai-codex
□ Run: /reload-plugins
□ Run: /codex:setup to check environment status
□ If prompted that Codex CLI is missing, select the automatic installation option
□ If prompted that authentication is missing, run 'codex login' in your system terminal
□ (Optional) Run /codex:setup --enable-review-gate to enable manual approval
□ Describe a task in natural language and append "hand this over to Codex"
□ After the task completes, append "please review the code Codex just generated"

One-Page Summary

| Aspect | Details |
|--------|---------|
| What it is | An official plugin for Claude Code that delegates programming tasks to OpenAI Codex |
| Core Value | Dual-model synergy—Claude handles dispatch and review, Codex handles execution |
| Installation Steps | Add marketplace source → Install plugin → Reload → Environment check → Authenticate |
| Usage Method | Natural language prompt + “hand this over to Codex”, or let Claude auto-delegate |
| Cost Structure | Billed separately via OpenAI API; completely independent of Anthropic quotas |
| Prerequisites | Claude Code CLI + Node.js v18+ + OpenAI account with credits |
| Cross-Platform | Commands are identical; only the local file path prefix differs |
| Quality Assurance | Highly recommended to have Claude review all Codex output before using it |
| Key Commands | /codex:setup (diagnostics), codex login (authentication), natural language (usage) |


Final Reflection: After documenting this entire workflow, my strongest takeaway is that dual-model collaboration is not a gimmick, but its current value lies more in cross-validated quality than in doubling your speed. Claude can write code perfectly well on its own, and Codex can operate independently. The real significance of stringing them together is that the difference in how two separate architectures understand the same problem creates a natural complement. It is the same logic as assigning two engineers with different backgrounds to cross-review the same block of code. The tools are evolving rapidly, but the underlying logic of sound engineering practice remains unchanged.