OpenClaw Setup Guide: Install, Configure, and Run Your Own AI Assistant
OpenClaw is a self-hosted AI assistant platform that runs entirely on your own machine or server. It lets you connect to large language models like Claude, GPT-4o, and Qwen, then interact with them through messaging apps like Telegram, Discord, and Slack — or directly from a web dashboard. This guide walks you through every step from installation to daily use, including the configuration pitfalls most people run into.
Table of Contents
- Prerequisites
- Installing OpenClaw
- Configuring and Starting the Gateway
- Connecting a Third-Party AI Model Provider
- Running Local Models with Ollama
- Securing Your API Keys
- Troubleshooting Common Issues
- Auto-Start on Login
- Updating OpenClaw
- How to Use OpenClaw
- FAQ
Prerequisites
Before installing OpenClaw, verify that you have Node.js 22 or higher installed:
node -v
If the output shows v22.x.x or above, you’re good. If your Node version is too old, download the latest LTS release from nodejs.org or use nvm to manage multiple Node versions.
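If you'd rather script this check (say, in a setup script), here is a minimal POSIX-shell sketch; the threshold of 22 comes from the requirement above, and the helper name node_ok is just for illustration:

```shell
# node_ok VERSION: succeeds when the major version is at least 22,
# the minimum OpenClaw supports.
node_ok() {
  major="${1#v}"           # strip the leading "v" from e.g. "v22.11.0"
  major="${major%%.*}"     # keep only the major component
  [ "$major" -ge 22 ] 2>/dev/null
}

if node_ok "$(node -v 2>/dev/null)"; then
  echo "Node version OK"
else
  echo "Install or upgrade Node: OpenClaw needs v22 or newer"
fi
```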
Installing OpenClaw
Install OpenClaw globally using npm. The -g flag means “global install,” which makes the openclaw command available from any directory on your system:
npm install -g openclaw@latest
Once installed, run the setup wizard:
openclaw onboard
The wizard walks you through choosing a messaging platform and an AI model provider. Follow the prompts and you’ll have a basic configuration in place.
Configuring and Starting the Gateway
This is where most people get stuck. OpenClaw’s core is a local gateway service — the web dashboard, messaging platform integrations, and all AI requests depend on it being up and running.
Why can’t I access http://127.0.0.1:18789/?
After installation, OpenClaw tells you to visit http://127.0.0.1:18789/. If you get a “connection refused” error, the gateway isn’t running yet — either because it was never started or because a required configuration value is missing.
Step 1: Set the gateway mode
This single step is the most commonly skipped. Without it, the gateway will refuse to start entirely:
openclaw config set gateway.mode local
Step 2: Install and start the gateway service
⚠️ The command below requires an Administrator PowerShell. Right-click PowerShell and select "Run as administrator," otherwise you'll get an "Access Denied" error.
openclaw gateway install
A successful install looks like this:
Installed Scheduled Task: OpenClaw Gateway
Then trigger the gateway to start:
schtasks /Run /TN "OpenClaw Gateway"
Step 3: Verify the gateway is reachable
Wait a few seconds, then run:
openclaw gateway probe
If you see Reachable: yes, the gateway is running. Open http://127.0.0.1:18789/ in your browser and the dashboard should load.
Step 4: Handle the token prompt
The first time you open the dashboard, you may see “unauthorized: gateway token missing.” This is expected — it’s a local security measure. Retrieve your token with:
openclaw config get gateway.auth.token
Paste the output into the browser’s authentication prompt and you’re in.
Connecting a Third-Party AI Model Provider
OpenClaw has built-in support for Anthropic, OpenAI, Google Gemini, OpenRouter, and several other providers. If you’re routing API calls through a proxy or relay service — for example, accessing Claude through a third-party platform — you’ll need to configure a custom provider manually.
Where is the config file?
All OpenClaw configuration lives in:
C:\Users\YourUsername\.openclaw\openclaw.json
Open it in Notepad:
notepad "C:\Users\YourUsername\.openclaw\openclaw.json"
Custom provider configuration template
The following example works with any provider that exposes an OpenAI-compatible endpoint (/v1/chat/completions):
{
"models": {
"mode": "merge",
"providers": {
"my-provider": {
"baseUrl": "https://your-provider.com/v1",
"apiKey": "your-api-key-here",
"api": "openai-completions",
"models": [
{
"id": "claude-opus-4-5",
"name": "Claude Opus 4.5",
"reasoning": false,
"input": ["text"],
"contextWindow": 200000,
"maxTokens": 8192,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
},
{
"id": "claude-sonnet-4-5",
"name": "Claude Sonnet 4.5",
"reasoning": false,
"input": ["text"],
"contextWindow": 200000,
"maxTokens": 8192,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
},
{
"id": "claude-haiku-4-5",
"name": "Claude Haiku 4.5",
"reasoning": false,
"input": ["text"],
"contextWindow": 200000,
"maxTokens": 8192,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
}
]
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "my-provider/claude-opus-4-5",
"fallbacks": ["my-provider/claude-sonnet-4-5"]
},
"models": {
"my-provider/claude-opus-4-5": { "alias": "opus" },
"my-provider/claude-sonnet-4-5": { "alias": "sonnet" },
"my-provider/claude-haiku-4-5": { "alias": "haiku" }
}
}
},
"gateway": {
"mode": "local",
"bind": "loopback",
"auth": {
"token": "auto"
}
}
}
Common configuration mistakes
Test whether the API endpoint actually works
Before restarting the gateway, verify the API is reachable from your machine. Use Invoke-WebRequest in PowerShell; note that in Windows PowerShell, curl is merely an alias for Invoke-WebRequest and doesn't accept standard curl flags:
Invoke-WebRequest -Method POST `
-Uri "https://your-provider.com/v1/chat/completions" `
-Headers @{
"Authorization" = "Bearer your-api-key"
"Content-Type" = "application/json"
} `
-Body '{"model":"claude-opus-4-5","messages":[{"role":"user","content":"hi"}],"max_tokens":100}' |
Select-Object -ExpandProperty Content
A response containing a choices array means the endpoint is working. A 401 means the API key is wrong. A 404 usually means the baseUrl path is incorrect.
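To make that diagnosis mechanical, the status codes can be mapped to their usual causes. The helper below just restates the rules above as code (the function name explain_status is illustrative, and the 200/401/404 meanings follow standard HTTP semantics):

```shell
# explain_status CODE: print the likely cause for a status code
# returned by the endpoint test above.
explain_status() {
  case "$1" in
    200) echo "endpoint OK: check the body for a choices array" ;;
    401) echo "API key is wrong or missing" ;;
    404) echo "baseUrl path is incorrect (missing /v1?)" ;;
    *)   echo "unexpected status $1: check the provider's docs" ;;
  esac
}

explain_status 404
```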
Restart the gateway after editing config
Any time you edit openclaw.json, the gateway needs a restart to pick up the changes:
schtasks /Run /TN "OpenClaw Gateway"
Running Local Models with Ollama
If you’d rather not rely on a cloud API — whether for privacy reasons, cost control, or offline use — you can run large language models locally using Ollama and connect OpenClaw to them. No API key required. The only ongoing cost is electricity.
Step 1: Install Ollama and pull a model
Download the Ollama installer from ollama.com. After installation, pull whichever models you want to use:
ollama pull qwen2.5:7b # Balanced quality and speed — good starting point
ollama pull deepseek-r1:7b # Strong reasoning capabilities
ollama pull llama3.3:latest # Meta's model, excellent for English tasks
ollama pull phi3:3.8b # Microsoft's efficient lightweight model
Start the Ollama service and keep the terminal open:
ollama serve
Verify your models are available:
ollama list # Show downloaded models
ollama ps # Show currently running models
Step 2: Configure Ollama in openclaw.json
OpenClaw recommends a context window of at least 64k tokens for local models. Set contextWindow generously — most modern Ollama models support 128k.
The key things to know: baseUrl is always http://localhost:11434, apiKey can be any string (Ollama doesn’t validate it), and api should be openai-completions:
{
"models": {
"mode": "merge",
"providers": {
"ollama": {
"baseUrl": "http://localhost:11434",
"apiKey": "ollama-local",
"api": "openai-completions",
"models": [
{
"id": "qwen2.5:7b",
"name": "Qwen2.5 7B",
"reasoning": false,
"input": ["text"],
"contextWindow": 131072,
"maxTokens": 8192,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
},
{
"id": "deepseek-r1:7b",
"name": "DeepSeek R1 7B",
"reasoning": true,
"input": ["text"],
"contextWindow": 131072,
"maxTokens": 8192,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
},
{
"id": "llama3.3:latest",
"name": "Llama 3.3",
"reasoning": false,
"input": ["text"],
"contextWindow": 131072,
"maxTokens": 8192,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
}
]
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "ollama/qwen2.5:7b",
"fallbacks": ["ollama/llama3.3:latest"]
},
"models": {
"ollama/qwen2.5:7b": { "alias": "qwen" },
"ollama/deepseek-r1:7b": { "alias": "deepseek" },
"ollama/llama3.3:latest": { "alias": "llama" },
"ollama": {}
}
}
},
"gateway": {
"mode": "local",
"bind": "loopback",
"auth": {
"token": "auto"
}
}
}
⚠️ The id field must exactly match what ollama list shows, including the tag: qwen2.5:7b, not just qwen2.5.
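A quick way to catch a tag mismatch before restarting the gateway is to compare the configured id against your installed models. The sketch below uses a canned list for illustration; in practice, feed it the first column of ollama list:

```shell
# id_installed ID LIST: succeed only when ID appears verbatim in LIST
# (one model name per line, tag included).
id_installed() { printf '%s\n' "$2" | grep -qx "$1"; }

# Stand-in for the first column of `ollama list` output.
installed="qwen2.5:7b
deepseek-r1:7b
llama3.3:latest"

if id_installed "qwen2.5:7b" "$installed"; then
  echo "model id found"
else
  echo "model id missing: check the tag"
fi
```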
Step 3: Use Ollama as a cloud fallback
You can set Ollama as the last resort when a cloud provider hits rate limits or goes down. OpenClaw will automatically fall through the list:
{
"agents": {
"defaults": {
"model": {
"primary": "my-provider/claude-opus-4-5",
"fallbacks": [
"my-provider/claude-sonnet-4-5",
"ollama/qwen2.5:7b"
]
}
}
}
}
Recommended local models
As a rough guide, the examples above span the practical range: small models like the 3.8B Phi run on almost any hardware, the 7B models (qwen2.5:7b, deepseek-r1:7b) are comfortable on a GPU with around 8 GB of VRAM, and llama3.3:latest is a 70B model that needs a high-end GPU or heavy quantization to run well. No dedicated GPU? Models will still run on CPU, though expect responses to be 5–10x slower. An NVIDIA GPU gives the best experience.
Ollama configuration checklist
- baseUrl is http://localhost:11434; do not append /v1
- apiKey can be any placeholder string like ollama-local; Ollama ignores it
- The "ollama": {} entry under agents.defaults.models is required; without it, none of the Ollama models will be usable
- Set "reasoning": true for chain-of-thought models like DeepSeek R1; use false for standard models
- Always restart the gateway after editing config: schtasks /Run /TN "OpenClaw Gateway"
Securing Your API Keys
Putting API keys directly in openclaw.json works but isn’t ideal, especially if other users share your machine. The safer approach is to store secrets in a separate environment file and reference them by variable name.
Step 1 — Create or edit ~/.openclaw/.env:
MY_API_KEY=your-actual-key-here
GATEWAY_TOKEN=choose-a-strong-password
Step 2 — Reference the variable in openclaw.json using ${VARIABLE_NAME}:
{
"models": {
"providers": {
"my-provider": {
"apiKey": "${MY_API_KEY}"
}
}
}
}
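Under the hood this is ordinary environment-variable substitution: OpenClaw resolves the ${MY_API_KEY} placeholder from .env when it loads the config. The sed line below only illustrates the mechanism with a sample value; OpenClaw performs this step itself:

```shell
# Illustrate how a ${VAR} placeholder in the config resolves to the
# value from the environment (OpenClaw handles this internally).
export MY_API_KEY="sk-test-123"
template='{"apiKey": "${MY_API_KEY}"}'
resolved="$(printf '%s' "$template" | sed "s/\${MY_API_KEY}/$MY_API_KEY/")"
echo "$resolved"
```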
Step 3 — Lock down file permissions so other users can’t read your keys (run as Administrator):
icacls "C:\Users\YourUsername\.openclaw\openclaw.json" /inheritance:r /grant:r "YourUsername:F" /grant:r "SYSTEM:F"
Troubleshooting Common Issues
The gateway window flashes and closes immediately
Run the gateway script directly to see the error output:
cmd /c "C:\Users\YourUsername\.openclaw\gateway.cmd"
The most common error you’ll see is:
Gateway start blocked: set gateway.mode=local (current: unset)
Fix it with:
openclaw config set gateway.mode local
Port 18789 is already in use
netstat -ano | findstr 18789
Find the PID in the output, kill it in Task Manager, or change OpenClaw’s port:
openclaw config set gateway.port 18790
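For reference, the PID is the last column of the netstat output. A sketch of pulling it out, using a sample line that stands in for the real output:

```shell
# Extract the owning PID from a `netstat -ano` line; the PID is the
# last whitespace-separated field. The sample line is illustrative.
line="  TCP    127.0.0.1:18789    0.0.0.0:0    LISTENING    4321"
pid="${line##* }"    # strip everything up to and including the last space
echo "Port 18789 is held by PID $pid"
```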
The AI never responds — messages just disappear
This is a subtle one. OpenClaw doesn’t surface API errors visually — messages that fail to reach the model just vanish silently. Here’s how to diagnose it:
Open a new PowerShell window and watch the live log:
openclaw logs --follow
Send a message and check the timing. If the entire round trip completes in under one second, the request never actually reached the AI provider. A legitimate Claude API call takes at least 3–5 seconds. Common causes:
- baseUrl is missing /v1
- The API key is invalid or expired
- The model id doesn't match what the provider expects
The fastest way to confirm is to test the API endpoint directly — see the “Test whether the API endpoint actually works” section above.
gateway.mode keeps getting reset
If you manually overwrite openclaw.json with new content, any settings previously written by CLI commands — including gateway.mode — will be lost. Always make sure the file contains:
"gateway": {
"mode": "local"
}
Quick reference: symptoms and fixes
- "Connection refused" at http://127.0.0.1:18789/ → gateway not running; run schtasks /Run /TN "OpenClaw Gateway"
- Gateway window flashes and closes → gateway.mode is unset; run openclaw config set gateway.mode local
- "Access Denied" (or garbled error text) → re-run the command as Administrator
- Port 18789 already in use → kill the conflicting PID, or openclaw config set gateway.port 18790
- Messages vanish with no reply → API misconfiguration; check openclaw logs --follow and test the endpoint directly
- "unauthorized: gateway token missing" → paste the output of openclaw config get gateway.auth.token
Auto-Start on Login
By default, you need to manually trigger the gateway every time you log into Windows. To make it start automatically on login, recreate the scheduled task with a logon trigger. Run this as Administrator (schtasks /Change cannot modify a task's schedule, so /Create /F overwrites the task in place):
schtasks /Create /F /TN "OpenClaw Gateway" /SC ONLOGON /TR "C:\Users\YourUsername\.openclaw\gateway.cmd"
Confirm the change took effect:
schtasks /Query /TN "OpenClaw Gateway" /FO LIST
If the trigger shows “At logon,” you’re all set — the gateway will start automatically after every login.
Updating OpenClaw
Check your current version:
openclaw --version
Update to the latest release (run as Administrator):
npm i -g openclaw@latest
Restart the gateway after updating:
openclaw gateway restart
Note: If openclaw update returns a not-git-install message, it means OpenClaw was installed via npm and the built-in update command doesn't apply. Always use npm i -g openclaw@latest to update in that case.
How to Use OpenClaw
OpenClaw gives you three ways to interact with your AI assistant:
Option 1: Web Dashboard (Recommended)
Open http://127.0.0.1:18789/ in your browser. This is the most fully featured interface — it supports multi-turn conversations, file uploads, and skill management. Best for day-to-day use.
Option 2: Terminal UI
openclaw tui
Launches an interactive chat interface inside the terminal. Useful when you want to stay in the command line.
Option 3: Messaging Platforms
Once you configure a Telegram or Discord bot, you can chat with your AI assistant directly from your phone, anywhere you have internet access.
In-chat slash commands
Slash commands typed directly in the chat window let you control behavior without leaving the conversation, for example switching between the model aliases (opus, sonnet, haiku) defined under agents.defaults.models. The exact command set depends on your OpenClaw version; the dashboard lists what's available.
FAQ
Do I need to run any commands every time I start my computer?
No. gateway.mode local is written permanently to the config file. Once you set up the auto-start scheduled task (see the Auto-Start section), the gateway launches automatically on every login.
What do I do if openclaw.json gets corrupted?
OpenClaw automatically creates a backup at openclaw.json.bak every time the config changes. Copy the backup file back to restore your last working configuration.
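The restore itself is a single copy. The sketch below uses a temp directory so it runs self-contained; in practice both files live in ~/.openclaw/ (on Windows, C:\Users\YourUsername\.openclaw\):

```shell
# Restore the last-known-good config from the automatic .bak backup.
# Temp files stand in for the real openclaw.json pair.
dir="$(mktemp -d)"
printf '{"gateway":{"mode":"local"}}' > "$dir/openclaw.json.bak"  # good backup
printf '{broken' > "$dir/openclaw.json"                           # corrupted file
cp "$dir/openclaw.json.bak" "$dir/openclaw.json"                  # the actual fix
echo "restored: $(cat "$dir/openclaw.json")"
```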
How do I find the correct model ID for my provider?
Query your provider’s model list endpoint directly:
Invoke-WebRequest -Uri "https://your-provider.com/v1/models" `
-Headers @{"Authorization" = "Bearer your-api-key"} |
Select-Object -ExpandProperty Content
The id field in the JSON response is what you put in models[].id.
Does OpenClaw send my API keys to any external server?
No. OpenClaw runs entirely on your local machine. Your API keys are stored only in your config files, and all AI requests go directly from your computer to the model provider’s API — nothing passes through OpenClaw’s servers.
I edited the config file but nothing changed. What’s wrong?
The gateway caches configuration at startup. After any edit to openclaw.json, restart the gateway:
schtasks /Run /TN "OpenClaw Gateway"
How do I see what OpenClaw is doing in real time?
openclaw logs --follow
This streams live output and is the first thing to check when something behaves unexpectedly.
Why doesn’t openclaw chat work?
OpenClaw doesn’t have a chat subcommand. For command-line interaction, use openclaw tui to open the terminal interface, or use the web dashboard at http://127.0.0.1:18789/.
Error messages appear as garbled characters on Windows. What’s happening?
Some Windows system error messages display as garbled text due to encoding issues. The underlying error is almost always “Access Denied.” Re-run the failing command as Administrator and it should go through cleanly.
Complete openclaw.json Reference
Here’s a full template with every key section in place. Use this as a starting point and customize the provider block for your setup:
{
"models": {
"mode": "merge",
"providers": {
"my-provider": {
"baseUrl": "https://your-provider.com/v1",
"apiKey": "${MY_API_KEY}",
"api": "openai-completions",
"models": [
{
"id": "claude-opus-4-5",
"name": "Claude Opus 4.5",
"reasoning": false,
"input": ["text"],
"contextWindow": 200000,
"maxTokens": 8192,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
}
]
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "my-provider/claude-opus-4-5",
"fallbacks": []
},
"models": {
"my-provider/claude-opus-4-5": { "alias": "opus" }
}
}
},
"commands": {
"native": "auto",
"nativeSkills": "auto",
"restart": true
},
"gateway": {
"mode": "local",
"bind": "loopback",
"auth": {
"token": "auto"
}
}
}
The three issues that catch most people during setup are: forgetting to set gateway.mode, leaving /v1 off the baseUrl, and running the gateway install without Administrator privileges. Follow the steps in this guide and you should be up and running without any of those headaches. When something does go wrong, openclaw logs --follow is always the first place to look, and openclaw doctor can automatically detect and suggest fixes for the most common configuration problems.

