TL;DR: OpenClaw can call Claude and Codex through their CLI tools instead of hitting APIs directly. CLI backends give you better session handling and remove API key management. Switching over requires cleaning up some wizard-generated config. Extra usage billing still applies for Claude.
Why CLI backends Link to heading
OpenClaw supports two ways of talking to model providers: the embedded API (sends HTTP requests using an API key or token) and CLI backends (shells out to the provider’s CLI tool, e.g. claude or codex).
CLI backends have a few advantages:
- No API keys to manage. The CLIs handle their own OAuth sessions. You authenticate once with `claude auth login` or `codex auth login` and the CLI refreshes tokens on its own.
- Better session handling. The built-in plugin defaults include session resumption, system prompt injection, MCP bridging, and watchdog timeouts. The Codex CLI also draws from your ChatGPT subscription rather than requiring separate API billing.
A note on Claude billing Link to heading
Since April 4, 2026, Anthropic has required extra usage (pay-as-you-go billing) for third-party harnesses like OpenClaw. Claude Code is a first-party product, but Anthropic still applies third-party billing when OpenClaw manages the session. The exact detection mechanism isn’t documented. One likely signal: OpenClaw’s bundled Anthropic plugin sets CLAUDE_CODE_PROVIDER_MANAGED_BY_HOST=1 on every Claude CLI invocation (source). The system prompt, session metadata, and usage patterns could also contribute.
For Claude, CLI backends give you better session handling and simpler auth management, but you’ll still need extra usage enabled and funded. See the OpenClaw Anthropic provider docs for details on the policy change.
Installing the CLIs Link to heading
Both need to be on the PATH of the user running the OpenClaw gateway.
```shell
npm install -g @anthropic-ai/claude-code
npm install -g @openai/codex
```
Then authenticate each one:
```shell
claude auth login
codex auth login
```
For headless VMs, you’ll need to tunnel the OAuth callback. Forward the port the CLI listens on (check the URL it prints) back to your local machine, open the auth URL in your browser, and the callback will tunnel through.
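One way to do that forward is an SSH local tunnel, sketched below. The port number and host alias are placeholders, not real values; use the port shown in the login URL the CLI actually prints.

```
# Run on your LOCAL machine. Replace 1455 with the callback port from the
# login URL the CLI printed, and openclaw-vm with your VM's SSH host.
ssh -N -L 1455:localhost:1455 openclaw-vm
```

With the tunnel up, open the auth URL in your local browser; the provider’s redirect to localhost hits the forwarded port and reaches the CLI waiting on the VM.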
Model naming: claude-cli/ vs anthropic/ Link to heading
OpenClaw routes to the CLI backend or the embedded API based on the model prefix. If your default model is anthropic/claude-opus-4-6, OpenClaw uses the embedded API. To use the CLI backend, set it to claude-cli/claude-opus-4-6:
```shell
openclaw config set agents.defaults.model.primary 'claude-cli/claude-opus-4-6'
```
The same applies to Codex: openai-codex/gpt-5.4 routes through the Codex CLI backend.
Clean up wizard-generated config Link to heading
OpenClaw’s plugin system registers built-in CLI backend defaults for each provider. The Anthropic plugin registers something like this:
```js
{
  command: "claude",
  args: ["-p", "--output-format", "stream-json", "--verbose", ...],
  output: "jsonl", // tells OpenClaw to parse the NDJSON stream
  input: "stdin",
  sessionArg: "--session-id",
  systemPromptArg: "--append-system-prompt",
  // ... session management, MCP bridging, watchdogs
}
```
The Codex plugin follows the same pattern. output: "jsonl" tells OpenClaw to parse the CLI’s NDJSON output and extract message text before sending it to your chat channel.
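For context, the parsed stream looks roughly like the lines below. This is an abridged, approximate shape to show why a parser is needed, not an exact transcript from either CLI.

```json
{"type":"system","subtype":"init","session_id":"..."}
{"type":"assistant","message":{"content":[{"type":"text","text":"Hello!"}]}}
{"type":"result","subtype":"success","result":"Hello!"}
```

With `output: "jsonl"` set, OpenClaw extracts the message text from lines like these; without it, the raw lines themselves get forwarded.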
If you ran the onboard wizard before installing the CLIs, you’ll have an explicit agents.defaults.cliBackends.<provider> entry in openclaw.json that overrides the entire plugin default. The wizard-generated config looks like this:
```json
{
  "command": "codex",
  "args": [
    "exec",
    "--json",
    "--color",
    "never",
    "--dangerously-bypass-approvals-and-sandbox",
    "--skip-git-repo-check"
  ],
  "env": { "HOME": "/home/ubuntu" }
}
```
No `output: "jsonl"`. Without that directive, OpenClaw treats stdout as raw text and forwards every JSON line to your chat channel verbatim — you’ll see raw NDJSON flooding your Telegram or Discord.
Remove any explicit CLI backend configs and let the plugin defaults take over.
```shell
openclaw config unset agents.defaults.cliBackends.openai-codex
openclaw config unset agents.defaults.cliBackends.anthropic
```
Then restart the gateway. The built-in plugin defaults handle JSONL parsing, session management, system prompt injection, MCP bridging, resume args, and watchdog timeouts. Your manual config replaces all of that with a bare-bones passthrough.
If you do need to override specific args (e.g. --max-turns or --permission-mode), be aware that any explicit cliBackends entry replaces the plugin default entirely, not just the fields you set. You’ll need to include output: "jsonl" and the other parser directives yourself, or you’ll end up with mangled output in your chat channels.
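A safe override is therefore a full copy of the plugin default with your extra args appended. A hypothetical sketch for the Anthropic backend follows; field names mirror the plugin default shown earlier, and the added flags are illustrative, so verify both against your OpenClaw and Claude Code versions.

```json
{
  "command": "claude",
  "args": [
    "-p", "--output-format", "stream-json", "--verbose",
    "--max-turns", "30",
    "--permission-mode", "acceptEdits"
  ],
  "output": "jsonl",
  "input": "stdin",
  "sessionArg": "--session-id",
  "systemPromptArg": "--append-system-prompt"
}
```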
Setting up Claude CLI auth in OpenClaw Link to heading
If you previously used Claude via the embedded API, you’ll have an anthropic:claude auth profile with mode: "token" in your config. OpenClaw will prefer the embedded API path over the CLI backend while this profile exists.
Remove it:
```shell
openclaw config unset auth.profiles.anthropic:claude
```
OpenClaw still needs an auth profile for the provider, even though the Claude CLI handles the actual authentication itself. The token in the auth store isn’t passed to the claude CLI. OpenClaw uses it as a routing signal: “this provider has valid credentials, proceed with the CLI backend.” It also tracks rate limit cooldowns and usage quotas against the profile. Without any auth profile, you’ll get “Missing API key for provider anthropic” and the gateway won’t try the CLI backend at all.
Copy the Claude CLI’s OAuth token into OpenClaw’s auth store at ~/.openclaw/agents/main/agent/auth-profiles.json. The token lives in ~/.claude/.credentials.json under claudeAiOauth.accessToken. Add a profile entry like:
```json
{
  "anthropic:claude-cli": {
    "type": "token",
    "provider": "anthropic",
    "token": "<your accessToken from .claude/.credentials.json>",
    "managedBy": "claude-cli"
  }
}
```
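If you’d rather not hand-copy the token, a small Python helper can read it. The file path and field names are as documented above; the snippet falls back to a sample payload when the file is absent, so this sketch runs anywhere.

```python
import json
from pathlib import Path

# Credential file written by `claude auth login` (path as documented above).
cred_path = Path.home() / ".claude" / ".credentials.json"

# Sample payload with the documented shape, used only when the real
# file is missing, so the sketch stays runnable.
sample = '{"claudeAiOauth": {"accessToken": "sk-ant-oat01-example"}}'

raw = cred_path.read_text() if cred_path.exists() else sample
token = json.loads(raw)["claudeAiOauth"]["accessToken"]

# Print a prefix only; the full token is a secret.
print(f"accessToken starts with: {token[:12]}")
```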
Then add the matching config profile:
```shell
openclaw config set 'auth.profiles.anthropic:claude-cli' \
  '{"provider":"anthropic","mode":"token"}' --strict-json
```
Verifying it works Link to heading
Check the gateway logs after sending a message:
```shell
journalctl --user -u openclaw-gateway --since '1 min ago' | grep -E 'cli exec|embedded run'
```
- `[agent] cli exec: provider=claude-cli` — CLI backend, what you want
- `[agent] embedded run agent end:` — embedded API, still using the old path
Quick reference Link to heading
| Problem | Cause | Fix |
|---|---|---|
| Raw JSON in chat | Wizard-generated cliBackends config overriding plugin defaults | openclaw config unset agents.defaults.cliBackends.<provider> |
| Still using embedded API | Model prefix is anthropic/ instead of claude-cli/ | Set model to claude-cli/claude-opus-4-6 |
| “Missing API key” error | No auth profile registered for anthropic | Copy Claude CLI token into auth store (see above) |
| Claude billing errors | Extra usage required since April 4, 2026 | Enable extra usage in Claude admin settings |
| Codex responses swallowed | Same as raw JSON — parser not engaged | Same fix: remove explicit config |
Versions tested Link to heading
- OpenClaw 2026.4.9
- Claude Code 2.1.97
- Codex CLI 0.118.0
Further reading Link to heading
- OpenClaw CLI Backends docs
- OpenClaw Anthropic provider docs
- GitHub issue #62505 — related Codex regression discussion