Key Takeaways
Connect OpenClaw to local Ollama, configure provider settings, create a minimal agent, and verify the end-to-end local AI flow.
Connect OpenClaw to Ollama: Local AI Agent in Practice
After installing both Ollama and OpenClaw, the final step is wiring them together correctly. This updated guide follows the same practical approach as the Chinese version: edit ~/.openclaw/openclaw.config and point OpenClaw at your Ollama host.
Target Architecture
OpenClaw
-> Ollama API (192.168.10.20:11434)
-> qwen2.5:14b
-> response

1) Configure Models in ~/.openclaw/openclaw.config

```json5
models: {
  mode: 'merge',
  providers: {
    ollama: {
      baseUrl: 'http://192.168.10.20:11434/v1',
      apiKey: 'ollama-local',
      api: 'openai-completions',
      models: [
        {
          id: 'qwen2.5:14b',
          name: 'Qwen-14b',
          api: 'openai-completions',
          reasoning: false,
          input: ['text'],
          cost: {
            input: 0,
            output: 0,
            cacheRead: 0,
            cacheWrite: 0,
          },
          contextWindow: 32768,
          maxTokens: 32768,
        },
      ],
    },
    'custom-192-168-10-20-11434': {
      baseUrl: 'http://192.168.10.20:11434/v1',
      apiKey: 'ollama-local',
      api: 'openai-completions',
      models: [
        {
          id: 'qwen2.5:14b',
          name: 'qwen2.5:14b (Custom Provider)',
          reasoning: false,
          input: ['text'],
          cost: {
            input: 0,
            output: 0,
            cacheRead: 0,
            cacheWrite: 0,
          },
          contextWindow: 16000,
          maxTokens: 4096,
        },
      ],
    },
  },
},
```

The key is to keep baseUrl and model IDs aligned with your actual Ollama endpoint and installed model.
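Typos in baseUrl or a model ID are the most common cause of silent failures, so a quick offline sanity check can pay off before launching OpenClaw. The sketch below mirrors the provider entries from the config above in a plain Python dict (the `check_providers` helper is illustrative, not part of OpenClaw) and flags malformed URLs or empty model lists.

```python
from urllib.parse import urlparse

# Mirror of the providers section from ~/.openclaw/openclaw.config.
providers = {
    "ollama": {
        "baseUrl": "http://192.168.10.20:11434/v1",
        "models": ["qwen2.5:14b"],
    },
    "custom-192-168-10-20-11434": {
        "baseUrl": "http://192.168.10.20:11434/v1",
        "models": ["qwen2.5:14b"],
    },
}

def check_providers(providers):
    """Return a list of problems found in the provider entries."""
    problems = []
    for name, cfg in providers.items():
        url = urlparse(cfg["baseUrl"])
        if url.scheme not in ("http", "https") or not url.netloc:
            problems.append(f"{name}: malformed baseUrl {cfg['baseUrl']!r}")
        if not cfg["baseUrl"].rstrip("/").endswith("/v1"):
            problems.append(f"{name}: baseUrl should end in /v1 for the OpenAI-compatible API")
        if not cfg["models"]:
            problems.append(f"{name}: no models declared")
    return problems

print(check_providers(providers))  # an empty list means the entries look consistent
```

Run this whenever you edit the config; an empty list means both providers point at a well-formed `/v1` endpoint and declare at least one model.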
2) Configure Agents
```json5
agents: {
  defaults: {
    model: {
      primary: 'custom-192-168-10-20-11434/qwen2.5:14b',
    },
    models: {
      'openai-codex/gpt-5.4': {},
      'custom-192-168-10-20-11434/qwen2.5:14b': {},
    },
    workspace: '/root/.openclaw/workspace',
    compaction: {
      mode: 'safeguard',
    },
    maxConcurrent: 4,
    subagents: {
      maxConcurrent: 8,
    },
  },
},
```

Start with these defaults, verify stability, then tune prompts and tools.
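The `primary` setting uses a `provider/model` reference, so the part before the first slash must match a provider key and the part after it must match an installed model ID. How OpenClaw resolves this internally isn't shown here; the sketch below (with a hypothetical `resolve` helper) just illustrates the shape of the reference and why the halves must stay in sync with the models section.

```python
# Mirror of the relevant config: provider keys and their model IDs.
providers = {
    "ollama": {"models": ["qwen2.5:14b"]},
    "custom-192-168-10-20-11434": {"models": ["qwen2.5:14b"]},
}
primary = "custom-192-168-10-20-11434/qwen2.5:14b"

def resolve(ref, providers):
    """Split a 'provider/model' reference on the FIRST slash and verify both halves.

    Splitting on the first slash matters because model IDs like
    'qwen2.5:14b' may themselves contain punctuation.
    """
    provider, _, model = ref.partition("/")
    if provider not in providers:
        raise KeyError(f"unknown provider: {provider}")
    if model not in providers[provider]["models"]:
        raise KeyError(f"provider {provider} has no model {model}")
    return provider, model

print(resolve(primary, providers))  # ('custom-192-168-10-20-11434', 'qwen2.5:14b')
```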
3) End-to-End Validation
Use a simple prompt:
Explain the benefits and limitations of running LLMs locally.

A normal model response means your chain is working:

OpenClaw -> Ollama API -> qwen2.5:14b -> Response

Troubleshooting Checklist
- Connection error: ensure `ollama serve` is running and the configured host (for example `192.168.10.20:11434`) is reachable.
- Model missing: run `ollama list`, then `ollama pull qwen2.5:14b` if needed.
- Slow response: check local CPU/RAM pressure or switch to a lighter model.
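The checklist above can be scripted. The sketch below first probes the host with a plain TCP connect (the reachability test from the first checklist item), then sends the validation prompt to Ollama's OpenAI-compatible `/v1/chat/completions` endpoint. The host, port, and model ID are taken from the example config; adjust them to your setup.

```python
import json
import socket
from urllib.request import Request, urlopen

HOST, PORT = "192.168.10.20", 11434  # adjust to your Ollama host

def is_reachable(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-compatible chat-completions request for Ollama."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

if is_reachable(HOST, PORT):
    req = build_chat_request(
        f"http://{HOST}:{PORT}/v1",
        "qwen2.5:14b",
        "Explain the benefits and limitations of running LLMs locally.",
    )
    # Local inference can be slow, hence the generous timeout.
    with urlopen(req, timeout=120) as resp:
        body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
else:
    print(f"Cannot reach {HOST}:{PORT} -- is `ollama serve` running?")
```

A readable answer from the model confirms the whole OpenClaw -> Ollama chain independently of the agent layer; a connection failure points you back to the first checklist item.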
Wrap-up
You now have a complete local loop from model serving to agent orchestration. Next, you can extend this setup with tool calls (web_fetch, web_search) and proxy-enabled automation workflows.