
Connect OpenClaw to Ollama: Local AI Agent in Practice


Key Takeaways

Connect OpenClaw to local Ollama, configure provider settings, create a minimal agent, and verify the end-to-end local AI flow.

After installing both Ollama and OpenClaw, the final step is wiring them together correctly. This guide takes a practical approach: edit ~/.openclaw/openclaw.config and point OpenClaw at your Ollama host.

Target Architecture

plain text
OpenClaw
  -> Ollama API (192.168.10.20:11434)
  -> qwen2.5:14b
  -> response
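Before touching the config, it helps to confirm that the Ollama host is serving and that the target model is installed. Below is a minimal sketch using only the Python standard library and Ollama's native /api/tags endpoint; the host address and model name are the example values from the architecture above and should be adjusted to your setup.

```python
import json
import urllib.request


def list_installed_models(base_url: str, timeout: float = 5.0) -> list[str]:
    """Return the names of models installed on an Ollama server.

    Uses Ollama's native /api/tags endpoint (no API key required).
    """
    url = base_url.rstrip("/") + "/api/tags"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]


def has_model(installed: list[str], wanted: str) -> bool:
    """Check whether the model OpenClaw will request is actually installed."""
    return wanted in installed


# Example (requires a reachable Ollama host):
#   models = list_installed_models("http://192.168.10.20:11434")
#   print(has_model(models, "qwen2.5:14b"))
```

If has_model returns False, run ollama pull for the missing model before proceeding.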

1) Configure Models in ~/.openclaw/openclaw.config

json5
models: {
    mode: 'merge',
    providers: {
      ollama: {
        baseUrl: 'http://192.168.10.20:11434/v1',
        apiKey: 'ollama-local',
        api: 'openai-completions',
        models: [
          {
            id: 'qwen2.5:14b',
            name: 'Qwen-14b',
            api: 'openai-completions',
            reasoning: false,
            input: [
              'text',
            ],
            cost: {
              input: 0,
              output: 0,
              cacheRead: 0,
              cacheWrite: 0,
            },
            contextWindow: 32768,
            maxTokens: 32768,
          },
        ],
      },
      'custom-192-168-10-20-11434': {
        baseUrl: 'http://192.168.10.20:11434/v1',
        apiKey: 'ollama-local',
        api: 'openai-completions',
        models: [
          {
            id: 'qwen2.5:14b',
            name: 'qwen2.5:14b (Custom Provider)',
            reasoning: false,
            input: [
              'text',
            ],
            cost: {
              input: 0,
              output: 0,
              cacheRead: 0,
              cacheWrite: 0,
            },
            contextWindow: 16000,
            maxTokens: 4096,
          },
        ],
      },
    },
  },

The key is to keep baseUrl and model IDs aligned with your actual Ollama endpoint and installed model.

2) Configure Agents

json5
agents: {
    defaults: {
      model: {
        primary: 'custom-192-168-10-20-11434/qwen2.5:14b',
      },
      models: {
        'openai-codex/gpt-5.4': {},
        'custom-192-168-10-20-11434/qwen2.5:14b': {},
      },
      workspace: '/root/.openclaw/workspace',
      compaction: {
        mode: 'safeguard',
      },
      maxConcurrent: 4,
      subagents: {
        maxConcurrent: 8,
      },
    },
  },

Start with these defaults, verify the setup is stable, then tune prompts and tools.
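The model references in the agents block follow a 'provider/model' convention, as seen in 'custom-192-168-10-20-11434/qwen2.5:14b' above. A small sketch of that split, assuming only the first '/' separates provider from model (the model ID itself may contain ':'):

```python
def split_model_ref(ref: str) -> tuple[str, str]:
    """Split an OpenClaw-style model reference into (provider, model id).

    Only the first '/' is the separator; 'qwen2.5:14b' stays intact.
    """
    provider, sep, model = ref.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider/model', got {ref!r}")
    return provider, model


# Example:
#   split_model_ref("custom-192-168-10-20-11434/qwen2.5:14b")
#   -> ("custom-192-168-10-20-11434", "qwen2.5:14b")
```

If the provider half does not match a key under models.providers, the agent cannot resolve its primary model.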

3) End-to-End Validation

Use a simple prompt:

plain text
Explain the benefits and limitations of running LLMs locally.

If the model returns a coherent answer, the full chain is working:

plain text
OpenClaw -> Ollama API -> qwen2.5:14b -> Response
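You can also exercise the same path OpenClaw uses by calling Ollama's OpenAI-compatible /v1/chat/completions endpoint directly. A stdlib-only sketch follows; the base URL, model, and 'ollama-local' key mirror the config above (Ollama itself ignores the API key, but the header keeps OpenAI-style clients happy).

```python
import json
import urllib.request


def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer ollama-local",  # ignored by Ollama
        },
        method="POST",
    )


# Example (requires a reachable Ollama host):
#   req = chat_request("http://192.168.10.20:11434/v1", "qwen2.5:14b",
#                      "Explain the benefits and limitations of running LLMs locally.")
#   with urllib.request.urlopen(req, timeout=120) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Getting a valid response here but a failure inside OpenClaw usually points at the config, not the model server.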

Troubleshooting Checklist

  • Connection error: ensure ollama serve is running and the configured host (for example 192.168.10.20:11434) is reachable.
  • Model missing: run ollama list, then ollama pull qwen2.5:14b if needed.
  • Slow response: check local CPU/RAM pressure or switch to a lighter model.
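For the first checklist item, a plain TCP probe separates "host unreachable" from higher-level problems like a missing model. A minimal sketch with the standard library:

```python
import socket


def check_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the Ollama port can be opened.

    False means ollama serve is not running, or the host/port is
    blocked or wrong -- the first item on the checklist above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example:
#   check_reachable("192.168.10.20", 11434)
```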

Wrap-up

You now have a complete local loop from model serving to agent orchestration. Next, you can extend this setup with tool calls (web_fetch, web_search) and proxy-enabled automation workflows.
