Key Takeaways
A practical starter guide to install Ollama on Ubuntu, pull Qwen2.5:14B, and verify local inference before connecting OpenClaw.
Install Ollama on Ubuntu: Step 1 for Local OpenClaw
Before wiring OpenClaw to any model provider, you should first make sure your local model runtime is stable. In this article, we install Ollama on Ubuntu, start the service, pull a model, and run a quick validation.
Why Start with Ollama
In this setup, Ollama is your local LLM serving layer. Once done, you get:
- A local API endpoint: `http://127.0.0.1:11434`
- An installed model: `qwen2.5:14b`
- A provider endpoint ready for OpenClaw
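Once everything is in place, a quick probe of that endpoint tells you whether the serving layer is alive. A minimal sketch, assuming Ollama's default bind address and port (the guard logic is illustrative, not part of Ollama itself):

```shell
# Probe the default Ollama endpoint; safe to run whether or not the
# service is up. 127.0.0.1:11434 is Ollama's default address and port.
if curl -fsS --max-time 2 http://127.0.0.1:11434 >/dev/null 2>&1; then
  STATUS="up"
else
  STATUS="down"
fi
echo "Ollama endpoint is $STATUS"
```

If it reports "down" now, that is expected; the steps below bring it up.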
1) Update the System
```
sudo apt update
sudo apt upgrade -y
```

2) Install Ollama
Run the official installer:
```
curl -fsSL https://ollama.com/install.sh | sh
```

Verify installation:
```
ollama --version
```

3) Start Ollama Service
```
ollama serve
```

Default endpoint:
`http://127.0.0.1:11434`

Quick API check:
```
curl http://127.0.0.1:11434
```

4) Pull a Model (Qwen2.5:14B)
```
ollama pull qwen2.5:14b
```

List local models:
```
ollama list
```

5) Run a Validation Prompt
```
ollama run qwen2.5:14b
```

Example prompt:
```
Explain what local LLMs are.
```

If you get a coherent answer, your local model service is ready.
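The same validation can be run non-interactively by passing the prompt as an argument to `ollama run`, which is handy in scripts. A sketch, guarded so it is safe to run even on a machine where Ollama is not installed yet (the guard is illustrative, not part of Ollama):

```shell
# Non-interactive variant of the validation step: pass the prompt as an
# argument instead of opening the interactive REPL.
PROMPT="Explain what local LLMs are."
if command -v ollama >/dev/null 2>&1; then
  ANSWER=$(ollama run qwen2.5:14b "$PROMPT")
else
  ANSWER="(ollama not installed)"
fi
echo "$ANSWER"
```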
Hardware Notes and Troubleshooting
- Recommended memory: 16 GB minimum, 32 GB preferred
- Suggested OS: Ubuntu 20.04 or newer
- Slow pull: check network and model download status
- API not reachable: ensure `ollama serve` is still running
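On Linux, the official install script typically registers Ollama as a systemd service, so "API not reachable" is often just a stopped service. A quick check, guarded so it degrades gracefully on systems without systemd (the guard and messages are illustrative):

```shell
# Check whether an "ollama" systemd unit is running; fall back to a hint
# if it is not (or if systemctl is unavailable on this machine).
if systemctl is-active --quiet ollama 2>/dev/null; then
  SVC="active"
else
  SVC="inactive or missing"
fi
echo "ollama systemd service: $SVC"
```

If the service is missing, running `ollama serve` in a terminal starts the API directly.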
Wrap-up
You now have a working local Ollama environment. In Part 2, we install OpenClaw and prepare it for local model integration.
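Before moving on to Part 2, one last end-to-end check is useful: Ollama's `GET /api/tags` endpoint returns the installed models as JSON, so the pulled model should appear there. A sketch assuming the default port, with a fallback so it runs even when the service is down (the fallback JSON is illustrative):

```shell
# Confirm the pulled model is visible over the HTTP API. /api/tags lists
# installed models; the fallback keeps the script usable offline.
MODELS=$(curl -fsS --max-time 2 http://127.0.0.1:11434/api/tags 2>/dev/null \
  || echo '{"models":[]}')
echo "$MODELS"
```

If `qwen2.5:14b` shows up in the output, OpenClaw will be able to see it too.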