Deploy OpenClaw, a Telegram bot that gives chat access to Ollama models. The bot
always uses the best warm general-purpose model (`slot1_general` from the last
benchmark run).
Prerequisites:

- Vault secret `{{ vault_secret_prefix }}/openclaw:telegram_token`
- `benchmarks/results/model_selection.json` must exist (produced by `03_benchmark.yml`)

`08_openclaw.yml` reads `benchmarks/results/model_selection.json` at deploy time and
sets `openclaw_model` to `slot1_general`, the highest-scoring general model that is
always warm on the Node 1 instance (port 11434). This ensures the bot always uses the
best available model without requiring manual updates after a benchmark run.
The fallback value (used when `model_selection.json` is absent) is set in
`inventory/group_vars/all.yml` under `openclaw_model`.
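The deploy-time lookup could be sketched as an Ansible task pair like the one below. This is an illustrative sketch, not the playbook's actual code: the task names, the intermediate `model_selection_raw` variable, and the assumption that the JSON exposes a top-level `slot1_general` key all come from the description above, not from `08_openclaw.yml` itself.

```yaml
# Hypothetical sketch of the model-selection override in 08_openclaw.yml.
- name: Read model selection from the last benchmark run (empty if absent)
  ansible.builtin.set_fact:
    model_selection_raw: "{{ lookup('file',
      'benchmarks/results/model_selection.json', errors='ignore') }}"

- name: Override openclaw_model with the warm general model
  ansible.builtin.set_fact:
    openclaw_model: "{{ (model_selection_raw | from_json).slot1_general }}"
  when: model_selection_raw  # otherwise keep the group_vars fallback
```

With `errors='ignore'` the file lookup returns an empty value instead of failing, so a missing `model_selection.json` leaves the `openclaw_model` default from `inventory/group_vars/all.yml` in place.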
OpenClaw connects to localhost:11434 — the Node 1 general instance. Coding models on
port 11435 are not accessible to the bot; they are reserved for IDE and API integrations.
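The bot's call path to the general instance can be illustrated with a minimal Python helper against Ollama's standard `/api/generate` endpoint. The function names here are hypothetical, not taken from `bot.py`; only the endpoint, port, and model name follow the description above.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Node 1 general instance


def build_generate_payload(model: str, prompt: str) -> dict:
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks Ollama for a single JSON reply instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}


def ask_ollama(model: str, prompt: str) -> str:
    """POST a prompt to the local Ollama instance and return the reply text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the bot talks only to port 11434, a call like `ask_ollama("slot1_general", "...")` can never reach the coding instance on 11435.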
- Python dependencies (`python-telegram-bot`, `requests`, `pyyaml`) are installed via `pip3`
- Bot script: `/mnt/ai_data/openclaw/bot.py`
- A systemd unit (`openclaw.service`) manages the process

Config file location: `/mnt/ai_data/openclaw/config.yml`
The configuration includes:
- Ollama endpoint (`http://localhost:11434`) and API key (from Vault)
- Model name (`slot1_general`)
- Vault path `{{ vault_secret_prefix }}/openclaw`, key `telegram_token`

The Telegram token is read from Vault at deploy time and written to the config file.
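A rendered `config.yml` matching this description might look like the fragment below. The key layout is an assumption for illustration; the real template in the role may use different names.

```yaml
# /mnt/ai_data/openclaw/config.yml (illustrative layout, key names assumed)
ollama:
  url: http://localhost:11434   # Node 1 general instance
  api_key: "<rendered from Vault at deploy time>"
model: slot1_general
telegram:
  token: "<rendered from {{ vault_secret_prefix }}/openclaw:telegram_token>"
```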
If no Telegram bot token is configured (Vault secret absent or empty), the entire
OpenClaw installation is skipped. This allows running site.yml without a Telegram
bot token.
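The skip behaviour can be expressed as a single guard on the role or task block. This is a hedged sketch: the variable name `openclaw_telegram_token` and the role name are assumptions, not confirmed names from the playbooks.

```yaml
# Hypothetical guard: skip the whole OpenClaw install when no token is set.
- name: Install and configure OpenClaw
  ansible.builtin.include_role:
    name: openclaw
  when: openclaw_telegram_token | default('') | length > 0
```

With `default('')`, both an absent and an empty Vault secret fail the condition, so `site.yml` runs to completion without a Telegram bot token.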
```bash
# Via site.yml with the openclaw tag
ansible-playbook playbooks/site.yml --tags openclaw -K -e @local.yml

# Or run the playbook directly
ansible-playbook playbooks/08_openclaw.yml -K -e @local.yml
```