Summary
memos-local-openclaw fails with multiple provider combinations common in OpenClaw deployments:
- minimax-portal: the OpenClaw default; it speaks the Anthropic API, but the provider switches only match the literal "anthropic" key
- Ollama /v1/embeddings: the OpenAI-compatibility endpoint is broken, and the fallback path silently drops vector embeddings
- Ollama /v1/chat/completions: the OpenAI-compatibility endpoint is broken, so summarizer requests time out
- Timeouts too short: the judgeDedup default of 15s is insufficient while a local model loads
Root Causes
- Provider switch cases only match generic names ("anthropic") and miss custom provider keys like "minimax-portal"
- Ollama OpenAI compatibility is broken for both /v1/embeddings and /v1/chat/completions (the native APIs work)
- Default 15s timeout is too short (Ollama can take 2-6s just to load a model on a cold request)
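The first root cause can be sketched as follows. This is a hypothetical reconstruction of the failure mode, not the actual memos-local-openclaw code: an unrecognized provider key falls through to the default branch, which returns an empty result instead of failing loudly.

```typescript
// Hypothetical sketch of the switch-fallthrough bug: provider keys are
// matched literally, so a custom key like "minimax-portal" (which speaks
// the Anthropic API) hits the default branch and embeddings are lost.
type Embedding = number[];

function embedTexts(provider: string, texts: string[]): Embedding[] {
  switch (provider) {
    case "anthropic":
      // ... would call the Anthropic-compatible embedding path ...
      return texts.map(() => [0.1, 0.2]); // placeholder result
    default:
      // silent fallback: no embeddings produced at all
      return [];
  }
}
```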
Proposed Solutions
1. Add explicit cases for common OpenClaw providers
Add minimax-portal and other known providers to all 6 switch functions:
case "anthropic":
case "minimax-portal":
  return summarizeAnthropic(...);
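An alternative to duplicating the new cases across all 6 switch functions would be to normalize aliases once. This is a hypothetical design sketch; `PROVIDER_ALIASES` and `canonicalProvider` are illustrative names, not existing code:

```typescript
// Hypothetical alias table: map custom OpenClaw provider keys to the
// canonical wire protocol they speak, then switch on the canonical name.
// Adding a new alias becomes a one-line change instead of six.
const PROVIDER_ALIASES: Record<string, string> = {
  "minimax-portal": "anthropic", // speaks the Anthropic API
};

function canonicalProvider(provider: string): string {
  return PROVIDER_ALIASES[provider] ?? provider;
}
```

Each switch would then dispatch on `canonicalProvider(cfg.provider)` instead of the raw key.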
2. Support Ollama native API
// in the embedding switch
case "ollama":
  return embedOllamaNative(texts, cfg, log); // uses /api/embeddings

// in the summarizer switch
case "ollama":
  return summarizeOllamaNative(text, cfg, log); // uses /api/generate
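A minimal sketch of the native embedding path, assuming a `cfg` carrying `baseUrl` and `model`; `embedOllamaNative`, `buildEmbeddingRequest`, and the config shape are hypothetical names from this proposal. Ollama's native endpoint is `POST /api/embeddings` with `{ model, prompt }`, returning `{ embedding: number[] }`:

```typescript
// Illustrative config shape; the real one lives in memos-local-openclaw.
interface OllamaCfg { baseUrl: string; model: string; }

function buildEmbeddingRequest(cfg: OllamaCfg, text: string) {
  // Native Ollama API: POST /api/embeddings { model, prompt }
  return {
    url: `${cfg.baseUrl}/api/embeddings`,
    body: JSON.stringify({ model: cfg.model, prompt: text }),
  };
}

async function embedOllamaNative(texts: string[], cfg: OllamaCfg): Promise<number[][]> {
  const out: number[][] = [];
  for (const text of texts) {
    const req = buildEmbeddingRequest(cfg, text);
    const res = await fetch(req.url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: req.body,
    });
    if (!res.ok) throw new Error(`ollama embeddings failed: ${res.status}`);
    out.push((await res.json()).embedding as number[]);
  }
  return out;
}
```

The summarizer side would be analogous, posting to `/api/generate` with `stream: false` and reading the `response` field.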
3. Increase default timeouts for local models
judgeDedup: 15s → 120s
summarize: 60s → 120s
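The new defaults could be centralized rather than scattered as literals; this is a sketch under the assumption that timeouts are plumbed through in milliseconds, and the names here are illustrative:

```typescript
// Hypothetical per-operation timeout table with the proposed defaults.
const DEFAULT_TIMEOUTS_MS: Record<string, number> = {
  judgeDedup: 120_000, // was 15_000: too short for cold local model loads
  summarize: 120_000,  // was 60_000
};

function timeoutFor(op: string, override?: number): number {
  return override ?? DEFAULT_TIMEOUTS_MS[op] ?? 60_000;
}

// Usage with fetch: abort the request once the budget is exhausted.
// await fetch(url, { signal: AbortSignal.timeout(timeoutFor("summarize")) });
```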
4. Add startup health-check
Fail fast if configured providers are unreachable.
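A possible shape for that check, assuming provider configs expose a `baseUrl`; the function and config names are illustrative, not existing code:

```typescript
// Hypothetical startup health check: probe each configured provider's
// base URL once and fail fast with a clear error, instead of the current
// behavior of timing out mid-request much later.
interface ProviderCfg { name: string; baseUrl: string; }

function healthCheckTargets(providers: ProviderCfg[]): string[] {
  // de-duplicate base URLs so a shared endpoint is probed only once
  return [...new Set(providers.map((p) => p.baseUrl))];
}

async function assertProvidersReachable(providers: ProviderCfg[]): Promise<void> {
  for (const url of healthCheckTargets(providers)) {
    try {
      const res = await fetch(url, { signal: AbortSignal.timeout(5_000) });
      if (res.status >= 500) throw new Error(`status ${res.status}`);
    } catch (err) {
      throw new Error(`provider endpoint unreachable at ${url}: ${err}`);
    }
  }
}
```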