
Support OpenAI-compatible API endpoints for all LLM providers #144

@Chungzter

Description


We should unify LLM interaction in commizard to use the OpenAI-compatible API format instead of provider-specific endpoints.

This will allow the same code path to work for:

  • Local Ollama (which supports OpenAI compatibility)
  • OpenAI itself with popular GPT models
  • Future providers that offer OpenAI-compatible APIs (Grok, Anthropic, Together, Fireworks, etc.)
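
Under this design, switching providers would amount to changing the base URL and API key. A hypothetical sketch (the `PROVIDERS` table and `resolve` helper don't exist in commizard yet; the base URLs are the providers' documented OpenAI-compatible endpoints):

```python
import os

# Hypothetical provider table -- illustrative names, not existing commizard code.
# Each provider exposes the same /v1/chat/completions surface, so only the
# base URL and credential differ. More providers (Together, Fireworks, Grok,
# etc.) would just be additional entries.
PROVIDERS = {
    "ollama": {"base_url": "http://localhost:11434/v1", "key_env": "OLLAMA_API_KEY"},
    "openai": {"base_url": "https://api.openai.com/v1", "key_env": "OPENAI_API_KEY"},
}

def resolve(provider: str) -> tuple[str, str]:
    """Return (base_url, api_key) for a named provider.

    Ollama ignores the key, so "ollama" serves as a conventional placeholder
    when no environment variable is set.
    """
    entry = PROVIDERS[provider]
    return entry["base_url"], os.environ.get(entry["key_env"], "ollama")
```
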

Why not separate paths

  • Reduces code duplication
  • Easier to maintain and extend
  • Ollama's OpenAI compatibility is stable enough for our use case (simple prompt to commit message generation)

Tasks

  • Create a new, minimal LLM client abstraction

  • Refactor core generation function

    • Replace the current Ollama /api/generate (and other /api/*) calls
    • Map our prompt to messages list (single user message for now)
  • Add configuration / switching

    • Store base_url and api_key in config.py
    • Environment variable fallback (OPENAI_API_KEY, OLLAMA_HOST, etc.)
  • Update README.md with changes

    • Ollama's OpenAI-compatible endpoint requires a minimum Ollama version, so state this requirement clearly in the README.
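
The client abstraction and prompt-to-messages mapping from the tasks above could look roughly like this (a sketch only: `LLMClient` and its method names are illustrative, not commizard's actual API; it uses only the stdlib to avoid adding a dependency):

```python
import json
import os
import urllib.request

class LLMClient:
    """Minimal client for any OpenAI-compatible /v1/chat/completions endpoint."""

    def __init__(self, base_url=None, api_key=None, model="llama3"):
        # Explicit config wins; fall back to environment variables
        # (OLLAMA_HOST / OPENAI_API_KEY) as described in the tasks.
        self.base_url = (base_url
                         or os.environ.get("OLLAMA_HOST", "http://localhost:11434")).rstrip("/")
        self.api_key = api_key or os.environ.get("OPENAI_API_KEY", "ollama")
        self.model = model

    def build_request(self, prompt: str) -> urllib.request.Request:
        """Map our single prompt string to the chat `messages` list."""
        body = {
            "model": self.model,
            "messages": [{"role": "user", "content": prompt}],
        }
        return urllib.request.Request(
            f"{self.base_url}/v1/chat/completions",
            data=json.dumps(body).encode(),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {self.api_key}",
            },
        )

    def generate(self, prompt: str) -> str:
        with urllib.request.urlopen(self.build_request(prompt)) as resp:
            data = json.load(resp)
        return data["choices"][0]["message"]["content"]
```

Because every provider shares this request shape, the core generation function only ever deals with one code path.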

Nice-to-have features to keep in mind for later:

  • Automatic fallback if one provider fails
  • Support for /v1/completions (non-chat) as fallback
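
The automatic-fallback idea could be as simple as trying each configured provider in order (a hypothetical sketch; `generate_with_fallback` and the provider-list shape are illustrative, not planned API):

```python
# Try each provider in order and return the first successful result.
# `providers` is an ordered list of (name, generate_fn) pairs, where
# generate_fn takes the prompt and returns the commit message.
def generate_with_fallback(prompt, providers):
    errors = []
    for name, generate_fn in providers:
        try:
            return generate_fn(prompt)
        except Exception as exc:  # network errors, HTTP errors, bad JSON
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

The same hook could later try `/v1/completions` when a provider's chat endpoint is unavailable.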

Metadata

Labels

refactor · code readability/structure · enhancement
