Remove vllm backend and dependency #780

@avinash2692

Description

Background

vLLM pins an older version of transformers as a hard dependency, which
blocks us from upgrading to transformers v5. Since vLLM is the only thing
holding us back, the cleanest path forward is to remove it from the project
entirely rather than waiting for vLLM to catch up.

What needs to change

  • Delete LocalVLLMBackend and the mellea/backends/vllm module
  • Remove the [vllm] optional dependency and drop it from [all] / [backends]
  • Replace the subprocess-based vLLM integration test with a lightweight mock
    OpenAI-compatible server fixture (the test was validating the generic
    OpenAI-compatible HTTP path anyway, not vLLM internals)
  • Remove all vLLM references from docs, README, and the nav config
  • Regenerate uv.lock — this should free up the transformers version constraint

Out of scope

The follow-on transformers v5 upgrade is tracked in a separate PR (#418) that depends on this
issue; this issue covers only the removal.

Acceptance criteria

  • from mellea.backends.vllm import LocalVLLMBackend raises ModuleNotFoundError
  • pip install mellea[vllm] no longer resolves the extra (recent pip versions warn that the package does not provide it)
  • uv.lock no longer pins a transformers version required by vLLM
  • The mock OpenAI server fixture is in place and the replacement test passes
  • No vllm string remains in source, docs, or CI config
  • Full test suite is green
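The first criterion can be checked mechanically. A sketch of that check (note it passes vacuously in an environment where mellea is not installed at all, so run it from the project environment):

```python
import importlib

def vllm_backend_removed() -> bool:
    """True if mellea.backends.vllm can no longer be imported."""
    try:
        importlib.import_module("mellea.backends.vllm")
    except ModuleNotFoundError:
        return True   # module (or the whole package) is gone, as required
    return False      # the backend still imports: removal is incomplete
```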
