Load balancer for ChatGPT accounts. Pool multiple accounts, track usage, view everything in a dashboard.
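Conceptually, the balancer hands each request to the next account in the pool and records per-account usage. A toy round-robin sketch in Python (illustrative only — `AccountPool` and these names are not codex-lb's actual internals):

```python
from collections import Counter
from itertools import cycle

class AccountPool:
    """Toy round-robin pool: rotate accounts, count requests per account."""
    def __init__(self, accounts):
        self._cycle = cycle(accounts)
        self.usage = Counter()          # requests served per account

    def next_account(self):
        account = next(self._cycle)
        self.usage[account] += 1        # usage tracking, as shown in the dashboard
        return account

pool = AccountPool(["alice@example.com", "bob@example.com"])
picked = [pool.next_account() for _ in range(4)]
print(picked)      # alternates between the two accounts
print(pool.usage)  # each account served 2 of the 4 requests
```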
Run with Docker:

```shell
docker run -d --name codex-lb \
  -p 2455:2455 -p 1455:1455 \
  -v ~/.codex-lb:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest
```

Or with uvx:

```shell
uvx codex-lb
```

Open localhost:2455 → Add account → Done.
Add to `~/.codex/config.toml`:

```toml
model = "gpt-5.2-codex"
model_reasoning_effort = "xhigh"
model_provider = "codex-lb"

[model_providers.codex-lb]
name = "OpenAI" # MUST be "OpenAI" - enables /compact endpoint
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"
chatgpt_base_url = "http://127.0.0.1:2455"
requires_openai_auth = true # Required: enables model selection in the Codex IDE extension
```

Run:

```shell
opencode auth login
```

Then select OpenAI → Manually enter API Key and enter any value.
Add the following to ~/.config/opencode/opencode.json:
All data stored in `~/.codex-lb/`:

- `store.db` – accounts, usage logs
- `encryption.key` – encrypts tokens (auto-generated)

Backup this directory to preserve your accounts.
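Since everything lives in that one directory, a backup is just an archive of it. A minimal Python sketch (it uses a temporary stand-in for `~/.codex-lb` so it is safe to run anywhere):

```python
import tarfile
import tempfile
from pathlib import Path

# Stand-in for ~/.codex-lb so the sketch is self-contained.
data_dir = Path(tempfile.mkdtemp()) / ".codex-lb"
data_dir.mkdir()
(data_dir / "store.db").write_bytes(b"sqlite data")
(data_dir / "encryption.key").write_bytes(b"key material")

# Archive the whole directory; restoring is just extracting it back in place.
backup = data_dir.parent / "codex-lb-backup.tar.gz"
with tarfile.open(backup, "w:gz") as tar:
    tar.add(data_dir, arcname=".codex-lb")

with tarfile.open(backup) as tar:
    names = tar.getnames()
print(names)  # includes .codex-lb/store.db and .codex-lb/encryption.key
```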
Thanks goes to these wonderful people (emoji key):
Soju06 💻 |
Jonas Kamsker 💻 🐛 🚧 |
Quack 💻 🐛 🚧 🎨 |
Jill Kok, San Mou 💻 |
PARK CHANYOUNG 📖 |
This project follows the all-contributors specification. Contributions of any kind welcome!
Snippet for `~/.config/opencode/opencode.json` (referenced in the opencode setup step above):

```json
{
  ...
  "provider": {
    "openai": {
      "options": {
        "baseURL": "http://127.0.0.1:2455/v1"
      }
    }
  },
  ...
}
```
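If you already have an `opencode.json`, the provider override can be merged in without hand-editing the file. A hedged Python sketch (the key layout is taken from the snippet above; the sample existing config is invented for illustration):

```python
import json

# Existing config (normally read from ~/.config/opencode/opencode.json).
existing = {"theme": "dark"}

# Point the openai provider at the local codex-lb endpoint.
override = {"provider": {"openai": {"options": {"baseURL": "http://127.0.0.1:2455/v1"}}}}

def deep_merge(base, extra):
    """Recursively merge dicts, preferring values from `extra`."""
    for key, value in extra.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_merge(base[key], value)
        else:
            base[key] = value
    return base

merged = deep_merge(existing, override)
print(json.dumps(merged, indent=2))  # existing keys preserved, provider added
```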