Conversation

@xingyaoww xingyaoww commented Jan 10, 2026

Summary

This PR implements an OAuth PKCE (Proof Key for Code Exchange) flow for authenticating with OpenAI's ChatGPT service, allowing users with ChatGPT Plus/Pro subscriptions to use Codex models without consuming API credits.

Key features:

  • OAuth PKCE flow with local callback server for secure authentication
  • Credential storage with automatic token refresh (stored in ~/.local/share/openhands/auth/)
  • LLM.subscription_login() classmethod for easy access
  • Support for multiple Codex models via ChatGPT subscription:
    • gpt-5.2-codex
    • gpt-5.2
    • gpt-5.1-codex-max
    • gpt-5.1-codex-mini
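
The PKCE handshake behind the flow can be sketched with the standard library alone. This is a minimal illustration of the RFC 7636 mechanism, not the PR's actual code (which lives in openhands/sdk/llm/auth/openai.py):

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code_verifier / S256 code_challenge pair."""
    # 32 random bytes -> 43-char base64url string (padding stripped)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # challenge = BASE64URL(SHA-256(verifier)), again without padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The verifier never leaves the client: only the challenge goes into the authorize request, and the verifier is revealed at token exchange, so an intercepted authorization code is useless on its own.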

Usage:

from openhands.sdk import LLM

# First time: opens browser for OAuth login
llm = LLM.subscription_login(model="gpt-5.2-codex")

# Subsequent calls: reuses cached credentials
llm = LLM.subscription_login(model="gpt-5.2-codex")

# Force fresh login
llm = LLM.subscription_login(model="gpt-5.2-codex", force_login=True)

Inspired by opencode's implementation.

New modules:

  • openhands/sdk/llm/auth/__init__.py - Auth module exports
  • openhands/sdk/llm/auth/credentials.py - Credential storage and retrieval
  • openhands/sdk/llm/auth/openai.py - OpenAI OAuth PKCE flow implementation

Checklist

  • If the PR is changing/adding functionality, are there tests to reflect this?
  • If there is an example, have you run the example to make sure that it works?
  • If there are instructions on how to run the code, have you followed the instructions and made sure that it works?
  • If the feature is significant enough to require documentation, is there a PR open on the OpenHands/docs repository with the same branch name?
  • Is the GitHub CI passing?

Note: This feature requires a ChatGPT Plus/Pro subscription to test the actual OAuth flow. The unit tests cover the credential storage, PKCE generation, URL building, and mock token refresh scenarios.



Agent Server images for this PR

GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server

Variants & Base Images

Variant  Architectures  Base Image                                  Docs / Tags
java     amd64, arm64   eclipse-temurin:17-jdk                      Link
python   amd64, arm64   nikolaik/python-nodejs:python3.12-nodejs22  Link
golang   amd64, arm64   golang:1.21-bookworm                        Link

Pull (multi-arch manifest)

# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:8f60e00-python

Run

docker run -it --rm \
  -p 8000:8000 \
  --name agent-server-8f60e00-python \
  ghcr.io/openhands/agent-server:8f60e00-python

All tags pushed for this build

ghcr.io/openhands/agent-server:8f60e00-golang-amd64
ghcr.io/openhands/agent-server:8f60e00-golang_tag_1.21-bookworm-amd64
ghcr.io/openhands/agent-server:8f60e00-golang-arm64
ghcr.io/openhands/agent-server:8f60e00-golang_tag_1.21-bookworm-arm64
ghcr.io/openhands/agent-server:8f60e00-java-amd64
ghcr.io/openhands/agent-server:8f60e00-eclipse-temurin_tag_17-jdk-amd64
ghcr.io/openhands/agent-server:8f60e00-java-arm64
ghcr.io/openhands/agent-server:8f60e00-eclipse-temurin_tag_17-jdk-arm64
ghcr.io/openhands/agent-server:8f60e00-python-amd64
ghcr.io/openhands/agent-server:8f60e00-nikolaik_s_python-nodejs_tag_python3.12-nodejs22-amd64
ghcr.io/openhands/agent-server:8f60e00-python-arm64
ghcr.io/openhands/agent-server:8f60e00-nikolaik_s_python-nodejs_tag_python3.12-nodejs22-arm64
ghcr.io/openhands/agent-server:8f60e00-golang
ghcr.io/openhands/agent-server:8f60e00-java
ghcr.io/openhands/agent-server:8f60e00-python

About Multi-Architecture Support

  • Each variant tag (e.g., 8f60e00-python) is a multi-arch manifest supporting both amd64 and arm64
  • Docker automatically pulls the correct architecture for your platform
  • Individual architecture tags (e.g., 8f60e00-python-amd64) are also available if needed

Implement OAuth PKCE flow for authenticating with OpenAI's ChatGPT service,
allowing users with ChatGPT Plus/Pro subscriptions to use Codex models
(gpt-5.2-codex, gpt-5.2, gpt-5.1-codex-max, gpt-5.1-codex-mini) without
consuming API credits.

Key features:
- OAuth PKCE flow with local callback server for secure authentication
- Credential storage with automatic token refresh
- LLM.subscription_login() classmethod for easy access
- Support for multiple Codex models via ChatGPT subscription

Usage:
  from openhands.sdk import LLM
  llm = LLM.subscription_login(model='gpt-5.2-codex')

Inspired by opencode's implementation (anomalyco/opencode#7537).

Co-authored-by: openhands <[email protected]>
github-actions bot commented Jan 10, 2026

Coverage

Coverage Report

File                                     Stmts  Miss  Cover  Missing
openhands-sdk/openhands/sdk/llm
   llm.py                                406    66    83%    349, 370–371, 407, 568, 669, 697, 770–775, 895, 898–901, 1027, 1049–1050, 1059, 1072, 1074–1079, 1081–1098, 1101–1105, 1107–1108, 1114–1123, 1174, 1176
openhands-sdk/openhands/sdk/llm/auth
   openai.py                             153    81    47%    80–81, 92–94, 99–100, 109–111, 217–220, 238–241, 244, 247, 249–250, 252–256, 261–266, 272–276, 282–283, 286–292, 298, 300–302, 304–306, 308–310, 312, 316–317, 320–321, 325, 328, 334–336, 339, 399, 403, 451–452, 456, 459–463, 466–467, 481
TOTAL                                    15194  4500  70%

- Use authlib for PKCE generation (generate_token, create_s256_code_challenge)
- Use aiohttp.web for OAuth callback server (cleaner than raw asyncio)
- Add Codex-specific parameters (store=false, instructions) via litellm_extra_body
- Add max_output_tokens=None for Codex (not supported by the API)
- Reduce code from ~575 lines to ~492 lines

This addresses the 'Instructions are not valid' error by properly
configuring the Codex API parameters.

Co-authored-by: openhands <[email protected]>
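
The "URL building" the unit tests cover can be sketched like this. Every endpoint, client ID, and scope below is a placeholder for illustration, not the real Codex value:

```python
import secrets
from urllib.parse import urlencode

def build_authorize_url(challenge: str, port: int) -> tuple[str, str]:
    """Assemble a PKCE authorize URL (placeholder endpoint and client_id)."""
    state = secrets.token_urlsafe(16)  # CSRF protection, checked on callback
    params = {
        "response_type": "code",
        "client_id": "<client-id>",                           # placeholder
        "redirect_uri": f"http://localhost:{port}/callback",
        "scope": "openid profile email",                      # placeholder scopes
        "code_challenge": challenge,
        "code_challenge_method": "S256",
        "state": state,
    }
    return "https://auth.example.com/oauth/authorize?" + urlencode(params), state
```

Returning the state alongside the URL lets the caller verify it against the state echoed back in the redirect before trusting the authorization code.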