Labels: enhancement (New feature or request)
Parent Epic
Part of #19 - Hybrid ADK + AI SDK Migration
Objective
Refactor the ADK OpenRouter LLM adapter to use AI SDK internally instead of raw fetch calls, while maintaining the ADK BaseLlm interface.
Tasks
- Refactor `openrouter-llm.ts` to use AI SDK `generateText()`
- Update message format conversion for AI SDK
- Add reasoning token tracking to `usageMetadata`
- Update `runner.ts` token extraction for the new format
- Test with all 5 agents (root, architect, reviewer, cursor, deep)
- Remove raw fetch and retry logic (handled by AI SDK)
Current State

```typescript
// Current: raw fetch with manual format conversion
async *generateContentAsync(llmRequest: LlmRequest): AsyncGenerator<LlmResponse> {
  const messages = geminiToOpenAI(llmRequest.contents, systemInstruction);
  const response = await fetch(`${this.baseUrl}/chat/completions`, {
    method: "POST",
    headers: { /* ... */ },
    body: JSON.stringify(requestBody),
  });
  // Manual retry logic, format conversion...
}
```

Target State

```typescript
// Target: AI SDK with native features
async *generateContentAsync(llmRequest: LlmRequest): AsyncGenerator<LlmResponse> {
  const { text, toolCalls, usage, reasoningText } = await generateText({
    model: this.openrouter(modelId),
    messages: this.convertToAISDKMessages(llmRequest.contents),
    tools: this.convertToAISDKTools(llmRequest.config?.tools),
    providerOptions: isThinkingModel
      ? { openrouter: { reasoning: { effort: "high" } } }
      : undefined,
  });
  yield this.convertToGeminiResponse(text, toolCalls, usage, reasoningText);
}
```

Key Changes
1. Replace fetch with AI SDK
- Remove the manual `geminiToOpenAI()` conversion
- Use AI SDK's native message format
- Remove manual retry logic (AI SDK handles it)
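The conversion helpers named in the target-state snippet (`convertToAISDKMessages()`, `convertToAISDKTools()`) do not exist yet; the sketch below shows one possible shape, using simplified stand-in types rather than the real ADK/AI SDK types. In real code, the AI SDK's `jsonSchema()` helper would wrap the raw JSON Schema tool parameters; plain objects are used here to keep the sketch dependency-free.

```typescript
// Sketch only: simplified stand-ins for the ADK and AI SDK types.
type GeminiContent = { role: "user" | "model"; parts: { text?: string }[] };
type AISDKMessage = { role: "system" | "user" | "assistant"; content: string };
type FunctionDeclaration = {
  name: string;
  description?: string;
  parameters?: Record<string, unknown>; // raw JSON Schema
};

function convertToAISDKMessages(
  contents: GeminiContent[],
  systemInstruction?: string,
): AISDKMessage[] {
  const messages: AISDKMessage[] = [];
  // The AI SDK takes the system prompt as a leading system message.
  if (systemInstruction) {
    messages.push({ role: "system", content: systemInstruction });
  }
  for (const c of contents) {
    messages.push({
      // Gemini's "model" role maps to the AI SDK's "assistant" role.
      role: c.role === "model" ? "assistant" : "user",
      content: c.parts.map((p) => p.text ?? "").join(""),
    });
  }
  return messages;
}

// Real code would wrap `parameters` with the AI SDK's jsonSchema() helper.
function convertToAISDKTools(declarations: FunctionDeclaration[] = []) {
  const tools: Record<string, { description: string; parameters: unknown }> = {};
  for (const d of declarations) {
    tools[d.name] = {
      description: d.description ?? "",
      parameters: d.parameters ?? { type: "object", properties: {} },
    };
  }
  return tools;
}
```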
2. Add Reasoning Token Support

```typescript
usageMetadata: {
  promptTokenCount: usage.promptTokens,
  candidatesTokenCount: usage.completionTokens,
  totalTokenCount: usage.totalTokens,
  reasoningTokenCount: usage.reasoningTokens, // NEW
}
```

3. Support :thinking Variant

```typescript
const isThinkingModel = modelId.includes(":thinking");

providerOptions: isThinkingModel
  ? { openrouter: { reasoning: { effort: "high" } } }
  : undefined,
```

Files to Modify
- `src/holons/adk/openrouter-llm.ts` - Main refactor
- `src/holons/adk/runner.ts` - Update token extraction
- `src/holons/adk/agents.ts` - Test model configurations
Bugs to Fix During Migration
| Bug | Location | Fix |
|---|---|---|
| Unstable `tool_call_id` | Line 101 | Use `crypto.randomUUID()` |
| Token format mismatch | Lines 264-276 | Support both Gemini and OpenAI formats |
| No streaming | Line 268 | Document as a future enhancement |
Acceptance Criteria
- All 5 ADK agents work with refactored adapter
- Tool calling functions correctly
- Token usage properly extracted from AI SDK response
- Reasoning tokens tracked for :thinking models
- No regression in existing functionality
Estimated Effort
2 days