Sc 12927/implement streaming support for ai answers result #94
base: master
Conversation
Walkthrough

This PR introduces a comprehensive AI Answers API client with streaming and non-streaming support, consolidates sentiment rating functionality, and adds a setting to enable streaming mode. The old ai-answers-interactions-api module is removed, with its functionality migrated into the new ai-answers-api module. The apifetch layer is refactored to delegate AI Answers requests to the new client.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant apifetch
    participant executeAiAnswersFetch
    participant Endpoint as AI Answers Endpoint
    participant Callback
    Client->>apifetch: POST with conversation settings
    apifetch->>executeAiAnswersFetch: delegate with useStreaming flag
    alt Streaming Mode (useStreaming = true)
        executeAiAnswersFetch->>Endpoint: GET streaming endpoint (SSE)
        loop Parse SSE Events
            Endpoint-->>executeAiAnswersFetch: data: {type, payload}
            executeAiAnswersFetch->>executeAiAnswersFetch: accumulate (conversation/token/sources)
            Note over executeAiAnswersFetch: Throttle tokens (500ms),<br/>immediate for sources/complete
            executeAiAnswersFetch->>Callback: invoke with partial AiAnswersResponse
        end
        Endpoint-->>executeAiAnswersFetch: [DONE] or stream end
        executeAiAnswersFetch->>Callback: final with is_streaming_complete=true
    else Non-Streaming Mode (useStreaming = false)
        executeAiAnswersFetch->>Endpoint: POST to non-streaming endpoint
        Endpoint-->>executeAiAnswersFetch: AiAnswersResponse
        executeAiAnswersFetch->>Callback: invoke with complete response
    end
    Note over executeAiAnswersFetch: Error handling: HTTP errors,<br/>parse failures, stream interruption
    Callback-->>Client: populated response or error
```
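The token-throttling behaviour described in the diagram (batch token events, flush to the callback at most once per 500 ms, but flush sources/complete events immediately) can be sketched as follows. This is an illustrative reconstruction, not the PR's actual code; the `SseEvent` and `ThrottledEmitter` names, and the injectable clock, are assumptions made for clarity and testability.

```typescript
// Illustrative sketch of the throttling described in the sequence diagram.
// Names (SseEvent, ThrottledEmitter) are hypothetical, not from the PR.

type SseEvent =
  | { type: 'token'; payload: string }
  | { type: 'sources'; payload: string[] }
  | { type: 'complete' };

class ThrottledEmitter {
  private buffer = '';
  private lastFlush = 0;

  constructor(
    private readonly onPartial: (answer: string) => void,
    private readonly intervalMs = 500,
    private readonly now: () => number = Date.now // injectable clock for tests
  ) {}

  handle(event: SseEvent): void {
    if (event.type === 'token') {
      this.buffer += event.payload;
      // Throttle: emit accumulated tokens only after the interval elapses.
      if (this.now() - this.lastFlush >= this.intervalMs) {
        this.flush();
      }
    } else {
      // sources / complete events bypass the throttle and flush immediately.
      this.flush();
    }
  }

  private flush(): void {
    this.lastFlush = this.now();
    this.onPartial(this.buffer);
  }
}
```

Throttling here trades UI update frequency for fewer callback invocations; the immediate flush on `sources`/`complete` matches the diagram's note that only token events are rate-limited.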
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- src/ai-answers-api.ts (1 hunks)
- src/ai-answers-interactions-api.ts (0 hunks)
- src/apifetch.ts (3 hunks)
- src/index.ts (2 hunks)
- src/settings.ts (2 hunks)
💤 Files with no reviewable changes (1)
- src/ai-answers-interactions-api.ts
🧰 Additional context used
🧬 Code graph analysis (2)
src/apifetch.ts (1)
- src/ai-answers-api.ts (1)
  - executeAiAnswersFetch (59-71)

src/ai-answers-api.ts (4)
- src/settings.ts (1)
  - Settings (35-72)
- src/apifetch.ts (1)
  - ApiFetchCallback (50-52)
- src/api.ts (2)
  - RESPONSE_SERVER_ERROR (44-44)
  - aiAnswersInteractionsInstance (41-41)
- src/index.ts (1)
  - putSentimentClick (142-147)
```typescript
fetch(streamingEndpoint, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    question: settings?.keyword,
    filter: settings?.aiAnswersFilterObject
  })
})
  .then(async (response) => {
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }

    const reader = response.body?.getReader();
    const decoder = new TextDecoder();

    if (!reader) {
      throw new Error('No response body reader available');
```
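Once the reader is obtained as in the excerpt above, an SSE stream is typically consumed by decoding chunks, splitting on newlines, and parsing `data: {...}` lines until the `[DONE]` sentinel. The sketch below shows that pattern; the `readSseStream` helper and its event shape (`{ type, payload }`, taken from the sequence diagram) are illustrative assumptions, not the PR's actual code.

```typescript
// Sketch of consuming an SSE body reader: buffer partial lines across
// chunks, parse "data: ..." events, stop on the "[DONE]" sentinel.
// Helper name and event shape are assumptions, not the PR's code.

async function readSseStream(
  reader: ReadableStreamDefaultReader<Uint8Array>,
  onEvent: (event: { type: string; payload?: unknown }) => void
): Promise<void> {
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done || !value) break;
    // stream: true keeps multi-byte characters split across chunks intact
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // retain the trailing partial line
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue;
      const data = line.slice('data: '.length).trim();
      if (data === '[DONE]') return; // end-of-stream sentinel
      try {
        onEvent(JSON.parse(data));
      } catch {
        // Skip malformed events rather than aborting the whole stream.
      }
    }
  }
}
```

Buffering the trailing partial line matters because a network chunk can end mid-event; without it, split JSON payloads would fail to parse.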
Streaming fetch bypasses request interceptors
Both the streaming and non‑streaming code paths call fetch directly. That bypasses the apiInstance (and its interceptor stack) that clients wire up through AddSearchClient.setApiRequestInterceptor. Today customers rely on that hook to inject private-key/authorization headers and other per-request mutations. With this change those headers are never applied, so AI Answers requests will start failing as soon as an interceptor is required. Please route these calls back through apiInstance (or otherwise execute the same interceptor pipeline before issuing the request) so configured interceptors continue to run.
Also applies to: 323-338
🤖 Prompt for AI Agents
In src/ai-answers-api.ts around lines 99 to 118 (and similarly around 323 to
338), the direct fetch calls bypass the apiInstance interceptor pipeline so
per-request interceptors (auth/private-key/etc.) are never applied; change the
code to route requests through apiInstance so the same interceptor stack is
executed before sending the request — for non-streaming use apiInstance.request
(or equivalent wrapper) with the same method, headers and JSON body; for
streaming where you currently call fetch and use response.body.getReader(), call
the apiInstance variant that returns a Response (or run the interceptor chain to
produce a finalized Request/RequestInit and then call fetch) so the
response/reader logic remains identical but interceptors are applied; update
both the streaming and non-streaming paths (lines noted) to use apiInstance and
preserve error/reader handling.
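One way to honour the review's suggestion is to run the registered request interceptors to produce the final `RequestInit`, then hand that to `fetch`, so the streaming reader logic stays unchanged while interceptors still apply. The sketch below is a minimal stand-in for whatever `AddSearchClient.setApiRequestInterceptor` actually registers; the pipeline shape is an assumption, not the library's real API.

```typescript
// Minimal sketch of an interceptor pipeline applied before a streaming
// fetch. The RequestInterceptor type and function names are hypothetical
// stand-ins for the client's real interceptor mechanism.

type RequestInterceptor = (init: RequestInit) => RequestInit;

const interceptors: RequestInterceptor[] = [];

export function setApiRequestInterceptor(interceptor: RequestInterceptor): void {
  interceptors.push(interceptor);
}

export function applyInterceptors(init: RequestInit): RequestInit {
  // Fold every registered interceptor over the request config, in order.
  return interceptors.reduce((acc, intercept) => intercept(acc), init);
}

// Streaming path: same fetch call as before, but interceptors run first,
// so auth/private-key headers injected by customers are preserved.
export function fetchWithInterceptors(url: string, init: RequestInit): Promise<Response> {
  return fetch(url, applyInterceptors(init));
}
```

Because `fetchWithInterceptors` still returns a plain `Response`, the existing `response.body?.getReader()` logic can stay exactly as written.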



Summary by CodeRabbit
Release Notes