Fix/resolution and prompting #501
base: main
Conversation
- Integrated Recharts for data visualization
- Added dataAnalysis tool with Zod schema for chart configuration
- Extended chat file uploads to support CSV and JSON files
- Updated researcher agent with guidelines for data visualization
- Integrated MapQueryHandler for geospatial data in analysis results
- Ensured real-time UI streaming for chart components
Co-authored-by: ngoiyaeric <[email protected]>
…358342468692938721
- Added explicit examples for dataAnalysis tool in researcher system prompt
- Clarified triggering queries in PR response
- Verified consistent behavior across search and file-based data sources
Co-authored-by: ngoiyaeric <[email protected]>
- Fixed "Invalid value" runtime error by splitting GraphSection into static and streamed components
- Improved detection of StreamableValue vs static DataAnalysisResult
- Added graceful fallbacks for missing title, chartType, or plotData
- Moved error messages out of ResponsiveContainer to fix layout issues
- Verified fix with static, stringified, and broken data cases
Co-authored-by: ngoiyaeric <[email protected]>
Walkthrough
This PR introduces a data analysis visualization feature with streaming support. It adds a new GraphSection component, dataAnalysisTool, and supporting types/schemas. The main submit action workflow is refactored to use server authentication and streaming, with researcher/writer agents updated to prioritize visual data representation.
Changes
Sequence Diagram
sequenceDiagram
participant Client
participant Server as Server (app/actions)
participant Researcher as Researcher Agent
participant DataAnalysisTool
participant UIStream
participant GraphSection
Client->>Server: submit(input)
Server->>Server: Authenticate user & get system prompt
Server->>Server: Trim message history
Server->>Researcher: Invoke with systemPrompt & streaming
Researcher->>Researcher: Analyze request
alt Visualization needed
Researcher->>DataAnalysisTool: Call with chart parameters
DataAnalysisTool->>DataAnalysisTool: Create streamable value
DataAnalysisTool->>UIStream: Append GraphSection component
DataAnalysisTool->>GraphSection: Stream DataAnalysisResult
GraphSection->>GraphSection: Render chart (bar/line/pie/etc)
GraphSection->>UIStream: Display chart
end
Researcher->>UIStream: Stream answer content
Server->>UIStream: Accumulate full answer
Server->>UIStream: Finalize response section with BotMessage
Server->>Server: Push assistant response to AI state
Server-->>Client: Return updated AI state & streamed UI
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested labels
Poem
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
lib/agents/writer.tsx (1)
34-39: ⚠️ Potential issue | 🟠 Major
Dynamic prompt should not drop baseline safety/formatting constraints.
Right now, providing dynamicSystemPrompt completely replaces default_system_prompt, which can remove critical guardrails (including the new “no text charts” rule and citation/format constraints). Consider composing the dynamic prompt with a fixed baseline instead of replacing it.
💡 Proposed fix (compose prompts)
- const systemToUse = dynamicSystemPrompt && dynamicSystemPrompt.trim() !== '' ? dynamicSystemPrompt : default_system_prompt;
+ const normalizedDynamic = dynamicSystemPrompt?.trim()
+ const systemToUse =
+   normalizedDynamic && normalizedDynamic !== ''
+     ? `${default_system_prompt}\n\n${normalizedDynamic}`
+     : default_system_prompt
lib/agents/resolution-search.tsx (1)
66-72: ⚠️ Potential issue | 🔴 Critical
Fix model incompatibility with streamObject structured output.
getModel(hasImage) can return X.AI's grok-4-fast-non-reasoning (both as the explicit "Grok 4.2" selection and as the default fallback). However, streamObject() with a schema parameter requires JSON Schema structured output, which X.AI's API rejects. This will cause runtime failures whenever X.AI is selected.
Ensure getModel() returns only compatible models for structured-output tasks (e.g., OpenAI's gpt-4o, Google Gemini, or Bedrock), or add logic to bypass structured output for X.AI models.
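As a rough illustration of the second option, a minimal sketch of a selection guard for structured-output callers follows. The helper name pickStructuredOutputModel and the exact model ids are assumptions for illustration, not the project's real getModel API.

// Hypothetical sketch: keep streamObject-style calls on models that accept
// JSON Schema structured output, falling back when an incompatible model
// (such as X.AI's grok-4-fast-non-reasoning) is selected.
type ModelChoice = 'gpt-4o' | 'gemini-1.5-pro' | 'grok-4-fast-non-reasoning'

const SUPPORTS_JSON_SCHEMA: Record<ModelChoice, boolean> = {
  'gpt-4o': true,
  'gemini-1.5-pro': true,
  'grok-4-fast-non-reasoning': false
}

export function pickStructuredOutputModel(selected: ModelChoice): ModelChoice {
  // Fall back to a schema-capable default rather than letting the call fail at runtime.
  return SUPPORTS_JSON_SCHEMA[selected] ? selected : 'gpt-4o'
}

resolution-search.tsx could route its streamObject call through a guard like this while leaving getModel(hasImage) untouched for plain text generation.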
🤖 Fix all issues with AI agents
In `@app/actions.tsx`:
- Around line 334-336: Remove the unused variables and dead assignment: delete
the declarations of hasError and finalAnswer in app/actions.tsx and remove any
code that sets hasError = true; stop creating finalAnswer as an alias of
fullResponse and use fullResponse directly (update any return/usage sites that
reference finalAnswer to reference fullResponse). In short: keep fullResponse,
remove finalAnswer and hasError, and replace any finalAnswer reads with
fullResponse and remove the hasError assignments in the error handling code.
- Around line 384-391: The catch block in processEvents calls
answerStream.error(error) but never finalizes the AI state; ensure aiState is
always finalized on error by calling aiState.done(false) (or equivalent) when an
exception occurs—add aiState.done(false) in the catch block before/after
answerStream.error(error) (and/or include a safe aiState.done(false) in the
finally along with isGenerating.done(false) and uiStream.done() to guarantee
aiState is always closed); reference aiState, answerStream, isGenerating,
uiStream and processEvents when applying the change.
- Around line 422-427: Remove the unused saveChat import and either restore or
explicitly document the removal of automatic chat persistence: delete the import
of saveChat and any dead references; if persistence should remain, reintroduce
the onSetAIState handler that calls saveChat with the AI state (referencing
createAI/AI and onSetAIState), otherwise explicitly export or implement a
clearChat action (ensure clearChat is defined and exported from this file and
referenced in the AI.actions map or elsewhere) so users can clear history;
confirm which behavior (no persistence vs. persistence + clear) is intended and
make the corresponding change to AI.actions (submit/clearChat) and exports.
- Around line 312-320: The call to researcher currently hardcodes the map
provider to 'mapbox'; extract mapProvider from the incoming FormData in the same
place you read action and drawnFeaturesString (variable name mapProvider), and
pass that variable into the researcher call instead of the literal 'mapbox' so
the researcher(mapProvider, ...) parameter receives the user-selected provider
(researcher signature referenced at lib/agents/researcher.tsx:102; update the
call in app/actions.tsx that constructs result to use mapProvider).
- Around line 307-333: The current user input is appended to aiState.messages
only after calling researcher, so the messages passed into researcher (the
messages variable) lack the current user query; before invoking researcher(...)
at the call site, create a new messages array that spreads the existing messages
and includes the new user message object (use nanoid(), role: 'user', content:
userInput, type: 'input') and pass that updated array into researcher, and also
update aiState via aiState.update(...) with the same appended message to keep
state consistent.
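For the map-provider item above, a minimal sketch of the extraction step. readMapProvider is a hypothetical helper name; the 'mapProvider' form field matches the one the previous code already read from FormData.

export type MapProvider = 'mapbox' | 'google'

// Derive the provider from the submitted form, defaulting to 'mapbox' when
// the field is missing or holds an unexpected value.
export function readMapProvider(formData?: FormData): MapProvider {
  return formData?.get('mapProvider') === 'google' ? 'google' : 'mapbox'
}

The call site would then pass readMapProvider(formData) as the fifth argument to researcher(...) instead of the literal 'mapbox'.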
In `@components/graph-section.tsx`:
- Line 29: Replace the hardcoded COLORS array in GraphSection with theme-aware
colors by reading CSS variables at runtime: implement a utility like
getCSSColor(variable, fallback) that uses
getComputedStyle(document.documentElement).getPropertyValue and normalizes
returned values to hex (handling hex, rgb(a), or hsl) and then build COLORS via
useMemo inside the GraphSection component (or a small hook) to return
[getCSSColor('--primary', '#0088FE'), getCSSColor('--secondary', '#00C49F'),
getCSSColor('--accent', '#FFBB28'), getCSSColor('--muted', '#FF8042'), ...];
ensure server-side rendering safety by returning fallbacks when window is
undefined and re-compute on theme changes.
- Around line 137-158: The pie chart breaks when config.series is missing
because dataKey is undefined; update the validation near the existing guard that
checks plotData/config (refer to plotData and config in the component) to also
require config.series?.length > 0 (or a valid series[0].key) and return an
appropriate fallback (null/placeholder) before rendering the Pie block, and/or
ensure the Pie's dataKey prop (used on the Pie component) falls back to a valid
key (e.g., config.xAxisKey or the first numeric key from plotData) so <Pie
dataKey={...}> is never passed undefined; adjust references to config.series,
Pie, dataKey, and ResponsiveContainer accordingly.
- Around line 35-51: The current GraphSection uses fragile duck-typing by
checking for 'data' and other common keys to detect a DataAnalysisResult; update
the type-guard in GraphSection to positively identify DataAnalysisResult by
checking only for the distinctive chartType property (e.g., change the isStatic
check to test `'chartType' in result` and remove the 'data'/'title' checks), and
ensure downstream branches still pass the value to GraphCard or
StreamedGraphSection unchanged (referencing GraphSection, GraphCard,
StreamedGraphSection, and the result param).
- Around line 53-68: StreamedGraphSection currently ignores the error value
returned by useStreamableValue; update StreamedGraphSection to check the
destructured error and render an error state (e.g., an error message/card inside
the same Section or return a dedicated ErrorCard) when error is truthy instead
of falling through to GraphCard with null data; reference the useStreamableValue
call and GraphCard render so the component returns a clear error UI when error
is present and only renders GraphCard when data is valid and no error exists.
- Around line 159-173: The scatter case currently renders identical points
because each <Scatter> uses data={plotData}; change the mapping so each series
uses its own dataKey or per-series dataset: inside the map over config.series,
either pass dataKey={s.key} (and ensure XAxis uses dataKey={config.xAxisKey}) or
compute seriesData = plotData.map(r => ({ x: r[config.xAxisKey], y: r[s.key] }))
and pass data={seriesData} with XAxis/YAxis configured to use "x"/"y";
alternatively, if config.series length === 1, render a single <Scatter> instead
of mapping. Update the <Scatter> elements (symbol: Scatter, config.series,
plotData, config.xAxisKey) accordingly.
- Line 70: GraphCard declares an unused optional prop pending; remove pending
from the component props to avoid dead/ confusing API surface. Edit the
GraphCard function signature (remove pending from the destructured props and its
type annotation), delete any internal references to pending inside GraphCard,
and update any related PropTypes/TS types if present; no changes are needed in
callers (GraphSection, StreamedGraphSection) since they never pass pending.
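Two small sketches to make the graph-section items above concrete. First, a narrowed type guard keyed on the distinctive chartType field; the prop union mirrors what GraphSection accepts today, and the guard name is illustrative only.

import type { DataAnalysisResult } from '@/lib/types'
import type { StreamableValue } from 'ai/rsc'

// A value is treated as a static result only when it carries chartType,
// which StreamableValue wrappers and raw JSON strings never do.
function isStaticResult(
  result: DataAnalysisResult | StreamableValue<DataAnalysisResult> | string
): result is DataAnalysisResult {
  return typeof result === 'object' && result !== null && 'chartType' in result
}

Second, a sketch of the per-series scatter fix, assuming the config/plotData shapes implied by dataAnalysisSchema and the existing COLORS array; exact prop names may differ in the real component.

{config.series.map((s, i) => {
  // Build a dedicated x/y dataset per series instead of reusing plotData for every <Scatter>.
  const seriesData = plotData.map(row => ({
    x: row[config.xAxisKey],
    y: row[s.key]
  }))
  return (
    <Scatter
      key={s.key}
      name={s.key}
      data={seriesData}
      fill={COLORS[i % COLORS.length]}
    />
  )
})}

With this shape, XAxis would use dataKey="x" and YAxis dataKey="y", and the guard suggested for the pie branch (require config.series?.length > 0 or fall back to the first numeric key) applies here as well.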
In `@lib/agents/index.tsx`:
- Around line 7-11: The DrawnFeature interface is too loose: change
DrawnFeature.type from string to the specific union 'Polygon' | 'LineString' and
replace geometry:any with a minimal GeoJSON geometry shape (e.g., an object with
type:string and coordinates:unknown or a Geometry interface) to tighten typings,
and move this interface out of the barrel file and co-locate it with other
shared types (e.g., next to DataAnalysisResult) so consumers import a dedicated
types module instead of defining it in the re-export file.
In `@lib/agents/tools/data-analysis.tsx`:
- Around line 10-15: The current execute implementation creates a
StreamableValue via createStreamableValue<DataAnalysisResult>(), immediately
calls streamResults.done(result) and then renders <GraphSection
result={streamResults.value} />, which wastes a streaming abstraction; instead,
pass the result directly to the UI. Replace the
createStreamableValue/streamResults usage in the execute method by calling
uiStream.append(<GraphSection result={result} />) (keep types:
DataAnalysisResult) and remove the unused streamResults/done flow so
GraphSection receives the static object synchronously.
- Around line 7-18: The dataAnalysisTool.execute already appends a GraphSection
to the uiStream, so remove the duplicate append in the processEvents handler
that reacts to a dataAnalysis tool-call (the code path that currently
creates/attaches GraphSection for tool "dataAnalysis"); keep any other logic in
processEvents for event handling but delete the GraphSection/uiStream.append
call there so only dataAnalysisTool.execute renders the chart.
In `@lib/schema/data-analysis.tsx`:
- Line 25: PartialDataAnalysis currently applies DeepPartial to the Zod schema
object rather than the inferred data shape; change its definition to apply
DeepPartial to the inferred type from dataAnalysisSchema (i.e., use
z.infer<typeof dataAnalysisSchema>), and ensure z is imported if not already;
update the type alias PartialDataAnalysis accordingly so it becomes
DeepPartial<z.infer<typeof dataAnalysisSchema>>.
- Line 8: The DataAnalysisResult type's data property is currently any[] but
must match the schema's stricter shape; update the DataAnalysisResult
interface/type so its data field is typed as Record<string, any>[] (or
equivalent zod-inferred shape) instead of any[], ensuring consistency with the
schema's data: z.array(z.record(z.any())). Reference the DataAnalysisResult type
and the data property when making this change.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (2)
bun.lock is excluded by !**/*.lock
verification/fix_verification.png is excluded by !**/*.png
📒 Files selected for processing (13)
app/actions.tsx
components/chat-panel.tsx
components/graph-section.tsx
components/header-search-button.tsx
lib/agents/index.tsx
lib/agents/researcher.tsx
lib/agents/resolution-search.tsx
lib/agents/tools/data-analysis.tsx
lib/agents/tools/index.tsx
lib/agents/writer.tsx
lib/schema/data-analysis.tsx
lib/types/index.ts
package.json
💤 Files with no reviewable changes (1)
- components/header-search-button.tsx
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-31T16:53:16.137Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-31T16:53:16.137Z
Learning: The GraphSection component in components/graph-section.tsx should use theme-aware colors that match the site's current theme (light, dark, or earth) by reading CSS variables like --primary, --secondary, --accent, and --muted, and converting them to hex colors for Recharts compatibility.
Applied to files:
components/graph-section.tsx
app/actions.tsx
📚 Learning: 2026-01-13T13:26:30.086Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.
Applied to files:
lib/agents/resolution-search.tsx
🧬 Code graph analysis (2)
lib/agents/index.tsx (4)
components/map/mapbox-map.tsx (3)
feature (79-165)
map (67-168)
feature (301-303)
components/map/google-map.tsx (2)
features (38-52)
df (39-46)
components/map/map-data-context.tsx (1)
MapData (15-32)
lib/agents/tools/geospatial.tsx (1)
Location (18-23)
lib/agents/tools/data-analysis.tsx (4)
lib/agents/tools/index.tsx (1)
ToolProps (10-14)
lib/schema/data-analysis.tsx (1)
dataAnalysisSchema (4-23)
lib/types/index.ts (1)
DataAnalysisResult (28-47)
components/graph-section.tsx (1)
GraphSection (35-51)
🔇 Additional comments (11)
lib/agents/writer.tsx (1)
7-12: No call sites for writer() were found in the codebase. The function is imported in app/actions.tsx but never invoked, so the signature change does not require updates elsewhere. Consider removing the unused import if the function is not intended to be used.
Likely an incorrect or invalid review comment.
components/chat-panel.tsx (1)
218-218: LGTM — MIME type expansion aligns with data analysis features.
The addition of text/csv and application/json is consistent with the new data analysis tooling. Note that non-image files are only transmitted via FormData (lines 114-116) and won't appear in the UserMessage content preview — this is a pre-existing limitation, not introduced here.
lib/agents/resolution-search.tsx (1)
4-4: LGTM — Import updated to use shared type.
The type-only import from the new shared location is correct and consistent with the DrawnFeature extraction.
lib/agents/researcher.tsx (3)
15-15: LGTM — Import updated to shared type location.
35-36: Good addition of the no-text-charts directive.
Clear instruction to the model. The backtick escaping around `dataAnalysis` inside the String.raw tagged template is correct and will render as inline code in the prompt.
51-88: Tool is properly registered and integrated. The dataAnalysis tool is correctly exported from lib/agents/tools/data-analysis.tsx, imported in lib/agents/tools/index.tsx (line 6), and registered in the getTools function (lines 30–33) with the required parameters. The prompt update is fully functional.
lib/types/index.ts (2)
28-47: New DataAnalysisResult type is well-structured.
The type aligns with the Zod schema in lib/schema/data-analysis.tsx. One concern: data: any[] should be Record<string, any>[] to match the schema's z.array(z.record(z.any())) — see my comment on the schema file.
80-98: AIMessage additions look good.
The optional createdAt and 'drawing_context' type are backward-compatible additions that align with the new features in this PR.
package.json (1)
89-89: No action needed. The recharts@^3.7.0 dependency is valid — version 3.7.0 is published on npm as the latest stable release and is properly installable.
lib/agents/tools/index.tsx (1)
30-33: New tool wiring looks correct.
dataAnalysisTool is properly imported and instantiated with the required uiStream. Minor note: fullResponse is passed but unused inside dataAnalysisTool — harmless, but could be omitted for clarity (the tool only destructures uiStream).
app/actions.tsx (1)
344-352: Duplicate GraphSection rendering for dataAnalysis tool calls.
As noted in the data-analysis.tsx review, the tool's own execute function already appends a GraphSection to uiStream. This handler appends a second one, causing duplicate charts. Remove this block.
🐛 Proposed fix
  } else if (event.type === 'tool-call') {
-   if (event.toolName === 'dataAnalysis') {
-     uiStream.append(
-       <Section title="Analysis">
-         <GraphSection result={event.args as any} />
-       </Section>
-     )
-   }
  }
[raise_critical_issue, duplicate_comment]
const userId = await getCurrentUserIdOnServer()
const systemPrompt = userId ? await getSystemPrompt(userId) : null

const answerStream = createStreamableValue<string>('')

const result = await researcher(
  systemPrompt || '',
  uiStream,
  answerStream,
  messages,
  'mapbox',
  false,
  drawnFeatures
)

aiState.update({
  ...aiState.get(),
  messages: [
    ...aiState.get().messages,
    {
      id: nanoid(),
      role: 'user',
      content: userInput,
      type: 'input'
    }
}
}

const hasImage = messageParts.some(part => part.type === 'image')
const content: CoreMessage['content'] = hasImage
  ? messageParts as CoreMessage['content']
  : messageParts.map(part => part.text).join('\n')

const type = skip
  ? undefined
  : formData?.has('input') || formData?.has('file')
  ? 'input'
  : formData?.has('related_query')
  ? 'input_related'
  : 'inquiry'

if (content) {
  aiState.update({
    ...aiState.get(),
    messages: [
      ...aiState.get().messages,
      {
        id: nanoid(),
        role: 'user',
        content,
        type
      }
    ]
  })
  messages.push({
    role: 'user',
    content
  } as CoreMessage)
}

const userId = 'anonymous'
const currentSystemPrompt = (await getSystemPrompt(userId)) || ''
const mapProvider = formData?.get('mapProvider') as 'mapbox' | 'google'
  ]
})
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the researcher function definition
rg -n "function researcher|const researcher|export.*researcher" -t ts -t tsx --max-count=10
# Also search for any async researcher definitions
rg -n "researcher\s*=" -t ts -t tsx -A 3 | head -50Repository: QueueLab/QCX
Length of output: 111
🏁 Script executed:
#!/bin/bash
# Look for the researcher function implementation in the file
fd . --type f -name "*.ts" -o -name "*.tsx" | xargs rg -l "researcher" | head -20
Repository: QueueLab/QCX
Length of output: 425
🏁 Script executed:
#!/bin/bash
# Check app/actions.tsx for the researcher function and its implementation
cat -n app/actions.tsx | grep -A 30 "function researcher\|const researcher\|export.*researcher" | head -100
Repository: QueueLab/QCX
Length of output: 38
🏁 Script executed:
#!/bin/bash
# Get line count of researcher.tsx
wc -l lib/agents/researcher.tsx
Repository: QueueLab/QCX
Length of output: 85
🏁 Script executed:
#!/bin/bash
# Read the researcher.tsx file - first 100 lines to see function signature
head -100 lib/agents/researcher.tsx
Repository: QueueLab/QCX
Length of output: 5584
🏁 Script executed:
#!/bin/bash
# Read from line 92 to end of file
tail -n +92 lib/agents/researcher.tsx
Repository: QueueLab/QCX
Length of output: 2812
🏁 Script executed:
#!/bin/bash
# Check app/actions.tsx around line 87 to verify resolution_search branch
sed -n '75,100p' app/actions.tsx
Repository: QueueLab/QCX
Length of output: 940
🏁 Script executed:
#!/bin/bash
# Double-check: verify researcher is called on line 312 in app/actions.tsx
sed -n '307,320p' app/actions.tsx
Repository: QueueLab/QCX
Length of output: 385
Push user message to messages array before calling researcher.
The messages array passed to researcher() on line 312 does not include the current user input. messages is built from prior AI state (line 208), but userInput is never added before the researcher call. The AI state update on line 322 adds the user message after the LLM has already processed the request, leaving the model without the user's current query.
The resolution_search branch (line 86) correctly adds the user message to messages before processing.
Proposed fix
const userId = await getCurrentUserIdOnServer()
const systemPrompt = userId ? await getSystemPrompt(userId) : null
const answerStream = createStreamableValue<string>('')
+ messages.push({ role: 'user', content: userInput })
+
const result = await researcher(
systemPrompt || '',
uiStream,
answerStream,
messages,
'mapbox',
false,
drawnFeatures
)
aiState.update({
...aiState.get(),
messages: [
...aiState.get().messages,
{
id: nanoid(),
role: 'user',
content: userInput,
type: 'input'
}
]
})
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const userId = await getCurrentUserIdOnServer()
const systemPrompt = userId ? await getSystemPrompt(userId) : null
const answerStream = createStreamableValue<string>('')
messages.push({ role: 'user', content: userInput })
const result = await researcher(
  systemPrompt || '',
  uiStream,
  answerStream,
  messages,
  'mapbox',
  false,
  drawnFeatures
)
aiState.update({
  ...aiState.get(),
  messages: [
    ...aiState.get().messages,
    {
      id: nanoid(),
      role: 'user',
      content: userInput,
      type: 'input'
    }
  ]
})
🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 307 - 333, The current user input is appended
to aiState.messages only after calling researcher, so the messages passed into
researcher (the messages variable) lack the current user query; before invoking
researcher(...) at the call site, create a new messages array that spreads the
existing messages and includes the new user message object (use nanoid(), role:
'user', content: userInput, type: 'input') and pass that updated array into
researcher, and also update aiState via aiState.update(...) with the same
appended message to keep state consistent.
let finalAnswer = ''
let fullResponse = ''
let hasError = false
🧹 Nitpick | 🔵 Trivial
hasError is set but never read; finalAnswer is redundant.
hasError (line 336) is assigned true on error (line 386) but never consumed. finalAnswer (line 334) is just assigned from fullResponse (line 355) — it serves no purpose beyond aliasing.
♻️ Proposed cleanup
- let finalAnswer = ''
let fullResponse = ''
- let hasError = false
...
- finalAnswer = fullResponse
- answerStream.done(finalAnswer)
+ answerStream.done(fullResponse)
...
- content: finalAnswer,
+ content: fullResponse,
📝 Committable suggestion
let fullResponse = ''
🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 334 - 336, Remove the unused variables and dead
assignment: delete the declarations of hasError and finalAnswer in
app/actions.tsx and remove any code that sets hasError = true; stop creating
finalAnswer as an alias of fullResponse and use fullResponse directly (update
any return/usage sites that reference finalAnswer to reference fullResponse). In
short: keep fullResponse, remove finalAnswer and hasError, and replace any
finalAnswer reads with fullResponse and remove the hasError assignments in the
error handling code.
} catch (error) {
  console.error('Error in processEvents:', error)
  hasError = true
  answerStream.error(error)
} finally {
  isGenerating.done(false)
  uiStream.done()
}
Error in processEvents does not finalize AI state.
When the catch block runs, answerStream.error(error) is called, but aiState.done(...) is never invoked. In the RSC model, failing to finalize aiState can leave the conversation in a broken state where subsequent actions fail or hang.
🐛 Proposed fix — finalize AI state on error
} catch (error) {
console.error('Error in processEvents:', error)
- hasError = true
answerStream.error(error)
+ aiState.done({
+ ...aiState.get(),
+ messages: aiState.get().messages
+ })
} finally {
isGenerating.done(false)
uiStream.done()
}
📝 Committable suggestion
} catch (error) {
  console.error('Error in processEvents:', error)
  answerStream.error(error)
  aiState.done({
    ...aiState.get(),
    messages: aiState.get().messages
  })
} finally {
  isGenerating.done(false)
  uiStream.done()
}
🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 384 - 391, The catch block in processEvents
calls answerStream.error(error) but never finalizes the AI state; ensure aiState
is always finalized on error by calling aiState.done(false) (or equivalent) when
an exception occurs—add aiState.done(false) in the catch block before/after
answerStream.error(error) (and/or include a safe aiState.done(false) in the
finally along with isGenerating.done(false) and uiStream.done() to guarantee
aiState is always closed); reference aiState, answerStream, isGenerating,
uiStream and processEvents when applying the change.
export const AI = createAI<AIState, UIState>({
  actions: {
    submit,
    clearChat
    submit
  },
  initialUIState,
  initialAIState,
  onGetUIState: async () => {
    'use server'

    const aiState = getAIState() as AIState
    if (aiState) {
      const uiState = getUIStateFromAIState(aiState)
      return uiState
    }
    return initialUIState
  },
  onSetAIState: async ({ state }) => {
    'use server'

    if (!state.messages.some(e => e.type === 'response')) {
      return
    }

    const { chatId, messages } = state
    const createdAt = new Date()
    const path = `/search/${chatId}`

    let title = 'Untitled Chat'
    if (messages.length > 0) {
      const firstMessageContent = messages[0].content
      if (typeof firstMessageContent === 'string') {
        try {
          const parsedContent = JSON.parse(firstMessageContent)
          title = parsedContent.input?.substring(0, 100) || 'Untitled Chat'
        } catch (e) {
          title = firstMessageContent.substring(0, 100)
        }
      } else if (Array.isArray(firstMessageContent)) {
        const textPart = (
          firstMessageContent as { type: string; text?: string }[]
        ).find(p => p.type === 'text')
        title =
          textPart && textPart.text
            ? textPart.text.substring(0, 100)
            : 'Image Message'
      }
    }

    const updatedMessages: AIMessage[] = [
      ...messages,
      {
        id: nanoid(),
        role: 'assistant',
        content: `end`,
        type: 'end'
      }
    ]

    const { getCurrentUserIdOnServer } = await import(
      '@/lib/auth/get-current-user'
    )
    const actualUserId = await getCurrentUserIdOnServer()

    if (!actualUserId) {
      console.error('onSetAIState: User not authenticated. Chat not saved.')
      return
    }

    const chat: Chat = {
      id: chatId,
      createdAt,
      userId: actualUserId,
      path,
      title,
      messages: updatedMessages
    }
    await saveChat(chat, actualUserId)
  }
  initialUIState: [],
  initialAIState: { chatId: nanoid(), messages: [] }
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if saveChat is still used anywhere in app/actions.tsx
echo "=== Checking saveChat usage in app/actions.tsx ==="
rg -n 'saveChat' app/actions.tsx
# Check if clearChat is referenced elsewhere in the codebase
echo -e "\n=== Checking clearChat references in codebase ==="
rg -n 'clearChat' --type=ts --type=tsx
# Also check what's exported from app/actions.tsx to see if clearChat was previously exported
echo -e "\n=== Checking current exports from app/actions.tsx ==="
rg -n 'export' app/actions.tsx | head -20
Repository: QueueLab/QCX
Length of output: 430
🏁 Script executed:
#!/bin/bash
# Check the full content of app/actions.tsx to see if saveChat is actually used
echo "=== Full content of app/actions.tsx ==="
cat -n app/actions.tsx
# Also check if there are any server actions or other functions that might use saveChat
echo -e "\n=== Checking for any 'save' or 'persist' patterns ==="
rg -n 'save|persist' app/actions.tsx
Repository: QueueLab/QCX
Length of output: 15722
Remove unused saveChat import and confirm chat persistence removal is intentional.
The saveChat import on line 17 is unused throughout the file, confirming that automatic chat persistence via onSetAIState has been removed. Additionally, clearChat is not exported or referenced anywhere in the codebase, so users have no way to clear chat history. Verify this is intentional before merging.
🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 422 - 427, Remove the unused saveChat import
and either restore or explicitly document the removal of automatic chat
persistence: delete the import of saveChat and any dead references; if
persistence should remain, reintroduce the onSetAIState handler that calls
saveChat with the AI state (referencing createAI/AI and onSetAIState), otherwise
explicitly export or implement a clearChat action (ensure clearChat is defined
and exported from this file and referenced in the AI.actions map or elsewhere)
so users can clear history; confirm which behavior (no persistence vs.
persistence + clear) is intended and make the corresponding change to AI.actions
(submit/clearChat) and exports.
import { DataAnalysisResult } from '@/lib/types'
import { StreamableValue, useStreamableValue } from 'ai/rsc'

const COLORS = ['#0088FE', '#00C49F', '#FFBB28', '#FF8042', '#8884d8', '#82ca9d']
🛠️ Refactor suggestion | 🟠 Major
Hardcoded colors won't adapt to the site's theme.
The COLORS array uses fixed hex values. Per project requirements, these should be derived from CSS variables (--primary, --secondary, --accent, --muted) to match the active theme (light/dark/earth). Recharts requires hex colors, so the CSS variable values need to be read at runtime and converted.
♻️ Suggested approach
// Example utility to read a CSS variable and return a hex color string
function getCSSColor(variable: string, fallback: string): string {
if (typeof window === 'undefined') return fallback;
const value = getComputedStyle(document.documentElement).getPropertyValue(variable).trim();
// Convert HSL to hex if needed, or return the raw value
return value || fallback;
}
// Then in GraphCard (or via a hook), build COLORS dynamically:
const themeColors = useMemo(() => [
getCSSColor('--primary', '#0088FE'),
getCSSColor('--secondary', '#00C49F'),
getCSSColor('--accent', '#FFBB28'),
getCSSColor('--muted', '#FF8042'),
// ...additional fallbacks
], [/* re-run on theme change */]);
Based on learnings: "The GraphSection component in components/graph-section.tsx should use theme-aware colors that match the site's current theme (light, dark, or earth) by reading CSS variables like --primary, --secondary, --accent, and --muted, and converting them to hex colors for Recharts compatibility."
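Building on the suggested approach above, a sketch of the normalization step it leaves open. It assumes shadcn-style themes that store the variables as bare HSL triples such as "222.2 47.4% 11.2%"; if the project stores full hsl() or hex values instead, the parsing would need adjusting.

// Hypothetical helper: convert a bare "H S% L%" CSS variable value to hex for Recharts.
function hslTripleToHex(value: string, fallback: string): string {
  const match = value.trim().match(/^([\d.]+)\s+([\d.]+)%\s+([\d.]+)%$/)
  if (!match) return value.startsWith('#') ? value : fallback

  const h = Number(match[1])
  const s = Number(match[2]) / 100
  const l = Number(match[3]) / 100

  // Standard HSL -> RGB conversion.
  const k = (n: number) => (n + h / 30) % 12
  const a = s * Math.min(l, 1 - l)
  const channel = (n: number) =>
    Math.round(
      255 * (l - a * Math.max(-1, Math.min(k(n) - 3, Math.min(9 - k(n), 1))))
    )
  const toHex = (n: number) => channel(n).toString(16).padStart(2, '0')

  return `#${toHex(0)}${toHex(8)}${toHex(4)}`
}

getCSSColor could then return hslTripleToHex(value, fallback) rather than the raw variable value.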
🤖 Prompt for AI Agents
In `@components/graph-section.tsx` at line 29, Replace the hardcoded COLORS array
in GraphSection with theme-aware colors by reading CSS variables at runtime:
implement a utility like getCSSColor(variable, fallback) that uses
getComputedStyle(document.documentElement).getPropertyValue and normalizes
returned values to hex (handling hex, rgb(a), or hsl) and then build COLORS via
useMemo inside the GraphSection component (or a small hook) to return
[getCSSColor('--primary', '#0088FE'), getCSSColor('--secondary', '#00C49F'),
getCSSColor('--accent', '#FFBB28'), getCSSColor('--muted', '#FF8042'), ...];
ensure server-side rendering safety by returning fallbacks when window is
undefined and re-compute on theme changes.
lib/agents/index.tsx
Outdated
export interface DrawnFeature {
  type: string;
  measurement: string;
  geometry: any;
}
🧹 Nitpick | 🔵 Trivial
Consider tightening the DrawnFeature type and co-locating it properly.
A few observations:
- type: string — The source data in MapData (map-data-context.tsx) uses 'Polygon' | 'LineString'. Using string here loses that constraint. Consider using the union type for consistency.
- geometry: any — This is loose. If the shape is always GeoJSON geometry, even a minimal { type: string; coordinates: unknown } would be safer.
- Barrel file as type host — Defining new interfaces in a barrel/re-export file is unconventional. A dedicated types file (e.g., lib/types/index.ts, where DataAnalysisResult already lives) might be a better home.
Suggested tighter type
export interface DrawnFeature {
- type: string;
+ type: 'Polygon' | 'LineString';
measurement: string;
- geometry: any;
+ geometry: { type: string; coordinates: unknown };
}🤖 Prompt for AI Agents
In `@lib/agents/index.tsx` around lines 7 - 11, The DrawnFeature interface is too
loose: change DrawnFeature.type from string to the specific union 'Polygon' |
'LineString' and replace geometry:any with a minimal GeoJSON geometry shape
(e.g., an object with type:string and coordinates:unknown or a Geometry
interface) to tighten typings, and move this interface out of the barrel file
and co-locate it with other shared types (e.g., next to DataAnalysisResult) so
consumers import a dedicated types module instead of defining it in the
re-export file.
export const dataAnalysisTool = ({ uiStream }: ToolProps) => ({
  description: 'Analyze data and generate a structured representation for visualization in a graph or chart. Use this tool when the user asks for a chart, graph, or data visualization, or when you have structured data (like from a CSV or search results) that would be better understood visually.',
  parameters: dataAnalysisSchema,
  execute: async (result: DataAnalysisResult) => {
    const streamResults = createStreamableValue<DataAnalysisResult>()

    uiStream.append(<GraphSection result={streamResults.value} />)

    streamResults.done(result)

    return result
  }
Double rendering: this tool AND processEvents in app/actions.tsx both append a GraphSection for the same tool call.
This tool's execute appends a GraphSection to uiStream (line 13). Simultaneously, processEvents in app/actions.tsx (lines 345–350) also appends a GraphSection when it encounters a dataAnalysis tool-call event. The user will see two identical charts for every data analysis invocation.
Pick one rendering site and remove the other. Since the tool's execute already handles UI rendering (the standard pattern for AI RSC tools), remove the duplicate in app/actions.tsx:
// app/actions.tsx – inside processEvents
} else if (event.type === 'tool-call') {
- if (event.toolName === 'dataAnalysis') {
- uiStream.append(
- <Section title="Analysis">
- <GraphSection result={event.args as any} />
- </Section>
- )
- }
}🤖 Prompt for AI Agents
In `@lib/agents/tools/data-analysis.tsx` around lines 7 - 18, The
dataAnalysisTool.execute already appends a GraphSection to the uiStream, so
remove the duplicate append in the processEvents handler that reacts to a
dataAnalysis tool-call (the code path that currently creates/attaches
GraphSection for tool "dataAnalysis"); keep any other logic in processEvents for
event handling but delete the GraphSection/uiStream.append call there so only
dataAnalysisTool.execute renders the chart.
execute: async (result: DataAnalysisResult) => {
  const streamResults = createStreamableValue<DataAnalysisResult>()

  uiStream.append(<GraphSection result={streamResults.value} />)

  streamResults.done(result)
🧹 Nitpick | 🔵 Trivial
createStreamableValue is created and immediately .done() — streaming is unused.
The StreamableValue is resolved synchronously in the same tick. This adds overhead without benefit. If the result is already fully available in execute, pass it directly as a static object.
♻️ Simplified version
execute: async (result: DataAnalysisResult) => {
- const streamResults = createStreamableValue<DataAnalysisResult>()
-
- uiStream.append(<GraphSection result={streamResults.value} />)
-
- streamResults.done(result)
+ uiStream.append(<GraphSection result={result} />)
return result
}
📝 Committable suggestion
execute: async (result: DataAnalysisResult) => {
  uiStream.append(<GraphSection result={result} />)
  return result
}
🤖 Prompt for AI Agents
In `@lib/agents/tools/data-analysis.tsx` around lines 10 - 15, The current execute
implementation creates a StreamableValue via
createStreamableValue<DataAnalysisResult>(), immediately calls
streamResults.done(result) and then renders <GraphSection
result={streamResults.value} />, which wastes a streaming abstraction; instead,
pass the result directly to the UI. Replace the
createStreamableValue/streamResults usage in the execute method by calling
uiStream.append(<GraphSection result={result} />) (keep types:
DataAnalysisResult) and remove the unused streamResults/done flow so
GraphSection receives the static object synchronously.
title: z.string().describe('The title of the chart'),
description: z.string().optional().describe('A brief description of the chart'),
chartType: z.enum(['bar', 'line', 'pie', 'area', 'scatter']).describe('The type of chart to render'),
data: z.array(z.record(z.any())).describe('The data points for the chart'),
data field typing is inconsistent with DataAnalysisResult in lib/types/index.ts.
The schema defines data as z.array(z.record(z.any())), which infers to Record<string, any>[]. However, DataAnalysisResult.data in lib/types/index.ts (Line 32) is typed as any[]. These should be consistent — the schema is stricter (records), which is correct. Update the type in lib/types/index.ts to match:
Suggested fix in lib/types/index.ts
- data: any[];
+ data: Record<string, any>[];🤖 Prompt for AI Agents
In `@lib/schema/data-analysis.tsx` at line 8, The DataAnalysisResult type's data
property is currently any[] but must match the schema's stricter shape; update
the DataAnalysisResult interface/type so its data field is typed as
Record<string, any>[] (or equivalent zod-inferred shape) instead of any[],
ensuring consistency with the schema's data: z.array(z.record(z.any())).
Reference the DataAnalysisResult type and the data property when making this
change.
})).optional().describe('Optional geospatial data points to be displayed on a map')
})

export type PartialDataAnalysis = DeepPartial<typeof dataAnalysisSchema>
Bug: PartialDataAnalysis wraps the Zod schema object, not the inferred data type.
typeof dataAnalysisSchema resolves to the Zod schema class (ZodObject<...>), not the inferred TypeScript data shape. DeepPartial of a Zod schema object will produce a nonsensical type. You need z.infer<> to extract the data shape first.
Proposed fix
-export type PartialDataAnalysis = DeepPartial<typeof dataAnalysisSchema>
+export type PartialDataAnalysis = DeepPartial<z.infer<typeof dataAnalysisSchema>>🤖 Prompt for AI Agents
In `@lib/schema/data-analysis.tsx` at line 25, PartialDataAnalysis currently
applies DeepPartial to the Zod schema object rather than the inferred data
shape; change its definition to apply DeepPartial to the inferred type from
dataAnalysisSchema (i.e., use z.infer<typeof dataAnalysisSchema>), and ensure z
is imported if not already; update the type alias PartialDataAnalysis
accordingly so it becomes DeepPartial<z.infer<typeof dataAnalysisSchema>>.
This diff introduces a new streaming flow and dataAnalysis graph rendering, but it currently has major correctness and behavior-regression risks: the messages array used for follow-ups isn’t updated with the user input, chart UI may be appended twice (outer tool-call handler vs tool execute()), and chat persistence/UI reconstruction hooks were removed entirely. There are also maintainability issues from broad any usage in GraphSection and as any tool-arg passing instead of schema validation. These should be addressed before merging to avoid broken follow-ups, duplicated UI, and lost chat history.
Summary of changes
Summary
This PR shifts the chat flow to stream the researcher’s output directly while adding a dataAnalysis tool + Recharts-based graph rendering.
Key updates
- Server action flow rewrite (app/actions.tsx)
  - Fetches the system prompt using getCurrentUserIdOnServer().
  - Starts an answerStream and streams researcher(...).result.fullStream events.
  - Appends an Analysis section when the model calls the dataAnalysis tool.
  - Simplifies createAI configuration by removing clearChat, persistence hooks (onSetAIState/onGetUIState), and getUIStateFromAIState.
- New chart UI (components/graph-section.tsx)
  - Adds a client component using Recharts to render bar, line, area, pie, and scatter charts.
  - Supports DataAnalysisResult passed as an object, JSON string, or StreamableValue.
- New tool + schema
  - Adds lib/agents/tools/data-analysis.tsx and wires it into getTools().
  - Adds lib/schema/data-analysis.tsx Zod schema and DataAnalysisResult type in lib/types/index.ts.
- Prompting changes
  - Updates researcher/writer prompts to disallow text-based charts and require dataAnalysis for visualizations.
- Upload + deps
  - Expands file uploads to accept text/csv and application/json.
  - Adds recharts dependency.
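A condensed sketch of that streaming loop, for orientation before the full diff below. Event names follow the Vercel AI SDK fullStream API and the variable names come from the diff; error handling and finalization are omitted here.

// Inside processEvents(), after researcher(...) has returned `result`.
for await (const event of result.result.fullStream) {
  if (event.type === 'text-delta') {
    // Accumulate the answer and re-publish it to the streamed value.
    fullResponse += event.textDelta
    answerStream.update(fullResponse)
  } else if (event.type === 'tool-call' && event.toolName === 'dataAnalysis') {
    // The current diff appends a GraphSection here; note the review's point that
    // the tool's execute() already renders one, so only one site should keep this.
  }
}
answerStream.done(fullResponse)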
| const userId = await getCurrentUserIdOnServer() | ||
| const systemPrompt = userId ? await getSystemPrompt(userId) : null | ||
|
|
||
| const answerStream = createStreamableValue<string>('') | ||
|
|
||
| const result = await researcher( | ||
| systemPrompt || '', | ||
| uiStream, | ||
| answerStream, | ||
| messages, | ||
| 'mapbox', | ||
| false, | ||
| drawnFeatures | ||
| ) | ||
|
|
||
| aiState.update({ | ||
| ...aiState.get(), | ||
| messages: [ | ||
| ...aiState.get().messages, | ||
| { | ||
| id: nanoid(), | ||
| role: 'user', | ||
| content: userInput, | ||
| type: 'input' | ||
| } | ||
| } | ||
| } | ||
|
|
||
| const hasImage = messageParts.some(part => part.type === 'image') | ||
| const content: CoreMessage['content'] = hasImage | ||
| ? messageParts as CoreMessage['content'] | ||
| : messageParts.map(part => part.text).join('\n') | ||
|
|
||
| const type = skip | ||
| ? undefined | ||
| : formData?.has('input') || formData?.has('file') | ||
| ? 'input' | ||
| : formData?.has('related_query') | ||
| ? 'input_related' | ||
| : 'inquiry' | ||
|
|
||
| if (content) { | ||
| aiState.update({ | ||
| ...aiState.get(), | ||
| messages: [ | ||
| ...aiState.get().messages, | ||
| { | ||
| id: nanoid(), | ||
| role: 'user', | ||
| content, | ||
| type | ||
| } | ||
| ] | ||
| }) | ||
| messages.push({ | ||
| role: 'user', | ||
| content | ||
| } as CoreMessage) | ||
| } | ||
|
|
||
| const userId = 'anonymous' | ||
| const currentSystemPrompt = (await getSystemPrompt(userId)) || '' | ||
| const mapProvider = formData?.get('mapProvider') as 'mapbox' | 'google' | ||
| ] | ||
| }) | ||
| let finalAnswer = '' | ||
| let fullResponse = '' | ||
| let hasError = false | ||
|
|
||
| async function processEvents() { | ||
| let action: any = { object: { next: 'proceed' } } | ||
| if (!skip) { | ||
| const taskManagerResult = await taskManager(messages) | ||
| if (taskManagerResult) { | ||
| action.object = taskManagerResult.object | ||
| } | ||
| } | ||
|
|
||
| if (action.object.next === 'inquire') { | ||
| const inquiry = await inquire(uiStream, messages) | ||
| uiStream.done() | ||
| isGenerating.done() | ||
| isCollapsed.done(false) | ||
| aiState.done({ | ||
| ...aiState.get(), | ||
| messages: [ | ||
| ...aiState.get().messages, | ||
| { | ||
| id: nanoid(), | ||
| role: 'assistant', | ||
| content: `inquiry: ${inquiry?.question}` | ||
| try { | ||
| for await (const event of result.result.fullStream) { | ||
| if (event.type === 'text-delta') { | ||
| fullResponse += event.textDelta | ||
| answerStream.update(fullResponse) | ||
| } else if (event.type === 'tool-call') { | ||
| if (event.toolName === 'dataAnalysis') { | ||
| uiStream.append( | ||
| <Section title="Analysis"> | ||
| <GraphSection result={event.args as any} /> | ||
| </Section> | ||
| ) | ||
| } | ||
| ] | ||
| }) | ||
| return | ||
| } | ||
|
|
||
| isCollapsed.done(true) | ||
| let answer = '' | ||
| let toolOutputs: ToolResultPart[] = [] | ||
| let errorOccurred = false | ||
| const streamText = createStreamableValue<string>() | ||
| uiStream.update(<Spinner />) | ||
|
|
||
| while ( | ||
| useSpecificAPI | ||
| ? answer.length === 0 | ||
| : answer.length === 0 && !errorOccurred | ||
| ) { | ||
| const { fullResponse, hasError, toolResponses } = await researcher( | ||
| currentSystemPrompt, | ||
| uiStream, | ||
| streamText, | ||
| messages, | ||
| mapProvider, | ||
| useSpecificAPI, | ||
| drawnFeatures | ||
| ) | ||
| answer = fullResponse | ||
| toolOutputs = toolResponses | ||
| errorOccurred = hasError | ||
|
|
||
| if (toolOutputs.length > 0) { | ||
| toolOutputs.map(output => { | ||
| aiState.update({ | ||
| ...aiState.get(), | ||
| messages: [ | ||
| ...aiState.get().messages, | ||
| { | ||
| id: groupeId, | ||
| role: 'tool', | ||
| content: JSON.stringify(output.result), | ||
| name: output.toolName, | ||
| type: 'tool' | ||
| } | ||
| ] | ||
| }) | ||
| }) | ||
| } | ||
| } | ||
| } | ||
|
|
||
| if (useSpecificAPI && answer.length === 0) { | ||
| const modifiedMessages = aiState | ||
| .get() | ||
| .messages.map(msg => | ||
| msg.role === 'tool' | ||
| ? { | ||
| ...msg, | ||
| role: 'assistant', | ||
| content: JSON.stringify(msg.content), | ||
| type: 'tool' | ||
| } | ||
| : msg | ||
| ) as CoreMessage[] | ||
| const latestMessages = modifiedMessages.slice(maxMessages * -1) | ||
| answer = await writer( | ||
| currentSystemPrompt, | ||
| uiStream, | ||
| streamText, | ||
| latestMessages | ||
| ) | ||
| } else { | ||
| streamText.done() | ||
| } | ||
| finalAnswer = fullResponse | ||
| answerStream.done(finalAnswer) | ||
|
|
||
| if (!errorOccurred) { | ||
| const relatedQueries = await querySuggestor(uiStream, messages) | ||
| uiStream.append( | ||
| <Section title="Follow-up"> | ||
| <FollowupPanel /> | ||
| </Section> | ||
| ) | ||
|
|
||
| await new Promise(resolve => setTimeout(resolve, 500)) | ||
|
|
||
| aiState.done({ |
submit() now skips all the prior logic that:
- handled file uploads (images/plain text/CSV/JSON)
- derived
type(input,input_related,inquiry) - pushed a proper
CoreMessageintomessages
…but later it still calls querySuggestor(uiStream, messages) using the old messages array. In this diff, messages is never updated with the new user input, so follow-up suggestions (and any downstream tool behavior that depends on messages) may be based on stale context.
This is a correctness issue: the UI/AI state and the messages array can diverge, and followups may be unrelated to what the user just asked.
Suggestion
Re-introduce a minimal, consistent “append user message” path that updates both aiState and the local messages array before calling researcher()/querySuggestor(). For example:
// build content from userInput/file/etc.
const content = userInput
if (content) {
const userMsg: CoreMessage = { role: 'user', content }
messages.push(userMsg)
aiState.update({
...aiState.get(),
messages: [
...aiState.get().messages,
{ id: nanoid(), role: 'user', content, type: 'input' }
]
})
}If you want, I can add a commit with this fix—reply with "@CharlieHelps yes please".
} else if (event.type === 'tool-call') {
  if (event.toolName === 'dataAnalysis') {
    uiStream.append(
      <Section title="Analysis">
        <GraphSection result={event.args as any} />
      </Section>
    )
This line introduces an as any cast for tool args. That defeats the purpose of having a Zod schema + DataAnalysisResult type and can hide malformed tool payloads until runtime (especially since the graph renderer assumes specific fields).
Given you already have dataAnalysisSchema, you should validate event.args against it (or at least narrow it) before rendering.
Suggestion
Validate and narrow tool args before passing to GraphSection. For example:
import { dataAnalysisSchema } from '@/lib/schema/data-analysis'
...
if (event.toolName === 'dataAnalysis') {
const parsed = dataAnalysisSchema.safeParse(event.args)
if (parsed.success) {
uiStream.append(
<Section title="Analysis">
<GraphSection result={parsed.data} />
</Section>
)
} else {
uiStream.append(
<Section title="Analysis">
<div className="text-sm text-muted-foreground">
Failed to render chart (invalid tool payload).
</div>
</Section>
)
}
}

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this change.
```tsx
export const AI = createAI<AIState, UIState>({
  actions: {
    submit,
    clearChat
    submit
  },
  initialUIState,
  initialAIState,
  onGetUIState: async () => {
    'use server'

    const aiState = getAIState() as AIState
    if (aiState) {
      const uiState = getUIStateFromAIState(aiState)
      return uiState
    }
    return initialUIState
  },
  onSetAIState: async ({ state }) => {
    'use server'

    if (!state.messages.some(e => e.type === 'response')) {
      return
    }

    const { chatId, messages } = state
    const createdAt = new Date()
    const path = `/search/${chatId}`

    let title = 'Untitled Chat'
    if (messages.length > 0) {
      const firstMessageContent = messages[0].content
      if (typeof firstMessageContent === 'string') {
        try {
          const parsedContent = JSON.parse(firstMessageContent)
          title = parsedContent.input?.substring(0, 100) || 'Untitled Chat'
        } catch (e) {
          title = firstMessageContent.substring(0, 100)
        }
      } else if (Array.isArray(firstMessageContent)) {
        const textPart = (
          firstMessageContent as { type: string; text?: string }[]
        ).find(p => p.type === 'text')
        title =
          textPart && textPart.text
            ? textPart.text.substring(0, 100)
            : 'Image Message'
      }
    }

    const updatedMessages: AIMessage[] = [
      ...messages,
      {
        id: nanoid(),
        role: 'assistant',
        content: `end`,
        type: 'end'
      }
    ]

    const { getCurrentUserIdOnServer } = await import(
      '@/lib/auth/get-current-user'
    )
    const actualUserId = await getCurrentUserIdOnServer()

    if (!actualUserId) {
      console.error('onSetAIState: User not authenticated. Chat not saved.')
      return
    }

    const chat: Chat = {
      id: chatId,
      createdAt,
      userId: actualUserId,
      path,
      title,
      messages: updatedMessages
    }
    await saveChat(chat, actualUserId)
  }
  initialUIState: [],
  initialAIState: { chatId: nanoid(), messages: [] }
})
```
createAI no longer persists chats (onSetAIState) nor reconstructs UI from state (onGetUIState / getUIStateFromAIState). That’s a major behavior change: users may lose chat history persistence/share page behavior.
If the intent is only to fix streaming/rendering issues, removing persistence entirely is too destructive. If it’s intentional, it needs replacement wiring elsewhere (not shown here).
Suggestion
Restore onSetAIState and onGetUIState (or provide equivalent persistence hooks) unless this PR explicitly intends to remove chat persistence.
A minimal safer change would be to keep the previous persistence code and only adjust the submit() logic + GraphSection integration.
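For instance, a minimal sketch of that wiring, assuming the previously used helpers (getUIStateFromAIState, saveChat, getCurrentUserIdOnServer) are still exported and the AIState/Chat shapes shown above are unchanged, could keep the new submit action while reattaching both hooks:

```tsx
export const AI = createAI<AIState, UIState>({
  actions: { submit },
  initialUIState: [],
  initialAIState: { chatId: nanoid(), messages: [] },
  onGetUIState: async () => {
    'use server'
    // Rebuild the UI from persisted AI state, as the removed hook did
    const aiState = getAIState() as AIState
    return aiState ? getUIStateFromAIState(aiState) : []
  },
  onSetAIState: async ({ state }) => {
    'use server'
    // Persist only once a response exists, mirroring the removed hook
    if (!state.messages.some(m => m.type === 'response')) return
    const { getCurrentUserIdOnServer } = await import(
      '@/lib/auth/get-current-user'
    )
    const userId = await getCurrentUserIdOnServer()
    if (!userId) return
    // Title derivation and the trailing 'end' message are elided here;
    // see the removed hook above for the full logic
    await saveChat(
      {
        id: state.chatId,
        createdAt: new Date(),
        userId,
        path: `/search/${state.chatId}`,
        title: 'Untitled Chat',
        messages: state.messages
      },
      userId
    )
  }
})
```

This keeps the streaming changes intact and only restores the save/load behavior.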
Reply with "@CharlieHelps yes please" if you want me to propose a commit that reintroduces the persistence hooks while keeping your new streaming path.
```tsx
} else if (event.type === 'tool-call') {
  if (event.toolName === 'dataAnalysis') {
    uiStream.append(
      <Section title="Analysis">
        <GraphSection result={event.args as any} />
      </Section>
    )
```
GraphSection is rendered with event.args as any. Tool-call args are not the tool result; they are the model-proposed parameters. That means you may be rendering incomplete/invalid chart configs before the tool actually executes/validates, and it duplicates rendering because dataAnalysisTool.execute() also appends a GraphSection to the UI stream.
Net effect: users can see two charts, and the first one may be wrong/unvalidated.
Suggestion
Only render charts from the tool result (i.e., the tool-result event), not from tool-call args, and avoid double-appending from both places.
Option A (recommended): remove this entire tool-call branch and let dataAnalysisTool.execute() handle UI rendering.
Option B: handle tool-result:
} else if (event.type === 'tool-result' && event.toolName === 'dataAnalysis') {
uiStream.append(
<Section title="Analysis">
<GraphSection result={event.result} />
</Section>
)
}

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this suggestion.
```tsx
} catch (error) {
  console.error('Error in processEvents:', error)
  hasError = true
  answerStream.error(error)
} finally {
  isGenerating.done(false)
  uiStream.done()
}
```
You’re calling answerStream.error(error) where error is unknown. Depending on the StreamableValue API expectations, this can lead to inconsistent client behavior (some consumers expect an Error). More importantly, you set hasError = true but never use it for UI/state decisions—so failures likely result in a silent/blank answer section with no user-facing message.
Suggestion
Normalize the error to an Error instance and append a user-visible fallback section when streaming fails.
} catch (e) {
const err = e instanceof Error ? e : new Error(String(e))
console.error('Error in processEvents:', err)
answerStream.error(err)
uiStream.append(
<Section title="Error">
<BotMessage content={'Something went wrong while generating the response.'} />
</Section>
)
}

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this suggestion.
```tsx
export function GraphSection({ result }: GraphSectionProps) {
  if (!result) return null;

  // Check if result is a static DataAnalysisResult object
  // A StreamableValue is an opaque object and shouldn't have these properties
  const isStatic = typeof result === 'object' && result !== null &&
    ('chartType' in (result as any) || 'title' in (result as any) || 'data' in (result as any));
  const isString = typeof result === 'string';

  if (isStatic || isString) {
    return <GraphCard data={result as any} />;
  }

  // Handle case where it might be a streamable value or something else
  // We use a safe wrapper to avoid crashing if useStreamableValue throws
  return <StreamedGraphSection result={result as any} />;
}

function StreamedGraphSection({ result }: { result: StreamableValue<any> }) {
  const [data, error, pending] = useStreamableValue(result);

  if (pending && !data) {
    return (
      <Section className="py-2">
        <div className="animate-pulse flex space-y-4 flex-col">
          <div className="h-4 bg-muted rounded w-3/4"></div>
          <div className="h-64 bg-muted rounded"></div>
        </div>
      </Section>
    );
  }

  return <GraphCard data={data} />;
}
```
GraphSection uses multiple any-based escape hatches (result as any, StreamableValue<any>, data: any), and uses an in-operator heuristic to distinguish streamable vs static values.
This makes it easy for malformed data to slip through and cause runtime errors inside Recharts (which can be hard to debug). Since you already own DataAnalysisResult, you can tighten this significantly:
- avoid `any` in prop types
- use a small type guard for `DataAnalysisResult`
- pass `StreamableValue<DataAnalysisResult>` explicitly to the streamed component
Also: StreamedGraphSection ignores error from useStreamableValue() entirely, so failures will silently render a blank/partial card.
Suggestion
Replace the heuristic + any with proper narrowing and an error state. Example:
function isDataAnalysisResult(v: unknown): v is DataAnalysisResult {
return !!v && typeof v === 'object' && 'chartType' in (v as any) && 'data' in (v as any)
}
export function GraphSection({ result }: GraphSectionProps) {
if (!result) return null
if (typeof result === 'string' || isDataAnalysisResult(result)) {
return <GraphCard data={result} />
}
return <StreamedGraphSection result={result} />
}
function StreamedGraphSection({ result }: { result: StreamableValue<DataAnalysisResult> }) {
const [data, error, pending] = useStreamableValue(result)
if (error) return <div className="text-sm text-destructive">Failed to load chart.</div>
...
}

Reply with "@CharlieHelps yes please" if you'd like me to add a commit that removes the any and adds streamed error handling.
```tsx
export const dataAnalysisTool = ({ uiStream }: ToolProps) => ({
  description: 'Analyze data and generate a structured representation for visualization in a graph or chart. Use this tool when the user asks for a chart, graph, or data visualization, or when you have structured data (like from a CSV or search results) that would be better understood visually.',
  parameters: dataAnalysisSchema,
  execute: async (result: DataAnalysisResult) => {
    const streamResults = createStreamableValue<DataAnalysisResult>()

    uiStream.append(<GraphSection result={streamResults.value} />)

    streamResults.done(result)

    return result
  }
```
The dataAnalysis tool appends a <GraphSection /> from within the tool execution itself, but app/actions.tsx also appends a graph UI on the tool-call event.
Depending on how researcher is implemented (and whether tool execution also appends UI), you can end up with duplicate charts for the same tool call: one appended when the call is emitted, and another appended when the tool finishes.
This is a design/flow issue: choose one place to render tool UI (either in the tool execute or in the outer event loop), not both.
Suggestion
Pick a single rendering strategy:
- Preferred: render the UI inside the tool `execute()` (since it has the validated `result`), and remove the `tool-call` rendering branch in `app/actions.tsx`; see the sketch below.
- Alternatively: keep the outer `tool-call` handler and remove UI appending from the tool.
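As a rough sketch of the preferred option, assuming the tool keeps its current ToolProps shape and that the error return value below is only illustrative, the tool could re-validate its arguments and remain the single place that appends chart UI:

```tsx
export const dataAnalysisTool = ({ uiStream }: ToolProps) => ({
  description:
    'Analyze data and generate a structured representation for visualization in a graph or chart.',
  parameters: dataAnalysisSchema,
  execute: async (args: DataAnalysisResult) => {
    // Re-validate at the boundary so malformed model output fails loudly here
    const parsed = dataAnalysisSchema.safeParse(args)
    if (!parsed.success) {
      uiStream.append(
        <Section title="Analysis">
          <div className="text-sm text-muted-foreground">
            Failed to render chart (invalid tool payload).
          </div>
        </Section>
      )
      // Hypothetical error shape; adjust to whatever the agent expects back
      return { error: 'Invalid dataAnalysis payload' }
    }

    // Single rendering site: stream the validated result into GraphSection
    const streamResults = createStreamableValue<DataAnalysisResult>()
    uiStream.append(
      <Section title="Analysis">
        <GraphSection result={streamResults.value} />
      </Section>
    )
    streamResults.done(parsed.data)
    return parsed.data
  }
})
```

With this in place, the `tool-call` branch in `app/actions.tsx` can be deleted, so each dataAnalysis call renders exactly one chart.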
If you want, I can add a commit implementing the preferred approach (remove tool-call chart appending and validate via schema in the tool)—reply with "@CharlieHelps yes please".
lib/agents/index.tsx
Outdated
```tsx
export interface DrawnFeature {
  type: string;
  measurement: string;
  geometry: any;
}
```
DrawnFeature was moved to lib/agents/index.tsx, but the new type is broader/less strict than the old one (type: string instead of 'Polygon' | 'LineString', and it dropped id). If other parts of the system (e.g., map drawing, serialization, resolution search) relied on id or specific geometry types, this will allow invalid shapes through without any guardrails.
Suggestion
Restore the stricter shape (or introduce a second type) so downstream code can rely on required fields.
export interface DrawnFeature {
id: string
type: 'Polygon' | 'LineString'
measurement: string
geometry: unknown
}

If you truly need a looser shape for other contexts, create DrawnFeatureLoose and keep DrawnFeature strict.
Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this suggestion.
```tsx
config: z.object({
  xAxisKey: z.string().describe('The key in the data object to use for the X axis'),
  yAxisKey: z.string().optional().describe('The key in the data object to use for the Y axis (for scatter charts)'),
  series: z.array(z.object({
    key: z.string().describe('The key in the data object for this series'),
    name: z.string().describe('The display name for this series'),
    color: z.string().optional().describe('Optional hex color for this series')
  })).describe('The series to be plotted')
}).describe('Configuration for the chart layout'),
```
dataAnalysisSchema defines series as required (z.array(...)), but GraphCard treats config.series as optional (config.series?.map). That mismatch tends to hide broken tool outputs and can lead to empty charts without an obvious error state.
If the schema truly requires series, the UI should assume it exists and render a validation error when it doesn’t. If it’s optional, the schema should reflect that.
Suggestion
Align schema and renderer:
- If `series` is required, remove optional chaining and show a clear message when `series.length === 0`.
- If `series` can be omitted (e.g., pie charts), make it optional in the schema or model it as a discriminated union by `chartType` (see the sketch below).
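A minimal sketch of that discriminated-union idea, assuming purely for illustration that chartType is folded into the config object and that pie is the only variant allowed to omit series (adjust the variant names to the real schema):

```ts
const seriesSchema = z.array(
  z.object({
    key: z.string(),
    name: z.string(),
    color: z.string().optional()
  })
)

// Pie charts may omit series; every other chart type must declare at least one.
const chartConfigSchema = z.discriminatedUnion('chartType', [
  z.object({
    chartType: z.literal('pie'),
    xAxisKey: z.string(),
    series: seriesSchema.optional()
  }),
  z.object({
    chartType: z.enum(['bar', 'line', 'area', 'scatter']),
    xAxisKey: z.string(),
    yAxisKey: z.string().optional(),
    series: seriesSchema.min(1, 'At least one series is required')
  })
])
```

Either way, the schema and GraphCard should agree on whether a missing or empty series is representable.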
Example quick fix for the first option:

```ts
series: z.array(...).min(1, 'At least one series is required')
```

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this suggestion.
…-search.tsx (take 2)
…mpting to prevent text-based charts
9d1fba4 to d09773f Compare
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
lib/agents/writer.tsx (1)
7-11: 🧹 Nitpick | 🔵 Trivial
`dynamicSystemPrompt` parameter should be optional or nullable. Since the writer is designed to fall back to `default_system_prompt` when no dynamic prompt is provided (line 34), the parameter should reflect that contract with `string | null` or a default value. Currently a caller must always pass a string, making the intent unclear.

Proposed fix

```diff
 export async function writer(
-  dynamicSystemPrompt: string, // New parameter
+  dynamicSystemPrompt: string | null, // New parameter
   uiStream: ReturnType<typeof createStreamableUI>,
```

lib/agents/researcher.tsx (1)

141-170: ⚠️ Potential issue | 🔴 Critical

Stream is fully consumed here, making it unavailable for callers. The `researcher` function iterates `result.fullStream` to completion (lines 141–170), then returns the `result` object. In `app/actions.tsx` (line 361), `processEvents` tries to iterate `result.result.fullStream` again — but the async iterable has already been exhausted. This means `processEvents` will receive zero events, so `fullResponse` in `actions.tsx` stays empty, `answerStream` never updates, and the user sees no streamed answer.

Either the researcher should not consume the stream (letting the caller do it), or it should not return `result` for re-iteration.
resultfor re-iteration.
🤖 Fix all issues with AI agents
In `@app/actions.tsx`:
- Around line 332-340: The code awaits researcher(...) which fully consumes
result.fullStream before processEvents can iterate it, so switch to Option B:
stop awaiting researcher and instead rely on its internal streaming — remove or
skip calling processEvents(result) and use the values researcher already updates
(e.g., result.fullResponse, streamText, uiStream) as the streamed output; ensure
any dataAnalysis/GraphSection rendering that processEvents handled is moved into
researcher or its tool callbacks so charts and tool-call UI are produced by
researcher’s streaming instead of by processEvents.
In `@components/graph-section.tsx`:
- Around line 88-89: The renderChart function currently only checks for plotData
and config but doesn’t validate config.series, so bar/line/area branches using
config.series?.map(...) can render empty charts; update renderChart to check
that config.series is defined and has length > 0 before entering chart-specific
branches (e.g., where 'bar', 'line', 'area' are handled) and return the same
"Missing chart data or configuration" (or a similar user-facing message) when
series is missing or empty to avoid rendering confusing empty charts.
- Around line 137-158: The Pie branch in components/graph-section.tsx directly
calls plotData.map which will throw if plotData is a truthy non-array; change
the rendering to first normalize/guard plotData (e.g., const safePlotData =
Array.isArray(plotData) ? plotData : []) and use safePlotData for mapping so map
is never called on a non-array, and also guard the dataKey used in <Pie>
(config.series?.[0]?.key) by computing a const dataKey = config.series?.[0]?.key
and either provide a sensible fallback or skip rendering the Pie/Cell elements
when dataKey is undefined to avoid passing undefined to Recharts.
In `@lib/agents/writer.tsx`:
- Around line 28-31: In the system prompt text in lib/agents/writer.tsx replace
the typo'd phrase "proconfigured example queires" with "preconfigured example
queries" (locate the string that currently reads '**IMPORTANT**: NEVER create
charts...' and update that sentence); ensure you only correct the wording and
preserve surrounding punctuation and the example query content including the
planet computer sentence.
In `@package.json`:
- Line 89: The package.json currently adds "recharts": "^3.7.0" which is
compatible with React 19, but you must also ensure the peer dependency for
react-is matches your React major version to avoid runtime rendering issues;
update package.json so "react-is" is set to the React 19-compatible range (e.g.,
"react-is": "^19") in dependencies or devDependencies and run install to resolve
and lock the correct version, keeping the existing "recharts": "^3.7.0" entry.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (2)
- bun.lock is excluded by !**/*.lock
- verification/fix_verification.png is excluded by !**/*.png
📒 Files selected for processing (12)
- app/actions.tsx
- components/chat-panel.tsx
- components/graph-section.tsx
- components/header-search-button.tsx
- lib/agents/researcher.tsx
- lib/agents/resolution-search.tsx
- lib/agents/tools/data-analysis.tsx
- lib/agents/tools/index.tsx
- lib/agents/writer.tsx
- lib/schema/data-analysis.tsx
- lib/types/index.ts
- package.json
💤 Files with no reviewable changes (1)
- components/header-search-button.tsx
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-01-31T16:53:16.137Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-31T16:53:16.137Z
Learning: The GraphSection component in components/graph-section.tsx should use theme-aware colors that match the site's current theme (light, dark, or earth) by reading CSS variables like --primary, --secondary, --accent, and --muted, and converting them to hex colors for Recharts compatibility.
Applied to files:
- components/graph-section.tsx
- app/actions.tsx
🧬 Code graph analysis (3)
lib/types/index.ts (3)
components/map/map-data-context.tsx (2)
MapData (15-32), MapDataContextType (34-37)
mapbox_mcp/hooks.ts (1)
DistanceResult (22-29)
lib/agents/tools/geospatial.tsx (1)
Location(18-23)
components/graph-section.tsx (4)
lib/types/index.ts (1)
DataAnalysisResult (28-47)
components/section.tsx (1)
Section (24-75)
components/tool-badge.tsx (1)
ToolBadge (11-26)
components/ui/card.tsx (5)
Card (79-79), CardHeader (79-79), CardTitle (79-79), CardDescription (79-79), CardContent (79-79)
lib/agents/tools/index.tsx (1)
lib/agents/tools/data-analysis.tsx (1)
dataAnalysisTool(7-19)
🔇 Additional comments (6)
lib/agents/resolution-search.tsx (1)
79-85: No issues in the updated `hasImage` check or stream comment.
Formatting/readability tweak only; behavior remains unchanged.

components/chat-panel.tsx (1)
216-216: LGTM — accept attribute expanded for data analysis feature.

Adding `text/csv` and `application/json` aligns with the new `dataAnalysis` tool capabilities.

Note: on submit, non-image files are attached to `FormData` (line 115) but are not added to the `content` array shown in the `UserMessage` (lines 98–103). Users won't see a preview of their CSV/JSON attachment in the chat — only the filename badge (lines 292–303). This is acceptable but worth being intentional about.

lib/agents/researcher.tsx (1)
51-63: Data analysis tool guidelines are clear and well-structured.

The prompt clearly defines when to use `dataAnalysis`, provides good examples, and enforces the no-text-charts rule.

app/actions.tsx (2)
252-254: `splice` mutates `messages` in place — silent truncation of conversation history.

`messages.splice(0, Math.max(messages.length - maxMessages, 0))` keeps only the last `maxMessages` (5 or 10). This silently drops older context. With `maxMessages = 5` (when `USE_SPECIFIC_API_FOR_WRITER` is true), the model loses most conversation history quickly. Ensure this truncation is intentional and consider logging when messages are dropped.
366-371: `event.args` on `tool-call` contains the tool's input arguments, not its result.

The `tool-call` event fires when the model invokes a tool, so `event.args` has the arguments passed to the tool based on `dataAnalysisSchema`. Since the schema defines the complete chart specification (title, chartType, data, config), the LLM generates the full chart data as tool arguments. `GraphSection` handles this gracefully with type guards that detect static objects, strings, and `StreamableValue` instances, and `GraphCard` validates that required fields like `data` and `config` are present before rendering.
29-33: LGTM — tool integration is consistent with the existing pattern.

The `dataAnalysis` tool is wired in the same way as the other tools. Note that `dataAnalysisTool` only destructures `uiStream` from the props (ignoring `fullResponse`), but this is harmless since the spread matches the `ToolProps` interface.
✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.
```tsx
const result = await researcher(
  systemPrompt || '',
  uiStream,
  answerStream,
  messages,
  'mapbox',
  false,
  drawnFeatures
)
```
Critical: await researcher(...) exhausts the stream before processEvents can consume it.
researcher is awaited here, which means it runs to completion — internally iterating result.fullStream (see lib/agents/researcher.tsx:141-170) and consuming all events. When processEvents (line 361) later tries to iterate result.result.fullStream, the async iterable is already exhausted. The user will see no streamed response in the UI.
The fix depends on the intended architecture:
- Option A: Don't `await` researcher; let `processEvents` be the sole consumer. Refactor `researcher` to not iterate the stream itself, just set it up and return the stream for the caller.
- Option B: Remove `processEvents` entirely and rely on `researcher`'s internal streaming (which already updates `streamText` / `uiStream`). Just use `result.fullResponse` for the final answer.
Option B is simpler given the current researcher implementation:
Proposed fix (Option B — use researcher's output directly)
const result = await researcher(
systemPrompt || '',
uiStream,
answerStream,
messages,
'mapbox',
false,
drawnFeatures
)
aiState.update({
...aiState.get(),
messages: [
...aiState.get().messages,
{
id: nanoid(),
role: 'user',
content: userInput,
type: 'input'
}
]
})
- let finalAnswer = ''
- let fullResponse = ''
- let hasError = false
-
- async function processEvents() {
- try {
- for await (const event of result.result.fullStream) {
- if (event.type === 'text-delta') {
- fullResponse += event.textDelta
- answerStream.update(fullResponse)
- } else if (event.type === 'tool-call') {
- if (event.toolName === 'dataAnalysis') {
- uiStream.append(
- <Section title="Analysis">
- <GraphSection result={event.args as any} />
- </Section>
- )
- }
- }
- }
-
- finalAnswer = fullResponse
- answerStream.done(finalAnswer)
-
- const relatedQueries = await querySuggestor(uiStream, messages)
-
- aiState.done({
- ...aiState.get(),
- messages: [
- ...aiState.get().messages,
- {
- id: groupeId,
- role: 'assistant',
- content: finalAnswer,
- type: 'response'
- },
- ...
- ]
- })
- } catch (error) { ... }
- }
-
- processEvents()
+ // researcher already streamed the response via answerStream and uiStream
+ answerStream.done(result.fullResponse)
+
+ const relatedQueries = await querySuggestor(uiStream, messages)
+
+ aiState.done({
+ ...aiState.get(),
+ messages: [
+ ...aiState.get().messages,
+ {
+ id: groupeId,
+ role: 'assistant',
+ content: result.fullResponse,
+ type: 'response'
+ },
+ {
+ id: groupeId,
+ role: 'assistant',
+ content: JSON.stringify(relatedQueries),
+ type: 'related'
+ },
+ {
+ id: groupeId,
+ role: 'assistant',
+ content: 'followup',
+ type: 'followup'
+ }
+ ]
+ })
+
+ isGenerating.done(false)
+ uiStream.done()

Note: If you go with Option B, the dataAnalysis tool-call rendering (GraphSection) needs to be handled inside researcher or its tools rather than in processEvents.
🤖 Prompt for AI Agents
In `@app/actions.tsx` around lines 332 - 340, The code awaits researcher(...)
which fully consumes result.fullStream before processEvents can iterate it, so
switch to Option B: stop awaiting researcher and instead rely on its internal
streaming — remove or skip calling processEvents(result) and use the values
researcher already updates (e.g., result.fullResponse, streamText, uiStream) as
the streamed output; ensure any dataAnalysis/GraphSection rendering that
processEvents handled is moved into researcher or its tool callbacks so charts
and tool-call UI are produced by researcher’s streaming instead of by
processEvents.
```tsx
const renderChart = () => {
  if (!plotData || !config) return <div className="flex items-center justify-center h-full text-muted-foreground italic">Missing chart data or configuration</div>;
```
🧹 Nitpick | 🔵 Trivial
Consider validating config.series before entering chart-specific branches.
For bar, line, and area charts, if config.series is undefined or empty, the optional chaining (config.series?.map(...)) silently renders a chart with axes but no data series — a confusing empty chart with no feedback. A check here would cover all chart types uniformly.
♻️ Suggested addition after the existing guard
if (!plotData || !config) return <div className="flex items-center justify-center h-full text-muted-foreground italic">Missing chart data or configuration</div>;
+ if (!config.series?.length) return <div className="flex items-center justify-center h-full text-muted-foreground italic">No data series configured</div>;

🤖 Prompt for AI Agents
In `@components/graph-section.tsx` around lines 88 - 89, The renderChart function
currently only checks for plotData and config but doesn’t validate
config.series, so bar/line/area branches using config.series?.map(...) can
render empty charts; update renderChart to check that config.series is defined
and has length > 0 before entering chart-specific branches (e.g., where 'bar',
'line', 'area' are handled) and return the same "Missing chart data or
configuration" (or a similar user-facing message) when series is missing or
empty to avoid rendering confusing empty charts.
```tsx
case 'pie':
  return (
    <ResponsiveContainer width="100%" height="100%">
      <PieChart>
        <Pie
          data={plotData}
          dataKey={config.series?.[0]?.key}
          nameKey={config.xAxisKey}
          cx="50%"
          cy="50%"
          outerRadius={80}
          label
        >
          {plotData.map((entry, index) => (
            <Cell key={`cell-${index}`} fill={COLORS[index % COLORS.length]} />
          ))}
        </Pie>
        <Tooltip />
        <Legend />
      </PieChart>
    </ResponsiveContainer>
  );
```
Pie chart will crash at runtime if plotData is not a proper array.
Line 89 guards against falsy plotData, but if plotData is a truthy non-array (e.g., an object), plotData.map() on line 150 throws a TypeError. Unlike bar/line/area where plotData is only passed as a prop to Recharts (which handles it gracefully), the pie branch calls .map() directly.
Additionally, the previously flagged issue about config.series?.[0]?.key being undefined when series is empty still applies.
🛡️ Proposed guard
case 'pie':
+ if (!Array.isArray(plotData)) return <div className="flex items-center justify-center h-full text-muted-foreground italic">Invalid data for pie chart</div>;
+ if (!config.series?.length) return <div className="flex items-center justify-center h-full text-muted-foreground italic">Missing series configuration for pie chart</div>;
return (
<ResponsiveContainer width="100%" height="100%">
      <PieChart>

🤖 Prompt for AI Agents
In `@components/graph-section.tsx` around lines 137 - 158, The Pie branch in
components/graph-section.tsx directly calls plotData.map which will throw if
plotData is a truthy non-array; change the rendering to first normalize/guard
plotData (e.g., const safePlotData = Array.isArray(plotData) ? plotData : [])
and use safePlotData for mapping so map is never called on a non-array, and also
guard the dataKey used in <Pie> (config.series?.[0]?.key) by computing a const
dataKey = config.series?.[0]?.key and either provide a sensible fallback or skip
rendering the Pie/Cell elements when dataKey is undefined to avoid passing
undefined to Recharts.
```
**IMPORTANT**: NEVER create charts, graphs, or visual representations using text-based formatting, asterisks (*), or other hardcoded characters. If you need to present data visually, ensure the appropriate tool has been used. Do not attempt to simulate a chart in your text response.

There are also some proconfigured example queires.
When asked about 'What is a planet computer?' answer with the following: '"A planet computer is a proprietary environment aware system that interoperates Climate forecasting, mapping and scheduling using cutting edge multi-agents to streamline automation and exploration on a planet'
```
Typos in system prompt.
Line 30: "proconfigured example queires" → "preconfigured example queries".
Proposed fix
- There are also some proconfigured example queires.
+ There are also some preconfigured example queries.

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
  **IMPORTANT**: NEVER create charts, graphs, or visual representations using text-based formatting, asterisks (*), or other hardcoded characters. If you need to present data visually, ensure the appropriate tool has been used. Do not attempt to simulate a chart in your text response.
- There are also some proconfigured example queires.
+ There are also some preconfigured example queries.
  When asked about 'What is a planet computer?' answer with the following: '"A planet computer is a proprietary environment aware system that interoperates Climate forecasting, mapping and scheduling using cutting edge multi-agents to streamline automation and exploration on a planet'
```
🤖 Prompt for AI Agents
In `@lib/agents/writer.tsx` around lines 28 - 31, In the system prompt text in
lib/agents/writer.tsx replace the typo'd phrase "proconfigured example queires"
with "preconfigured example queries" (locate the string that currently reads
'**IMPORTANT**: NEVER create charts...' and update that sentence); ensure you
only correct the wording and preserve surrounding punctuation and the example
query content including the planet computer sentence.
| "react-markdown": "^9.1.0", | ||
| "react-textarea-autosize": "^8.5.9", | ||
| "react-toastify": "^10.0.6", | ||
| "recharts": "^3.7.0", |
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🌐 Web query:
recharts 3.7.0 npm React 19 compatibility
💡 Result:
Recharts 3.7.0 is npm-install compatible with React 19: its peerDependencies explicitly allow react, react-dom, and react-is ^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0. [1]
Practical note: you should ensure react-is matches your React major (so with React 19, use react-is@^19), as mismatches have historically caused “charts don’t render” problems in React 19 setups. [1][2]
Sources:
[1] Recharts 3.7.0 package.json (jsDelivr CDN) https://cdn.jsdelivr.net/npm/[email protected]/package.json
[2] Recharts GitHub issue: “Support React 19” #4558 recharts/recharts#4558
🌐 Web query:
recharts version 3.7.0 release peer dependencies
💡 Result:
Recharts v3.7.0 (released Jan 2026) (newreleases.io) has these peer dependencies:
react, react-dom, react-is (jsdocs.io)
(Upstream discussion notes react-dom + react-is are kept as peers to ensure they match your installed React version.) (github.com)
Citations:
- 1: https://newreleases.io/project/github/recharts/recharts/release/v3.7.0?utm_source=openai
- 2: https://www.jsdocs.io/package/recharts
- 3: How should we define recharts dependencies in package.json? recharts/recharts#5701
Recharts 3.7.0 is compatible with React 19 — dependency is safe to add.
Recharts 3.7.0 (released Jan 2026) explicitly supports React 19 in its peer dependencies (^19.0.0). Ensure react-is matches your React major version (React 19 requires react-is@^19) to avoid rendering issues, as mismatches have historically caused problems in React 19 setups.
🤖 Prompt for AI Agents
In `@package.json` at line 89, The package.json currently adds "recharts":
"^3.7.0" which is compatible with React 19, but you must also ensure the peer
dependency for react-is matches your React major version to avoid runtime
rendering issues; update package.json so "react-is" is set to the React
19-compatible range (e.g., "react-is": "^19") in dependencies or devDependencies
and run install to resolve and lock the correct version, keeping the existing
"recharts": "^3.7.0" entry.
Summary by CodeRabbit
New Features
Refactor