feat(composer): A2UI Dojo — interactive JSONL playback and streaming viewer#987
nan-yu wants to merge 47 commits into google:main
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Warning: Gemini encountered an error creating the review. You can try again by commenting `/gemini review`.
Code Review
This pull request introduces significant enhancements to A2UI's streaming capabilities and developer tooling. Key changes include the implementation of a new stream_response_to_parts helper in the Python SDK for incremental parsing of LLM token streams into A2UI parts, and a refactoring of client-side request handling in the Angular client to support Server-Sent Events (SSE) for streaming responses. The DEFAULT_WORKFLOW_RULES were updated to include new guidelines for component ordering, crucial for incremental UI rendering. A new Design System Integration guide was added to the documentation, explaining how to wrap existing components as A2UI components. The Composer tool received a major update with a new 'Dojo' page, providing a streaming player for A2UI scenarios, including UI for playback controls, raw JSONL stream visualization, and lifecycle events. This involved new hooks (useA2UISurface, useStreamingPlayer) and a transcoder utility to convert v0.9 messages to v0.8. Several new A2UI scenario JSON files were added to the Composer, demonstrating various components and interactions, including a new YouTube component in the RizzCharts catalog. Dependency updates for TypeScript and Angular CLI were also included.
Review comments highlight several areas for improvement: the agent's error handling for empty responses might need re-evaluation, and the Angular client's clearSurfaces() call could cause UI flicker during streaming. Additionally, there are concerns about verbose logging in agent SDKs, a debug console.log statement in the React renderer that should be removed, the brittleness of a patching script in the Composer, and the presence of commented-out, potentially dead code in the Composer's scenario definitions.
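The SSE-based client streaming mentioned in the summary can be pictured with a small sketch. This is an illustrative reconstruction, not the PR's code: `parseSseBuffer` and its result shape are hypothetical names, showing only the standard `data:`-line framing a client would need to split as chunks arrive.

```typescript
// Sketch: splitting an accumulated SSE body into JSON payloads.
// An SSE stream separates events with a blank line; the final segment
// may be a partial event and must be kept for the next chunk.

interface SseParseResult {
  events: unknown[]; // parsed `data:` payloads
  rest: string;      // incomplete trailing bytes, carried to the next chunk
}

function parseSseBuffer(buffer: string): SseParseResult {
  const events: unknown[] = [];
  const segments = buffer.split("\n\n");
  const rest = segments.pop() ?? ""; // last segment may be incomplete
  for (const segment of segments) {
    for (const line of segment.split("\n")) {
      if (line.startsWith("data:")) {
        events.push(JSON.parse(line.slice(5).trim()));
      }
    }
  }
  return { events, rest };
}
```

A caller would feed each network chunk into a buffer, call the parser, and re-buffer `rest` until the next chunk completes it.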
```diff
             yield p.text

       if selected_catalog:
         from a2ui.core.parser.streaming import A2uiStreamParser

         if session_id not in self._parsers:
           self._parsers[session_id] = A2uiStreamParser(catalog=selected_catalog)

         async for part in stream_response_to_parts(
             self._parsers[session_id],
             token_stream(),
         ):
           yield {
               "is_task_complete": False,
               "parts": [part],
           }
       else:
         async for token in token_stream():
           yield {
               "is_task_complete": False,
-              "updates": self.get_processing_message(),
+              "updates": token,
           }

-      if final_response_content is None:
-        logger.warning(
-            "--- RestaurantAgent.stream: Received no final response content from"
-            f" runner (Attempt {attempt}). ---"
-        )
-        if attempt <= max_retries:
-          current_query_text = (
-              "I received no response. Please try again."
-              f" Please retry the original request: '{query}'"
-          )
-          continue  # Go to next retry
-        else:
-          # Retries exhausted on no-response
-          final_response_content = (
-              "I'm sorry, I encountered an error and couldn't process your request."
-          )
-          # Fall through to send this as a text-only error
```
The previous retry mechanism for when final_response_content was None has been removed. This means the agent will no longer explicitly retry if it receives no content from the runner. If full_content_list is empty, final_response_content will now be an empty string, which might lead to different validation outcomes or unexpected behavior compared to the previous None check. This change in error handling could impact the agent's ability to recover from transient issues or empty responses.
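The removed retry can be pictured as a generic guard. This is sketched in TypeScript purely for illustration (the sample agent is Python), and `fetchResponse`, `maxRetries`, and the fallback text are stand-ins mirroring the removed logic, not real agent API.

```typescript
// Illustrative retry-on-empty guard: re-invoke the runner while the
// response is missing or empty, then fall back to an error message
// once retries are exhausted.

function withEmptyRetry(
  fetchResponse: () => string | null,
  maxRetries: number,
): string {
  for (let attempt = 1; attempt <= maxRetries + 1; attempt++) {
    const content = fetchResponse();
    if (content !== null && content.length > 0) {
      return content; // got real content
    }
    // Empty response: loop again until retries are exhausted.
  }
  return "I'm sorry, I encountered an error and couldn't process your request.";
}
```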
```typescript
const response = await this.send(request as Types.A2UIClientEventMessage);
messages = response;
// Clear surfaces at the start of a new request
this.processor.clearSurfaces();
```
The this.processor.clearSurfaces() call at the beginning of makeRequest will clear all rendered A2UI surfaces for every new request. For streaming scenarios, where surfaces are built incrementally, this could cause a noticeable flicker or reset of the UI state with each new chunk, negatively impacting user experience. Consider clearing surfaces only when a new session starts or when explicitly instructed by a deleteSurface message from the agent, to allow for smooth incremental updates.
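As a sketch of the reviewer's suggestion, clearing could be gated on the session rather than run on every request. `SessionGatedClearer` and `SurfaceProcessor` are hypothetical names, not the Angular client's actual API:

```typescript
// Only clear rendered surfaces when the session changes, so streamed
// chunks within one session update surfaces incrementally (no flicker).

interface SurfaceProcessor {
  clearSurfaces(): void;
}

class SessionGatedClearer {
  private lastSessionId: string | null = null;

  constructor(private processor: SurfaceProcessor) {}

  // Returns true when a clear actually happened.
  onRequest(sessionId: string): boolean {
    if (sessionId !== this.lastSessionId) {
      this.processor.clearSurfaces();
      this.lastSessionId = sessionId;
      return true;
    }
    return false; // same session: keep surfaces for incremental updates
  }
}
```

A `deleteSurface` message from the agent would remain the explicit way to drop a surface mid-session.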
```python
logger.info("-----------------------------")
logger.info(f"--- AGENT: Received token:\n{token}")
response_parts = parser.process_chunk(token)
logger.info(
    f"--- AGENT: Response parts:\n{[part.a2ui_json for part in response_parts]}\n"
)
logger.info("-----------------------------")
```
The logger.info statements within the async for token in token_stream loop can be very verbose and might impact performance, especially when processing a large number of tokens. Consider changing these to logger.debug or removing them for production environments. This issue is also present in samples/agent/adk/contact_lookup/agent.py (lines 268-274) and samples/agent/adk/restaurant_finder/agent.py (lines 262-268).
Suggested change:

```diff
-logger.info("-----------------------------")
-logger.info(f"--- AGENT: Received token:\n{token}")
+logger.debug("-----------------------------")
+logger.debug(f"--- AGENT: Received token:\n{token}")
 response_parts = parser.process_chunk(token)
-logger.info(
+logger.debug(
     f"--- AGENT: Response parts:\n{[part.a2ui_json for part in response_parts]}\n"
 )
-logger.info("-----------------------------")
+logger.debug("-----------------------------")
```
```typescript
// Get surface - this will re-render when version changes
const surface = getSurface(surfaceId);
console.log('A2UIRenderer: surfaceId=', surfaceId, 'surface=', surface, 'version=', version);
```
This console.log statement should be removed before committing to production. It can lead to noisy console output and potentially expose internal state. For debugging purposes, consider using a more controlled logging mechanism or removing it entirely.
Suggested change:

```diff
-console.log('A2UIRenderer: surfaceId=', surfaceId, 'surface=', surface, 'version=', version);
+// console.log('A2UIRenderer: surfaceId=', surfaceId, 'surface=', surface, 'version=', version);
```
```javascript
/**
 * Copyright 2026 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

const fs = require('fs');
const file = 'page.tsx';
let content = fs.readFileSync(file, 'utf8');

const target = `  messages: (scenarios[selectedScenario] as any) || [],
    autoPlay: false,
    baseIntervalMs: 1000
  });`;
```
This script programmatically modifies page.tsx. Relying on such scripts for code changes is brittle and can lead to maintenance issues, merge conflicts, and makes the codebase harder to understand and debug. The logic for handling URL parameters should be integrated directly into page.tsx as part of the standard development workflow.
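One way to fold the patched config into page.tsx itself is to derive it from the URL. This is a hedged sketch: `configFromSearchParams`, the query parameter names, and the fallback rules are assumptions, not code from the PR. In a Next.js page it could be fed from `useSearchParams()` in `next/navigation`.

```typescript
// Derive the player config from URL query parameters instead of
// patching page.tsx with a script. The shape mirrors the patched
// snippet (messages source, autoPlay, baseIntervalMs).

interface PlayerConfig {
  scenario: string;
  autoPlay: boolean;
  baseIntervalMs: number;
}

function configFromSearchParams(
  params: URLSearchParams,
  knownScenarios: string[],
): PlayerConfig {
  const requested = params.get("scenario") ?? "";
  return {
    // Fall back to the first known scenario on an unknown/missing key.
    scenario: knownScenarios.includes(requested) ? requested : knownScenarios[0],
    autoPlay: params.get("autoplay") === "true",
    baseIntervalMs: Number(params.get("interval") ?? 1000),
  };
}
```

Keeping this in the component makes the URL behavior visible in code review instead of hidden in a build-time patch.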
…be component

- New guide: design-system-integration.md — step-by-step for adding A2UI to an existing Material Angular application
- Rewritten guide: custom-components.md — complete walkthrough for YouTube, Maps, and Charts custom components (replaces TODO skeleton)
- New sample component: YouTube embed for rizzcharts catalog
- Updated rizzcharts catalog.ts to include YouTube component
- Friction log documenting 8 friction points (P2/P3) encountered during development, with recommendations
- Added Design System Integration to mkdocs nav
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
- Remove friction log file (content already in issue google#825)
- YouTube component: add video ID regex validation (security)
- custom-components.md: rename to 'Custom Component Catalogs', reorder examples (media first), clarify basic catalog is optional, remove redundant heading, fix Maps input.required consistency, add encodeURIComponent to docs example
- design-system-integration.md: rewrite to focus on wrapping Material components as A2UI components (not using DEFAULT_CATALOG), show custom catalog without basic components, add mixed catalog example
- s/standard/basic/ throughout

…or A2UI Dojo

- Redesigned Top Command Center with glassmorphic header and functional timeline scrubber.
- Replaced native scenario select with Shadcn DropdownMenu.
- Polished Data Stream view with active state highlighting, glow effects, and auto-scrolling.
- Replaced native checkboxes with custom Tailwind styled toggles in Config view.
- Added dynamic grid layout for the Renderers Panel with sophisticated styling per surface (React Web, Discord dark mode replica, Lit Components).
- Applied custom slim scrollbars throughout for a premium feel.
…r Vercel over Cloudflare Pages
… sync sample data
… update transcoder
- Replace invented v0.9 scenarios with real v0.8 samples from samples/agent/adk/
- Add restaurant-booking, restaurant-list, restaurant-grid, restaurant-confirmation
- Add contact-card, contact-list, org-chart, floor-plan scenarios
- Update index.ts to surface real scenarios as defaults
- Default scenario is now restaurant-booking (verified rendering)
- Fix transcoder.ts: pass v0.8 messages through unchanged
- Fix useA2UISurface.ts: only process v0.8 format components (id + component)
- Fix dataModelUpdate: parse ValueMap format correctly
- Restaurant booking now renders: column, text, image, textfield, datetime, button
- Locally verified with headless Chromium: all A2UI CSS classes present
- Build passes (bun run build)

- Switch import to @copilotkit/a2ui-renderer (npm-published)
- Remove file:../../renderers/react dep that breaks Vercel builds
- @copilotkit/a2ui-renderer uses @a2ui/lit under the hood (npm transitive dep)

- Remove Discord mock pane and multi-renderer grid
- Show single A2UI renderer (full width, centered)
- Add human-readable step summaries to JSONL pane (e.g. 'Update 9 components: Column, Text, Image, TextField...')
- Raw JSON collapsed by default behind 'Raw JSON ▸' toggle
- Steps are clickable to seek directly
- Wider left pane (35%) for better readability
- Remove unused renderer toggle state

- Remove northstar-tour, flight-status, weather-widget (v0.9 format)
- Remove kitchen-sink, component-gallery-stream (not real scenarios)
- Keep only verified v0.8 scenarios from samples/agent/adk/
- 12 working scenarios remain in dropdown
The route was incorrectly set to 'force-static', which caused Next.js to fail with a 500 error on every page load, since the CopilotRuntime / InMemoryAgentRunner cannot be statically exported. Change to 'force-dynamic' so the route is properly handled server-side.
- URL state: scenario, step, renderer synced to query params (e.g. /dojo?scenario=contact-card&step=2&renderer=React)
- Config panel: renderer dropdown + scenario dropdown (synced with header)
- Sidebar nav: added 'Dojo' link with Play icon
- Curated to 5 quality scenarios (removed broken/redundant ones)
- Removed contact-lookup (crashes on step 4 — missing component refs)
…ario

Major architecture change:

- NEW useStreamingPlayer hook: explodes messages into individual JSONL lines that stream progressively (line-by-line) instead of whole-message chunks
- Scrubber now has fine-grained control over 60+ stream positions per scenario
- Three left pane tabs:
  (a) Events — lifecycle summaries (surface created, components registered, data bound)
  (b) Data — raw streaming lines appearing chunk by chunk with ↓/↑ badges
  (c) Config — scenario, renderer, transport dropdowns
- Removed scenario dropdown from header (lives only in Config tab)
- Streaming cursor animation shows which message is mid-delivery
- Click any event/line to seek to that position
- Server/client sections grouped with direction badges
- Compact header with streaming status indicator
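The scrubber/seek model this commit describes can be sketched as a pure state machine. `PlayerState`, `advance`, and `seek` are illustrative stand-ins for what `useStreamingPlayer` tracks, not the hook's real implementation:

```typescript
// Minimal playback model: a cursor over N stream positions that can
// step forward one chunk at a time or seek to an arbitrary position.

interface PlayerState {
  step: number;       // index of the last delivered chunk, -1 before playback
  totalSteps: number;
  done: boolean;
}

function advance(state: PlayerState): PlayerState {
  const step = Math.min(state.step + 1, state.totalSteps - 1);
  return { ...state, step, done: step === state.totalSteps - 1 };
}

function seek(state: PlayerState, step: number): PlayerState {
  const clamped = Math.max(-1, Math.min(step, state.totalSteps - 1));
  return { ...state, step: clamped, done: clamped === state.totalSteps - 1 };
}
```

In a React hook, `advance` would run on a timer during playback and `seek` on scrubber or event-row clicks; keeping both pure makes URL-synced state straightforward.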
Each JSONL chunk is one complete JSON object on one line — that's how real SSE/JSONL works, not individual formatted lines within a message.

- Data tab now shows raw wire format (compact JSON, one chunk per card)
- Shows byte size per chunk and total bytes received
- Scrubber steps through 3 chunks for restaurant-booking (not 60+ fake lines)
- Streaming cursor shows 'Waiting for next chunk...' between deliveries
- Header shows total bytes received during playback
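The framing described above (one complete JSON object per line, with per-chunk byte sizes) can be sketched like this; `explodeJsonl`, `JsonlChunk`, and `totalBytes` are hypothetical names, not the Dojo's actual code:

```typescript
// Split a JSONL stream into chunks: one JSON object per non-empty line,
// with the wire size of each chunk measured in UTF-8 bytes.

interface JsonlChunk {
  message: unknown;
  bytes: number; // payload size as it would appear on the wire
}

function explodeJsonl(stream: string): JsonlChunk[] {
  return stream
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => ({
      message: JSON.parse(line),
      bytes: new TextEncoder().encode(line).length,
    }));
}

// Running total, as a "total bytes received" header might display it.
function totalBytes(chunks: JsonlChunk[]): number {
  return chunks.reduce((sum, c) => sum + c.bytes, 0);
}
```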
Adds 3 new chunks to the restaurant-booking scenario:

- Chunk 4 (↑ CLIENT): User submits booking form with party size, date, dietary requirements via clientEvent/userAction
- Chunk 5 (↓ SERVER): Agent responds with confirmation surface update (checkmark, title, details, summary, Done button)
- Chunk 6 (↓ SERVER): Data model update with confirmation details

Full bidirectional flow: server renders form → user fills and submits → server confirms with new surface state. Demonstrates the complete A2UI interaction lifecycle in 6 JSONL chunks.
By default, the Angular client uses the non-streaming API to communicate with the agent. To enable streaming, set the `ENABLE_STREAMING` env var to `true`:

```bash
export ENABLE_STREAMING=true
npm start -- restaurant
```
The agent samples used to yield the accumulated final response after sending all streaming chunks, which duplicated the streamed content. The stream now ends cleanly with `{"is_task_complete": True, "parts": []}`.
This fixes the composer CI build failure: https://github.com/google/A2UI/actions/runs/23568752574/job/68626383193?pr=987

```
/home/runner/setup-pnpm/node_modules/.bin/pnpm store path --silent
/home/runner/setup-pnpm/node_modules/.bin/store/v10
Error: Some specified paths were not resolved, unable to cache dependencies.
```
This PR depends on changes in other PRs, so it is not mergeable yet.

In the live demo, it appears that Markdown rendering is broken in Text components.

I should probably remove the live demo link. The latest change has not been deployed to https://composer-ten.vercel.app/dojo yet, while the recorded demo does have it fixed. @zeroasterisk Do you have instructions on how to update the live demo?
A2UI Dojo
A new /dojo route in the Composer that acts as a VCR for A2UI protocol traces. Load a scenario, press play, and watch JSONL chunks stream in one-by-one while the renderer updates in real-time.

Features
- `?scenario=restaurant-booking&step=4` links directly to any state
- Scenarios are real v0.8 wire format from `samples/agent/adk/`

Files Changed
- src/app/dojo/page.tsx
- src/components/dojo/useStreamingPlayer.ts
- src/components/dojo/useA2UISurface.ts
- src/components/layout/sidebar-nav.tsx
- src/data/dojo/index.ts
- src/data/dojo/restaurant-booking.json

Recording
Stream.viewer.for.the.contact_lookup.sample.webm
Stream.viewer.for.the.restaurant_finder.sample.webm
Live Demo
👉 https://composer-ten.vercel.app/dojo