feat: large recording playback with chunked streaming #70
Merged
Conversation
This design document outlines the architecture for handling gigabyte-scale mission recordings without browser memory issues. Key features:

- Storage engine abstraction (JSON legacy, Protobuf, FlatBuffers)
- Database tracks storage format per recording
- Background conversion queue for JSON → binary format
- Chunked loading with OPFS/IndexedDB caching
- Maximum memory budget of ~22MB regardless of recording size
- New API endpoints for manifest and chunk streaming

Addresses recurring memory limit issues reported by users.

https://claude.ai/code/session_01X5htkP9AzhbWxEbVjtUoZ5
Adds database migration v3 to support multiple storage formats (json, protobuf, flatbuffers) and track conversion progress.
Extends Operation struct and database queries to track which storage format each recording uses and its conversion status.
Defines Engine interface for pluggable storage formats with Manifest, Chunk, and Frame types for chunked playback.
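The commit above introduces the Engine abstraction. As a hedged sketch of what such an interface could look like (the type shapes and method signatures below are inferred from this description, not copied from the repo):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// NOTE: these shapes are illustrative guesses based on the PR text
// (Manifest, Chunk, Frame); the repo's actual definitions may differ.

type Manifest struct {
	Format     string // "json", "protobuf", or "flatbuffers"
	ChunkCount int
}

type Frame struct {
	Time float64
}

type Chunk struct {
	Index  int
	Frames []Frame
}

// Engine is the pluggable storage backend each format implements.
type Engine interface {
	GetManifest(recordingID int64) (*Manifest, error)
	GetChunk(recordingID int64, index int) (*Chunk, error)
	// GetChunkReader streams raw chunk bytes without decoding them.
	GetChunkReader(recordingID int64, index int) (io.ReadCloser, error)
}

// memEngine is a trivial in-memory Engine used to show the contract.
type memEngine struct {
	chunks []Chunk
}

func (m *memEngine) GetManifest(id int64) (*Manifest, error) {
	return &Manifest{Format: "protobuf", ChunkCount: len(m.chunks)}, nil
}

func (m *memEngine) GetChunk(id int64, index int) (*Chunk, error) {
	if index < 0 || index >= len(m.chunks) {
		return nil, fmt.Errorf("chunk %d out of range", index)
	}
	return &m.chunks[index], nil
}

func (m *memEngine) GetChunkReader(id int64, index int) (io.ReadCloser, error) {
	return io.NopCloser(bytes.NewReader([]byte("raw chunk bytes"))), nil
}

func main() {
	var e Engine = &memEngine{chunks: []Chunk{{Index: 0}, {Index: 1}}}
	man, _ := e.GetManifest(1)
	fmt.Println(man.ChunkCount) // 2
}
```

An interface like this is what lets the handler stay format-agnostic: the JSON, Protobuf, and FlatBuffers engines in the later commits each satisfy the same contract.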
Provides backward-compatible reading of existing gzipped JSON recordings. Does not support chunked loading or conversion.
Defines Manifest, Chunk, Frame, EntityState, Event, and Marker messages for efficient binary serialization.
Reads manifest and chunks from protobuf files. Supports streaming via GetChunkReader for efficient chunk delivery.
Converts legacy JSON recordings to chunked protobuf format. Parses entities, events, markers, and times into manifest, then writes frame data into separate chunk files.
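The conversion described above writes frame data into separate chunk files. A minimal sketch of the chunking step, assuming a fixed frames-per-chunk grouping (the converter's actual policy may differ):

```go
package main

import "fmt"

// Frame is a single tick of recorded state (illustrative shape).
type Frame struct {
	Time float64
}

// Chunk groups consecutive frames so the player can fetch them on demand.
type Chunk struct {
	Index  int
	Frames []Frame
}

// splitIntoChunks groups frames into fixed-size chunks. The real
// converter writes each chunk to its own file; here we just return them.
func splitIntoChunks(frames []Frame, framesPerChunk int) []Chunk {
	var chunks []Chunk
	for start := 0; start < len(frames); start += framesPerChunk {
		end := start + framesPerChunk
		if end > len(frames) {
			end = len(frames)
		}
		chunks = append(chunks, Chunk{
			Index:  len(chunks),
			Frames: frames[start:end],
		})
	}
	return chunks
}

func main() {
	frames := make([]Frame, 25)
	chunks := splitIntoChunks(frames, 10)
	fmt.Println(len(chunks)) // 3 chunks: 10 + 10 + 5 frames
}
```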
- Add FlatBuffers schema and generated Go code for zero-copy reads
- Implement FlatBuffersEngine with Convert, GetManifest, GetChunk methods
- Add --format flag to CLI convert command (protobuf/flatbuffers)
- Update conversion worker to support configurable storage format
- Add StorageFormat field to worker Config
- Implement protobuf engine Convert method (was stub)
- Fix operation Select query parameter order and date filtering
- Add frontend streaming playback support:
  - ProtobufDecoder for parsing binary chunks
  - StorageManager for OPFS/IndexedDB caching
  - ChunkManager for LRU chunk management
  - loadOperation() helper to auto-detect format
- Update index.html to load streaming scripts
- Add loading indicators and Safari ITP warning
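The ChunkManager mentioned above keeps the memory budget bounded by holding only a fixed number of chunks and evicting the least recently used one. The repo's implementation is frontend JavaScript; this Go sketch just illustrates the LRU idea:

```go
package main

import (
	"container/list"
	"fmt"
)

// lruChunkCache keeps at most capacity chunks in memory, evicting the
// least recently used one. Illustrative sketch of the ChunkManager idea;
// the actual frontend code may differ.
type lruChunkCache struct {
	capacity int
	order    *list.List            // front = most recently used
	items    map[int]*list.Element // chunk index -> list element
	data     map[int][]byte        // chunk index -> bytes
}

func newLRUChunkCache(capacity int) *lruChunkCache {
	return &lruChunkCache{
		capacity: capacity,
		order:    list.New(),
		items:    make(map[int]*list.Element),
		data:     make(map[int][]byte),
	}
}

func (c *lruChunkCache) Get(index int) ([]byte, bool) {
	el, ok := c.items[index]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el)
	return c.data[index], true
}

func (c *lruChunkCache) Put(index int, chunk []byte) {
	if el, ok := c.items[index]; ok {
		c.order.MoveToFront(el)
		c.data[index] = chunk
		return
	}
	if c.order.Len() >= c.capacity {
		// Evict the least recently used chunk.
		oldest := c.order.Back()
		old := oldest.Value.(int)
		c.order.Remove(oldest)
		delete(c.items, old)
		delete(c.data, old)
	}
	c.items[index] = c.order.PushFront(index)
	c.data[index] = chunk
}

func main() {
	cache := newLRUChunkCache(2)
	cache.Put(0, []byte("a"))
	cache.Put(1, []byte("b"))
	cache.Get(0)              // touch 0 so 1 becomes LRU
	cache.Put(2, []byte("c")) // evicts chunk 1
	_, ok := cache.Get(1)
	fmt.Println(ok) // false
}
```

With a fixed chunk size, capacity × chunk size gives the cache's memory ceiling, which is how a hard budget like the ~22MB figure can be enforced regardless of total recording size.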
FlatBuffers:
- Add comprehensive tests for FlatBuffers engine
- Create JavaScript FlatBuffers decoder for frontend
- Fix handler to return correct content type (x-flatbuffers)
- Update ChunkManager to support format parameter
- Update processOpStreaming to pass format to decoder

Entity state updates (streaming mode):
- Add Unit.updateFromState() for isInVehicle, isPlayer, name
- Add Vehicle.updateFromState() for crewIds
- Fix chunk loading race condition - don't remove markers during load

The FlatBuffers implementation is now complete, with end-to-end support for both protobuf and flatbuffers storage formats.
Move scattered schema files into a clean structure.

Before (messy):
- flatbuffers/ocap.fbs
- ocap/fb/*.go
- proto/ocap.proto
- proto/ocap.pb.go
- static/scripts/proto/ocap.js
- static/scripts/flatbuffers/decoder.js

After (clean):
- schemas/protobuf/ocap.proto + ocap.pb.go
- schemas/flatbuffers/ocap.fbs + generated/*.go
- static/scripts/decoders/protobuf.js + flatbuffers.js

Update all Go import paths accordingly.
Server-side:
- Add GetManifestReader() to Engine interface for raw binary streaming
- Implement GetManifestReader in JSON, Protobuf, and FlatBuffers engines
- Update handler to stream raw manifest for binary formats
- Add --set-format CLI command for testing format switching

Client-side:
- Fix FlatBuffers decoder field order (times=8, events=9 per schema)
- Add markers array to manifest initialization
- Make StorageManager cache format-aware (prevents cross-format cache hits)
- Update ChunkManager to pass format when accessing storage
- Add overview of chunked streaming feature
- Document configuration settings for conversion
- Add storage formats comparison table
- Include ASCII workflow diagram
- Add Mermaid flowcharts for playback and conversion flows
- Document CLI convert commands
- Update Docker environment variables
- Fix build commands (./src/web → ./cmd)
- Create docs/streaming-architecture.md with Mermaid flowcharts
- Add browser caching explanation
- Simplify README with link to detailed docs
- Add event-driven conversion: trigger conversion immediately after upload instead of waiting for the background worker interval
- Use structured JSON logging (slog) for consistent log format
- Make browser caching opt-in via ?cache=1 URL parameter to avoid stale cache issues during development
- Fix marker position array order in streaming mode
- Calculate and store mission duration from recording metadata
- Return auto-generated ID from Store() for immediate use
The --all flag now converts all recordings in the database, including re-converting already converted ones. This is useful for:

- Converting existing JSON recordings after upgrade
- Re-converting when changing formats (protobuf → flatbuffers)

The background worker still uses SelectPending for incremental processing.
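The selection difference between --all and the background worker's SelectPending can be sketched as a single filter (struct and function names here are illustrative, not the repo's schema):

```go
package main

import "fmt"

// Recording mirrors the conversion-status tracking added by the
// migration; the field names are an assumption for illustration.
type Recording struct {
	ID        int64
	Converted bool
}

// selectForConversion models the behavior described above: with
// all=true every recording is (re)converted; otherwise only the
// not-yet-converted ones are picked up, as SelectPending does.
func selectForConversion(recs []Recording, all bool) []Recording {
	if all {
		return recs
	}
	var pending []Recording
	for _, r := range recs {
		if !r.Converted {
			pending = append(pending, r)
		}
	}
	return pending
}

func main() {
	recs := []Recording{{1, true}, {2, false}, {3, false}}
	fmt.Println(len(selectForConversion(recs, false))) // 2 pending
	fmt.Println(len(selectForConversion(recs, true)))  // 3, including re-conversion
}
```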
Summary
This PR implements chunked streaming playback for large recordings that would otherwise crash browsers due to memory constraints. Recordings are converted from JSON to binary formats (Protobuf/FlatBuffers) and split into chunks that are loaded on-demand during playback.
Key Features
?cache=1 URL parameter

Architecture
New Endpoints
GET /api/v1/operations/:id/format
GET /api/v1/operations/:id/manifest
GET /api/v1/operations/:id/chunk/:index

Configuration
{
  "conversion": {
    "enabled": true,
    "interval": "5m",
    "storageEngine": "protobuf"
  }
}

Files Changed
Test plan
?cache=1 to enable browser caching