Conversation

@joelreymont
Contributor

Improves #7115.

I'm not sure if I'm looking at the right metric, but I decided to focus on the VisualizerSystem::execute scope. This is the baseline on main when running the RRD generated by tests/python/many_entity_transforms.

[screenshot: 7715-many-transform-base]

There's a fair bit of improvement after this PR.

[screenshot: 7715-many-transform-new]

@joelreymont
Contributor Author

What UI metrics should I pay attention to if I combine this with PRs #11720 and #11741?

@Wumpf (Member) left a comment

Partial review. I think this could be a promising direction, but it needs to be designed a bit better.

Something I'd like to see more of in PR descriptions:

  • what exactly has changed
  • why it has changed
  • what the impacts are, and under what circumstances

It's nice that you provided a before/after screenshot of the method list in puffin for the many-entity test script, but a bit more context is needed to evaluate this:

  • how exactly this was measured
  • what overall impact it has on frame time (a before/after flamegraph can be useful)
  • what exactly got faster
  • tradeoffs

```rust
let drag_and_drop_manager =
    re_viewer_context::DragAndDropManager::new(ItemCollection::default());

// Tests don't compute visualizable entities per view, so we provide an empty map
```

Member

Why not? Future tests will likely trip over that.

```rust
view_query_cache: HashMap<ViewId, ViewQueryCacheEntry>,

#[serde(skip)]
visualizable_entities_cache: HashMap<ViewId, VisualizableEntitiesCacheEntry>,
```

Member

We need to find a better place for this. It's probably best to refactor it together with ViewStates, but that exceeds the scope of this PR a bit.

So for starters I think these could go into a shared data structure that lives on a cache struct that is impl Cache (and is lazily created like the other caches), so we can also easily oversee its memory usage. The viewer's cache trait isn't a perfect fit for this, since it's more geared towards changes in the store than in the blueprint, but it's as good a place as any.

Either way, I'd like to avoid making the situation on AppState any worse.
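
For illustration, a minimal sketch of that direction, assuming the entry/key types introduced in this PR; `ViewQueryCaches` and its methods are placeholder names, and the real `Cache` trait in `re_viewer_context` may require a different shape:

```rust
use std::collections::HashMap;

// Placeholder struct (not existing code): one shared home for both per-view
// caches, so memory usage can be reported and purged in one place, e.g. from
// an `impl Cache` wrapper that is lazily created alongside the other caches.
#[derive(Default)]
struct ViewQueryCaches {
    view_query: HashMap<ViewId, ViewQueryCacheEntry>,
    visualizable_entities: HashMap<ViewId, VisualizableEntitiesCacheEntry>,
}

impl ViewQueryCaches {
    /// Drop everything, e.g. on memory pressure or when the recording changes.
    fn clear(&mut self) {
        self.view_query.clear();
        self.visualizable_entities.clear();
    }
}
```

This would keep both maps off AppState entirely and give a single place to hook up memory statistics.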

Comment on lines 153 to 203

```rust
struct ViewQueryCacheEntry {
    key: ViewQueryCacheKey,
    result: DataQueryResult,
}

#[derive(Clone, PartialEq, Eq)]
struct ViewQueryCacheKey {
    recording_generation: re_chunk_store::ChunkStoreGeneration,
    blueprint_generation: re_chunk_store::ChunkStoreGeneration,
    blueprint_query: LatestAtQuery,
}

impl ViewQueryCacheKey {
    fn new(
        recording_generation: re_chunk_store::ChunkStoreGeneration,
        blueprint_generation: re_chunk_store::ChunkStoreGeneration,
        blueprint_query: &LatestAtQuery,
    ) -> Self {
        Self {
            recording_generation,
            blueprint_generation,
            blueprint_query: blueprint_query.clone(),
        }
    }
}

struct VisualizableEntitiesCacheEntry {
    key: VisualizableEntitiesCacheKey,
    result: PerVisualizer<VisualizableEntities>,
}

#[derive(Clone, PartialEq, Eq, Hash)]
struct VisualizableEntitiesCacheKey {
    recording_generation: re_chunk_store::ChunkStoreGeneration,
    blueprint_generation: re_chunk_store::ChunkStoreGeneration,
    space_origin: EntityPath,
}

impl VisualizableEntitiesCacheKey {
    fn new(
        recording_generation: re_chunk_store::ChunkStoreGeneration,
        blueprint_generation: re_chunk_store::ChunkStoreGeneration,
        space_origin: &EntityPath,
    ) -> Self {
        Self {
            recording_generation,
            blueprint_generation,
            space_origin: space_origin.clone(),
        }
    }
}
```

Member

These should all go into a separate module.

Comment on lines 363 to 402

```rust
let visualizable_entities_per_view: HashMap<ViewId, PerVisualizer<VisualizableEntities>> = {
    re_tracing::profile_scope!("visualizable_entities_per_view");
    viewport_ui
        .blueprint
        .views
        .values()
        .map(|view| {
            let cache_key = VisualizableEntitiesCacheKey::new(
                recording_generation.clone(),
                blueprint_generation.clone(),
                &view.space_origin,
            );

            let visualizable_entities = if let Some(entry) =
                self.visualizable_entities_cache.get(&view.id)
                && entry.key == cache_key
            {
                PerVisualizer(entry.result.0.clone())
            } else {
                // This is the expensive operation we're caching
                view.class(view_class_registry)
                    .determine_visualizable_entities(
                        &maybe_visualizable_entities_per_visualizer,
                        recording,
                        &view_class_registry
                            .new_visualizer_collection(view.class_identifier()),
                        &view.space_origin,
                    )
            };

            self.visualizable_entities_cache.insert(
                view.id,
                VisualizableEntitiesCacheEntry {
                    key: cache_key,
                    result: PerVisualizer(visualizable_entities.0.clone()),
                },
            );

            (view.id, visualizable_entities)
        })
        .collect()
};
```

Member

All of this should be encapsulated by the cache itself.
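
For example, a rough sketch of that encapsulation; `VisualizableEntitiesCache`, `per_view`, and `get_or_compute` are hypothetical names, not existing API:

```rust
use std::collections::HashMap;

// Hypothetical wrapper: the call site in app_state.rs would only build the key
// and pass a closure for the expensive work; hit/miss handling lives in here.
struct VisualizableEntitiesCache {
    per_view: HashMap<ViewId, VisualizableEntitiesCacheEntry>,
}

impl VisualizableEntitiesCache {
    fn get_or_compute(
        &mut self,
        view_id: ViewId,
        key: VisualizableEntitiesCacheKey,
        compute: impl FnOnce() -> PerVisualizer<VisualizableEntities>,
    ) -> PerVisualizer<VisualizableEntities> {
        // Reuse the cached result only if the key still matches.
        if let Some(entry) = self.per_view.get(&view_id) {
            if entry.key == key {
                return PerVisualizer(entry.result.0.clone());
            }
        }

        // Miss (or stale key): recompute, store a copy, and hand back the result.
        let result = compute();
        self.per_view.insert(
            view_id,
            VisualizableEntitiesCacheEntry {
                key,
                result: PerVisualizer(result.0.clone()),
            },
        );
        result
    }
}
```

The per-view loop in app_state.rs then reduces to building the key and calling get_or_compute once per view.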

```rust
#[derive(Clone, PartialEq, Eq, Hash)]
struct VisualizableEntitiesCacheKey {
    recording_generation: re_chunk_store::ChunkStoreGeneration,
    blueprint_generation: re_chunk_store::ChunkStoreGeneration,
```

Member

AFAIK the blueprint generation now changes continuously during playback, since we store the time cursor position as a static component in it. That makes this cache a lot less effective and really only useful for a standstill viewer. We need to come up with a better cache key.

Contributor Author

What about removing blueprint_generation from VisualizableEntitiesCacheKey?

Visualizable entities only depend on recording data (which entities exist and what components they have) and view configuration (space_origin, view class). They do not depend on the blueprint time cursor.

It seems we can safely remove blueprint_generation from VisualizableEntitiesCacheKey (app_state.rs:184-189) and keep recording_generation, since newly added entities should still invalidate the cache. This should make the visualizable entities cache effective during playback.

What am I missing?
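
Concretely, the proposal amounts to something like this (a sketch of the reduced key only, not final code):

```rust
// Sketch: without blueprint_generation, continuous blueprint writes during
// playback (e.g. the time cursor) no longer invalidate this cache; only a new
// recording generation or a different space origin does.
#[derive(Clone, PartialEq, Eq, Hash)]
struct VisualizableEntitiesCacheKey {
    recording_generation: re_chunk_store::ChunkStoreGeneration,
    space_origin: EntityPath,
}
```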

@joelreymont
Contributor Author

It will be done! 🫡

@emilk marked this pull request as draft on November 5, 2025, 12:53
@joelreymont force-pushed the 7715-rerun-viewer-slow branch from 1291add to 027a0b7 on November 7, 2025, 18:23

This commit addresses the performance bottlenecks described in issue rerun-io#7115
where the Rerun viewer becomes slow with several hundred entities.

Key improvements:

1. Share visualizable_entities_per_view across the frame
   - Added visualizable_entities_per_view to ViewerContext to avoid
     redundant computation
   - Previously, determine_visualizable_entities() was called multiple
     times per frame for each view:
       * Once in app_state.rs
       * Once in update_overrides()
       * Once or more in create_entity_add_info() for UI updates
   - Now computed once per frame and shared via ViewerContext

2. Update create_entity_add_info to use cached results
   - Modified entity_add_info.rs to use pre-computed visualizable
     entities from ViewerContext instead of recomputing

3. Improve cache invalidation strategy
   - Changed cache keys from latest_row_id to ChunkStoreGeneration
   - latest_row_id changes on every data insertion, invalidating the cache
     every frame for streaming data

Remove blueprint_generation from VisualizableEntitiesCacheKey to prevent
cache invalidation on every frame during playback. Visualizable entities
only depend on recording data and view configuration, not the blueprint
time cursor which updates continuously during playback.
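
For context, point 1 of the first commit message above amounts to roughly the following; the exact field type and lifetime on ViewerContext are assumptions here:

```rust
// Sketch: compute the map once per frame in app_state.rs and share it through
// the frame-wide ViewerContext, instead of each call site (update_overrides,
// create_entity_add_info, ...) re-running determine_visualizable_entities().
pub struct ViewerContext<'a> {
    // ...existing fields...
    pub visualizable_entities_per_view: &'a HashMap<ViewId, PerVisualizer<VisualizableEntities>>,
}
```
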
@joelreymont force-pushed the 7715-rerun-viewer-slow branch from 027a0b7 to 01127bc on November 7, 2025, 18:23
