Optimization of Triangulation_data_structure operations #9188
Conversation
mglisse left a comment
(I haven't read the full diff, just some quick comments)
For something motivated by performance, I would like
- benchmark results before/after
- separate patches for separate changes (all the queue->vector replacements can be grouped, but the mirror handling and the is_infinite caching should not be mixed in with those unless there is a good reason they have to go together), to make it easier to understand the impact of each change
Without benchmarks, it is hard to guess whether all the update_full_cell_infinite_flag calls are cheaper or more expensive than the is_infinite calls they avoid.
// Update the mirror indices in the neighboring full cells so that the
// bidirectional neighbor relationship remains consistent after the swap.
The comment doesn't seem very useful (it is easy to guess). I would be more interested in a comment explaining what situation requires us to check != Full_cell_handle(), since this test is new.
void mark_visited() { bits_ |= VISITED_BIT; }
void clear_visited() { bits_ &= ~VISITED_BIT; }
bool is_visited() const { return (bits_ & VISITED_BIT) != 0; }
// WARNING: if we use more bits and several bits can be set at once,
The comment looks outdated after this patch.
#ifdef CGAL_DEBUG
if constexpr (!has_mirror_impl<Combinatorics>::value)
{
  CGAL_assertion_msg(false,
Would a plain static_assert work?
If it doesn't, this still looks verbose compared to CGAL_assertion_msg(has_mirror_impl<Combinatorics>::value,"..."), without ifdef or if constexpr.
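The static_assert suggestion can be illustrated outside CGAL; `has_mirror_impl` and the policy type below are simplified stand-ins for this sketch, not CGAL's actual definitions:

```cpp
#include <type_traits>

// Illustrative stand-in for a storage policy that provides mirror indices.
struct With_mirror { static constexpr bool value = true; };

// Simplified stand-in for the trait used in the diff.
template <typename Combinatorics>
struct has_mirror_impl : std::bool_constant<Combinatorics::value> {};

template <typename Combinatorics>
int mirror_index() {
    // A plain static_assert fires at instantiation time for any policy
    // without mirror storage -- no #ifdef CGAL_DEBUG, no if constexpr
    // branch, and no unreachable release-mode fallback needed.
    static_assert(has_mirror_impl<Combinatorics>::value,
                  "mirror_index() requires the mirror storage policy");
    return 0; // stand-in for the real lookup
}
```

This moves the check to compile time, so an invalid instantiation is rejected even in release builds.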
}
else
{
  // unreachable in debug, safe fallback in release
In what way is -1 safe?
{
  CGAL_precondition(visited.empty());
  std::queue<Full_cell_handle> queue;
  std::vector<Full_cell_handle> st;
Is there a particular reason to use vector directly rather than queue<X,vector<XXX>> (or queue<X,boost::container::small_vector<X,N>>), which would have avoided changing push to push_back, etc?
The vector is used intentionally because traversal order does not matter here, and this avoids queue/stack overhead while allowing reserve() for better locality.
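A minimal, self-contained sketch of that pattern on a plain adjacency-list graph (illustrative names, not the TDS code itself):

```cpp
#include <vector>

// Order-insensitive traversal using std::vector as a stack:
// push_back/pop_back have no adapter overhead, and reserve()
// pre-sizes the buffer for better locality.
std::vector<int> collect_reachable(const std::vector<std::vector<int>>& adj,
                                   int start) {
    std::vector<char> visited(adj.size(), 0);
    std::vector<int> st;        // used as a stack
    st.reserve(adj.size());     // one allocation up front
    std::vector<int> out;
    st.push_back(start);
    visited[start] = 1;
    while (!st.empty()) {
        int v = st.back();
        st.pop_back();
        out.push_back(v);
        for (int w : adj[v])
            if (!visited[w]) { visited[w] = 1; st.push_back(w); }
    }
    return out;
}
```

Because visitation order is irrelevant, the DFS-like order produced here is as valid as the BFS order the old std::queue gave.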
Full_cell_handle inf_fc = infinity_->full_cell();
// This initializes the cached "infinite" flag so that
// is_infinite(inf_fc) returns the correct result
inf_fc->tds_data().set_infinite(true);
We cannot call set_infinite if it is not part of the concept.
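One way to honor that constraint would be to guard the call behind a detection trait, so the code compiles whether or not the cell-data type provides set_infinite(). This is a sketch with illustrative names, not a statement about what the CGAL concept actually requires:

```cpp
#include <type_traits>
#include <utility>

// Detection trait: true iff T has a member set_infinite(bool).
template <typename T, typename = void>
struct has_set_infinite : std::false_type {};

template <typename T>
struct has_set_infinite<T,
    std::void_t<decltype(std::declval<T&>().set_infinite(true))>>
  : std::true_type {};

// Only touch the cache when the data type actually provides it;
// otherwise the infinite status must be recomputed on demand.
template <typename CellData>
void maybe_set_infinite(CellData& data, bool b) {
    if constexpr (has_set_infinite<CellData>::value)
        data.set_infinite(b);
}

// Illustrative data types for the two cases.
struct Caching_data { bool inf = false; void set_infinite(bool b) { inf = b; } };
struct Plain_data {};
```

With a guard like this, models of the concept that lack the extension still compile, at the cost of losing the caching optimization for them.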
incident_full_cells(v, std::back_inserter(simps));
for (auto &fc : simps){
  update_full_cell_infinite_flag(fc);
Supposedly you could wrap update_full_cell_infinite_flag in an output iterator and pass it to incident_full_cells instead of storing them.
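That output-iterator approach can be sketched generically. boost::function_output_iterator offers it ready-made; the version below is a minimal hand-rolled equivalent with illustrative names:

```cpp
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <vector>

// Minimal function output iterator: every "write" through it invokes a
// callback instead of storing the value anywhere.
template <typename F>
class function_output_iterator {
    F f_;
public:
    using iterator_category = std::output_iterator_tag;
    using value_type = void;
    using difference_type = std::ptrdiff_t;
    using pointer = void;
    using reference = void;

    explicit function_output_iterator(F f) : f_(f) {}

    template <typename T>
    void operator=(const T& x) { f_(x); }  // assignment triggers the callback

    function_output_iterator& operator*()     { return *this; }
    function_output_iterator& operator++()    { return *this; }
    function_output_iterator& operator++(int) { return *this; }
};

// Usage sketch with the CGAL names from the diff (hypothetical wiring):
//   incident_full_cells(v, function_output_iterator(
//       [&](Full_cell_handle fc) { update_full_cell_infinite_flag(fc); }));
```

This processes each cell as it is produced, avoiding the temporary vector entirely.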
Summary of Changes
This change improves the performance of Triangulation_data_structure in fixed dimensions by reducing overhead in frequently executed TDS operations. It caches the “infinite” status of full cells to avoid repeated linear scans over vertices, replaces queue-based traversals with stack-based ones, and tracks visited full cells to allow fast cleanup without re-traversing the graph. It also introduces safer and more efficient handling of mirror indices via the mirror storage policy, reducing the cost of neighbor lookups during cell rotations.
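The cached-flag mechanism summarized above follows the usual packed-bit pattern. The sketch below is illustrative only (the names echo the diff, but this is not CGAL's actual tds_data implementation):

```cpp
#include <cstdint>

// One byte caches both the traversal mark and the "infinite" status,
// so is_infinite() becomes a bit test instead of a scan over vertices.
struct Cell_data {
    static constexpr std::uint8_t VISITED_BIT  = std::uint8_t(1) << 0;
    static constexpr std::uint8_t INFINITE_BIT = std::uint8_t(1) << 1;
    std::uint8_t bits_ = 0;

    void mark_visited()     { bits_ |= VISITED_BIT; }
    void clear_visited()    { bits_ &= std::uint8_t(~VISITED_BIT); }
    bool is_visited() const { return (bits_ & VISITED_BIT) != 0; }

    void set_infinite(bool b) {
        if (b) bits_ |= INFINITE_BIT;
        else   bits_ &= std::uint8_t(~INFINITE_BIT);
    }
    bool is_infinite() const { return (bits_ & INFINITE_BIT) != 0; }
};
```

Since each flag occupies its own bit, setting or clearing one never disturbs the other, which is exactly the property the earlier review comment about multiple bits asks the code comments to keep accurate.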