
feat: query multiple beacon nodes in parallel #751

Open
petarjuki7 wants to merge 14 commits into sigp:unstable from petarjuki7:weigted_attestation_data

Conversation

@petarjuki7
Member

@petarjuki7 petarjuki7 commented Dec 5, 2025

Issue Addressed

WAD - Weighted Attestation Data

Proposed Changes

  • Added parallel fetching from all configured beacon nodes instead of sequential fallback
  • Implemented scoring algorithm based on checkpoint epochs + head slot proximity
  • Added dual timeout strategy: soft timeout at 500ms (return if we have responses), hard timeout at 1s (return unconditionally)

The implementation uses FuturesUnordered to query all beacon nodes concurrently and selects the highest-scoring attestation data. Scoring works by summing source and target epochs (base score) plus a proximity bonus based on how close the head slot is to the attestation slot.
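As a rough sketch of that flow (illustrative only, not the PR's actual code; each per-node future is assumed to do the fetch and return a score alongside the data):

use futures::stream::{FuturesUnordered, StreamExt};
use std::future::Future;
use std::time::Duration;
use tokio::time::{sleep_until, Instant};

const SOFT_TIMEOUT: Duration = Duration::from_millis(500);
const HARD_TIMEOUT: Duration = Duration::from_secs(1);

/// Drive all per-node queries concurrently and keep the highest-scoring result.
/// Each future is expected to return `Some((data, score))` on success, `None` on failure.
async fn best_response<T, F>(queries: Vec<F>) -> Option<T>
where
    F: Future<Output = Option<(T, f64)>>,
{
    let start = Instant::now();
    let soft_deadline = start + SOFT_TIMEOUT;
    let hard_deadline = start + HARD_TIMEOUT;

    let mut pending: FuturesUnordered<F> = queries.into_iter().collect();
    let mut best: Option<(T, f64)> = None;

    loop {
        tokio::select! {
            next = pending.next() => match next {
                // A node answered: keep it if it scores higher than the current best.
                Some(Some((data, score))) => {
                    if best.as_ref().map_or(true, |(_, s)| score > *s) {
                        best = Some((data, score));
                    }
                }
                // This node failed: keep waiting for the others.
                Some(None) => {}
                // Every node has responded.
                None => break,
            },
            // Soft timeout: return early once we have at least one usable response.
            _ = sleep_until(soft_deadline), if best.is_some() => break,
            // Hard timeout: return whatever we have, even if that is nothing.
            _ = sleep_until(hard_deadline) => break,
        }
    }

    best.map(|(data, _)| data)
}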

Additional Info

@petarjuki7 petarjuki7 self-assigned this Dec 5, 2025
@claude-code-actions-sigp

claude-code-actions-sigp bot commented Dec 5, 2025

Claude finished @petarjuki7's task.


Code Review Complete

I've completed a comprehensive review of the parallel beacon node querying implementation. While the approach is sound, there are several critical issues that need to be addressed before this can be merged.

Critical Issues

  1. Compilation Error (Line 193): The futures mapping captures self incorrectly, which will cause compilation issues. The async blocks need proper handling of the self reference.

  2. Unused Return Value (Line 177): The weighted_calculation function returns attestation data, but the caller discards it and uses the fallback path instead. This makes the entire weighted calculation pointless in its current form.

  3. Feature Disabled (Line 91): The feature is hardcoded to false, making all this code unreachable. This should be a configuration parameter.

Design Concerns

  1. Scoring Algorithm Issues (Line 376):

    • Base scores (sum of epochs ~400k) completely dwarf the proximity bonus (~1.0)
    • Multiple attestations with same epochs but different roots get identical scores
    • No validation of checkpoint consistency or majority agreement
  2. Error Handling (Line 93): Weighted calculation errors are ignored and execution falls through to the old path, masking potential issues.

  3. Performance: The 250ms block lookup timeout per client (line 426) can accumulate to significant delays when querying many nodes.

Minor Issues

  1. Magic Numbers (Line 20): Timeout constants lack documentation explaining the rationale
  2. Race Condition (Line 259): Minor inefficiency in response counting logic

Testing Gaps

  • No tests for the new functionality
  • No tests for timeout behavior (soft vs hard)
  • No tests for scoring algorithm edge cases
  • No tests for error conditions (all nodes failing, some nodes timing out, etc.)

Recommendations

Before merging:

  1. Fix the compilation error - restructure the futures to properly handle self
  2. Integrate weighted_calculation properly - either use its return value or change the flow
  3. Make feature configurable - add proper configuration management
  4. Revise scoring algorithm - normalize scores or weight the bonus more significantly
  5. Add comprehensive tests - especially for the scoring logic and timeout behavior
  6. Implement safety checks - add the majority agreement validation mentioned in your TODO
  7. Add metrics - track which beacon node was selected (as mentioned in your TODO)

The parallel querying approach is a solid improvement over sequential fallback, but the implementation needs refinement before it's production-ready.

@petarjuki7 petarjuki7 changed the base branch from stable to unstable December 5, 2025 11:09

use crate::{AnchorValidatorStore, ContributionWaiter, SlotMetadata};

const SOFT_TIMEOUT: Duration = Duration::from_millis(500);


Magic Numbers: These timeout values should be documented or made configurable. Consider:

  1. Why is soft timeout exactly 500ms?
  2. How do these values relate to slot timing (typically 12s)?
  3. What happens if beacon nodes are consistently slower than 500ms but faster than 1s?

Add documentation explaining the rationale, or make these configurable based on network conditions.

.data;

// Calculate base score from checkpoint epochs (higher epochs = more recent)
let base_score = (attestation_data.source.epoch.as_u64()


Scoring Algorithm Concerns:

  1. Epoch Collision: Multiple attestation data with different roots but same source/target epochs get identical base scores. This could select suboptimal data.

  2. Bonus Weighting: The proximity bonus (1.0 / (1 + distance)) is tiny compared to base score. For example:

    • Base score: ~400,000 (epoch 200,000)
    • Max bonus: 1.0

    The bonus is essentially negligible and won't affect selection.

  3. Missing Safety Checks: No validation that source < target, or that checkpoints are reasonable.

Consider:

  • Normalizing scores or making bonus more significant
  • Adding validation of attestation data consistency
  • Implementing majority agreement checking (as mentioned in TODO)

client: &BeaconNodeHttpClient,
block_root: Hash256,
) -> Option<Slot> {
tokio::time::timeout(BLOCK_SLOT_LOOKUP_TIMEOUT, async {


Unbounded Timeout: If the block lookup hangs or is very slow, this adds 250ms to every attestation fetch. Consider:

  1. Using a shared timeout for all block lookups
  2. Caching block slot lookups
  3. Making block lookup optional with a fast-fail path

The 250ms timeout per client could accumulate to significant delays when querying many nodes.

@petarjuki7 petarjuki7 force-pushed the weigted_attestation_data branch from 3b8ef15 to de9082f on December 9, 2025 23:52
@petarjuki7 petarjuki7 marked this pull request as ready for review December 9, 2025 23:53
@claude-code-actions-sigp

claude-code-actions-sigp bot commented Dec 9, 2025

Claude finished @petarjuki7's task in 4m 21s.


Code Review Complete

I've completed a comprehensive review of the parallel beacon node querying implementation. The PR adds good functionality, but there are several critical issues that must be addressed before merging.


Critical Issues

1. Compilation Error - Async Closure Captures self Incorrectly (Line 193-199)

The futures mapping captures self incorrectly, which will cause compilation failures. The async blocks each try to capture &self, but they're moved into FuturesUnordered.

Problem:

let mut futures: FuturesUnordered<_> = clients
    .into_iter()
    .map(|(addr, client)| async move {
        let result = self.fetch_and_score(&client, slot).await;
        (addr, result)
    })
    .collect();

Why this fails: Each async block needs to capture self to call self.fetch_and_score(), but multiple async blocks can't all capture the same &self when moved into the collection.

Solution: Restructure to properly handle the self reference. Consider:

  • Making the async blocks clone what they need from self
  • Restructuring to avoid capturing self in the closures
  • Using Arc or similar patterns if MetadataService can be cloned

2. Scoring Algorithm Ineffective (Line 376-406)

The scoring algorithm has a fundamental flaw that makes the proximity bonus meaningless:

  • Base score: Sum of source + target epochs (~400,000 for epoch 200,000)
  • Proximity bonus: 1.0 / (1 + distance) (maximum value: 1.0)

The bonus is 0.00025% of the base score, making it completely negligible. Multiple attestations with identical epochs but different roots will get identical scores, potentially selecting suboptimal data.

Recommendations:

  • Normalize scores to make the bonus meaningful
  • Weight the proximity bonus more significantly (e.g., multiply by 10,000)
  • Add validation that attestations have consistent checkpoints
  • Implement the majority agreement validation mentioned in the TODO
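One hypothetical way to act on the normalization point in the recommendations above (illustrative only, not the PR's code): compare candidates by checkpoint epochs first and use head-slot proximity purely as a tie-breaker, which sidesteps the magnitude mismatch entirely.

use std::cmp::Reverse;

/// Illustrative comparison key: checkpoint epochs dominate, head-slot
/// proximity only ever breaks ties between otherwise equal candidates.
struct Candidate {
    source_epoch: u64,
    target_epoch: u64,
    /// Distance from the attestation slot to the head block's slot, if known.
    head_distance: Option<u64>,
}

impl Candidate {
    /// Larger key wins; unknown head slots rank below any known distance.
    fn key(&self) -> (u64, u64, Reverse<u64>) {
        (
            self.target_epoch,
            self.source_epoch,
            Reverse(self.head_distance.unwrap_or(u64::MAX)),
        )
    }
}

/// Pick the best candidate by the lexicographic key above.
fn select_best(candidates: Vec<Candidate>) -> Option<Candidate> {
    candidates.into_iter().max_by_key(Candidate::key)
}

With equal checkpoints, two responses are then separated by proximity instead of collapsing into identical floating-point scores, and a fresher target checkpoint can never be outweighed by a head-slot bonus.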

3. Missing Test Coverage (No tests found)

There are no tests for this critical new functionality. This is a serious gap for production code.

Required tests:

  • Parallel query success/failure scenarios
  • Scoring algorithm correctness and edge cases
  • Timeout behavior (soft vs hard)
  • Error handling (all nodes fail, partial failures)
  • Score comparison logic
  • Block lookup timeout handling

Design Concerns

4. Feature Requires Manual Enable Flag (Line 94)

The feature requires users to explicitly enable it via --with-weighted-attestation-data. This is good for gradual rollout, but:

  • No documentation about when to use this flag
  • No validation that multiple beacon nodes are configured
  • No metrics to track which beacon node was selected
  • No clear upgrade path or deprecation plan for the old behavior

Recommendations:

  • Document the flag in help text and configuration docs
  • Add validation: warn if flag is set with only one beacon node
  • Add metrics to track selected beacon node
  • Consider auto-enabling when multiple beacon nodes are configured

5. Block Slot Lookup Timeout May Accumulate (Line 440)

The 125ms timeout for block lookups is applied per-client in parallel, which is fine. However, if many beacon nodes are slow, this could impact overall performance. Consider:

  • Caching block slot lookups (same block root will be requested by multiple clients)
  • Making block lookup optional with fast-fail
  • Documenting why 125ms was chosen

Code Quality Issues

6. Magic Number Constants Need Documentation (Lines 20-22)

The timeout constants lack justification:

const SOFT_TIMEOUT: Duration = Duration::from_millis(500);
const HARD_TIMEOUT: Duration = Duration::from_secs(1);
const BLOCK_SLOT_LOOKUP_TIMEOUT: Duration = Duration::from_millis(125);

Why 500ms? Why 1s? How do these relate to slot timing (12s)? Add documentation explaining:

  • The rationale for these values
  • How they were determined
  • What happens if beacon nodes are consistently slower

7. Minor Race Condition in Response Counting (Line 259)

After processing a response, the code checks if succeeded + failed == num_clients to exit early. If exactly num_clients responses arrive between processing the last response and the next loop iteration, the code continues waiting unnecessarily until timeout. This is a minor efficiency issue, not a correctness bug.

Fix: Check the condition at the loop start:

loop {
    if succeeded + failed == num_clients {
        break;
    }
    tokio::select! { ... }
}

Security Considerations

8. No Checkpoint Consistency Validation

The code doesn't validate that:

  • Source checkpoint epoch < target checkpoint epoch
  • Checkpoints are reasonable (not far in the future)
  • Multiple beacon nodes agree on checkpoints (mentioned in TODO)

Recommendation: Add validation in fetch_and_score() to reject obviously invalid attestation data.
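A check of roughly this shape (hypothetical; the name and signature are illustrative, not the PR's) could gate each response before it enters scoring:

/// Reject attestation data that is obviously inconsistent before it enters scoring.
/// `attestation_epoch` is the epoch containing the slot being attested to.
fn checkpoints_plausible(source_epoch: u64, target_epoch: u64, attestation_epoch: u64) -> bool {
    // The source (justified) checkpoint can never be ahead of the target checkpoint.
    source_epoch <= target_epoch
        // The target checkpoint should not lie in the future relative to the duty's epoch.
        && target_epoch <= attestation_epoch
}

Cross-node majority agreement (the TODO mentioned above) would sit on top of a per-response check like this, once all responses are in.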


Performance Considerations

9. Parallel Queries Are Good, But...

The parallel querying approach is a solid improvement over sequential fallback. However:

  • No request cancellation: Once all futures are spawned, they all run to completion even after a winner is selected
  • No circuit breaker: Repeatedly failing beacon nodes aren't temporarily excluded
  • No adaptive timeouts: Timeouts are fixed regardless of observed beacon node performance

These aren't blockers, but worth considering for future improvements.


Documentation

10. Missing User Documentation

The PR doesn't include:

  • Documentation of the new CLI flag
  • Explanation of when to use parallel querying
  • Guidance on timeout configuration
  • Metrics documentation

Recommendation: Update relevant documentation files (README, configuration docs, etc.).


Comparison with Previous Review

The previous Claude review identified similar issues. Checking status:

  • ✅ CLI flag added: Feature is now configurable
  • ❌ Compilation error: Still present (line 193)
  • ❌ Scoring algorithm issues: Not addressed
  • ❌ No tests: Still missing
  • ❌ Magic numbers: Still undocumented
  • ⚠️ Error handling: Partially addressed (returns error if all fail)

Summary

Must Fix Before Merge:

  1. Fix compilation error in futures mapping (line 193-199)
  2. Fix scoring algorithm or document why it's acceptable
  3. Add comprehensive test coverage

Should Fix Before Merge:
4. Document timeout constants
5. Add checkpoint validation
6. Add metrics for beacon node selection

Nice to Have:
7. Cache block slot lookups
8. Fix minor race condition
9. Add user documentation

The parallel querying approach is architecturally sound and addresses a real need. However, the implementation needs refinement—particularly the compilation fix, scoring algorithm, and test coverage—before it's production-ready.

I've added inline comments on the specific code sections that need attention. See the "Files changed" tab for detailed feedback on each issue.



const SOFT_TIMEOUT: Duration = Duration::from_millis(500);
const HARD_TIMEOUT: Duration = Duration::from_secs(1);
const BLOCK_SLOT_LOOKUP_TIMEOUT: Duration = Duration::from_millis(125);


Documentation Needed: These timeout constants need justification. Consider adding:

/// Soft timeout for beacon node responses. If we receive at least one valid
/// attestation data response by this timeout, we'll return early with the best
/// scored result. This is set to 500ms to balance responsiveness with giving
/// slower nodes a chance to respond.
const SOFT_TIMEOUT: Duration = Duration::from_millis(500);

/// Hard timeout for beacon node responses. After this timeout, we return
/// unconditionally with whatever responses we've received (or error if none).
/// Set to 1s to ensure we have time to fetch attestation data before the
/// attestation deadline (slot time is 12s).
const HARD_TIMEOUT: Duration = Duration::from_secs(1);

/// Timeout for looking up a block's slot number by root. This is an optional
/// enhancement to scoring, so we use a short timeout to avoid delaying the
/// overall attestation data fetch.
const BLOCK_SLOT_LOOKUP_TIMEOUT: Duration = Duration::from_millis(125);

Questions to address:

  • Why 500ms soft / 1s hard? How were these values determined?
  • What happens if beacon nodes are consistently slower than 500ms but faster than 1s?
  • How do these relate to attestation deadline constraints?

Member Author


The timeouts take inspiration from the same parts of the code in the Go implementation of SSV.
They use:
Soft timeout = 2 seconds
Hard timeout = 5 seconds
Block slot lookup timeout = 1.25 seconds

I found those a bit too long, so I adjusted them to 1 second, 3 seconds, and 0.5 seconds respectively.

@petarjuki7 petarjuki7 added the ready-for-review label Dec 12, 2025
@mergify

mergify bot commented Dec 12, 2025

Some required checks have failed. Could you please take a look @petarjuki7? 🙏

@mergify mergify bot added the waiting-on-author label and removed the ready-for-review label Dec 12, 2025
@petarjuki7 petarjuki7 added the ready-for-review label and removed the waiting-on-author label Dec 12, 2025
Member

@dknopik dknopik left a comment


Thanks for the PR! I did a first pass, see below.

Additionally, I think we need to integrate this into sign_attestation to have it use the attestation from the metadata instead of the passed attestation if the weighting is enabled.

@petarjuki7 petarjuki7 requested a review from dknopik December 22, 2025 14:15
Disabled by default.",
display_order = 0
)]
pub with_weighted_attestation_data: bool,
Member

@diegomrsantos diegomrsantos Dec 23, 2025


Is parallel querying with weighted selection strictly better when we have multiple beacon nodes? If so, then it should just be the implementation, not a user choice. We already have 54 parameters.

Member Author


Fair point about the number of parameters... I wouldn't say it's strictly better. SSV describes it as:

Improves attestation accuracy by scoring responses from multiple Beacon nodes based on epoch and slot proximity. Adds slight latency to duties but includes safeguards (timeouts, retries).

So it's a trade-off between accuracy and latency, and it should probably stay as an option.
More about the trade-offs here

Member


Thanks, that makes sense. But then the question isn't whether a tradeoff exists; it's whether operators should be required to understand it and make a choice.

  1. The latency concern is already addressed by design; the timeouts enforce bounded latency regardless of beacon node response times.
  2. The tradeoff isn't operator-dependent: Unlike some configuration choices (e.g., "how much disk space to allocate"), there's no operator-specific context that changes the optimal decision here:
    • 1 beacon node → WAD provides no benefit (nothing to compare)
    • Multiple beacon nodes → WAD provides better accuracy with bounded latency
  3. Operators shouldn't need to read SSV WAD documentation: To make an informed choice about this flag, an operator would need to understand attestation scoring algorithms, epoch proximity, timeout implications, etc. That's an unreasonable cognitive load for what should be "run my validator correctly."

Member


What we could do is to collect data on mainnet that will help us to make an informed decision.

A practical measure first plan (no new operator config required) could look like this:

We ship it as “shadow mode” for one release:
• When --beacon-nodes has 2+ entries, run the parallel fetch/scoring in the background, but still use the current behavior for the actual duty result.
• Record metrics about what would have been selected and how long it took.

That gives us mainnet distributions with near-zero functional risk, and we can decide later whether it should become the default implementation. What do you think?
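As a rough illustration of the bookkeeping such a shadow mode implies (entirely hypothetical; no metric names or types from the codebase are assumed):

use std::collections::HashMap;
use std::time::Duration;

/// Hypothetical shadow-mode bookkeeping: the weighted selection runs in the
/// background and we only record what it *would* have chosen.
#[derive(Default)]
struct ShadowStats {
    /// How often each beacon node address would have been picked.
    would_have_selected: HashMap<String, u64>,
    /// How often the weighted choice differed from the node actually used.
    disagreements: u64,
    /// Wall-clock time spent on each parallel fetch attempt.
    fetch_durations: Vec<Duration>,
}

impl ShadowStats {
    fn record(&mut self, chosen: &str, actually_used: &str, elapsed: Duration) {
        *self.would_have_selected.entry(chosen.to_string()).or_default() += 1;
        if chosen != actually_used {
            self.disagreements += 1;
        }
        self.fetch_durations.push(elapsed);
    }
}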


Member Author


I like the idea of collecting metrics first and maybe hiding the flag for the first release, but I would still leave the functionality accessible if the users (staking pools) who requested it want to use it right away. And for the "general" public we track metrics and see if it should become the default implementation.

Member

@diegomrsantos diegomrsantos Dec 23, 2025


Makes sense that some pools want to try this ASAP - but after thinking more about it, I’m worried we’re framing it as a trade-off when it’s really an unvalidated hypothesis. So the real claim here is: waiting/doing more work to pick "better" attestation data will improve outcomes (head correctness / inclusion) more than it harms them via added delay / load. That needs evidence. Adding a new public flag effectively ships a production-path behavior change and asks operators to run the experiment for us.

I’d strongly prefer we measure first: ship instrumentation + run the weighted selection in shadow mode when 2+ beacon nodes are configured (compute scores/timings and export metrics, but keep the current selection for duties). Then we can decide default/kill based on real mainnet distributions - without permanently growing the CLI surface.

If we must unblock specific pools immediately, I’d rather keep it clearly experimental/temporary (e.g. hidden-from-help / config-only) + mandatory metrics, with an explicit revisit and remove/make-default after N releases plan. Also, making it very clear that they are using it at their own risk.


/// Calculate the score for attestation data based on checkpoint epochs and head slot proximity.
/// Extracted from fetch_and_score for testing.
fn calculate_attestation_score(
Member


Could you please elaborate on the point of this function?

Member Author


It's a stripped-down version of fetch_and_score from metadata_service.rs. It's there to check the correctness and output of the scoring mechanism, but it omits the parts about fetching attestations and block_slots, since those are provided in the test context.

Member


I get why you extracted this, but if the tests use calculate_attestation_score (or an identical copy of the formula) to compute the expected result, they’re tautological: the expected value is derived from the implementation, so a bug in the scoring logic won’t be caught. It's also code duplication. 

IMO, it's better to add tests with hard-coded fixtures and hand-computed expected outcomes (epoch dominates, proximity tie-break, missing head_slot behavior).
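For illustration, fixture-style tests of that flavor might look like this. The signature and the stand-in formula are assumptions mirroring the PR description (base = sum of checkpoint epochs, bonus = 1 / (1 + head-slot distance)); the stand-in exists only so the sketch compiles and would be replaced by the production calculate_attestation_score.

// Assumed signature and stand-in formula, mirroring the PR description;
// in real tests this would be the production calculate_attestation_score.
fn calculate_attestation_score(
    source_epoch: u64,
    target_epoch: u64,
    attestation_slot: u64,
    head_slot: Option<u64>,
) -> f64 {
    let base = (source_epoch + target_epoch) as f64;
    let bonus = head_slot
        .map(|h| 1.0 / (1.0 + attestation_slot.abs_diff(h) as f64))
        .unwrap_or(0.0);
    base + bonus
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn newer_target_checkpoint_beats_closer_head_slot() {
        // Hand-picked fixtures: a fresher target checkpoint should win even when
        // the competing response's head block sits exactly at the attestation slot.
        let newer_checkpoint = calculate_attestation_score(99, 101, 3232, Some(3200));
        let closer_head = calculate_attestation_score(99, 100, 3232, Some(3232));
        assert!(newer_checkpoint > closer_head);
    }

    #[test]
    fn proximity_breaks_ties_between_equal_checkpoints() {
        let near = calculate_attestation_score(99, 100, 3232, Some(3231));
        let far = calculate_attestation_score(99, 100, 3232, Some(3100));
        assert!(near > far);
    }

    #[test]
    fn missing_head_slot_never_outranks_a_known_one() {
        let known = calculate_attestation_score(99, 100, 3232, Some(3100));
        let unknown = calculate_attestation_score(99, 100, 3232, None);
        assert!(known >= unknown);
    }
}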


Member Author


Did some refactoring so I can have a properly extracted function in production code to test, let me know what you think!

}

/// Get the slot number for a given block root with timeout
async fn get_block_slot(
Member


In Go SSV, they keep a blockRootToSlotCache populated from the CL event stream (so most lookups are zero-latency), and when scoring, they retry on cache miss every ~100ms up to a short timeout. The motivation seems to be a race they observed in practice: attestation_data can reference a block produced milliseconds ago, and the corresponding event/cache entry may arrive just after the first scoring attempt; plus the BeaconBlockHeader lookup can be slow/unreliable, so they wrap it in tight timeouts and retry to avoid hurting duty latency.
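For reference, the cache-plus-retry pattern described above might look roughly like this in Rust (a hypothetical sketch, not the PR's code; type and method names are illustrative, and population from the event stream is assumed to happen elsewhere via insert):

use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::time::Duration;

type BlockRoot = [u8; 32];

/// Shared block-root -> slot cache; population (e.g. from the CL event stream)
/// is assumed to happen elsewhere via `insert`.
#[derive(Clone, Default)]
struct BlockSlotCache(Arc<Mutex<HashMap<BlockRoot, u64>>>);

impl BlockSlotCache {
    fn insert(&self, root: BlockRoot, slot: u64) {
        self.0.lock().unwrap().insert(root, slot);
    }

    /// Look the root up, retrying briefly in case the corresponding event
    /// arrives a few milliseconds after the attestation data does.
    async fn get_with_retry(&self, root: &BlockRoot, attempts: u32, backoff: Duration) -> Option<u64> {
        for _ in 0..attempts {
            if let Some(slot) = self.0.lock().unwrap().get(root).copied() {
                return Some(slot);
            }
            tokio::time::sleep(backoff).await;
        }
        None
    }
}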


Member Author

@petarjuki7 petarjuki7 Jan 6, 2026


I had that comment from Claude Code and had a small discussion about it somewhere up there in the comments.

Their cache is populated from HTTP responses as far as I can tell, and managing cache state adds complexity. We'd have to add state management to our MetadataService struct with something like an Arc<Mutex<HashMap<...>>>. With our current setup (parallel queries, 1s soft / 3s hard timeout), we have plenty of opportunities to catch the block somewhere.

I think the simpler approach is fine for now: worst case we miss the proximity bonus on one node but get it from another. It could be fine without an explicit cache or retries for now, but let me know what you think.
