
Conversation

BrendanChou (Collaborator) commented Jan 13, 2026

This is what I have before any changes to the monorepo.

It seems to work when finding the tip and starting from there, but it may panic when trying to sync from genesis.

Added configs to follow along with the USA or Global clusters using the public endpoints.

// Create a dummy public key for the buffer (not used for actual networking)
// Use a valid ed25519 public key (all zeros is not valid, so we decode from hex)
let dummy_key =
    PublicKey::decode([0u8; 32].as_slice()).expect("failed to decode dummy public key");

Dummy public key uses invalid all-zeros value

Medium Severity

The comment explicitly states "all zeros is not valid, so we decode from hex" but the code uses [0u8; 32] which is all zeros. An all-zeros byte array is not a valid ed25519 public key encoding because it doesn't represent a point on the edwards25519 curve. If PublicKey::decode validates the encoding, this will panic at runtime. The same issue appears when creating dummy_target for hint_finalized. The code contradicts its own documented intent.
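A minimal sketch of one way to fix this, assuming `PublicKey::decode` accepts any canonical curve-point encoding: use the ed25519 basepoint's compressed encoding (0x58 followed by 31 bytes of 0x66), which is a known-valid point, instead of all zeros. `basepoint_bytes` is a hypothetical helper; the commonware call appears only in a comment.

```rust
/// Hypothetical helper: the canonical compressed encoding of the ed25519
/// basepoint, which is a valid curve point (unlike [0u8; 32]).
fn basepoint_bytes() -> [u8; 32] {
    // Little-endian encoding of y = 4/5 mod p with a clear sign bit.
    let mut bytes = [0x66u8; 32];
    bytes[0] = 0x58;
    bytes
}

fn main() {
    let dummy = basepoint_bytes();
    assert_ne!(dummy, [0u8; 32]);
    // In the follower this would replace the all-zeros input, e.g.:
    // let dummy_key = PublicKey::decode(dummy.as_slice())
    //     .expect("basepoint is a valid ed25519 public key");
    println!("first byte: {:#04x}", dummy[0]);
}
```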


cache.insert(height, block.clone());
order.push_back(height);
}
}

Height cache may return wrong block for finalized requests

Low Severity

The cache_by_height uses first-in-wins semantics, caching the first block seen at a given height. When a notarization arrives before a finalization at the same height (with potentially different blocks), the notarized block is cached by height. When the finalization arrives with a different block, it won't overwrite the height cache entry. If the resolver later fetches by height during backfill via fetch_finalized_by_height, it could return the notarized block instead of the finalized one.
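One shape the fix could take, sketched with hypothetical stand-in types (the real cache holds blocks keyed by height): finalizations always overwrite the height slot, while notarizations only fill an empty one.

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Block {
    digest: u64,
}

/// Sketch of a height cache where finalized entries win over notarized ones.
struct HeightCache {
    // (block, finalized) per height.
    by_height: HashMap<u64, (Block, bool)>,
}

impl HeightCache {
    fn new() -> Self {
        Self { by_height: HashMap::new() }
    }

    /// Notarizations only fill an empty slot; they never replace an entry.
    fn insert_notarized(&mut self, height: u64, block: Block) {
        self.by_height.entry(height).or_insert((block, false));
    }

    /// Finalizations always overwrite, fixing the first-in-wins problem.
    fn insert_finalized(&mut self, height: u64, block: Block) {
        self.by_height.insert(height, (block, true));
    }

    fn get(&self, height: u64) -> Option<&Block> {
        self.by_height.get(&height).map(|(block, _)| block)
    }
}

fn main() {
    let mut cache = HeightCache::new();
    cache.insert_notarized(5, Block { digest: 1 });
    cache.insert_finalized(5, Block { digest: 2 });
    // The finalized block replaced the earlier notarized one.
    assert_eq!(cache.get(5), Some(&Block { digest: 2 }));
}
```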


//! This binary provides a template for creating "follower" nodes that track consensus
//! by consuming finalization certificates from an external source (like exoware or an indexer)
//! without participating in the validator P2P network.
//!


Is there a latency increase from making this one hop (val -> exoware/indexer -> full node) vs direct (val -> full node)?

If there is some latency, at what scale does the validator network begin to degrade in terms of performance?

cloudflare-workers-and-pages bot commented Jan 19, 2026

Deploying alto with Cloudflare Pages

Latest commit: f687b89
Status: ✅  Deploy successful!
Preview URL: https://1b7aa7b8.alto-8k4.pages.dev
Branch Preview URL: https://bc-follower.alto-8k4.pages.dev


cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 4 potential issues.


if let Some(block) = self.get_cached_by_height(height) {
    debug!(height = height.get(), "block found in height cache");
    return Some(block);
}

Resolver cache hit skips block delivery to marshal

High Severity

In fetch_finalized_by_height, when a block is found in the height cache, the function returns early without delivering the block to marshal via the ingress channel. The code path that fetches from the indexer (lines 297-312) properly delivers the block and hints about finalization, but the cache hit path just returns. When marshal requests a backfill for a height that's cached (e.g., from an earlier notarization), it will never receive the block and could hang waiting indefinitely.
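A sketch of the shape of the fix, using `std::sync::mpsc` to stand in for the real ingress channel (all names here are hypothetical): the cache-hit path delivers the block to marshal before returning, mirroring what the indexer path already does.

```rust
use std::collections::HashMap;
use std::sync::mpsc;

#[derive(Clone, Debug, PartialEq)]
struct Block {
    height: u64,
}

/// Sketch: on a cache hit, deliver the block through the ingress channel
/// instead of returning early without delivery.
fn fetch_finalized_by_height(
    cache: &HashMap<u64, Block>,
    ingress_tx: &mpsc::Sender<Block>,
    height: u64,
) -> Option<Block> {
    if let Some(block) = cache.get(&height) {
        // Deliver to marshal even on a cache hit, like the indexer path does.
        ingress_tx.send(block.clone()).ok()?;
        return Some(block.clone());
    }
    None // in the real code, fall through to the indexer fetch here
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let mut cache = HashMap::new();
    cache.insert(7, Block { height: 7 });
    let got = fetch_finalized_by_height(&cache, &tx, 7);
    assert_eq!(got, Some(Block { height: 7 }));
    // Marshal's side of the channel received the block too.
    assert_eq!(rx.try_recv().unwrap(), Block { height: 7 });
}
```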


None
}
}
}

Resolver fetch by digest never delivers blocks

Medium Severity

The fetch_block_by_digest method never delivers blocks to marshal through the ingress channel. It takes &self (immutable reference), making it impossible to call send() on ingress_tx. Whether the block is found in cache or fetched from the indexer, the block is only cached/returned but never delivered. If marshal ever issues a Request::Block(digest) fetch, the block will be retrieved but marshal won't receive it.
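A minimal sketch of the signature change, with hypothetical stand-in types: taking `&mut self` lets the method push into the ingress queue on every path, so a found block is delivered rather than only returned.

```rust
use std::collections::HashMap;

/// Sketch: a resolver whose fetch-by-digest path delivers as well as returns.
struct Resolver {
    // Stand-in for the ingress channel to marshal.
    ingress: Vec<u64>,
    // Stand-in for the block cache, keyed by digest.
    cache: HashMap<u64, u64>,
}

impl Resolver {
    // `&mut self` (not `&self`) so delivery is possible on every path.
    fn fetch_block_by_digest(&mut self, digest: u64) -> Option<u64> {
        let block = *self.cache.get(&digest)?;
        self.ingress.push(block); // deliver to marshal, don't just return
        Some(block)
    }
}

fn main() {
    let mut resolver = Resolver {
        ingress: Vec::new(),
        cache: HashMap::from([(9u64, 42u64)]),
    };
    assert_eq!(resolver.fetch_block_by_digest(9), Some(42));
    // The block was also delivered, not only returned.
    assert_eq!(resolver.ingress, vec![42]);
}
```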


let key = handler::Request::Block(digest);
if !self.deliver_block(key, finalized.block.clone()).await {
    warn!(height = height.get(), "marshal did not accept finalized block");
}

Floor set before block delivery success confirmed

Medium Severity

In handle_message, when processing the first finalization with auto_checkpoint enabled, set_floor is called and floor_set is marked true (lines 691-692) BEFORE the block delivery is attempted (line 702). If deliver_block fails, the floor remains set but the block at that height was never delivered to marshal. Since floor_set is true, subsequent finalizations won't retry setting the floor. Marshal would have a floor at height H but lack the actual block at H, potentially causing it to stall waiting for a block that won't be re-delivered.
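One way to sequence this, sketched with hypothetical state (the real code tracks `floor_set` alongside the marshal handle): only mark the floor as set once the block at that height has actually been delivered, so a failed delivery is retried on the next finalization.

```rust
/// Sketch: floor state that is only committed after successful delivery.
struct FloorState {
    floor: Option<u64>,
    floor_set: bool,
}

/// `delivered` stands in for the result of the deliver_block attempt.
fn handle_finalization(state: &mut FloorState, height: u64, delivered: bool) {
    if !state.floor_set && delivered {
        state.floor = Some(height);
        state.floor_set = true;
    }
    // If delivery failed, floor_set stays false and the next
    // finalization attempts to set the floor again.
}

fn main() {
    let mut state = FloorState { floor: None, floor_set: false };
    handle_finalization(&mut state, 100, false); // delivery failed
    assert!(!state.floor_set); // floor not committed, will retry
    handle_finalization(&mut state, 101, true); // delivery succeeded
    assert_eq!(state.floor, Some(101));
    assert!(state.floor_set);
}
```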


priority: false,
codec_config: (),
},
);

Buffer engine dropped while mailbox still in use

High Severity

The buffered::Engine is created at line 424 but assigned to _buffer (indicating intentionally unused) and not stored in the FollowerEngine struct. When new() returns, the buffer engine is dropped. However, its buffer_mailbox is kept and later passed to marshal.start() at line 550. The existing validator engine in engine.rs correctly stores both buffer and buffer_mailbox in its struct (lines 100-101), demonstrating the required pattern. With the buffer engine dropped, any operations marshal performs through buffer_mailbox will fail because the backing engine no longer exists.
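The pattern the comment points at can be sketched with stand-in types (`Engine` and `Mailbox` are hypothetical placeholders for `buffered::Engine` and its mailbox half): store the engine in the struct next to its mailbox so it lives as long as the mailbox does, instead of binding it to `_buffer` and dropping it when `new()` returns.

```rust
/// Stand-in for buffered::Engine.
struct Engine;

impl Engine {
    fn is_running(&self) -> bool {
        true
    }
}

/// Stand-in for the engine's mailbox half.
struct Mailbox;

/// Sketch: keep the engine alive alongside the mailbox handed to marshal.
struct FollowerEngine {
    buffer: Engine, // kept for the lifetime of the follower, not `_buffer`
    buffer_mailbox: Mailbox,
}

impl FollowerEngine {
    fn new() -> Self {
        // In the real code this would be buffered::Engine::new(...),
        // which returns the engine and its mailbox together.
        let (buffer, buffer_mailbox) = (Engine, Mailbox);
        Self { buffer, buffer_mailbox }
    }
}

fn main() {
    let follower = FollowerEngine::new();
    // The backing engine survives construction because it is stored.
    assert!(follower.buffer.is_running());
    let _mailbox = &follower.buffer_mailbox; // later passed to marshal.start()
}
```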


codecov-commenter commented

Codecov Report

❌ Patch coverage is 0% with 414 lines in your changes missing coverage. Please review.
✅ Project coverage is 51.11%. Comparing base (9f4be68) to head (f687b89).

Files with missing lines Patch % Lines
chain/src/bin/follower.rs 0.00% 414 Missing ⚠️
@@            Coverage Diff             @@
##             main     #169      +/-   ##
==========================================
- Coverage   57.97%   51.11%   -6.86%     
==========================================
  Files          17       18       +1     
  Lines        3086     3500     +414     
==========================================
  Hits         1789     1789              
- Misses       1297     1711     +414     
Files with missing lines Coverage Δ
chain/src/bin/follower.rs 0.00% <0.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

