
Conversation

@AmineKhaldi (Contributor)

Purpose:

Converts TransactionQueue's pop from a simple round robin across peers to a deficit round robin.

Current Behavior:

TransactionQueue's pop implements a simple round robin across peers.

New Behavior:

TransactionQueue's pop implements an adapted version of the deficit round robin algorithm.
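
For background, here is a minimal sketch of textbook deficit round robin; the QUANTUM value, the flow names, and the queue shapes are illustrative, and the PR adapts the idea rather than implementing it verbatim:

```python
# Textbook deficit round robin (DRR), sketched for context only; QUANTUM and
# the flow/queue shapes are illustrative, not taken from the PR.
from collections import deque
from collections.abc import Iterator

QUANTUM = 1000  # credit granted to each flow per round

def drr_schedule(queues: dict[str, deque[int]]) -> Iterator[tuple[str, int]]:
    """Yield (flow_id, cost) pairs, servicing flows fairly by cumulative cost."""
    deficits = {flow: 0 for flow in queues}
    while any(queues.values()):
        for flow, queue in queues.items():
            if not queue:
                deficits[flow] = 0  # an idle flow does not bank credit
                continue
            deficits[flow] += QUANTUM  # grant this flow its per-round quantum
            # Serve items while the accumulated credit covers the head's cost.
            while queue and queue[0] <= deficits[flow]:
                deficits[flow] -= queue[0]
                yield flow, queue.popleft()
```

For example, with QUANTUM = 1000, drr_schedule({"a": deque([500, 500]), "b": deque([2000])}) services both of a's cheap transactions before b has banked enough credit for its expensive one.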

Testing Notes:

@AmineKhaldi self-assigned this Dec 15, 2025
@AmineKhaldi added the Changed label (Required label for PR that categorizes merge commit message as "Changed" for changelog) Dec 15, 2025
The github-actions bot added the merge_conflict label (Branch has conflicts that prevent merge to main) Dec 15, 2025
@coveralls-official

Pull Request Test Coverage Report for Build 20236441715

Warning: This coverage report may be inaccurate.

This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.

Details

  • 103 of 105 (98.1%) changed or added relevant lines in 3 files are covered.
  • 81 unchanged lines in 27 files lost coverage.
  • Overall coverage decreased (-0.07%) to 90.818%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| chia/full_node/tx_processing_queue.py | 49 | 51 | 96.08% |
| Files with Coverage Reduction | New Missed Lines | % |
| --- | --- | --- |
| chia/cmds/plotnft_funcs.py | 1 | 85.42% |
| chia/full_node/block_store.py | 1 | 96.6% |
| chia/server/chia_policy.py | 1 | 94.52% |
| chia/_tests/core/util/test_lockfile.py | 1 | 89.86% |
| chia/_tests/rpc/test_rpc_server.py | 1 | 98.97% |
| chia/_tests/simulation/test_simulation.py | 1 | 96.47% |
| chia/wallet/wallet_state_manager.py | 1 | 96.43% |
| chia/wallet/wallet_user_store.py | 1 | 98.18% |
| chia/data_layer/data_store.py | 2 | 95.52% |
| chia/farmer/farmer_api.py | 2 | 95.04% |
Totals (Coverage Status)

  • Change from base Build 20208481264: -0.07%
  • Covered Lines: 102857
  • Relevant Lines: 113085

💛 - Coveralls

@arvidn (Contributor) left a comment:

My understanding of your implementation is that it:

  1. scans for a peer whose deficit counter covers its top transaction's cost (i.e. cost <= deficit)
     1.1. if found, ingests that transaction and decrements the peer's deficit counter by the cost
  2. if none is found, increments the deficit counter of all peers by the lowest top-transaction cost, and goes back to step 1
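
If that reading is right, a minimal sketch of the loop looks like this; the names `queues` and `deficit` are illustrative, not the PR's attributes, and the function assumes at least one transaction is queued (presumably what `_queue_length` guarantees before pop runs):

```python
# Sketch of the pop loop as described above; `queues` and `deficit` are
# illustrative names, not the PR's attributes. Assumes at least one peer has a
# queued transaction (presumably what _queue_length guarantees before pop runs).
def pop_drr(queues: dict[str, list[int]], deficit: dict[str, int]) -> tuple[str, int]:
    """queues maps peer ID -> queued transaction costs, head first."""
    while True:
        # Step 1: scan for a peer whose deficit covers its top transaction.
        for peer_id, txs in queues.items():
            if txs and txs[0] <= deficit[peer_id]:
                cost = txs.pop(0)
                deficit[peer_id] -= cost  # pay for the serviced transaction
                return peer_id, cost
        # Step 2: nobody can pay; top every peer up by the cheapest head cost
        # and rescan. The peer owning that cheapest head becomes serviceable.
        cheapest = min(txs[0] for txs in queues.values() if txs)
        for peer_id in deficit:
            deficit[peer_id] += cheapest
```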

I don't think you need the cursor for fairness anymore; you'll get fairness anyway, since you track the deficit counters.

I think it could be made a bit more efficient by reducing the linear scan to a heap pop. It may introduce some more complexity though. This is not a complete thought, but imagine if every peer had a sort key equal to its deficit_counter - cost_of_top_tx. You could keep those peers in a priority queue (really, a heap would suffice). Every time a deficit counter is adjusted, the peer is re-sorted. You always pop from the top, which is O(1). You'd need to figure out when to increment the deficit counters and by how much, so maybe this wouldn't be so much simpler.
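
A rough sketch of that heap idea, picking one possible answer to the when-and-by-how-much question (bump every deficit by the best peer's shortfall; that policy is an assumption here, not something the PR or the comment specifies):

```python
import heapq

# Sketch of the heap idea; `top_cost` and `deficit` are illustrative names, and
# the bump-by-shortfall policy is one possible answer to "when and by how
# much", not something the PR implements. Assumes top_cost is non-empty.
def heap_drr_pop(top_cost: dict[str, int], deficit: dict[str, int]) -> str:
    """Return the peer whose top transaction should be serviced next."""
    # heapq is a min-heap, so order peers by shortfall = cost - deficit
    # (the negation of the sort key suggested above).
    heap = [(cost - deficit[peer], peer) for peer, cost in top_cost.items()]
    heapq.heapify(heap)  # O(n) here; a persistent heap would pop in O(log n)
    shortfall, peer = heap[0]
    if shortfall > 0:
        # No peer can pay for its top transaction. Raising every deficit by the
        # same amount preserves the heap order, so bumping all peers by the
        # smallest shortfall makes exactly the best peer serviceable.
        for p in deficit:
            deficit[p] += shortfall
    deficit[peer] -= top_cost[peer]  # pay for the transaction
    return peer
```

A uniform bump shifts every key equally, so the heap never needs re-sorting for the increment step; only the serviced peer's entry changes.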

```python
log: logging.Logger
_max_tx_clvm_cost: uint64
# Map of peer ID to deficit counter in the context of deficit round robin
_deficit_counters: dict[bytes32, int]
```
@arvidn:

Isn't _queue_dict also a map of the same peer IDs? It would seem cheaper and simpler to stick this int in that dict instead. Am I missing something?
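
For illustration, assuming _queue_dict maps peer IDs to some per-peer entry object (the entry type and its fields below are hypothetical):

```python
from collections import deque
from dataclasses import dataclass, field

# Hypothetical per-peer entry; the PR's actual _queue_dict value type may look
# nothing like this. The point is only that the deficit can live next to the
# peer's queue instead of in a parallel _deficit_counters dict.
@dataclass
class PeerQueueEntry:
    queue: deque = field(default_factory=deque)  # queued transactions
    deficit: int = 0  # DRR deficit counter, co-located with the queue

# _queue_dict: dict[bytes32, PeerQueueEntry]
# ...and the update becomes e.g. self._queue_dict[peer_id].deficit -= cost
```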

```python
self._index_to_peer_map = new_peer_map
if result is not None:
    return result
# Map of peer ID to its top transaction's advertised cost
```
@arvidn:

I think this deserves a comment explaining the idea behind this behavior: that we want to service transactions fairly between peers, based on cost.

```python
num_peers = len(self._index_to_peer_map)
if num_peers == 0:
    continue
start = self._list_cursor
```
@arvidn:

I don't think you need the _list_cursor anymore, now that you look at every peer every time you pop. There's a risk it might not be very efficient, though. It might be OK for a first version; a heap might be a more efficient choice.


```python
_list_cursor: int  # this is which index
_queue_length: asyncio.Semaphore
_index_to_peer_map: list[bytes32]
```
@arvidn:

I don't think you need _index_to_peer_map anymore, just like you don't need _list_cursor anymore. You visit every peer every time you pop(), so there's no need to keep any cursor position between calls to pop().

Comment on lines +173 to +178
```python
if tx_info is None:
    top_tx_advertised_cost = max(
        (t.advertised_cost for t in entry.peers_with_tx.values()), default=self._max_tx_clvm_cost
    )
else:
    top_tx_advertised_cost = tx_info.advertised_cost
```
@arvidn:
I think this case warrants a comment. This is where we don't know the cost (or fee) of a transaction, so we want to assume a very high cost. I don't think we should search peers_with_tx; we should just assume it has a really high cost. This is just for backwards compatibility, right?

Suggested change:

```diff
-if tx_info is None:
-    top_tx_advertised_cost = max(
-        (t.advertised_cost for t in entry.peers_with_tx.values()), default=self._max_tx_clvm_cost
-    )
-else:
-    top_tx_advertised_cost = tx_info.advertised_cost
+top_tx_advertised_cost = self._max_tx_clvm_cost if tx_info is None else tx_info.advertised_cost
```
