[None][fix] Batch addSequence with pre-claim to fix host offloading MNT overflow #12892
liji-nv wants to merge 1 commit into NVIDIA:main
Conversation
/bot run

PR_Github #42543 [ run ] triggered by Bot.
📝 Walkthrough

Introduces a new two-phase batch sequence onboarding API for KV cache management. Phase 1 claims matching cached blocks under lock; Phase 2 onboards and allocates the remaining blocks. Adds corresponding methods to the WindowBlockManager, BlockManager, BaseKVCacheManager, and KVCacheManager classes, along with Python bindings and integration into the resource manager's context allocation path when block reuse is enabled.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant ResourceMgr as Resource Manager
    participant KVCacheMgr as KVCacheManager
    participant BlockMgr as BlockManager
    participant WindowBlockMgr as WindowBlockManager
    ResourceMgr->>KVCacheMgr: addSequenceBatch(requestInfos, llmRequests)
    activate KVCacheMgr
    KVCacheMgr->>KVCacheMgr: Validate constraints (block reuse enabled)
    KVCacheMgr->>KVCacheMgr: Create GenerationRequest sequences<br/>Compute inputLengths & numContextBlocksVec
    KVCacheMgr->>BlockMgr: addSequenceBatch(sequences, inputLengths,<br/>numContextBlocksVec, llmRequests, windowSize)
    activate BlockMgr
    BlockMgr->>WindowBlockMgr: addSequenceBatch(...)
    activate WindowBlockMgr
    WindowBlockMgr->>WindowBlockMgr: Acquire mCachedBlocksRootMutex
    loop For each sequence (Phase 1)
        WindowBlockMgr->>WindowBlockMgr: claimMatchingBlocks(sequence, inputLength,<br/>numContextBlocks, llmRequest)
        note over WindowBlockMgr: Radix-tree walk, claim blocks,<br/>update priorities
    end
    loop For each sequence (Phase 2)
        WindowBlockMgr->>WindowBlockMgr: onboardAndAllocateBlocks(sequence,<br/>llmRequest, claimResult)
        note over WindowBlockMgr: Onboard copied blocks, allocate<br/>non-matching & non-shared blocks,<br/>finalize prepopulated length
    end
    WindowBlockMgr->>WindowBlockMgr: Release mCachedBlocksRootMutex
    WindowBlockMgr-->>BlockMgr: Return prepopulated lengths
    deactivate WindowBlockMgr
    BlockMgr-->>KVCacheMgr: Return prepopulated lengths
    deactivate BlockMgr
    KVCacheMgr->>KVCacheMgr: Update per-window cache offsets
    KVCacheMgr->>KVCacheMgr: Set llmRequest prepopulated<br/>prompt lengths & clear reusable tokens
    KVCacheMgr-->>ResourceMgr: Return (batch complete)
    deactivate KVCacheMgr
    ResourceMgr->>ResourceMgr: Apply per-token add_token calls<br/>& KV connector updates
```
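The two-phase flow above can be sketched as a toy Python model. This is not the TensorRT-LLM implementation: the cache structure and the claim/onboard helpers are hypothetical. It only illustrates why claiming every matching block for the whole batch (Phase 1) before any onboarding (Phase 2) keeps evictions from touching blocks another request in the batch intends to reuse.

```python
# Toy model of two-phase addSequenceBatch: a cache of reusable blocks,
# where onboarding a block may evict an *unclaimed* block as a side effect.

def add_sequence_batch(requests, cache):
    """requests: list of sets of block ids each request can reuse.
    cache: dict block_id -> {"claimed": bool}. Returns per-request reuse counts."""
    # Phase 1: claim every matching block across ALL requests first.
    claims = []
    for wanted in requests:
        matched = {b for b in wanted if b in cache}
        for b in matched:
            cache[b]["claimed"] = True      # now protected from eviction
        claims.append(matched)

    # Phase 2: onboard claimed blocks; each onboarding may evict one
    # unclaimed victim, but can never evict a block claimed in Phase 1.
    reused = []
    for matched in claims:
        for _ in matched:
            victim = next((v for v, s in cache.items() if not s["claimed"]), None)
            if victim is not None:
                del cache[victim]           # eviction cannot touch claims
        reused.append(len(matched))
    return reused

cache = {b: {"claimed": False} for b in ["a", "b", "c", "x", "y"]}
print(add_sequence_batch([{"a", "b"}, {"c"}], cache))  # [2, 1]: every match reused
```

With a per-request (non-batched) flow, onboarding for the first request could evict `c` before the second request claims it, which is exactly the under-reuse the PR description attributes to the MNT overflow.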
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h`:
- Around lines 1789-1796: The Python trampoline class PyKvCacheManager is missing a pure-virtual override for the new C++ method addSequenceBatch, so calls are never dispatched to Python subclasses. Add an override in PyKvCacheManager whose signature matches addSequenceBatch and which calls NB_OVERRIDE_PURE(addSequenceBatch, requestInfos, llmRequests), and increment the NB_TRAMPOLINE count for tbk::BaseKVCacheManager from 36 to 37 so the trampoline table matches the new virtual. The override must use the exact parameter types (std::vector<std::tuple<tb::LlmRequest::RequestIdType, SizeType32, SizeType32>> const& requestInfos and std::vector<std::reference_wrapper<tb::LlmRequest>> const& llmRequests).
In `@cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp`:
- Around lines 3408-3411: The current approach captures baseline totals in numAllocTotalBlocksPre, numAllocNewBlocksPre, numReusedBlocksPre, and numMissedBlocksPre once for the whole batch, then computes per-request stats by subtracting those global baselines, which attributes earlier requests' work to later ones. Compute per-request deltas instead: either snapshot the totals immediately before and after processing each individual request, or record increments during Phase 2 as each request performs its allocations/reuses/misses. Update updateAllocTotalPerRequest/updateAllocNewPerRequest/updateReusedPerRequest/updateMissedPerRequest to use these per-request snapshots or Phase 2 counters rather than the single batch-wide pre-snapshot variables, so each request accounts only for its own changes.
- Around lines 3387-3449: Before calling mBlockManager.addSequenceBatch in KVCacheManager::addSequenceBatch, enforce the same recurrent-state chunking guard that WindowBlockManager::addSequence applies: for each request, if its kvCacheRetentionConfig indicates a recurrent-state retention mode and inputLength != llmRequest.getPromptLen(), fail early (e.g. with TLLM_CHECK_WITH_INFO) with a message stating that batched recurrent-state requests must have inputLength equal to the prompt length. Perform this check for each index using the already-captured kvCacheRetentionConfig and llmRequest variables, before invoking mBlockManager.addSequenceBatch.
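The shape of such an up-front guard, as a hedged Python sketch (the field names and the `validate_batch` helper are illustrative, not the real C++ API): validating every request before any batch work starts means a doomed batch fails before partial state is created.

```python
# Fail-fast validation over the whole batch: reject before any allocation,
# mirroring a TLLM_CHECK_WITH_INFO placed ahead of the batch call.
def validate_batch(requests):
    """requests: list of dicts with hypothetical fields
    'recurrent_state', 'input_length', 'prompt_len'."""
    for i, req in enumerate(requests):
        if req["recurrent_state"] and req["input_length"] != req["prompt_len"]:
            raise ValueError(
                f"request {i}: batched recurrent-state requests must have "
                f"inputLength == promptLen "
                f"({req['input_length']} != {req['prompt_len']})")

ok = {"recurrent_state": True, "input_length": 8, "prompt_len": 8}
bad = {"recurrent_state": True, "input_length": 4, "prompt_len": 8}
validate_batch([ok])                      # passes: full prompt in one chunk
try:
    validate_batch([ok, bad])             # rejected before any batch work
except ValueError as e:
    print(e)
```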
In `@cpp/tensorrt_llm/nanobind/batch_manager/kvCacheManager.cpp`:
- Around lines 396-413: The lambda bound to "add_sequence_batch" currently uses nb::call_guard<nb::gil_scoped_release>(), which drops the GIL for the entire body and makes the nb::len, list-indexing, and nb::cast calls unsafe. Hold the GIL while marshalling the Python inputs into the C++ locals (i.e., build requestInfos and llmRequests using nb::len, nb::cast, nb::tuple, nb::list, and std::ref), and release it only around the actual call to BaseKVCacheManager::addSequenceBatch: either apply nb::call_guard<nb::gil_scoped_release>() to just that invocation, or wrap it in an explicit nb::gil_scoped_release scope.
In `@tensorrt_llm/_torch/pyexecutor/resource_manager.py`:
- Around lines 644-678: The batching path currently calls impl.add_sequence_batch(batch_request_infos, batch_llm_requests) for all block-reuse requests, which breaks VSWA. Batch only when self.is_vswa is False (i.e., gate the add_sequence_batch call with not self.is_vswa); when self.is_vswa is True, fall back to the per-request impl.add_sequence(...) flow used earlier, including the subsequent impl.add_token(...) loops and the kv_connector_manager.update_state_after_alloc call for each req. Keep the logic around enable_block_reuse, batch_request_infos, batch_llm_requests, and batch_ctx_requests consistent so VSWA continues to use the single-request path.
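The suggested gating can be sketched as follows. The flag and method names echo the review comment, but the manager object and surrounding scaffolding are hypothetical stand-ins for the resource manager's context allocation path.

```python
# Dispatch between the batched and per-request paths. Batching is only
# taken when block reuse is enabled AND the cache is not VSWA.
def allocate_context(mgr, requests):
    if mgr.enable_block_reuse and not mgr.is_vswa:
        # Single batch call covering all first-chunk context requests.
        mgr.calls.append(("add_sequence_batch", [r["id"] for r in requests]))
    else:
        # VSWA (or reuse-off) fallback: the original per-request flow.
        for r in requests:
            mgr.calls.append(("add_sequence", r["id"]))

class FakeMgr:
    def __init__(self, enable_block_reuse, is_vswa):
        self.enable_block_reuse = enable_block_reuse
        self.is_vswa = is_vswa
        self.calls = []

reqs = [{"id": 1}, {"id": 2}]
batched = FakeMgr(enable_block_reuse=True, is_vswa=False)
allocate_context(batched, reqs)
print(batched.calls)   # one add_sequence_batch call

vswa = FakeMgr(enable_block_reuse=True, is_vswa=True)
allocate_context(vswa, reqs)
print(vswa.calls)      # two add_sequence calls
```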
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 21f2daa7-dcc7-468a-a725-8af6a3a2a2f3
📒 Files selected for processing (4)
- cpp/include/tensorrt_llm/batch_manager/kvCacheManager.h
- cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp
- cpp/tensorrt_llm/nanobind/batch_manager/kvCacheManager.cpp
- tensorrt_llm/_torch/pyexecutor/resource_manager.py
PR_Github #42543 [ run ] completed with state

Force-pushed: 9d5acd8 to a9a7f73
Commit message: [None][fix] Batch addSequence with pre-claim to fix host offloading MNT overflow

When host offloading is enabled, onboarding a host block to GPU during addSequence can trigger eviction of other reusable host blocks from the radix tree. This causes actual KV cache reuse to be less than the scheduler estimated, leading to max_num_tokens (MNT) overflow assertions.

Add a new addSequenceBatch API that processes all first-chunk context requests in two phases:
- Phase 1: Walk the radix tree and claimBlock() for all matching blocks across all requests. No onboarding, no allocation. This protects reusable blocks from eviction.
- Phase 2: Onboard host blocks and allocate non-matching blocks. Since all reusable blocks are already claimed, evictions during onboarding cannot touch them.

On the Python side, replace the TOCTOU-prone revalidation loop (count_reusable_blocks + budget check) with a single batch call.

Signed-off-by: Jin Li <59594262+liji-nv@users.noreply.github.com>
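The TOCTOU hazard mentioned in the commit message can be sketched in Python. The class and helper names here are hypothetical, not the real API: the point is that a separate check (count reusable blocks, compare against budget) followed by a later act (allocate) can go stale in between, whereas one batch call checks and allocates under a single lock.

```python
import threading

class Cache:
    """Toy block budget with a check-then-act hazard and a batched fix."""

    def __init__(self, free_blocks):
        self.free_blocks = free_blocks
        self.lock = threading.Lock()

    def count_reusable_blocks(self):
        # TOCTOU-prone pattern being replaced:
        #   if cache.count_reusable_blocks() >= need:  # check
        #       cache.add_sequence(need)               # act (state may change)
        return self.free_blocks

    def add_sequence(self, need):
        with self.lock:
            if self.free_blocks < need:
                raise RuntimeError("overflow: earlier estimate went stale")
            self.free_blocks -= need

    def add_sequence_batch(self, needs):
        # Check and allocate for the whole batch under one lock, so nothing
        # can interleave between the budget check and the allocations.
        with self.lock:
            if self.free_blocks < sum(needs):
                raise RuntimeError("batch rejected up front")
            for n in needs:
                self.free_blocks -= n

cache = Cache(free_blocks=10)
cache.add_sequence_batch([4, 3, 2])
print(cache.free_blocks)  # 1
```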
Force-pushed: a9a7f73 to e5d8513
/bot run

PR_Github #42669 [ run ] triggered by Bot.

PR_Github #42669 [ run ] completed with state
Summary by CodeRabbit
Release Notes
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
To see a list of available CI bot commands, please comment /bot help.