# ComfyUI-ReservedVRAM-ROCm

A lightweight ComfyUI node that dynamically adjusts reserved GPU memory (VRAM) at runtime, helping prevent shared/system memory usage and improving stability under high load.

This ROCm-friendly version works on both AMD (ROCm) and NVIDIA (CUDA) GPUs, using PyTorch's native memory reporting.
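PyTorch's ROCm builds expose the same `torch.cuda` API as its CUDA builds (backed by HIP), so free and total VRAM can be read without vendor-specific tools. A minimal sketch of such a query — illustrative code, not the node's actual implementation:

```python
def query_vram_gb():
    """Return (free_gb, total_gb) for the current device, or None without a GPU."""
    try:
        import torch
    except ImportError:
        return None  # PyTorch not installed
    if not torch.cuda.is_available():
        return None  # no CUDA or ROCm device visible
    # mem_get_info reports device-wide free/total bytes on both backends
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    gib = float(1 << 30)
    return free_bytes / gib, total_bytes / gib
```

Because ROCm masquerades as `torch.cuda`, the same call path covers AMD and NVIDIA hardware.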
Recent versions of ComfyUI include a Pin Memory feature, which may offload parts of models into shared/system memory when VRAM is constrained. If you are using Pin Memory:

- Adjust sampling speed and GPU power limits appropriately
- This node can still be useful to control or limit how aggressively shared memory is used
## ✨ Features

### Core Functionality

- Dynamically adjusts `EXTRA_RESERVED_VRAM` during workflow execution
- Takes effect immediately when the node runs
- Values are specified in GB
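The setting being adjusted corresponds to ComfyUI's `EXTRA_RESERVED_VRAM` value in `comfy.model_management` (normally fixed at launch via the `--reserve-vram` flag). A hedged sketch of changing it at runtime; the attribute name and byte units are assumptions based on recent ComfyUI source, and the helper names are illustrative:

```python
GIB = 1 << 30  # bytes per GiB

def gb_to_bytes(gb: float) -> int:
    """Convert a user-facing GB value (may be negative) to bytes."""
    return int(gb * GIB)

def set_reserved_vram(gb: float) -> int:
    """Apply the value to ComfyUI's reservation setting, if running inside ComfyUI."""
    reserved = gb_to_bytes(gb)
    try:
        import comfy.model_management as mm  # present only inside a ComfyUI process
        mm.EXTRA_RESERVED_VRAM = reserved    # assumed to be stored in bytes
    except ImportError:
        pass  # standalone run: nothing to update
    return reserved
```

Verify the attribute against your installed ComfyUI version before relying on it.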
### Auto Mode

- Automatically detects currently used VRAM
- Adds (or subtracts) the user-defined offset
- Prevents multi-process or multi-workflow VRAM contention
- Supports negative offsets for fine-tuning
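The Auto mode arithmetic described above can be sketched as currently used VRAM plus a signed offset, floored at zero — an illustrative sketch, not the node's actual code:

```python
GIB = 1 << 30

def auto_reserved_bytes(used_bytes: int, offset_gb: float) -> int:
    """Auto mode: currently used VRAM plus a signed GB offset, floored at zero."""
    return max(used_bytes + int(offset_gb * GIB), 0)

# At runtime, used_bytes would come from PyTorch: e.g. this process's usage via
# torch.cuda.memory_allocated(), or device-wide usage as total minus free
# from torch.cuda.mem_get_info().
```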
### Manual Mode

- Explicitly sets the reserved VRAM value
- Ignores Auto-mode limits and calculations
- Useful for restoring defaults or enforcing strict caps
## 🔧 Advanced Node Behavior

### Random Seed Support

- Can act as a random seed node
- Re-evaluates the VRAM strategy on every run (optional)

### Optional Connections

- The front input does not need to be connected
- The back-end outputs (seed / reserved value) are optional
### VRAM Cleanup Mode

- Optional pre-run GPU memory cleanup
- Can be used as a standalone VRAM cleanup node
- Manual mode can restore the environment variable to its default value (0.6 GB)
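A pre-run cleanup of this kind typically combines Python garbage collection with PyTorch's allocator cache release. A hedged sketch, assuming only standard `gc` and `torch.cuda` calls:

```python
import gc

def cleanup_vram() -> bool:
    """Best-effort pre-run cleanup; True if a GPU cache was actually cleared."""
    gc.collect()  # release unreachable Python objects still holding tensor references
    try:
        import torch
    except ImportError:
        return False  # PyTorch not installed; nothing more to do
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # hand cached allocator blocks back to the driver
        return True
    return False
```

`empty_cache()` works on both CUDA and ROCm builds, since ROCm reuses the `torch.cuda` interface.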
### Auto Mode Safety Limit

- Caps the maximum reserved VRAM value (Auto mode only)
- Prevents excessive reservation in edge cases
- Slightly reduces Auto-mode flexibility, but improves safety
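The safety limit amounts to clamping the Auto-mode result against a user-chosen ceiling — a one-function sketch with illustrative names:

```python
GIB = 1 << 30

def clamp_reserved(reserved_bytes: int, max_gb: float) -> int:
    """Cap an Auto-mode reservation at a user-chosen maximum, given in GB."""
    return min(reserved_bytes, int(max_gb * GIB))
```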
## 🚀 Usage

1. Place the node early in your workflow.
2. Set the amount of VRAM to reserve (in GB).
3. Choose Auto or Manual mode.
4. Run the workflow; the change applies immediately.

This is especially useful when:

- Running multiple workflows or processes
- Pushing GPUs close to their VRAM limits
- Avoiding system/shared memory spillover

**Tip:** You can reserve slightly more VRAM than strictly necessary to avoid fragmentation issues.
## 📅 Changelog

### 2025-10-21: Enhanced Node Features

- Added random seed functionality
- Inputs and outputs can now be left unconnected
- Added a VRAM pre-cleanup toggle
- Added an Auto-mode maximum reservation limit

### 2025-10-10: Auto Mode Added

- Automatically detects used VRAM
- Applies the user-defined offset dynamically
- Prevents VRAM contention in multi-process environments
- Supports negative reserved values
## 🧠 How It Works (Short Version)

The node modifies ComfyUI's VRAM reservation strategy at runtime using PyTorch's memory reporting. Because ROCm and CUDA builds of PyTorch share the same reporting API, the node avoids hard dependencies on vendor-specific tools while maintaining accurate memory control.
