Problem
The built-in webfetch tool returns the entire page content directly into the conversation context. For documentation pages (TanStack, MDN, React docs, etc.), this frequently exceeds the 200k token limit:
prompt is too long: 207654 tokens > 200000 maximum
This happens within a single tool response - the page itself is too large. Mid-turn compaction (#6480) can't help because overflow occurs before compaction can trigger.
Proposed Solution
Add a Fetch tool similar to Claude Code's implementation:
- Downloads content to a temp file (e.g., .opencode/tmp/fetch-{hash}.md)
- Returns only metadata to context:
  - File path
  - Title
  - Content length
  - First ~500 chars as preview
- Agent uses the read tool to access content in chunks as needed
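To make the shape of this concrete, here is a minimal sketch of the proposed fetch-to-file flow. The names (saveFetch, FetchMeta) and the temp directory are illustrative assumptions, not opencode's actual API; a real tool would write under .opencode/tmp/ rather than the OS temp dir.

```typescript
// Sketch only: saveFetch and FetchMeta are hypothetical names for the
// proposed behavior, not existing opencode functions.
import { createHash } from "node:crypto";
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

interface FetchMeta {
  path: string;    // where the full content lives on disk
  title: string;   // best-effort page title
  length: number;  // total content length in characters
  preview: string; // first ~500 chars: the only content placed in context
}

// Write already-fetched content to a temp file and return only metadata,
// so context size stays constant regardless of page size.
function saveFetch(url: string, content: string): FetchMeta {
  const dir = join(tmpdir(), "opencode-fetch"); // stand-in for .opencode/tmp
  mkdirSync(dir, { recursive: true });
  const hash = createHash("sha256").update(url).digest("hex").slice(0, 12);
  const path = join(dir, `fetch-${hash}.md`);
  writeFileSync(path, content);
  // Try an HTML <title>, then a markdown H1, then fall back to the URL.
  const title =
    content.match(/<title>([^<]*)<\/title>/i)?.[1] ??
    content.match(/^#\s+(.+)$/m)?.[1] ??
    url;
  return { path, title, length: content.length, preview: content.slice(0, 500) };
}
```

The hash keyed on the URL also gives re-fetches of the same page a stable path, so repeated lookups could reuse the file instead of re-downloading.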
Claude Code's approach:
- Fetch downloads to temp, returns path + summary
- Agent reads sections as needed using standard file tools
- Large pages never blow context because content lives on disk
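The "reads sections as needed" step can be sketched as an offset/limit read over the saved file, mirroring how a read tool exposes large files in chunks. readChunk is a hypothetical helper for illustration, not an existing opencode function.

```typescript
// Sketch only: chunked access to the saved fetch file. The agent would
// request (offset, limit) windows instead of loading the whole page.
import { openSync, readSync, closeSync, writeFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

function readChunk(path: string, offset: number, limit: number): string {
  const fd = openSync(path, "r");
  try {
    const buf = Buffer.alloc(limit);
    // readSync returns the number of bytes actually read, which may be
    // less than limit near the end of the file.
    const bytesRead = readSync(fd, buf, 0, limit, offset);
    return buf.toString("utf8", 0, bytesRead);
  } finally {
    closeSync(fd);
  }
}
```

Note this reads byte ranges; a line-oriented read tool would instead seek by line number, but the context-saving property is the same.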
Benefits
- Any page size works - content is on disk, not in context
- Agent reads selectively - only loads what's relevant
- Consistent with existing read tool patterns for large files
- Solves a class of problems, not just individual pages
Workaround Question
Is there an existing custom tool or plugin that implements this pattern?
I see custom tools can be created in .opencode/tool/ - has anyone published a fetch-to-file tool that could serve as a workaround until this is built-in? If not, is there a recommended approach for implementing this as a custom tool?
Related Issues
- prompt is too long unrecoverable #4845 (prompt too long)
- prompt is too long #5360, prompt is too long when replacing in a file. #5478 (duplicates)
- fix(session): check for context overflow mid-turn in finish-step #6480 (mid-turn compaction - helps multi-turn but not single large fetches)