fix(cloudflare): Send correct events in local development #19900
Conversation
Semver Impact of This PR: 🟢 Patch (bug fixes)
Lms24 left a comment:
Looks reasonable to me! So to confirm: the behavior when deploying to Cloudflare was unaffected? I think the change makes sense either way. Was just curious based on our convo yesterday.
packages/cloudflare/src/flush.ts (Outdated)

```ts
await flush(timeout);
}

client?.dispose();
```
super-l: We could get rid of the optional chaining by moving the dispose call into the if block above. Feel free to disregard though.
That is actually great. I'll turn it around and make it a fail-fast approach.
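The fail-fast shape discussed here could look like the following sketch. Note that `flushAndDispose` and the minimal `Client` interface are illustrative stand-ins, not the SDK's actual types or code:

```typescript
// Minimal stand-in for the SDK client; the real interface differs.
interface Client {
  flush(timeout?: number): Promise<boolean>;
  dispose(): void;
}

// Fail-fast: bail out early when there is no client, so the happy path
// below needs neither optional chaining nor a trailing null check.
async function flushAndDispose(
  client: Client | undefined,
  timeout: number,
): Promise<boolean> {
  if (!client) {
    return false; // nothing to flush or dispose
  }
  const flushed = await client.flush(timeout);
  client.dispose();
  return flushed;
}
```

The early return keeps the flush-then-dispose sequence on a single unguarded path, which is what makes the `?.` unnecessary.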
Yes, in production it is unaffected, as a Durable Object lives in a different isolate than the worker itself. But multiple calls to the same Durable Object could potentially live in the same isolate.
When using `wrangler dev`, an isolation scope is spawned as in production, but there is only one for both the worker and the Durable Object (DO), while in production each gets its own. In local development this can break the flush: the RPC happens inside the worker, so by the time the DO is called, the worker's initial async local storage (ALS) context has been overwritten by the DO. That leads to the wrong client when the worker flushes after all calls are done. Here is a small Mermaid diagram to illustrate this:

```mermaid
sequenceDiagram
    participant W as Worker
    participant ALS as AsyncLocalStorage
    participant DO as Durable Object
    participant S as Sentry Proxy
    W->>ALS: Initialize ALS context (Worker client)
    W->>DO: stub.fetch(request)
    DO->>ALS: Initialize ALS context (DO client)
    Note over ALS: ALS now holds DO client<br/>(single-threaded wrangler dev<br/>overwrites Worker context)
    DO->>S: flush() ✅ (DO client correct)
    DO-->>W: Response
    W->>ALS: getIsolationScope().getClient()
    Note over W,ALS: ❌ Returns DO client instead<br/>of Worker client
    W->>S: flush() ❌ (wrong client)
    Note over W: Worker events not flushed<br/>or flushed to wrong client
```

Since we already have the `client` available inside the flush function (as we pass it through), we can call it directly if it is available.

Closes #19901 (added automatically)
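The idea can be sketched as follows. The `scopeClient` variable, `flushWithTimeout`, and the `MinimalClient` interface are illustrative assumptions standing in for the SDK's ALS-backed isolation scope and flush internals, not the actual implementation:

```typescript
// Stand-in for the SDK client; the real interface differs.
interface MinimalClient {
  flush(timeout?: number): Promise<boolean>;
}

// Stand-in for the ALS-backed isolation scope. Under `wrangler dev` the
// Worker and the DO share one such scope, so the DO can overwrite it.
let scopeClient: MinimalClient | undefined;

// Prefer the client that was explicitly passed through; only fall back
// to the (possibly overwritten) isolation scope when none was passed.
async function flushWithTimeout(
  passedClient: MinimalClient | undefined,
  timeout: number,
): Promise<boolean> {
  const client = passedClient ?? scopeClient;
  if (!client) {
    return false;
  }
  return client.flush(timeout);
}
```

Because the worker already holds a reference to its own client, flushing via that reference is immune to the shared-scope overwrite that `wrangler dev` introduces.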