feat(connector): implement multitransport bootstrapping handshake #1098
This is a design proposal — looking for feedback on the API approach before wiring up the async/blocking drivers.
Makes the MultitransportBootstrapping state functional instead of a no-op pass-through.
After licensing, the server may send 0, 1, or 2 Initiate Multitransport Request PDUs before starting capabilities exchange. This PR reads those PDUs by peeking at the BasicSecurityHeader flags (SEC_TRANSPORT_REQ), then pauses in a new MultitransportPending state so the application can establish UDP transport (RDPEUDP2 + TLS + RDPEMT) or decline.
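A minimal sketch of the peeking step described above, using stand-in types rather than IronRDP's actual ones. The basic security header starts with a little-endian `flags` field, and `SEC_TRANSPORT_REQ` (0x0002 per MS-RDPBCGR) marks an Initiate Multitransport Request; the exact parsing in the PR may differ.

```rust
/// SEC_TRANSPORT_REQ flag from the BasicSecurityHeader (MS-RDPBCGR).
const SEC_TRANSPORT_REQ: u16 = 0x0002;

/// Peek at a buffered PDU and report whether it carries an
/// Initiate Multitransport Request, without consuming it.
fn is_multitransport_request(payload: &[u8]) -> bool {
    // The basic security header is two little-endian u16s (flags, flagsHi);
    // we need at least those 4 bytes to peek.
    if payload.len() < 4 {
        return false;
    }
    let flags = u16::from_le_bytes([payload[0], payload[1]]);
    flags & SEC_TRANSPORT_REQ != 0
}

fn main() {
    // 0x0002 in the leading flags field marks a transport request.
    assert!(is_multitransport_request(&[0x02, 0x00, 0x00, 0x00]));
    // Any other flags (e.g. SEC_ENCRYPT) fall through to the normal path.
    assert!(!is_multitransport_request(&[0x08, 0x00, 0x00, 0x00]));
    println!("ok");
}
```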
The API follows the existing should_perform_X() / mark_X_as_done() pattern used by TLS upgrade and CredSSP.
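A sketch of what that pattern could look like here; the state and method names (`MultitransportPending`, `mark_multitransport_as_done`) are illustrative, not necessarily the ones in this PR.

```rust
#[derive(Debug, PartialEq)]
enum ConnectorState {
    MultitransportBootstrapping,
    /// Paused: waiting for the application to establish or decline UDP transport.
    MultitransportPending,
    CapabilitiesExchange,
}

struct ClientConnector {
    state: ConnectorState,
}

impl ClientConnector {
    /// True once the connector has paused so the application can act.
    fn should_perform_multitransport(&self) -> bool {
        self.state == ConnectorState::MultitransportPending
    }

    /// Called after the application has set up RDPEUDP2 + TLS + RDPEMT,
    /// or decided to skip multitransport; resumes the connection sequence.
    fn mark_multitransport_as_done(&mut self) {
        debug_assert!(self.should_perform_multitransport());
        self.state = ConnectorState::CapabilitiesExchange;
    }
}

fn main() {
    let mut connector = ClientConnector {
        state: ConnectorState::MultitransportPending,
    };
    assert!(connector.should_perform_multitransport());
    connector.mark_multitransport_as_done();
    assert!(!connector.should_perform_multitransport());
    println!("ok");
}
```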
Open question: the Demand Active PDU that signals the end of multitransport bootstrapping arrives while we're still in MultitransportBootstrapping. When the connector transitions to MultitransportPending, this PDU needs to be buffered and re-fed after the application completes. Two options:
(a) The driving code (ironrdp-async/ironrdp-blocking) buffers the PDU externally and re-feeds it. This keeps the connector stateless w.r.t. buffering but requires changes to connect_finalize().
(b) The connector buffers the PDU internally in MultitransportPending and replays it when complete_multitransport() / skip_multitransport() is called. This is self-contained but adds buffer state to the connector.
I've gone with (a) in this draft — the connector doesn't buffer. Feedback on which approach is preferred would be helpful before wiring up the async/blocking drivers.
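To make option (a) concrete, here is a minimal stand-in for the driving loop's external buffer; the `Driver` type and method names are hypothetical, not code from ironrdp-async/ironrdp-blocking.

```rust
/// Stand-in for the driving loop in option (a): it holds the PDU that
/// arrived while the connector was paused and replays it afterwards,
/// so the connector itself stays free of buffering state.
struct Driver {
    /// PDU received during MultitransportPending, replayed later.
    deferred_pdu: Option<Vec<u8>>,
}

impl Driver {
    fn new() -> Self {
        Self { deferred_pdu: None }
    }

    /// Called when a PDU arrives while the connector is paused:
    /// keep it instead of feeding it to the state machine.
    fn defer(&mut self, pdu: Vec<u8>) {
        debug_assert!(self.deferred_pdu.is_none(), "only one PDU expected");
        self.deferred_pdu = Some(pdu);
    }

    /// Called once multitransport is completed or skipped: returns the
    /// buffered PDU so it can be fed to the connector as if freshly read.
    fn take_deferred(&mut self) -> Option<Vec<u8>> {
        self.deferred_pdu.take()
    }
}

fn main() {
    let mut driver = Driver::new();
    driver.defer(vec![0xAA, 0xBB]); // stand-in for the Demand Active PDU
    assert_eq!(driver.take_deferred(), Some(vec![0xAA, 0xBB]));
    assert!(driver.take_deferred().is_none()); // replayed only once
    println!("ok");
}
```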
Builds on: #1091
Related: #140