
Add SPOG support for ?o= routing in httpPath #1316

Open
msrathore-db wants to merge 2 commits into databricks:main from msrathore-db:spog-support-v2

Conversation


@msrathore-db msrathore-db commented Mar 24, 2026

Summary

  • Fix property parser to preserve ?o= value in httpPath (use indexOf to split on first = only)
  • Fix warehouse ID regex to stop at query params ((.+) -> ([^?&]+))
  • Auto-extract ?o=<workspaceId> from httpPath and inject x-databricks-org-id header on all HTTP requests (Thrift, SEA, telemetry, feature flags)
  • Propagate custom headers to telemetry and feature flag clients
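
The first bullet, splitting a property on the first `=` only, can be sketched as below. This is an illustrative standalone helper, not the actual code in DatabricksConnectionContext.java; `splitFirstEquals` is a hypothetical name.

```java
// Sketch: split a "key=value" property on the FIRST '=' only, so values that
// themselves contain '=' (e.g. an httpPath ending in "?o=123") are preserved.
public class PropertySplitSketch {
    /** Returns {key, value}, splitting on the first '='; null if no '=' present. */
    public static String[] splitFirstEquals(String pair) {
        int idx = pair.indexOf('=');
        if (idx < 0) {
            return null; // not a key=value pair
        }
        return new String[] {pair.substring(0, idx), pair.substring(idx + 1)};
    }

    public static void main(String[] args) {
        String[] kv = splitFirstEquals("httpPath=/sql/1.0/warehouses/abc123?o=456");
        // A naive pair.split("=") would have truncated the value at the second '='.
        System.out.println(kv[0] + " -> " + kv[1]);
    }
}
```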

Context

SPOG (Single Pane of Glass) replaces workspace-specific hostnames with account-level vanity URLs. Per the SPOG Peco Clients doc, the Thrift endpoint format changes to {spogHost}/sql/1.0/warehouses/xxx?o=yyy. The driver needs to parse this ?o= parameter and attach the workspace ID to all HTTP requests it makes.
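
Parsing the endpoint format above might look like the following sketch. The real patterns live in DatabricksJdbcConstants and may differ; the point is that `[^?&]+` stops the warehouse-id capture before any query string, while a greedy `(.+)` would have swallowed `?o=yyy` into the warehouse ID.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative parsing of a SPOG-style httpPath: {spogHost}/sql/1.0/warehouses/xxx?o=yyy
public class SpogPathSketch {
    // "[^?&]+" stops the capture at the start of any query parameter.
    private static final Pattern WAREHOUSE = Pattern.compile("/sql/1\\.0/warehouses/([^?&]+)");
    private static final Pattern ORG_ID = Pattern.compile("[?&]o=([^?&]+)");

    public static String warehouseId(String httpPath) {
        Matcher m = WAREHOUSE.matcher(httpPath);
        return m.find() ? m.group(1) : null;
    }

    public static String workspaceId(String httpPath) {
        Matcher m = ORG_ID.matcher(httpPath);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String path = "/sql/1.0/warehouses/abc123?o=456";
        System.out.println(warehouseId(path) + " / " + workspaceId(path));
    }
}
```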

Jira: XTA-15079

Test plan

  • DBSQL Warehouse + PAT (Thrift and SEA) on SPOG host with ?o=
  • GP Cluster + PAT (Thrift) on SPOG host with ?o=
  • OAuth M2M (Thrift and SEA) on SPOG host with ?o=
  • OAuth U2M (Thrift and SEA) on SPOG host with ?o=
  • Telemetry endpoint returns 200 on SPOG with org-id header (verified via instrumented logging)
  • Feature flags endpoint returns 200 on SPOG with org-id header (verified via instrumented logging)
  • Legacy host connections unaffected (no regression)
  • Missing ?o= routing info on a SPOG host correctly fails with SQLException

NO_CHANGELOG=true

This pull request was AI-assisted by Isaac.

SPOG replaces workspace-specific hostnames with account-level vanity URLs.
This requires workspace routing info (?o=<workspaceId> or x-databricks-org-id
header) on all HTTP requests to the SPOG host.

Fixes:
1. Property parser: use indexOf to split on first '=' only, so values
   containing '=' (like httpPath with ?o=) are preserved correctly
   (DatabricksConnectionContext.java)
2. Warehouse ID regex: (.+) -> ([^?&]+) to stop at query params
   (DatabricksJdbcConstants.java)
3. Auto-inject x-databricks-org-id header from ?o= in httpPath into custom
   headers map, propagating to Thrift, SEA, telemetry, and feature flags
   (DatabricksConnectionContext.java)
4. Propagate custom headers to telemetry and feature flag clients
   (TelemetryPushClient.java, DatabricksDriverFeatureFlagsContext.java)
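
Fix 3 above (deriving the x-databricks-org-id header from ?o= and merging it into the custom headers map that downstream clients receive) can be sketched roughly as follows. This is a simplified standalone version, not the actual DatabricksConnectionContext code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: if httpPath carries "?o=<workspaceId>", inject it as the
// x-databricks-org-id header into the custom headers map shared by the
// Thrift, SEA, telemetry, and feature flag clients.
public class OrgIdHeaderSketch {
    private static final Pattern ORG_ID = Pattern.compile("[?&]o=([^?&]+)");

    public static Map<String, String> withOrgIdHeader(String httpPath, Map<String, String> customHeaders) {
        Map<String, String> headers = new HashMap<>(customHeaders);
        Matcher m = ORG_ID.matcher(httpPath);
        if (m.find()) {
            headers.put("x-databricks-org-id", m.group(1));
        }
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> headers =
            withOrgIdHeader("/sql/1.0/warehouses/abc?o=42", new HashMap<>());
        System.out.println(headers);
    }
}
```

On a legacy (non-SPOG) httpPath with no ?o=, the map is returned unchanged, which matches the "no regression" item in the test plan.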

Co-authored-by: Isaac
Signed-off-by: Madhavendra Rathore <madhavendra.rathore@databricks.com>
DBFSVolumeClient uses the SDK's ApiClient directly for /api/2.0/fs/*
endpoints. These requests need the x-databricks-org-id header for SPOG
routing, same as telemetry and feature flags.

Add connectionContext.getCustomHeaders() to all 5 HTTP request paths in
DBFSVolumeClient (4 sync via apiClient, 1 async via requestBuilder).

Co-authored-by: Isaac
Signed-off-by: Madhavendra Rathore <madhavendra.rathore@databricks.com>
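
The async request-builder path described above, copying the connection context's custom headers onto an outgoing request, could look like this sketch. DBFSVolumeClient's actual builder type differs; `java.net.http.HttpRequest` stands in here for illustration.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.Map;

// Sketch: propagate custom headers (including x-databricks-org-id) onto a
// request builder before the request is sent, mirroring the DBFSVolumeClient change.
public class HeaderPropagationSketch {
    public static HttpRequest.Builder applyCustomHeaders(
            HttpRequest.Builder builder, Map<String, String> customHeaders) {
        customHeaders.forEach(builder::header);
        return builder;
    }

    public static void main(String[] args) {
        HttpRequest request = applyCustomHeaders(
                HttpRequest.newBuilder(URI.create("https://spog.example.com/api/2.0/fs/files")),
                Map.of("x-databricks-org-id", "123"))
            .build();
        System.out.println(request.headers().firstValue("x-databricks-org-id").orElse("missing"));
    }
}
```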
