
Right-size opendatahub cluster pool: smaller workers, larger pool #76484

Closed
macgregor wants to merge 1 commit into openshift:main from macgregor:opendatahub-pool-rightsize

Conversation

macgregor (Contributor) commented Mar 18, 2026:

Right-sizes the opendatahub-ocp-4-19-amd64-aws cluster pool based on 90 days of CI utilization data. Workers are heavily overprovisioned while the pool is too small, causing a 37-41% cluster claim failure rate.

| Change | Before | After | Justification |
| --- | --- | --- | --- |
| Worker instance | m5.2xlarge | m5.xlarge | Peak utilization: 28% CPU, 29% memory. Halving raises peaks to ~57%, leaving safe headroom. |
| Pool size | 12 | 20 | 37-41% claim failure rate (549/1492 attempts). Busiest day: 174 E2E builds, ~25-38 concurrent. |
| maxSize | 40 | 48 | Burst ceiling for release crunches. |
| runningCount | 2 | 3 | Avg claim wait is 14 min (p90: 44 min). More warm clusters reduce resume latency. |

Smaller workers partially offset the cost of the larger pool.
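For reviewers unfamiliar with the pool config, here is a minimal sketch of a Hive ClusterPool spec with the new values. The `size`, `maxSize`, and `runningCount` fields are the real `hive.openshift.io/v1` ClusterPool API; the namespace, base domain, image set, and secret names below are placeholders rather than the values from this PR, and the worker instance type lives in the pool's install-config template, not in this object:

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: opendatahub-ocp-4-19-amd64-aws
  namespace: cluster-pool-namespace              # placeholder
spec:
  size: 20              # was 12: covers p95 daily demand
  maxSize: 48           # was 40: burst ceiling for release crunches
  runningCount: 3       # was 2: more warm (non-hibernated) clusters
  baseDomain: example.devcluster.openshift.com   # placeholder
  imageSetRef:
    name: ocp-4.19-imageset                      # placeholder
  platform:
    aws:
      credentialsSecretRef:
        name: aws-credentials                    # placeholder
      region: us-east-1                          # placeholder
```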

/cc @opendatahub

openshift-ci bot (Contributor) commented Mar 18, 2026:

@macgregor: GitHub didn't allow me to request PR reviews from the following users: opendatahub.

Note that only openshift members and repo collaborators can review this PR, and authors cannot review their own PRs.


In response to this:

Summary

Right-sizes the opendatahub-ocp-4-19-amd64-aws cluster pool based on 90 days of utilization data from CI builds:

  • Workers: m5.2xlarge (8 vCPU, 32 GiB) → m5.xlarge (4 vCPU, 16 GiB). Peak observed utilization across 247 builds with test-cluster data: 28% CPU (6.8/24 cores), 29% memory working set. Halving the instance size raises peak utilization to ~57% on both, which leaves safe headroom (arithmetic spelled out after this list).
  • Pool size: 12 → 20. The current claim failure rate is 37-41% (549 failures out of 1492 attempts over 30 days, worsening to 41% in the last 7 days). The busiest observed day had 174 E2E builds, with an estimated peak concurrency of 25-38 clusters. A pool of 20 covers p95 daily demand.
  • maxSize: 40 → 48. Ceiling for burst periods and release crunches.
  • runningCount: 2 → 3. Keeps more clusters warm, cutting hibernation-resume latency and claim wait time (currently 14 min avg, 44 min p90).
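Spelling out the worker-resize arithmetic from the first bullet, using only the figures above and in the data table (three workers, per the 6.8-of-24-cores figure):

$$
\frac{6.8\ \text{cores}}{3 \times 4\ \text{vCPU}} \approx 56.7\%\ \text{CPU},
\qquad
\frac{0.29 \times 32\ \text{GiB}}{16\ \text{GiB}} \approx 58\%\ \text{memory}
$$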

Data

| Metric | Value |
| --- | --- |
| Worker peak CPU (90d) | 28.3% of 24 cores (6.8 cores across 3 workers) |
| Worker peak memory working set (90d) | 29.0% per node |
| Post-resize peak CPU | 56.7% of 12 cores |
| Post-resize peak memory | 57.9% of 16 GiB per node |
| Claim failure rate (7d) | 40.9% (251/613) |
| Claim failure rate (30d) | 36.8% (549/1492) |
| Avg claim wait (successful) | 14 min |
| P90 claim wait | 44 min |
| Max claim wait | 119 min |
| Busiest day E2E builds | 174 |
| Avg E2E duration | 1.7 h |
| Peak concurrent estimate | 25-38 clusters |
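One plausible derivation of the peak-concurrency row, assuming the estimate scales daily build volume by average duration (the PR does not state the exact method):

$$
\frac{174\ \text{builds} \times 1.7\ \text{h}}{24\ \text{h}} \approx 12.3\ \text{average concurrent clusters}
$$

With bursty arrivals of roughly 2-3x the daily average, this lands in the quoted 25-38 range.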

Net effect: smaller per-cluster cost (halved worker instance size) partially offsets the larger pool, while dramatically reducing claim failures and CI queue time.

/cc @opendatahub

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

openshift-ci bot (Contributor) commented Mar 18, 2026:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: macgregor
Once this PR has been reviewed and has the lgtm label, please assign steventobin for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-ci-robot added the rehearsals-ack label (signifies that rehearsal jobs have been acknowledged) on Mar 18, 2026.
openshift-ci-robot (Contributor) commented:

[REHEARSALNOTIFIER]
@macgregor: no rehearsable tests are affected by this change

Note: If this PR includes changes to step registry files (ci-operator/step-registry/) and you expected jobs to be found, try rebasing your PR onto the base branch. This helps pj-rehearse accurately detect changes when the base branch has moved forward.

openshift-ci bot (Contributor) commented Mar 18, 2026:

@macgregor: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

macgregor (Author) commented:

/pj-rehearse

openshift-ci-robot (Contributor) commented:

@macgregor: now processing your pj-rehearse request. Please allow up to 10 minutes for jobs to trigger or cancel.

openshift-ci-robot (Contributor) commented:

@macgregor: no rehearsable tests are affected by this change

```diff
 platform:
   aws:
-    type: m5.2xlarge
+    type: m5.xlarge
```
A repository member commented on this change:

Are you sure? Are we checking total utilization, or the resources requested in the cluster? (Using m5.xlarge, I hit the limit in odh-gitops: https://github.com/opendatahub-io/odh-gitops/blob/main/.tekton/helm-chart-validation-ocp-4.19.yaml#L42.) It also depends on the number of nodes.
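To illustrate the distinction the reviewer is drawing (all values hypothetical): Kubernetes schedules on requested resources against node allocatable capacity, not on observed utilization, so a test pod like the following would fail to schedule on an m5.xlarge (4 vCPU, 16 GiB) even when the cluster's measured utilization is low:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: helm-chart-validation    # hypothetical pod, not the actual odh-gitops task
spec:
  containers:
  - name: validate
    image: quay.io/example/validator:latest   # placeholder image
    resources:
      requests:
        cpu: "6"        # more than the 4 vCPU an m5.xlarge can allocate
        memory: 20Gi    # more than its 16 GiB of total memory
```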

macgregor (Author) commented:

Going to hold off on this until we have a more complete worker-utilization dataset.

macgregor closed this on Mar 19, 2026.

Labels

rehearsals-ack Signifies that rehearsal jobs have been acknowledged
