feat(provider): add POST /v1/bid-screening endpoint (Stage 1)#3055
Conversation
📝 Walkthrough

A new bid-screening API feature is being added to screen providers based on resource requirements, constraints, and availability. The implementation includes a controller, HTTP schemas for request/response validation, and a router handling the new `POST /v1/bid-screening` route.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Controller as BidScreeningController
    participant Router as Route Handler
    participant Service as BidScreeningService
    participant Database as Chain DB
    Client->>Router: POST /v1/bid-screening (BidScreeningRequest)
    Router->>Router: Validate request schema
    Router->>Controller: Resolve from container & call screen(data)
    Controller->>Service: findMatchingProviders(data)
    activate Service
    Service->>Service: aggregateResources(resources)
    Service->>Service: buildQuery (COUNT mode)
    Service->>Service: buildQuery (main query with LIMIT)
    par Concurrent Execution
        Service->>Database: Execute COUNT query
        Database-->>Service: Return total count
    and
        Service->>Database: Execute main query
        Database-->>Service: Return provider rows
    end
    alt total count == 0
        Service->>Service: diagnoseConstraints()
        loop For each constraint type
            Service->>Database: Execute targeted COUNT query
        end
        Service-->>Service: Populate constraints array
    end
    deactivate Service
    Service-->>Controller: Return BidScreeningResult
    Controller-->>Router: Return wrapped response
    Router-->>Client: 200 JSON (BidScreeningResponse)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 2 passed
❌ 2 Tests Failed:
View the top 2 failed test(s) by shortest run time
To view more test analytics, go to the Test Analytics Dashboard.
Actionable comments posted: 7
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/src/provider/services/bid-screening/bid-screening.service.spec.ts`:
- Around line 385-393: The test "uses default limit of 50 when not provided" is
incorrectly passing limit: 50; update the test invocation of
service.findMatchingProviders in the spec so it omits the limit argument (or
passes undefined) to exercise the defaulting path, and keep assertions that
verify the service/chainDb behavior (e.g., that chainDb.query was called with a
LIMIT of 50 or that the result respects the default) — locate this in the test
using setup(), chainDb.query, and service.findMatchingProviders.
In `@apps/api/src/provider/services/bid-screening/bid-screening.service.ts`:
- Around line 403-413: diagnoseConstraints() currently only adds a check for
requirements.signedBy.anyOf so it never reports blockers when signedBy.allOf
fails; add a parallel check for requirements.signedBy.allOf that mirrors
buildQuery() semantics: push a check named "Auditor signature (allOf)" that
counts providers whose distinct pas.auditor IN (:auditors) and uses GROUP BY
p.owner HAVING COUNT(DISTINCT pas.auditor) = :requiredCount (replace :auditors
with requirements.signedBy.allOf and :requiredCount with its length), include
appropriate feedback text so the diagnostics payload reports missing auditors
when the allOf constraint is unmet.
- Around line 98-117: aggregateResources currently flattens per-resource GPU
attributes and persistentStorageClass into single variables (gpuVendor,
gpuModel, gpuInterface, gpuMemorySize, persistentStorageClass) which collapses
heterogeneous constraints; update aggregateResources to detect mixed values
across resources and either 1) reject the request early by throwing/returning an
error when multiple distinct GPU attribute sets or multiple
persistentStorageClass values are present, or 2) collect unique GPU attribute
tuples and unique persistentStorageClass values (e.g., sets/lists) and return
them so the calling code (e.g., the SQL predicate builder in
bid-screening.service) can generate separate predicates for each unique GPU
attribute tuple and each storage class instead of applying only the last-seen
value—modify references to gpuVendor/gpuModel/persistentStorageClass accordingly
so callers handle multiple constraints.
- Around line 215-226: The current code adds a pre-group WHERE filter
(wheres.push(`pas.auditor IN (:anyOfAuditors)`) ) which removes rows needed for
the later COUNT(*) FILTER checks for requirements.signedBy.allOf; instead set
replacements.anyOfAuditors = requirements.signedBy.anyOf but do NOT add a WHERE
on pas.auditor; add a HAVING clause that expresses the anyOf requirement against
the unfiltered pas rows, e.g. push a havingClauses entry like `COUNT(*) FILTER
(WHERE pas.auditor IN (:anyOfAuditors)) > 0` (or equivalent SUM/EXISTS
aggregate) so both anyOf and allOf are evaluated from the same joined signature
set; keep the join on "providerAttributeSignature" pas and the existing loop
that adds COUNT(*) FILTER clauses for allOf auditors.
In `@docs/superpowers/plans/2026-04-09-bid-precheck-stage1.md`:
- Line 5: The document still uses the old "bid-precheck" name and path in
multiple places; update all occurrences of the string "bid-precheck" and the
path "POST /v1/bid-precheck" to "bid-screening" and "POST /v1/bid-screening"
respectively (including headings, examples, API paths, links, and any
cross-references), and ensure any associated labels/IDs, prose, or numbered
sections that reference the old term (notably the blocks around the earlier
mention and the ranges you noted) are renamed consistently so the plan matches
the implemented API.
In `@docs/superpowers/specs/2026-04-09-bid-precheck-stage1-design.md`:
- Around line 25-33: The spec uses inconsistent terminology: rename all
BidPrecheck* symbols and any `bid-precheck` file/path references to use the
current `bid-screening` naming to match the endpoint `POST /v1/bid-screening`
(e.g., change BidPrecheckRequest -> BidScreeningRequest and update related
types, comments, and example file paths), and update any documentation text that
mentions `bid-precheck` to `bid-screening` so all occurrences (including the
items noted around lines 56-57, 119-123, 129-132) are consistent with the
endpoint.
- Around line 14-16: The markdown fenced code blocks in this spec are missing
language identifiers (MD040); update each fence to include the appropriate
language hint (for example use ```http for the POST route block and ```text for
file lists/diagrams) in the blocks shown (the POST /v1/bid-screening block and
the file-list/flow blocks referenced around lines 118-124 and 128-135) so static
analysis no longer flags them; ensure each opening triple-backtick is followed
by the chosen language token (e.g., http or text) to match the content of that
fence.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: eb7b90ee-eba7-43ec-8a39-386528eeb2db
📒 Files selected for processing (9)

- apps/api/src/provider/controllers/bid-screening/bid-screening.controller.ts
- apps/api/src/provider/http-schemas/bid-screening.schema.ts
- apps/api/src/provider/routes/bid-screening/bid-screening.router.ts
- apps/api/src/provider/routes/index.ts
- apps/api/src/provider/services/bid-screening/bid-screening.service.spec.ts
- apps/api/src/provider/services/bid-screening/bid-screening.service.ts
- apps/api/src/rest-app.ts
- docs/superpowers/plans/2026-04-09-bid-precheck-stage1.md
- docs/superpowers/specs/2026-04-09-bid-precheck-stage1-design.md
```ts
for (const ru of resources) {
  totalCpu += ru.cpu * ru.count;
  totalMemory += ru.memory * ru.count;
  totalGpu += ru.gpu * ru.count;
  totalEphemeralStorage += ru.ephemeralStorage * ru.count;
  totalPersistentStorage += (ru.persistentStorage ?? 0) * ru.count;

  if (ru.gpu > 0) {
    maxPerReplicaGpu = Math.max(maxPerReplicaGpu, ru.gpu);
    if (ru.gpuAttributes) {
      gpuVendor = ru.gpuAttributes.vendor;
      gpuModel = ru.gpuAttributes.model;
      gpuInterface = ru.gpuAttributes.interface;
      gpuMemorySize = ru.gpuAttributes.memorySize;
    }
  }

  if (ru.persistentStorage && ru.persistentStorageClass) {
    persistentStorageClass = ru.persistentStorageClass;
  }
```
Heterogeneous resource constraints get collapsed incorrectly.
aggregateResources() sums all resource requests, but it keeps only the last GPU attribute set and last persistent storage class it sees. If the request contains multiple resource entries with different GPU models or storage classes, the generated SQL will enforce only one of them and can return providers that cannot satisfy the full spec. Please either reject mixed per-resource constraints up front or preserve them as separate predicates instead of flattening them into one aggregate object.
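The second option, preserving distinct constraint sets, can be keyed on the full attribute tuple. A minimal sketch; the `ResourceUnit` and `GpuAttributes` shapes here are simplified assumptions, not the service's actual types:

```typescript
// Sketch: preserve heterogeneous per-resource GPU constraints instead of
// overwriting them with the last-seen values. Types are simplified assumptions.
type GpuAttributes = { vendor: string; model?: string; interface?: string; memorySize?: string };
type ResourceUnit = { gpu: number; gpuAttributes?: GpuAttributes };

function collectGpuConstraints(resources: ResourceUnit[]): GpuAttributes[] {
  const unique = new Map<string, GpuAttributes>();
  for (const ru of resources) {
    if (ru.gpu > 0 && ru.gpuAttributes) {
      // Key on the serialized tuple so each distinct constraint set survives;
      // the SQL builder can then emit one predicate group per entry.
      unique.set(JSON.stringify(ru.gpuAttributes), ru.gpuAttributes);
    }
  }
  return Array.from(unique.values());
}
```

Rejecting the request up front when `collectGpuConstraints(...).length > 1` would implement the first option with the same helper.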
```ts
if (requirements.signedBy.anyOf.length > 0 || requirements.signedBy.allOf.length > 0) {
  joins.push(`INNER JOIN "providerAttributeSignature" pas ON pas.provider = p.owner`);

  if (requirements.signedBy.anyOf.length > 0) {
    replacements.anyOfAuditors = requirements.signedBy.anyOf;
    wheres.push(`pas.auditor IN (:anyOfAuditors)`);
  }

  requirements.signedBy.allOf.forEach((auditor, i) => {
    replacements[`allOfAuditor${i}`] = auditor;
    havingClauses.push(`COUNT(*) FILTER (WHERE pas.auditor = :allOfAuditor${i}) > 0`);
  });
```
signedBy.anyOf breaks signedBy.allOf when both are present.
Filtering pas with WHERE pas.auditor IN (:anyOfAuditors) removes rows needed by the later HAVING COUNT(*) FILTER (...) > 0 checks for allOf. A provider signed by both sets can still be excluded if an allOf auditor is not also in anyOf. anyOf needs to be expressed as an aggregate/exists condition against the unfiltered signature set, not as a pre-group WHERE.
Suggested fix

```diff
 if (requirements.signedBy.anyOf.length > 0 || requirements.signedBy.allOf.length > 0) {
   joins.push(`INNER JOIN "providerAttributeSignature" pas ON pas.provider = p.owner`);
   if (requirements.signedBy.anyOf.length > 0) {
     replacements.anyOfAuditors = requirements.signedBy.anyOf;
-    wheres.push(`pas.auditor IN (:anyOfAuditors)`);
+    havingClauses.push(`COUNT(*) FILTER (WHERE pas.auditor IN (:anyOfAuditors)) > 0`);
   }
   requirements.signedBy.allOf.forEach((auditor, i) => {
     replacements[`allOfAuditor${i}`] = auditor;
     havingClauses.push(`COUNT(*) FILTER (WHERE pas.auditor = :allOfAuditor${i}) > 0`);
   });
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (requirements.signedBy.anyOf.length > 0 || requirements.signedBy.allOf.length > 0) {
  joins.push(`INNER JOIN "providerAttributeSignature" pas ON pas.provider = p.owner`);
  if (requirements.signedBy.anyOf.length > 0) {
    replacements.anyOfAuditors = requirements.signedBy.anyOf;
    havingClauses.push(`COUNT(*) FILTER (WHERE pas.auditor IN (:anyOfAuditors)) > 0`);
  }
  requirements.signedBy.allOf.forEach((auditor, i) => {
    replacements[`allOfAuditor${i}`] = auditor;
    havingClauses.push(`COUNT(*) FILTER (WHERE pas.auditor = :allOfAuditor${i}) > 0`);
  });
}
```
```ts
if (requirements.signedBy.anyOf.length > 0) {
  checks.push({
    name: "Auditor signature (anyOf)",
    sql: `SELECT COUNT(DISTINCT p.owner) AS c FROM provider p
      INNER JOIN "providerAttributeSignature" pas ON pas.provider = p.owner
      WHERE p."deletedHeight" IS NULL AND p."isOnline" = true
        AND pas.auditor IN (:auditors)`,
    replacements: { auditors: requirements.signedBy.anyOf },
    feedback: `Few providers are signed by the required auditor(s)`
  });
}
```
Constraint diagnosis never reports signedBy.allOf blockers.
buildQuery() enforces requirements.signedBy.allOf, but diagnoseConstraints() only adds a check for signedBy.anyOf. When zero providers match because one required auditor is missing, the diagnosis payload will miss the real blocker and return misleading feedback. Please add an allOf diagnosis path here, ideally mirroring the main query semantics.
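A hedged sketch of what that parallel check could look like, wrapping the grouped query in a subquery so the check still returns a single count. The `Check` shape and exact SQL here are assumptions modeled on the `anyOf` check above, not the service's actual code:

```typescript
// Hedged sketch of an "allOf" diagnosis check; shape and SQL are assumptions.
type Check = { name: string; sql: string; replacements: Record<string, unknown>; feedback: string };

function buildAllOfCheck(allOf: string[]): Check | null {
  if (allOf.length === 0) return null;
  return {
    name: "Auditor signature (allOf)",
    // Wrap the grouped query so the diagnostic still yields one count column:
    // an owner matches only if it holds a signature from every required auditor.
    sql: `SELECT COUNT(*) AS c FROM (
      SELECT p.owner FROM provider p
      INNER JOIN "providerAttributeSignature" pas ON pas.provider = p.owner
      WHERE p."deletedHeight" IS NULL AND p."isOnline" = true
        AND pas.auditor IN (:auditors)
      GROUP BY p.owner
      HAVING COUNT(DISTINCT pas.auditor) = :requiredCount
    ) matched`,
    replacements: { auditors: allOf, requiredCount: allOf.length },
    feedback: "No providers are signed by all of the required auditors"
  };
}
```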
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Build a `POST /v1/bid-precheck` API endpoint that filters providers from the indexer database by resource capacity, GPU model, storage class, attributes, and auditor signatures — returning ranked candidates for Stage 2 provider-side bid screening.
Update the plan terminology to bid-screening to match the implemented API.
Line 5 and multiple later sections still use bid-precheck names/paths, while the shipped endpoint is POST /v1/bid-screening. This makes the implementation plan stale and easy to misapply in future changes.
Also applies to: 19-27, 52-53, 1046-1148
```ts
BidPrecheckRequest {
  resources: ResourceUnit[]  // GroupSpec.resources (repeated ResourceUnit)
  requirements: {            // GroupSpec.requirements (PlacementRequirements)
    attributes: { key: string, value: string }[]
    signedBy: { allOf: string[], anyOf: string[] }
  }
  limit?: number             // default 50, max 200
}
```
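For illustration, a request matching this shape might look like the following. All values, units, and the auditor address are hypothetical, not taken from the spec:

```typescript
// Hypothetical request payload for POST /v1/bid-screening; numbers, units,
// and addresses below are illustrative assumptions only.
const exampleRequest = {
  resources: [
    {
      count: 2,
      cpu: 1000,                 // unit assumed, e.g. millicores
      memory: 2_147_483_648,     // unit assumed, e.g. bytes
      gpu: 1,
      gpuAttributes: { vendor: "nvidia", model: "a100" },
      ephemeralStorage: 1_073_741_824,
      persistentStorage: 10_737_418_240,
      persistentStorageClass: "beta3"
    }
  ],
  requirements: {
    attributes: [{ key: "region", value: "us-west" }],
    signedBy: { allOf: [], anyOf: ["akash1exampleauditoraddress"] }
  },
  limit: 50 // default 50, max 200
};
```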
The spec mixes bid-screening and bid-precheck terminology.
The current endpoint is POST /v1/bid-screening, but these sections still document BidPrecheck* names and bid-precheck file paths. This is likely to mislead future maintenance and implementation work.
Also applies to: 56-57, 119-123, 129-132
Actionable comments posted: 3
♻️ Duplicate comments (3)
apps/api/src/provider/services/bid-screening/bid-screening.service.ts (3)
**401-411:** ⚠️ Potential issue | 🟠 Major

Add an `allOf` auditor diagnosis path. The main query enforces `requirements.signedBy.allOf`, but the diagnosis only checks `anyOf`, so zero-match responses can miss the actual blocking auditor requirement.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/api/src/provider/services/bid-screening/bid-screening.service.ts` around lines 401 - 411, The diagnostic only adds an "Auditor signature (anyOf)" check but misses a parallel check for requirements.signedBy.allOf, so failing results can hide that a provider must be signed by all listed auditors; add a second check entry to the checks array (similar to the existing one) named e.g. "Auditor signature (allOf)" that uses requirements.signedBy.allOf as replacements and SQL that enforces all auditors are present (join providerAttributeSignature pas, filter pas.auditor IN (:auditors), GROUP BY p.owner HAVING COUNT(DISTINCT pas.auditor) = :numAuditors) and pass numAuditors = requirements.signedBy.allOf.length so the diagnosis reports providers missing one or more required auditors.

**213-224:** ⚠️ Potential issue | 🟠 Major

Don't filter `pas` in `WHERE` when `allOf` is evaluated in `HAVING`. `WHERE pas.auditor IN (:anyOfAuditors)` removes signature rows before the `COUNT(*) FILTER` checks run, so `allOf` can fail for providers that actually have every required auditor. `anyOf` needs to be expressed as an aggregate predicate on the unfiltered joined signature set.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@apps/api/src/provider/services/bid-screening/bid-screening.service.ts` around lines 213 - 224, The current code adds a WHERE filter `pas.auditor IN (:anyOfAuditors)` which prunes signature rows before the `COUNT(*) FILTER` HAVING checks, breaking `requirements.signedBy.allOf`; remove the wheres push for `pas.auditor` and instead keep setting `replacements.anyOfAuditors` but express `anyOf` as an aggregate HAVING predicate against the unfiltered `pas` join (e.g., push a `havingClauses` entry like `COUNT(*) FILTER (WHERE pas.auditor IN (:anyOfAuditors)) > 0`), while retaining the existing `requirements.signedBy.allOf` loop that adds `COUNT(*) FILTER (WHERE pas.auditor = :allOfAuditor{i}) > 0`.

**99-119:** ⚠️ Potential issue | 🟠 Major

Reject mixed GPU/storage constraints instead of taking the last one. `aggregateResources()` overwrites GPU attributes and `persistentStorageClass` on each iteration, so heterogeneous resource entries get flattened into a single predicate set. That can return providers that satisfy only the last entry rather than the full request.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/api/src/provider/services/bid-screening/bid-screening.service.ts`:
- Around line 366-384: The persistent-storage diagnostic always joins
providerSnapshotStorage (pss) and requires a single pss row to satisfy the size,
which diverges from the main query that uses ps."availablePersistentStorage"
when no class is requested; update the check generation in the
agg.hasPersistentStorage branch so that when agg.persistentStorageClass is set
you keep the current pss JOIN and storageWhere logic using pss and
storageReplacements.storageClass, but when agg.persistentStorageClass is NOT
set, remove the pss JOIN and instead set storageWhere to
`ps."availablePersistentStorage" >= :totalPersistentStorage` and
storageReplacements to only include totalPersistentStorage; ensure the final
checks.push SQL uses the same base FROM and WHERE (DISTINCT p.owner, ps from
"providerSnapshot") and feedback remains unchanged so the diagnostic aligns with
the main query behavior.
- Around line 328-363: The diagnosis only filters by gpu.vendor and gpu.name
while buildQuery() may also constrain gpu.interface and gpu."memorySize"; update
the GPU checks in buildGpu-related logic (the checks pushed where name: `GPU
model (${modelDesc})` and `GPU per-node (${agg.maxPerReplicaGpu}x
${modelDesc})`) to mirror any additional GPU filters present in buildQuery():
extend gpuWhere and gpuReplacements to include conditions for gpu.interface and
gpu."memorySize" when agg provides them, append those attributes to
modelDesc/feedback so messages reflect the full filter set, and ensure the
replacements object passed into both checks includes interface and memorySize
keys where used.
- Around line 171-193: The per-replica GPU fit check is only added when
agg.hasGpuAttributes is true, allowing providers to pass if total GPUs are
enough but no single node can satisfy per-replica GPU; move the per-node check
out so it runs whenever agg.totalGpu > 0: ensure you always add the
providerSnapshotNode join (`joins.push('INNER JOIN "providerSnapshotNode" psn ON
psn."snapshotId" = ps.id')`) and always set `replacements.perNodeGpu =
agg.maxPerReplicaGpu` and push the where clause `(psn."gpuAllocatable" -
psn."gpuAllocated") >= :perNodeGpu` when `agg.totalGpu > 0`; keep the
`providerSnapshotNodeGPU` join and GPU-specific wheres
(`replacements.gpuVendor`, `gpu.name`, `gpu.interface`, `gpu."memorySize"`)
inside the `agg.hasGpuAttributes` branch only.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0d721ef9-3e8f-43ea-86b5-32d65bfd6b93
📒 Files selected for processing (3)

- apps/api/src/provider/controllers/bid-screening/bid-screening.controller.ts
- apps/api/src/provider/services/bid-screening/bid-screening.service.spec.ts
- apps/api/src/provider/services/bid-screening/bid-screening.service.ts
✅ Files skipped from review due to trivial changes (1)
- apps/api/src/provider/services/bid-screening/bid-screening.service.spec.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/api/src/provider/controllers/bid-screening/bid-screening.controller.ts
```ts
if (agg.totalGpu > 0 && agg.hasGpuAttributes) {
  joins.push(`INNER JOIN "providerSnapshotNode" psn ON psn."snapshotId" = ps.id`);
  joins.push(`INNER JOIN "providerSnapshotNodeGPU" gpu ON gpu."snapshotNodeId" = psn.id`);

  replacements.gpuVendor = agg.gpuVendor;
  wheres.push(`gpu.vendor = :gpuVendor`);

  if (agg.gpuModel) {
    replacements.gpuModel = agg.gpuModel;
    wheres.push(`gpu.name = :gpuModel`);
  }
  if (agg.gpuInterface) {
    replacements.gpuInterface = agg.gpuInterface;
    wheres.push(`gpu.interface = :gpuInterface`);
  }
  if (agg.gpuMemorySize) {
    replacements.gpuMemorySize = agg.gpuMemorySize;
    wheres.push(`gpu."memorySize" = :gpuMemorySize`);
  }

  replacements.perNodeGpu = agg.maxPerReplicaGpu;
  wheres.push(`(psn."gpuAllocatable" - psn."gpuAllocated") >= :perNodeGpu`);
}
```
Enforce per-replica GPU fit even without GPU attributes.
maxPerReplicaGpu is only applied inside the agg.hasGpuAttributes branch. A provider with enough total GPUs across multiple nodes can pass this query even when no single node can host one replica's GPU request.
Proposed fix

```diff
-if (agg.totalGpu > 0 && agg.hasGpuAttributes) {
-  joins.push(`INNER JOIN "providerSnapshotNode" psn ON psn."snapshotId" = ps.id`);
-  joins.push(`INNER JOIN "providerSnapshotNodeGPU" gpu ON gpu."snapshotNodeId" = psn.id`);
+if (agg.totalGpu > 0) {
+  joins.push(`INNER JOIN "providerSnapshotNode" psn ON psn."snapshotId" = ps.id`);
+  replacements.perNodeGpu = agg.maxPerReplicaGpu;
+  wheres.push(`(psn."gpuAllocatable" - psn."gpuAllocated") >= :perNodeGpu`);
+}
+
+if (agg.totalGpu > 0 && agg.hasGpuAttributes) {
+  joins.push(`INNER JOIN "providerSnapshotNodeGPU" gpu ON gpu."snapshotNodeId" = psn.id`);
   replacements.gpuVendor = agg.gpuVendor;
   wheres.push(`gpu.vendor = :gpuVendor`);
@@
-  replacements.perNodeGpu = agg.maxPerReplicaGpu;
-  wheres.push(`(psn."gpuAllocatable" - psn."gpuAllocated") >= :perNodeGpu`);
 }
```
```ts
if (agg.totalGpu > 0 && agg.hasGpuAttributes) {
  const gpuReplacements: Record<string, unknown> = { gpuVendor: agg.gpuVendor };
  let gpuWhere = `gpu.vendor = :gpuVendor`;
  let modelDesc = agg.gpuVendor!;

  if (agg.gpuModel) {
    gpuReplacements.gpuModel = agg.gpuModel;
    gpuWhere += ` AND gpu.name = :gpuModel`;
    modelDesc += `/${agg.gpuModel}`;
  }

  checks.push({
    name: `GPU model (${modelDesc})`,
    sql: `SELECT COUNT(DISTINCT p.owner) AS c FROM provider p
      INNER JOIN "providerSnapshot" ps ON ps.id = p."lastSuccessfulSnapshotId"
      INNER JOIN "providerSnapshotNode" psn ON psn."snapshotId" = ps.id
      INNER JOIN "providerSnapshotNodeGPU" gpu ON gpu."snapshotNodeId" = psn.id
      WHERE p."deletedHeight" IS NULL AND p."isOnline" = true
        AND (psn."gpuAllocatable" - psn."gpuAllocated") > 0
        AND ${gpuWhere}`,
    replacements: gpuReplacements,
    feedback: `No providers have ${modelDesc} GPUs available — try a different model`
  });

  checks.push({
    name: `GPU per-node (${agg.maxPerReplicaGpu}x ${modelDesc})`,
    sql: `SELECT COUNT(DISTINCT p.owner) AS c FROM provider p
      INNER JOIN "providerSnapshot" ps ON ps.id = p."lastSuccessfulSnapshotId"
      INNER JOIN "providerSnapshotNode" psn ON psn."snapshotId" = ps.id
      INNER JOIN "providerSnapshotNodeGPU" gpu ON gpu."snapshotNodeId" = psn.id
      WHERE p."deletedHeight" IS NULL AND p."isOnline" = true
        AND ${gpuWhere}
        AND (psn."gpuAllocatable" - psn."gpuAllocated") >= :perNodeGpu`,
    replacements: { ...gpuReplacements, perNodeGpu: agg.maxPerReplicaGpu },
    feedback: `No single node has ${agg.maxPerReplicaGpu}x ${modelDesc} GPUs free — reduce GPU count per replica`
  });
}
```
Mirror all GPU filters in diagnosis.
buildQuery() can require gpu.interface and gpu."memorySize", but the diagnosis only checks vendor/model. If interface or memory size is the real blocker, the returned feedback becomes misleading.
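One way to keep the two in sync is to build the diagnostic predicate from the same aggregate the main query uses. A hedged sketch; the `agg` shape and the returned tuple are assumptions for illustration, not the service's actual helper:

```typescript
// Sketch: extend the diagnostic GPU filter so it mirrors every attribute the
// main query can constrain (vendor, model, interface, memory size).
// The GpuAgg shape is an assumption modeled on the quoted code.
type GpuAgg = { gpuVendor: string; gpuModel?: string; gpuInterface?: string; gpuMemorySize?: string };

function buildGpuDiagnosticFilter(agg: GpuAgg): { where: string; replacements: Record<string, unknown>; desc: string } {
  const replacements: Record<string, unknown> = { gpuVendor: agg.gpuVendor };
  let where = `gpu.vendor = :gpuVendor`;
  let desc = agg.gpuVendor;

  if (agg.gpuModel) {
    replacements.gpuModel = agg.gpuModel;
    where += ` AND gpu.name = :gpuModel`;
    desc += `/${agg.gpuModel}`;
  }
  // The two conditions below are what the current diagnosis omits.
  if (agg.gpuInterface) {
    replacements.gpuInterface = agg.gpuInterface;
    where += ` AND gpu.interface = :gpuInterface`;
    desc += `/${agg.gpuInterface}`;
  }
  if (agg.gpuMemorySize) {
    replacements.gpuMemorySize = agg.gpuMemorySize;
    where += ` AND gpu."memorySize" = :gpuMemorySize`;
    desc += `/${agg.gpuMemorySize}`;
  }
  return { where, replacements, desc };
}
```

Feeding `desc` into the check names and feedback strings keeps the messages accurate to the full filter set.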
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/src/provider/services/bid-screening/bid-screening.service.ts` around
lines 328 - 363, The diagnosis only filters by gpu.vendor and gpu.name while
buildQuery() may also constrain gpu.interface and gpu."memorySize"; update the
GPU checks in buildGpu-related logic (the checks pushed where name: `GPU model
(${modelDesc})` and `GPU per-node (${agg.maxPerReplicaGpu}x ${modelDesc})`) to
mirror any additional GPU filters present in buildQuery(): extend gpuWhere and
gpuReplacements to include conditions for gpu.interface and gpu."memorySize"
when agg provides them, append those attributes to modelDesc/feedback so
messages reflect the full filter set, and ensure the replacements object passed
into both checks includes interface and memorySize keys where used.
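As a concrete illustration of the suggested fix, here is a minimal sketch of a helper that mirrors the full GPU filter set in the diagnosis. The field names `gpuInterface` and `gpuMemorySize` on the aggregate are assumptions based on the filters the review says buildQuery() can apply; the actual names in the service may differ.

```typescript
// Sketch: build the GPU diagnosis WHERE clause from ALL aggregated GPU
// constraints, not just vendor/model, so diagnosis matches buildQuery().
// gpuInterface / gpuMemorySize are hypothetical field names.
interface GpuAgg {
  gpuVendor: string;
  gpuModel?: string;
  gpuInterface?: string;
  gpuMemorySize?: string;
}

function buildGpuDiagnosisFilter(agg: GpuAgg): {
  where: string;
  replacements: Record<string, unknown>;
  desc: string;
} {
  const replacements: Record<string, unknown> = { gpuVendor: agg.gpuVendor };
  let where = `gpu.vendor = :gpuVendor`;
  let desc = agg.gpuVendor;

  if (agg.gpuModel) {
    replacements.gpuModel = agg.gpuModel;
    where += ` AND gpu.name = :gpuModel`;
    desc += `/${agg.gpuModel}`;
  }
  if (agg.gpuInterface) {
    replacements.gpuInterface = agg.gpuInterface;
    where += ` AND gpu.interface = :gpuInterface`;
    desc += `/${agg.gpuInterface}`;
  }
  if (agg.gpuMemorySize) {
    replacements.gpuMemorySize = agg.gpuMemorySize;
    where += ` AND gpu."memorySize" = :gpuMemorySize`;
    desc += `/${agg.gpuMemorySize}`;
  }
  return { where, replacements, desc };
}
```

Both GPU checks would then share this single filter, so the "GPU model" and "GPU per-node" feedback strings describe exactly what was queried.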
if (agg.hasPersistentStorage) {
  const storageReplacements: Record<string, unknown> = { totalPersistentStorage: agg.totalPersistentStorage };
  let storageWhere = `(pss.allocatable - pss.allocated) >= :totalPersistentStorage`;

  if (agg.persistentStorageClass) {
    storageReplacements.storageClass = agg.persistentStorageClass;
    storageWhere += ` AND pss.class = :storageClass`;
  }

  checks.push({
    name: `Persistent storage`,
    sql: `SELECT COUNT(DISTINCT p.owner) AS c FROM provider p
      INNER JOIN "providerSnapshot" ps ON ps.id = p."lastSuccessfulSnapshotId"
      INNER JOIN "providerSnapshotStorage" pss ON pss."snapshotId" = ps.id
      WHERE p."deletedHeight" IS NULL AND p."isOnline" = true
        AND ${storageWhere}`,
    replacements: storageReplacements,
    feedback: `Reduce persistent storage or try a different class (beta3/nvme has the most providers)`
  });
Keep persistent-storage diagnosis aligned with the main query.
When no storage class is requested, the screening query only checks ps."availablePersistentStorage", but this diagnostic always joins providerSnapshotStorage and requires a single pss row to satisfy the full size. It can therefore report persistent storage as the blocker even when the main query would have accepted the provider.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/api/src/provider/services/bid-screening/bid-screening.service.ts` around
lines 366 - 384, The persistent-storage diagnostic always joins
providerSnapshotStorage (pss) and requires a single pss row to satisfy the size,
which diverges from the main query that uses ps."availablePersistentStorage"
when no class is requested; update the check generation in the
agg.hasPersistentStorage branch so that when agg.persistentStorageClass is set
you keep the current pss JOIN and storageWhere logic using pss and
storageReplacements.storageClass, but when agg.persistentStorageClass is NOT
set, remove the pss JOIN and instead set storageWhere to
`ps."availablePersistentStorage" >= :totalPersistentStorage` and
storageReplacements to only include totalPersistentStorage; ensure the final
checks.push SQL uses the same base FROM and WHERE (DISTINCT p.owner, ps from
"providerSnapshot") and feedback remains unchanged so the diagnostic aligns with
the main query behavior.
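To make the suggested alignment concrete, here is a minimal sketch of the diagnostic builder with the two branches the review describes. The column name ps."availablePersistentStorage" is taken from the review comment; the surrounding shape of the check object follows the diff above.

```typescript
// Sketch: only join providerSnapshotStorage when a storage class is
// requested; otherwise check the snapshot-level aggregate column, so the
// diagnostic matches the main screening query.
interface StorageAgg {
  totalPersistentStorage: number;
  persistentStorageClass?: string;
}

function buildStorageDiagnosis(agg: StorageAgg): {
  sql: string;
  replacements: Record<string, unknown>;
} {
  const replacements: Record<string, unknown> = {
    totalPersistentStorage: agg.totalPersistentStorage
  };
  const base = `SELECT COUNT(DISTINCT p.owner) AS c FROM provider p
    INNER JOIN "providerSnapshot" ps ON ps.id = p."lastSuccessfulSnapshotId"`;

  if (agg.persistentStorageClass) {
    // Class requested: a single pss row of that class must fit the request.
    replacements.storageClass = agg.persistentStorageClass;
    return {
      sql: `${base}
    INNER JOIN "providerSnapshotStorage" pss ON pss."snapshotId" = ps.id
    WHERE p."deletedHeight" IS NULL AND p."isOnline" = true
      AND (pss.allocatable - pss.allocated) >= :totalPersistentStorage
      AND pss.class = :storageClass`,
      replacements
    };
  }
  // No class requested: mirror the main query's snapshot-level check.
  return {
    sql: `${base}
    WHERE p."deletedHeight" IS NULL AND p."isOnline" = true
      AND ps."availablePersistentStorage" >= :totalPersistentStorage`,
    replacements
  };
}
```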
Why
Part of CON-187 (part of CON-186)
Adds Stage 1 of the bid screening flow: a database pre-filtering endpoint that narrows the set of providers before calling each provider's /v1/bid-screening gRPC/REST endpoint (akash-network/provider#386) in Stage 2.

Currently every online provider receives every order. As the network grows to 1000+ providers, calling each one is not scalable. This endpoint filters providers by resource capacity, GPU model, storage class, attributes, and auditor signatures using our indexer database, returning ranked candidates in ~65ms.
What
POST /v1/bid-screening — accepts a GroupSpec-like request body (resources, requirements, limit) and returns matching providers ranked by lease count

Summary by CodeRabbit
New /v1/bid-screening API endpoint that screens and matches providers based on resource requirements, including CPU, memory, GPU, and storage specifications.
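For orientation, a request to the new endpoint might look like the sketch below. The exact BidScreeningRequest schema is not shown in this diff, so every field name here is an illustrative assumption based on the "GroupSpec-like (resources, requirements, limit)" description above.

```typescript
// Hypothetical request body for POST /v1/bid-screening; field names are
// assumptions, not the actual schema from the PR.
const body = {
  resources: [
    {
      cpu: { units: "2" },
      memory: { size: "4Gi" },
      gpu: { units: "1", vendor: "nvidia", model: "a100" },
      storage: [{ size: "100Gi", class: "beta3", persistent: true }],
      count: 2
    }
  ],
  requirements: {
    attributes: [{ key: "region", value: "us-west" }],
    signedBy: { allOf: [], anyOf: [] }
  },
  limit: 50
};

// Sending it (endpoint host is a placeholder):
// await fetch("https://api.example.com/v1/bid-screening", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body)
// });
```

The response would then carry the ranked provider candidates (and, per the service logic above, a constraints array with targeted feedback when no provider matches).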