Conversation

Contributor

@xueli181114 commented Jan 28, 2026

Summary by CodeRabbit

  • Documentation

    • Added design documentation for Maestro transport integration to enable remote resource management via CloudEvents
    • Expanded Maestro architecture documentation with event flow and processing patterns
  • Chores

    • Reorganized adapter configuration templates for improved deployment structure

✏️ Tip: You can customize this high-level summary in your review settings.

Contributor

@coderabbitai bot commented Jan 28, 2026

Walkthrough

This pull request reorganizes the HyperFleet Adapter Framework configuration structure by replacing two legacy template files (adapter-config-template-MVP.yaml and adapter-environment-config-template.yaml) with two new templates under a configs/ subdirectory: adapter-business-logic-template-MVP.yaml and adapter-deployment-config-template.yaml. The new deployment template introduces a Maestro transport configuration for remote Kubernetes resource management. Additionally, two documentation files are added: a spike document detailing the Maestro integration architecture and a document describing Maestro event flow patterns. A new AdapterConfig CRD is introduced with fields for Maestro transport, authentication, API configuration, and Kubernetes settings.
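
For orientation, here is a rough sketch of the deployment config shape, assembled only from fields quoted in the review comments below. The kind comes from the AdapterConfig CRD mentioned above, and the placement of maestro directly under spec follows the reviewer suggestion later in this thread; the nesting of auth and hyperfleetApi is an assumption and should be checked against the template itself.

```yaml
# Sketch only - field names are taken from the review comments in this PR;
# the nesting of auth and hyperfleetApi is assumed, not confirmed.
kind: AdapterConfig
spec:
  adapter:
    version: "0.2.0"
  maestro:                                # reviewer suggestion: spec.maestro, not spec.adapter.maestro
    serverAddress: "maestro-grpc.aro-hcp.example.com:8090"
    auth:
      type: "tls"                         # "tls" or "token"; configure only the matching section
      tlsConfig:
        keyFile: "/etc/maestro/certs/hyperfleet-client.key"
        serverName: "maestro-grpc.aro-hcp.example.com"
  hyperfleetApi:
    timeout: 2s                           # a reviewer suggests raising this to 10s
    retryAttempts: 3
```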

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes

Possibly related PRs

Suggested reviewers

  • ciaranRoche
  • rh-amarin
🚥 Pre-merge checks: ✅ 3 passed

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The PR title 'SPIKE: Maestro integration to adapter framework' clearly and concisely summarizes the main change: introducing Maestro integration into the adapter framework. It directly corresponds to the primary objective evident in all file modifications. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In
`@hyperfleet/components/adapter/framework/configs/adapter-deployment-config-template.yaml`:
- Around line 38-50: The maestro configuration block is incorrectly nested under
spec.adapter (the symbol "maestro" appears inside "adapter"); move the "maestro"
section so it is a direct child of "spec" (i.e., use spec.maestro instead of
spec.adapter.maestro), ensuring any comments/indentation reflect the new
top-level placement and that any references to "maestro" in templates or docs
match spec.maestro.
🧹 Nitpick comments (7)
hyperfleet/components/adapter/maestro-integration/maestro-architecture.md (1)

114-162: Documentation content is comprehensive and well-structured.

The event flow documentation provides valuable insights into Maestro's architecture. One minor improvement: the code evidence section (lines 158-161) references specific line numbers without specifying the source file path.

Consider adding the file path for clarity:

 **Code Evidence:**
-- Line 93: `PageList(ctx, m.logger, m.apiClient, ...)` - REST API for initial state
-- Line 43: `cloudEventsClient.Subscribe(ctx, watcherStore.HandleReceivedResource)` - CloudEvents subscription
-- Line 117: `HandleReceivedResource` processes incoming work updates
+- `watcher.go` Line 93: `PageList(ctx, m.logger, m.apiClient, ...)` - REST API for initial state
+- `watcher.go` Line 43: `cloudEventsClient.Subscribe(ctx, watcherStore.HandleReceivedResource)` - CloudEvents subscription
+- `watcher.go` Line 117: `HandleReceivedResource` processes incoming work updates
hyperfleet/components/adapter/framework/configs/adapter-business-logic-template-MVP.yaml (2)

86-91: Consider documenting the expected API response structure.

The deeply nested field path status.conditions.placement.data.clusterName assumes a specific API response structure. If this path doesn't exist, the capture will silently fail or return null.

Consider either:

  1. Adding a comment with the expected API response schema
  2. Using a default value mechanism similar to CEL's .orValue(), for example:

    - name: "placementClusterName"
      field: "status.conditions.placement.data.clusterName"
      default: ""  # Optional: provide default if path doesn't exist

233-246: Consider adding error handling guidance for postActions.

The postActions section defines retry behavior but doesn't specify what happens on final failure after all retries are exhausted.

Consider documenting the expected behavior:

  postActions:
    - name: "reportClusterStatus"
      apiCall:
        # ... existing config ...
        retryAttempts: 3
        retryBackoff: "exponential"
        # Optional: Document or add failure handling
        # onFailure: "log"  # Options: log, skip, fail
hyperfleet/components/adapter/framework/configs/adapter-deployment-config-template.yaml (2)

62-91: Clarify that TLS and token configs are mutually exclusive.

Both tlsConfig and tokenConfig are defined, but only one should be active based on auth.type. Consider adding comments or restructuring to make this clearer.

♻️ Suggested clarification
       # Authentication configuration
       auth:
         # Authentication type: "tls" (certificate-based) or "token" (bearer token)
+        # NOTE: Configure ONLY the section that matches your auth type
         type: "tls"

-        # TLS certificate configuration
+        # TLS certificate configuration (used when type: "tls")
         tlsConfig:
           # ...

-        # Alternative: Bearer token configuration (for token-based authentication)
+        # Bearer token configuration (used when type: "token")
+        # NOTE: Comment out or remove this section if using TLS authentication
         tokenConfig:
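
To make the exclusivity explicit, a minimal sketch of a token-based variant, assuming a hypothetical tokenFile key under tokenConfig (the real key names should be taken from the template):

```yaml
auth:
  # Only the section matching "type" should be configured
  type: "token"
  tokenConfig:
    # Hypothetical key name for illustration - check the template for the actual field
    tokenFile: "/var/run/secrets/maestro/token"
  # tlsConfig omitted entirely when type is "token"
```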

106-112: Consider increasing hyperfleetApi.timeout value.

A 2-second timeout for HTTP API calls may be too aggressive for production environments, especially during network congestion or when the API is under load. The business logic template uses 10s for precondition API calls and 30s for postActions.

♻️ Suggested change
   hyperfleetApi:
     # HTTP client timeout for API calls
-    timeout: 2s
+    timeout: 10s
     # Number of retry attempts for failed API calls
     retryAttempts: 3
hyperfleet/components/adapter/maestro-integration/SPIKE-maestro-adapter-integration.md (2)

34-46: Add language specifier to fenced code block.

The ASCII diagram code block lacks a language specifier, which triggers markdown lint warning MD040.

♻️ Suggested fix
-```
+```text
 ┌─────────────────┐     ┌─────────────────┐     ┌──────────────────-┐
 │   Sentinel      │────▶│ HyperFleet      │────▶│ Deployment Cluster│

488-493: Add language specifier to fenced code block.

The event flow diagram code block lacks a language specifier.

♻️ Suggested fix
-```
+```text
 Maestro Events → Adapter (Event Processor) → HyperFleet Broker → Sentinel → HyperFleet API

Comment on lines 38 to 51
spec:
  # Adapter Information
  adapter:
    # Adapter version
    version: "0.2.0"

    # ============================================================================
    # NEW: Transport Configuration
    # ============================================================================
    # Defines how resources are managed (direct K8s API vs Maestro transport)

    # Maestro-specific configuration this is optional and only used if the adapter uses Maestro transport
    maestro:
Contributor

⚠️ Potential issue | 🟡 Minor

Inconsistent structure placement for maestro configuration.

The maestro section is nested under spec.adapter (line 49-50), but the SPIKE document (SPIKE-maestro-adapter-integration.md lines 81-117) shows it at spec.maestro level.

♻️ Suggested structure to match SPIKE document
 spec:
   # Adapter Information
   adapter:
     # Adapter version
     version: "0.2.0"

-    # Maestro-specific configuration this is optional and only used if the adapter uses Maestro transport
-    maestro:
+  # Maestro-specific configuration - optional, only used if adapter uses Maestro transport
+  maestro:
       # Maestro gRPC server connection
       serverAddress: "maestro-grpc.aro-hcp.example.com:8090"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 spec:
   # Adapter Information
   adapter:
     # Adapter version
     version: "0.2.0"

-    # ============================================================================
-    # NEW: Transport Configuration
-    # ============================================================================
-    # Defines how resources are managed (direct K8s API vs Maestro transport)
-    # Maestro-specific configuration this is optional and only used if the adapter uses Maestro transport
-    maestro:
+  # ============================================================================
+  # NEW: Transport Configuration
+  # ============================================================================
+  # Defines how resources are managed (direct K8s API vs Maestro transport)
+  # Maestro-specific configuration - optional, only used if adapter uses Maestro transport
+  maestro:
🤖 Prompt for AI Agents
In
`@hyperfleet/components/adapter/framework/configs/adapter-deployment-config-template.yaml`
around lines 38 - 50, The maestro configuration block is incorrectly nested
under spec.adapter (the symbol "maestro" appears inside "adapter"); move the
"maestro" section so it is a direct child of "spec" (i.e., use spec.maestro
instead of spec.adapter.maestro), ensuring any comments/indentation reflect the
new top-level placement and that any references to "maestro" in templates or
docs match spec.maestro.

# ==========================================================================
- name: "clusterNamespace"
transport:
  type: "direct"
Collaborator

nitpick: is "direct" meaningful enough? should it be called kubernetes, or kubectl?

Contributor Author

I'm struggling a bit here: "kubernetes" may cause confusion about which Kubernetes, or Kubernetes on which cluster. Let me ask AI to generate a better name.

Contributor Author

AI gave me

My Recommendation

Use Direct.

It provides the cleanest contrast. When someone looks at your Job CR or Epic, seeing Direct vs Maestro immediately explains the networking and security posture of that specific task without needing further documentation.

Contributor

How about in-cluster?

Contributor Author

It was the AI's second recommendation as well. If there are no objections, I think we can use it.
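
For illustration, a minimal sketch of the rename being discussed, assuming the proposed in-cluster value is adopted in place of direct (only transport.type and the maestro mode appear in the current templates):

```yaml
transport:
  # "in-cluster": manage resources via the Kubernetes API the adapter runs against (previously "direct")
  # "maestro":    delegate resource management to Maestro
  type: "in-cluster"
```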

# Maestro-specific configuration this is optional and only used if the adapter uses Maestro transport
maestro:
  # Maestro gRPC server connection
  serverAddress: "maestro-grpc.aro-hcp.example.com:8090"
Collaborator

Do we also need the HTTP address to query Maestro?

In the other document we say:

Key Finding: Watch uses hybrid approach - HTTP REST API for initial state + CloudEvents for live updates!

Contributor Author

Good catch. We do need it!
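
A minimal sketch of what adding the HTTP endpoint could look like, assuming a hypothetical httpAddress field and example URL (only serverAddress exists in the current template):

```yaml
maestro:
  # gRPC endpoint used for CloudEvents publish/subscribe (existing field)
  serverAddress: "maestro-grpc.aro-hcp.example.com:8090"
  # Hypothetical field: HTTP REST endpoint used to list the initial resource state
  # before live updates arrive over CloudEvents
  httpAddress: "https://maestro-api.aro-hcp.example.com"
```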


// ListOptions for both K8s and Maestro list operations
type ListOptions struct {
    Namespace string // For Maestro: this is the consumer name (target cluster)
Collaborator

Is this what the DSL names targetCluster? Should the property have that name?

Contributor Author

It is. It's named Namespace in Maestro, but I don't think it's a good idea to use Namespace in our config.
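
A hedged sketch of the naming being discussed: expose targetCluster in the adapter config and translate it internally to Maestro's ListOptions.Namespace (the targetCluster key and its value here are illustrative, not part of the current templates):

```yaml
maestro:
  serverAddress: "maestro-grpc.aro-hcp.example.com:8090"
  # Illustrative key: mapped internally to Maestro's ListOptions.Namespace
  # (the consumer name, i.e. the target cluster)
  targetCluster: "example-consumer-cluster"
```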

- ✅ **Documentation**: Complete development and operational documentation


## Alternative Approach: Ultra-High-Volume Watch Processing
Collaborator

Markdown makes it VERY difficult to know what this alternative is related to...

I had to switch to the code view to find which other titles use level-2 ## headings.

Could we mark the alternatives more clearly? e.g. prefixing them with a), b), c), or listing them all before going into each one.

A table of contents at the top may also be useful for understanding the structure of the document.

Contributor Author

Updated.

- name: "clusterName"
field: "name"
- name: "clusterPhase"
field: "status.phase"
Contributor

Would it be better to use the latest structure based on Angel's recent changes to cluster status, to avoid user confusion? The same applies to the conditions below.

# ==========================================================================
- name: "clusterNamespace"
transport:
  type: "direct"
Contributor

How about in-cluster?

    hyperfleet.io/managed-by: "{{ .metadata.name }}"
    hyperfleet.io/resource-type: "namespace"
  annotations:
    hyperfleet.io/created-by: "hyperfleet-adapter"
Contributor

Would it be better to have this added uniformly by the adapter framework? If it’s not needed, can it be removed?

name: "{{ .clusterId | lower }}"
labels:
hyperfleet.io/cluster-id: "{{ .clusterId }}"
hyperfleet.io/managed-by: "{{ .metadata.name }}"
Contributor

Does {{ .metadata.name }} represent the Namespace name, or the adapter’s name (e.g., validation-adapter)?

annotations:
  hyperfleet.io/created-by: "hyperfleet-adapter"
  hyperfleet.io/generation: "{{ .generationId }}"
discovery:
Contributor

Maestro supports submitting and configuring multiple Kubernetes resources in a single request. If we continue using it, does that mean we need to split the Kubernetes resources and submit them to Maestro in multiple requests?

Regarding resource discovery, are there any ideas or patterns we can take from Maestro?

#
# TRANSPORT MODES:
# ================
# - direct: Traditional direct Kubernetes API access (existing behavior)
Contributor

Would it be better to remove "(existing behavior)"?

# Client private key file path
keyFile: "/etc/maestro/certs/hyperfleet-client.key" # Can be overridden by environment variable MAESTRO_KEY_FILE
# Server name for TLS verification
serverName: "maestro-grpc.aro-hcp.example.com" # Can be overridden by environment variable MAESTRO_SERVER_NAME
Contributor

What's the difference or relationship between serverName and the serverAddress above?
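
For context, the usual distinction in gRPC/TLS clients is sketched below; whether this template follows the same convention should be confirmed against the adapter implementation:

```yaml
maestro:
  # serverAddress: the endpoint the client actually dials (host:port of the gRPC server)
  serverAddress: "maestro-grpc.aro-hcp.example.com:8090"
  tlsConfig:
    # serverName: the hostname expected in the server's TLS certificate
    # (SNI / certificate verification); it typically only differs from the
    # serverAddress host when connecting via a load balancer, tunnel, or raw IP
    serverName: "maestro-grpc.aro-hcp.example.com"
```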

- **Capacity**: 10,000+ events per second
- **Architecture**: Direct broker subscription with event transformation

### Connection Management
Contributor

Is this section a little duplicated with the Key Finding? They both outline the details for gRPC Mode and MQTT/Pub-Sub Mode.
