Conversation
9 issues found across 36 files
Prompt for AI agents (all issues)
Check if these issues are valid — if so, understand the root cause of each and fix them.
<file name="kubernetes/postgres-configmap.yaml">
<violation number="1" location="kubernetes/postgres-configmap.yaml:5">
P2: Helm template syntax was introduced in a Kubernetes manifest stored outside a Helm chart templates/ directory. If this file is applied directly (as the kubernetes/ folder implies), the literal `{{ .Release.Namespace }}` is invalid and will cause deployment failures.</violation>
</file>
<file name="helm/templates/ingress.yaml">
<violation number="1" location="helm/templates/ingress.yaml:33">
P2: Ingress v1 requires pathType for each path, but the template only emits it when .pathType is provided. If values omit .pathType, the generated manifest is invalid.</violation>
</file>
<file name="n8n-self-hosted/templates/NOTES.txt">
<violation number="1" location="n8n-self-hosted/templates/NOTES.txt:15">
P3: The NOTES command indexes `.status.loadBalancer.ingress[0]` in a kubectl go-template; when the LoadBalancer is still provisioning and the ingress list is empty, this throws an “index out of range” template error and the command fails. Consider ranging over the ingress list without indexing so empty lists return no output.</violation>
</file>
<file name="helm/n8n-secrets.yaml">
<violation number="1" location="helm/n8n-secrets.yaml:9">
P1: Hardcoded database credentials are committed in the Secret manifest, exposing sensitive values in source control. Use Helm values or placeholders so secrets are provided at deploy time.</violation>
</file>
<file name="helm/templates/postgres-configmap.yaml">
<violation number="1" location="helm/templates/postgres-configmap.yaml:4">
P2: Hardcoded ConfigMap name will conflict when multiple releases are installed in the same namespace; Helm resources should be prefixed with the release name to avoid collisions.</violation>
</file>
<file name="helm/templates/postgres-secret.yaml">
<violation number="1" location="helm/templates/postgres-secret.yaml:10">
P1: Hardcoded fallback credentials (`changeUser`/`changePassword`) are rendered when no external secret exists, creating a secret with known weak credentials in normal deployments or rendered manifests.</violation>
</file>
<file name="n8n-self-hosted/values.yaml">
<violation number="1" location="n8n-self-hosted/values.yaml:10">
P2: Default image repository is set to `nginx`, so installing this n8n chart will deploy Nginx instead of n8n unless users override the value.</violation>
</file>
<file name="helm/templates/n8n-claim0-persistentvolumeclaim.yaml">
<violation number="1" location="helm/templates/n8n-claim0-persistentvolumeclaim.yaml:6">
P2: PVC name is hardcoded, so multiple releases in the same namespace will collide on the same resource name. Use a release-specific name via `.Release.Name`/fullname helper to allow multiple installs.</violation>
</file>
<file name="helm/README.md">
<violation number="1" location="helm/README.md:32">
P2: Helm 3 expects `helm install [NAME] [CHART]`; the README reverses these arguments, which will fail because `./helm` is treated as the release name.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```yaml
type: Opaque
stringData:
  POSTGRES_USER: admin
  POSTGRES_PASSWORD: admin
```
P1: Hardcoded database credentials are committed in the Secret manifest, exposing sensitive values in source control. Use Helm values or placeholders so secrets are provided at deploy time.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At helm/n8n-secrets.yaml, line 9:
<comment>Hardcoded database credentials are committed in the Secret manifest, exposing sensitive values in source control. Use Helm values or placeholders so secrets are provided at deploy time.</comment>
<file context>
@@ -0,0 +1,12 @@
+type: Opaque
+stringData:
+ POSTGRES_USER: admin
+ POSTGRES_PASSWORD: admin
+ POSTGRES_DB: n8n
+ POSTGRES_NON_ROOT_USER: standard
</file context>
This is an example value that must be applied before `helm install`; the user may change it locally.
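If the credentials should still come from the user rather than source control, one hedged alternative is to render them from values supplied at deploy time. A minimal sketch, assuming a `postgres` block is added to `values.yaml` (the key names here are illustrative, not the chart's actual values):

```yaml
# helm/n8n-secrets.yaml (sketch) -- credentials come from Helm values,
# supplied at deploy time via --set or a git-ignored values file
type: Opaque
stringData:
  POSTGRES_USER: {{ .Values.postgres.user | quote }}
  # `required` makes helm fail fast instead of rendering a known weak default
  POSTGRES_PASSWORD: {{ required "postgres.password must be set" .Values.postgres.password | quote }}
  POSTGRES_DB: {{ .Values.postgres.database | default "n8n" | quote }}
```

With this shape, `helm install` would be invoked with `--set postgres.password=<value>` or `-f secrets.values.yaml` kept outside version control.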
```diff
 metadata:
   name: init-data
-  namespace: n8n
+  namespace: {{ .Release.Namespace }}
```
P2: Helm template syntax was introduced in a Kubernetes manifest stored outside a Helm chart templates/ directory. If this file is applied directly (as the kubernetes/ folder implies), the literal {{ .Release.Namespace }} is invalid and will cause deployment failures.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At kubernetes/postgres-configmap.yaml, line 5:
<comment>Helm template syntax was introduced in a Kubernetes manifest stored outside a Helm chart templates/ directory. If this file is applied directly (as the kubernetes/ folder implies), the literal `{{ .Release.Namespace }}` is invalid and will cause deployment failures.</comment>
<file context>
@@ -2,7 +2,7 @@ apiVersion: v1
metadata:
name: init-data
- namespace: n8n
+ namespace: {{ .Release.Namespace }}
data:
init-data.sh: |
</file context>
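If the `kubernetes/` manifests are meant to stay `kubectl apply`-able, one option is to keep them free of Helm syntax and choose the namespace at apply time, while the chart's own copy under `helm/templates/` keeps the templated namespace. A sketch:

```yaml
# kubernetes/postgres-configmap.yaml (sketch) -- plain manifest, no templating
apiVersion: v1
kind: ConfigMap
metadata:
  name: init-data
  # namespace intentionally omitted; select it at apply time instead:
  #   kubectl apply -n n8n -f kubernetes/postgres-configmap.yaml
```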
```yaml
paths:
  {{- range .paths }}
  - path: {{ .path }}
    {{- with .pathType }}
```
P2: Ingress v1 requires pathType for each path, but the template only emits it when .pathType is provided. If values omit .pathType, the generated manifest is invalid.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At helm/templates/ingress.yaml, line 33:
<comment>Ingress v1 requires pathType for each path, but the template only emits it when .pathType is provided. If values omit .pathType, the generated manifest is invalid.</comment>
<file context>
@@ -0,0 +1,43 @@
+ paths:
+ {{- range .paths }}
+ - path: {{ .path }}
+ {{- with .pathType }}
+ pathType: {{ . }}
+ {{- end }}
</file context>
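One hedged fix is to emit a default when values omit `.pathType`, so the rendered Ingress is always valid. A sketch (`Prefix` is assumed here; `ImplementationSpecific` is also a valid default depending on the controller):

```yaml
{{- range .paths }}
- path: {{ .path }}
  # Ingress v1 requires pathType; fall back when values omit it
  pathType: {{ .pathType | default "Prefix" }}
{{- end }}
```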
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: init-data
```
P2: Hardcoded ConfigMap name will conflict when multiple releases are installed in the same namespace; Helm resources should be prefixed with the release name to avoid collisions.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At helm/templates/postgres-configmap.yaml, line 4:
<comment>Hardcoded ConfigMap name will conflict when multiple releases are installed in the same namespace; Helm resources should be prefixed with the release name to avoid collisions.</comment>
<file context>
@@ -0,0 +1,18 @@
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: init-data
+ namespace: {{ .Release.Namespace }}
+data:
</file context>
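A sketch of a release-scoped name (assuming the chart defines no fullname helper; `.Release.Name` alone is enough to avoid collisions between installs):

```yaml
metadata:
  name: {{ .Release.Name }}-init-data
  namespace: {{ .Release.Namespace }}
```

Any workload mounting this ConfigMap would need to reference the same templated name.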
n8n-self-hosted/values.yaml (Outdated)
```yaml
# This sets the container image more information can be found here: https://kubernetes.io/docs/concepts/containers/images/
image:
  repository: nginx
```
P2: Default image repository is set to nginx, so installing this n8n chart will deploy Nginx instead of n8n unless users override the value.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At n8n-self-hosted/values.yaml, line 10:
<comment>Default image repository is set to `nginx`, so installing this n8n chart will deploy Nginx instead of n8n unless users override the value.</comment>
<file context>
@@ -0,0 +1,123 @@
+
+# This sets the container image more information can be found here: https://kubernetes.io/docs/concepts/containers/images/
+image:
+ repository: nginx
+ # This sets the pull policy for images.
+ pullPolicy: IfNotPresent
</file context>
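A sketch of the corrected default, assuming the intent is the official n8n image (in `helm create` scaffolds the `tag` field conventionally falls back to the chart's `appVersion` when left empty):

```yaml
image:
  repository: n8nio/n8n  # official n8n image on Docker Hub
  pullPolicy: IfNotPresent
  # leave empty to use the chart appVersion, if the deployment template supports it
  tag: ""
```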
```yaml
metadata:
  labels:
    service: n8n-claim0
  name: n8n-claim0
```
P2: PVC name is hardcoded, so multiple releases in the same namespace will collide on the same resource name. Use a release-specific name via .Release.Name/fullname helper to allow multiple installs.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At helm/templates/n8n-claim0-persistentvolumeclaim.yaml, line 6:
<comment>PVC name is hardcoded, so multiple releases in the same namespace will collide on the same resource name. Use a release-specific name via `.Release.Name`/fullname helper to allow multiple installs.</comment>
<file context>
@@ -0,0 +1,13 @@
+metadata:
+ labels:
+ service: n8n-claim0
+ name: n8n-claim0
+ namespace: {{ .Release.Namespace }}
+spec:
</file context>
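A sketch using a release-name prefix (if the chart defines a fullname helper, `{{ include "<chart>.fullname" . }}` would be the more idiomatic choice):

```yaml
metadata:
  labels:
    service: n8n-claim0
  name: {{ .Release.Name }}-n8n-claim0
  namespace: {{ .Release.Namespace }}
```

The Deployment's volume definition must reference the same templated claim name.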
````markdown
```
### 3. Apply helm chart
```
helm install ./helm my-n8n -n n8n
````
P2: Helm 3 expects helm install [NAME] [CHART]; the README reverses these arguments, which will fail because ./helm is treated as the release name.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At helm/README.md, line 32:
<comment>Helm 3 expects `helm install [NAME] [CHART]`; the README reverses these arguments, which will fail because `./helm` is treated as the release name.</comment>
<file context>
@@ -0,0 +1,33 @@
+```
+### 3. Apply helm chart
+```
+helm install ./helm my-n8n -n n8n
+```
\ No newline at end of file
</file context>
```diff
-helm install ./helm my-n8n -n n8n
+helm install my-n8n ./helm -n n8n
```
n8n-self-hosted/templates/NOTES.txt (Outdated)
```
{{- else if contains "LoadBalancer" .Values.service.type }}
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "n8n-self-hosted.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "n8n-self-hosted.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
```
P3: The NOTES command indexes .status.loadBalancer.ingress[0] in a kubectl go-template; when the LoadBalancer is still provisioning and the ingress list is empty, this throws an “index out of range” template error and the command fails. Consider ranging over the ingress list without indexing so empty lists return no output.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At n8n-self-hosted/templates/NOTES.txt, line 15:
<comment>The NOTES command indexes `.status.loadBalancer.ingress[0]` in a kubectl go-template; when the LoadBalancer is still provisioning and the ingress list is empty, this throws an “index out of range” template error and the command fails. Consider ranging over the ingress list without indexing so empty lists return no output.</comment>
<file context>
@@ -0,0 +1,22 @@
+{{- else if contains "LoadBalancer" .Values.service.type }}
+ NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+ You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "n8n-self-hosted.fullname" . }}'
+ export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "n8n-self-hosted.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
+ echo http://$SERVICE_IP:{{ .Values.service.port }}
+{{- else if contains "ClusterIP" .Values.service.type }}
</file context>
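A sketch of the range-based variant the comment suggests — an empty ingress list then yields empty output instead of a template error (`or .ip .hostname` covers clouds that publish hostnames rather than IPs):

```
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "n8n-self-hosted.fullname" . }} --template "{{"{{ range .status.loadBalancer.ingress }}{{ or .ip .hostname }}{{ end }}"}}")
```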
Summary by cubic
Add a simple Helm chart to deploy n8n with a bundled PostgreSQL for homelab Kubernetes, including persistent storage, optional ingress, and a global secret bridge for DB credentials. Also switched static Kubernetes manifests to use the release namespace via templating.
Written for commit 190a82b.