
Issue with SSL setup #588

@ehsanafter

### What steps will reproduce the bug?

I have configured the cluster using the values file below:

```yaml
# Default values for nifi-cluster.
# All of these configurations are sourced from the nifikop docs:
# https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster/1_nifi_cluster

cluster:
  # -- defines if the operator will use basic or tls authentication to query the NiFi cluster. Operator will default to tls if left unset
  # @default -- tls
  clientType:
  # -- type of the cluster: internal or external. Operator will put internal by default
  # @default -- internal
  type:
  # -- the full name of the cluster. This is used to set a portion of the name of various nifikop resources
  nameOverride: "nifi-cluster"
  fullnameOverride: ""
  # -- the type of cluster manager: zookeeper or kubernetes. Operator will put zookeeper by default
  # @default -- zookeeper
  manager: kubernetes
  # -- the kubernetes manager serviceAccount details
  managerServiceAccount:
    # @default -- include "nifi-cluster.fullname" .
    name:
    # -- Annotations to apply to the serviceAccount
    annotations: {}
    # -- Labels to apply to the serviceAccount
    labels: {}
  # -- the hostname and port of the zookeeper service
  zkAddress: "nifi-cluster-zookeeper:2181"
  # -- the path in zookeeper to store this cluster's state
  zkPath: "/cluster"
  # -- whether or not to only deploy one nifi pod per node in this cluster
  oneNifiNodePerNode: false

  # -- the template to use to create nodes.
  # see https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster#nificlusterspec
  # @default -- "node-%d-<cluster-name>"
  nodeUserIdentityTemplate: "n-%d"
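  # e.g. with "n-%d", the node with id 1 is given the user identity "n-1" (illustrative)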

  # -- NifiControllerTemplate specifies the template to be used when naming the node controller (e.g. %s-mysuffix). **Warning: once defined, don't change
  # this value, or the operator will no longer be able to manage the cluster**
  nifiControllerTemplate:

  # -- ControllerUserIdentity specifies what to call the static admin user's identity. **Warning: once defined, don't change
  # this value, or the operator will no longer be able to manage the cluster**
  controllerUserIdentity:

  service:
    # -- Annotations to apply to each nifi service
    annotations: {}
    # -- Labels to apply to each nifi service
    labels: {}
    # -- Whether or not to create a headless service
    headlessEnabled: true

  # -- see https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster#singleuserconfiguration
  singleUserConfiguration:
    enabled: false
    authorizerEnabled: false
    secretRef:
      name: "single-user-credentials"
      namespace: "nifi"
    secretKeys:
      username: "username"
      password: "password"

  pod:
    # -- Annotations to apply to every pod
    annotations: {}
    # -- Labels to apply to every pod
    labels: {}
    # -- host aliases to assign to each pod. See: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#hostalias-v1-core
    hostAliases: []
    # -- (object) The pod readiness probe override: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes
    readinessProbe: 
    # -- (object) The pod liveness probe override: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command
    livenessProbe: 
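    # e.g. an override using standard core/v1 Probe fields (illustrative):
    # livenessProbe:
    #   tcpSocket:
    #     port: 8080
    #   initialDelaySeconds: 90
    #   periodSeconds: 30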

  image: 
    repository: "apache/nifi"
    # -- Only set this if you want to override the chart AppVersion
    tag: ""
  initContainerImage:
    repository: "bash"
    tag: "5.2.2"

  # -- list of init containers to run prior to the deployment
  initContainers: []
  
  # -- list of additional sidecar containers to run alongside the nifi pods. See: https://pkg.go.dev/k8s.io/api/core/v1#Container
  sidecarConfigs: []
    # - name: my-sidecar
    #   image: busybox:1.36
    #   args:
    #     - echo
    #     - helloworld

  # --  MaximumTimerDrivenThreadCount defines the maximum number of threads for timer driven processors available to the system.
  maximumTimerDrivenThreadCount: 10


  # -- list of additional environment variables to attach to all init containers and the nifi container
  # https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster/2_read_only_config#readonlyconfig
  additionalSharedEnvs: []
  # - name: MY_ENV_VAR
  #   value: some value

  # -- see https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster#managedusers
  managedAdminUsers: []
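  # e.g. (illustrative entry):
  # - identity: "[email protected]"
  #   name: "alice"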
  # -- see https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster#managedusers
  managedReaderUsers: []

  # -- see https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster#disruptionbudget
  disruptionBudget: {}

  # -- specifies any TopologySpreadConstraint objects to be applied to all nodes. See https://pkg.go.dev/k8s.io/api/core/v1#TopologySpreadConstraint
  topologySpreadConstraints: []
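  # e.g. (standard core/v1 TopologySpreadConstraint fields, illustrative):
  # - maxSkew: 1
  #   topologyKey: kubernetes.io/hostname
  #   whenUnsatisfiable: ScheduleAnyway
  #   labelSelector:
  #     matchLabels:
  #       app: nifi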

  # -- see https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster#ldapconfiguration
  ldapConfiguration: {}
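  # e.g. (illustrative; see the linked reference for all fields):
  # enabled: true
  # url: "ldap://ldap.example.com:389"
  # searchBase: "OU=users,DC=example,DC=com"
  # searchFilter: "(uid={0})"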
  
  logbackConfig:
    configPath: config/logback.xml
    # You may provide your own secret/config map override of the logback configuration. It will override the default.

    # -- A ConfigMap ref to override the default logback configuration
    # see https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster/2_read_only_config#logbackconfig
    replaceConfigMap: {}
    #   data: logback.xml
    #   name: logback-configmap
    #   namespace: nifi

    # -- A Secret ref to override the default logback configuration
    # see https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster/2_read_only_config#logbackconfig
    replaceSecretConfig: {}
    #   data: logback.xml
    #   name: logback-configmap
    #   namespace: nifi

  # -- You can override the individual properties via the overrideConfigs attribute. These will be provided to all pods via secrets.
  # https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#system_properties
  nifiProperties:
    overrideConfigs: |
      nifi.web.proxy.context.path=/nifi-cluster
      nifi.sensitive.props.key=SecretKeyASnfsakjdfSnasdas
      nifi.security.identity.mapping.pattern.dn=CN=([^,]*)(?:, (?:O|OU)=.*)?
      nifi.security.identity.mapping.value.dn=$1
      nifi.security.identity.mapping.transform.dn=NONE
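      # e.g. the pattern above maps a certificate DN like "CN=n-1, OU=nifi" to the bare identity "n-1"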
    
    # -- A ConfigMap ref to override the default nifi properties
    # see https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster/2_read_only_config#nifiproperties
    overrideConfigMap: {}
    #   data: nifi.properties
    #   name: nifi-properties
    #   namespace: nifi

    # -- A Secret ref to override the default nifi properties
    # see https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster/2_read_only_config#nifiproperties
    overrideSecretConfig: {}
    #   data: nifi.properties
    #   name: nifi-properties
    #   namespace: nifi


    # -- List of allowed HTTP Host header values to consider when NiFi is running securely and will be receiving
    # requests to a different host[:port] than it is bound to. The operator will generate a comma-separated string from this list.
    # https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#web-properties
    webProxyHosts:
      - nifi2.test
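    # e.g. the operator renders the list above as nifi.web.proxy.host=nifi2.test in nifi.properties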

    # -- In case `cluster.externalServices` contains a service of type `NodePort` and NiFi UI/API needs to be accessed over it,
    # this option will add host:nodePort to the `webProxyHosts` list inside `NiFiCluster`.
    # Note: when adding webProxyHosts entries as host:port, NiFi will also accept the bare host as a valid Host header.
    webProxyNodePorts:
      enabled: false
      hosts: []
      # - 172.40.120.118
      # - domain-pointing-to-k8s-node

    # -- Nifi security client auth
    needClientAuth: false

  # -- You can override individual properties in conf/bootstrap.conf
  # https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#bootstrap_properties
  bootstrapProperties:
    overrideConfigs: |
      java.arg.4=-Djava.net.preferIPv4Stack=true
      java.arg.log4shell=-Dlog4j2.formatMsgNoLookups=true
    nifiJvmMemory: "512m"
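    # note: per the read-only config docs, this value is used for the JVM heap settings (-Xms/-Xmx)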
  
  # -- This is only for embedded zookeeper configuration. This is ignored if `zookeeper.enabled` is true.
  zookeeperProperties:
    overrideConfigs: |
      initLimit=15
      autopurge.purgeInterval=24
      syncLimit=5
      tickTime=2000
      dataDir=./state/zookeeper
      autopurge.snapRetainCount=30
  
  # -- Defines configurations for nodes which can be used in list of nodes in cluster.
  #    See: https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster/3_node_config
  nodeConfigGroups:
    default-group:
      isNode: true
      # Extra volumes to add to the Pod spec.
      # See https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster/3_node_config#externalvolumeconfig
      #externalVolumeConfigs: []
      imagePullPolicy: IfNotPresent
      storageConfigs:
        - mountPath: "/opt/nifi/data"
          name: data
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "local-path"
            resources:
              requests:
                storage: 128Mi
      serviceAccountName: "nifi-nifi-cluster"
      resourcesRequirements:
        limits:
          cpu: 2
          memory: 4Gi
        requests:
          cpu: 1
          memory: 2Gi

  # -- Defines the list of nodes in the cluster with their id's and config to apply.
  #    See https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster#nificlusterspec
  nodes: 
    - id: 1
      nodeConfigGroup: "default-group"
    - id: 2
      nodeConfigGroup: "default-group"
    - id: 3
      nodeConfigGroup: "default-group"

  propagateLabels: true
  # -- The number of minutes the operator should wait for the cluster to be successfully deployed before retrying
  retryDurationMinutes: 10

  # -- https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster/6_listeners_config
  listenersConfig:
    # -- List of internal ports exposed for the nifi container. The `type` of port has specific meaning, see:
    #    https://konpyutaika.github.io/nifikop/docs/5_references/1_nifi_cluster/6_listeners_config#internallistener
    internalListeners:
      - type: "https"
        name: "https"
        containerPort: 8080
      - type: "cluster"
        name: "cluster"
        containerPort: 6007
      - type: "s2s"
        name: "s2s"
        containerPort: 10000
      #   # If enabling monitoring, you need to have a port named prometheus as below
      # - type: "prometheus"
      #   name: "prometheus"
      #   containerPort: 9090
    # -- Provides the SSL configuration for the cluster, can be fully user provided, user provided CA or auto generated
    #    with cert-manager. See: https://konpyutaika.github.io/nifikop/docs/3_manage_nifi/1_manage_clusters/1_deploy_cluster/4_ssl_configuration
    sslSecrets:
      tlsSecretName:
      create: true
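      # with create: true, the operator generates the node and controller certificates
      # via cert-manager (cert-manager must be installed; see the SSL docs linked above)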
  # -- Additional k8s services to create and target internal listener ports. Ingress will use these to route traffic to
  # the cluster
  externalServices:
    - name: "nifi-cluster-ip"
      spec:
        type: ClusterIP
        portConfigs:
          - port: 8080
            internalListenerName: "https"
      metadata:
        labels: {}
        annotations: {}
    # - name: "nifi-cluster-lb"
    #   spec:
    #     type: LoadBalancer
    #     portConfigs:
    #       - port: 8080
    #         internalListenerName: "http"
    #   metadata:
    #     labels: {}
    #     annotations: {}
    #   serviceAnnotations: {}
    # - name: "nifi-cluster-np"
    #   spec:
    #     type: NodePort
    #     portConfigs:
    #       - port: 8080
    #         internalListenerName: "http"
    #   metadata:
    #     labels: {}
    #     annotations: {}

ingress:
  enabled: true
  className: nginx
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /$2
    # nginx.ingress.kubernetes.io/configuration-snippet: |
    #   proxy_set_header 'X-ProxyContextPath' '/';
    #   proxy_set_header 'X-ProxyPort' '8080';
    #   proxy_set_header 'X-ProxyScheme' 'http';
  hosts:
    - host: nifi2.test
      paths:
        - path: /
          pathType: Prefix
          backend:
            # this needs to match the externalServices config above
            service:
              name: "nifi-cluster-ip"
              port:
                number: 8080
  tls: []
  #  - secretName: nifi-cluster-tls
  #    hosts:
  #      - my-host.example.com

# -- Monitoring is enabled via the Prometheus operator. This can be deployed stand-alone or as a part of the Rancher
# Monitoring application. Do not enable this unless you've installed rancher-monitoring or the Prometheus operator directly.
# https://rancher.com/docs/rancher/v2.6/en/monitoring-alerting/
# Enabling monitoring creates a `ServiceMonitor` custom resource that scrapes the NiFi metrics endpoint
monitoring:
  enabled: false
  # For NiFi 2.x, use the https or http listener type instead
  internalListenersType: "prometheus"
  # For NiFi 2.x, use the path /nifi-api/flow/metrics/prometheus instead
  path: "/metrics"
  # Optional fields for the ServiceMonitor; see the following documentation for more information:
  # https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api-reference/api.md#monitoring.coreos.com/v1.Endpoint
  #tlsConfig:
  #  caFile: 
  #  certFile:
  #  keyFile:
  #  insecureSkipVerify:
  #  serverName:
  #interval: 

# Logging is enabled by the banzai cloud logging operator. This can be deployed stand-alone or as a part of the Rancher Logging application.
# Do not enable this unless you've installed rancher-logging or the banzai logging operator directly.
# https://rancher.com/docs/rancher/v2.6/en/logging/
# Enabling logging creates `Flow` custom resources and outputs to the output of your choice
logging:
  # -- Whether or not log aggregation via the banzai cloud logging operator is enabled.
  enabled: false

  # -- https://banzaicloud.com/docs/one-eye/logging-operator/configuration/flow/
  flow:
    name: nifi-cluster-flow
    # -- The filters and match configs should be configured just like in the CRDs (linked above)
    filters:
      - parser:
          parse:
            type: regexp
            expression: '/^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) (?<level>[^\s]+) \[(?<thread>.*)\] (?<message>.*)$/im'
            keep_time_key: true
            time_key: time
            time_format: '%iso8601'
            time_type: string
    match:
      - select:
          labels:
            app: "nifi"
  
  # -- Only global outputs that have been created separately from this helm chart are supported for now;
  # we may consider changing this to a Flow per cluster in future
  outputs:
    globalOutputRefs:
      - loki-cluster-output

# -- Nifi NodeGroup Autoscaler configurations.
# Use this to autoscale any NodeGroup specified in `cluster.nodeConfigGroups`.
# See https://konpyutaika.github.io/nifikop/docs/5_references/7_nifi_nodegroup_autoscaler
nodeGroupAutoscalers:
  - name: default-group-autoscaler
    enabled: false
    nodeConfigGroupId: default-group
    readOnlyConfig: {}
    nodeConfig: {}
    nodeLabelsSelector:
      matchLabels:
        default-scale-group: "true"
    upscaleStrategy: simple
    downscaleStrategy: lifo
    replicas: ~
    # this is the horizontal pod autoscaler spec: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
    # you may include both metrics and behavior here.
    horizontalAutoscaler:
      maxReplicas: 2
      minReplicas: 1    

# -- registry client configurations.
# You'd use this to version control process groups & store the configuration in a registry bucket
# This is required if you wish to deploy versioned flows via the dataflow config
# The .name field must be safe in Kubernetes and match the pattern [A-Za-z0-9-]
# See https://konpyutaika.github.io/nifikop/docs/5_references/3_nifi_registry_client
registryClients:
  - name: default
    enabled: false
    endpoint: http://nifi-registry
    description: "Default NiFi Registry client"
  - name: alternate
    enabled: false
    endpoint: http://nifi-registry
    description: "Alternate NiFi Registry client"

# -- Parameter context configurations.
# This is required if you wish to deploy versioned flows via the dataflow config. However, 
# it is not required to provide secret refs. You must provide at least one parameter or nifikop
# will fail when updating dataflows.
# The `.name` field must be safe in Kubernetes and match the pattern [A-Za-z0-9-]
# See https://konpyutaika.github.io/nifikop/docs/5_references/4_nifi_parameter_context
parameterContexts:
  - name: default
    enabled: false
    # -- Use the given secret and put its values as sensitive values in this parameter context. The key will be the name
    # of parameter in NiFi.
    secretRefs: []
      # - name: secret-params
      #   namespace: nifi
    # -- List of references of Parameter Contexts from which this one inherits parameters
    inheritedParameterContexts: []
      # - name: base-param-context
      #   namespace: nifi
    # List of parameters to add to the ParameterContext
    parameters:
    - name: foo-prop
      value: bar-value
      sensitive: false
      description: my foo bar property
    - name: foo-prop-2
      value: bar-value-2
      sensitive: true
      description: my foo bar property

# -- Versioned dataflow configurations.
# This is used to configure versioned dataflows to be deployed to this nifi cluster. Any number may be configured.
# Note that a _registryClient_ and a _parameterContext_ must be enabled & present in order for a dataflow to be deployed to a cluster.
# See https://konpyutaika.github.io/nifikop/docs/5_references/5_nifi_dataflow
dataflows:
  - enabled: false
    # -- Name of the flow
    name: My Special Dataflow
    # -- reference to the nifi registry client to connect and get versioned flow
    registryClientRef:
      name: default
      namespace: nifi
    # -- Reference to the ParameterContext object which will be added to this flow
    parameterContextRef:
      name: default
      namespace: nifi
    # -- Bucket id can be found in the bucket.yml created when version controlling process groups
    bucketId: ""
    # -- Flow id can be found in the bucket.yml created when version controlling process groups
    flowId: ""
    # -- Version of the flow to take from registry
    flowVersion: 1
    flowPosition:
      # -- x coordinate of flow on canvas
      posX: 0
      # -- y coordinate of flow on canvas
      posY: 0
    # -- This is one of {never, always, once}
    syncMode: always
    skipInvalidControllerService: true
    skipInvalidComponent: true
    updateStrategy: drain

# -- Configure users. Each will result in the creation of a `NiFiUser` CRD in k8s, which the operator
# takes and applies to each nifi configuration.
# See https://konpyutaika.github.io/nifikop/docs/5_references/2_nifi_user.
# The object's `name` is used for k8s resource `metadata.name` and so should be alphanumeric and hyphenated & <= 64 bytes
users: []
  # - identity: CN=first last, OU=Apache NiFi, O=Apache, L=Santa Monica, ST=CA, C=US
  #   name: alphanumeric-hypen-identity-name
  #   secretName: ""
  #   createCert: false
  #   includeJKS: false
  #   # Specify access policies for this individual or assign them to a group via identity in `userGroups`.
  #   # These are optional.
  #   accessPolicies: 
  #       # type is either "global" or "component"
  #     - type: global
  #       # action is either "read" or "write"
  #       action: read
  #       resource: /counters
  #       # These should all be used if type is "component"
  #       componentType: ""
  #       componentId: ""

# -- Configure user groups. Each will result in the creation of a `NiFiUserGroup` CRD in k8s, which the operator
# takes and applies to each nifi configuration.
# See all properties here: https://konpyutaika.github.io/nifikop/docs/5_references/6_nifi_usergroup
userGroups: []
  # - name: "basic-user-group"
  #   identity is an optional field. It defaults to the CRD '<namespace>-<name>'.
  #   identity: "My Special Group"
  #   # Users listed here should match the identity of users provided via the `users` configuration
  #   users:
  #     - name: CN=first last, OU=Apache NiFi, O=Apache, L=Santa Monica, ST=CA, C=US
  #   accessPolicies:
  #       # type is either "global" or "component"
  #     - type: global
  #       # action is either "read" or "write"
  #       action: read
  #       resource: /counters
  #       # These should all be used if type is "component"
  #       componentType: ""
  #       componentId: ""

# -- A list of extra Kubernetes manifests, with Helm template support, to apply
extraManifests: []
#  - apiVersion: v1
#    kind: ConfigMap
#    metadata:
#      name: "{{ .Release.Name }}-config-map"
#    data:
#      my-prop-1: foo
#      my-prop-2: bar

# -- zookeeper chart overrides. Please see all the options for the zookeeper chart here:
# https://github.com/bitnami/charts/tree/main/bitnami/zookeeper
zookeeper:
  # -- Whether or not to deploy an independent zookeeper.
  enabled: false
  replicaCount: 1
  resources:
    limits:
      cpu: 2
      memory: 500Mi
    requests:
      cpu: 500m
      memory: 250Mi
  persistence:
    storageClass: standard
    size: 10Gi
```

but I am constantly getting this error in the operator:

```
{"level":"error","time":"2025-06-11T13:17:00.204Z","logger":"nifi_client","caller":"nificlient/common.go:26","msg":"Non 200 response from nifi node","statusCode":"403 Forbidden","body":"Unable to view the controller. Contact the system administrator.","stacktrace":"github.com/konpyutaika/nifikop/pkg/nificlient.errorGetOperation\n\t/workspace/pkg/nificlient/common.go:26\ngithub.com/konpyutaika/nifikop/pkg/nificlient.(*nifiClient).DescribeCluster\n\t/workspace/pkg/nificlient/system.go:17\ngithub.com/konpyutaika/nifikop/pkg/nificlient.(*nifiClient).Build\n\t/workspace/pkg/nificlient/client.go:182\ngithub.com/konpyutaika/nifikop/pkg/nificlient.NewFromConfig\n\t/workspace/pkg/nificlient/client.go:203\ngithub.com/konpyutaika/nifikop/pkg/common.NewClusterConnection\n\t/workspace/pkg/common/common.go:56\ngithub.com/konpyutaika/nifikop/pkg/clientwrappers/scale.EnsureRemovedNodes\n\t/workspace/pkg/clientwrappers/scale/scale.go:185\ngithub.com/konpyutaika/nifikop/pkg/resources/nifi.(*Reconciler).Reconcile\n\t/workspace/pkg/resources/nifi/nifi.go:296\ngithub.com/konpyutaika/nifikop/internal/controller.(*NifiClusterReconciler).Reconcile\n\t/workspace/internal/controller/nificluster_controller.go:148\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:116\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:303\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller[...]).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:224"}
{"level":"info","time":"2025-06-11T13:17:00.204Z","logger":"controller.NifiCluster","caller":"controller/nificluster_controller.go:152","msg":"Nodes unreachable, may still be starting up","reason":"could not connect to nifi nodes: nifi-nifi-cluster-headless.nifi-operator.svc.cluster.local:8080: non 200 response from NiFi cluster"}```





what am i doing wrong?
cant get it working
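For reference, the secured-cluster examples in the nifikop SSL guide (linked from the values file above) use a listener layout along these lines. This is only a sketch of the `cluster.listenersConfig` portion, with illustrative ports and secret name, not a confirmed fix:

```yaml
listenersConfig:
  internalListeners:
    - type: "https"
      name: "https"
      containerPort: 8443   # NiFi's default secure UI/API port
    - type: "cluster"
      name: "cluster"
      containerPort: 6007   # cluster protocol port
    - type: "s2s"
      name: "s2s"
      containerPort: 10000  # site-to-site port
  sslSecrets:
    tlsSecretName: "nifi-cluster-tls"  # hypothetical secret name
    create: true  # operator generates certificates via cert-manager
```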

### What is the expected behavior?

The cluster should reconcile successfully with SSL.

### What do you see instead?

A 403 Forbidden response from the operator.

### Possible solution

_No response_

### NiFiKop version

v1.14.1-release

### Golang version

go

### Kubernetes version

1.28.4

### NiFi version

2.4.0

### Additional context

_No response_
