From a2fad3937db2f409d0bd5e36f1741ee6512faffd Mon Sep 17 00:00:00 2001 From: April M <36110273+aimurphy@users.noreply.github.com> Date: Mon, 16 Feb 2026 13:59:08 -0800 Subject: [PATCH 1/2] start attribute replacement --- antora.yml | 4 ++++ modules/ROOT/pages/index.adoc | 12 ++++++------ modules/getting-started/pages/operator.adoc | 4 +++- modules/migration/pages/migrate-cluster.adoc | 8 ++++---- modules/resource-sets/pages/pods.adoc | 6 +++--- modules/scaling-components/pages/kafka.adoc | 2 +- 6 files changed, 21 insertions(+), 15 deletions(-) diff --git a/antora.yml b/antora.yml index 03ffc62..2e000c4 100644 --- a/antora.yml +++ b/antora.yml @@ -8,7 +8,11 @@ nav: asciidoc: attributes: + company: 'DataStax' protocol_version: '2.10' pulsar-operator: 'KAAP Operator' pulsar-operator-full-name: 'Kubernetes Autoscaling for Apache Pulsar (KAAP)' pulsar-stack: 'KAAP stack' + pulsar: 'Apache Pulsar' + pulsar-short: 'Pulsar' + pulsar-reg: 'Apache Pulsar(TM)' \ No newline at end of file diff --git a/modules/ROOT/pages/index.adoc b/modules/ROOT/pages/index.adoc index ac75758..682fb9d 100644 --- a/modules/ROOT/pages/index.adoc +++ b/modules/ROOT/pages/index.adoc @@ -1,13 +1,13 @@ = {pulsar-operator-full-name} :navtitle: About {pulsar-operator-full-name} -{pulsar-operator-full-name} simplifies running https://pulsar.apache.org[Apache Pulsar] on Kubernetes by applying the familiar https://kubernetes.io/docs/concepts/extend-kubernetes/operator/[Operator pattern] to Pulsar's components, and horizonally scaling resources up or down based on CPU and memory workloads. +{pulsar-operator-full-name} simplifies running https://pulsar.apache.org[{pulsar-reg}] on Kubernetes by applying the familiar https://kubernetes.io/docs/concepts/extend-kubernetes/operator/[Operator pattern] to Pulsar's components, and horizonally scaling resources up or down based on CPU and memory workloads. 
-Operating and maintaining Apache Pulsar clusters traditionally involves complex manual configurations, making it challenging for developers and operators to effectively manage the system's lifecycle. However, with the KAAP operator, these complexities are abstracted away, enabling developers to focus on their applications rather than the underlying infrastructure. +Operating and maintaining {pulsar} clusters traditionally involves complex manual configurations, making it challenging for developers and operators to effectively manage the system's lifecycle. However, with the KAAP operator, these complexities are abstracted away, enabling developers to focus on their applications rather than the underlying infrastructure. Some of the key features and benefits of the KAAP operator include: -- **Easy Deployment**: Deploying an Apache Pulsar cluster on Kubernetes is simplified through declarative configurations and automation provided by the operator. +- **Easy Deployment**: Deploying an {pulsar} cluster on Kubernetes is simplified through declarative configurations and automation provided by the operator. - **Scalability**: The KAAP operator enables effortless scaling of Pulsar clusters by automatically handling the creation and configuration of new Pulsar brokers and bookies as per defined rules. The broker autoscaling is integrated with the Pulsar broker load balancer to make smart resource management decisions, and bookkeepers are scaled up and down based on storage usage in a safe, controlled manner. @@ -23,7 +23,7 @@ We also offer the xref:getting-started:stack.adoc[{pulsar-stack}] if you're look * Cert Manager * Keycloak -Whether you are a developer looking to leverage the power of Apache Pulsar in your Kubernetes environment or an operator seeking to streamline the management of Pulsar clusters, the {pulsar-operator} provides a robust and user-friendly solution. 
+Whether you are a developer looking to leverage the power of {pulsar} in your Kubernetes environment or an operator seeking to streamline the management of Pulsar clusters, the {pulsar-operator} provides a robust and user-friendly solution. This guide offers a starting point for {pulsar-operator}. We will cover installation and deployment, configuration points, and further options for managing Pulsar components with the {pulsar-operator}. @@ -55,11 +55,11 @@ Think of {pulsar-operator} as a manager for the individual components of Pulsar. A typical Pulsar cluster *requires* the following components: -* https://pulsar.apache.org/docs/concepts-architecture-overview/#metadata-store[Zookeeper - This is Pulsar’s meta data store. It stores data about a cluster’s configuration, helps the proxy direct messages to the correct broker, and holds Bookie configurations. +* https://pulsar.apache.org/docs/concepts-architecture-overview/#metadata-store[Zookeeper - This is Pulsar's meta data store. It stores data about a cluster's configuration, helps the proxy direct messages to the correct broker, and holds Bookie configurations. * https://pulsar.apache.org/docs/concepts-architecture-overview/#brokers[Broker - This is Pulsar's message router. -* https://pulsar.apache.org/docs/concepts-architecture-overview/#apache-bookkeeper[Bookkeeper (bookie) - This is Pulsar’s data store. +* https://pulsar.apache.org/docs/concepts-architecture-overview/#apache-bookkeeper[Bookkeeper (bookie) - This is Pulsar's data store. Bookkeeper stores message data in a low-latency, resilient way. 
In addition to the required components, you might want to include some *optional components*: diff --git a/modules/getting-started/pages/operator.adoc b/modules/getting-started/pages/operator.adoc index 6fdf519..ad7fbef 100644 --- a/modules/getting-started/pages/operator.adoc +++ b/modules/getting-started/pages/operator.adoc @@ -5,7 +5,9 @@ You can install just the operator and the PulsarCluster CRDs, or you can install [#operator] == Install {pulsar-operator} -Install the DataStax Helm repository: + +. Install the {company} Helm repository: ++ [source,shell] ---- helm repo add kaap https://datastax.github.io/kaap diff --git a/modules/migration/pages/migrate-cluster.adoc b/modules/migration/pages/migrate-cluster.adoc index ee94a18..bc0efa4 100644 --- a/modules/migration/pages/migrate-cluster.adoc +++ b/modules/migration/pages/migrate-cluster.adoc @@ -1,14 +1,14 @@ = Migrate existing cluster to KAAP operator -Migrating an existing Apache Pulsar cluster to one controlled by the {pulsar-operator} is a manual process, but we've included a migration tool to help you along the way. +Migrating an existing {pulsar-reg} cluster to one controlled by the {pulsar-operator} is a manual process, but we've included a migration tool to help you along the way. -The migration tool is a CLI application that connects to an existing Apache Pulsar cluster and generates a valid and equivalent PulsarCluster CRD. +The migration tool is a CLI application that connects to an existing {pulsar} cluster and generates a valid and equivalent PulsarCluster CRD. The migration tool simulates what would happen if the generated PulsarCluster were submitted, retrieves the Kubernetes resources that would be created, and compares them with the existing cluster's resources, generating a detailed HTML report. You can then examine the report and decide if you want to proceed with the cluster migration, or if you need to make some changes first.
== Prerequisites * Java 17 -* An existing Apache Pulsar cluster +* An existing {pulsar} cluster * Migration-tool JAR downloaded from the https://github.com/datastax/kaap/releases[latest release]. == Scan and generate cluster CRDs @@ -41,7 +41,7 @@ CURRENT NAME CLUSTER + Before running the migration tool, {company} recommends switching to the correct Kubernetes context and ensuring connectivity, such as with `kubectl get pods`. -.. The `namespace` is the namespace with the Apache Pulsar resources you wish to scan. +.. The `namespace` is the namespace with the {pulsar} resources you wish to scan. .. The `clusterName` is the prefix of each pod. For example, if the broker pod is `pulsar-prod-cluster-broker-0`, then the `clusterName` is `pulsar-prod-cluster`. diff --git a/modules/resource-sets/pages/pods.adoc b/modules/resource-sets/pages/pods.adoc index 33cf1e7..5cb3a74 100644 --- a/modules/resource-sets/pages/pods.adoc +++ b/modules/resource-sets/pages/pods.adoc @@ -34,7 +34,7 @@ global: ---- With `requireRackAffinity=false`, each pod of the same rack will be placed where a new pod of the same rack exists (if any exists), *if possible*. -Set `requireRackAffinity=true` to strictly enforce this behavior. If the target node is full (it can’t accept the new pod with the stated requirements), the upgrade will be blocked and the pod will wait until the node is able to accept new pods. +Set `requireRackAffinity=true` to strictly enforce this behavior. If the target node is full (it can't accept the new pod with the stated requirements), the upgrade will be blocked and the pod will wait until the node is able to accept new pods. With `requireRackAntiAffinity=false`, each pod of the same rack will be placed in a node where any other pod of any other rack is already scheduled, if possible. Set `requireRackAntiAffinity=true` to strictly enforce this behavior. If no node is free, the pod will wait until a new node is added.
@@ -55,7 +55,7 @@ With `enableHostAntiAffinity=true`, unless you're placing pods in different avai Within a single resource set, you can specify anti-affinity behaviors in the relationships between pods and nodes. There are two types of anti-affinity, `zone` and `host`. -`zone` will set the failure domain to the region’s availability zone. +`zone` will set the failure domain to the region's availability zone. `host` will set the failure domain to the node. Soft or preferred constraints are acceptable - for example, you might prefer to place pods in different zones, but it's not a requirement. @@ -104,4 +104,4 @@ global: required: true ---- -If an availability zone is not available during upgrade, the pod won’t be scheduled and the upgrade will be blocked until a pod is manually deleted and the zone is free again. \ No newline at end of file +If an availability zone is not available during upgrade, the pod won't be scheduled and the upgrade will be blocked until a pod is manually deleted and the zone is free again. \ No newline at end of file diff --git a/modules/scaling-components/pages/kafka.adoc b/modules/scaling-components/pages/kafka.adoc index 30a78ae..3083e56 100644 --- a/modules/scaling-components/pages/kafka.adoc +++ b/modules/scaling-components/pages/kafka.adoc @@ -6,7 +6,7 @@ Thanks to Starlight for Kafka, you can run your Kafka workload on a Pulsar clust == Scale the Pulsar Broker with a Kafka Client Workload This folder contains a sample configuration and demo about how to run a workload -on an Apache Pulsar(R) cluster with the Broker Auto Scaling feature. +on an {pulsar-reg} cluster with the Broker Auto Scaling feature. Support for the Kafka wire protocol is provided by xref:starlight-for-kafka:ROOT:index.adoc[Starlight for Kafka]. 
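The attributes added to antora.yml in the patch above act as simple build-time token substitutions over the AsciiDoc sources. As a rough sketch of how references such as `{pulsar-reg}` resolve (an illustrative Python approximation, not Antora's actual implementation):

```python
import re

# Attribute values exactly as defined in antora.yml by this patch.
ATTRIBUTES = {
    "company": "DataStax",
    "protocol_version": "2.10",
    "pulsar-operator": "KAAP Operator",
    "pulsar-operator-full-name": "Kubernetes Autoscaling for Apache Pulsar (KAAP)",
    "pulsar-stack": "KAAP stack",
    "pulsar": "Apache Pulsar",
    "pulsar-short": "Pulsar",
    "pulsar-reg": "Apache Pulsar(TM)",
}

def substitute(text: str) -> str:
    """Expand {attr} references; unknown references are left in place."""
    return re.sub(
        r"\{([a-z0-9_-]+)\}",
        lambda m: ATTRIBUTES.get(m.group(1), m.group(0)),
        text,
    )

print(substitute("Deploying an {pulsar} cluster with the {pulsar-operator}"))
# Deploying an Apache Pulsar cluster with the KAAP Operator
```

Defining the strings once in antora.yml means a future product rename or trademark change touches one file instead of every page.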
From a16d3daf64b085189104922a073e1a221eb1080d Mon Sep 17 00:00:00 2001 From: April M <36110273+aimurphy@users.noreply.github.com> Date: Mon, 16 Feb 2026 14:04:51 -0800 Subject: [PATCH 2/2] attribute usage --- modules/ROOT/pages/index.adoc | 62 +++++++++---------- modules/authentication/pages/auth-tls.adoc | 2 +- modules/getting-started/pages/index.adoc | 4 +- modules/getting-started/pages/operator.adoc | 10 +-- modules/getting-started/pages/stack.adoc | 6 +- modules/getting-started/pages/upgrades.adoc | 2 +- modules/migration/pages/migrate-cluster.adoc | 4 +- modules/resource-sets/pages/index.adoc | 2 +- modules/resource-sets/pages/proxies.adoc | 4 +- .../pages/autoscale-bookies.adoc | 10 +-- .../pages/autoscale-brokers.adoc | 10 +-- modules/scaling-components/pages/index.adoc | 4 +- modules/scaling-components/pages/kafka.adoc | 10 +-- 13 files changed, 65 insertions(+), 65 deletions(-) diff --git a/modules/ROOT/pages/index.adoc b/modules/ROOT/pages/index.adoc index 682fb9d..42e9eb5 100644 --- a/modules/ROOT/pages/index.adoc +++ b/modules/ROOT/pages/index.adoc @@ -1,88 +1,88 @@ = {pulsar-operator-full-name} :navtitle: About {pulsar-operator-full-name} -{pulsar-operator-full-name} simplifies running https://pulsar.apache.org[{pulsar-reg}] on Kubernetes by applying the familiar https://kubernetes.io/docs/concepts/extend-kubernetes/operator/[Operator pattern] to Pulsar's components, and horizonally scaling resources up or down based on CPU and memory workloads. +{pulsar-operator-full-name} simplifies running https://pulsar.apache.org[{pulsar-reg}] on Kubernetes by applying the familiar https://kubernetes.io/docs/concepts/extend-kubernetes/operator/[Operator pattern] to the {pulsar-short} components, and horizontally scaling resources up or down based on CPU and memory workloads. 
-Operating and maintaining {pulsar} clusters traditionally involves complex manual configurations, making it challenging for developers and operators to effectively manage the system's lifecycle. However, with the KAAP operator, these complexities are abstracted away, enabling developers to focus on their applications rather than the underlying infrastructure. +Operating and maintaining {pulsar} clusters traditionally involves complex manual configurations, making it challenging for developers and operators to effectively manage the system's lifecycle. However, with the {pulsar-operator}, these complexities are abstracted away, enabling developers to focus on their applications rather than the underlying infrastructure. -Some of the key features and benefits of the KAAP operator include: +Some of the key features and benefits of the {pulsar-operator} include: - **Easy Deployment**: Deploying an {pulsar} cluster on Kubernetes is simplified through declarative configurations and automation provided by the operator. -- **Scalability**: The KAAP operator enables effortless scaling of Pulsar clusters by automatically handling the creation and configuration of new Pulsar brokers and bookies as per defined rules. The broker autoscaling is integrated with the Pulsar broker load balancer to make smart resource management decisions, and bookkeepers are scaled up and down based on storage usage in a safe, controlled manner. +- **Scalability**: The {pulsar-operator} enables effortless scaling of {pulsar-short} clusters by automatically handling the creation and configuration of new {pulsar-short} brokers and bookies as per defined rules. The broker autoscaling is integrated with the {pulsar-short} broker load balancer to make smart resource management decisions, and bookkeepers are scaled up and down based on storage usage in a safe, controlled manner. 
-- **High Availability**: The operator implements best practices for high availability, ensuring that Pulsar clusters are fault-tolerant and can sustain failures without service disruptions. +- **High Availability**: The operator implements best practices for high availability, ensuring that {pulsar-short} clusters are fault-tolerant and can sustain failures without service disruptions. -- **Lifecycle Management**: The operator takes care of common Pulsar cluster lifecycle tasks, such as cluster creation, upgrade, configuration updates, and graceful shutdowns. +- **Lifecycle Management**: The operator takes care of common {pulsar-short} cluster lifecycle tasks, such as cluster creation, upgrade, configuration updates, and graceful shutdowns. -We also offer the xref:getting-started:stack.adoc[{pulsar-stack}] if you're looking for more Kubernetes-native tooling deployed with your Pulsar cluster. Along with the PulsarCluster CRDs, KAAP stack also includes: +We also offer the xref:getting-started:stack.adoc[{pulsar-stack}] if you're looking for more Kubernetes-native tooling deployed with your {pulsar-short} cluster. Along with the PulsarCluster CRDs, {pulsar-stack} also includes: -* Pulsar Operator +* {pulsar-operator} * Prometheus Stack (Grafana) -* Pulsar Grafana dashboards +* {pulsar-short} Grafana dashboards * Cert Manager * Keycloak -Whether you are a developer looking to leverage the power of {pulsar} in your Kubernetes environment or an operator seeking to streamline the management of Pulsar clusters, the {pulsar-operator} provides a robust and user-friendly solution. +Whether you are a developer looking to leverage the power of {pulsar} in your Kubernetes environment or an operator seeking to streamline the management of {pulsar-short} clusters, the {pulsar-operator} provides a robust and user-friendly solution. This guide offers a starting point for {pulsar-operator}. 
-We will cover installation and deployment, configuration points, and further options for managing Pulsar components with the {pulsar-operator}. +We will cover installation and deployment, configuration points, and further options for managing {pulsar-short} components with the {pulsar-operator}. == Features After a new custom resource type is added to your cluster by installing a CRD, you can create instances of the resource based on its specification. -The Kubernetes API can be extended to support the new resource type, automating away the tedious aspects of managing a Pulsar cluster. +The Kubernetes API can be extended to support the new resource type, automating away the tedious aspects of managing a {pulsar-short} cluster. * xref:scaling-components:autoscale-bookies.adoc[Bookkeeper autoscaler] - Automatically scale the number of bookies based on memory usage. * xref:scaling-components:autoscale-brokers.adoc[Broker autoscaler] - Automatically scale the number of brokers based on CPU load. * xref:resource-sets:index.adoc[Rack-aware bookkeeper placement] - Place bookies in different racks to guarantee high availability. -* xref:scaling-components:kafka.adoc[Kafka API] - Use the Starlight for Kafka API to bring your Kafka message traffic to Pulsar. +* xref:scaling-components:kafka.adoc[Kafka API] - Use the Starlight for Kafka API to bring your Kafka message traffic to {pulsar-short}. -== How {pulsar-operator} makes Pulsar easier +== How {pulsar-operator} makes {pulsar-short} easier Operators are a common pattern for packaging, deploying, and managing Kubernetes applications. Operators extend Kubernetes functionality to automate common tasks in stateful applications. -Think of {pulsar-operator} as a manager for the individual components of Pulsar. 
By implementing the pulsarCluster Custom Resource Definition, the operator knows enough to manage the deployment, configuration, and scaling of Pulsar components with re-usable and automated tasks, such as: +Think of {pulsar-operator} as a manager for the individual components of {pulsar-short}. By implementing the PulsarCluster Custom Resource Definition, the operator knows enough to manage the deployment, configuration, and scaling of {pulsar-short} components with reusable and automated tasks, such as: -* Deploying a Pulsar cluster +* Deploying a {pulsar-short} cluster * Deploying monitoring and logging components * Autoscaling bookies based on memory usage, or brokers based on CPU load * Assigning resources to specific availability zones (AZs) {pulsar-operator} is configured, deployed, and packaged with Helm charts and based on the https://quarkiverse.github.io/quarkiverse-docs/quarkus-operator-sdk/dev/index.html[Quarkus Operator SDK]. -== Pulsar component architecture +== {pulsar-short} component architecture -A typical Pulsar cluster *requires* the following components: +A typical {pulsar-short} cluster *requires* the following components: -* https://pulsar.apache.org/docs/concepts-architecture-overview/#metadata-store[Zookeeper - This is Pulsar's meta data store. It stores data about a cluster's configuration, helps the proxy direct messages to the correct broker, and holds Bookie configurations. +* https://pulsar.apache.org/docs/concepts-architecture-overview/#metadata-store[Zookeeper] - This is the {pulsar-short} metadata store. It stores data about a cluster's configuration, helps the proxy direct messages to the correct broker, and holds Bookie configurations. -* https://pulsar.apache.org/docs/concepts-architecture-overview/#brokers[Broker - This is Pulsar's message router. +* https://pulsar.apache.org/docs/concepts-architecture-overview/#brokers[Broker] - This is the {pulsar-short} message router.
-* https://pulsar.apache.org/docs/concepts-architecture-overview/#apache-bookkeeper[Bookkeeper (bookie) - This is Pulsar's data store. +* https://pulsar.apache.org/docs/concepts-architecture-overview/#apache-bookkeeper[Bookkeeper (bookie)] - This is the {pulsar-short} data store. Bookkeeper stores message data in a low-latency, resilient way. In addition to the required components, you might want to include some *optional components*: -* https://bookkeeper.apache.org/docs/admin/autorecovery[Bookkeeper AutoRecovery - This is a Pulsar component that recovers Bookkeeper data in the event of a bookie outage. -* https://pulsar.apache.org/docs/concepts-architecture-overview/#pulsar-proxy[Pulsar proxy - The Pulsar proxy is just that - a proxy that runs at the edge of the cluster with public facing endpoints. -Pulsar proxy also offers special options for cluster extensions, like our [Starlight Suite of APIs]. -* https://pulsar.apache.org/docs/functions-worker-run-separately/[Dedicated functions worker(s) - You can optionally run dedicated function workers in a Pulsar cluster. -* xref:luna-streaming:components:admin-console-tutorial.adoc[Pulsar AdminConsole] - This is an optional web-based admin console for managing Pulsar clusters. -* xref:luna-streaming:components:heartbeat-vm.adoc[Pulsar Heartbeat] - This is an optional component that monitors the health of Pulsar cluster and emits metrics about the cluster that are helpful for observing and debugging issues. +* https://bookkeeper.apache.org/docs/admin/autorecovery[Bookkeeper AutoRecovery] - This is a {pulsar-short} component that recovers Bookkeeper data in the event of a bookie outage. +* https://pulsar.apache.org/docs/concepts-architecture-overview/#pulsar-proxy[{pulsar-short} proxy] - The {pulsar-short} proxy is just that - a proxy that runs at the edge of the cluster with public-facing endpoints. +{pulsar-short} proxy also offers special options for cluster extensions, like our Starlight Suite of APIs.
+* https://pulsar.apache.org/docs/functions-worker-run-separately/[Dedicated functions worker(s)] - You can optionally run dedicated function workers in a {pulsar-short} cluster. +* xref:luna-streaming:components:admin-console-tutorial.adoc[{pulsar-short} AdminConsole] - This is an optional web-based admin console for managing {pulsar-short} clusters. +* xref:luna-streaming:components:heartbeat-vm.adoc[{pulsar-short} Heartbeat] - This is an optional component that monitors the health of the {pulsar-short} cluster and emits metrics about the cluster that are helpful for observing and debugging issues. * Prometheus/Grafana/Alert manager stack - This is the default observability stack for a cluster. The Luna Helm chart includes pre-made dashboards in Grafana and pre-wires all the metrics scraping. -== How {pulsar-operator} installs Pulsar +== How {pulsar-operator} installs {pulsar-short} {pulsar-operator} can be installed in two ways. -* xref:getting-started:operator.adoc[Pulsar Operator] - Installs just the operator and PulsarCluster CRDs into an existing Pulsar cluster. +* xref:getting-started:operator.adoc[{pulsar-operator}] - Installs just the operator and PulsarCluster CRDs into an existing Kubernetes cluster. -* xref:getting-started:stack.adoc[Pulsar Stack] - Installs and deploys the operator, a Pulsar cluster, and a full Prometheus monitoring stack. +* xref:getting-started:stack.adoc[{pulsar-stack}] - Installs and deploys the operator, a {pulsar-short} cluster, and a full Prometheus monitoring stack. [TIP] ==== -You can also scan an existing Pulsar cluster and generate an equivalent PulsarCluster CRD. For more, see xref:migration:migrate-cluster.adoc[]. +You can also scan an existing {pulsar-short} cluster and generate an equivalent PulsarCluster CRD. For more, see xref:migration:migrate-cluster.adoc[]. ==== To get started, see xref:getting-started:index.adoc[Getting Started].
diff --git a/modules/authentication/pages/auth-tls.adoc b/modules/authentication/pages/auth-tls.adoc index 835a6da..f2be840 100644 --- a/modules/authentication/pages/auth-tls.adoc +++ b/modules/authentication/pages/auth-tls.adoc @@ -1,6 +1,6 @@ = TLS communication -You can enable TLS communication for each component in the Pulsar cluster, or you can enable it only for specific components. +You can enable TLS communication for each component in the {pulsar-short} cluster, or you can enable it only for specific components. Each component has its own dedicated configuration section, but they're all under the `global.tls` section. Once the TLS setup is done, the operator updates the components configuration to use TLS. diff --git a/modules/getting-started/pages/index.adoc b/modules/getting-started/pages/index.adoc index 228d58f..a3f34f7 100644 --- a/modules/getting-started/pages/index.adoc +++ b/modules/getting-started/pages/index.adoc @@ -4,9 +4,9 @@ * xref:getting-started:operator.adoc[{pulsar-operator}] - Installs just the operator pod and PulsarCluster CRDs. -* xref:getting-started:stack.adoc[{pulsar-stack}] - Installs and deploys the operator, a Pulsar cluster, and a full Prometheus monitoring stack. +* xref:getting-started:stack.adoc[{pulsar-stack}] - Installs and deploys the operator, a {pulsar-short} cluster, and a full Prometheus monitoring stack. [TIP] ==== -If you have an existing Pulsar cluster, you can scan and generate an equivalent PulsarCluster CRD. For more, see xref:migration:migrate-cluster.adoc[]. +If you have an existing {pulsar-short} cluster, you can scan and generate an equivalent PulsarCluster CRD. For more, see xref:migration:migrate-cluster.adoc[]. 
==== \ No newline at end of file diff --git a/modules/getting-started/pages/operator.adoc b/modules/getting-started/pages/operator.adoc index ad7fbef..3ce728f 100644 --- a/modules/getting-started/pages/operator.adoc +++ b/modules/getting-started/pages/operator.adoc @@ -1,7 +1,7 @@ = Install {pulsar-operator} Helm chart {pulsar-operator} is installed using Helm. -You can install just the operator and the PulsarCluster CRDs, or you can install xref:stack.adoc[Kaap Stack], which includes the operator, CRDs, and the Prometheus monitoring stack. +You can install just the operator and the PulsarCluster CRDs, or you can install xref:stack.adoc[{pulsar-stack}], which includes the operator, CRDs, and the Prometheus monitoring stack. [#operator] == Install {pulsar-operator} @@ -14,9 +14,9 @@ helm repo add kaap https://datastax.github.io/kaap helm repo update ---- -. The KAAP Operator Helm chart is available for download (https://github.com/datastax/kaap/releases/latest)[here]. +. The {pulsar-operator} Helm chart is available for download from the https://github.com/datastax/kaap/releases/latest[latest release]. -. Install the KAAP operator Helm chart: +. Install the {pulsar-operator} Helm chart: + [source,shell] ---- @@ -30,7 +30,7 @@ REVISION: 1 TEST SUITE: None ---- -. Ensure KAAP operator is up and running: +. Ensure {pulsar-operator} is up and running: + [source,shell] ---- @@ -115,7 +115,7 @@ Events: You've now installed KAAP. + By default, when KAAP is installed, the PulsarCluster CRDs are also created. -This setting is defined in the Pulsar operator values.yaml file as `crd: create: true`. +This setting is defined in the {pulsar-operator} values.yaml file as `crd: create: true`. .
Get the available CRDs: + diff --git a/modules/getting-started/pages/stack.adoc b/modules/getting-started/pages/stack.adoc index 2507773..07bdf8c 100644 --- a/modules/getting-started/pages/stack.adoc +++ b/modules/getting-started/pages/stack.adoc @@ -4,9 +4,9 @@ Need more monitoring and management capabilities? Check out the {pulsar-stack}. {pulsar-stack} includes: -* Pulsar Operator +* {pulsar-operator} * Prometheus Stack (Grafana) -* Pulsar Grafana dashboards +* {pulsar-short} Grafana dashboards * Cert Manager * Keycloak @@ -125,7 +125,7 @@ You've now installed {pulsar-stack}. == Uninstall -Uninstall the KAAP operator and the cluster: +Uninstall the {pulsar-operator} and the cluster: [source,shell] ---- diff --git a/modules/getting-started/pages/upgrades.adoc b/modules/getting-started/pages/upgrades.adoc index dab12df..b7c82bb 100644 --- a/modules/getting-started/pages/upgrades.adoc +++ b/modules/getting-started/pages/upgrades.adoc @@ -38,7 +38,7 @@ stateDiagram-v2 == Upgrade example -For this example, assume you installed the operator and a Pulsar cluster with the following yaml file: +For this example, assume you installed the operator and a {pulsar-short} cluster with the following yaml file: [source,shell] ---- diff --git a/modules/migration/pages/migrate-cluster.adoc b/modules/migration/pages/migrate-cluster.adoc index bc0efa4..5a81a59 100644 --- a/modules/migration/pages/migrate-cluster.adoc +++ b/modules/migration/pages/migrate-cluster.adoc @@ -1,4 +1,4 @@ -= Migrate existing cluster to KAAP operator += Migrate existing cluster to {pulsar-operator} Migrating an existing {pulsar-reg} cluster to one controlled by the {pulsar-operator} is a manual process, but we've included a migration tool to help you along the way. @@ -53,7 +53,7 @@ For example, if the broker pod is `pulsar-prod-cluster-broker-0`, then the `clus java -jar migration-tool.jar generate -i input-cluster-specs.yaml -o output ---- -. 
Find the link to the generated report in the logs, open the generated report in your browser, and then examine the differences between the existing cluster and the KAAP operator. +. Find the link to the generated report in the logs, open the report in your browser, and then examine the differences between the existing cluster and the resources that the {pulsar-operator} would create. + If everything looks good, proceed to the <>. + diff --git a/modules/resource-sets/pages/index.adoc b/modules/resource-sets/pages/index.adoc index fe9911e..bdb6f07 100644 --- a/modules/resource-sets/pages/index.adoc +++ b/modules/resource-sets/pages/index.adoc @@ -1,6 +1,6 @@ = Resource sets -The operator allows you to create multiple sets of Pulsar proxies, brokers, and bookies, called resource sets. +The operator allows you to create multiple sets of {pulsar-short} proxies, brokers, and bookies, called resource sets. Each set is a dedicated deployment/statefulset with its own service and configmap. When multiple sets are specified, an umbrella service is created as the main entrypoint of the cluster, but otherwise, a dedicated service is created for each set. You can customize the service per set - for example, you might assign different DNS domains for each resource set. diff --git a/modules/resource-sets/pages/proxies.adoc b/modules/resource-sets/pages/proxies.adoc index 6e72b97..f320331 100644 --- a/modules/resource-sets/pages/proxies.adoc +++ b/modules/resource-sets/pages/proxies.adoc @@ -1,7 +1,7 @@ = Proxy Sets -Proxy resource sets are used to create multiple sets of Pulsar proxies. Each resource set has its own configuration. -Pulsar can communicate with many different application clients, such as Apache Kafka and RabbitMQ, through proxy extensions. +Proxy resource sets are used to create multiple sets of {pulsar-short} proxies. Each resource set has its own configuration.
+{pulsar-short} can communicate with many different application clients, such as Apache Kafka and RabbitMQ, through proxy extensions. {pulsar-operator} can manage these dedicated proxy extensions with resource sets. [source,shell] ---- diff --git a/modules/scaling-components/pages/autoscale-bookies.adoc b/modules/scaling-components/pages/autoscale-bookies.adoc index 34886fa..6d8691d 100644 --- a/modules/scaling-components/pages/autoscale-bookies.adoc +++ b/modules/scaling-components/pages/autoscale-bookies.adoc @@ -1,8 +1,8 @@ = Bookkeeper autoscaler -In a Pulsar cluster managed by KAAP, BookKeeper nodes are scaled up in response to running low on storage, and because of Bookkeeper's segment-based design, the new storage is available immediately for use by the cluster, with no log stream rebalancing required. +In a {pulsar-short} cluster managed by KAAP, BookKeeper nodes are scaled up in response to running low on storage, and because of Bookkeeper's segment-based design, the new storage is available immediately for use by the cluster, with no log stream rebalancing required. -When KAAP sees low storage usage on a Bookkeeper node, the node is automatically scaled down (decommissioned) to free up volume usage and reduce storage costs. This scale-down is done in a safe, controlled manner which ensures no data loss and guarantees the configured replication factor for all messages. For example, if your replication factor is 3 (write and ack quorum of 3), 3 replicas are maintained at all times during the scale down to ensure data can be recovered, even if there is a failure during the scale-down phase. Scaling down bookies has been a consistent pain point in Pulsar, and KAAP automates this without sacrifing Pulsar's data guarantees. +When KAAP sees low storage usage on a Bookkeeper node, the node is automatically scaled down (decommissioned) to free up volume usage and reduce storage costs. 
This scale-down is done in a safe, controlled manner which ensures no data loss and guarantees the configured replication factor for all messages. For example, if your replication factor is 3 (write and ack quorum of 3), 3 replicas are maintained at all times during the scale down to ensure data can be recovered, even if there is a failure during the scale-down phase. Scaling down bookies has been a consistent pain point in {pulsar-short}, and KAAP automates this without sacrificing the {pulsar-short} data guarantees. == Install Operator with Bookkeeper autoscaler enabled [source,shell] @@ -94,7 +94,7 @@ The operator's thresholds are set in the values.yaml file: == Test bookie autoscaler -Once you've deployed a Pulsar cluster with bookie autoscaling enabled, test it by adding load to the cluster and watching the operator pod's logs. +Once you've deployed a {pulsar-short} cluster with bookie autoscaling enabled, test it by adding load to the cluster and watching the operator pod's logs. [TIP] ==== @@ -118,7 +118,7 @@ bastion: kubectl exec --stdin --tty -- /bin/bash ---- -. Run a https://pulsar.apache.org/docs/performance-pulsar-perf/[Pulsar perf] test in your deployment, and then follow the operator's logs to see the autoscaler in action: +. Run a https://pulsar.apache.org/docs/performance-pulsar-perf/[{pulsar-short} perf] test in your deployment, and then follow the operator's logs to see the autoscaler in action: + [source,shell] ---- @@ -146,7 +146,7 @@ The operator notices the differing values and patches the bookkeeper-set to keep │ 14:41:07 INFO [com.dat.oss.pul.con.boo.BookKeeperResourcesFactory] (ReconcilerExecutor-pulsar-bk-controller-69) Cleaning up orphan PVCs for bookie-s ---- -. Cancel the Pulsar perf test with Ctrl-C. +. Cancel the {pulsar-short} perf test with Ctrl-C. + The operator will notice the decreased load and scale down the number of bookies. 
Notice that the operator scales down the bookies one by one, as specified in the `scaleDownBy` parameter, and properly decommissions them: diff --git a/modules/scaling-components/pages/autoscale-brokers.adoc b/modules/scaling-components/pages/autoscale-brokers.adoc index 14b460e..1997ec2 100644 --- a/modules/scaling-components/pages/autoscale-brokers.adoc +++ b/modules/scaling-components/pages/autoscale-brokers.adoc @@ -1,10 +1,10 @@ = Broker autoscaler The operator scales the number of broker pods in a cluster up and down based on current CPU usage. -The CPU usage of each broker is checked at the Pulsar load balancer, not just at the Kubenetes pod level. +The CPU usage of each broker is checked at the {pulsar-short} load balancer, not just at the Kubernetes pod level. This means that the operator can scale brokers based on the CPU usage of all brokers in the cluster, not just the CPU usage of a single broker pod. -When the operator sees that the Pulsar load balancer is having trouble finding brokers to assign topic bundles to, it will scale up the number of brokers to handle the load. +When the operator sees that the {pulsar-short} load balancer is having trouble finding brokers to assign topic bundles to, it will scale up the number of brokers to handle the load. When the operator sees that the CPU usage of all brokers is low, it will scale down the number of brokers to save resources. CPU usage is tightly coupled to traffic, so you can expect to see significant scaling activity with broker autoscaler enabled. @@ -94,7 +94,7 @@ The operator's thresholds are set in the `values.yaml` file: == Test broker autoscaler -Once you've deployed a Pulsar cluster with broker autoscaling enabled, test it by adding load to the cluster and watching the operator pod's logs. +Once you've deployed a {pulsar-short} cluster with broker autoscaling enabled, test it by adding load to the cluster and watching the operator pod's logs.
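
Editor's note for PATCH 2/2: since this page says the thresholds live in `values.yaml`, it may help to show readers a fragment next to the prose. The sketch below is hypothetical — the `autoscaler`/`brokers` nesting is an assumption for illustration, and only `lowerCpuThreshold`, `higherCpuThreshold`, and `scaleDownBy` (with their documented defaults of `0.4` and `0.8`) come from this page; verify the key names against the chart's actual `values.yaml` before merging:

```yaml
# Hypothetical values.yaml fragment -- nesting assumed, not taken from the chart.
# The operator scales up when broker CPU (as reported by the Pulsar load
# balancer) exceeds higherCpuThreshold, and scales down below lowerCpuThreshold.
autoscaler:
  brokers:
    enabled: true
    lowerCpuThreshold: 0.4   # documented default
    higherCpuThreshold: 0.8  # documented default
    scaleDownBy: 1           # decommission brokers one at a time
```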
[TIP] ==== @@ -118,7 +118,7 @@ bastion: kubectl exec --stdin --tty -- /bin/bash ---- -. Run a https://pulsar.apache.org/docs/performance-pulsar-perf/[Pulsar perf] test in your deployment, and then follow the operator's logs to see the autoscaler in action: +. Run a https://pulsar.apache.org/docs/performance-pulsar-perf/[{pulsar-short} perf] test in your deployment, and then follow the operator's logs to see the autoscaler in action: + [source,shell] ---- @@ -149,7 +149,7 @@ now: 3 The default for `lowerCpuThreshold` is `0.4`, and the default for `higherCpuThreshold` is `0.8`. You might need to reduce these values in `values.yaml` to trigger broker scaling. -. Cancel the Pulsar perf test with Ctrl-C. +. Cancel the {pulsar-short} perf test with Ctrl-C. + The operator will notice the decreased load and scale down the number of brokers. Notice that the operator scales down the brokers one by one, as specified in the `scaleDownBy` parameter, and properly decommissions them: diff --git a/modules/scaling-components/pages/index.adoc b/modules/scaling-components/pages/index.adoc index d18576e..3452814 100644 --- a/modules/scaling-components/pages/index.adoc +++ b/modules/scaling-components/pages/index.adoc @@ -1,9 +1,9 @@ = Scaling components After a new custom resource type is added to your cluster by installing a CRD, you can create instances of the resource based on its specification. -The Kubernetes API can be extended to support the new resource type, automating away the tedious aspects of managing a Pulsar cluster. +The Kubernetes API can be extended to support the new resource type, automating away the tedious aspects of managing a {pulsar-short} cluster. * xref:scaling-components:autoscale-bookies.adoc[Bookkeeper autoscaler] - Automatically scale the number of bookies based on memory usage. * xref:scaling-components:autoscale-brokers.adoc[Broker autoscaler] - Automatically scale the number of brokers based on CPU load. 
* xref:resource-sets:bookies.adoc[Rack-aware bookkeeper placement] - Place bookies in different racks to guarantee high availability. -* xref:scaling-components:kafka.adoc[Kafka API] - Use the Starlight for Kafka API to bring your Kafka message traffic to Pulsar. \ No newline at end of file +* xref:scaling-components:kafka.adoc[Kafka API] - Use the Starlight for Kafka API to bring your Kafka message traffic to {pulsar-short}. \ No newline at end of file diff --git a/modules/scaling-components/pages/kafka.adoc b/modules/scaling-components/pages/kafka.adoc index 3083e56..08eea56 100644 --- a/modules/scaling-components/pages/kafka.adoc +++ b/modules/scaling-components/pages/kafka.adoc @@ -1,9 +1,9 @@ = Kafka Have an Apache Kafka(R) workload you want to control with {pulsar-operator}? -Thanks to Starlight for Kafka, you can run your Kafka workload on a Pulsar cluster, and with {pulsar-operator}, the scaling of the Kafka pods is handled for you. +Thanks to Starlight for Kafka, you can run your Kafka workload on a {pulsar-short} cluster, and with {pulsar-operator}, the scaling of the Kafka pods is handled for you. -== Scale the Pulsar Broker with a Kafka Client Workload +== Scale the {pulsar-short} Broker with a Kafka Client Workload This folder contains a sample configuration and demo about how to run a workload on an {pulsar-reg} cluster with the Broker Auto Scaling feature. @@ -14,8 +14,8 @@ The client work load is generated using the basic Kafka Performance tools. == Install -. Install the operator and a Pulsar cluster. -In this case, we're installing xref:getting-started:stack.adoc[Pulsar Stack] with the Kafka protocol enabled. +. Install the operator and a {pulsar-short} cluster. +In this case, we're installing xref:getting-started:stack.adoc[{pulsar-stack}] with the Kafka protocol enabled. + [source,shell] ---- @@ -41,7 +41,7 @@ kafka: config: {} ---- + -Additionally, you can proxy the Kafka connection in the Pulsar Proxy with `kafka:enabled:true`. 
+Additionally, you can proxy the Kafka connection in the {pulsar-short} Proxy with `kafka:enabled:true`. + [source,yaml] ----
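+# Hypothetical continuation (assumption, not from the source): given the
+# `kafka:enabled:true` shorthand in the sentence above, the proxy fragment
+# is likely shaped like the following -- confirm against the real chart values.
+proxy:
+  kafka:
+    enabled: true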