This guide explains how to deploy a Superstream on-prem engine on a Kubernetes platform.
If your environment is completely isolated from the public internet,
please use this procedure.
Overview
The Superstream chart will deploy the following pods:
2 Superstream engines
2 Superstream auto-scaler instances
3 NATS brokers
1 Superstream syslog adapter
1 Telegraf agent for monitoring
It is highly recommended to deploy one engine per environment (dev, staging, prod).
Getting started
1. Configure Environment Tokens
Create a custom_values.yaml file and edit the relevant values (an example can be found here).
custom_values.yaml
```yaml
############################################################
# GLOBAL configuration for Superstream Engine
############################################################
global:
  engineName: ""                 # Define the Superstream engine name: up to 32 characters, excluding '.', using only lowercase letters, numbers, '-', and '_'.
  superstreamAccountId: ""       # Provide the account ID associated with the deployment; used for identifying resources or configurations tied to a specific account.
  superstreamActivationToken: "" # Enter the activation token required for services or resources that need an initial token for activation or authentication.
  skipLocalAuthentication: true

############################################################
# NATS config
############################################################
nats:
  config:
    # NATS HA deployment. Default "true".
    cluster:
      enabled: true
    # NATS storageClass configuration. Default is blank "".
    jetstream:
      fileStore:
        pvc:
          storageClassName: ""
    nats:
      port: 4222
      tls:
        enabled: false
        # Set secretName in order to mount an existing secret to dir.
        secretName: ""
  localCa:
    enabled: false
    secretName: ""

############################################################
# Kafka Autoscaler config
############################################################
# Optional service to automatically scale the Kafka cluster up/down
# based on CPU and memory metrics.
autoScaler:
  enabled: true
```
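2. Deploy

Once custom_values.yaml is populated, the chart can be installed with Helm. The commands below are a sketch: the repository URL is a placeholder, and the chart and namespace names are assumptions; substitute the values supplied with your Superstream distribution.

```bash
# Add the Superstream chart repository (URL is a placeholder; use the one supplied with your distribution)
helm repo add superstream <repo-url> --force-update

# Install the chart using the values file prepared in step 1
helm install superstream superstream/superstream \
  -f custom_values.yaml \
  --create-namespace --namespace superstream --wait
```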
3. Expose (optional; required only when client connectivity from outside the cluster is needed)
To allow clients outside the Kubernetes environment to connect, the Superstream engine must be exposed on port 4222 outside of the Kubernetes cluster where Superstream is deployed.
Here is an example YAML file to illustrate the required service configuration:
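The manifest below is a minimal sketch of such a service. The service name, namespace, and selector labels are assumptions; the selector must match the labels your Superstream/NATS pods actually carry.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: superstream-external   # hypothetical name
  namespace: superstream       # adjust to your deployment namespace
spec:
  type: LoadBalancer           # or NodePort, depending on your environment
  selector:
    app.kubernetes.io/name: nats   # assumption: must match your NATS pod labels
  ports:
    - name: client
      protocol: TCP
      port: 4222
      targetPort: 4222
```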
If there is no default storageClass configured for the Kubernetes cluster or there is a need to choose a custom storageClass, it can be done by specifying its name in the values.yaml file.
```yaml
# NATS storageClass configuration. The default is blank "".
jetstream:
  fileStore:
    pvc:
      storageClassName: ""
```
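To see which storage classes exist in the cluster, and which one is marked as the default, kubectl can be used:

```bash
# List available StorageClasses; the default one is marked "(default)"
kubectl get storageclass
```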
Appendix E - Deploy Superstream Engine with internal authentication mode enabled
To enable secure client authentication for the Superstream Engine, edit the values.yaml file and set the skipLocalAuthentication parameter to false.
```yaml
############################################################
# GLOBAL configuration for Superstream Engine
############################################################
global:
  engineName: ""                 # Define the Superstream engine name: up to 32 characters, excluding '.', using only lowercase letters, numbers, '-', and '_'.
  superstreamAccountId: ""       # Provide the account ID associated with the deployment.
  superstreamActivationToken: "" # Enter the activation token required for activation or authentication.
  skipLocalAuthentication: false
```
Appendix F - Deploy Superstream Engine using labels, tolerations, nodeSelector, etc.
To inject custom labels into all services deployed by Superstream, utilize the global.labels variable.
```yaml
############################################################
# GLOBAL configuration for Superstream Engine
############################################################
global:
  engineName: ""                 # Define the Superstream engine name: up to 32 characters, excluding '.', using only lowercase letters, numbers, '-', and '_'.
  superstreamAccountId: ""       # Provide the account ID associated with the deployment.
  superstreamActivationToken: "" # Enter the activation token required for activation or authentication.
  skipLocalAuthentication: true
  labels:
    tests: ok
```
To configure tolerations, nodeSelector, and affinity settings for each deployed service, make adjustments as shown in the following example:
```yaml
############################################################
# GLOBAL configuration for Superstream Engine
############################################################
global:
  engineName: ""                 # Define the Superstream engine name: up to 32 characters, excluding '.', using only lowercase letters, numbers, '-', and '_'.
  superstreamAccountId: ""       # Provide the account ID associated with the deployment.
  superstreamActivationToken: "" # Enter the activation token required for activation or authentication.
  skipLocalAuthentication: true

superstreamEngine:
  tolerations:
    - key: "app"
      value: "connectors"
      effect: "NoExecute"
syslog:
  tolerations:
    - key: "app"
      value: "connectors"
      effect: "NoExecute"
telegraf:
  tolerations:
    - key: "app"
      value: "connectors"
      effect: "NoExecute"
nats:
  podTemplate:
    merge:
      spec:
        tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/master
            operator: Exists
```
Best practices
Dev / Staging environments
Connecting your development/staging Kafka clusters to Superstream is recommended. This can be done either with one or more dedicated Superstream engines (data planes) per environment, or with the same engine that is connected to the production clusters.