Step 2: Engine deployment

This guide explains how to deploy a Superstream on-prem engine on a Kubernetes platform.

If your environment is completely isolated from the public internet, please use this procedure.

Overview

The Superstream Helm chart deploys the following pods:

  • 2 Superstream engines

  • 2 Superstream auto-scaler instances

  • 3 NATS brokers

  • 1 Superstream syslog adapter

  • 1 Telegraf agent for monitoring

It is highly recommended to deploy one engine per environment (dev, staging, prod).

Getting started

1. Configure Environment Tokens

Create a custom_values.yaml file and edit the relevant values (an example can be found here).

custom_values.yaml
############################################################
# GLOBAL configuration for Superstream Engine
############################################################
global:
  engineName: ""               # Define the superstream engine name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
  superstreamAccountId: ""                 # Provide the account ID associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
  superstreamActivationToken: ""           # Enter the activation token required for services or resources that need an initial token for activation or authentication.
  skipLocalAuthentication: true
############################################################
# NATS config
############################################################
# NATS HA Deployment. Default "true"
nats:
  config:
    cluster:
      enabled: true
# NATS storageClass configuration. Default is blank "".
    jetstream:
      fileStore:
        pvc:
          storageClassName: ""
    nats:
      port: 4222
      tls:
        enabled: false
        # set secretName in order to mount an existing secret to dir
        secretName: ""
        localCa:
          enabled: false
          secretName: ""          
############################################################
# Kafka Autoscaler config
############################################################
# Optional service to automatically scale the Kafka cluster up/down based on CPU and memory metrics  
autoScaler:
  enabled: true
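If TLS is enabled for NATS (nats.config.nats.tls.enabled: true), the Kubernetes secret referenced by secretName must already exist in the target namespace before the chart is installed. Below is a minimal sketch of creating such a secret from existing certificate files; the secret name and file paths are placeholders, and the exact secret format the chart expects should be verified against the chart documentation.

kubectl create secret tls superstream-nats-tls \
  --cert=./server.crt \
  --key=./server.key \
  --namespace superstream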

2. Deploy

  1. From the directory containing your custom_values.yaml file, run:

helm repo add superstream https://k8s.superstream.ai/ --force-update && helm install superstream superstream/superstream -f custom_values.yaml --create-namespace --namespace superstream --wait

Verify the deployment:

helm list --namespace superstream
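In addition to the Helm release status, you can verify that the pods listed in the overview are up and running:

kubectl get pods --namespace superstream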

3. Expose (optional; required only when client connectivity from outside the cluster is needed)

To allow clients outside the Kubernetes cluster where Superstream is deployed to connect, the Superstream engine must be exposed on port 4222 outside of that cluster.

Here is an example YAML file to illustrate the required service configuration:

apiVersion: v1
kind: Service
metadata:
  name: superstream-host-external
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-name: superstream-host-external
spec:
  ports:
  - name: superstream-host-external
    port: 4222
    protocol: TCP
    targetPort: 4222
  selector:
    app.kubernetes.io/component: nats
    app.kubernetes.io/instance: nats
    app.kubernetes.io/name: nats
  type: LoadBalancer
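Assuming the manifest above is saved as superstream-host-external.yaml (the file name is just an example), it can be applied to the namespace where Superstream runs and the provisioned external address retrieved as follows:

kubectl apply -f superstream-host-external.yaml --namespace superstream
kubectl get service superstream-host-external --namespace superstream

The EXTERNAL-IP column shows the load balancer address once the cloud provider has provisioned it.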

4. Enter Superstream Console

The Superstream Console is available at: https://app.superstream.ai

Appendices

Appendix A - Non-HA Deployment

For testing purposes only, Superstream can be deployed without HA capabilities. Set the following parameter to false in the custom_values.yaml file:

# NATS HA Deployment. Default "true"
nats:
  config:
    cluster:
      enabled: false
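As with any values change, re-run the install/upgrade command against the updated file for the setting to take effect:

helm upgrade --install superstream superstream/superstream -f custom_values.yaml --namespace superstream --wait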

Appendix B - Superstream Update

  1. Retrieve the most recent version of the Superstream Helm chart:

helm repo add superstream https://k8s.superstream.ai/ --force-update

  2. Make sure to use the same values:

helm get values superstream --namespace superstream

  3. Run the upgrade command:

helm upgrade --install superstream superstream/superstream -f custom_values.yaml --namespace superstream --wait
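After the upgrade completes, the deployed chart version can be confirmed with:

helm list --namespace superstream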

Appendix C - Uninstall

Steps to uninstall the Superstream Engine:

  1. Delete the Superstream Engine Helm release:

helm delete superstream -n <NAMESPACE>

  2. Remove persistent storage bound to the Engine:

It's crucial to delete the stateful storage linked to the Engine. Ensure you carefully specify the namespace in the command below before executing it:

kubectl delete pvc -l app.kubernetes.io/instance=superstream -n <NAMESPACE>
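If you want to review the affected claims before deleting them, they can be listed with the same label selector:

kubectl get pvc -l app.kubernetes.io/instance=superstream -n <NAMESPACE>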

Appendix D - Use Custom StorageClass

StorageClass definition

If no default storageClass is configured for the Kubernetes cluster, or a custom storageClass is needed, specify its name in the custom_values.yaml file:

# NATS storageClass configuration. The default is blank "".
nats:
  config:
    jetstream:
      fileStore:
        pvc:
          storageClassName: ""
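To check which storage classes exist in the cluster, and which one (if any) is marked as the default, you can run:

kubectl get storageclass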

Appendix E - Deploy Superstream Engine with internal authentication mode on

To enable secure client authentication for the Superstream Engine, edit the custom_values.yaml file and set the skipLocalAuthentication parameter to false:

############################################################
# GLOBAL configuration for Superstream Engine
############################################################
global:
  engineName: ""                   # Define the superstream engine name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
  superstreamAccountId: ""         # Provide the account ID associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
  superstreamActivationToken: ""   # Enter the activation token required for services or resources that need an initial token for activation or authentication.
  skipLocalAuthentication: false

Appendix F - Deploy Superstream Engine using labels, tolerations, nodeSelector, etc.

  • To inject custom labels into all services deployed by Superstream, utilize the global.labels variable.

############################################################
# GLOBAL configuration for Superstream Engine
############################################################
global:
  engineName: ""                   # Define the superstream engine name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
  superstreamAccountId: ""         # Provide the account ID associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
  superstreamActivationToken: ""   # Enter the activation token required for services or resources that need an initial token for activation or authentication.
  skipLocalAuthentication: true
  
  labels:
    tests: ok

  • To configure tolerations, nodeSelector, and affinity settings for each deployed service, make the adjustments shown in the following example:

############################################################
# GLOBAL configuration for Superstream Engine
############################################################
global:
  engineName: ""                   # Define the superstream engine name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
  superstreamAccountId: ""         # Provide the account ID associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
  superstreamActivationToken: ""   # Enter the activation token required for services or resources that need an initial token for activation or authentication.
  skipLocalAuthentication: true
  
superstreamEngine:
  tolerations:
  - key: "app"
    value: "connectors"
    effect: "NoExecute"
syslog:
  tolerations:
  - key: "app"
    value: "connectors"
    effect: "NoExecute"
telegraf:
  tolerations:
  - key: "app"
    value: "connectors"
    effect: "NoExecute"
nats:
  podTemplate:
    merge:
      spec:
        tolerations:
          - effect: NoSchedule
            key: node-role.kubernetes.io/master
            operator: Exists
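The example above covers tolerations only. Assuming each service block also accepts a nodeSelector field in the same way (an assumption to verify against the chart's default values.yaml), a sketch for pinning the engine pods to labeled nodes could look like this:

superstreamEngine:
  nodeSelector:
    disktype: ssd            # hypothetical node label; replace with a label that exists on your nodes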

Best practices

Dev / Staging environments

Connecting your development/staging Kafka clusters to Superstream is recommended. This can be done either with one or more dedicated Superstream engines (data planes) per environment, or with the same engine that is connected to the production clusters.
