Copyright Superstream Labs Inc. 2025


Step 2: Create a Kafka user


Superstream requires a Kafka user with the following configuration in order to communicate with and analyze connected clusters.

By Kafka flavor/vendor:

Confluent Cloud

Step 1: Create a new Confluent service account

In Confluent Console: Top-right menu -> Accounts & access -> Accounts -> Service Accounts -> "Add service account"

In the "Add service account" wizard:

  1. Name the service account "Superstream" (the service account name must include the word "Superstream")

  2. Set account type to "None"

  3. Click on each organization -> Add role assignment (top right) and add the following permissions:

    1. BillingAdmin - on the organization level

    2. ResourceKeyAdmin - on the organization level

  4. Optional: If you want Superstream to connect only to clusters in a specific environment, grant:

    1. EnvironmentAdmin - for each environment you want to connect with Superstream

  5. Optional: If you want Superstream to connect only to specific clusters, grant CloudClusterAdmin for each such cluster

    1. A dedicated Cluster API key with the specified ACLs is required for direct integration into the cluster:

      {"CLUSTER", "kafka-cluster", "LITERAL", "ALTER_CONFIGS", "ALLOW"},
      {"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE", "ALLOW"},
      {"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"},
      {"CLUSTER", "kafka-cluster", "LITERAL", "CREATE", "ALLOW"},
      
      // Consumer Group ACLs
      {"GROUP", "*", "LITERAL", "DELETE", "ALLOW"},
      {"GROUP", "*", "LITERAL", "DESCRIBE", "ALLOW"},
      {"GROUP", "*", "LITERAL", "READ", "ALLOW"},
      
      // Topic ACLs
      {"TOPIC", "*", "LITERAL", "ALTER", "ALLOW"},
      {"TOPIC", "*", "LITERAL", "ALTER_CONFIGS", "ALLOW"},
      {"TOPIC", "*", "LITERAL", "DELETE", "ALLOW"},
      {"TOPIC", "*", "LITERAL", "DESCRIBE", "ALLOW"},
      {"TOPIC", "*", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"},
      {"TOPIC", "*", "LITERAL", "READ", "ALLOW"},
      {"TOPIC", "superstream", "LITERAL", "CREATE", "ALLOW"},
      
      // Superstream topic ACLs
      {"TOPIC", "superstream.", "PREFIXED", "READ", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "WRITE", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "DELETE", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "DESCRIBE", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "DESCRIBE_CONFIGS", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "ALTER", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "ALTER_CONFIGS", "ALLOW"}
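The "superstream."-prefixed topic grants above can also be applied with the Confluent CLI. The sketch below only *prints* the commands rather than running them; the service-account ID is a placeholder, and the flag names are assumptions to verify against `confluent kafka acl create --help` before use.

```shell
# Sketch: emit `confluent kafka acl create` commands for the
# "superstream."-prefixed topic ACLs listed above.
# "sa-xxxxxx" is a placeholder service-account ID; verify the flag
# names against your CLI version before running the output.
gen_superstream_topic_acls() {
  sa_id="$1"
  for op in READ WRITE DELETE DESCRIBE DESCRIBE_CONFIGS ALTER ALTER_CONFIGS; do
    printf 'confluent kafka acl create --allow --service-account %s --operation %s --topic superstream. --prefix\n' "$sa_id" "$op"
  done
}

gen_superstream_topic_acls "sa-xxxxxx"
```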

Step 2: Create a Confluent Cloud Resource Management Key

In Confluent Console: Top-right menu -> API Keys -> + Add API key

Complete the wizard, then save the newly created credentials.
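If you prefer the CLI over the console wizard, a Cloud resource-management key can also be created non-interactively. This is a hedged sketch that only prints the command: `sa-xxxxxx` is a placeholder, and the `--resource cloud` convention is an assumption to verify against `confluent api-key create --help`.

```shell
# Sketch: build the CLI equivalent of the console wizard. The service
# account ID is a placeholder; flags are assumptions to verify locally.
cloud_key_cmd() {
  printf 'confluent api-key create --resource cloud --service-account %s\n' "$1"
}

cloud_key_cmd "sa-xxxxxx"
```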

AWS MSK

Option 1: Create or Update Superstream Role

  1. Enter required parameters (e.g., NodeGroupRoleArn).

  2. Acknowledge IAM resource creation.

  3. Click Create Stack or Update Stack (choose Update Stack if the Superstream IAM role already exists).

  4. Confirm status: CREATE_COMPLETE or UPDATE_COMPLETE.

  5. Click on Outputs to get the IAM Role details.

Option 2: Create or Update Superstream User

  1. Acknowledge IAM resource creation.

  2. Click Create Stack or Update Stack (choose Update Stack if the Superstream IAM user already exists).

  3. Confirm status: CREATE_COMPLETE or UPDATE_COMPLETE.

  4. Click on Outputs to get the programmatic user details.

  5. Create a new access key and secret for the user and use it in the SSM Console to connect the new cluster.
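Once a stack reports CREATE_COMPLETE or UPDATE_COMPLETE, the values on its Outputs tab can also be read with the AWS CLI. A sketch that only builds the command; the stack name is a placeholder for whatever you named the Superstream stack:

```shell
# Sketch: build the AWS CLI command that reads a stack's Outputs
# (equivalent to the console's Outputs tab). The stack name passed in
# is a placeholder.
stack_outputs_cmd() {
  printf 'aws cloudformation describe-stacks --stack-name %s --query Stacks[0].Outputs --output table\n' "$1"
}

stack_outputs_cmd "superstream-user"
```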

Be sure you’re signed in to the AWS Console with your default browser before launching the CloudFormation stack from the provided link.

Apache Kafka (Self-hosted)

Step 1: Create a dedicated Kafka user for Superstream

The Superstream user or token requires the following permissions:

  • Cluster-level:

    • Describe all topics, List all topics, Describe configs, Describe cluster

  • Topic-level:

    • Read: All topics

    • Alter: All topics

    • Delete: All topics

    • Describe: All topics

    • AlterConfigs: All topics

    • DescribeConfigs: All topics

  • Consumer group-level:

    • Describe

    • List Consumer Groups

    • Delete

ACL statement examples:

# Cluster-level permissions (Describe also covers listing topics)
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --cluster
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation DescribeConfigs --cluster

# Topic-level permissions
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Read --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Alter --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Delete --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation AlterConfigs --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation DescribeConfigs --topic '*'

# Consumer group-level permissions (Describe also covers listing groups)
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --group '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Delete --group '*'
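After granting the ACLs, it is worth verifying what the principal actually holds. A sketch that builds the listing command, using the same placeholder style as the grants above (the `--list --principal` filter requires a reasonably recent Kafka release):

```shell
# Sketch: build the kafka-acls command that lists all ACLs held by a
# given principal. Host and user values are placeholders.
list_acls_cmd() {
  printf 'kafka-acls --authorizer-properties zookeeper.connect=%s --list --principal User:%s\n' "$1" "$2"
}

list_acls_cmd "zk-host:2181" "superstream"
```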

Step 2: Connection information per cluster

The following information will be required for each cluster:

  • Bootstrap servers (Kafka URL)

  • Authentication security protocol (No auth / SSL / SASL_SSL)

    • SSL with validation "on" requires key.pem, cert.pem, and ca.pem files

  • JMX port and token
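For reference, a librdkafka-style client configuration matching the SASL_SSL case above might look like the following. Every value is a placeholder, and property names vary by client (Java clients, for example, use `ssl.truststore.*` settings instead of PEM file paths):

```
bootstrap.servers=<BOOTSTRAP_SERVERS>
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.username=<USER>
sasl.password=<PASSWORD>
ssl.ca.location=ca.pem
ssl.certificate.location=cert.pem
ssl.key.location=key.pem
```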