Copyright © 2025 Superstream Labs Inc.


Step 2: Create a Kafka user


Last updated 22 days ago


Superstream requires a Kafka user with the following configuration in order to communicate with and analyze connected clusters.

By Kafka flavor/vendor: Apache Kafka (Self-hosted), AWS MSK, or Confluent Cloud.

Apache Kafka (Self-hosted)

Step 1: Create a dedicated Kafka user for Superstream

For effective functioning, a user or token requires the following permissions:

  • Cluster-level:

    • Describe cluster, DescribeConfigs, and the ability to list and describe all topics

  • Topic-level:

    • Read: All topics

    • Alter: All topics

    • Delete: All topics

    • Describe: All topics

    • AlterConfigs: All topics

    • DescribeConfigs: All topics

  • Consumer group-level:

    • Describe

    • List consumer groups

    • Delete

ACL statement examples (shown with the legacy ZooKeeper-based authorizer flags; on newer clusters, replace --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> with --bootstrap-server <BROKER:PORT> --command-config <admin.properties>). Note that kafka-acls has no "List" operation; granting Describe on '*' is what enables listing:

# Cluster-level permissions
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --cluster
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation DescribeConfigs --cluster

# Topic-level permissions
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Read --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Alter --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Delete --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation AlterConfigs --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation DescribeConfigs --topic '*'

# Consumer group-level permissions
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --group '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Delete --group '*'
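These grants can also be scripted instead of typed one by one. A minimal sketch (the ZooKeeper host and principal are placeholders; it only writes the commands to a file so they can be reviewed before running):

```shell
# Sketch: emit the full set of kafka-acls grant commands for the Superstream user.
ZK="<ZK_HOST:PORT>"        # placeholder
PRINCIPAL="User:<USER>"    # placeholder
BASE="kafka-acls --authorizer-properties zookeeper.connect=$ZK --add --allow-principal $PRINCIPAL"

{
  # Cluster-level
  echo "$BASE --operation Describe --cluster"
  echo "$BASE --operation DescribeConfigs --cluster"
  # Topic-level, all topics
  for op in Read Alter Delete Describe AlterConfigs DescribeConfigs; do
    echo "$BASE --operation $op --topic '*'"
  done
  # Consumer-group-level
  for op in Describe Delete; do
    echo "$BASE --operation $op --group '*'"
  done
} > grant-acls.sh

cat grant-acls.sh   # review, then apply with: sh grant-acls.sh
```

Writing to a file first makes it easy to diff the intended grants against `kafka-acls --list` output before applying anything.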

Step 2: Connection information per cluster

The following information is required for each cluster:

  • Bootstrap servers (Kafka URL)

  • Authentication security protocol (No auth / SSL / SASL_SSL)

    • SSL with validation "on" requires a key.pem, cert.pem, and ca.pem

  • JMX port and token
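For the SSL case, the three PEM files map onto Kafka client properties roughly as follows. This is a sketch assuming a Kafka 2.7+ client (which accepts PEM trust/keystores directly); all paths are placeholders:

```shell
# Sketch: client.properties for SSL with certificate validation "on" (mTLS).
# Paths are placeholders; adjust to where your PEM files actually live.
cat > client.properties <<'EOF'
security.protocol=SSL
ssl.truststore.type=PEM
ssl.truststore.location=/path/to/ca.pem
ssl.keystore.type=PEM
ssl.keystore.location=/path/to/client-keystore.pem
EOF

# Kafka's PEM keystore expects key.pem and cert.pem concatenated into one file:
#   cat key.pem cert.pem > client-keystore.pem
# Connectivity check (requires the Kafka CLI tools and a reachable cluster):
#   kafka-topics --bootstrap-server <BOOTSTRAP> --command-config client.properties --list
```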

AWS MSK

Step 1: Create or Update the Superstream Role

  1. Enter required parameters (e.g., NodeGroupRoleArn).

  2. Acknowledge IAM resource creation.

  3. Click Create Stack or Update Stack (choose Update Stack if the Superstream IAM role already exists).

  4. Confirm status: CREATE_COMPLETE or UPDATE_COMPLETE.

  5. Click on Outputs to get the IAM Role details.

Step 2: Create or Update Superstream User

  1. Acknowledge IAM resource creation.

  2. Click Create Stack or Update Stack (choose Update Stack if the Superstream IAM user already exists).

  3. Confirm status: CREATE_COMPLETE or UPDATE_COMPLETE.

  4. Click on Outputs to get the programmatic user details.

  5. Create a new access key (ID and secret) for the user and use it in the SSM Console to connect the new cluster.

Be sure you’re signed in to the AWS Console with your default browser, then:

  • click here (Superstream role stack)

  • click here (Superstream user stack)

Confluent Cloud

To connect Confluent Cloud clusters to Superstream, two types of API keys must be created:

Step 1: Create a new Confluent service account

In Confluent Console: Top-right menu -> Accounts & access -> Accounts -> Service Accounts -> "Add service account"

In the "Add service account" wizard:

  1. Name the service account "Superstream"

  2. Permissions ("+ Add role assignment"):

    1. For each organization: BillingAdmin, ResourceKeyAdmin, and MetricsViewer

    2. For each environment: MetricsViewer, DataDiscovery, and Operator

      1. For environment -> Schema Registry

        1. Select resource: All schema subjects

        2. Select role: ResourceOwner

    3. For each cluster: CloudClusterAdmin and MetricsViewer

      1. For each designated cluster -> Topics

        1. DeveloperRead: All topics

        2. DeveloperManage: All topics

      2. For each designated cluster -> Consumer Groups

        1. Read: All consumer groups

Step 2: Create a Confluent Cloud Resource Management Key

In Confluent Console: Top-right menu -> API Keys -> + Add API key

Follow these steps:

Save the newly created credentials under the cluster name.

Step 3: Create a dedicated API key per cluster

In Confluent Console: Left menu -> Home -> Environments -> <environment name> -> <cluster name> -> API Keys

Click on "+ Add key"

  • Choose "Service account" -> "Superstream" (The one we created in Step 1)

  • ACLs:

    1. Cluster

      1. ALTER_CONFIGS: ALLOW

      2. DESCRIBE: ALLOW

      3. DESCRIBE_CONFIGS: ALLOW

    2. Consumer Group

      1. Rule 1:

        1. Consumer group ID: *

        2. Pattern type: LITERAL

        3. Operation: Delete

        4. Permission: ALLOW

      2. Rule 2:

        1. Consumer group ID: *

        2. Pattern type: LITERAL

        3. Operation: Describe

        4. Permission: ALLOW

      3. Rule 3:

        1. Consumer group ID: *

        2. Pattern type: LITERAL

        3. Operation: Read

        4. Permission: ALLOW

    3. Topic

      1. Rule 1:

        1. Topic name: *

        2. Pattern type: LITERAL

        3. Operation: ALTER

        4. Permission: ALLOW

      2. Rule 2:

        1. Topic name: *

        2. Pattern type: LITERAL

        3. Operation: ALTER_CONFIGS

        4. Permission: ALLOW

      3. Rule 3:

        1. Topic name: *

        2. Pattern type: LITERAL

        3. Operation: DELETE

        4. Permission: ALLOW

      4. Rule 4:

        1. Topic name: *

        2. Pattern type: LITERAL

        3. Operation: DESCRIBE

        4. Permission: ALLOW

      5. Rule 5:

        1. Topic name: *

        2. Pattern type: LITERAL

        3. Operation: DESCRIBE_CONFIGS

        4. Permission: ALLOW

      6. Rule 6:

        1. Topic name: superstream

        2. Pattern type: LITERAL

        3. Operation: Create

        4. Permission: ALLOW

      7. Rule 7:

        1. Topic name: *

        2. Pattern type: LITERAL

        3. Operation: READ

        4. Permission: ALLOW

  • Save the newly created credentials under the cluster name.
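If you prefer the Confluent CLI to the Console, the per-cluster ACL rules above can be expressed as `confluent kafka acl create` commands. A sketch that writes them to a file for review; the service-account ID and the exact flag names (`--operations`, `--cluster-scope`, `--consumer-group`) are assumptions to verify against your CLI version:

```shell
# Sketch: the Console ACL rules as Confluent CLI commands (flag names assume
# a recent CLI; check `confluent kafka acl create --help`). SA is a placeholder.
SA="sa-xxxxxx"

{
  # Cluster scope
  for op in alter-configs describe describe-configs; do
    echo "confluent kafka acl create --allow --service-account $SA --operations $op --cluster-scope"
  done
  # Topics (Rules 1-5 and 7)
  for op in alter alter-configs delete describe describe-configs read; do
    echo "confluent kafka acl create --allow --service-account $SA --operations $op --topic '*'"
  done
  # Topic "superstream" (Rule 6)
  echo "confluent kafka acl create --allow --service-account $SA --operations create --topic superstream"
  # Consumer groups (Rules 1-3)
  for op in delete describe read; do
    echo "confluent kafka acl create --allow --service-account $SA --operations $op --consumer-group '*'"
  done
} > confluent-acls.sh

cat confluent-acls.sh   # review, then apply with: sh confluent-acls.sh
```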