
Step 2: Create a Kafka user

Superstream requires a Kafka user with the following configuration in order to communicate with and analyze connected clusters.

By Kafka flavor/vendor:

AWS MSK

Option 1: Create or Update Superstream Role

Make sure you're signed in to the AWS Console in your default browser, then click here to launch the CloudFormation template and:

  1. Enter required parameters (e.g., NodeGroupRoleArn).

  2. Acknowledge IAM resource creation.

  3. Click Create Stack or Update Stack (choose Update Stack if the Superstream IAM role already exists).

  4. Confirm status: CREATE_COMPLETE or UPDATE_COMPLETE.

  5. Click the Outputs tab to get the IAM role details (they can also be fetched with the AWS CLI, as sketched below).
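
If you prefer the command line, the stack status and outputs can also be retrieved with the AWS CLI. A minimal sketch; the stack name superstream-role is an assumption, substitute the name you gave the stack:

# Check the stack status (expect CREATE_COMPLETE or UPDATE_COMPLETE)
aws cloudformation describe-stacks --stack-name superstream-role \
  --query "Stacks[0].StackStatus" --output text

# Print the stack outputs, including the IAM role details
aws cloudformation describe-stacks --stack-name superstream-role \
  --query "Stacks[0].Outputs" --output table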

Option 2: Create or Update Superstream User

Make sure you're signed in to the AWS Console in your default browser, then click here to launch the CloudFormation template and:

  1. Acknowledge IAM resource creation.

  2. Click Create Stack or Update Stack (choose Update Stack if the Superstream IAM user already exists).

  3. Confirm status: CREATE_COMPLETE or UPDATE_COMPLETE.

  4. Click on Outputs to get the programmatic user details.

  5. Create a new access key (access key ID and secret access key) for the user and use it in the SSM Console to connect the new cluster, as sketched below.
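
A minimal AWS CLI sketch for creating that access key; the user name superstream is an assumption, use the name from the stack outputs:

# Create a new access key pair for the Superstream IAM user
aws iam create-access-key --user-name superstream

# The response includes AccessKeyId and SecretAccessKey; store them securely.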


Confluent Cloud

Step 1: Create a new Confluent service account

In Confluent Console: Top-right menu -> Accounts & access -> Accounts -> Service Accounts -> "Add service account"

In the "Add service account" wizard:

  1. Name the service account "Superstream" (The Service account name must include the word "Superstream".)

  2. Set account type to "None"

  3. Click on each organization -> Add role assignment (top right) and add the following permissions:

    1. BillingAdmin - on the organization level

    2. ResourceKeyAdmin - on the organization level

  4. Optional: To allow Superstream to connect only to clusters in specific environments, grant:

    1. EnvironmentAdmin - for each environment you want to connect with Superstream

  5. Optional: To allow Superstream to connect only to specific clusters, grant CloudClusterAdmin for each such cluster.

    1. A dedicated Cluster API key with the following ACLs is required for direct integration with the cluster (a Confluent CLI sketch follows the ACL list):

      {"CLUSTER", "kafka-cluster", "LITERAL", "ALTER_CONFIGS", "ALLOW"},
      {"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE", "ALLOW"},
      {"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"},
      {"CLUSTER", "kafka-cluster", "LITERAL", "CREATE", "ALLOW"},
      
      // Consumer Group ACLs
      {"GROUP", "*", "LITERAL", "DELETE", "ALLOW"},
      {"GROUP", "*", "LITERAL", "DESCRIBE", "ALLOW"},
      {"GROUP", "*", "LITERAL", "READ", "ALLOW"},
      
      // Topic ACLs
      {"TOPIC", "*", "LITERAL", "ALTER", "ALLOW"},
      {"TOPIC", "*", "LITERAL", "ALTER_CONFIGS", "ALLOW"},
      {"TOPIC", "*", "LITERAL", "DELETE", "ALLOW"},
      {"TOPIC", "*", "LITERAL", "DESCRIBE", "ALLOW"},
      {"TOPIC", "*", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"},
      {"TOPIC", "*", "LITERAL", "READ", "ALLOW"},
      {"TOPIC", "superstream", "LITERAL", "CREATE", "ALLOW"},
      
      // Superstream topic ACLs
      {"TOPIC", "superstream.", "PREFIXED", "READ", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "WRITE", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "DELETE", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "DESCRIBE", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "DESCRIBE_CONFIGS", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "ALTER", "ALLOW"},
      {"TOPIC", "superstream.", "PREFIXED", "ALTER_CONFIGS", "ALLOW"}

Step 2: Create a Confluent Cloud Resource Management Key

In Confluent Console: Top-right menu -> API Keys -> + Add API key

Follow the wizard, then create and save the newly generated credentials.
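
The same key can also be created with the Confluent CLI. A sketch, assuming the service-account ID sa-123456 from the previous step:

# Create a Cloud resource-management API key owned by the Superstream service account
confluent api-key create --resource cloud --service-account sa-123456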

Apache Kafka (Self-hosted)

Step 1: Create a dedicated Kafka user for Superstream

To function correctly, the user or token requires the following permissions:

  • Cluster-level:

    • Describe all topics, List all topics, Describe configs, Describe cluster

  • Topic-level:

    • Read: All topics

    • Alter: All topics

    • Delete: All topics

    • Describe: All topics

    • AlterConfigs: All topics

    • DescribeConfigs: All topics

  • Consumer group-level:

    • Describe

    • List Consumer Groups

    • Delete

ACL statement examples (on clusters without ZooKeeper access, replace --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> with --bootstrap-server <HOST:PORT> --command-config <admin.properties>; note that List is not a kafka-acls operation, so listing is covered by the Describe ACLs):

# Cluster-level permissions
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --cluster
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation DescribeConfigs --cluster

# Topic-level permissions
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Read --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Alter --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Delete --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation AlterConfigs --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation DescribeConfigs --topic '*'

# Consumer group-level permissions
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --group '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Delete --group '*'
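
To verify the grants, the same tool can list every ACL attached to the principal. A sketch using the same placeholders:

# List all ACLs granted to the Superstream principal
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --list --principal User:<USER>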

Step 2: Connection information per cluster

The following information is required for each cluster (a sample client configuration follows the list):

  • Bootstrap servers (Kafka URL)

  • Authentication security protocol (No auth / SSL / SASL_SSL)

    • SSL with certificate validation enabled requires key.pem, cert.pem, and ca.pem files

  • JMX port and token
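
A minimal sketch of a client properties file assembled from this information, assuming SASL_SSL with SCRAM authentication; hostnames, file paths, and credentials are placeholders:

# Bootstrap servers (Kafka URL)
bootstrap.servers=broker-1.example.com:9092

# Security protocol: PLAINTEXT (no auth), SSL, or SASL_SSL
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="superstream" \
  password="<PASSWORD>";

# For SSL with validation "on": PEM key/certificate and CA files (supported since Kafka 2.7)
ssl.keystore.type=PEM
ssl.keystore.location=/etc/superstream/key-and-cert.pem
ssl.truststore.type=PEM
ssl.truststore.location=/etc/superstream/ca.pem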
