
Step 2: Create a Kafka user
Superstream requires a Kafka user with the following configuration in order to communicate with and analyze connected clusters.
By Kafka flavor/vendor:
AWS MSK
Option 1: Create or Update Superstream Role
Be sure you’re signed in to the AWS Console with your default browser, then click here to:
Enter required parameters (e.g., NodeGroupRoleArn).
Acknowledge IAM resource creation.
Click Create Stack or Update Stack (choose Update Stack if the Superstream IAM role already exists).
Confirm status: CREATE_COMPLETE or UPDATE_COMPLETE.
Click on Outputs to get IAM Role details:
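The stack status and outputs can also be checked from the command line. A sketch assuming the AWS CLI is configured; `superstream-role` is a placeholder for the actual stack name you chose above:

```shell
# Check the stack status (expect CREATE_COMPLETE or UPDATE_COMPLETE).
# "superstream-role" is a placeholder stack name.
aws cloudformation describe-stacks \
  --stack-name superstream-role \
  --query "Stacks[0].StackStatus"

# Print the stack outputs (the Superstream IAM role details).
aws cloudformation describe-stacks \
  --stack-name superstream-role \
  --query "Stacks[0].Outputs"
```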

Option 2: Create or Update Superstream User
Be sure you’re signed in to the AWS Console with your default browser, then click here to:
Acknowledge IAM resource creation.
Click Create Stack or Update Stack (choose Update Stack if the Superstream IAM user already exists).
Confirm status: CREATE_COMPLETE or UPDATE_COMPLETE.
Click on Outputs to get the programmatic user details.

Create a new access key and secret for the user, and use them in the SSM Console to connect the new cluster.
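The access key can also be created with the AWS CLI. A sketch; `superstream` is a placeholder for the IAM user name shown in the stack's Outputs tab:

```shell
# "superstream" is a placeholder; use the user name from the stack's Outputs tab.
# The SecretAccessKey in the response is shown only once, so save it immediately.
aws iam create-access-key --user-name superstream
```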
Confluent Cloud
Step 1: Create a new Confluent service account
In Confluent Console: Top-right menu -> Accounts & access -> Accounts -> Service Accounts -> "Add service account"

In the "Add service account" wizard:
Name the service account "Superstream" (the service account name must include the word "Superstream").
Set the account type to "None".
Click on each organization -> Add role assignment (top right) and add the following permissions:
BillingAdmin - on the organization level
ResourceKeyAdmin - on the organization level
Optional: If you want Superstream to connect only to clusters in a specific environment, grant:
EnvironmentAdmin - for each environment you want to connect to Superstream
Optional: If you want Superstream to connect only to specific clusters, grant:
CloudClusterAdmin - for each such cluster
A dedicated cluster API key with the following ACLs is required for direct integration with the cluster:
// Cluster ACLs
{"CLUSTER", "kafka-cluster", "LITERAL", "ALTER_CONFIGS", "ALLOW"}
{"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE", "ALLOW"}
{"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"}
{"CLUSTER", "kafka-cluster", "LITERAL", "CREATE", "ALLOW"}
// Consumer Group ACLs
{"GROUP", "*", "LITERAL", "DELETE", "ALLOW"}
{"GROUP", "*", "LITERAL", "DESCRIBE", "ALLOW"}
{"GROUP", "*", "LITERAL", "READ", "ALLOW"}
// Topic ACLs
{"TOPIC", "*", "LITERAL", "ALTER", "ALLOW"}
{"TOPIC", "*", "LITERAL", "ALTER_CONFIGS", "ALLOW"}
{"TOPIC", "*", "LITERAL", "DELETE", "ALLOW"}
{"TOPIC", "*", "LITERAL", "DESCRIBE", "ALLOW"}
{"TOPIC", "*", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"}
{"TOPIC", "*", "LITERAL", "READ", "ALLOW"}
{"TOPIC", "superstream", "LITERAL", "CREATE", "ALLOW"}
// Superstream topic ACLs
{"TOPIC", "superstream.", "PREFIXED", "READ", "ALLOW"}
{"TOPIC", "superstream.", "PREFIXED", "WRITE", "ALLOW"}
{"TOPIC", "superstream.", "PREFIXED", "DELETE", "ALLOW"}
{"TOPIC", "superstream.", "PREFIXED", "DESCRIBE", "ALLOW"}
{"TOPIC", "superstream.", "PREFIXED", "DESCRIBE_CONFIGS", "ALLOW"}
{"TOPIC", "superstream.", "PREFIXED", "ALTER", "ALLOW"}
{"TOPIC", "superstream.", "PREFIXED", "ALTER_CONFIGS", "ALLOW"}
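These ACLs can also be granted from the command line with the Confluent CLI. A hedged sketch only: `sa-xxxxxx` is a placeholder service-account ID, and the exact flag and operation spellings vary by CLI version (older versions use `--operation` instead of `--operations`):

```shell
# Hedged sketch (Confluent CLI); sa-xxxxxx is a placeholder service-account ID,
# and flag/operation spellings may differ by CLI version.

# Cluster-scope ACLs
confluent kafka acl create --allow --service-account sa-xxxxxx \
  --operations describe,describe-configs,alter-configs,create --cluster-scope

# Consumer group ACLs
confluent kafka acl create --allow --service-account sa-xxxxxx \
  --operations read,describe,delete --consumer-group "*"

# Prefixed Superstream topic ACLs
confluent kafka acl create --allow --service-account sa-xxxxxx \
  --operations read,write,delete,describe,describe-configs,alter,alter-configs \
  --topic "superstream." --prefix
```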
Step 2: Create a Confluent Cloud Resource Management Key
In Confluent Console: Top-right menu -> API Keys -> + Add API key
Follow these steps:



Create the API key and save the credentials.
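The same key can alternatively be created with the Confluent CLI. A sketch; `sa-xxxxxx` is a placeholder for the Superstream service-account ID:

```shell
# Creates a Cloud resource management API key owned by the service account.
# sa-xxxxxx is a placeholder service-account ID; save the secret immediately,
# as it is shown only once.
confluent api-key create --resource cloud --service-account sa-xxxxxx
```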
Apache Kafka (Self-hosted)
Step 1: Create a dedicated Kafka user for Superstream
To function correctly, the user (or token) requires the following permissions:
Cluster-level:
Describe all topics, List all topics, Describe configs, Describe cluster
Topic-level:
Read: All topics
Alter: All topics
Delete: All topics
Describe: All topics
AlterConfigs: All topics
DescribeConfigs: All topics
Consumer group-level:
Describe
List consumer groups
Delete
ACL statement examples:
# Cluster-level permissions
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --cluster
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation DescribeConfigs --cluster
# Topic-level permissions (Describe on all topics also allows listing them)
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Read --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Alter --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Delete --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation AlterConfigs --topic '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation DescribeConfigs --topic '*'
# Consumer group-level permissions ("List" is not a kafka-acls operation;
# listing consumer groups is covered by the Describe ACLs)
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Describe --group '*'
kafka-acls --authorizer-properties zookeeper.connect=<ZK_HOST:PORT> --add --allow-principal User:<USER> --operation Delete --group '*'
# Note: on clusters without ZooKeeper (KRaft mode), replace
# --authorizer-properties zookeeper.connect=<ZK_HOST:PORT>
# with --bootstrap-server <BROKER:PORT> --command-config <admin-client.properties>
Step 2: Connection information per cluster
The following information will be required for each cluster:
Bootstrap servers (Kafka URL)
Authentication security protocol (No auth / SSL / SASL_SSL)
SSL with validation "on" requires key.pem, cert.pem, and ca.pem files
JMX port and token
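For SASL_SSL, the connection details above typically map to a Kafka client configuration like the following. This is a sketch, not the exact values Superstream uses: the SASL mechanism (SCRAM-SHA-512) and the file paths are assumptions to adapt to your setup:

```properties
# Sketch of a client config for SASL_SSL; mechanism and paths are assumptions.
bootstrap.servers=<BROKER:PORT>
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="<USER>" password="<PASSWORD>";
# With SSL certificate validation "on", point the truststore at ca.pem (PEM support
# requires Kafka clients 2.7+):
ssl.truststore.type=PEM
ssl.truststore.location=/path/to/ca.pem
```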