Preparations
Create a user
Step 1: Create a new policy
Log in to the AWS Console, navigate to the IAM section, and create a new policy with the permissions below (copy and paste):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EC2VpcEndpoint1",
      "Effect": "Allow",
      "Action": "ec2:CreateVpcEndpoint",
      "Resource": "arn:*:ec2:*:*:vpc-endpoint/*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/AWSMSKManaged": "true"
        },
        "StringLike": {
          "aws:RequestTag/ClusterArn": "*"
        }
      }
    },
    {
      "Sid": "EC2VpcEndpoint2",
      "Effect": "Allow",
      "Action": "ec2:CreateTags",
      "Resource": "arn:*:ec2:*:*:vpc-endpoint/*",
      "Condition": {
        "StringEquals": {
          "ec2:CreateAction": "CreateVpcEndpoint"
        }
      }
    },
    {
      "Sid": "EC2VpcEndpoint3",
      "Effect": "Allow",
      "Action": "ec2:DeleteVpcEndpoints",
      "Resource": "arn:*:ec2:*:*:vpc-endpoint/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/AWSMSKManaged": "true"
        },
        "StringLike": {
          "ec2:ResourceTag/ClusterArn": "*"
        }
      }
    },
    {
      "Sid": "EC2VpcEndpoint4",
      "Effect": "Allow",
      "Action": "ec2:CreateVpcEndpoint",
      "Resource": [
        "arn:*:ec2:*:*:vpc/*",
        "arn:*:ec2:*:*:security-group/*",
        "arn:*:ec2:*:*:subnet/*"
      ]
    },
    {
      "Sid": "IAM1",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "kafka.amazonaws.com"
        }
      }
    },
    {
      "Sid": "IAM2",
      "Effect": "Allow",
      "Action": "iam:CreateServiceLinkedRole",
      "Resource": "arn:aws:iam::*:role/aws-service-role/kafka.amazonaws.com/AWSServiceRoleForKafka*",
      "Condition": {
        "StringEquals": {
          "iam:AWSServiceName": "kafka.amazonaws.com"
        }
      }
    },
    {
      "Sid": "IAM3",
      "Effect": "Allow",
      "Action": "iam:CreateServiceLinkedRole",
      "Resource": "arn:aws:iam::*:role/aws-service-role/delivery.logs.amazonaws.com/AWSServiceRoleForLogDelivery*",
      "Condition": {
        "StringEquals": {
          "iam:AWSServiceName": "delivery.logs.amazonaws.com"
        }
      }
    },
    {
      "Sid": "Kafka",
      "Effect": "Allow",
      "Action": [
        "kafka:UpdateBrokerCount",
        "kafka:DescribeConfiguration",
        "kafka:ListScramSecrets",
        "kafka:ListKafkaVersions",
        "kafka:GetBootstrapBrokers",
        "kafka:ListClientVpcConnections",
        "kafka:UpdateBrokerType",
        "kafka:DescribeCluster",
        "kafka:ListClustersV2",
        "kafka:DescribeClusterOperation",
        "kafka:ListNodes",
        "kafka:ListClusterOperationsV2",
        "kafka:UpdateClusterConfiguration",
        "kafka:ListClusters",
        "kafka:GetClusterPolicy",
        "kafka:DescribeClusterOperationV2",
        "kafka:DescribeClusterV2",
        "kafka:ListReplicators",
        "kafka:ListConfigurationRevisions",
        "kafka:ListVpcConnections",
        "kafka:ListTagsForResource",
        "kafka:GetCompatibleKafkaVersions",
        "kafka:DescribeConfigurationRevision",
        "kafka:UpdateConfiguration",
        "kafka:ListConfigurations",
        "kafka:ListClusterOperations",
        "kafka:TagResource",
        "kafka:UntagResource",
        "kafka:DescribeVpcConnection",
        "kafka:DescribeReplicator"
      ],
      "Resource": "*"
    },
    {
      "Sid": "KafkaCluster",
      "Effect": "Allow",
      "Action": [
        "kafka-cluster:DescribeTransactionalId",
        "kafka-cluster:CreateTopic",
        "kafka-cluster:*Topic*",
        "kafka-cluster:AlterCluster",
        "kafka-cluster:Connect",
        "kafka-cluster:DeleteTopic",
        "kafka-cluster:ReadData",
        "kafka-cluster:DescribeTopicDynamicConfiguration",
        "kafka-cluster:AlterTopicDynamicConfiguration",
        "kafka-cluster:AlterGroup",
        "kafka-cluster:AlterClusterDynamicConfiguration",
        "kafka-cluster:DescribeGroup",
        "kafka-cluster:DescribeClusterDynamicConfiguration",
        "kafka-cluster:DeleteGroup",
        "kafka-cluster:DescribeCluster",
        "kafka-cluster:AlterTopic",
        "kafka-cluster:DescribeTopic",
        "kafka-cluster:WriteData"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Others",
      "Effect": "Allow",
      "Action": [
        "logs:ListLogDeliveries",
        "ec2:DescribeRouteTables",
        "logs:CreateLogDelivery",
        "logs:PutResourcePolicy",
        "logs:UpdateLogDelivery",
        "ec2:DescribeVpcEndpoints",
        "ec2:DescribeSubnets",
        "ec2:DescribeInstanceTypes",
        "cloudwatch:GetMetricData",
        "ce:GetCostAndUsage",
        "ec2:DescribeVpcAttribute",
        "cloudwatch:ListMetrics",
        "logs:GetLogDelivery",
        "kms:DescribeKey",
        "logs:DeleteLogDelivery",
        "firehose:TagDeliveryStream",
        "kms:CreateGrant",
        "logs:DescribeResourcePolicies",
        "s3:GetBucketPolicy",
        "logs:DescribeLogGroups",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeVpcs",
        "iam:SimulatePrincipalPolicy",
        "iam:GetUser",
        "iam:GetPolicy",
        "iam:GetPolicyVersion",
        "ce:GetCostAndUsageWithResources",
        "ce:ListTagsForResource",
        "ce:UpdateCostAllocationTagsStatus",
        "ce:ListCostAllocationTags",
        "ce:GetTags"
      ],
      "Resource": "*"
    }
  ]
}
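If you prefer the AWS CLI over the console, the same policy can be created from a saved JSON file. This is a minimal sketch, not the official procedure: the policy name SuperstreamPolicy and the file name are assumptions, and the heredoc below is a stand-in for the full policy document above.

```shell
# Sketch: create the IAM policy from the CLI instead of the console.
# Save the full JSON policy from the step above as superstream-policy.json;
# the heredoc below is only a stand-in so this sketch is self-contained.
AWS_CMD="${AWS_CMD:-echo aws}"   # dry-run by default; set AWS_CMD=aws to execute

cat > superstream-policy.json <<'EOF'
{"Version": "2012-10-17", "Statement": []}
EOF

# Fail early on malformed JSON before calling IAM.
python3 -m json.tool superstream-policy.json > /dev/null

$AWS_CMD iam create-policy \
  --policy-name SuperstreamPolicy \
  --policy-document file://superstream-policy.json
```

Keeping the policy in a file makes it easy to diff against future revisions of this document before re-applying it.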
Step 2: If you are using an IAM Role
Create a new role with the trusted entity type "Custom trust policy".
The "Principal" value will be provided by the Superstream team:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_ASSIGNED_TO_NODEGROUP>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
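The role creation can also be scripted. A sketch under stated assumptions: the role name Superstream-Role is hypothetical, and the placeholders in the trust policy stay exactly as given above until the Superstream team provides the real Principal.

```shell
# Sketch: create the IAM role with the custom trust policy from the CLI.
# "Superstream-Role" is an assumed name; <ACCOUNT_ID> and
# <ROLE_ASSIGNED_TO_NODEGROUP> are placeholders to be filled in later.
AWS_CMD="${AWS_CMD:-echo aws}"   # dry-run by default; set AWS_CMD=aws to execute

cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACCOUNT_ID>:role/<ROLE_ASSIGNED_TO_NODEGROUP>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Validate the trust policy before submitting it to IAM.
python3 -m json.tool trust-policy.json > /dev/null

$AWS_CMD iam create-role \
  --role-name Superstream-Role \
  --assume-role-policy-document file://trust-policy.json
```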
Step 3: Attach the policy created above to the role.
Step 4: Add the following AWS-managed policy to the IAM Role:
AWSBillingReadOnlyAccess
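Steps 3 and 4 reduce to two attach calls. A sketch, assuming the custom policy was named SuperstreamPolicy and the role Superstream-Role (both hypothetical names):

```shell
# Sketch: attach the custom policy and the AWS-managed billing policy to the role.
AWS_CMD="${AWS_CMD:-echo aws}"   # dry-run by default; set AWS_CMD=aws to execute
ACCOUNT_ID="123456789012"        # assumption: replace with your AWS account ID

$AWS_CMD iam attach-role-policy \
  --role-name Superstream-Role \
  --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/SuperstreamPolicy"

$AWS_CMD iam attach-role-policy \
  --role-name Superstream-Role \
  --policy-arn arn:aws:iam::aws:policy/AWSBillingReadOnlyAccess
```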
Step 2: If you are using an IAM User
Attach the policy created above to the AWS IAM user and use an access key to create the API key.
Step 3: Add the following AWS-managed policy to the IAM User:
AWSBillingReadOnlyAccess
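For the IAM user path, the attachment and access-key creation can be sketched as follows; the user name superstream and policy name SuperstreamPolicy are assumptions, not requirements:

```shell
# Sketch: attach the policies to an IAM user and create an access key
# (the access key pair becomes the API key credentials).
AWS_CMD="${AWS_CMD:-echo aws}"   # dry-run by default; set AWS_CMD=aws to execute
ACCOUNT_ID="123456789012"        # assumption: replace with your AWS account ID
USER_NAME="superstream"          # assumption: your IAM user's name

$AWS_CMD iam attach-user-policy \
  --user-name "$USER_NAME" \
  --policy-arn "arn:aws:iam::${ACCOUNT_ID}:policy/SuperstreamPolicy"

$AWS_CMD iam attach-user-policy \
  --user-name "$USER_NAME" \
  --policy-arn arn:aws:iam::aws:policy/AWSBillingReadOnlyAccess

# Returns AccessKeyId and SecretAccessKey; store them securely.
$AWS_CMD iam create-access-key --user-name "$USER_NAME"
```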
To connect Confluent Cloud clusters to Superstream, two types of API keys must be created:
Step 1: Create a new Confluent service account
In the Confluent Console: top-right menu -> Accounts & access -> Accounts -> Service accounts -> "Add service account"
In the "Add service account" wizard:
Name the service account "Superstream"
Permissions ("+ Add role assignment"):
For each organization: BillingAdmin and MetricsViewer
For each environment: MetricsViewer, DataDiscovery, Operator
For the environment's Schema Registry:
Select resource: All schema subjects
Select role: ResourceOwner
For each cluster: CloudClusterAdmin, MetricsViewer
For each designated cluster -> Topics:
DeveloperRead: All topics
DeveloperManage: All topics
For each designated cluster -> Consumer Groups:
Read: All consumer groups
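Where the Confluent CLI is installed and logged in, the wizard above can be approximated from the command line. This is a sketch only: the sa-/env- IDs are placeholders, and the exact flags can vary between Confluent CLI versions, so treat every flag here as an assumption to verify against `confluent iam --help`.

```shell
# Sketch: create the "Superstream" service account and one example role
# binding via the Confluent CLI. IDs are placeholders; repeat the
# role-binding command once per role/scope listed in the wizard above.
CONFLUENT_CMD="${CONFLUENT_CMD:-echo confluent}"   # dry-run by default

$CONFLUENT_CMD iam service-account create "Superstream" \
  --description "Superstream integration service account"

# Example: MetricsViewer at the environment scope.
$CONFLUENT_CMD iam rbac role-binding create \
  --principal User:sa-123abc \
  --role MetricsViewer \
  --environment env-123abc
```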
Step 2: Create a Confluent Cloud Resource Management Key
In the Confluent Console: top-right menu -> API Keys -> "+ Add API key"
Follow the wizard steps, create the key, and save the credentials under the cluster name.
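The same resource-management key can be created from the Confluent CLI by targeting the cloud resource; a sketch with a placeholder service-account ID (sa-123abc is hypothetical):

```shell
# Sketch: create a Cloud resource-management API key owned by the
# "Superstream" service account. The sa- ID is a placeholder.
CONFLUENT_CMD="${CONFLUENT_CMD:-echo confluent}"   # dry-run by default

$CONFLUENT_CMD api-key create \
  --resource cloud \
  --service-account sa-123abc \
  --description "Superstream resource management key"
```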
Step 3: Create a dedicated API key per cluster
In the Confluent Console: left menu -> Home -> Environments -> <environment name> -> <cluster name> -> API Keys
Click "+ Add key"
Choose "Service account" -> "Superstream" (the one created in Step 1)
ACLs:
Cluster
ALTER_CONFIGS: ALLOW
DESCRIBE: ALLOW
DESCRIBE_CONFIGS: ALLOW
Consumer Group
Rule 1: Consumer group ID: *, Pattern type: LITERAL, Operation: DELETE, Permission: ALLOW
Rule 2: Consumer group ID: *, Pattern type: LITERAL, Operation: DESCRIBE, Permission: ALLOW
Rule 3: Consumer group ID: *, Pattern type: LITERAL, Operation: READ, Permission: ALLOW
Topic
Rule 1: Topic name: *, Pattern type: LITERAL, Operation: ALTER, Permission: ALLOW
Rule 2: Topic name: *, Pattern type: LITERAL, Operation: ALTER_CONFIGS, Permission: ALLOW
Rule 3: Topic name: *, Pattern type: LITERAL, Operation: DELETE, Permission: ALLOW
Rule 4: Topic name: *, Pattern type: LITERAL, Operation: DESCRIBE, Permission: ALLOW
Rule 5: Topic name: *, Pattern type: LITERAL, Operation: DESCRIBE_CONFIGS, Permission: ALLOW
Rule 6: Topic name: superstream, Pattern type: LITERAL, Operation: CREATE, Permission: ALLOW
Rule 7: Topic name: *, Pattern type: LITERAL, Operation: READ, Permission: ALLOW
Create the key and save the credentials under the cluster name.
To function properly, the user or token requires the following permissions:
Cluster-level:
Describe all topics, List all topics, Describe configs, Describe cluster
Topic-level:
Read: All topics
Alter: All topics
Delete: All topics
Describe: All topics
AlterConfigs: All topics
DescribeConfigs: All topics
Read, Create, and Write: a single topic named superstream.metadata (a dedicated Superstream topic with infinite retention and a single partition).
Consumer group-level:
Describe
List Consumer Groups
ACL statement that grants Read access to a user named Superstream for all topics and consumer groups in the Kafka cluster:
kafka-acls --bootstrap-server <URL>:<PORT> --add --allow-principal User:Superstream --operation Read --topic '*' --group '*' --command-config <PATH_TO_CRED_FILE>
ACL statement that grants Describe access to a user named Superstream for all topics in the Kafka cluster:
kafka-acls --bootstrap-server <URL>:<PORT> --add --allow-principal User:Superstream --operation Describe --topic '*' --command-config <PATH_TO_CRED_FILE>
ACL statement that grants DescribeConfigs
access to a user named Superstream for all topics
in the Kafka cluster:
kafka-acls --bootstrap-server <URL>:<PORT> --add --allow-principal User:Superstream --operation DescribeConfigs --topic '*' --command-config <PATH_TO_CRED_FILE>
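The statements above can be wrapped in a small loop so all grants are applied in one pass. A sketch that keeps the document's placeholders as-is; the dry-run variable is an addition of this sketch, not part of kafka-acls:

```shell
# Sketch: grant the topic-level operations to User:Superstream in one pass.
# <URL>:<PORT> and <PATH_TO_CRED_FILE> are the same placeholders used above.
BOOTSTRAP="<URL>:<PORT>"
CRED_FILE="<PATH_TO_CRED_FILE>"
KAFKA_ACLS="${KAFKA_ACLS:-echo kafka-acls}"   # dry-run by default

for OP in Read Describe DescribeConfigs; do
  $KAFKA_ACLS --bootstrap-server "$BOOTSTRAP" --add \
    --allow-principal User:Superstream \
    --operation "$OP" --topic '*' \
    --command-config "$CRED_FILE"
done

# Read access is also needed on all consumer groups:
$KAFKA_ACLS --bootstrap-server "$BOOTSTRAP" --add \
  --allow-principal User:Superstream \
  --operation Read --group '*' \
  --command-config "$CRED_FILE"
```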