Step 4: What's next

Congratulations on reaching this stage!

Here is what's happening now:

  • Superstream performs ongoing analysis of your clusters to highlight system insights, diagnose health problems, and recommend performance optimizations.

  • The first batch of insights will be available within a couple of minutes.

  • While the Superstream local agent is connected to your clusters, the system continuously samples data on an hourly and daily basis to refresh insights and optimization recommendations.

  • When automation is configured, SuperClient and SuperCluster will automatically initiate fixes for any detected issues.

Kafka-related

Agent (Engine) deployment

Step 2: Create a Kafka User

Superstream requires a Kafka user with the following configuration in order to communicate with and analyze connected clusters.

By Kafka flavor/vendor:

AWS MSK


Datadog Integration: JMX Requirements for Kafka

To successfully integrate Datadog with Apache Kafka for metrics collection and monitoring, JMX (Java Management Extensions) must be enabled.

Configuration Steps

For Self-Managed Apache Kafka

Requirement: Native JMX must be enabled on all Kafka brokers

Superstream Agent (Engine) Deployment using existing secrets

When the ACTIVATION_TOKEN cannot be exposed in the values.yaml file, it is possible to provide it to the Agent using a pre-created Kubernetes secret containing the relevant data. Follow these steps to create and configure the secret:

Create Kubernetes Secret

Use the following command to create a Kubernetes secret with the required data:

Required JMX rules

Superstream

Superstream automates Kafka optimization so you can focus on building, not babysitting brokers and clients.

Kafka Optimization. Automated. End-to-End.

Superstream is a fully automated optimization platform for Apache Kafka that continuously analyzes and tunes both your clusters and clients. It helps engineering teams reduce cloud costs, improve reliability, and eliminate Kafka configuration drift—without needing deep expertise or manual intervention.

⚙️ Cluster Optimization

Superstream inspects your cluster daily to detect misconfigurations, inactive topics, idle consumer groups, and inefficient resource usage. It then applies right-sized recommendations or automatically remediates issues like over-provisioned brokers or misaligned topic settings.

🚀 Client Optimization

Zero-Code Client Tuning for Maximum Efficiency. By observing Kafka producer behavior in real time, Superstream suggests and applies optimized client-side settings—such as batching, compression type, and linger. This can reduce compute and data transfer costs by up to 60%, without requiring any code changes.

🔒 Reliable, Automated, and Fully Controllable

Observability-First with Optional Automation. Whether you prefer to review changes manually or turn on full auto-remediation, Superstream offers flexible automation settings, audit logs, and detailed reports—giving you complete control and confidence in every optimization.


Schema Registry Optimization

Superstream helps Confluent Cloud users reduce Schema Registry costs by automatically identifying and cleaning up unused schemas. This feature is built specifically for environments using Confluent’s Schema Registry with the topic name strategy, where schema proliferation can quickly inflate costs and clutter your registry.

🧹 Unused Schema Cleanup

Over time, Kafka clusters often accumulate schemas tied to topics that no longer exist or have been renamed. These orphaned schemas continue to occupy storage and increase billing, even though they serve no active producers or consumers.

Superstream automatically detects and cleans up those unused schemas:

  • Scans schema subjects and versions daily

  • Identifies schemas with no matching topic in the environment (based on topic name strategy)

  • Flags inactive or orphaned schemas for cleanup

  • Optionally auto-deletes them according to your automation settings

You can choose to run in observability-only mode to preview cleanup recommendations, or enable full automation to let Superstream safely remove stale schemas on its own. All cleanup actions are logged and reversible.

💰 Why It Matters

Confluent Cloud charges per stored schema and version. Over time, orphaned schemas can significantly inflate registry costs—especially in dynamic environments with frequent topic churn. By automatically cleaning them up, Superstream helps you:

  • Reduce Confluent Schema Registry costs

  • Keep your registry organized and lightweight

  • Avoid manual cleanup or risk of deleting active schemas

Schema Registry Optimization is available exclusively for Confluent Cloud users and integrates seamlessly with Superstream’s cluster health scans and automation engine.

Additional resources

Steps:

  1. Set JMX environment variables before starting Kafka:

  2. For Kubernetes/Strimzi, add to the Kafka custom resource:

  3. Restart Kafka brokers to apply changes

  4. Verify JMX is accessible:

For AWS MSK (Managed Streaming for Kafka)

Requirement: Open Monitoring with Prometheus must be enabled

Steps:

  1. Via AWS Console:

    • Navigate to Amazon MSK console

    • Select your cluster → "Edit monitoring"

    • Enable "Open monitoring with Prometheus"

    • Check "JMX Exporter"

    • Save changes

  2. Via AWS CLI

Verification: MSK exposes metrics on port 11001 (JMX Exporter) after enabling
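
As a quick sanity check (a hedged sketch only; the broker DNS name is a placeholder, and the exporter typically serves Prometheus-formatted metrics at /metrics), curl the endpoint from a host inside the cluster VPC:

curl -s http://<broker-dns>:11001/metrics | head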

Note: Open Monitoring is offered at no additional cost for MSK clusters

Important Notes

  • Network Access:

    • For self-managed Kafka: Ensure JMX port (9999) is accessible from Datadog agents within your network.

    • For AWS MSK: Ensure security group rules allow inbound traffic on port 11001 (JMX Exporter) from Datadog agent instances.
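
For example, a hedged sketch of opening that port with the AWS CLI (both security-group IDs are placeholders for your MSK broker group and Datadog agent group):

aws ec2 authorize-security-group-ingress \
  --group-id <msk-broker-sg-id> \
  --protocol tcp \
  --port 11001 \
  --source-group <datadog-agent-sg-id>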

Generate ENCRYPTION_SECRET_KEY

To create the ENCRYPTION_SECRET_KEY, run the following command:

Specify the existing secret in custom_values.yaml

Indicate that you are using an existing secret by adding the following lines to your custom_values.yaml file:

Final Configuration

After configuring the secret, your overall configuration should look like this:

Deploy

kubectl create secret generic superstream-creds --from-literal=ACTIVATION_TOKEN=<TOKEN> --from-literal=ENCRYPTION_SECRET_KEY=<RANDOM_STRING_OF_32_CHAR> -n superstream
export JMX_PORT=9999
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=<BROKER_HOSTNAME> \
  -Dcom.sun.management.jmxremote.port=9999 \
  -Dcom.sun.management.jmxremote.rmi.port=9999"
spec:
  kafka:
    jmxOptions: {}
telnet <broker-host> 9999
aws kafka update-monitoring \
  --cluster-arn <CLUSTER_ARN> \
  --current-version <CLUSTER_VERSION> \
  --open-monitoring '{
    "Prometheus": {
      "JmxExporter": {
        "EnabledInBroker": true
      }
    }
  }'
openssl rand -hex 16
superstreamAgent:  
  secret:
    useExisting: true
############################################################
# GLOBAL configuration for Superstream Agent
############################################################
global:
  agentName: ""                       # Define the superstream agent name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
  superstreamAccountId: ""            # Provide the account ID associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
  superstreamActivationToken: ""      # Enter the activation token required for services or resources that need an initial token for activation or authentication.
############################################################

superstreamAgent:  
  secret:
    useExisting: true
helm repo add superstream-agent https://superstream-agent.k8s.superstream.ai/ --force-update && helm upgrade --install superstream superstream-agent/superstream-agent -f custom_values.yaml --create-namespace --namespace superstream --wait
Confluent Cloud

Aiven

Step 1: Create a Token

  1. In Aiven console: Click on user information (top right) -> Tokens -> Generate token

  2. Use the created credentials in the Superstream console.

Step 2: Creating a Kafka User

  1. Make sure the Kafka user you are giving to Superstream has the ACLs that appear below.

Other

Create a dedicated Kafka user for Superstream with the following ACLs

Getting started - Kafka Connect

This guide shows how to integrate superstream-client into Kafka Connect so your connectors are automatically optimized.

What you’ll set up

  • Kafka Source Connector with the superstream-client Java package

  • Required environment variables

  • Optional environment variables

1. Download the SuperClient package

Always ensure you are using the most up-to-date version of the SuperClient package from Maven Central, GitHub Releases, or your internal artifact repository.

From Maven Central – Add the dependency in your build system and include the JAR in your Kafka Connect image or plugin path.

Or,

From GitHub – Download the JAR and place it in the Kafka Connect plugin path (for example /opt/kafka/plugins/superstream-clients/).
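
For illustration, a minimal sketch of placing the downloaded JAR on the plugin path referenced above (the file name keeps the <latest> placeholder used later in this guide):

mkdir -p /opt/kafka/plugins/superstream-clients/
cp superstream-clients-<latest>.jar /opt/kafka/plugins/superstream-clients/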

2. Set Up Environment Variables

Required Environment Variable: Attaches the Superstream Java agent to the Connect JVM. Ensure the path matches the actual location of the JAR inside the container/host.

Optional Variables

  • SUPERSTREAM_LATENCY_SENSITIVE Set to "true" to prevent any modifications to linger.ms values.

  • SUPERSTREAM_DISABLED Set to "true" to disable all Superstream optimizations.

  • SUPERSTREAM_DEBUG Set to "true" to enable debug logs.

⚠️ If your environment already uses JAVA_TOOL_OPTIONS, append the -javaagent=... flag without overwriting existing options.
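
For example, a hedged sketch of appending the flag in a shell-based startup script rather than overwriting the variable:

# Keep any existing JAVA_TOOL_OPTIONS and append the Superstream agent flag
export JAVA_TOOL_OPTIONS="${JAVA_TOOL_OPTIONS:+$JAVA_TOOL_OPTIONS }-javaagent:/opt/kafka/plugins/superstream-clients/superstream-clients-<latest>.jar"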

Step 1: Agent Deployment

Superstream BYOC lets you run agents inside your own cloud—ideal when Kafka clusters can’t be exposed externally.

  • Deploy one or more agents, distributing clusters however you prefer.

  • Ensure your Docker or Kubernetes environment has network access to the target Kafka clusters.

  • Once the agent is running, you're ready for the next step.

Superstream MCP

Connect with Model Context Protocol (MCP)

Integrate Superstream MCP's remote server to allow AI assistants to access your Kafka directly.

MCP Server URL: https://api.superstream.ai/mcp

Kafka Cluster Optimization

Superstream automatically analyzes, optimizes, and remediates inefficiencies in Apache Kafka clusters. It helps platform teams keep their infrastructure lean, reliable, and cost-effective—without the manual grind of tuning broker configs or hunting down stale topics.

🔍 Daily Cluster Health Scans

Superstream runs a full diagnostic sweep of your cluster every 24 hours. It inspects topic configurations, broker metadata, partition distribution, resource usage, and consumer group activity to detect misconfigurations or signs of inefficiency.

  • Scans run automatically—no setup needed

Cursor

Add to your MCP configuration
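
For reference, a hedged sketch of what the remote-server entry might look like (the file location and exact JSON shape can differ between Cursor versions; merge it into any existing configuration rather than overwriting it):

# Sketch only: write a minimal MCP config pointing at the Superstream server
cat > ~/.cursor/mcp.json <<'EOF'
{
  "mcpServers": {
    "superstream": {
      "url": "https://api.superstream.ai/mcp"
    }
  }
}
EOF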

Add to Cursor

Click "Connect" to complete the connection with Superstream.

Claude

Connect via Claude Web

  • Please ensure you are on a Team, Enterprise, Pro, or Max plan, and that you are an account Owner

  • Open Admin Settings → Connectors → Add Custom Connector

  • Enter name "Superstream" and paste the server URL below

  • Open Settings → Connectors

  • Find and connect to "Superstream"

ChatGPT

Connect via ChatGPT Web

  • Open Settings → Apps & Connectors

  • Click on Advanced Settings

  • Turn on Developer Mode

  • Go back and click "Create" in the top right

  • Type in the name "Superstream" and paste the server URL below

  • Choose "OAuth" in the authentication dropdown

  • Leave Client ID and Client Secret empty


Available Tools

  • list_clusters

  • get_cluster_details

  • get_cluster_health

  • get_cluster_optimization_settings_and_their_potential_savings

  • get_cluster_autoscaling_policy

  • test_cluster_connectivity

  • get_cluster_savings

  • get_cluster_metrics

  • search_documentation

  • Results appear in the dashboard as a daily report

  • Highlights include config anomalies, unused topics, and skewed partitions

  • This gives your team visibility into drift and decay that typically go unnoticed until there's an outage.

    🔧 Auto-Remediation

    When enabled, Superstream doesn't just detect issues—it fixes them. Our automation engine applies safe remediations based on best practices, usage patterns, and your cluster-specific thresholds.

    • Fixes misaligned configs (e.g., retention.ms, cleanup.policy)

    • Resolves replication factor issues or ISR shrinkage

    • Normalizes partition count and distribution

    All remediations are logged and reversible. You can choose to run in observability-only mode first, and turn on automation gradually.

    📦 Cluster Right-Sizing

    Superstream evaluates broker resource usage (CPU, memory, disk, throughput) and matches it against your current infrastructure provisioning.

    • Works with AWS MSK and Aiven

    • Recommends better instance types or plan tiers

    • Flags over-provisioned and under-resourced setups

    • Automatically performs safe reconfiguration—including intelligent partition rebalancing—to align resources with actual workload demand

    This helps you reduce cloud costs without compromising performance.

    🧹 Idle Topics & Consumer Groups Cleanup

    Clusters often accumulate unused topics and consumer groups over time. Superstream identifies those and can clean them up automatically (with protection rules, if needed).

    • Detects topics with zero traffic or unassigned partitions

    • Identifies idle consumer groups that haven’t polled in weeks

    • Supports manual review or automated deletion

    You can protect critical topics with exclusion rules to avoid accidental cleanup.

    ⚙️ Topic Configuration Policies

    Superstream allows you to enforce organization-wide policies on how Kafka topics should be configured. This ensures consistency, prevents drift, and reduces risk across your entire environment.

    • Define global rules for critical configs like retention.ms, retention.bytes, min.insync.replicas, replication.factor and more.

    • Detects and auto-corrects drifted or non-compliant topic settings

    • Supports environment- or team-specific policies using tags or naming conventions

    • Automatically applies corrections or flags for manual approval depending on your automation settings

    These policies help standardize Kafka usage across services and teams—whether you’re running dozens or thousands of topics.

    Authentication

    Superstream provides flexible and secure authentication methods to suit teams of all sizes and access models.

    🔐 Native Authentication with RBAC and Tag-Based Permissions

    All users can authenticate using Superstream's native login system. Access is controlled using:

    • RBAC Roles:

      • admin: Full access to manage, configure, and automate.

      • read-only: View-only access without permission to modify settings.

    • Tag-Based Permissions:

      • Assign granular permissions by associating users with resource tags (e.g. team, environment, service).

      • Enables scoped visibility and control across large organizations.

    This system ensures users only see and interact with resources relevant to their role or team.

    🔐 Single Sign-On (SSO) via Active Directory

    Superstream supports SSO integration for enterprise customers using Active Directory.

    • SSO is available upon request—please contact our team to get started.

    • A custom user attribute named superstream_role must be defined to assign user permissions (admin or read-only).

    • Future support will include tag-based roles via directory attributes.

    For detailed guidance on setting up SSO with Active Directory, please reach out to support.

    Changelog

    - name: JAVA_TOOL_OPTIONS
      value: "-javaagent:/opt/kafka/plugins/superstream-clients/superstream-clients-<latest>.jar"
    🔑 Retrieve required info

    In the Superstream console (under your user profile):

    • Account ID – copy from console

    • Activation Token – copy from console

    • Agent Name – choose a unique name (max 32 chars, lowercase only). Allowed characters: a-z, 0-9, '-', '_'. Not allowed: '.'

    🐳 Deploy via Docker

    Run the following command to download and start the Superstream agent via Docker Compose:

    ☸️ Deploy via Kubernetes

    Superstream provides a Helm chart to deploy the agent.

    1. Create and configure custom_values.yaml. Define the required values, such as account ID, activation token, and agent name (view the example).

    2. Navigate to the directory containing your custom_values.yaml file and run:

    Deployment verification:
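
    A minimal check (a hedged sketch; resource names depend on your release name and namespace):

    kubectl get pods -n superstream
    helm status superstream -n superstream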

    📦 What Gets Deployed

    Whether using Docker or Kubernetes, the Superstream agent setup includes the following components:

    • superstream-data-plane Core service that connects to your Kafka clusters, collects metadata, and generates insights.

    • superstream-auto-scaler (optional) Automatically scales AWS MSK and Aiven Kafka clusters when enabled.

    • superstream-telegraf Monitors internal agent components for health and metrics.

    • superstream-datadog (optional) Collects and exports Kafka JMX metrics to Datadog, providing deep visibility into broker, topic, and consumer performance.

    Appendixes

    Appendix A - Superstream Update

    1. Retrieve the most recent version of the Superstream Helm chart

    2. Make sure to use the same values:

    3. Run the upgrade command:

    Appendix B - Uninstall

    Appendix C - Deploy Superstream Agent using labels, tolerations, nodeSelector, etc.

    • To inject custom labels into all services deployed by Superstream, utilize the global.labels variable.

    • To configure tolerations, nodeSelector, and affinity settings for each deployed service, make the adjustments shown in the following example:

    Appendix D - Deploy Superstream from local registry

    • To deploy Superstream from a local registry, override the default values using the global.image variable in the custom_values.yaml file:

    Kafka Clients Optimization

    Intelligent Kafka Client Optimization

    This is Superstream's solution for tuning and monitoring your Kafka clients—automatically and at scale. It analyzes real-time producer behavior, recommends or applies optimized client configurations, and helps platform teams reduce data transfer costs, improve throughput efficiency, and minimize load on Kafka brokers. Whether you’re running hundreds of microservices or a handful of batch jobs, Superstream ensures your clients are well-behaved, efficient, and production-ready.

    📡 Real-Time Client Observability

    Superstream continuously tracks Kafka producer activity and surfaces insights into how each client interacts with the system.

    • Monitors throughput, compression ratios, batching, and message sizes

    • Tracks client metadata like environment and topic usage

    • Highlights inefficient producers or topics

    This observability allows teams to understand behavior patterns that directly affect broker load, latency, and throughput.

    🧠 Smart Client Config Recommendations

    Superstream recommends optimal Kafka producer configurations based on observed patterns—without requiring any changes to your application code.

    • Suggests values for batch.size, linger.ms, compression.type

    • Recommendations tailored to actual runtime behavior and topic-level throughput

    • Includes per-topic savings estimates and efficiency scores

    By tuning these parameters, Superstream helps reduce broker CPU utilization, shrink network overhead, and stabilize throughput at scale.
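
    To make the parameters concrete, here is a hedged, illustrative-only example of the kind of settings involved, passed to the stock console producer (the values are placeholders, not Superstream recommendations):

    kafka-console-producer.sh \
      --bootstrap-server <broker:9092> \
      --topic orders \
      --producer-property compression.type=zstd \
      --producer-property batch.size=262144 \
      --producer-property linger.ms=20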

    📊 Topic-Level Savings Reports

    Every optimization is tied to real, measurable impact. Superstream provides detailed reporting on how much you’re saving—and where.

    • Visualize total data transfer and compute usage per topic

    • See estimated cost and resource savings after applying suggestions

    • Identify which clients or topics are most impactful to optimize

    This helps you prioritize tuning efforts and demonstrate the value of optimization.

    📉 Reduce Broker Load and Stabilize Infrastructure

    Client misconfigurations—like sending too many small messages or not compressing data—put unnecessary pressure on Kafka brokers. Superstream mitigates this at the source.

    • Reduces broker-side CPU and memory load

    • Helps avoid backpressure, ISR flapping, and queue buildup

    • Leads to smoother consumer behavior and more predictable system throughput

    Less noisy clients mean healthier Kafka clusters with fewer fire drills.

    🧩 Instrumentation-Only, No Code Changes Required

    Superstream integrates into your Kafka ecosystem as an instrumentation layer. It observes producer behavior and injects optimized configuration without requiring developers to modify application code.

    • Fully decoupled from client code

    This design enables organizations to enforce optimization standards and roll out tuning at scale—without introducing friction into developer workflows.

    How

    1. Superstream's local agent is deployed in your VPC and securely connects to designated clusters.

    2. Continuous analysis is performed per topic and partition, outputting the current recommended set of properties to maximize network efficiency.

    3. From this point, there are two options to proceed:

      1. Manual changes – Operators or engineers can review the recommended properties and apply them manually for each producer, in its source code / client.properties.

    FAQ

    Q: How dynamic are the config changes? I.e., How often are the optimizations re-evaluated and potentially changed?

    A: There are two ongoing processes involved:

    1. Daily Workload Analysis Every day, the system performs a workload analysis that may identify more optimal configuration properties. This means new recommendations could, in theory, be available on a daily basis.

    2. Application Restart Required for Changes However, for any new properties to take effect, the application must be restarted. Once the application starts with a particular set of optimized properties, it will continue operating with those settings until the next manual restart, rebuild, or redeployment.

    Q: We have some producers in Kafka clusters that we didn’t previously connect. Do we need to do anything for these clusters to work with Superstream?

    A: First, ensure the new cluster is connected, and the Superstream local agent has permission to analyze it. Then, install the Superstream package — and that’s it.

    Getting started - Python

    Kafka's performance can often be constrained by inefficient network usage—especially in high-throughput or multi-region deployments. Improving Kafka’s network efficiency means optimizing how data flows between clients and brokers, reducing bandwidth usage, minimizing latency, and ultimately ensuring cost-effective and reliable data pipelines.

    At Superstream, we make it easier to manage and optimize Kafka networking, particularly through our open-source superstream-clients library. This guide walks through how to use the library to boost network efficiency when interacting with Kafka.


    Superstream Client For Python

    A Python library for automatically optimizing Kafka producer configurations based on topic-specific recommendations.

    Overview

    Superstream Clients works as a Python import hook that intercepts Kafka producer creation and applies optimized configurations without requiring any code changes in your application. It dynamically retrieves optimization recommendations from Superstream and applies them based on impact analysis.

    Supported Libraries

    Works with any Python library that wraps a Kafka producer, including:

    • kafka-python

    • aiokafka

    • confluent-kafka

    • Faust

    Features

    • Zero-code integration: No code changes required in your application

    • Dynamic configuration: Applies optimized settings based on topic-specific recommendations

    • Intelligent optimization: Identifies the most impactful topics to optimize

    • Graceful fallback: Falls back to default settings if optimization fails


    Installation

    Superstream package:

    Step 0: Add permissions

    Any app that runs the Superstream lib should be able to READ/WRITE/DESCRIBE all topics with the prefix superstream.*
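
    For example, a hedged sketch of granting those permissions with the kafka-acls CLI (the principal name is a placeholder for your application's user):

    kafka-acls.sh --bootstrap-server <broker:9092> \
      --add --allow-principal User:<app-user> \
      --operation Read --operation Write --operation Describe \
      --topic superstream. --resource-pattern-type prefixed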

    Step 1: Install the Superstream lib

    Step 2: Add Environment Variables

    • SUPERSTREAM_TOPICS_LIST (required) – Comma-separated list of topics your application produces to. Example: orders,payments,user-events
    • SUPERSTREAM_LATENCY_SENSITIVE (optional, default false) – Set to true to prevent any modification to linger.ms values.
    • SUPERSTREAM_DISABLED (optional, default false) – Set to true to disable optimization.

    That's it! SuperClient will now automatically load and optimize all Kafka producers in your Python environment.

    After installation, SuperClient works automatically. Just use your Kafka clients as usual.

    Optional. Docker Integration

    When using Superstream Clients with containerized applications, include the package in your Dockerfile:

    Prerequisites

    • Python 3.8 or higher

    • Kafka cluster that is connected to the Superstream console

    • Read and write permissions to the superstream.* topics

    Superstream Agent Deployment using custom resource limits

    Superstream Agent is deployed with default resource limits designed to ensure high performance. In some cases, these configured limits may not be sufficient. To address potential performance bottlenecks, you can adjust the resource limits using the procedure outlined below.

    Specify the desired resource limits in custom_values.yaml.

    To adjust the resource limits for the Superstream Agent Data Plane, add the following configuration:

    superstreamAgent:  
      resources:
        limits:
          cpu: '8'
          memory: 8Gi

    To modify the resource limits for the Superstream Autoscaler, add the following configuration:

    autoScaller:  
      resources:
        limits:
          cpu: '8'
          memory: 8Gi

    Example: Final Configuration File

    Below is an example of a complete configuration file (custom_values.yaml) after setting custom resource limits:

    Deploy

    Once you have updated the custom_values.yaml file with your desired resource limits, deploy the Superstream Engine using Helm:

    Superstream Agent deployment for environments with a local container registry

    How to deploy a Superstream Agent in a fully air-gapped environment with a private container registry.

    This guide focuses on the critical applications managed by the Superstream Helm chart: Telegraf and Superstream. We will cover each application's Docker images managed by the chart, ensuring you have the information needed to deploy Superstream in your environment.

    Prerequisites

    How to deploy and manage Superstream using ArgoCD

    Deployment Process:

    Create Values YAML Files

    • For convenience, create a custom_values.yaml file and edit the relevant values; an example can be found here:

    Processed data

    This page describes the data and metadata processed by the Superstream engine.

    The data stays on the customer's premises, and only the metadata, which consists of calculated results, is transmitted according to the customer's choice to either:

    • The Superstream control plane located off the premises.

    • An on-premises control plane, ensuring that no data whatsoever leaves the premises.

    Nov 1, 2024

    V1.0.700

    We've been busy making things better, smarter, and faster.

    Latest updates:

    Compliance

    Superstream maintains a robust compliance posture, adhering to internationally recognized standards and regulations

    Superstream is committed to maintaining the highest standards of data security and compliance.

    Our platform is certified with leading industry standards, including ISO 27001 for information security management, GDPR for data protection, and SOC 2 (Type I+II) for managing customer data based on trust service principles. These certifications demonstrate our dedication to protecting client data and ensuring the integrity and confidentiality of your information.

    1. SOC 2 Type 1 and 2: Superstream meets the stringent requirements of Service Organization Control (SOC) 2, both Type 1 and Type 2. This ensures the platform's security, availability, processing integrity, confidentiality, and privacy of customer data are in line with the American Institute of Certified Public Accountants (AICPA) standards.

    2. ISO 27001: Superstream aligns with ISO 27001, a globally recognized standard for information security management systems (ISMS). This certification indicates a commitment to a systematic and ongoing approach to managing sensitive company and customer information.

    ACCOUNT_ID=<account id> AGENT_NAME=<name> ACTIVATION_TOKEN=<token> bash -c 'curl -o docker-compose.yaml https://raw.githubusercontent.com/superstreamlabs/helm-charts/master/docker/docker-compose.yaml && docker compose up -d'
    custom_values.yaml
    global:
      agentName: ""               # Define the superstream agent name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
      superstreamAccountId: ""
      superstreamActivationToken: ""
    helm repo add superstream-agent https://superstream-agent.k8s.superstream.ai/ --force-update && helm upgrade --install superstream superstream-agent/superstream-agent -f custom_values.yaml --create-namespace --namespace superstream --wait
    helm list
     helm repo add superstream-agent https://superstream-agent.k8s.superstream.ai/ --force-update
    helm get values superstream --namespace superstream
    helm upgrade --install superstream superstream-agent/superstream-agent -f custom_values.yaml --namespace superstream --wait
    helm delete superstream -n <NAMESPACE>
    global:
      agentName: ""                    # Define the superstream agent name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
      superstreamAccountId: ""         # Provide the account ID associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
      superstreamActivationToken: ""   # Enter the activation token required for services or resources that need an initial token for activation or authentication.
      
      labels:
        tests: ok
    global:
      agentName: ""                    # Define the superstream agent name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
      superstreamAccountId: ""         # Provide the account ID associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
      superstreamActivationToken: ""   # Enter the activation token required for services or resources that need an initial token for activation or authentication.
      
    superstreamAgent:
      tolerations:
      - key: "app"
        value: "connectors"
        effect: "NoExecute"
    autoScaler:
      tolerations:
      - key: "app"
        value: "connectors"
        effect: "NoExecute"
    telegraf:
      tolerations:
      - key: "app"
        value: "connectors"
        effect: "NoExecute"
    datadog:
      tolerations:
      - key: "app"
        value: "connectors"
        effect: "NoExecute"    
    global:
      agentName: ""                    # Define the superstream agent name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
      superstreamAccountId: ""         # Provide the account ID associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
      superstreamActivationToken: ""   # Enter the activation token required for services or resources that need an initial token for activation or authentication.
      
      image:
        # Global image pull policy to use for all container images in the chart
        # can be overridden by individual image pullPolicy
        pullPolicy:
        # Global list of secret names to use as image pull secrets for all pod specs in the chart
        # secrets must exist in the same namespace
        # https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
        pullSecretNames: []
        # Global registry to use for all container images in the chart
        # can be overridden by individual image registry
        registry: 
    A specific list of data to be fetched from each Kafka cluster:

    Topic Metadata:

    • List of Topics: Retrieve all available topics in the Kafka cluster.

    • Topic Configuration: Access detailed configuration settings for each topic, such as retention policies, partition count, replication factor, activity events, connected CGs, and segment file settings.

    • Partition Information: Details about each topic's partitions, including partition IDs and their current leaders.

    • Replica placements

    • Payload samples: The component that consumes the payload is the deployed local agent, which runs within your premises and, after analyzing the payload structure (regardless of its contents), immediately discards it.

    Consumer Group Metadata:

    • Consumer Group List: List of all consumer groups connected to the Kafka cluster.

    • Consumer Group Offsets: Information about the offset position for each consumer group in each partition of a topic.

    • Consumer Group State: Current state of consumer groups, like whether they are active or in a rebalance phase.

    • Static membership

    Broker Metadata:

    • Broker IDs and Addresses: Information about each broker in the Kafka cluster, including their IDs and network addresses.

    • Broker Configurations: Configuration details of each broker, like log file size limits, message size limits, and more.

    • Broker Metrics and Health: Data about broker performance, such as CPU and memory usage, network I/O, and throughput metrics.

    • Rack-awareness ID

    Cluster Metadata:

    • Cluster ID: Unique identifier of the Kafka cluster.

    • Controller Broker Info: Details about the current controller broker, which is responsible for maintaining the leader election for partitions.

    Log Metadata:

    • Log Size and Health: Information on the log size for each topic and partition, and details on log segment files.

    Brand new UI!

  • Superstream will now automatically discover clusters from a vendor API key

  • Charts and other impact-related information, such as savings, have been refactored, and calculations have been tuned

  • Recent fixes:

    • Autoscaler issues

    • UI fixes

    Thanks for using Superstream!

  • GDPR: Compliance with the General Data Protection Regulation (GDPR) underscores Superstream's dedication to data privacy and protection in accordance with European Union regulations. This includes ensuring user data rights and implementing appropriate data handling and processing measures.

  • By adhering to these standards, Superstream is committed to maintaining high levels of security and privacy, instilling trust in its users and stakeholders regarding data handling and protection. More information can be found in our legal hub.

  • Automatic changes – Use the Superstream for Kafka library. This library acts as a sidecar—an interceptor between the Superstream control plane and individual applications.

    1. Each application, during initialization and when connecting to an already analyzed topic, will receive an optimized set of properties tailored to its workload and topics.

    2. Superstream will overwrite any existing properties—such as compression.type, batch.size, and linger.ms—with optimized values.

    3. Results should be visible immediately through the Superstream Console or any other third-party APM tool.

  • Getting started - Java

    Sep 10, 2024

    V1.0.400

    We've been busy making things better, smarter, and faster.

    Latest updates:

    • UI performance improvements

    • Redpanda support is now generally available

    • Auditing page improvements

    • All client properties per client are collected and displayed in the UI

    • Ability to perform bulk fixes

    • System notifications center

    • Python library is now generally available: 2.4.0

    • New Java client library: 3.5.114

    Recent fixes:

    • Analysis algorithm improvements

    Thanks for using Superstream!

    Jan 11, 2025

    V1.1.100

    We've been busy making things better, wiser, and faster.

    Latest updates:

    • Autoscaler improvements: -force flag to force MSK clusters to scale down even when partition limitation is blocking the operation

    • Cluster auto-discovery improvements

    • New cluster cost report on each cluster's page

    • Cluster summary usage update

    • Warning indication for connected clusters in case some information is missing

    • Users management

    Recent fixes:

    • Autoscaler issues

    • UI fixes

    • Cluster information

    Thanks for using Superstream!

    Feb 2, 2025

    V1.0.200

    We've been busy making things better, wiser, and faster.

    Latest updates:

    • UI improvements: the ability to pin clusters, enhanced visibility at the all clusters page

    • Manual / Automatic mode: Ability to set Superstream automation to manual for validation before automatic execution

    • Notifications improvements

    Recent fixes:

    • Autoscaler issues

    • UI fixes

    Thanks for using Superstream!

    Legal

    Sep 24, 2024

    V1.0.500

    We've been busy making things better, smarter, and faster.

    Latest updates:

    • Topic protection's user experience was improved

    • The clients' tab was improved and enriched

    • Enhancements to the notifications center

    • Support for retrying all jobs

    • New Java client library: 3.5.116

    Recent fixes:

    • Empty owner in different audit logs

    • UI fixes

    • Issue when trying to add an ARN to an already connected MSK cluster

    • Enabled compression log

    Thanks for using Superstream!

    Dec 5, 2024

    V1.0.800

    We've been busy making things better, wiser, and faster.

    Latest updates:

    • Autoscaler improvements

    • Cluster auto-discovery improvements

    • Algorithms and calculations have been tuned

    • The Todo list has been removed. Tasks cannot be fixed manually or individually but rather through automation only

    • New cluster page

    • Users management - backend only.

    Recent fixes:

    • Autoscaler issues

    • UI fixes

    Thanks for using Superstream!

    ############################################################
    # GLOBAL configuration for Superstream Agent
    ############################################################
    global:
      agentName: "" # Define the superstream agent name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
      superstreamAccountId: "" # Provide the account ID associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
      superstreamActivationToken: "" # Enter the activation token required for services or resources that need an initial token for activation or authentication.
    
    superstreamAgent:  
      resources:
        limits:
          cpu: '8'
          memory: 8Gi
    1. Container images

    Please store the following images in your container registry:

    Telegraf: As a versatile agent for collecting, processing, and writing metrics, Telegraf is pivotal in monitoring and observability.

    • Helm version: 1.8.62

    • Container:

      • docker.io/library/telegraf:1.36-alpine

    Datadog: A powerful monitoring platform for collecting and alerting on Kafka JMX metrics in real time.

    • Helm version: 1.0.0

    • Containers:

      • gcr.io/datadoghq/agent:7.71.1-jmx

      • superstreamlabs/superstream-connection-config:latest

    Superstream: The agent itself.

    • Helm version: Releases

    • Helm Chart URL: https://superstream-agent.k8s.superstream.ai/

    • Containers:

      • superstreamlabs/superstream-data-plane-be:latest

      • superstreamlabs/superstream-kafka-auto-scaler:latest

    To ensure that your private repositories use the correct Docker images, follow these steps to pull images from public repositories and tag them for your private repository. Below are command examples for the related Docker images you might use:

    Telegraf (docker.io/library/telegraf:1.36-alpine):

    Superstream Agent (superstreamlabs/superstream-data-plane-be:latest):

    Superstream Autoscaler (superstreamlabs/superstream-kafka-auto-scaler:latest):

    Superstream Connection Config (superstreamlabs/superstream-connection-config:latest):

    Getting started

    1. Download Helm Chart

    Download the Superstream Helm chart from the official source as described above.

    2. Publish to Private Environments

    Once downloaded, publish the chart to your private Helm chart repositories. This step ensures that you maintain control over the versions and configurations of the chart used in your deployments.

    Docker Image Names: You must change the Docker image names within the Helmfile to reflect those stored in your private Docker registries. This customization is crucial for ensuring that your deployments reference the correct resources within your secure environment:

    3. Configure All Services to Use a Private Docker Repository and Private PullSecret

    For convenience, create or use a custom_values.yaml file and add the global.image section values; an example can be found here:

    4. Deploy

    To apply the Helmfile configurations and deploy your Kubernetes resources:

    Apply Helmfile: Run the following command to apply the Helmfile configuration. This will sync your Helm releases to match the state declared in your helmfile.yaml:

    helm repo add superstream-agent https://superstream-agent.k8s.superstream.ai/ --force-update && helm upgrade --install superstream superstream-agent/superstream-agent -f custom_values.yaml --create-namespace --namespace superstream --wait
    docker pull library/telegraf:1.36-alpine
    docker tag library/telegraf:1.36-alpine YOURREPOSITORY/library/telegraf:1.36-alpine
    docker push YOURREPOSITORY/library/telegraf:1.36-alpine
    docker pull superstreamlabs/superstream-data-plane-be:latest
    docker tag superstreamlabs/superstream-data-plane-be:latest YOURREPOSITORY/superstreamlabs/superstream-data-plane-be:latest
    docker push YOURREPOSITORY/superstreamlabs/superstream-data-plane-be:latest
    docker pull superstreamlabs/superstream-kafka-auto-scaler:latest
    docker tag superstreamlabs/superstream-kafka-auto-scaler:latest YOURREPOSITORY/superstreamlabs/superstream-kafka-auto-scaler:latest
    docker push YOURREPOSITORY/superstreamlabs/superstream-kafka-auto-scaler:latest
    docker pull superstreamlabs/superstream-connection-config:latest
    docker tag superstreamlabs/superstream-connection-config:latest YOURREPOSITORY/superstreamlabs/superstream-connection-config:latest
    docker push YOURREPOSITORY/superstreamlabs/superstream-connection-config:latest
    ```yaml
    ############################################################
    # GLOBAL configuration for Superstream Agent
    ############################################################
    global:
      agentName: ""                       # Define the superstream agent name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
      superstreamAccountId: ""            # Provide the account ID associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
      superstreamActivationToken: ""      # Enter the activation token required for services or resources that need an initial token for activation or authentication.
      
      image:
        # global image pull policy to use for all container images in the chart
        # can be overridden by individual image pullPolicy
        pullPolicy:
        # global list of secret names to use as image pull secrets for all pod specs in the chart
        # secrets must exist in the same namespace
        # https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
        pullSecretNames: []
        # global registry to use for all container images in the chart
        # can be overridden by individual image registry
        registry:
    ```
    helm repo add superstream <YOURREPOSITORY> --force-update
    helm install superstream superstream/superstream -f custom_values.yaml --create-namespace --namespace superstream --wait
  • FastAPI event publishers
  • Celery Kafka backends

  • Any custom wrapper around these Kafka clients


    https://pypi.org/project/superstream-clients

    Edit the global.image section in case it's an air-gapped environment.

    • The official values file with all available options can be found here.

    • Push these values.yaml files to your ArgoCD repository.

    Deploy Application YAMLs:

    Follow the examples below to deploy the application YAML files. Pay close attention to the comments provided for each:

    Sync and Monitor Deployment:

    • Once your values YAML files are pushed and the application YAMLs are deployed, navigate to your ArgoCD dashboard.

    • Find the application you just deployed and click the 'Sync' button to initiate the deployment process.

    • Monitor the deployment status to ensure all components are successfully deployed and running.

    By following these steps, you should be able to deploy and upgrade the Superstream Agent using ArgoCD successfully. If you have any questions or need further assistance, refer to the documentation or reach out to the support team.


    Option 1: Create or Update Superstream Role

    Be sure you’re signed in to the AWS Console with your default browser, then click here:

    1. Enter required parameters (e.g., NodeGroupRoleArn).

    2. Acknowledge IAM resource creation.

    3. Click Create Stack or Update Stack (choose Update Stack if the Superstream IAM role already exists).

    4. Confirm status: CREATE_COMPLETE or UPDATE_COMPLETE.

    5. Click on "Resources," then select "SuperstreamAgentRole" to retrieve the IAM Role ARN. Use this ARN in the Superstream console.

    Option 2: Create or Update Superstream User

    Be sure you’re signed in to the AWS Console with your default browser, then click here:

    1. Acknowledge IAM resource creation.

    2. Click Create Stack or Update Stack (choose Update Stack if the Superstream IAM user already exists).

    3. Confirm status: CREATE_COMPLETE or UPDATE_COMPLETE (appears on the left side of the screen).

    Step 3: Connect a Cluster

    Prerequisites Checklist

    Setting up Slack Notifications

    Superstream is designed to deliver a seamless experience, using lightweight notifications to keep you and your team informed about actions across your clusters.

    To configure notifications, please head to the page.

    For Slack notifications

    August 26, 2024

    August 22, 2024 —

    V1.0.300

    We've been busy making things better, smarter, and faster.

    Oct 14, 2024

    V1.0.600

    We've been busy making things better, smarter, and faster.

    Latest updates:

    pip install superstream-clients && python -m superclient install_pth
    SUPERSTREAM_TOPICS_LIST=orders,payments,user-events
    SUPERSTREAM_LATENCY_SENSITIVE=true
    FROM python:3.8-slim
    
    # Install superclient
    RUN pip install superstream-clients
    RUN python -m superclient install_pth
    
    # Your application code
    COPY . /app
    WORKDIR /app
    
    # Run your application
    CMD ["python", "your_app.py"]
    ############################################################
    # GLOBAL configuration for Superstream Agent
    ############################################################
    global:
      agentName: ""                    # Define the superstream engine name within 32 characters, excluding '.', and using only lowercase letters, numbers, '-', and '_'.
      superstreamAccountId: ""         # Provide the account ID associated with the deployment, which could be used for identifying resources or configurations tied to a specific account.
      superstreamActivationToken: ""   # Enter the activation token required for services or resources that need an initial token for activation or authentication.
      
      image:
        # global image pull policy to use for all container images in the chart
        # can be overridden by individual image pullPolicy
        pullPolicy:
        # global list of secret names to use as image pull secrets for all pod specs in the chart
        # secrets must exist in the same namespace
        # https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
        pullSecretNames: []
        # global registry to use for all container images in the chart
        # can be overridden by individual image registry
        registry:
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: superstream
      namespace: argocd
      labels:
        app.kubernetes.io/managed-by: Helm
    spec:
      destination:
        server: https://kubernetes.default.svc # Destination cluster
        namespace: superstream # Adjust the destination namespace with the file environments/default.yaml
      ignoreDifferences:
      - group: apps
        jsonPointers:
        - /spec/replicas
        kind: Deployment
      project: default
      sources:
      - chart: superstream
        helm:
          valueFiles:
          # Path to the values files in your ArgoCD repository.
          - $values/kubernetes-values/superstream/custom-values.yaml
        repoURL: https://superstream-agent.k8s.superstream.ai/
        targetRevision: 0.4.5 # Adjust chart version
      - ref: values
        # Your ArgoCD repository  
        repoURL: git@github.com:superstreamlabs/argocd-yamls.git
        targetRevision: master
    Latest updates:
    • All client properties are now being collected and can be viewed. This feature will be useful for the next release, as it allows users to modify them manually or via Superstream suggestions. Note: A client library upgrade is required.

    • Dashboard performance improvements and additional metrics

    • New optimization: Check for non-compliant retention policies

    • Auditing improvements

    • New Java client library: 3.5.113

    Recent fixes:

    • Minor engine issues

    • Error while switching compression algorithm

    Thanks for using Superstream!

    A new auto scaler for Aiven and AWS MSK!

  • An ability to remediate optimizations automatically!

  • Client properties modifications at runtime

  • Recent fixes:

    • Setting different enforced retention policies

    • UI fixes

    • Missing consumer params

    Thanks for using Superstream!

  • Log In to Slack
    • Navigate to Slack and log in to your workspace using your credentials.

  • Access Slack API Management

    • Visit the Slack API page.

    • Click on the "Your Apps" button in the top-right corner.

  • Create a New App

    • On the Your Apps page, click "Create an App".

    • Select "From Scratch".

    • Provide a name for your app and choose the workspace where you want to send messages.

    • Click "Create App".

  • Enable Incoming Webhooks

    • In the app settings, locate the "Features" section on the left-hand menu and click on "Incoming Webhooks".

    • Toggle the Activate Incoming Webhooks option to On.

  • Create a Webhook URL

    • Scroll down to the Webhook URLs for Your Workspace section.

    • Click "Add New Webhook to Workspace".

    • Choose the Slack channel where you want the messages to be sent.

    • Click "Allow" to grant the necessary permissions.

  • Copy the Webhook URL

    • Once created, you’ll see the webhook URL under the Webhook URLs for Your Workspace section.

    • Copy the URL to use it in your application or script.

  • In Superstream Console

    1. Toggle the REST channel

    2. Paste the URL

    3. Add the following header Content-type: application/json
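
    Optionally, a quick hedged check that the webhook works before pasting it into the console (the URL path is a placeholder):

    curl -X POST -H 'Content-type: application/json' \
      --data '{"text":"Superstream notification test"}' \
      https://hooks.slack.com/services/<your-webhook-path>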

  • System Settings

    Terms and Conditions

    These Terms govern your use of Superstream’s services (“Services”). By signing up or using the Services, you agree to these Terms.


    1. Services & Access

    • We provide access to our hosted platform and related tools.

    • You may use the Services for your company’s internal business purposes.

    • You are responsible for keeping login credentials secure and ensuring your team uses the Services properly.


    2. Customer Responsibilities

    • You’ll provide accurate information needed to set up and run the Services.

    • You’ll comply with all applicable laws, including data privacy rules.

    • You are responsible for the actions of your users within the Services.


    3. Fees & Payment

    • Fees are shown at checkout or in your order form.

    • Payments can be made by credit card or other supported methods.

    • All fees are exclusive of applicable taxes, which are your responsibility.


    4. Intellectual Property

    • Superstream owns all rights to the platform, technology, and related materials.

    • You own your data. We may use anonymized and aggregated data to provide analytics and improve our services.

    • You may not copy, resell, or use the Services to build a competing product.


    5. Confidentiality

    • Each party will keep the other’s confidential information safe and use it only as needed to provide or use the Services.

    • These confidentiality obligations last during the term of this Agreement and for 2 years after termination.


    6. Warranties & Liability

    • We provide the Services “as is,” but will make reasonable efforts to keep them available and reliable.

    • Our total liability to you is limited to the fees you paid in the 12 months before a claim.

    • Neither party is liable for indirect damages such as lost profits, revenue, or data.


    7. Term & Termination

    • You may cancel at any time by stopping use of the Services.

    • We may suspend or terminate your use if you violate these Terms, after giving reasonable notice and an opportunity to fix the issue.

    • Sections on confidentiality, IP, liability, and publicity will survive termination.


    8. Publicity & Logo Use

    • By signing up with a company email domain, you consent to Superstream displaying your company’s name and logo on our website and marketing materials as part of our customer list.

    • You may withdraw this consent at any time by contacting us at [email protected].


    9. Governing Law

    These Terms, and any dispute arising from them, will be governed by and interpreted under the laws of the State of New York, without regard to conflict of law principles. Any legal action or proceeding relating to these Terms shall be brought exclusively in the courts located in New York, New York, and both parties consent to the personal jurisdiction of those courts.

    SUPERSTREAM_DISABLED=true
    Click on "Resources
    "
    and then click on the created user called "SuperstreamAgentUser".
  • Click on the "Security Credentials" tab, then select "Create access key." Choose "Third-party service" and generate the key. Use this key in the Superstream Console.

    // cluster ACLs
    {"CLUSTER", "kafka-cluster", "LITERAL", "ALTER_CONFIGS", "ALLOW"}
    {"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE", "ALLOW"}
    {"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"}
    {"CLUSTER", "kafka-cluster", "LITERAL", "CREATE", "ALLOW"}
    
    // consumer groups ACLs
    {"GROUP", "*", "LITERAL", "DESCRIBE", "ALLOW"}
    {"GROUP", "*", "LITERAL", "READ", "ALLOW"}
    {"GROUP", "*", "LITERAL", "DELETE", "ALLOW"}
    
    // topics ACLs
    {"TOPIC", "*", "LITERAL", "ALTER", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "ALTER_CONFIGS", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "DELETE", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "DESCRIBE", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "READ", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "WRITE", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "CREATE", "ALLOW"}

    Step 1: Create a new Service Account

    1. In Confluent Console: Top-right menu -> Accounts & access -> Accounts -> Service Accounts -> "Add service account"

    2. Name the service account "Superstream" (The Service account name must include the word "Superstream".)

    3. Set account type to "None"

    4. Permissions:

      1. Organization -> Add role assignment (top right) and add the following permissions:

        1. MetricsViewer (* Required) - Allows Superstream to show metrics and cluster observability in the UI.

    Step 2: Create a Cloud Resource Management Key

    1. In Confluent Console: Top-right menu -> API Keys -> + Add API key

    2. Select the Service account

    3. Select Cloud Resource Management

    4. Use the created key in the Superstream console

    Step 1: Create a new Service Account

    1. In Confluent Console: Top-right menu -> Accounts & access -> Accounts -> Service Accounts -> "Add service account"

    2. Name the service account "Superstream" (The Service account name must include the word "Superstream".)

    Step 1: Fill in cluster details

    Each vendor has a slightly different connection approach.

    Confluent Cloud / AWS MSK

    Automatic cluster discovery will initiate once an API key is provided. Metrics will be collected via the vendor API.

    Aiven

    You'll need an API token, Kafka cluster connection details, and the project and service names.

    Redpanda / Apache (Self-hosted)

    No automatic cluster discovery. Each cluster should be added manually. To enable metric collection in Superstream, a JMX connection must also be configured.

    Superstream will fetch metrics from the /metrics endpoint, regardless of whether they are exposed by Prometheus exporters or directly from JMX sources.

    To get Apache Kafka JMX port and token information, here are the key approaches:

    Getting JMX Port

    1. Check Kafka Server Configuration

    • Look in your server.properties file for JMX-related settings

    • Common JMX port configurations:

    # Default JMX port is often 9999
    export JMX_PORT=9999
    # Or set via KAFKA_JMX_OPTS
    export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote.port=9999"

    2. Check Environment Variables

    3. Check Running Processes

    4. Check Startup Scripts

    • Look in kafka-server-start.sh or similar startup scripts

    • Check for JMX_PORT or KAFKA_JMX_OPTS variables

    Testing JMX Connection

    Common Default Locations

    • Confluent Platform: JMX typically on port 9581-9585

    • Standard Kafka: Often port 9999

    • Docker/Kubernetes: Check container environment variables

    If JMX isn't enabled, you'll need to configure it by adding the appropriate JMX options to your Kafka startup configuration.
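
    For example, a minimal way to enable remote JMX on a self-managed broker is shown below. This sketch disables JMX authentication and SSL, so it should only be used inside a trusted network; the port value and hostname are placeholders.

    # Option A: rely on Kafka's default JMX options and just choose a port
    export JMX_PORT=9999

    # Option B: set the JMX options explicitly
    export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote \
      -Dcom.sun.management.jmxremote.port=9999 \
      -Dcom.sun.management.jmxremote.authenticate=false \
      -Dcom.sun.management.jmxremote.ssl=false \
      -Djava.rmi.server.hostname=<broker-hostname>"

    bin/kafka-server-start.sh config/server.properties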

    Required JMX Rules/metrics

    To collect detailed Kafka JMX metrics, add the following rules section to the JMX Exporter YAML configuration. These patterns match Kafka server, network, controller, log, and JVM metrics and convert them into Prometheus-compatible metrics; include the full rules list to ensure comprehensive metric coverage.
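
    For reference, the rules file is typically loaded by attaching the Prometheus JMX Exporter as a Java agent to each broker. The sketch below is illustrative only; the jar path, exporter version, listen port, and config file path are placeholders for your own installation.

    # Attach the JMX Exporter agent and serve Prometheus metrics on port 7071
    export KAFKA_OPTS="-javaagent:/opt/jmx_prometheus_javaagent-0.20.0.jar=7071:/opt/kafka-jmx-exporter.yaml"
    bin/kafka-server-start.sh config/server.properties

    # Verify the /metrics endpoint is reachable
    curl -s http://localhost:7071/metrics | head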

    Step 3: Verify that all discovered or added clusters are in a healthy state

    When clusters are added or discovered, the system may surface warnings related to permissions or network connectivity. It’s recommended to resolve these promptly to ensure proper functionality.

    Step 4: What's next

    Firewall rules

    This page provides an overview of the networking required by Superstream.

    Firewall rules

    | Source | Destination | Port | Protocol | Scope | Endpoint(s) | Data | Est. daily volume |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | All pods | Telegraf | 6514 | UDP | Internal | - | Logs | 1-5 MBs/day |
    | Telegraf | All pods | 7777 | TCP | Internal | - | Logs | 1-5 MBs/day |
    | Agent | Kafka | Kafka Port | TCP | Internal | Kafka bootstrap URLs | Metadata such as topic names, consumer groups, configuration | 10-50 MBs/day |
    | Datadog | Kafka | Kafka JMX Port | TCP | External | https://*.us5.datadoghq.com | Metrics | 100-150 MBs/day |
    | Telegraf | Superstream Platform | 443 | HTTPS | External | https://loki.mgmt.superstream.ai, https://prometheus.mgmt.superstream.ai | Logs | 1-5 MBs/day |
    | Agent | Superstream Platform | 9440 | TCP | External | hr72spwylm.us-east-1.aws.clickhouse.cloud | Metadata such as topic names, consumer groups, configuration | 1-5 MBs/day |
    | Agent | Superstream Platform | 4222 | TCP | External | broker.superstream.ai | Client commands, remediations | < 1 MBs/day |
    | Agent | AWS / Aiven / Confluent API | 443 | TCP | External | ANY | Metrics, Billing | < 1 MBs/day |

    Getting started - Java

    Kafka's performance can often be constrained by inefficient network usage—especially in high-throughput or multi-region deployments. Improving Kafka’s network efficiency means optimizing how data flows between clients and brokers, reducing bandwidth usage, minimizing latency, and ultimately ensuring cost-effective and reliable data pipelines.

    At Superstream, we make it easier to manage and optimize Kafka networking, particularly through our open-source superstream-clients library. This guide walks through how to use the library to boost network efficiency when interacting with Kafka.


    Superstream Client For Java

    A Java library for automatically optimizing Kafka producer configurations based on topic-specific recommendations.


    # Test connection with JConsole
    jconsole localhost:9999
    
    # Or use command line tools
    jmxterm -l localhost:9999


    # Check environment variables for a configured JMX port
    echo $JMX_PORT
    env | grep JMX
    # Find Kafka process and check JMX arguments
    ps aux | grep kafka
    # Or use netstat to see what ports are listening
    netstat -tlnp | grep java
  • EnvironmentAdmin or ClusterAdmin (Required) - You must choose one of these. This defines whether Superstream can access an entire environment or only specific clusters.
  • BillingAdmin (* Optional) - Enables billing data and savings insights.

  • ResourceKeyAdmin (* Optional) - Lets Superstream auto-create API keys for the clusters it can access. Without it, you'll need to create keys manually and update each discovered cluster with its SASL credentials. You can limit the scope of this permission by explicitly setting EnvironmentAdmin in a specific environment. Once that setting exists in one particular environment, the ResourceKeyAdmin permission will no longer control the entire organization.

  • " (The Service account name must include the word "Superstream".)
  • Set account type to "None"

  • Permissions:

    1. Organization -> Add role assignment (top right) and add the following permissions:

      1. BillingAdmin (* Optional)

      2. MetricsViewer (* Required)

  • Step 2: Create a Cloud Resource Management Key

    1. In Confluent Console: Top-right menu -> API Keys -> + Add API key

    2. Select the Service account

    3. Select Cloud Resource Management

    4. Use the created key in the Superstream console

    Step 3: Create a Cluster-level API key

    1. In Confluent Console: Main menu -> Cluster -> API Keys -> + Add API key

    2. If ACLs are enabled, please use the following:

    For READ+WRITE (Superstream to perform actions)

    For READ only (Superstream to analyze only)

    1. Edit the cluster in the Superstream UI and enter the SASL credentials you created.

    rules:
      # Special cases and very specific rules
      - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
        name: kafka_server_$1_$2
        type: GAUGE
        labels:
          clientId: "$3"
          topic: "$4"
          partition: "$5"
    
      - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value
        name: kafka_server_$1_$2
        type: GAUGE
        labels:
          clientId: "$3"
          broker: "$4:$5"
    
      - pattern: kafka.server<type=(.+), cipher=(.+), protocol=(.+), listener=(.+), networkProcessor=(.+)><>connections
        name: kafka_server_$1_connections_tls_info
        type: GAUGE
        labels:
          cipher: "$2"
          protocol: "$3"
          listener: "$4"
          networkProcessor: "$5"
    
      - pattern: kafka.server<type=(.+), clientSoftwareName=(.+), clientSoftwareVersion=(.+), listener=(.+), networkProcessor=(.+)><>connections
        name: kafka_server_$1_connections_software
        type: GAUGE
        labels:
          clientSoftwareName: "$2"
          clientSoftwareVersion: "$3"
          listener: "$4"
          networkProcessor: "$5"
    
      - pattern: "kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+):"
        name: kafka_server_$1_$4
        type: GAUGE
        labels:
          listener: "$2"
          networkProcessor: "$3"
    
      - pattern: kafka.server<type=(.+), listener=(.+), networkProcessor=(.+)><>(.+)
        name: kafka_server_$1_$4
        type: GAUGE
        labels:
          listener: "$2"
          networkProcessor: "$3"
    
      # Percent metrics
      - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>MeanRate
        name: kafka_$1_$2_$3_percent
        type: GAUGE
    
      - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*><>Value
        name: kafka_$1_$2_$3_percent
        type: GAUGE
    
      - pattern: kafka.(\w+)<type=(.+), name=(.+)Percent\w*, (.+)=(.+)><>Value
        name: kafka_$1_$2_$3_percent
        type: GAUGE
        labels:
          "$4": "$5"
    
      # Generic per-second counters
      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+), (.+)=(.+)><>Count
        name: kafka_$1_$2_$3_total
        type: COUNTER
        labels:
          "$4": "$5"
          "$6": "$7"
    
      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+)><>Count
        name: kafka_$1_$2_$3_total
        type: COUNTER
        labels:
          "$4": "$5"
    
      - pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
        name: kafka_$1_$2_$3_total
        type: COUNTER
    
      # Generic gauges with optional key-value pairs
      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Value
        name: kafka_$1_$2_$3
        type: GAUGE
        labels:
          "$4": "$5"
          "$6": "$7"
    
      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Value
        name: kafka_$1_$2_$3
        type: GAUGE
        labels:
          "$4": "$5"
    
      - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Value
        name: kafka_$1_$2_$3
        type: GAUGE
    
      # Histogram-like metrics (summary emulation)
      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Count
        name: kafka_$1_$2_$3_count
        type: COUNTER
        labels:
          "$4": "$5"
          "$6": "$7"
    
      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*), (.+)=(.+)><>(\d+)thPercentile
        name: kafka_$1_$2_$3
        type: GAUGE
        labels:
          "$4": "$5"
          "$6": "$7"
          quantile: "0.$8"
    
      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Count
        name: kafka_$1_$2_$3_count
        type: COUNTER
        labels:
          "$4": "$5"
    
      - pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*)><>(\d+)thPercentile
        name: kafka_$1_$2_$3
        type: GAUGE
        labels:
          "$4": "$5"
          quantile: "0.$6"
    
      - pattern: kafka.(\w+)<type=(.+), name=(.+)><>Count
        name: kafka_$1_$2_$3_count
        type: COUNTER
    
      - pattern: kafka.(\w+)<type=(.+), name=(.+)><>(\d+)thPercentile
        name: kafka_$1_$2_$3
        type: GAUGE
        labels:
          quantile: "0.$4"
    
      # Controller metrics
      - pattern: kafka.controller<type=(ControllerChannelManager), name=(QueueSize), broker-id=(\d+)><>(Value)
        name: kafka_controller_$1_$2_$4
        labels:
          broker_id: "$3"
    
      - pattern: kafka.controller<type=(ControllerChannelManager), name=(TotalQueueSize)><>(Value)
        name: kafka_controller_$1_$2_$3
    
      - pattern: kafka.controller<type=(KafkaController), name=(.+)><>(Value)
        name: kafka_controller_$1_$2_$3
    
      - pattern: kafka.controller<type=(ControllerStats), name=(.+)><>(Count)
        name: kafka_controller_$1_$2_$3
    
      # Network metrics
      - pattern: kafka.network<type=(Processor), name=(IdlePercent), networkProcessor=(.+)><>(Value)
        name: kafka_network_$1_$2_$4
        labels:
          network_processor: "$3"
    
      - pattern: kafka.network<type=(RequestMetrics), name=(.+), request=(.+)><>(Count|Value)
        name: kafka_network_$1_$2_$4
        labels:
          request: "$3"
    
      - pattern: kafka.network<type=(SocketServer), name=(.+)><>(Count|Value)
        name: kafka_network_$1_$2_$3
    
      - pattern: kafka.network<type=(RequestChannel), name=(.+)><>(Count|Value)
        name: kafka_network_$1_$2_$3
    
      # Additional server metrics
      - pattern: kafka.server<type=(.+), name=(.+), topic=(.+)><>(Count|OneMinuteRate)
        name: kafka_server_$1_$2_$4
        labels:
          topic: "$3"
    
      - pattern: kafka.server<type=(ReplicaFetcherManager), name=(.+), clientId=(.+)><>(Value)
        name: kafka_server_$1_$2_$4
        labels:
          client_id: "$3"
    
      - pattern: kafka.server<type=(DelayedOperationPurgatory), name=(.+), delayedOperation=(.+)><>(Value)
        name: kafka_server_$1_$2_$3_$4
    
      - pattern: kafka.server<type=(.+), name=(.+)><>(Count|Value|OneMinuteRate)
        name: kafka_server_$1_total_$2_$3
    
      - pattern: kafka.server<type=(.+)><>(queue-size)
        name: kafka_server_$1_$2
    
      # Java memory and GC metrics
      - pattern: java.lang<type=(.+), name=(.+)><(.+)>(\w+)
        name: java_lang_$1_$4_$3_$2
    
      - pattern: java.lang<type=(.+), name=(.+)><>(\w+)
        name: java_lang_$1_$3_$2
    
      - pattern: java.lang<type=(.*)>
    
      # Kafka log metrics
      - pattern: kafka.log<type=(.+), name=(.+), topic=(.+), partition=(.+)><>Value
        name: kafka_log_$1_$2
        labels:
          topic: "$3"
          partition: "$4"
    // cluster ACLs
    {"CLUSTER", "kafka-cluster", "LITERAL", "ALTER_CONFIGS", "ALLOW"}
    {"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE", "ALLOW"}
    {"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"}
    {"CLUSTER", "kafka-cluster", "LITERAL", "CREATE", "ALLOW"}
    
    // consumer groups ACLs
    {"GROUP", "*", "LITERAL", "DELETE", "ALLOW"}
    {"GROUP", "*", "LITERAL", "DESCRIBE", "ALLOW"}
    {"GROUP", "*", "LITERAL", "READ", "ALLOW"}
    
    // topics ACLs
    {"TOPIC", "*", "LITERAL", "ALTER", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "ALTER_CONFIGS", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "DELETE", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "DESCRIBE", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "READ", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "WRITE", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "CREATE", "ALLOW"}
    {"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE", "ALLOW"}
    {"CLUSTER", "kafka-cluster", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"}
    {"CLUSTER", "kafka-cluster", "LITERAL", "CREATE", "ALLOW"}
    
    // consumer groups ACLs
    {"GROUP", "*", "LITERAL", "DESCRIBE", "ALLOW"}
    {"GROUP", "*", "LITERAL", "READ", "ALLOW"}
    
    // topics ACLs
    {"TOPIC", "*", "LITERAL", "DESCRIBE", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "DESCRIBE_CONFIGS", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "READ", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "WRITE", "ALLOW"}
    {"TOPIC", "*", "LITERAL", "CREATE", "ALLOW"}

    Overview

    Superstream Clients works as a Java agent that intercepts Kafka producer creation and applies optimized configurations without requiring any code changes in your application. It dynamically retrieves optimization recommendations from Superstream and applies them based on impact analysis.

    Supported Libraries

    Works with any Java library that depends on kafka-clients, including:

    • Apache Kafka Clients

    • Spring Kafka

    • Alpakka Kafka (Akka Kafka)

    • Kafka Streams

    • Kafka Connect

    • Any custom wrapper around the Kafka Java client

    Features

    • Zero-code integration: No code changes required in your application

    • Dynamic configuration: Applies optimized settings based on topic-specific recommendations

    • Intelligent optimization: Identifies the most impactful topics to optimize

    • Graceful fallback: Falls back to default settings if optimization fails

    Java Version Compatibility

    The library fully supports Java versions 11 through 21.

    Producer Configuration Suggestion

    When initializing your Kafka producers, please ensure you pass the configuration as a mutable object. The Superstream library needs to modify the producer configuration to apply optimizations.

    ✅ Fully Supported (Recommended):

    ❌ Not Fully Supported (Avoid if possible):

    Spring Applications

    Spring applications that use @Value annotations and Spring's configuration loading (like application.yml or application.properties) are fully supported. The Superstream library will be able to modify the configuration when it's loaded into a mutable Map or Properties object in your Spring configuration class.

    Example of supported Spring configuration:

    Pekko/Akka Kafka Applications

    Pekko and Akka Kafka applications typically use immutable configuration maps internally, which prevents Superstream from applying optimizations. To enable Superstream optimizations with Pekko/Akka, you need to create the KafkaProducer manually with a mutable configuration.

    ✅ Superstream-optimized pattern:

    ❌ Native Pekko/Akka pattern (optimizations won't be applied):

    Why This Matters

    The Superstream library needs to modify your producer's configuration to apply optimizations based on your cluster's characteristics. This includes adjusting settings like compression, batch size, and other performance parameters. When the configuration is immutable, these optimizations cannot be applied.


    Installation

    Superstream package: https://central.sonatype.com/artifact/ai.superstream/superstream-clients-java/overview

    Step 0: Add permissions

    Any application that runs the Superstream library must have READ, WRITE, and DESCRIBE permissions on all topics with the prefix superstream.*
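
    With ACL-based authorization, one way to grant this is a prefixed resource pattern, sketched below; the principal name and bootstrap address are placeholders for your own values.

    # Allow an application principal to read, write, and describe all superstream.* topics
    kafka-acls.sh --bootstrap-server broker-1:9092 --add \
      --allow-principal User:my-producer-app \
      --operation Read --operation Write --operation Describe \
      --topic superstream. --resource-pattern-type prefixed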

    Step 1: Add Superstream Jar to your application

    Download from GitHub https://github.com/superstreamlabs/superstream-clients-java/releases

    Available also in Maven Central https://central.sonatype.com/artifact/ai.superstream/superstream-clients

    Step 2: Add Environment Variables

    | ENV | Required? | Description | Example |
    | --- | --- | --- | --- |
    | SUPERSTREAM_LATENCY_SENSITIVE=false | No | Set to true to prevent any modification to linger.ms values | SUPERSTREAM_LATENCY_SENSITIVE=true |
    | SUPERSTREAM_DISABLED=false | No | Set to true to disable optimization | SUPERSTREAM_DISABLED=true |
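
    For example, to keep linger.ms untouched for a latency-sensitive service, export the variable before launching the instrumented application (the agent path below reuses the placeholder from Step 3):

    export SUPERSTREAM_LATENCY_SENSITIVE=true
    java -javaagent:/path/to/superstream-clients-1.0.17.jar -jar your-application.jar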

    Step 3: Instrument

    Add the Superstream Java agent to your application's startup command:

    Docker Integration

    When using Superstream Clients with containerized applications, include the agent in your Dockerfile:

    Prerequisites

    • Java 11 or higher

    • Kafka cluster that is connected to the Superstream console

    • Read and write permissions to the superstream.* topics

    Jun 26, 2025

    Here’s what’s new, fixed, and improved across the Superstream platform in this release.


    🐛 Bug Fixes

    • Fixed a request reduction miscalculation that was skewing optimization metrics.

    • Resolved an issue where topic filtering didn't update the Connected Clients view correctly.


    🎨 UX & UI Improvements

    • Agents Page

      • Added a prominent “+ Add Agent” button.

      • Now displays the agent version and flags whether an update is needed.

    • Dashboard & Agents Headers are now sticky for easier navigation.


    🧩 Product & Sandbox Enhancements

    • New CTA buttons on Sandbox to encourage signups: Add Cluster / Add Agent.

    • CTA banner added in the Sandbox to push users to sign up.

    • Clusters & Topics Pages:

      • Streamlined the clusters page to show only essential endpoints.


    📊 BI & Metrics

    • Added a “Saved Traffic” metric showing total savings across cost and network.

    • Validated and reviewed graph data across all accounts.

    • Updated backend logic for cost calculations and request-count charts.


    🔧 Backend Updates

    • Introduced Python support for client integrations.

    Executive Summary

    Superstream automates Kafka optimization so you can focus on building, not babysitting brokers and clients.

    Superstream Labs Inc. 800 N King Street, Suite 304, Wilmington, DE 19801 [email protected]

    Overview

    Kafka Optimization. Automated. End-to-End.

    Superstream is a fully automated optimization platform for Apache Kafka that continuously analyzes and tunes both your clusters and clients. It helps engineering teams reduce cloud costs, improve reliability, and eliminate Kafka configuration drift—without needing deep expertise or manual intervention.

    Key Features & Benefits

    🔧 SuperCluster – Intelligent Cluster Optimization

    • Daily Health Scans: Automatically inspects topic configs, consumer groups, partition distribution, and usage patterns to identify inefficiencies.

    • Auto-Remediation: Safely fixes misaligned topic configurations, replication factor issues, and skewed partitions—with full audit logging and optional manual review.

    • Cluster Right-Sizing: Evaluates actual broker resource consumption to recommend optimized MSK or Aiven plans, including automated safe rebalancing.

    🚀 SuperClient – Kafka Client Tuning Without Code Changes

    • Real-Time Observability: Monitors producer behavior, including batching, compression, and throughput per topic and environment.

    • Smart Configuration Suggestions: Recommends optimal settings like batch.size, linger.ms, and compression.type based on actual workload characteristics.

    • Broker Load Reduction: Minimizes CPU and memory pressure on Kafka brokers by making clients more efficient at the source.

    Getting Started

    1. Deploy Superstream agents within your infrastructure using a Helm chart.

    2. Connect Kafka Clusters: Establish secure connections between Superstream and your Kafka clusters, ensuring proper authentication and permissions.

    3. SuperCluster analyzes and remediates cluster inefficiencies daily.

    4. SuperClient observes producer workloads and delivers tailored configuration sets.

    Specifications

    Required permissions can be found here:

    Prerequisites for local agent deployment can be found here:

    Security & legal hub can be found here:

    Compliance

    Superstream is committed to maintaining the highest standards of data security and compliance.

    Our platform is certified to leading industry standards, including ISO 27001 for information security management, GDPR for data protection, and SOC 2 (Type I and Type II) for managing customer data based on trust service principles. These certifications demonstrate our dedication to protecting client data and ensuring the integrity and confidentiality of your information.

    • SOC 2 Type 1 and 2: Superstream meets the stringent requirements of Service Organization Control (SOC) 2, both Type 1 and Type 2. This ensures that the platform's security, availability, processing integrity, confidentiality, and privacy of customer data align with the American Institute of Certified Public Accountants (AICPA) standards.

    • ISO 27001: Superstream aligns with ISO 27001, a globally recognized standard for information security management systems (ISMS). This certification indicates a commitment to a systematic and ongoing approach to managing sensitive company and customer information.

    • GDPR: Compliance with the General Data Protection Regulation (GDPR) underscores Superstream's dedication to data privacy and protection in accordance with European Union regulations. This includes ensuring user data rights and implementing appropriate measures for data handling and processing.

    By adhering to these standards, Superstream is committed to maintaining high levels of security and privacy, instilling trust in its users and stakeholders regarding data handling and protection. More information can be found in our legal hub.

    Distinguished Customers

    “Superstream took a huge load off our plates. We used to spend hours tuning Kafka and manually managing cost optimizations. Now it just works in the background—smart, safe, and way more efficient.” Ilay Simon, Sr. DevOps Engineer // Orca Security

    “We plugged Superstream in, and within days, it started surfacing config issues and cost sinks we had no idea existed. The auto-tuning is legit—performance went up and our network overhead dropped noticeably.” Shem Tov Fisher, Kafka Staff Engineer // Solidus Labs

    “I was skeptical at first, but Superstream quickly proved its value. It understands our workload patterns better than we do and keeps our Kafka lean without us lifting a finger. It’s like having another engineer on the team.” Ami Machluf, Data Engineering TL // eToro ($ETOR)

    // Using Properties (recommended)
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    // ... other properties ...
    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    
    // Using a regular HashMap
    Map<String, Object> config = new HashMap<>();
    config.put("bootstrap.servers", "localhost:9092");
    // ... other properties ...
    KafkaProducer<String, String> producer = new KafkaProducer<>(config);
    
    // Using Spring's @Value annotations and configuration loading
    @Configuration
    public class KafkaConfig {
        @Value("${spring.kafka.bootstrap-servers}")
        private String bootstrapServers;
        // ... other properties ...
    
        @Bean
        public ProducerFactory<String, String> producerFactory() {
            Map<String, Object> configProps = new HashMap<>();
            configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            // ... other properties ...
            return new DefaultKafkaProducerFactory<>(configProps);
        }
    }
    // Using Collections.unmodifiableMap
    Map<String, Object> config = Collections.unmodifiableMap(new HashMap<>());
    KafkaProducer<String, String> producer = new KafkaProducer<>(config);
    
    // Using Map.of() (creates unmodifiable map)
    KafkaProducer<String, String> producer = new KafkaProducer<>(
        Map.of("bootstrap.servers", "localhost:9092")
    );
    
    // Using KafkaTemplate's getProducerFactory().getConfigurationProperties()
    // which returns an unmodifiable map
    KafkaTemplate<String, String> template = new KafkaTemplate<>(producerFactory);
    KafkaProducer<String, String> producer = new KafkaProducer<>(
        template.getProducerFactory().getConfigurationProperties()
    );
    # application.yml
    spring:
      kafka:
        producer:
          properties:
            compression.type: snappy
            batch.size: 16384
            linger.ms: 1
    @Configuration
    public class KafkaConfig {
        @Value("${spring.kafka.producer.properties.compression.type}")
        private String compressionType;
        
        @Bean
        public ProducerFactory<String, String> producerFactory() {
            Map<String, Object> configProps = new HashMap<>();
            configProps.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, compressionType);
            return new DefaultKafkaProducerFactory<>(configProps);
        }
    }
    // Add these lines to create a mutable producer
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    
    org.apache.kafka.clients.producer.Producer<String, String> kafkaProducer = new KafkaProducer<>(configProps);
    
    ProducerSettings<String, String> producerSettings = ProducerSettings
        .create(system, new StringSerializer(), new StringSerializer())
        .withProducer(kafkaProducer);
    
    Source.single(ProducerMessage.single(record))
        .via(Producer.flexiFlow(producerSettings))
        .runWith(Sink.ignore(), system);
    ProducerSettings<String, String> producerSettings = ProducerSettings
        .create(system, new StringSerializer(), new StringSerializer())
        .withBootstrapServers("localhost:9092");
    
    Source.single(ProducerMessage.single(record))
        .via(Producer.flexiFlow(producerSettings))
        .runWith(Sink.ignore(), system);
    java -javaagent:/path/to/superstream-clients-1.0.17.jar -jar your-application.jar
    FROM openjdk:11-jre
    
    WORKDIR /app
    
    # Copy your application
    COPY target/your-application.jar app.jar
    
    # Copy the Superstream agent
    COPY path/to/superstream-clients-1.0.17.jar superstream-agent.jar
    
    # Run with the Java agent
    ENTRYPOINT ["java", "-javaagent:/app/superstream-agent.jar", "-jar", "/app/app.jar"]
  • Cleaned up UI:

    • Removed the unused “Stepper”.

    • Updated inconsistent labels:

      • Kafka360 → supercluster

      • super-client → superclient

      • Inter cluster → Inter-cluster (chart title)

      • Chart labels: Produced bytes → Transfer In, Consumed bytes → Transfer Out

  • Clients Table:

    • Now sorts by newest clients first.

    • Aligned headlines and tables across the console for consistency.

    • Updated instruction text to “Add Clients” and replaced the AI icon with a more relevant one.

  • Cluster/Topic Filters:

    • Sorted lists so the selected item appears first (e.g. Priceline).

  • New Cluster Flow Enhancements:

    • Placeholders shown across all cluster components.

    • Streamlined flow: “+ Add a Cluster” opens a drawer to select an existing agent or add a new one.

  • Client Drawer:

    • Added a bar chart showing before/after reductions in message size, request count, and traffic.

  • Introduced the ability to sample and analyze workloads for topic discovery and optimization.

  • Tracked and now display the first appearance date for each client.

  • Inactive Resource Cleanup: Identifies and optionally removes idle topics and consumer groups, reducing clutter and resource waste.
  • Topic Configuration Policies: Enforce standardized settings for critical configs like retention.ms, cleanup.policy, and replication.factor to prevent drift.

  • Instrumentation-Based Delivery: Requires no application code changes—SuperClient acts as a sidecar to inject optimizations securely and transparently.

  • Daily Analysis Cycle: Workload tuning happens daily; changes are applied on next application startup or redeploy.

  • https://docs.superstream.ai/getting-started/option-1-byoc/step-1-preparations
    https://docs.superstream.ai/getting-started/option-1-byoc/step-1-agent-deployment
    https://docs.superstream.ai/security-and-legal/processed-data

    Privacy Policy

    This Privacy Policy was last revised on 31 Dec, 2024.

    1. Purpose of this Privacy Policy.

    Strech, Inc. (dba Superstream) is committed to protecting your privacy. We have prepared this Privacy Policy to describe to you our practices regarding the Personal Data (as defined below) we collect from users of our website located at superstream.ai and in connection with our Superstream products and services (the "Products"). In addition, this Privacy Policy tells you about your privacy rights and how the law protects you.

    It is important that you read this Privacy Policy together with any other privacy notice or fair processing notice we may provide on specific occasions when we are collecting or processing Personal Data about you so that you are fully aware of how and why we are using your data. This Privacy Policy supplements the other notices and is not intended to override them.

    2. Controller and Contact Details.

    Strech Inc. (collectively referred to as “Superstream,” “we,” “us” or “our” in this Privacy Policy) is the controller of Personal Data submitted in accordance with this Privacy Policy and is responsible for that Personal Data. We have appointed a data protection officer (DPO) who is responsible for overseeing questions in relation to this Privacy Policy. If you have any questions about this Privacy Policy, including any requests to exercise your legal rights, please contact our DPO at [email protected].

    3. Types Of Data We Collect.

    We do not collect Personal Data (besides full name and company for login purposes). We do collect anonymous data from you when you visit our site, when you send us information or communications, when you engage with us through online chat applications, when you download and use our Products, and when you register for white papers, web seminars, and other events hosted by us. "Personal Data" means data that identifies, relates to, describes, can be used to contact, or could reasonably be linked directly or indirectly to you, including, for example, identifiers such as your real name, alias, postal address, unique personal identifier, online identifier, Internet Protocol (IP) address, email address, account name, or other similar identifiers; commercial information, including records of products or services purchased, obtained, or considered, or other purchasing or consuming histories or tendencies; Internet or other electronic network activity information. "Anonymous Data" means data that is not associated with or linked to your Personal Data; Anonymous Data does not permit the identification of individual persons. We do not collect any Special Categories of Personal Data about you (this includes details about your race or ethnicity, religious or philosophical beliefs, sex life, sexual orientation, political opinions, trade union membership, information about your health and genetic and biometric data).

    3.1 Personal Data You Provide to Us.

    We collect Personal Data from you, such as your first and last name, e-mail and mailing addresses, professional title, and company name when you download and install the Products, create an account to log in to our network, engage with us through online chat applications, or sign-up for our newsletter or other marketing material (internet/electronic activity).

    3.2 Personal Data Collected Via Cookies.

    We also use Cookies (as defined below) and navigational data like Uniform Resource Locators (URL) to gather information regarding the date and time of your visit and the solutions and information for which you searched and which you viewed (Internet/electronic activity). Like most technology companies, we automatically gather this Personal Data and store it in log files each time you visit our website or access your account on our network. "Cookies" are small pieces of information that a website sends to your computer’s hard drive while you are viewing a web site. We may use both session Cookies (which expire once you close your web browser) and persistent Cookies (which stay on your computer until you delete them) to provide you with a more personal and interactive experience on our website. Persistent Cookies can be removed by following Internet browser help file directions. You may choose to refuse or disable Cookies via the settings on your browser, however, by doing so, some areas of our website may not work properly.

    3.5 Personal Data That We Collect From You About Others.

    If you decide to create an account for and invite a third party to join our network, we will collect your and the third party's names and e-mail addresses (identifiers) in order to send an e-mail and follow up with the third party. You or the third party may contact us at [email protected] to request the removal of this information from our database.

    4. Use Of Your Data.

    4.1 General Use.

    Any Data you submit to us is only used to respond to your requests or to aid us in serving you better.

    4.2 Creation of Anonymous Data.

    We may create Anonymous Data records from collected data by excluding information (such as your name and IP address) that makes the data personally identifiable to you. We use this Anonymous Data to analyze request and usage patterns so that we may enhance the content of our Products and improve site navigation, and for marketing and analytics.

    4.3 Feedback.

    If you provide feedback on any of our Products or our website, we may use such feedback for any purpose, provided we will not associate such feedback with your Personal Data. We will collect any information contained in such communication and will treat the Personal Data in such communication in accordance with this Privacy Policy.

    5. Disclosure Of Your Personal Data.

    5.1 Disclosure to Affiliates.

    We will never share your Personal Data with other companies for advertisements, ads, or affiliation. If another company acquires us or our assets, that company will possess the Personal Data collected by it and us and will assume the rights and obligations regarding your Personal Data as described in this Privacy Policy. We may also disclose your Personal Data to third parties in the event that we sell or buy any business or assets, in which case we may disclose your Personal Data to the prospective seller or buyer of such business or assets.

    5.2 Disclosure to Third Party Service Providers.

    Superstream does not and will not sell any data or metadata collected. Except as otherwise stated in this policy, we do not share the Personal Data that we collect with other entities. However, we may share your Personal Data -- including each category of Personal Data described above -- with third party service providers to: (a) provide you with the Products that we offer you through our website; (b) process payments; (c) conduct quality assurance testing; (d) facilitate creation and maintenance of accounts; (e) collect and analyze data; (f) provide technical support; or (g) provide specific business services, such as synchronization with other software applications and marketing services. These third party service providers are required by written agreement not to retain, use, or disclose your Personal Data other than to provide the services requested by us.

    5.3 Disclosure to Other Third Party Companies.

    We will not disclose your Personal Data to other Third Party Companies except as otherwise stated in this policy.

    5.4 Other Disclosures.

    Regardless of any choices you make regarding your Personal Data (as described below), we may disclose Personal Data if we believe in good faith that such disclosure is necessary to (a) comply with relevant laws or to respond to subpoenas or warrants served on us; (b) protect or defend our rights or property or the rights or property of users of the Products; or (c) protect against fraud and reduce credit risk.

    6. Your Choices Regarding Your Personal Data.

    We offer you choices regarding the collection, use, and sharing of your Personal Data. We will periodically send you free newsletters and e-mails that directly promote the use of our site or the purchase of our Products. When you receive newsletters or promotional communications from us, you may indicate a preference to stop receiving further communications from us and you will have the opportunity to "opt-out" by following the unsubscribe instructions provided in the e-mail you receive or by contacting us directly (please see contact information above). Despite your indicated e-mail preferences, we may send you notices of any updates to our Privacy Policy.

    7. Your Legal Rights regarding your Personal Data.

    Under certain circumstances, you may have rights under applicable data protection laws in relation to your Personal Data.

    Where applicable, you may have the right to:

    Request information about how we collect, process, use and share your Personal Data (commonly known as a “right to know request”).

    Request access to your Personal Data (commonly known as a “data subject access request”). This enables you to receive a copy of the Personal Data we hold about you and to check that we are lawfully processing it.

    Request correction of the Personal Data that we hold about you. This enables you to have any incomplete or inaccurate data we hold about you corrected, though we may need to verify the accuracy of the new data you provide to us.

    Request erasure of your Personal Data (commonly known as a “request to be forgotten”). This enables you to ask us to delete or remove Personal Data. You also have the right to ask us to delete or remove your Personal Data where you have successfully exercised your right to object to processing (see below), where we may have processed your information unlawfully or where we are required to erase your Personal Data to comply with local law. Note, however, that we may not always be able to comply in full with your request of erasure for specific legal reasons which will be notified to you, if applicable, at the time of your request.

    Object to processing of your Personal Data where we are relying on a legitimate interest (or those of a third party) and there is something about your particular situation which makes you want to object to processing on this ground as you feel it impacts on your fundamental rights and freedoms. You also have the right to object where we are processing your Personal Data for direct marketing purposes. In some cases, we may demonstrate that we have compelling legitimate grounds to process your information which override your rights and freedoms.

    Request restriction of processing of your Personal Data. This enables you to ask us to suspend the processing of your Personal Data in the following scenarios: (a) if you want us to establish the data’s accuracy; (b) where our use of the data is unlawful but you do not want us to erase it; (c) where you need us to hold the data even if we no longer require it as you need it to establish, exercise or defend legal claims; or (d) you have objected to our use of your data but we need to verify whether we have overriding legitimate grounds to use it.

    Request the transfer of your Personal Data to you or to a third party. We will provide to you, or a third party you have chosen, your Personal Data in a structured, commonly used, machine-readable format. Note that this right only applies to automated information which you initially provided consent for us to use or where we used the information to perform a contract with you.

    Withdraw consent at any time where we are relying on consent to process your Personal Data. However, this will not affect the lawfulness of any processing carried out before you withdraw your consent. If you withdraw your consent, we may not be able to provide certain products or services to you. We will advise you if this is the case at the time you withdraw your consent.

    If you wish to exercise any of the above rights please contact [email protected]. You will not have to pay a fee to access your Personal Data (or to exercise any of the other rights), and Superstream does not discriminate based on whether you choose to exercise your choice and rights. We will not, based on your exercise of rights, deny our Products to you, charge you different rates, provide a different level or quality of Products to you, or suggest that you may receive such different treatment. However, we may charge a reasonable fee if your request is clearly unfounded, repetitive or excessive.

    Alternatively, we may refuse to comply with your request in these circumstances. We may need to request specific information from you to help us confirm your identity and ensure your right to access your Personal Data (or to exercise any of your other rights).

    This is a security measure to ensure that Personal Data is not disclosed to any person who has no right to receive it. As part of the verification process, we match the information submitted as part of your request against information stored by Superstream. In some instances, we will require additional information in order to verify your request. If an authorized third party makes a data subject request on your behalf, we will require sufficient written proof that you have designated them as your authorized agent. We try to respond to all legitimate requests within one month. Occasionally it may take us longer than a month if your request is particularly complex or you have made a number of requests. In this case, we will notify you and keep you updated.

    You also have the right to lodge a complaint with a data protection authority if you consider that the processing of your personal information infringes applicable law. If you have any questions, concerns or complaints regarding our compliance with this notice and applicable data protection laws, we encourage you to first contact our Data Protection Officer. We will investigate and attempt to resolve complaints and disputes and will make every reasonable effort to honour your wish to exercise your rights as quickly as possible and in any event, within the timescales provided by data protection laws.

    9. Data Retention.

    We will only retain your Personal Data for as long as necessary to fulfill the purposes we collected it for, including for the purposes of satisfying any legal, accounting, or reporting requirements.

    To determine the appropriate retention period for Personal Data, we consider the amount, nature, and sensitivity of the Personal Data, the potential risk of harm from unauthorised use or disclosure of your Personal Data, the purposes for which we process your Personal Data and whether we can achieve those purposes through other means, and the applicable legal requirements.

    10. Dispute Resolution.

    If you believe that we have not adhered to this Privacy Policy, please contact us by e-mail at [email protected]. We will do our best to address your concerns. If you feel that your complaint has been addressed incompletely, we invite you to let us know for further investigation. If we are unable to reach a resolution to the dispute, we will settle the dispute exclusively under the rules of the American Arbitration Association.

    11. Changes To This Privacy Policy.

    This Privacy Policy is subject to occasional revision, and if we make any substantial changes in the way we use your Personal Data, we will notify you by sending you an e-mail to the last e-mail address you provided to us or by prominently posting notice of the changes on our website. Any material changes to this Privacy Policy will be effective upon the earlier of thirty (30) calendar days following our dispatch of an e-mail notice to you or thirty (30) calendar days following our posting of notice of the changes on our site. These changes will be effective immediately for new users of our website and Products. Please note that at all times you are responsible for updating your Personal Data to provide us with your most current e-mail address. In the event that the last e-mail address that you have provided us is not valid, or for any reason is not capable of delivering to you the notice described above, our dispatch of the e-mail containing such notice will nonetheless constitute effective notice of the changes described in the notice. In any event, changes to this Privacy Policy may affect our use of Personal Data that you provided us prior to our notification to you of the changes. If you do not wish to permit changes in our use of your Personal Data, you must notify us prior to the effective date of the changes that you wish to deactivate your account with us. Continued use of our website or Products, following notice of such changes shall indicate your acknowledgement of such changes and agreement to be bound by the terms and conditions of such changes.

    12. Transfers of Personal Data outside the EEA.

    Your personal information may be transferred and stored in countries outside the EEA that are subject to different standards of data protection. Superstream takes appropriate steps to ensure that transfers of personal information are in accordance with applicable law and carefully managed to protect your privacy rights and interests, including through the use of standard contractual clauses and our certifications to the EU-US Data Privacy Framework (“DPF”), the UK Extension to the EU-US DPF, and the Swiss-US DPF. Additionally, Superstream uses a limited number of third-party service providers to assist us in providing our services to customers. These third parties may access, process, or store Personal Data in the course of providing their services. Superstream obtains contractual commitments from them to protect your Personal Data.

    13. Accessibility.

    We are committed to making our products and services accessible to everyone. If you need help with your accessibility-related requests and other servicing needs please contact us at [email protected].