Version: 2.0.0 (Latest)

Kafka connector

Class: kafka — integrates with Apache Kafka. Sources consume topics and publish into a PADAS stream; sinks subscribe to a stream and produce records to a topic.

Create and edit under Sources and Sinks. Advanced Settings may expose more runtime tuning depending on deployment and permissions.

Source and sink behavior

Role | Behavior
Source | Kafka consumer: reads topic from bootstrap_servers, uses group_id for partition assignment and offset commits, and writes events to the connector stream.
Sink | Kafka producer: reads from the subscribed stream and publishes to topic with the producer ack/retry/batch settings from config.
Streams | Processing upstream of a sink must publish to the stream id the sink consumes; a source writes to the stream that tasks attach to (see Streams).

Required fields

Every connector row

Field | Required | Notes
name | Yes | Display name; the id is derived from it.
class | Yes | Must be kafka.
stream | Yes | Resolved stream id after Auto Create Stream or a manual pick.
type | Yes | source or sink, set by the screen used at create time.
config | Yes | Class-specific object; see below.
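To make the fields above concrete, a connector row could look like the following JSON. This is an illustrative sketch only: the exact stored schema, key casing, and values (web-logs-kafka, web_logs, the broker list) are assumptions, not the authoritative PADAS format.

```json
{
  "name": "web-logs-kafka",
  "class": "kafka",
  "stream": "web_logs",
  "type": "source",
  "config": {
    "bootstrap_servers": "broker1:9092,broker2:9092",
    "topic": "web-logs",
    "group_id": "padas-web-logs"
  }
}
```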

Class kafka — required configuration

Setting | Required | Notes
bootstrap.servers / bootstrap_servers | Yes | Broker list (the UI often shows the dotted form; the stored key is commonly snake_case).
topic | Yes | Topic to consume from (source) or produce to (sink).
group.id / group_id | Source only | Consumer group; omit for sinks.
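The difference between the two roles shows up only in group_id. As a sketch (key names follow the snake_case convention noted above; hostnames and values are placeholders), a source config carries the consumer group:

```json
{
  "bootstrap_servers": "kafka-1:9092,kafka-2:9092",
  "topic": "events-in",
  "group_id": "padas-events-source"
}
```

while a sink config omits it, since producers do not join consumer groups:

```json
{
  "bootstrap_servers": "kafka-1:9092,kafka-2:9092",
  "topic": "events-out"
}
```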

Create connector

  1. Open Sources or Sinks and click Create.
  2. Set Class to Kafka; set the name, stream behavior, and Enabled.
  3. Enter the bootstrap brokers and topic; for a source, also set the consumer group.
  4. Expand Common configuration for SASL/TLS and consumer or producer tuning.
  5. Save, then wire the stream into tasks and pipelines.

Source (UI)

[Figure: Create New Source modal with Class Kafka, showing bootstrap.servers and topic]
The Kafka source connector form.

UI field | Connector setting
bootstrap.servers (required) | bootstrap_servers
topic (required) | topic

Sink (UI)

The sink form reuses the same broker and topic fields; layout matches the source modal (titles differ).

[Figure: Kafka connector Config showing bootstrap.servers and topic]
The Kafka sink connector form (same Config fields).

Configuration

Cluster and topic

  • bootstrap_servers, topic — Broker entry points for the cluster and the topic name.

Consumer (source)

  • group_id — Consumer group and offset semantics per Core settings.

Security

  • authentication, tls — SASL, trust stores, client certificates as required by the cluster.
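The exact PADAS key layout under authentication and tls is not documented here. As a rough, hypothetical sketch of what a SASL/TLS section might carry (all key names and paths below are illustrative assumptions, not verified settings; consult your deployment's reference for the real schema):

```json
{
  "authentication": {
    "mechanism": "SCRAM-SHA-512",
    "username": "padas",
    "password": "********"
  },
  "tls": {
    "truststore": "/etc/padas/kafka-truststore.jks",
    "truststore_password": "********"
  }
}
```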

Throughput

  • buffer, workers, batch — Read batching (source) or write batching (sink).
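A hypothetical shape for the throughput settings, to show where batching tuning would live (the key names come from the bullet above; the values and defaults are assumptions):

```json
{
  "buffer": 10000,
  "workers": 2,
  "batch": 500
}
```

Larger batch values generally trade latency for throughput; keep them within the broker's message-size limits noted under Performance.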

Runtime behavior

  • Connectors start after Core deployment; Disabled connectors do not read or write Kafka.
  • Source consumers commit offsets according to Core; sink producers respect acks and retries from config.
  • Dedicated consumer groups per PADAS source avoid contending with other applications for partitions across restarts.

Performance and operational notes

  • Use a unique group_id per PADAS Kafka source where possible.
  • Align sink batch sizes with message.max.bytes and downstream consumer limits.
  • Monitor lag and rebalance events when scaling workers or changing group_id.