bootstrap.servers: the list of servers to connect to.
acks: the number of acknowledgments the producer requires the leader to
have received before considering a request complete. See Acks.
buffer.memory: the total bytes of memory the producer can use to buffer
records waiting to be sent to the server.
compression.type: the compression algorithm to apply to all data generated
by the producer. The default is none (no compression applied).
retries: a value greater than zero causes the client to resend any record
whose send fails with a potentially transient error.
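The core settings above map one-to-one onto string keys in the producer's configuration. A minimal sketch of wiring them up with java.util.Properties; the broker address and all values here are illustrative placeholders, not defaults:

```java
import java.util.Properties;

public class CoreProducerProps {
    public static void main(String[] args) {
        Properties props = new Properties();
        // "localhost:9092" is a placeholder broker address, not a real cluster.
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("acks", "all");               // wait for the leader's full acknowledgment
        props.setProperty("buffer.memory", "33554432"); // 32 MiB of buffered records
        props.setProperty("compression.type", "gzip");  // default is "none"
        props.setProperty("retries", "3");              // resend on potentially transient errors
        System.out.println(props.getProperty("acks")); // prints "all"
    }
}
```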
ssl.key.password: the password of the private key in the key store file.
Optional for clients.
ssl.keystore.password: the store password for the key store file.
Optional for clients.
ssl.keystore.location: the location of the key store file. Optional for
clients; can be used for two-way client authentication.
ssl.truststore.location: the location of the trust store file.
ssl.truststore.password: the password for the trust store file.
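The SSL settings travel the same way as any other key. A sketch, where every path and password below is a hypothetical placeholder:

```java
import java.util.Properties;

public class SslProducerProps {
    public static void main(String[] args) {
        Properties props = new Properties();
        // All locations and passwords below are placeholders, not real values.
        props.setProperty("security.protocol", "SSL");
        props.setProperty("ssl.keystore.location", "/etc/kafka/client.keystore.jks");
        props.setProperty("ssl.keystore.password", "changeit"); // store password
        props.setProperty("ssl.key.password", "changeit");      // private-key password
        props.setProperty("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.setProperty("ssl.truststore.password", "changeit");
        System.out.println(props.getProperty("security.protocol")); // prints "SSL"
    }
}
```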
batch.size: the producer will attempt to batch records together into fewer
requests whenever multiple records are being sent to the same partition.
This setting specifies the maximum size of a batch, in bytes.
client.id: an id string to pass to the server when making requests. Its
purpose is to track the source of requests beyond just ip/port by allowing
a logical application name to be included in server-side request logging.
connections.max.idle.ms: how long to wait before closing idle connections.
linger.ms: how long to buffer records for more efficient batching, up to
the maximum batch size or the maximum linger time. If zero, no buffering
happens; if greater than zero, records are delayed even in the absence of
load so that more of them can be batched together.
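The interaction between batch.size and linger.ms can be reduced to a simple rule: a batch is flushed when it reaches the size limit or when the linger time elapses, whichever comes first. A sketch of that rule as a standalone model (not the client's actual code; the constants stand in for the two settings):

```java
public class BatchingSketch {
    static final int BATCH_SIZE_BYTES = 64; // stands in for batch.size
    static final long LINGER_MS = 5;        // stands in for linger.ms

    // Decide whether the pending batch should be flushed now.
    static boolean shouldFlush(int pendingBytes, long msSinceFirstRecord) {
        return pendingBytes >= BATCH_SIZE_BYTES || msSinceFirstRecord >= LINGER_MS;
    }

    public static void main(String[] args) {
        // A full batch flushes immediately, regardless of linger time.
        System.out.println(shouldFlush(64, 0)); // true
        // A small batch waits until the linger time expires.
        System.out.println(shouldFlush(10, 1)); // false
        System.out.println(shouldFlush(10, 5)); // true
    }
}
```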
max.block.ms: controls how long KafkaProducer.send() and
KafkaProducer.partitionsFor() will block. These methods can block either
because the buffer is full or because metadata is unavailable.
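The buffer-full case behaves like a bounded queue with a timed offer: the caller waits up to the configured limit and then gives up. A sketch using java.util.concurrent, where the 100 ms limit stands in for max.block.ms:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class MaxBlockSketch {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> buffer = new ArrayBlockingQueue<>(1);
        buffer.offer("record-1"); // buffer is now full

        // With the buffer full, a timed offer waits up to the limit and then
        // fails, mirroring send() giving up once max.block.ms is exceeded.
        boolean accepted = buffer.offer("record-2", 100, TimeUnit.MILLISECONDS);
        System.out.println(accepted); // false: the buffer never drained
    }
}
```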
max.request.size: the maximum size of a request in bytes. This is also
effectively a cap on the maximum record size.
max.in.flight.requests.per.connection: the maximum number of unacknowledged
requests the client will send on a single connection before blocking. If
this is set greater than 1 and there are failed sends, there is a risk of
message re-ordering due to retries (if enabled).
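The re-ordering risk can be seen with a small simulation; this is a model of the failure scenario, not the client's actual retry logic. Two batches are in flight, the first fails transiently and its retry goes to the back of the line, so it commits after the second:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class ReorderingSketch {
    // Deliver batches in order; the attempt with index failingAttempt fails
    // transiently and the batch is re-queued behind everything in flight.
    static List<String> deliver(List<String> batches, int failingAttempt) {
        Queue<String> inFlight = new ArrayDeque<>(batches);
        List<String> committed = new ArrayList<>();
        int attempt = 0;
        while (!inFlight.isEmpty()) {
            String batch = inFlight.poll();
            if (attempt == failingAttempt) {
                inFlight.add(batch); // transient failure: retry goes to the back
            } else {
                committed.add(batch);
            }
            attempt++;
        }
        return committed;
    }

    public static void main(String[] args) {
        // Batch "A" fails on its first attempt and is retried after "B".
        System.out.println(deliver(List.of("A", "B"), 0)); // [B, A]
    }
}
```

With max.in.flight.requests.per.connection set to 1, a batch is never retried behind a later batch, which removes this re-ordering at the cost of throughput.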
partitioner.class: a class that implements the
org.apache.kafka.clients.producer.Partitioner interface.
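At its core, a partitioner is a deterministic key-to-partition function. The sketch below shows only that hashing idea, not the real interface above; Kafka's default partitioner uses murmur2 rather than String.hashCode, so this is purely illustrative:

```java
public class PartitionSketch {
    // Map a record key onto one of numPartitions partitions.
    static int partitionFor(String key, int numPartitions) {
        // floorMod keeps the result non-negative even for negative hash codes.
        return Math.floorMod(key.hashCode(), numPartitions);
    }

    public static void main(String[] args) {
        // The same key always lands on the same partition.
        System.out.println(partitionFor("user-42", 6) == partitionFor("user-42", 6)); // true
        System.out.println(partitionFor("user-42", 6)); // some value in [0, 6)
    }
}
```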
receive.buffer.bytes: the size of the TCP receive buffer (SO_RCVBUF) to use
when reading data.
request.timeout.ms: controls the maximum amount of time the client will
wait for the response to a request.
sasl.kerberos.service.name: the Kerberos principal name that Kafka runs as.
security.protocol: the protocol used to communicate with brokers.
send.buffer.bytes: the size of the TCP send buffer (SO_SNDBUF) to use when
sending data.
ssl.enabled.protocols: the list of protocols enabled for SSL connections.
ssl.keystore.type: the file format of the key store file.
ssl.protocol: the SSL protocol used to generate the SSLContext. The default
is TLS, which is fine for most cases. Allowed values in recent JVMs are
TLS, TLSv1.1, and TLSv1.2. SSL, SSLv2, and SSLv3 may be supported in older
JVMs, but their use is discouraged due to known security vulnerabilities.
ssl.provider: the name of the security provider used for SSL connections.
The default is the JVM's default security provider.
ssl.truststore.type: the file format of the trust store file.
reconnect.backoff.ms: the amount of time to wait before attempting to
reconnect to a given host. This avoids repeatedly connecting to a host in a
tight loop. The backoff applies to all requests sent by the client to the
broker.
retry.backoff.ms: the amount of time to wait before attempting to retry a
failed request to a given topic partition. This avoids repeatedly sending
requests in a tight loop under some failure scenarios.
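Both backoffs express the same idea: a fixed pause between attempts so that failures don't become a tight loop. A sketch of a fixed-backoff retry loop; the operation and the attempt limit are hypothetical, not part of the client:

```java
import java.util.function.Supplier;

public class BackoffSketch {
    // Retry op up to maxAttempts times, sleeping backoffMs between attempts,
    // in the spirit of retry.backoff.ms / reconnect.backoff.ms.
    static boolean retryWithBackoff(Supplier<Boolean> op, int maxAttempts, long backoffMs)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (op.get()) return true;
            if (attempt < maxAttempts) Thread.sleep(backoffMs);
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // A hypothetical operation that succeeds on its third attempt.
        Supplier<Boolean> flaky = () -> ++calls[0] >= 3;
        System.out.println(retryWithBackoff(flaky, 5, 1)); // true
        System.out.println(calls[0]); // 3 attempts were made
    }
}
```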
metadata.max.age.ms: the period of time in milliseconds after which a
metadata refresh is forced, even if no partition leadership changes have
been seen, to proactively discover any new brokers or partitions.
metric.reporters: a list of classes to use as metrics reporters.
Implementing the MetricsReporter interface allows plugging in classes that
will be notified of new metric creation. The JmxReporter is always included
to register JMX statistics.
metrics.num.samples: the number of samples maintained to compute metrics.
metrics.sample.window.ms: the metrics system maintains a configurable
number of samples over a fixed window size, and this setting controls the
size of the window. For example, we might maintain two samples, each
measured over a 30-second period. When a window expires, the oldest window
is erased and overwritten.
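The sampling scheme described above is essentially a small ring of time windows: metrics.num.samples slots, each covering metrics.sample.window.ms, with the oldest slot reused when its window expires. A sketch of that slot arithmetic (an illustrative model, not the client's metrics code):

```java
public class SampleWindowSketch {
    static final int NUM_SAMPLES = 2;     // stands in for metrics.num.samples
    static final long WINDOW_MS = 30_000; // stands in for metrics.sample.window.ms

    // Each timestamp maps to one of NUM_SAMPLES slots; advancing a full
    // window moves to the next slot, and the oldest slot is overwritten.
    static int slotFor(long nowMs) {
        return (int) ((nowMs / WINDOW_MS) % NUM_SAMPLES);
    }

    public static void main(String[] args) {
        System.out.println(slotFor(0));      // 0: first 30 s window
        System.out.println(slotFor(30_000)); // 1: second window
        System.out.println(slotFor(60_000)); // 0: oldest window is reused
    }
}
```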
monix.producer.sink.parallelism: how many requests the KafkaProducerSink
can execute in parallel.
properties: a map of other properties that will be passed to the underlying
Kafka client. Any properties not explicitly handled by this object can be
set via this map, but in case of a duplicate, a value set on the case class
overwrites the value set via the map.
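The precedence rule (explicitly configured settings win over the free-form map) can be sketched as a two-step merge: apply the map first, then overwrite with the explicit values. The key names here are real Kafka keys; the values are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class ConfigMergeSketch {
    // Merge free-form properties with explicit settings; explicit values win.
    static Map<String, String> merge(Map<String, String> properties,
                                     Map<String, String> explicit) {
        Map<String, String> merged = new HashMap<>(properties);
        merged.putAll(explicit); // later puts overwrite duplicate keys
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> properties = Map.of(
            "acks", "0",                 // duplicate key: will be overwritten
            "client.id", "my-producer"); // only set via the map: kept
        Map<String, String> explicit = Map.of("acks", "all");

        Map<String, String> merged = merge(properties, explicit);
        System.out.println(merged.get("acks"));      // all
        System.out.println(merged.get("client.id")); // my-producer
    }
}
```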
The Kafka Producer config.
For the official documentation on the available configuration options, see
Producer Configs on kafka.apache.org.