Class FlinkKafkaProducer<K,V>
java.lang.Object
  com.networknt.kafka.common.FlinkKafkaProducer<K,V>

All Implemented Interfaces:
  Closeable, AutoCloseable, org.apache.kafka.clients.producer.Producer<K,V>
public class FlinkKafkaProducer<K,V> extends Object implements org.apache.kafka.clients.producer.Producer<K,V>
Wrapper around KafkaProducer that allows resuming transactions in case of node failure, which makes it possible to implement the two-phase commit algorithm for exactly-once semantics. For the happy path, usage is exactly the same as with KafkaProducer. The user is expected to call:

1. initTransactions()
2. beginTransaction()
3. send(ProducerRecord)
4. flush()
5. getProducerId() and getEpoch()
6. (node failure ... restore producerId and epoch from state)
7. resumeTransaction(long, short)
8. commitTransaction()

To actually implement two-phase commit, it must be possible to always commit a transaction after pre-committing it (here, the pre-commit is just a flush()). In case of a failure between flush() and commitTransaction(), this class allows the interrupted transaction to be resumed and committed after a restart. resumeTransaction(long, short) replaces initTransactions() as the way to obtain the producerId and epoch counters; this is necessary because initTransactions() would otherwise automatically abort all ongoing transactions.

The second way this implementation differs from the reference KafkaProducer is that it flushes new partitions on flush() instead of on commitTransaction().

The last, minor difference is that it exposes the producerId and epoch counters via the getProducerId() and getEpoch() methods (these are, unfortunately, private fields in KafkaProducer). Those changes are compatible with Kafka's 0.11.0 REST API, although it clearly was not the intention of Kafka's API authors to make them possible.

Internally this implementation uses KafkaProducer and applies the required changes via the Java Reflection API. It might not be the prettiest solution; an alternative would be to re-implement the whole Kafka 0.11 REST API client on our own.
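The numbered flow above can be sketched as follows. This is a minimal illustration, not a definitive implementation: the broker address, topic name, transactional.id, serializer settings, and the saveToState checkpoint helper are all assumptions for the example.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerRecord;

import com.networknt.kafka.common.FlinkKafkaProducer;

public class TwoPhaseCommitSketch {

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("transactional.id", "example-txn-id");    // required for transactions
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        FlinkKafkaProducer<String, String> producer = new FlinkKafkaProducer<>(props);

        producer.initTransactions();
        producer.beginTransaction();
        producer.send(new ProducerRecord<>("example-topic", "key", "value"));

        // Pre-commit: flush() pushes all pending records to the brokers.
        producer.flush();

        // Persist the counters so the transaction can be resumed after a crash.
        long producerId = producer.getProducerId();
        short epoch = producer.getEpoch();
        saveToState(producerId, epoch);   // hypothetical checkpoint store

        // Commit: if the process dies between flush() and this call, a new
        // instance can call resumeTransaction(producerId, epoch) followed by
        // commitTransaction() to finish the interrupted transaction.
        producer.commitTransaction();
        producer.close();
    }

    private static void saveToState(long producerId, short epoch) {
        // Placeholder for durable state, e.g. a Flink checkpoint.
    }
}
```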
Constructor Summary

FlinkKafkaProducer(Map<String,Object> properties)
Method Summary
void    abortTransaction()
void    beginTransaction()
void    close()
void    close(Duration timeout)
void    commitTransaction()
void    flush()
short   getEpoch()
long    getProducerId()
String  getTransactionalId()
int     getTransactionCoordinatorId()
void    initTransactions()
Map<org.apache.kafka.common.MetricName,? extends org.apache.kafka.common.Metric>  metrics()
List<org.apache.kafka.common.PartitionInfo>  partitionsFor(String topic)
void    resumeTransaction(long producerId, short epoch)
        Instead of obtaining producerId and epoch from the transaction coordinator, re-use previously obtained ones, so that the transaction can be resumed after a restart.
Future<org.apache.kafka.clients.producer.RecordMetadata>  send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record)
Future<org.apache.kafka.clients.producer.RecordMetadata>  send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record, org.apache.kafka.clients.producer.Callback callback)
void    sendOffsetsToTransaction(Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> offsets, String consumerGroupId)
void    sendOffsetsToTransaction(Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> offsets, org.apache.kafka.clients.consumer.ConsumerGroupMetadata groupMetadata)
Method Detail
-
initTransactions
public void initTransactions()
-
beginTransaction
public void beginTransaction() throws org.apache.kafka.common.errors.ProducerFencedException
-
commitTransaction
public void commitTransaction() throws org.apache.kafka.common.errors.ProducerFencedException
-
abortTransaction
public void abortTransaction() throws org.apache.kafka.common.errors.ProducerFencedException
-
sendOffsetsToTransaction
public void sendOffsetsToTransaction(Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> offsets, String consumerGroupId) throws org.apache.kafka.common.errors.ProducerFencedException
-
sendOffsetsToTransaction
public void sendOffsetsToTransaction(Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> offsets, org.apache.kafka.clients.consumer.ConsumerGroupMetadata groupMetadata) throws org.apache.kafka.common.errors.ProducerFencedException
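These overloads bind consumer offsets to the producer's transaction, so the offsets are committed atomically with the produced records (the consume-transform-produce pattern). A sketch under stated assumptions: the output topic name is illustrative, and the producer is assumed to already have an open transaction.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

import com.networknt.kafka.common.FlinkKafkaProducer;

public class OffsetsInTransactionSketch {

    // Forward consumed records and commit the consumer's offsets in the same
    // transaction, so both become visible atomically.
    static void forwardInTransaction(FlinkKafkaProducer<String, String> producer,
                                     KafkaConsumer<String, String> consumer,
                                     ConsumerRecords<String, String> records) {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (ConsumerRecord<String, String> rec : records) {
            producer.send(new ProducerRecord<>("output-topic", rec.key(), rec.value()));
            // The offset to commit is the position of the *next* record to read.
            offsets.put(new TopicPartition(rec.topic(), rec.partition()),
                        new OffsetAndMetadata(rec.offset() + 1));
        }
        // The ConsumerGroupMetadata overload carries full group metadata,
        // letting the broker fence zombie producer instances.
        producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
        producer.commitTransaction();
    }
}
```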
-
send
public Future<org.apache.kafka.clients.producer.RecordMetadata> send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record)
-
send
public Future<org.apache.kafka.clients.producer.RecordMetadata> send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record, org.apache.kafka.clients.producer.Callback callback)
-
metrics
public Map<org.apache.kafka.common.MetricName,? extends org.apache.kafka.common.Metric> metrics()
-
close
public void close()
-
close
public void close(Duration timeout)
-
flush
public void flush()
-
resumeTransaction
public void resumeTransaction(long producerId, short epoch)

Instead of obtaining the producerId and epoch from the transaction coordinator, re-use previously obtained ones so that the transaction can be resumed after a restart. The implementation of this method is based on KafkaProducer.initTransactions().

Parameters:
  producerId - producer id
  epoch - epoch
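A sketch of the recovery path after a restart. The restored counter values are assumed to come from durable state written before the crash (e.g. a checkpoint); the placeholder literals and properties map stand in for that state.

```java
import java.util.HashMap;
import java.util.Map;

import com.networknt.kafka.common.FlinkKafkaProducer;

public class ResumeTransactionSketch {

    public static void main(String[] args) {
        // After a restart: create a new producer with the SAME transactional.id
        // as before the crash, then resume the interrupted transaction instead
        // of calling initTransactions() (which would abort it).
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("transactional.id", "example-txn-id");    // must match the pre-crash id
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        FlinkKafkaProducer<String, String> producer = new FlinkKafkaProducer<>(props);

        long restoredProducerId = 0L;          // placeholder: read from durable state
        short restoredEpoch = (short) 0;       // placeholder: read from durable state

        producer.resumeTransaction(restoredProducerId, restoredEpoch);
        // Records flushed before the crash are part of this transaction again.
        producer.commitTransaction();
        producer.close();
    }
}
```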
-
getTransactionalId
public String getTransactionalId()
-
getProducerId
public long getProducerId()
-
getEpoch
public short getEpoch()
-
getTransactionCoordinatorId
public int getTransactionCoordinatorId()