Package io.confluent.parallelconsumer
Class WorkManager<K,V>
java.lang.Object
io.confluent.parallelconsumer.WorkManager<K,V>
- Type Parameters:
K -
V -
- All Implemented Interfaces:
org.apache.kafka.clients.consumer.ConsumerRebalanceListener
public class WorkManager<K,V>
extends java.lang.Object
implements org.apache.kafka.clients.consumer.ConsumerRebalanceListener
Sharded, prioritised, offset managed, order controlled, delayed work queue.
-
Field Summary
Fields:
- static double USED_PAYLOAD_THRESHOLD_MULTIPLIER
  Best-efforts attempt to prevent usage of the offset payload beyond X%. As the encoding size test is currently only done per batch, we need to leave some buffer for the required space to overrun before hitting the hard limit, where we have to drop the offset payload entirely. -
Constructor Summary
Constructors:
- WorkManager(ParallelConsumerOptions options, org.apache.kafka.clients.consumer.Consumer<K,V> consumer)
  Uses a private DynamicLoadFactor; useful for testing.
- WorkManager(ParallelConsumerOptions newOptions, org.apache.kafka.clients.consumer.Consumer<K,V> consumer, DynamicLoadFactor dynamicExtraLoadFactor) -
Method Summary
Methods:
- int getNumberOfEntriesInPartitionQueues()
- int getNumberRecordsOutForProcessing()
- ParallelConsumerOptions getOptions()
- int getTotalWorkWaitingProcessing()
- WorkContainer<K,V> getWorkContainerForRecord(org.apache.kafka.clients.consumer.ConsumerRecord<K,V> rec)
- int getWorkQueuedInShardsCount()
- protected void handleFutureResult(WorkContainer<K,V> wc)
- boolean hasWorkInCommitQueues()
- boolean hasWorkInFlight()
- boolean hasWorkInMailboxes()
- boolean isClean()
- <R> java.util.List<WorkContainer<K,V>> maybeGetWork()
  Get work with no limit on quantity; useful for testing.
- java.util.List<WorkContainer<K,V>> maybeGetWork(int requestedMaxWorkToRetrieve)
  Depth-first work retrieval.
- void onFailure(WorkContainer<K,V> wc)
- void onOffsetCommitSuccess(java.util.Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> offsetsToSend)
  Truncate our tracked offsets as a commit was successful, so the low water mark rises and we don't need to track as much anymore.
- void onPartitionsAssigned(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
  Load offset map for assigned partitions.
- void onPartitionsLost(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
  Clear offset map for lost partitions.
- void onPartitionsRevoked(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
  Clear offset map for revoked partitions.
- void onSuccess(WorkContainer<K,V> wc)
- void registerWork(java.util.List<org.apache.kafka.clients.consumer.ConsumerRecords<K,V>> records)
- void registerWork(org.apache.kafka.clients.consumer.ConsumerRecords<K,V> records)
- static void setUSED_PAYLOAD_THRESHOLD_MULTIPLIER(double USED_PAYLOAD_THRESHOLD_MULTIPLIER)
  Best-efforts attempt to prevent usage of the offset payload beyond X%. As the encoding size test is currently only done per batch, we need to leave some buffer for the required space to overrun before hitting the hard limit, where we have to drop the offset payload entirely.
- boolean shouldThrottle()
- boolean workIsWaitingToBeProcessed()
-
Field Details
-
USED_PAYLOAD_THRESHOLD_MULTIPLIER
public static double USED_PAYLOAD_THRESHOLD_MULTIPLIER
Best-efforts attempt to prevent usage of the offset payload beyond X%. As the encoding size test is currently only done per batch, we need to leave some buffer for the required space to overrun before hitting the hard limit, where we have to drop the offset payload entirely.
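The role of the multiplier can be illustrated with a small self-contained sketch. The names, the 0.75 value, and the 4096-byte limit below are illustrative assumptions for the sketch, not values read from the library (4096 matches Kafka's default offset.metadata.max.bytes):

```java
// Illustrative sketch of a soft payload threshold: a multiplier below 1.0
// leaves headroom between the warning point and the hard limit at which
// the offset payload would have to be dropped entirely.
public class PayloadThresholdSketch {
    static double usedPayloadThresholdMultiplier = 0.75; // assumed value for illustration

    // Returns true when the encoded payload has crossed the soft threshold
    // but not yet the hard limit.
    static boolean inDangerZone(int encodedPayloadSize, int maxMetadataSize) {
        double softLimit = maxMetadataSize * usedPayloadThresholdMultiplier;
        return encodedPayloadSize > softLimit && encodedPayloadSize <= maxMetadataSize;
    }

    public static void main(String[] args) {
        int max = 4096; // Kafka's default offset metadata limit is 4 KiB
        System.out.println(inDangerZone(1000, max)); // well under the soft limit -> false
        System.out.println(inDangerZone(3500, max)); // over 75% but under the hard limit -> true
        System.out.println(inDangerZone(5000, max)); // beyond the hard limit -> false
    }
}
```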
-
-
Constructor Details
-
WorkManager
public WorkManager(ParallelConsumerOptions options, org.apache.kafka.clients.consumer.Consumer<K,V> consumer)
Uses a private DynamicLoadFactor; useful for testing. -
WorkManager
public WorkManager(ParallelConsumerOptions newOptions, org.apache.kafka.clients.consumer.Consumer<K,V> consumer, DynamicLoadFactor dynamicExtraLoadFactor)
-
-
Method Details
-
onPartitionsAssigned
public void onPartitionsAssigned(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
Load offset map for assigned partitions.
- Specified by:
onPartitionsAssigned in interface org.apache.kafka.clients.consumer.ConsumerRebalanceListener
-
onPartitionsRevoked
public void onPartitionsRevoked(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
Clear offset map for revoked partitions. ParallelEoSStreamProcessor.onPartitionsRevoked(java.util.Collection<org.apache.kafka.common.TopicPartition>) handles committing of offsets upon revoke.
- Specified by:
onPartitionsRevoked in interface org.apache.kafka.clients.consumer.ConsumerRebalanceListener
- See Also:
ParallelEoSStreamProcessor.onPartitionsRevoked(java.util.Collection<org.apache.kafka.common.TopicPartition>)
-
onPartitionsLost
public void onPartitionsLost(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
Clear offset map for lost partitions.
- Specified by:
onPartitionsLost in interface org.apache.kafka.clients.consumer.ConsumerRebalanceListener
-
registerWork
public void registerWork(java.util.List<org.apache.kafka.clients.consumer.ConsumerRecords<K,V>> records)
-
registerWork
public void registerWork(org.apache.kafka.clients.consumer.ConsumerRecords<K,V> records)
-
maybeGetWork
public <R> java.util.List<WorkContainer<K,V>> maybeGetWork()
Get work with no limit on quantity; useful for testing. -
maybeGetWork
public java.util.List<WorkContainer<K,V>> maybeGetWork(int requestedMaxWorkToRetrieve)
Depth-first work retrieval. -
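The idea of depth-first retrieval can be sketched with a self-contained example: drain as much as permitted from one shard before moving on to the next, up to the requested maximum. The shard map and String work items below are stand-ins, not the library's own types:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Illustrative sketch of depth-first work retrieval across keyed shards.
public class DepthFirstRetrievalSketch {
    static List<String> maybeGetWork(Map<String, Queue<String>> shards, int requestedMax) {
        List<String> taken = new ArrayList<>();
        for (Queue<String> shard : shards.values()) {
            // Depth first: exhaust this shard before touching the next one.
            while (taken.size() < requestedMax && !shard.isEmpty()) {
                taken.add(shard.poll());
            }
            if (taken.size() >= requestedMax) break;
        }
        return taken;
    }

    public static void main(String[] args) {
        Map<String, Queue<String>> shards = new LinkedHashMap<>();
        shards.put("key-a", new ArrayDeque<>(List.of("a1", "a2", "a3")));
        shards.put("key-b", new ArrayDeque<>(List.of("b1", "b2")));
        System.out.println(maybeGetWork(shards, 4)); // [a1, a2, a3, b1]
    }
}
```

Note how shard "key-a" is fully drained before any work is taken from "key-b", which is what distinguishes depth-first from a round-robin (breadth-first) strategy.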
onSuccess
public void onSuccess(WorkContainer<K,V> wc)
-
onFailure
public void onFailure(WorkContainer<K,V> wc)
-
getNumberOfEntriesInPartitionQueues
public int getNumberOfEntriesInPartitionQueues() -
getTotalWorkWaitingProcessing
public int getTotalWorkWaitingProcessing()- Returns:
- Work count in mailbox plus work added to the processing shards
-
getWorkQueuedInShardsCount
public int getWorkQueuedInShardsCount()- Returns:
- Work ready in the processing shards, awaiting selection as work to do
-
getWorkContainerForRecord
public WorkContainer<K,V> getWorkContainerForRecord(org.apache.kafka.clients.consumer.ConsumerRecord<K,V> rec) -
onOffsetCommitSuccess
public void onOffsetCommitSuccess(java.util.Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> offsetsToSend)
Truncate our tracked offsets as a commit was successful, so the low water mark rises and we don't need to track as much anymore. When commits are made to the broker, we can throw away all the individually tracked offsets before the committed offset.
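The truncation described above can be sketched with plain collections; the TreeSet of completed offsets here is an illustrative stand-in for the library's internal tracking, not its actual data structure:

```java
import java.util.List;
import java.util.TreeSet;

// Illustrative sketch: once an offset is committed to the broker, every
// individually tracked offset below it is covered by the committed offset
// (the low water mark) and can be discarded.
public class OffsetTruncationSketch {
    static TreeSet<Long> truncate(TreeSet<Long> trackedCompletedOffsets, long committedOffset) {
        // headSet is a live view, so clearing it removes those entries
        // from the backing set.
        trackedCompletedOffsets.headSet(committedOffset).clear();
        return trackedCompletedOffsets;
    }

    public static void main(String[] args) {
        TreeSet<Long> tracked = new TreeSet<>(List.of(3L, 4L, 6L, 9L));
        // A successful commit at offset 7 means 3, 4 and 6 need no further tracking.
        System.out.println(truncate(tracked, 7L)); // prints [9]
    }
}
```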
-
shouldThrottle
public boolean shouldThrottle() -
workIsWaitingToBeProcessed
public boolean workIsWaitingToBeProcessed() -
hasWorkInFlight
public boolean hasWorkInFlight() -
isClean
public boolean isClean() -
hasWorkInMailboxes
public boolean hasWorkInMailboxes() -
hasWorkInCommitQueues
public boolean hasWorkInCommitQueues() -
handleFutureResult
protected void handleFutureResult(WorkContainer<K,V> wc)
-
setUSED_PAYLOAD_THRESHOLD_MULTIPLIER
public static void setUSED_PAYLOAD_THRESHOLD_MULTIPLIER(double USED_PAYLOAD_THRESHOLD_MULTIPLIER)
Best-efforts attempt to prevent usage of the offset payload beyond X%. As the encoding size test is currently only done per batch, we need to leave some buffer for the required space to overrun before hitting the hard limit, where we have to drop the offset payload entirely. -
getOptions
public ParallelConsumerOptions getOptions()
-
getNumberRecordsOutForProcessing
public int getNumberRecordsOutForProcessing()
-