Class PartitionStateManager<K,V>
- All Implemented Interfaces:
org.apache.kafka.clients.consumer.ConsumerRebalanceListener
public class PartitionStateManager<K,V> extends Object implements org.apache.kafka.clients.consumer.ConsumerRebalanceListener
Manages PartitionStates.
This state is shared between the BrokerPollSystem thread and the AbstractParallelEoSStreamProcessor.
- See Also:
PartitionState
-
Field Summary
Fields
- static double USED_PAYLOAD_THRESHOLD_MULTIPLIER_DEFAULT
Constructor Summary
Constructors
- PartitionStateManager(org.apache.kafka.clients.consumer.Consumer<K,V> consumer, ShardManager<K,V> sm, ParallelConsumerOptions<K,V> options, Clock clock)
Method Summary
- void addWorkContainer(WorkContainer<K,V> wc)
- Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> collectDirtyCommitData()
- boolean couldBeTakenAsWork(WorkContainer<?,?> workContainer)
- Long getEpochOfPartition(org.apache.kafka.common.TopicPartition partition)
- Long getEpochOfPartitionForRecord(org.apache.kafka.clients.consumer.ConsumerRecord<K,V> rec)
- long getHighestSeenOffset(org.apache.kafka.common.TopicPartition tp)
- long getNumberOfEntriesInPartitionQueues()
- PartitionState<K,V> getPartitionState(org.apache.kafka.common.TopicPartition tp)
- static double getUSED_PAYLOAD_THRESHOLD_MULTIPLIER() - Best-efforts attempt to prevent usage of the offset payload beyond X%. As the encoding-size test is currently only done per batch, we need to leave some buffer for the required space to overrun before hitting the hard limit, where we have to drop the offset payload entirely.
- boolean hasWorkInCommitQueues()
- boolean isAllowedMoreRecords(WorkContainer<?,?> wc)
- boolean isAllowedMoreRecords(org.apache.kafka.common.TopicPartition tp) - Check we have capacity in offset storage to process more messages.
- boolean isBlocked(org.apache.kafka.common.TopicPartition topicPartition) - Checks if the partition is blocked with back pressure.
- boolean isPartitionRemovedOrNeverAssigned(org.apache.kafka.clients.consumer.ConsumerRecord<?,?> rec)
- boolean isRecordPreviouslyCompleted(org.apache.kafka.clients.consumer.ConsumerRecord<K,V> rec)
- void onFailure(WorkContainer<K,V> wc)
- void onOffsetCommitSuccess(Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> committed) - Truncate our tracked offsets as a commit was successful, so the low water mark rises and we don't need to track as much anymore.
- void onPartitionsAssigned(Collection<org.apache.kafka.common.TopicPartition> assignedPartitions) - Load the offset map for the assigned partitions.
- void onPartitionsLost(Collection<org.apache.kafka.common.TopicPartition> partitions) - Clear the offset map for lost partitions.
- void onPartitionsRevoked(Collection<org.apache.kafka.common.TopicPartition> partitions) - Clear the offset map for revoked partitions.
- void onSuccess(WorkContainer<K,V> wc)
- static void setUSED_PAYLOAD_THRESHOLD_MULTIPLIER(double USED_PAYLOAD_THRESHOLD_MULTIPLIER) - Best-efforts attempt to prevent usage of the offset payload beyond X%. As the encoding-size test is currently only done per batch, we need to leave some buffer for the required space to overrun before hitting the hard limit, where we have to drop the offset payload entirely.
-
Field Details
-
USED_PAYLOAD_THRESHOLD_MULTIPLIER_DEFAULT
public static final double USED_PAYLOAD_THRESHOLD_MULTIPLIER_DEFAULT
- See Also:
- Constant Field Values
-
-
Constructor Details
-
PartitionStateManager
public PartitionStateManager(org.apache.kafka.clients.consumer.Consumer<K,V> consumer, ShardManager<K,V> sm, ParallelConsumerOptions<K,V> options, Clock clock)
-
-
Method Details
-
getPartitionState
public PartitionState<K,V> getPartitionState(org.apache.kafka.common.TopicPartition tp)
-
onPartitionsAssigned
public void onPartitionsAssigned(Collection<org.apache.kafka.common.TopicPartition> assignedPartitions)
Load the offset map for the assigned partitions.
- Specified by:
onPartitionsAssigned in interface org.apache.kafka.clients.consumer.ConsumerRebalanceListener
-
onPartitionsRevoked
public void onPartitionsRevoked(Collection<org.apache.kafka.common.TopicPartition> partitions)
Clear the offset map for revoked partitions. AbstractParallelEoSStreamProcessor.onPartitionsRevoked(java.util.Collection<org.apache.kafka.common.TopicPartition>) handles committing offsets upon revoke.
- Specified by:
onPartitionsRevoked in interface org.apache.kafka.clients.consumer.ConsumerRebalanceListener
- See Also:
AbstractParallelEoSStreamProcessor.onPartitionsRevoked(java.util.Collection<org.apache.kafka.common.TopicPartition>)
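The rebalance callbacks above follow a common pattern: drop per-partition state when a partition leaves, (re)load it when one arrives. The sketch below is a hypothetical, simplified illustration of that pattern, not the library's implementation; it uses plain Strings in place of TopicPartition and an in-memory map in place of PartitionState.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of rebalance-driven state management.
// Partition names are plain Strings here instead of TopicPartition.
class PartitionStateSketch {
    private final Map<String, Long> offsetState = new HashMap<>();

    void onPartitionsAssigned(Collection<String> assigned) {
        // The real manager would load the persisted offset map per partition.
        for (String p : assigned) {
            offsetState.putIfAbsent(p, 0L); // assumed starting offset
        }
    }

    void onPartitionsRevoked(Collection<String> revoked) {
        // Offsets are committed elsewhere (by the processor) before state is dropped.
        offsetState.keySet().removeAll(revoked);
    }

    void onPartitionsLost(Collection<String> lost) {
        // Lost partitions cannot be committed - just drop the state.
        offsetState.keySet().removeAll(lost);
    }

    boolean isTracking(String partition) {
        return offsetState.containsKey(partition);
    }
}
```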
-
onPartitionsLost
public void onPartitionsLost(Collection<org.apache.kafka.common.TopicPartition> partitions)
Clear the offset map for lost partitions.
- Specified by:
onPartitionsLost in interface org.apache.kafka.clients.consumer.ConsumerRebalanceListener
-
onOffsetCommitSuccess
public void onOffsetCommitSuccess(Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> committed)
Truncate our tracked offsets as a commit was successful, so the low water mark rises and we don't need to track as much anymore. When commits are made to the broker, we can throw away all the individually tracked offsets before the committed offset.
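The truncation described above can be sketched as follows. This is a minimal, assumed model rather than the library's code: completed offsets are tracked individually, and a successful commit discards everything below the new low water mark.

```java
import java.util.TreeSet;

// Assumed model: individually tracked completed offsets per partition, truncated
// once the broker's committed offset (the low water mark) rises past them.
class TrackedOffsets {
    private final TreeSet<Long> completedOffsets = new TreeSet<>();

    void markCompleted(long offset) {
        completedOffsets.add(offset);
    }

    void onOffsetCommitSuccess(long committedOffset) {
        // headSet(x, false) is the view of offsets strictly below the commit;
        // clearing it throws away everything the broker now remembers for us.
        completedOffsets.headSet(committedOffset, false).clear();
    }

    int trackedCount() {
        return completedOffsets.size();
    }
}
```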
-
getEpochOfPartitionForRecord
public Long getEpochOfPartitionForRecord(org.apache.kafka.clients.consumer.ConsumerRecord<K,V> rec)
- Returns:
- the current epoch of the partition this record belongs to
-
getEpochOfPartition
public Long getEpochOfPartition(org.apache.kafka.common.TopicPartition partition)
- Returns:
- the current epoch of the partition
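The epoch methods suggest a fencing mechanism: records carry the epoch of the assignment they were polled under, so work created before a rebalance can be detected as stale. The class below is an assumption-laden sketch of that idea, not the actual implementation; partition names are plain Strings.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical epoch tracking: each assignment bumps the partition's epoch, so
// records tagged with an older epoch can be recognised as pre-rebalance leftovers.
class PartitionEpochs {
    private final Map<String, Long> epochs = new HashMap<>();

    void onAssigned(String partition) {
        epochs.merge(partition, 1L, Long::sum); // bump (or start) the epoch
    }

    Long getEpochOfPartition(String partition) {
        return epochs.get(partition); // null when never assigned
    }

    boolean isStale(String partition, long recordEpoch) {
        Long current = epochs.get(partition);
        return current == null || recordEpoch < current;
    }
}
```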
-
isRecordPreviouslyCompleted
public boolean isRecordPreviouslyCompleted(org.apache.kafka.clients.consumer.ConsumerRecord<K,V> rec)
-
isAllowedMoreRecords
public boolean isAllowedMoreRecords(org.apache.kafka.common.TopicPartition tp)
Check we have capacity in offset storage to process more messages.
-
isAllowedMoreRecords
public boolean isAllowedMoreRecords(WorkContainer<?,?> wc)
- See Also:
isAllowedMoreRecords(TopicPartition)
-
hasWorkInCommitQueues
public boolean hasWorkInCommitQueues() -
getNumberOfEntriesInPartitionQueues
public long getNumberOfEntriesInPartitionQueues() -
getHighestSeenOffset
public long getHighestSeenOffset(org.apache.kafka.common.TopicPartition tp) -
addWorkContainer
public void addWorkContainer(WorkContainer<K,V> wc)
-
isBlocked
public boolean isBlocked(org.apache.kafka.common.TopicPartition topicPartition)
Checks if the partition is blocked with back pressure. If false, more messages are allowed to be processed for this partition.
If true, we have calculated that we can't record any more offsets for this partition, as our best-performing encoder requires nearly as much space as is available for this partition's allocation of the maximum offset metadata size.
Default (missing elements) is false - more messages can be processed.
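The back-pressure decision ties into the payload-threshold multiplier documented under getUSED_PAYLOAD_THRESHOLD_MULTIPLIER(). The sketch below is purely illustrative: the 0.75 default and the byte counts are assumed values, not taken from the library, and the check is reduced to a single comparison.

```java
// Hypothetical back-pressure check: a partition is "blocked" when the space the
// best-performing offset encoder needs gets close to the partition's share of
// the maximum offset-metadata size.
class BackPressureSketch {
    static final double USED_PAYLOAD_THRESHOLD_MULTIPLIER = 0.75; // assumed default

    /**
     * @param requiredEncodingBytes bytes the best encoder needs for this partition's offset map
     * @param maxMetadataBytes      hard limit for offset metadata on this partition
     * @return true when more records must NOT be taken for this partition
     */
    static boolean isBlocked(int requiredEncodingBytes, int maxMetadataBytes) {
        double allowed = maxMetadataBytes * USED_PAYLOAD_THRESHOLD_MULTIPLIER;
        // Leave headroom: the per-batch size test can overrun before the next check,
        // so we stop well before the hard limit where the payload would be dropped.
        return requiredEncodingBytes > allowed;
    }
}
```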
-
isPartitionRemovedOrNeverAssigned
public boolean isPartitionRemovedOrNeverAssigned(org.apache.kafka.clients.consumer.ConsumerRecord<?,?> rec) -
onSuccess
public void onSuccess(WorkContainer<K,V> wc)
-
onFailure
public void onFailure(WorkContainer<K,V> wc)
-
collectDirtyCommitData
public Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> collectDirtyCommitData() -
couldBeTakenAsWork
public boolean couldBeTakenAsWork(WorkContainer<?,?> workContainer)
-
getUSED_PAYLOAD_THRESHOLD_MULTIPLIER
public static double getUSED_PAYLOAD_THRESHOLD_MULTIPLIER()
Best-efforts attempt to prevent usage of the offset payload beyond X%. As the encoding-size test is currently only done per batch, we need to leave some buffer for the required space to overrun before hitting the hard limit, where we have to drop the offset payload entirely.
-
setUSED_PAYLOAD_THRESHOLD_MULTIPLIER
public static void setUSED_PAYLOAD_THRESHOLD_MULTIPLIER(double USED_PAYLOAD_THRESHOLD_MULTIPLIER)
Best-efforts attempt to prevent usage of the offset payload beyond X%. As the encoding-size test is currently only done per batch, we need to leave some buffer for the required space to overrun before hitting the hard limit, where we have to drop the offset payload entirely.
-