Package io.confluent.parallelconsumer
Class OffsetMapCodecManager<K,V>
java.lang.Object
io.confluent.parallelconsumer.OffsetMapCodecManager<K,V>
public class OffsetMapCodecManager<K,V>
extends java.lang.Object
Uses multiple encodings and compares their results; once an encoding is decided on, the other options can be refactored out and kept for analysis only - see
encodeOffsetsCompressed(long, org.apache.kafka.common.TopicPartition, java.util.Set<java.lang.Long>)
TODO: consider IOException management - revisit the sneaky-throws usage?
TODO: enforce max uncommitted < encoding length (Short.MAX_VALUE)
Bitset serialisation format:
- byte 1: magic
- bytes 2-3: short: bitset size
- bytes 4-n: the serialised BitSet
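The wire format above can be sketched in plain Java. This is a minimal illustration only, not the library's actual codec (which lives in OffsetMapCodecManager and OffsetEncoding); in particular, the magic-byte value below is a placeholder assumption:

```java
import java.nio.ByteBuffer;
import java.util.BitSet;

public class BitsetWireFormat {
    // Placeholder magic byte for illustration; the real value is defined by OffsetEncoding.
    static final byte MAGIC = 6;

    // Encode: magic byte, then a short holding the bitset size, then the raw BitSet bytes.
    // The size must fit in a short, hence the Short.MAX_VALUE bound mentioned in the TODO.
    static byte[] encode(BitSet bits, int bitsetSize) {
        byte[] payload = bits.toByteArray();
        ByteBuffer buf = ByteBuffer.allocate(1 + Short.BYTES + payload.length);
        buf.put(MAGIC);
        buf.putShort((short) bitsetSize);
        buf.put(payload);
        return buf.array();
    }

    // Decode: verify the magic byte, read the size header, then rebuild the BitSet.
    static BitSet decode(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        byte magic = buf.get();
        if (magic != MAGIC)
            throw new IllegalArgumentException("unexpected magic byte: " + magic);
        short size = buf.getShort(); // bitset size; payload byte length may be smaller
        byte[] payload = new byte[buf.remaining()];
        buf.get(payload);
        return BitSet.valueOf(payload);
    }

    public static void main(String[] args) {
        BitSet incomplete = new BitSet();
        incomplete.set(1);
        incomplete.set(3);
        byte[] encoded = encode(incomplete, 5);
        System.out.println(decode(encoded).equals(incomplete)); // prints true
    }
}
```

The size header is stored separately from the payload because BitSet.toByteArray() trims trailing zero bytes, so the byte length alone does not recover the intended bitset size.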
Field Summary
- static java.nio.charset.Charset CHARSET_TO_USE
- static int DefaultMaxMetadataSize - Maximum size of the commit offset metadata
- static java.util.Optional<OffsetEncoding> forcedCodec - Forces the use of a specific codec, instead of choosing the most efficient one.
Constructor Summary
- OffsetMapCodecManager(WorkManager<K,V> wm, org.apache.kafka.clients.consumer.Consumer<K,V> consumer)
Method Summary
-
Field Details
-
DefaultMaxMetadataSize
public static int DefaultMaxMetadataSize
Maximum size of the commit offset metadata.
See Also:
- OffsetConfig#DefaultMaxMetadataSize, "kafka.coordinator.group.OffsetConfig#DefaultMaxMetadataSize"
-
CHARSET_TO_USE
public static final java.nio.charset.Charset CHARSET_TO_USE
-
forcedCodec
public static java.util.Optional<OffsetEncoding> forcedCodec
Forces the use of a specific codec, instead of choosing the most efficient one. Useful for testing.
-
Constructor Details
-
OffsetMapCodecManager
public OffsetMapCodecManager(WorkManager<K,V> wm, org.apache.kafka.clients.consumer.Consumer<K,V> consumer)