Package io.confluent.parallelconsumer
Class ParallelEoSStreamProcessor<K,V>
java.lang.Object
io.confluent.parallelconsumer.ParallelEoSStreamProcessor<K,V>
- All Implemented Interfaces:
DrainingCloseable,ParallelConsumer<K,V>,ParallelStreamProcessor<K,V>,java.io.Closeable,java.lang.AutoCloseable,org.apache.kafka.clients.consumer.ConsumerRebalanceListener
- Direct Known Subclasses:
JStreamParallelEoSStreamProcessor
public class ParallelEoSStreamProcessor<K,V> extends java.lang.Object implements ParallelStreamProcessor<K,V>, org.apache.kafka.clients.consumer.ConsumerRebalanceListener, java.io.Closeable
- See Also:
ParallelConsumer
-
Nested Class Summary
Nested classes/interfaces inherited from interface io.confluent.parallelconsumer.DrainingCloseable
DrainingCloseable.DrainingMode
Nested classes/interfaces inherited from interface io.confluent.parallelconsumer.ParallelConsumer
ParallelConsumer.Tuple<L,R>
Nested classes/interfaces inherited from interface io.confluent.parallelconsumer.ParallelStreamProcessor
ParallelStreamProcessor.ConsumeProduceResult<K,V,KK,VV> -
Field Summary
Fields
static java.lang.String MDC_INSTANCE_ID
protected WorkManager<K,V> wm -
Constructor Summary
Constructors
ParallelEoSStreamProcessor(ParallelConsumerOptions newOptions)
Construct the AsyncConsumer by wrapping the passed-in consumer and producer, which can be configured any which way, as per normal. -
Method Summary
protected void addToMailbox(WorkContainer<K,V> wc)
protected void addToMailBoxOnUserFunctionSuccess(WorkContainer<K,V> wc, java.util.List<?> resultsFromUserFunction)
void close()
Close the system, without draining.
void close(java.time.Duration timeout, DrainingCloseable.DrainingMode drainMode)
Close the consumer.
java.lang.Exception getFailureCause()
int getNumberOfAssignedPartitions()
java.time.Duration getTimeBetweenCommits()
Time between commits.
boolean isClosedOrFailed()
void onPartitionsAssigned(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
Delegate to WorkManager
void onPartitionsLost(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
Delegate to WorkManager
void onPartitionsRevoked(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
Commit our offsets
protected void onUserFunctionSuccess(WorkContainer<K,V> wc, java.util.List<?> resultsFromUserFunction)
void poll(java.util.function.Consumer<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>> usersVoidConsumptionFunction)
Register a function to be applied in parallel to each received message
void pollAndProduce(java.util.function.Function<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,org.apache.kafka.clients.producer.ProducerRecord<K,V>> userFunction)
Register a function to be applied in parallel to each received message, which in turn returns a ProducerRecord to be sent back to the broker.
void pollAndProduce(java.util.function.Function<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,org.apache.kafka.clients.producer.ProducerRecord<K,V>> userFunction, java.util.function.Consumer<ParallelStreamProcessor.ConsumeProduceResult<K,V,K,V>> callback)
Register a function to be applied in parallel to each received message, which in turn returns a ProducerRecord to be sent back to the broker.
void pollAndProduceMany(java.util.function.Function<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,java.util.List<org.apache.kafka.clients.producer.ProducerRecord<K,V>>> userFunction)
Register a function to be applied in parallel to each received message, which in turn returns one or more ProducerRecords to be sent back to the broker.
void pollAndProduceMany(java.util.function.Function<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,java.util.List<org.apache.kafka.clients.producer.ProducerRecord<K,V>>> userFunction, java.util.function.Consumer<ParallelStreamProcessor.ConsumeProduceResult<K,V,K,V>> callback)
Register a function to be applied in parallel to each received message, which in turn returns one or more ProducerRecords to be sent back to the broker.
void requestCommitAsap()
Request a commit as soon as possible (ASAP), overriding other constraints.
void setLongPollTimeout(java.time.Duration ofMillis)
void setMyId(java.util.Optional<java.lang.String> myId)
Optional ID of this instance.
void setTimeBetweenCommits(java.time.Duration timeBetweenCommits)
Time between commits.
void subscribe(java.util.Collection<java.lang.String> topics)
void subscribe(java.util.Collection<java.lang.String> topics, org.apache.kafka.clients.consumer.ConsumerRebalanceListener callback)
void subscribe(java.util.regex.Pattern pattern)
void subscribe(java.util.regex.Pattern pattern, org.apache.kafka.clients.consumer.ConsumerRebalanceListener callback)
protected <R> void supervisorLoop(java.util.function.Function<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,java.util.List<R>> userFunction, java.util.function.Consumer<R> callback)
protected <R> java.util.List<ParallelConsumer.Tuple<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,R>> userFunctionRunner(java.util.function.Function<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,java.util.List<R>> usersFunction, java.util.function.Consumer<R> callback, WorkContainer<K,V> wc)
Run the supplied function.
void waitForProcessedNotCommitted(java.time.Duration timeout)
Block the calling thread until no more messages are being processed.
int workRemaining()
Of the records consumed from the broker, how many do we have remaining in our local queues
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface io.confluent.parallelconsumer.DrainingCloseable
closeDontDrainFirst, closeDontDrainFirst, closeDrainFirst, closeDrainFirst
-
Field Details
-
MDC_INSTANCE_ID
public static final java.lang.String MDC_INSTANCE_ID
- See Also:
- Constant Field Values
-
wm
-
-
Constructor Details
-
ParallelEoSStreamProcessor
public ParallelEoSStreamProcessor(ParallelConsumerOptions newOptions)
Construct the AsyncConsumer by wrapping the passed-in consumer and producer, which can be configured any which way, as per normal.
- See Also:
ParallelConsumerOptions
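A minimal construction sketch. The builder field names (`consumer(...)`, `producer(...)`) and the empty client properties are assumptions that vary by library version; consult ParallelConsumerOptions for the version in use, and note the real clients need broker and serializer configuration:

```java
import io.confluent.parallelconsumer.ParallelConsumerOptions;
import io.confluent.parallelconsumer.ParallelEoSStreamProcessor;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import java.util.Properties;

public class ProcessorSetupSketch {
    public static void main(String[] args) {
        // Plain Kafka clients, configured as per normal
        // (bootstrap.servers, serializers, etc. elided here)
        Properties props = new Properties();
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Wrap the consumer and producer via the options object the
        // constructor takes; builder fields are an assumption
        ParallelConsumerOptions options = ParallelConsumerOptions.builder()
                .consumer(consumer)
                .producer(producer)
                .build();

        ParallelEoSStreamProcessor<String, String> processor =
                new ParallelEoSStreamProcessor<>(options);
    }
}
```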
-
-
Method Details
-
isClosedOrFailed
public boolean isClosedOrFailed() -
getFailureCause
public java.lang.Exception getFailureCause()
- Returns:
the recorded failure cause, if the system has failed
-
subscribe
public void subscribe(java.util.Collection<java.lang.String> topics)
- Specified by:
subscribe in interface ParallelConsumer<K,V>
- See Also:
KafkaConsumer.subscribe(Collection)
-
subscribe
public void subscribe(java.util.regex.Pattern pattern)
- Specified by:
subscribe in interface ParallelConsumer<K,V>
- See Also:
KafkaConsumer.subscribe(Pattern)
-
subscribe
public void subscribe(java.util.Collection<java.lang.String> topics, org.apache.kafka.clients.consumer.ConsumerRebalanceListener callback)
- Specified by:
subscribe in interface ParallelConsumer<K,V>
- See Also:
KafkaConsumer.subscribe(Collection, ConsumerRebalanceListener)
-
subscribe
public void subscribe(java.util.regex.Pattern pattern, org.apache.kafka.clients.consumer.ConsumerRebalanceListener callback)
- Specified by:
subscribe in interface ParallelConsumer<K,V>
- See Also:
KafkaConsumer.subscribe(Pattern, ConsumerRebalanceListener)
-
onPartitionsRevoked
public void onPartitionsRevoked(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
Commit our offsets. Make sure the calling thread is the thread which performs the commit - i.e. is the OffsetCommitter.
- Specified by:
onPartitionsRevoked in interface org.apache.kafka.clients.consumer.ConsumerRebalanceListener
-
onPartitionsAssigned
public void onPartitionsAssigned(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
Delegate to WorkManager
- Specified by:
onPartitionsAssigned in interface org.apache.kafka.clients.consumer.ConsumerRebalanceListener
- See Also:
WorkManager.onPartitionsAssigned(java.util.Collection<org.apache.kafka.common.TopicPartition>)
-
onPartitionsLost
public void onPartitionsLost(java.util.Collection<org.apache.kafka.common.TopicPartition> partitions)
Delegate to WorkManager
- Specified by:
onPartitionsLost in interface org.apache.kafka.clients.consumer.ConsumerRebalanceListener
- See Also:
WorkManager.onPartitionsLost(java.util.Collection<org.apache.kafka.common.TopicPartition>)
-
poll
public void poll(java.util.function.Consumer<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>> usersVoidConsumptionFunction)
Description copied from interface: ParallelConsumer
Register a function to be applied in parallel to each received message
- Specified by:
poll in interface ParallelConsumer<K,V>
- Parameters:
usersVoidConsumptionFunction - the function
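A usage sketch for poll: subscribe as with a regular KafkaConsumer, then register the per-record function, which the library invokes concurrently across records. The `processor` instance and the topic name are illustrative assumptions:

```java
// Assumes `processor` is a constructed ParallelEoSStreamProcessor<String, String>
processor.subscribe(java.util.List.of("input-topic")); // illustrative topic name

// The function is applied in parallel to each received message
processor.poll(record ->
        System.out.println("processing " + record.key() + "=" + record.value()));
```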
-
pollAndProduceMany
public void pollAndProduceMany(java.util.function.Function<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,java.util.List<org.apache.kafka.clients.producer.ProducerRecord<K,V>>> userFunction, java.util.function.Consumer<ParallelStreamProcessor.ConsumeProduceResult<K,V,K,V>> callback)
Description copied from interface: ParallelStreamProcessor
Register a function to be applied in parallel to each received message, which in turn returns one or more ProducerRecords to be sent back to the broker.
- Specified by:
pollAndProduceMany in interface ParallelStreamProcessor<K,V>
- Parameters:
callback - applied after the produced message is acknowledged by kafka
-
pollAndProduceMany
public void pollAndProduceMany(java.util.function.Function<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,java.util.List<org.apache.kafka.clients.producer.ProducerRecord<K,V>>> userFunction)
Description copied from interface: ParallelStreamProcessor
Register a function to be applied in parallel to each received message, which in turn returns one or more ProducerRecords to be sent back to the broker.
- Specified by:
pollAndProduceMany in interface ParallelStreamProcessor<K,V>
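A fan-out sketch for pollAndProduceMany: one input record maps to several output records, and the library sends each returned ProducerRecord back to the broker. Topic names and the `processor` instance are illustrative assumptions:

```java
import org.apache.kafka.clients.producer.ProducerRecord;

// Assumes `processor` is a subscribed ParallelEoSStreamProcessor<String, String>;
// each input record fans out to two output records here
processor.pollAndProduceMany(record ->
        java.util.List.of(
                new ProducerRecord<>("output-topic", record.key(), record.value()),
                new ProducerRecord<>("audit-topic", record.key(), record.value())));
```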
-
pollAndProduce
public void pollAndProduce(java.util.function.Function<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,org.apache.kafka.clients.producer.ProducerRecord<K,V>> userFunction)
Description copied from interface: ParallelStreamProcessor
Register a function to be applied in parallel to each received message, which in turn returns a ProducerRecord to be sent back to the broker.
- Specified by:
pollAndProduce in interface ParallelStreamProcessor<K,V>
-
pollAndProduce
public void pollAndProduce(java.util.function.Function<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,org.apache.kafka.clients.producer.ProducerRecord<K,V>> userFunction, java.util.function.Consumer<ParallelStreamProcessor.ConsumeProduceResult<K,V,K,V>> callback)
Description copied from interface: ParallelStreamProcessor
Register a function to be applied in parallel to each received message, which in turn returns a ProducerRecord to be sent back to the broker.
- Specified by:
pollAndProduce in interface ParallelStreamProcessor<K,V>
- Parameters:
callback - applied after the produced message is acknowledged by kafka
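A sketch of the callback variant: the first function transforms each input record into one output record, and the second argument runs only after the broker has acknowledged the produced message. Topic name and the `processor` instance are illustrative assumptions:

```java
import org.apache.kafka.clients.producer.ProducerRecord;

// Assumes `processor` is a subscribed ParallelEoSStreamProcessor<String, String>
processor.pollAndProduce(
        record -> new ProducerRecord<>(
                "output-topic", record.key(), record.value().toUpperCase()),
        // ConsumeProduceResult bundles the input record, output record,
        // and produce metadata; invoked after kafka acknowledges the send
        result -> System.out.println("produced and acknowledged: " + result));
```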
-
close
public void close()
Close the system, without draining.
- Specified by:
close in interface java.lang.AutoCloseable
- Specified by:
close in interface java.io.Closeable
- Specified by:
close in interface DrainingCloseable
- See Also:
ParallelEoSStreamProcessor.State.draining
-
close
public void close(java.time.Duration timeout, DrainingCloseable.DrainingMode drainMode)
Description copied from interface: DrainingCloseable
Close the consumer.
- Specified by:
close in interface DrainingCloseable
- Parameters:
timeout - how long to wait before giving up
drainMode - wait for messages already consumed from the broker to be processed before closing
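A shutdown sketch contrasting the two paths. The enum constant name `DRAIN` is an assumption; the `closeDrainFirst()` / `closeDontDrainFirst()` defaults are those listed under DrainingCloseable above:

```java
import io.confluent.parallelconsumer.DrainingCloseable;
import java.time.Duration;

// Bounded, draining shutdown: wait up to 30s for already-consumed records
// to finish processing before closing (30s and DRAIN are illustrative)
processor.close(Duration.ofSeconds(30), DrainingCloseable.DrainingMode.DRAIN);

// Or the convenience defaults inherited from DrainingCloseable:
// processor.closeDrainFirst();      // drain local queues, then close
// processor.closeDontDrainFirst();  // close without draining
```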
-
waitForProcessedNotCommitted
public void waitForProcessedNotCommitted(java.time.Duration timeout)
Block the calling thread until no more messages are being processed. -
supervisorLoop
-
userFunctionRunner
protected <R> java.util.List<ParallelConsumer.Tuple<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,R>> userFunctionRunner(java.util.function.Function<org.apache.kafka.clients.consumer.ConsumerRecord<K,V>,java.util.List<R>> usersFunction, java.util.function.Consumer<R> callback, WorkContainer<K,V> wc)
Run the supplied function. -
addToMailBoxOnUserFunctionSuccess
protected void addToMailBoxOnUserFunctionSuccess(WorkContainer<K,V> wc, java.util.List<?> resultsFromUserFunction) -
onUserFunctionSuccess
protected void onUserFunctionSuccess(WorkContainer<K,V> wc, java.util.List<?> resultsFromUserFunction) -
addToMailbox
-
workRemaining
public int workRemaining()
Description copied from interface: DrainingCloseable
Of the records consumed from the broker, how many do we have remaining in our local queues
- Specified by:
workRemaining in interface DrainingCloseable
- Returns:
the number of consumed but outstanding records to process
-
setLongPollTimeout
public void setLongPollTimeout(java.time.Duration ofMillis) -
requestCommitAsap
public void requestCommitAsap()
Request a commit as soon as possible (ASAP), overriding other constraints. -
setTimeBetweenCommits
public void setTimeBetweenCommits(java.time.Duration timeBetweenCommits)
Time between commits. Using a higher frequency will put more load on the brokers. -
getTimeBetweenCommits
public java.time.Duration getTimeBetweenCommits()
Time between commits. Using a higher frequency will put more load on the brokers. -
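A tuning sketch for the commit interval: a longer interval commits less often, reducing broker load at the cost of a larger window of records that may be reprocessed after a crash. The 5-second value is illustrative, not a recommendation:

```java
import java.time.Duration;

// Assumes `processor` is a ParallelEoSStreamProcessor instance.
// Commit at most every 5 seconds (illustrative value)
processor.setTimeBetweenCommits(Duration.ofSeconds(5));

// Read back the configured interval
Duration interval = processor.getTimeBetweenCommits();
```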
getNumberOfAssignedPartitions
public int getNumberOfAssignedPartitions() -
setMyId
public void setMyId(java.util.Optional<java.lang.String> myId)
Optional ID of this instance. Useful for testing.
-