
class KafkaConsumer[K, V] extends ReadStream[KafkaConsumerRecord[K, V]]

Vert.x Kafka consumer.

You receive Kafka records by providing a handler via io.vertx.scala.kafka.client.consumer.KafkaConsumer#handler. As records arrive, the handler is called with each record.

The io.vertx.scala.kafka.client.consumer.KafkaConsumer#pause and io.vertx.scala.kafka.client.consumer.KafkaConsumer#resume methods provide global control over reading records from the consumer.

The overloads of io.vertx.scala.kafka.client.consumer.KafkaConsumer#pause and io.vertx.scala.kafka.client.consumer.KafkaConsumer#resume that take a TopicPartition (or a set of them) provide finer-grained control over reading records from specific topics/partitions; these are Kafka-specific operations. A combined sketch of these partition-level operations follows the member list below.
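
A minimal usage sketch: create a consumer, set a record handler and subscribe using the Future-returning variant. The KafkaConsumer.create factory, the configuration keys, the Map type it accepts and the use of the global ExecutionContext are assumptions taken from the wider Vert.x Kafka client documentation, not from this page.

    import io.vertx.scala.core.Vertx
    import io.vertx.scala.kafka.client.consumer.KafkaConsumer
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.util.{Failure, Success}

    val vertx = Vertx.vertx()

    // Plain Kafka consumer configuration; the exact keys and the Map type
    // accepted by create are assumptions based on the Vert.x Kafka client docs.
    val config = Map(
      "bootstrap.servers"  -> "localhost:9092",
      "key.deserializer"   -> "org.apache.kafka.common.serialization.StringDeserializer",
      "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
      "group.id"           -> "my-group",
      "auto.offset.reset"  -> "earliest",
      "enable.auto.commit" -> "false"
    )

    val consumer = KafkaConsumer.create[String, String](vertx, config)

    // As records arrive, the handler is called with each one.
    consumer.handler(record =>
      println(s"${record.topic()}-${record.partition()}@${record.offset()}: ${record.value()}"))

    // pause()/resume() give global control over record delivery.
    consumer.pause()
    consumer.resume()

    // subscribeFuture is the Future-returning variant of subscribe.
    consumer.subscribeFuture("my-topic").onComplete {
      case Success(_)  => println("subscribed")
      case Failure(ex) => println(s"subscription failed: ${ex.getMessage}")
    }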

Linear Supertypes
ReadStream[KafkaConsumerRecord[K, V]], StreamBase, AnyRef, Any

Instance Constructors

  1. new KafkaConsumer(_asJava: AnyRef)(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[K], arg1: scala.reflect.api.JavaUniverse.TypeTag[V])

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def asJava: AnyRef
    Definition Classes
    KafkaConsumer → ReadStream → StreamBase
  6. def assign(topicPartitions: Set[TopicPartition], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Manually assign a set of partitions to this consumer.

    topicPartitions

    the partitions to assign

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  7. def assign(topicPartition: TopicPartition, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Manually assign a partition to this consumer.

    topicPartition

    the partition to assign. See TopicPartition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  8. def assign(topicPartitions: Set[TopicPartition]): KafkaConsumer[K, V]

    Manually assign a set of partitions to this consumer.

    topicPartitions

    the partitions to assign

    returns

    current KafkaConsumer instance

  9. def assign(topicPartition: TopicPartition): KafkaConsumer[K, V]

    Manually assign a partition to this consumer.

    topicPartition

    the partition to assign. See TopicPartition

    returns

    current KafkaConsumer instance

  10. def assignFuture(topicPartitions: Set[TopicPartition]): Future[Unit]

    Like assign but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  11. def assignFuture(topicPartition: TopicPartition): Future[Unit]

    Like assign but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  12. def assignment(handler: Handler[AsyncResult[Set[TopicPartition]]]): KafkaConsumer[K, V]

    Get the set of partitions currently assigned to this consumer.

    handler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  13. def assignmentFuture(): Future[Set[TopicPartition]]

    Like assignment but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  14. def beginningOffsets(topicPartition: TopicPartition, handler: Handler[AsyncResult[Long]]): Unit

    Get the first offset for the given partition.

    topicPartition

    the partition for which to get the earliest offset. See TopicPartition

    handler

    handler called when the operation completes; returns the earliest available offset for the given partition

  15. def beginningOffsetsFuture(topicPartition: TopicPartition): Future[Long]

    Like beginningOffsets but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  16. def clone(): AnyRef
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  17. def close(completionHandler: Handler[AsyncResult[Unit]]): Unit

    Close the consumer.

    completionHandler

    handler called when the operation completes

  18. def close(): Unit

    Close the consumer.

  19. def closeFuture(): Future[Unit]

    Like close but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  20. def commit(completionHandler: Handler[AsyncResult[Unit]]): Unit

    Commit current offsets for all subscribed topics and partitions.

    completionHandler

    handler called when the operation completes

  21. def commit(): Unit

    Commit current offsets for all subscribed topics and partitions.

  22. def commitFuture(): Future[Unit]

    Like commit but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  23. def committed(topicPartition: TopicPartition, handler: Handler[AsyncResult[OffsetAndMetadata]]): Unit

    Get the last committed offset for the given partition (whether the commit happened by this process or another).

    topicPartition

    the topic partition for which to get the last committed offset. See TopicPartition

    handler

    handler called when the operation completes

  24. def committedFuture(topicPartition: TopicPartition): Future[OffsetAndMetadata]

    Like committed but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  25. def endHandler(endHandler: Handler[Unit]): KafkaConsumer[K, V]

    Set an end handler. Once the stream has ended and there is no more data to be read, this handler will be called.

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaConsumer → ReadStream
  26. def endOffsets(topicPartition: TopicPartition, handler: Handler[AsyncResult[Long]]): Unit

    Get the last offset for the given partition. The last offset of a partition is the offset of the upcoming message, i.e. the offset of the last available message + 1.

    topicPartition

    the partition for which to get the end offset. See TopicPartition

    handler

    handler called when the operation completes; returns the end offset for the given partition

  27. def endOffsetsFuture(topicPartition: TopicPartition): Future[Long]

    Like endOffsets but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  28. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  29. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  30. def exceptionHandler(handler: Handler[Throwable]): KafkaConsumer[K, V]

    Set an exception handler on the read stream.

    handler

    the exception handler

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaConsumer → ReadStream → StreamBase
  31. def finalize(): Unit
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  32. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
  33. def handler(handler: Handler[KafkaConsumerRecord[K, V]]): KafkaConsumer[K, V]

    Set a data handler. As data is read, the handler will be called with the data.

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaConsumer → ReadStream
  34. def hashCode(): Int
    Definition Classes
    AnyRef → Any
  35. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  36. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  37. final def notify(): Unit
    Definition Classes
    AnyRef
  38. final def notifyAll(): Unit
    Definition Classes
    AnyRef
  39. def offsetsForTimes(topicPartition: TopicPartition, timestamp: Long, handler: Handler[AsyncResult[OffsetAndTimestamp]]): Unit

    Look up the offset for the given partition by timestamp.

    topicPartition

    the TopicPartition to query. See TopicPartition

    timestamp

    the timestamp to be used in the query

    handler

    handler called when the operation completes

  40. def offsetsForTimesFuture(topicPartition: TopicPartition, timestamp: Long): Future[OffsetAndTimestamp]

    Like offsetsForTimes but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  41. def partitionsAssignedHandler(handler: Handler[Set[TopicPartition]]): KafkaConsumer[K, V]

    Set the handler called when topic partitions are assigned to the consumer.

    handler

    handler called on assigned topic partitions

    returns

    current KafkaConsumer instance

  42. def partitionsFor(topic: String, handler: Handler[AsyncResult[Buffer[PartitionInfo]]]): KafkaConsumer[K, V]

    Get metadata about the partitions for a given topic.

    topic

    the topic for which to get partition info

    handler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  43. def partitionsForFuture(topic: String): Future[Buffer[PartitionInfo]]

    Like partitionsFor but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  44. def partitionsRevokedHandler(handler: Handler[Set[TopicPartition]]): KafkaConsumer[K, V]

    Set the handler called when topic partitions are revoked from the consumer.

    handler

    handler called on revoked topic partitions

    returns

    current KafkaConsumer instance

  45. def pause(topicPartitions: Set[TopicPartition], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Suspend fetching from the requested partitions.

    topicPartitions

    the topic partitions from which to suspend fetching

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  46. def pause(topicPartition: TopicPartition, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Suspend fetching from the requested partition.

    topicPartition

    the topic partition from which to suspend fetching. See TopicPartition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  47. def pause(topicPartitions: Set[TopicPartition]): KafkaConsumer[K, V]

    Suspend fetching from the requested partitions.

    topicPartitions

    the topic partitions from which to suspend fetching

    returns

    current KafkaConsumer instance

  48. def pause(topicPartition: TopicPartition): KafkaConsumer[K, V]

    Suspend fetching from the requested partition.

    topicPartition

    the topic partition from which to suspend fetching. See TopicPartition

    returns

    current KafkaConsumer instance

  49. def pause(): KafkaConsumer[K, V]

    Pause the ReadStream. While it is paused, no data will be sent to the data handler.

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaConsumer → ReadStream
  50. def pauseFuture(topicPartitions: Set[TopicPartition]): Future[Unit]

    Like pause but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  51. def pauseFuture(topicPartition: TopicPartition): Future[Unit]

    Like pause but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  52. def paused(handler: Handler[AsyncResult[Set[TopicPartition]]]): Unit

    Get the set of partitions that were previously paused by a call to pause(Set).

    handler

    handler called when the operation completes

  53. def pausedFuture(): Future[Set[TopicPartition]]

    Like paused but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  54. def position(partition: TopicPartition, handler: Handler[AsyncResult[Long]]): Unit

    Get the offset of the next record that will be fetched (if a record with that offset exists).

    partition

    the partition for which to get the position. See TopicPartition

    handler

    handler called when the operation completes

  55. def positionFuture(partition: TopicPartition): Future[Long]

    Like position but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  56. def resume(topicPartitions: Set[TopicPartition], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Resume the specified partitions which have been paused with pause.

    topicPartitions

    the topic partitions from which to resume fetching

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  57. def resume(topicPartition: TopicPartition, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Resume the specified partition which has been paused with pause.

    topicPartition

    the topic partition from which to resume fetching. See TopicPartition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  58. def resume(topicPartitions: Set[TopicPartition]): KafkaConsumer[K, V]

    Resume the specified partitions which have been paused with pause.

    topicPartitions

    the topic partitions from which to resume fetching

    returns

    current KafkaConsumer instance

  59. def resume(topicPartition: TopicPartition): KafkaConsumer[K, V]

    Resume the specified partition which has been paused with pause.

    topicPartition

    the topic partition from which to resume fetching. See TopicPartition

    returns

    current KafkaConsumer instance

  60. def resume(): KafkaConsumer[K, V]

    Resume reading. If the ReadStream has been paused, reading will recommence on it.

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaConsumer → ReadStream
  61. def resumeFuture(topicPartitions: Set[TopicPartition]): Future[Unit]

    Like resume but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  62. def resumeFuture(topicPartition: TopicPartition): Future[Unit]

    Like resume but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  63. def seek(topicPartition: TopicPartition, offset: Long, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Overrides the fetch offsets that the consumer will use on the next poll.

    topicPartition

    the topic partition on which to seek. See TopicPartition

    offset

    the offset to seek to inside the topic partition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  64. def seek(topicPartition: TopicPartition, offset: Long): KafkaConsumer[K, V]

    Overrides the fetch offsets that the consumer will use on the next poll.

    topicPartition

    the topic partition on which to seek. See TopicPartition

    offset

    the offset to seek to inside the topic partition

    returns

    current KafkaConsumer instance

  65. def seekFuture(topicPartition: TopicPartition, offset: Long): Future[Unit]

    Like seek but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  66. def seekToBeginning(topicPartitions: Set[TopicPartition], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Seek to the first offset for each of the given partitions.

    topicPartitions

    the topic partitions on which to seek

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  67. def seekToBeginning(topicPartition: TopicPartition, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Seek to the first offset for the given partition.

    topicPartition

    the topic partition on which to seek. See TopicPartition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  68. def seekToBeginning(topicPartitions: Set[TopicPartition]): KafkaConsumer[K, V]

    Seek to the first offset for each of the given partitions.

    topicPartitions

    the topic partitions on which to seek

    returns

    current KafkaConsumer instance

  69. def seekToBeginning(topicPartition: TopicPartition): KafkaConsumer[K, V]

    Seek to the first offset for the given partition.

    topicPartition

    the topic partition on which to seek. See TopicPartition

    returns

    current KafkaConsumer instance

  70. def seekToBeginningFuture(topicPartitions: Set[TopicPartition]): Future[Unit]

    Like seekToBeginning but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  71. def seekToBeginningFuture(topicPartition: TopicPartition): Future[Unit]

    Like seekToBeginning but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  72. def seekToEnd(topicPartitions: Set[TopicPartition], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Seek to the last offset for each of the given partitions.

    topicPartitions

    the topic partitions on which to seek

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  73. def seekToEnd(topicPartition: TopicPartition, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Seek to the last offset for the given partition.

    topicPartition

    the topic partition on which to seek. See TopicPartition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  74. def seekToEnd(topicPartitions: Set[TopicPartition]): KafkaConsumer[K, V]

    Seek to the last offset for each of the given partitions.

    topicPartitions

    the topic partitions on which to seek

    returns

    current KafkaConsumer instance

  75. def seekToEnd(topicPartition: TopicPartition): KafkaConsumer[K, V]

    Seek to the last offset for the given partition.

    topicPartition

    the topic partition on which to seek. See TopicPartition

    returns

    current KafkaConsumer instance

  76. def seekToEndFuture(topicPartitions: Set[TopicPartition]): Future[Unit]

    Like seekToEnd but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  77. def seekToEndFuture(topicPartition: TopicPartition): Future[Unit]

    Like seekToEnd but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  78. def subscribe(topics: Set[String], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Subscribe to the given set of topics to get dynamically assigned partitions.

    topics

    the topics to subscribe to

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  79. def subscribe(topic: String, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Subscribe to the given topic to get dynamically assigned partitions.

    topic

    the topic to subscribe to

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  80. def subscribe(topics: Set[String]): KafkaConsumer[K, V]

    Subscribe to the given set of topics to get dynamically assigned partitions.

    topics

    the topics to subscribe to

    returns

    current KafkaConsumer instance

  81. def subscribe(topic: String): KafkaConsumer[K, V]

    Subscribe to the given topic to get dynamically assigned partitions.

    topic

    the topic to subscribe to

    returns

    current KafkaConsumer instance

  82. def subscribeFuture(topics: Set[String]): Future[Unit]

    Like subscribe but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  83. def subscribeFuture(topic: String): Future[Unit]

    Like subscribe but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  84. def subscription(handler: Handler[AsyncResult[Set[String]]]): KafkaConsumer[K, V]

    Get the current subscription.

    handler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  85. def subscriptionFuture(): Future[Set[String]]

    Like subscription but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  86. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  87. def toString(): String
    Definition Classes
    AnyRef → Any
  88. def unsubscribe(completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Unsubscribe from topics currently subscribed with subscribe.

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  89. def unsubscribe(): KafkaConsumer[K, V]

    Unsubscribe from topics currently subscribed with subscribe.

    returns

    current KafkaConsumer instance

  90. def unsubscribeFuture(): Future[Unit]

    Like unsubscribe but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  91. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  92. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  93. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
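
A short sketch of composing the partition-level operations listed above (assign, seek, pause, resume, endOffsets, commit) through their Future-returning variants. It assumes the consumer value from the sketch in the class description, and it assumes TopicPartition is a Vert.x data object with an apply() constructor and fluent setTopic/setPartition setters.

    import io.vertx.scala.kafka.client.common.TopicPartition
    import scala.concurrent.ExecutionContext.Implicits.global

    // Hypothetical topic/partition; setTopic/setPartition are assumed setters.
    val tp = TopicPartition().setTopic("my-topic").setPartition(0)

    for {
      _   <- consumer.assignFuture(tp)        // manual assignment instead of subscribe
      _   <- consumer.seekFuture(tp, 0L)      // replay the partition from offset 0
      _   <- consumer.pauseFuture(tp)         // suspend fetching from this partition only
      _   <- consumer.resumeFuture(tp)        // resume fetching from it
      end <- consumer.endOffsetsFuture(tp)    // offset of the last available message + 1
      _   <- consumer.commitFuture()          // commit current offsets
    } yield println(s"end offset of my-topic-0 is $end")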

Inherited from ReadStream[KafkaConsumerRecord[K, V]]

Inherited from StreamBase

Inherited from AnyRef

Inherited from Any
