
class KafkaConsumer[K, V] extends ReadStream[KafkaConsumerRecord[K, V]]

Vert.x Kafka consumer.

You receive Kafka records by providing a io.vertx.scala.kafka.client.consumer.KafkaConsumer#handler. As messages arrive, the handler is called with the records.

The io.vertx.scala.kafka.client.consumer.KafkaConsumer#pause and io.vertx.scala.kafka.client.consumer.KafkaConsumer#resume provide global control over reading records from the consumer.

The partition-specific overloads of io.vertx.scala.kafka.client.consumer.KafkaConsumer#pause and io.vertx.scala.kafka.client.consumer.KafkaConsumer#resume provide finer-grained control over reading records for specific topic/partition pairs; these are Kafka-specific operations.
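
A minimal consumption sketch in Scala. The consumer value is assumed to come from the client's create factory (which is not part of this page), and the lambdas rely on Scala's SAM conversion to io.vertx.core.Handler:

```scala
import io.vertx.scala.kafka.client.consumer.KafkaConsumer

// `consumer` is assumed to have been created elsewhere, e.g. via the client's
// KafkaConsumer.create factory (not documented on this page).
def startConsuming(consumer: KafkaConsumer[String, String]): KafkaConsumer[String, String] =
  consumer
    .exceptionHandler(err => println(s"consumer error: ${err.getMessage}"))  // ReadStream error callback
    .handler(record =>                                                       // called for every record read
      println(s"${record.topic()}/${record.partition()}@${record.offset()}: ${record.value()}"))
    .subscribe("my-topic")                                                   // dynamic partition assignment
```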

Linear Supertypes
ReadStream[KafkaConsumerRecord[K, V]], StreamBase, AnyRef, Any

Instance Constructors

  1. new KafkaConsumer(_asJava: AnyRef)(implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[K], arg1: scala.reflect.api.JavaUniverse.TypeTag[V])

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def asJava: AnyRef
    Definition Classes
    KafkaConsumer → ReadStream → StreamBase
  6. def assign(topicPartitions: Set[TopicPartition], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Manually assign a set of partitions to this consumer (see the manual-assignment sketch after this member list).

    topicPartitions

    partitions to assign to this consumer

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  7. def assign(topicPartition: TopicPartition, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Manually assign a partition to this consumer.

    topicPartition

    partition to assign to this consumer; see TopicPartition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  8. def assign(topicPartitions: Set[TopicPartition]): KafkaConsumer[K, V]

    Manually assign a set of partitions to this consumer.

    topicPartitions

    partitions to assign to this consumer

    returns

    current KafkaConsumer instance

  9. def assign(topicPartition: TopicPartition): KafkaConsumer[K, V]

    Manually assign a partition to this consumer.

    topicPartition

    partition to assign to this consumer; see TopicPartition

    returns

    current KafkaConsumer instance

  10. def assignFuture(topicPartitions: Set[TopicPartition]): Future[Unit]

    Like assign but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  11. def assignFuture(topicPartition: TopicPartition): Future[Unit]

    Like assign but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  12. def assignment(handler: Handler[AsyncResult[Set[TopicPartition]]]): KafkaConsumer[K, V]

    Get the set of partitions currently assigned to this consumer.

    handler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  13. def assignmentFuture(): Future[Set[TopicPartition]]

    Like assignment but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  14. def clone(): AnyRef
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  15. def close(completionHandler: Handler[AsyncResult[Unit]]): Unit

    Close the consumer.

    completionHandler

    handler called when the operation completes

  16. def close(): Unit

    Close the consumer.

  17. def closeFuture(): Future[Unit]

    Like close but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  18. def commit(completionHandler: Handler[AsyncResult[Unit]]): Unit

    Commit current offsets for all the subscribed topics and partitions.

    completionHandler

    handler called when the operation completes

  19. def commit(): Unit

    Commit current offsets for all the subscribed topics and partitions.

  20. def commitFuture(): Future[Unit]

    Like commit but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  21. def committed(topicPartition: TopicPartition, handler: Handler[AsyncResult[OffsetAndMetadata]]): Unit

    Get the last committed offset for the given partition (whether the commit happened by this process or another).

    topicPartition

    topic partition for which to get the last committed offset; see TopicPartition

    handler

    handler called when the operation completes

  22. def committedFuture(topicPartition: TopicPartition): Future[OffsetAndMetadata]

    Like committed but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  23. def endHandler(endHandler: Handler[Unit]): KafkaConsumer[K, V]

    Set an end handler. Once the stream has ended, and there is no more data to be read, this handler will be called.

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaConsumer → ReadStream
  24. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  25. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  26. def exceptionHandler(handler: Handler[Throwable]): KafkaConsumer[K, V]

    Set an exception handler on the read stream.

    handler

    the exception handler

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaConsumer → ReadStream → StreamBase
  27. def finalize(): Unit
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  28. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
  29. def handler(handler: Handler[KafkaConsumerRecord[K, V]]): KafkaConsumer[K, V]

    Set a data handler. As data is read, the handler will be called with the data.

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaConsumer → ReadStream
  30. def hashCode(): Int
    Definition Classes
    AnyRef → Any
  31. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  32. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  33. final def notify(): Unit
    Definition Classes
    AnyRef
  34. final def notifyAll(): Unit
    Definition Classes
    AnyRef
  35. def partitionsAssignedHandler(handler: Handler[Set[TopicPartition]]): KafkaConsumer[K, V]

    Set the handler called when topic partitions are assigned to the consumer (see the rebalance sketch after this member list).

    handler

    handler called on assigned topic partitions

    returns

    current KafkaConsumer instance

  36. def partitionsFor(topic: String, handler: Handler[AsyncResult[Buffer[PartitionInfo]]]): KafkaConsumer[K, V]

    Get metadata about the partitions for a given topic.

    topic

    topic for which to get partition metadata

    handler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  37. def partitionsForFuture(topic: String): Future[Buffer[PartitionInfo]]

    Like partitionsFor but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  38. def partitionsRevokedHandler(handler: Handler[Set[TopicPartition]]): KafkaConsumer[K, V]

    Set the handler called when topic partitions are revoked from the consumer.

    handler

    handler called on revoked topic partitions

    returns

    current KafkaConsumer instance

  39. def pause(topicPartitions: Set[TopicPartition], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Suspend fetching from the requested partitions (see the flow-control sketch after this member list).

    topicPartitions

    topic partitions from which to suspend fetching

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  40. def pause(topicPartition: TopicPartition, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Suspend fetching from the requested partition.

    topicPartition

    topic partition from which to suspend fetching; see TopicPartition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  41. def pause(topicPartitions: Set[TopicPartition]): KafkaConsumer[K, V]

    Suspend fetching from the requested partitions.

    topicPartitions

    topic partitions from which to suspend fetching

    returns

    current KafkaConsumer instance

  42. def pause(topicPartition: TopicPartition): KafkaConsumer[K, V]

    Suspend fetching from the requested partition.

    topicPartition

    topic partition from which to suspend fetching; see TopicPartition

    returns

    current KafkaConsumer instance

  43. def pause(): KafkaConsumer[K, V]

    Pause the ReadStream. While it's paused, no data will be sent to the data handler.

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaConsumer → ReadStream
  44. def pauseFuture(topicPartitions: Set[TopicPartition]): Future[Unit]

    Like pause but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  45. def pauseFuture(topicPartition: TopicPartition): Future[Unit]

    Like pause but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  46. def paused(handler: Handler[AsyncResult[Set[TopicPartition]]]): Unit

    Get the set of partitions that were previously paused by a call to pause(Set).

    handler

    handler called when the operation completes

  47. def pausedFuture(): Future[Set[TopicPartition]]

    Like paused but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  48. def position(partition: TopicPartition, handler: Handler[AsyncResult[Long]]): Unit

    Get the offset of the next record that will be fetched (if a record with that offset exists).

    partition

    the partition to get the position for; see TopicPartition

    handler

    handler called when the operation completes

  49. def positionFuture(partition: TopicPartition): Future[Long]

    Like position but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  50. def resume(topicPartitions: Set[TopicPartition], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Resume the specified partitions which have been paused with pause.

    topicPartitions

    topic partitions from which to resume fetching

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  51. def resume(topicPartition: TopicPartition, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Resume the specified partition which has been paused with pause.

    topicPartition

    topic partition from which to resume fetching; see TopicPartition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  52. def resume(topicPartitions: Set[TopicPartition]): KafkaConsumer[K, V]

    Resume the specified partitions which have been paused with pause.

    topicPartitions

    topic partitions from which to resume fetching

    returns

    current KafkaConsumer instance

  53. def resume(topicPartition: TopicPartition): KafkaConsumer[K, V]

    Resume the specified partition which has been paused with pause.

    topicPartition

    topic partition from which to resume fetching; see TopicPartition

    returns

    current KafkaConsumer instance

  54. def resume(): KafkaConsumer[K, V]

    Resume reading. If the ReadStream has been paused, reading will recommence on it.

    returns

    a reference to this, so the API can be used fluently

    Definition Classes
    KafkaConsumer → ReadStream
  55. def resumeFuture(topicPartitions: Set[TopicPartition]): Future[Unit]

    Like resume but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  56. def resumeFuture(topicPartition: TopicPartition): Future[Unit]

    Like resume but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  57. def seek(topicPartition: TopicPartition, offset: Long, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Overrides the fetch offsets that the consumer will use on the next poll (see the offset-handling sketch after this member list).

    topicPartition

    topic partition for which to seek; see TopicPartition

    offset

    offset to seek to inside the topic partition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  58. def seek(topicPartition: TopicPartition, offset: Long): KafkaConsumer[K, V]

    Overrides the fetch offsets that the consumer will use on the next poll.

    topicPartition

    topic partition for which to seek; see TopicPartition

    offset

    offset to seek to inside the topic partition

    returns

    current KafkaConsumer instance

  59. def seekFuture(topicPartition: TopicPartition, offset: Long): Future[Unit]

    Like seek but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  60. def seekToBeginning(topicPartitions: Set[TopicPartition], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Seek to the first offset for each of the given partitions.

    topicPartitions

    topic partitions for which to seek

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  61. def seekToBeginning(topicPartition: TopicPartition, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Seek to the first offset for the given partition.

    topicPartition

    topic partition for which to seek; see TopicPartition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  62. def seekToBeginning(topicPartitions: Set[TopicPartition]): KafkaConsumer[K, V]

    Seek to the first offset for each of the given partitions.

    topicPartitions

    topic partitions for which to seek

    returns

    current KafkaConsumer instance

  63. def seekToBeginning(topicPartition: TopicPartition): KafkaConsumer[K, V]

    Seek to the first offset for the given partition.

    topicPartition

    topic partition for which to seek; see TopicPartition

    returns

    current KafkaConsumer instance

  64. def seekToBeginningFuture(topicPartitions: Set[TopicPartition]): Future[Unit]

    Like seekToBeginning but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  65. def seekToBeginningFuture(topicPartition: TopicPartition): Future[Unit]

    Like seekToBeginning but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  66. def seekToEnd(topicPartitions: Set[TopicPartition], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Seek to the last offset for each of the given partitions.

    topicPartitions

    topic partitions for which to seek

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  67. def seekToEnd(topicPartition: TopicPartition, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Seek to the last offset for the given partition.

    topicPartition

    topic partition for which to seek; see TopicPartition

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  68. def seekToEnd(topicPartitions: Set[TopicPartition]): KafkaConsumer[K, V]

    Seek to the last offset for each of the given partitions.

    topicPartitions

    topic partitions for which to seek

    returns

    current KafkaConsumer instance

  69. def seekToEnd(topicPartition: TopicPartition): KafkaConsumer[K, V]

    Seek to the last offset for the given partition.

    topicPartition

    topic partition for which to seek; see TopicPartition

    returns

    current KafkaConsumer instance

  70. def seekToEndFuture(topicPartitions: Set[TopicPartition]): Future[Unit]

    Like seekToEnd but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  71. def seekToEndFuture(topicPartition: TopicPartition): Future[Unit]

    Like seekToEnd but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  72. def subscribe(topics: Set[String], completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Subscribe to the given set of topics to get dynamically assigned partitions.

    topics

    topics to subscribe to

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  73. def subscribe(topic: String, completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Subscribe to the given topic to get dynamically assigned partitions.

    topic

    topic to subscribe to

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  74. def subscribe(topics: Set[String]): KafkaConsumer[K, V]

    Subscribe to the given set of topics to get dynamically assigned partitions.

    topics

    topics to subscribe to

    returns

    current KafkaConsumer instance

  75. def subscribe(topic: String): KafkaConsumer[K, V]

    Subscribe to the given topic to get dynamically assigned partitions.

    topic

    topic to subscribe to

    returns

    current KafkaConsumer instance

  76. def subscribeFuture(topics: Set[String]): Future[Unit]

    Like subscribe but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  77. def subscribeFuture(topic: String): Future[Unit]

    Like subscribe but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  78. def subscription(handler: Handler[AsyncResult[Set[String]]]): KafkaConsumer[K, V]

    Get the current subscription.

    handler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  79. def subscriptionFuture(): Future[Set[String]]

    Like subscription but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  80. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  81. def toString(): String
    Definition Classes
    AnyRef → Any
  82. def unsubscribe(completionHandler: Handler[AsyncResult[Unit]]): KafkaConsumer[K, V]

    Unsubscribe from topics currently subscribed with subscribe.

    completionHandler

    handler called when the operation completes

    returns

    current KafkaConsumer instance

  83. def unsubscribe(): KafkaConsumer[K, V]

    Unsubscribe from topics currently subscribed with subscribe.

    returns

    current KafkaConsumer instance

  84. def unsubscribeFuture(): Future[Unit]

    Like unsubscribe but returns a scala.concurrent.Future instead of taking an AsyncResultHandler.

  85. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  86. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  87. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
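
Usage sketches

The manual-assignment sketch referenced from the assign / assignment entries above, using the Future variants documented on this page. The TopicPartition() factory and its setTopic / setPartition fluent setters are assumed from the client's common data objects (not documented here), and the global execution context stands in for a Vert.x-aware one:

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global // a Vert.x-aware context is usually preferable
import io.vertx.scala.kafka.client.common.TopicPartition
import io.vertx.scala.kafka.client.consumer.KafkaConsumer

def assignManually(consumer: KafkaConsumer[String, String]): Future[Unit] = {
  // setTopic/setPartition are assumed fluent setters on the common TopicPartition data object
  val tp = TopicPartition().setTopic("my-topic").setPartition(0)
  for {
    _        <- consumer.assignFuture(tp)    // manual assignment, bypassing group rebalancing
    assigned <- consumer.assignmentFuture()  // read back the currently assigned partitions
  } yield println(s"assigned: $assigned")
}
```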
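
The offset-handling sketch referenced from the seek entries above, combining the seekToBeginning, position, seek, commit and committed Future variants. The 100-record jump is purely illustrative, and the partition is assumed to be one the consumer currently owns:

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import io.vertx.scala.kafka.client.common.TopicPartition
import io.vertx.scala.kafka.client.consumer.KafkaConsumer

def manageOffsets(consumer: KafkaConsumer[String, String], tp: TopicPartition): Future[Unit] =
  for {
    _         <- consumer.seekToBeginningFuture(tp)       // rewind to the earliest offset
    position  <- consumer.positionFuture(tp)              // offset of the next record to be fetched
    _         <- consumer.seekFuture(tp, position + 100L) // jump forward 100 records (illustrative)
    _         <- consumer.commitFuture()                  // commit current offsets
    committed <- consumer.committedFuture(tp)             // last committed offset, possibly from another process
  } yield println(s"position=$position committed=$committed")
```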
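
The flow-control sketch referenced from the partition-specific pause / resume / paused entries. Unlike the no-argument pause(), this suspends fetching only for the given partition while the rest of the stream keeps flowing:

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import io.vertx.scala.kafka.client.common.TopicPartition
import io.vertx.scala.kafka.client.consumer.KafkaConsumer

def throttlePartition(consumer: KafkaConsumer[String, String], tp: TopicPartition): Future[Unit] =
  for {
    _      <- consumer.pauseFuture(tp)   // suspend fetching from this partition only
    paused <- consumer.pausedFuture()    // partitions currently paused via the partition-specific pause
    _      <- consumer.resumeFuture(tp)  // resume fetching from the partition
  } yield println(s"was paused: $paused")
```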
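
Finally, the rebalance sketch referenced from partitionsAssignedHandler / partitionsRevokedHandler, plus topic metadata lookup and shutdown. The topic name is a placeholder and the PartitionInfo values are printed as-is, since their accessors are not documented on this page:

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import io.vertx.scala.kafka.client.consumer.KafkaConsumer

def watchRebalancesAndShutDown(consumer: KafkaConsumer[String, String]): Future[Unit] = {
  consumer
    .partitionsAssignedHandler(parts => println(s"partitions assigned by the coordinator: $parts"))
    .partitionsRevokedHandler(parts => println(s"partitions revoked from this consumer: $parts"))

  for {
    infos <- consumer.partitionsForFuture("my-topic")                  // partition metadata for the topic
    _      = infos.foreach(info => println(s"partition info: $info"))
    _     <- consumer.unsubscribeFuture()                              // drop the current subscription
    _     <- consumer.closeFuture()                                    // then close the consumer
  } yield ()
}
```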

Inherited from ReadStream[KafkaConsumerRecord[K, V]]

Inherited from StreamBase

Inherited from AnyRef

Inherited from Any
