
org.apache.spark.sql.rapids.execution

SerializeConcatHostBuffersDeserializeBatch

class SerializeConcatHostBuffersDeserializeBatch extends Serializable with Logging

Class that is used to broadcast results (a contiguous host batch) to executors.

This is instantiated on the driver, serialized into an output stream provided by Spark's broadcast mechanism, and deserialized on the executor. Both the driver's and the executor's copies are cleaned up via GC. Because Spark closes AutoCloseable broadcast results after spilling them to disk, this class does not subclass AutoCloseable; instead it implements a closeInternal method that is triggered only via GC.
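The GC-driven cleanup described above can be illustrated with a minimal, self-contained sketch. The class, field, and method names below are hypothetical stand-ins, not the plugin's actual implementation:

```scala
// Minimal sketch of GC-driven cleanup: the class is deliberately NOT
// AutoCloseable, so a framework that closes AutoCloseable values (as Spark
// does with broadcast results after spilling) cannot free the payload out
// from under us. Cleanup happens only via finalize().
class GcManagedBatch(@transient private var buffer: Array[Byte])
    extends Serializable {

  // Not named close(): nothing outside this class is expected to call it
  // except finalize (kept public here only to make the sketch testable).
  def closeInternal(): Unit = synchronized {
    if (buffer != null) {
      // In the real class this would release host memory.
      buffer = null
    }
  }

  def isClosed: Boolean = synchronized { buffer == null }

  // finalize is deprecated on modern JVMs, but this is how GC-triggered
  // cleanup of the broadcast copies works.
  override def finalize(): Unit = {
    try closeInternal()
    finally super.finalize()
  }
}
```

The key design point is that releasing the batch is never exposed through an interface Spark knows about, so only garbage collection can trigger it.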

Annotations
@SerialVersionUID()
Linear Supertypes
Logging, Serializable, AnyRef, Any

Instance Constructors

  1. new SerializeConcatHostBuffersDeserializeBatch(data: HostConcatResult, output: Seq[Attribute], numRows: Int, dataLen: Long)

    data

HostConcatResult populated for a broadcast that has columns; otherwise null. It is transient because we want the executor to deserialize its data from Spark's torrent-backed input stream.

    output

    used to find the schema for this broadcast batch

    numRows

    number of rows for this broadcast batch

    dataLen

    size in bytes for this broadcast batch

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def batch: SpillableColumnarBatch
  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  7. def closeInternal(): Unit

This method is meant to be called only from finalize; it is not a regular AutoCloseable.close because we do not want Spark to close batchInternal when it spills the broadcast block's host torrent data.

    Reference: https://github.com/NVIDIA/spark-rapids/issues/8602

    Public for tests.

  8. var data: HostConcatResult
  9. var dataLen: Long
  10. def dataSize: Long
  11. def doReadObject(in: ObjectInputStream): Unit

Deserializes a broadcast result on the host into data, numRows and dataLen.

    Public for unit tests.

  12. def doWriteObject(out: ObjectOutputStream): Unit

doWriteObject is invoked both from the driver, when it writes a collected broadcast result to a stream for torrent broadcast to executors, and from the executor, when the MemoryStore evicts a "broadcast_[id]" block to make room in host memory.

    The driver will have data populated on construction and the executor will deserialize the object and, as part of the deserialization, invoke doReadObject. This will populate data before any task has had a chance to call .batch on this class.

If batchInternal is defined we are in the executor, and there is no work to be done: this broadcast has already been materialized in the GPU/RapidsBufferCatalog, and it is completely managed by the plugin.

    Public for unit tests.

    out

    the stream to write to
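The doWriteObject/doReadObject hand-off above follows the standard JVM serialization hooks: a transient payload is written explicitly on serialization and repopulated on deserialization, so the deserialized copy has its data before anyone uses it. A minimal, self-contained sketch of that pattern (the class and field names are hypothetical, and a byte array stands in for HostConcatResult):

```scala
import java.io._

// Sketch of the custom-serialization pattern: `data` is transient, so the
// default machinery skips it; writeObject streams it by hand and readObject
// restores it, mirroring how the broadcast batch travels to executors.
class BroadcastPayload(@transient private var data: Array[Byte])
    extends Serializable {

  var numRows: Int = data.length

  def payload: Array[Byte] = data

  private def writeObject(out: ObjectOutputStream): Unit = {
    out.defaultWriteObject()    // writes the non-transient fields (numRows)
    out.writeInt(data.length)   // then the transient payload explicitly
    out.write(data)
  }

  private def readObject(in: ObjectInputStream): Unit = {
    in.defaultReadObject()      // restores numRows
    val len = in.readInt()
    data = new Array[Byte](len)
    in.readFully(data)          // repopulates the transient payload
  }
}
```

A round trip through ObjectOutputStream/ObjectInputStream yields a copy whose payload is already populated, which is why no task needs to do extra work before calling .batch.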

  13. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  14. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  15. def finalize(): Unit
    Definition Classes
    SerializeConcatHostBuffersDeserializeBatch → AnyRef
    Annotations
    @nowarn()
  16. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  17. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  18. def hostBatch: ColumnarBatch

Creates host columnar batches from either serialized buffers or a device columnar batch. This method can safely be called on both the driver and the executors. For now, it is used on the driver side for reusing GPU broadcast results on the CPU.

NOTE: The caller is responsible for releasing these host columnar batches.
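Because the caller owns the returned batches, a loan-pattern helper in the style of the withResource utilities used throughout the spark-rapids codebase makes the release contract hard to miss. A generic sketch (the helper name here is illustrative):

```scala
// Loan pattern: run `body` with a closeable resource and guarantee it is
// closed afterwards, so a hostBatch-like result cannot leak on exceptions.
object ResourceUtil {
  def withResource[R <: AutoCloseable, A](resource: R)(body: R => A): A = {
    try {
      body(resource)
    } finally {
      resource.close()
    }
  }
}
```

Hypothetical usage: `withResource(serialized.hostBatch) { batch => batch.numRows() }` releases the batch whether or not the body throws.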

  19. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  20. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  21. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  22. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  23. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  24. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  25. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  26. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  27. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  28. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  29. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  30. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  31. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  32. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  33. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  35. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  36. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  37. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  38. var numRows: Int
  39. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  40. def toString(): String
    Definition Classes
    AnyRef → Any
  41. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  42. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  43. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
