
abstract class GpuTextBasedPartitionReader[BUFF <: LineBufferer, FACT <: LineBuffererFactory[BUFF]] extends PartitionReader[ColumnarBatch] with ScanWithMetrics

The base PartitionReader for text-based file formats.

Linear Supertypes
ScanWithMetrics, PartitionReader[ColumnarBatch], Closeable, AutoCloseable, AnyRef, Any

Instance Constructors

  1. new GpuTextBasedPartitionReader(conf: Configuration, partFile: PartitionedFile, dataSchema: StructType, readDataSchema: StructType, lineSeparatorInRead: Option[Array[Byte]], maxRowsPerChunk: Integer, maxBytesPerChunk: Long, execMetrics: Map[String, GpuMetric], bufferFactory: FACT)

    conf

    the Hadoop configuration

    partFile

    file split to read

    dataSchema

    schema of the data

    readDataSchema

    the Spark schema describing what will be read

    lineSeparatorInRead

    an optional line separator, as bytes, to use when reading

    maxRowsPerChunk

    maximum number of rows to read in a batch

    maxBytesPerChunk

    maximum number of bytes to read in a batch

    execMetrics

    metrics to update during read

Abstract Value Members

  1. abstract def castStringToBool(input: ColumnVector): ColumnVector
  2. abstract def dateFormat: Option[String]
  3. abstract def getFileFormatShortName: String

    File format short name used for logging and other purposes to uniquely identify which file format is being used.

    returns

    the file format short name

  4. abstract def readToTable(dataBuffer: BUFF, cudfDataSchema: Schema, readDataSchema: StructType, cudfReadDataSchema: Schema, isFirstChunk: Boolean, decodeTime: GpuMetric): Table

    Read the host buffer into a GPU table.

    dataBuffer

    where the data is buffered

    cudfDataSchema

    the cudf schema of the data

    readDataSchema

    the Spark schema describing what will be read

    cudfReadDataSchema

    the cudf schema of just the data we want to read.

    isFirstChunk

    if it is the first chunk

    returns

    table

  5. abstract def timestampFormat: String
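
Taken together, the abstract members above define the contract a concrete reader must fulfill. The following is a hedged sketch of what a subclass might look like; the class name `GpuMyTextPartitionReader`, the buffer types `MyLineBufferer`/`MyLineBuffererFactory`, the `"MYTEXT"` short name, and the method bodies are illustrative assumptions, not part of the actual spark-rapids API.

```scala
// Illustrative sketch only: the class, buffer types, and bodies show the
// intended shape of each override, not a working implementation.
import org.apache.hadoop.conf.Configuration
import org.apache.spark.sql.execution.datasources.PartitionedFile
import org.apache.spark.sql.types.StructType
import ai.rapids.cudf.{ColumnVector, DType, Schema, Table}

class GpuMyTextPartitionReader(
    conf: Configuration,
    partFile: PartitionedFile,
    dataSchema: StructType,
    readDataSchema: StructType,
    maxRowsPerChunk: Integer,
    maxBytesPerChunk: Long,
    execMetrics: Map[String, GpuMetric],
    bufferFactory: MyLineBuffererFactory)
  extends GpuTextBasedPartitionReader[MyLineBufferer, MyLineBuffererFactory](
    conf, partFile, dataSchema, readDataSchema,
    lineSeparatorInRead = Some(Array('\n'.toByte)),
    maxRowsPerChunk, maxBytesPerChunk, execMetrics, bufferFactory) {

  // Short name reported in logs and metrics for this format.
  override def getFileFormatShortName: String = "MYTEXT"

  // No custom date format in this sketch; a fixed timestamp format.
  override def dateFormat: Option[String] = None
  override def timestampFormat: String = "yyyy-MM-dd HH:mm:ss"

  // Convert a string column to booleans; a real reader would apply the
  // format's own notion of true/false literals before casting.
  override def castStringToBool(input: ColumnVector): ColumnVector =
    input.castTo(DType.BOOL8)

  // Decode the buffered host bytes into a GPU Table, e.g. via one of the
  // cudf Table read APIs, timing the work against decodeTime.
  override def readToTable(
      dataBuffer: MyLineBufferer,
      cudfDataSchema: Schema,
      readDataSchema: StructType,
      cudfReadDataSchema: Schema,
      isFirstChunk: Boolean,
      decodeTime: GpuMetric): Table = {
    ??? // format-specific decode of dataBuffer's contents
  }
}
```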

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def castStringToDate(input: ColumnVector, dt: DType): ColumnVector
  6. def castStringToDecimal(input: ColumnVector, dt: DecimalType): ColumnVector
  7. def castStringToFloat(input: ColumnVector, dt: DType): ColumnVector
  8. def castStringToInt(input: ColumnVector, intType: DType): ColumnVector
  9. def castStringToTimestamp(lhs: ColumnVector, sparkFormat: String, dtype: DType): ColumnVector
  10. def castTableToDesiredTypes(table: Table, readSchema: StructType): Table
  11. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  12. def close(): Unit
    Definition Classes
    GpuTextBasedPartitionReader → Closeable → AutoCloseable
  13. def currentMetricsValues(): Array[CustomTaskMetric]
    Definition Classes
    PartitionReader
  14. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  15. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  16. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  17. def get(): ColumnarBatch
    Definition Classes
    GpuTextBasedPartitionReader → PartitionReader
  18. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  19. def getCudfSchema(dataSchema: StructType): Schema
  20. def handleResult(readDataSchema: StructType, table: Table): Option[Table]

    Handle the table decoded by the GPU.

    Note that this function owns table, which is supposed to be closed within it; as an optimization, however, it may simply return the original table.

    readDataSchema

    the Spark schema describing what will be read

    table

    the table decoded by GPU

    returns

    the new optional Table

  21. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  22. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  23. val metrics: Map[String, GpuMetric]
    Definition Classes
    ScanWithMetrics
  24. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  25. def next(): Boolean
    Definition Classes
    GpuTextBasedPartitionReader → PartitionReader
  26. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  27. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  28. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  29. def toString(): String
    Definition Classes
    AnyRef → Any
  30. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  31. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  32. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
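
Like any Spark PartitionReader, instances of this class are driven through the standard next()/get()/close() protocol. The helper below is an illustrative sketch of that consuming loop (Spark's scan machinery performs the equivalent internally); the function name and row-counting logic are assumptions for demonstration.

```scala
import org.apache.spark.sql.connector.read.PartitionReader
import org.apache.spark.sql.vectorized.ColumnarBatch

// Illustrative helper: drain a reader, counting rows across batches.
// next() returns true while more batches are available, get() returns the
// current batch, and close() releases host buffers and GPU resources.
def consumeAll(reader: PartitionReader[ColumnarBatch]): Long = {
  var totalRows = 0L
  try {
    while (reader.next()) {
      val batch = reader.get()
      totalRows += batch.numRows()
    }
  } finally {
    reader.close()
  }
  totalRows
}
```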
