com.nvidia.spark.rapids

MultiFileCloudParquetPartitionReader

class MultiFileCloudParquetPartitionReader extends MultiFileCloudPartitionReaderBase with ParquetPartitionReaderBase

A PartitionReader that can read multiple Parquet files in parallel. This is most efficient when running in a cloud environment where read I/O is slow.

Efficiently reading a Parquet split on the GPU requires re-constructing, in memory, a Parquet file that contains just the column chunks that are needed. This avoids sending unnecessary data to the GPU and saves GPU memory.
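
The reader is consumed through Spark's standard PartitionReader[ColumnarBatch] interface. A minimal sketch of the calling loop, assuming a reader has already been constructed with the parameters documented below and that the caller owns (and therefore closes) each returned batch:

  import org.apache.spark.sql.connector.read.PartitionReader
  import org.apache.spark.sql.vectorized.ColumnarBatch

  // Drain any PartitionReader[ColumnarBatch], such as this reader, and count rows.
  def consume(reader: PartitionReader[ColumnarBatch]): Long = {
    var totalRows = 0L
    try {
      // next() advances to (and may asynchronously produce) the next batch; get() returns it.
      while (reader.next()) {
        val batch = reader.get()
        totalRows += batch.numRows()
        batch.close() // GPU-backed batches hold device memory and should be closed when done.
      }
    } finally {
      reader.close()
    }
    totalRows
  }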

Linearization
  1. MultiFileCloudParquetPartitionReader
  2. ParquetPartitionReaderBase
  3. MultiFileReaderFunctions
  4. MultiFileCloudPartitionReaderBase
  5. FilePartitionReaderBase
  6. ScanWithMetrics
  7. Logging
  8. PartitionReader
  9. Closeable
  10. AutoCloseable
  11. AnyRef
  12. Any

Instance Constructors

  1. new MultiFileCloudParquetPartitionReader(conf: Configuration, files: Array[PartitionedFile], filterFunc: (PartitionedFile) ⇒ ParquetFileInfoWithBlockMeta, isSchemaCaseSensitive: Boolean, debugDumpPrefix: Option[String], debugDumpAlways: Boolean, maxReadBatchSizeRows: Integer, maxReadBatchSizeBytes: Long, targetBatchSizeBytes: Long, maxGpuColumnSizeBytes: Long, useChunkedReader: Boolean, maxChunkedReaderMemoryUsageSizeBytes: Long, execMetrics: Map[String, GpuMetric], partitionSchema: StructType, numThreads: Int, maxNumFileProcessed: Int, ignoreMissingFiles: Boolean, ignoreCorruptFiles: Boolean, useFieldId: Boolean, alluxioPathReplacementMap: Map[String, String], alluxioReplacementTaskTime: Boolean, queryUsesInputFile: Boolean, keepReadsInOrder: Boolean, combineConf: CombineConf)

    conf

    the Hadoop configuration

    files

    the partitioned files to read

    filterFunc

    a function to filter the necessary blocks from a given file

    isSchemaCaseSensitive

    whether schema is case sensitive

    debugDumpPrefix

    a path prefix to use for dumping the fabricated Parquet data

    debugDumpAlways

    whether to debug dump always or only on errors

    maxReadBatchSizeRows

    soft limit on the maximum number of rows the reader reads per batch

    maxReadBatchSizeBytes

    soft limit on the maximum number of bytes the reader reads per batch

    targetBatchSizeBytes

    the target size of the batch

    maxGpuColumnSizeBytes

    the maximum size of a GPU column

    useChunkedReader

    whether to read Parquet by chunks or read all at once

    maxChunkedReaderMemoryUsageSizeBytes

    soft limit on the number of bytes of internal memory usage that the reader will use

    execMetrics

    metrics

    partitionSchema

    Schema of partitions.

    numThreads

    the size of the threadpool

    maxNumFileProcessed

    the maximum number of files to read on the CPU side and hold while waiting to be processed on the GPU. This affects the amount of host memory used.

    ignoreMissingFiles

    Whether to ignore missing files

    ignoreCorruptFiles

    Whether to ignore corrupt files

    useFieldId

    Whether to use field id for column matching

    alluxioPathReplacementMap

    Map containing mapping of DFS scheme to Alluxio scheme

    alluxioReplacementTaskTime

    Whether the Alluxio replacement algorithm is set to task time

    queryUsesInputFile

    Whether the query requires the input file name functionality

    keepReadsInOrder

    Whether to require the files to be read in the same order as Spark would read them. Defaults to true for formats that don't explicitly handle this.

    combineConf

    configs relevant to combination
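
Several of these parameters (numThreads, maxNumFileProcessed, keepReadsInOrder) describe the same producer/consumer pattern: per-file read work is submitted as Callables to a shared thread pool, and the resulting host-memory buffers are consumed later, optionally in submission order. A minimal sketch of that general pattern is shown below; FileReadResult and readFileToHostMemory are hypothetical stand-ins, not the plugin's API.

  import java.util.concurrent.{Callable, Executors, Future}

  // Hypothetical stand-in for HostMemoryBuffersWithMetaDataBase.
  case class FileReadResult(path: String, bytesRead: Long)

  // Hypothetical CPU-side read; the real reader copies only the needed column chunks.
  def readFileToHostMemory(path: String): FileReadResult =
    FileReadResult(path, bytesRead = 0L)

  def readAllFiles(paths: Seq[String], numThreads: Int): Seq[FileReadResult] = {
    val pool = Executors.newFixedThreadPool(numThreads)
    try {
      // One Callable per file; the reads run in parallel on the pool.
      // A real reader would also cap outstanding work (maxNumFileProcessed).
      val futures: Seq[Future[FileReadResult]] = paths.map { p =>
        pool.submit(new Callable[FileReadResult] {
          override def call(): FileReadResult = readFileToHostMemory(p)
        })
      }
      // Consuming futures in submission order corresponds to keepReadsInOrder = true.
      futures.map(_.get())
    } finally {
      pool.shutdown()
    }
  }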

Type Members

  1. case class HostMemoryBuffersWithMetaData(partitionedFile: PartitionedFile, origPartitionedFile: Option[PartitionedFile], memBuffersAndSizes: Array[SingleHMBAndMeta], bytesRead: Long, dateRebaseMode: DateTimeRebaseMode, timestampRebaseMode: DateTimeRebaseMode, hasInt96Timestamps: Boolean, clippedSchema: MessageType, readSchema: StructType, allPartValues: Option[Array[(Long, InternalRow)]]) extends HostMemoryBuffersWithMetaDataBase with Product with Serializable

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. val PARQUET_META_SIZE: Long
    Definition Classes
    ParquetPartitionReaderBase
  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. var batchIter: Iterator[ColumnarBatch]
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  7. def calculateExtraMemoryForParquetFooter(numCols: Int, numBlocks: Int): Int

    Calculate the amount of extra memory needed when combining multiple files together.

    We want extra memory because the ColumnChunks saved in the footer have two fields, file_offset and data_page_offset, that can become much larger when files are combined. We estimate this as the number of columns times the number of blocks, which should be the number of column chunks, and assume each of the two fields could grow by at most 8 bytes in the worst case. This probably over-allocates, but not by much, and it is better than having to reallocate and copy. (A sketch of this arithmetic appears after the member list.)

    numCols

    the number of columns

    numBlocks

    the total number of blocks to be combined

    returns

    amount of extra memory to allocate

    Definition Classes
    ParquetPartitionReaderBase
  8. def calculateParquetFooterSize(currentChunkedBlocks: Seq[BlockMetaData], schema: MessageType): Long
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
    Annotations
    @nowarn()
  9. def calculateParquetOutputSize(currentChunkedBlocks: Seq[BlockMetaData], schema: MessageType, handleCoalesceFiles: Boolean): Long
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  10. def canUseCombine: Boolean
  11. def checkIfNeedToSplit(current: HostMemoryBuffersWithMetaData, next: HostMemoryBuffersWithMetaData): Boolean
  12. def checkIfNeedToSplitBlocks(currentDateRebaseMode: DateTimeRebaseMode, nextDateRebaseMode: DateTimeRebaseMode, currentTimestampRebaseMode: DateTimeRebaseMode, nextTimestampRebaseMode: DateTimeRebaseMode, currentSchema: SchemaBase, nextSchema: SchemaBase, currentFilePath: String, nextFilePath: String): Boolean
    Definition Classes
    ParquetPartitionReaderBase
  13. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  14. def close(): Unit
    Definition Classes
    MultiFileCloudPartitionReaderBase → FilePartitionReaderBase → Closeable → AutoCloseable
  15. def combineHMBs(input: Array[HostMemoryBuffersWithMetaDataBase]): HostMemoryBuffersWithMetaDataBase
  16. var combineLeftOverFiles: Option[Array[HostMemoryBuffersWithMetaDataBase]]
    Attributes
    protected
    Definition Classes
    MultiFileCloudPartitionReaderBase
  17. def computeBlockMetaData(blocks: Seq[BlockMetaData], realStartOffset: Long): Seq[BlockMetaData]

    Computes new block metadata to reflect where the blocks and columns will appear in the computed Parquet file.

    blocks

    block metadata from the original file(s) that will appear in the computed file

    realStartOffset

    starting file offset of the first block

    returns

    updated block metadata

    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
    Annotations
    @nowarn()
  18. val conf: Configuration
  19. def copyBlocksData(filePath: Path, out: HostMemoryOutputStream, blocks: Seq[BlockMetaData], realStartOffset: Long, metrics: Map[String, GpuMetric]): Seq[BlockMetaData]

    Copies the data corresponding to the clipped blocks in the original file and computes the block metadata for the output. The output blocks will contain the same column chunk metadata but with the file offsets updated to reflect the new position of the column data as written to the output.

    out

    the output stream to receive the data

    blocks

    block metadata from the original file that will appear in the computed file

    realStartOffset

    starting file offset of the first block

    returns

    updated block metadata corresponding to the output

    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  20. val copyBufferSize: Int
    Definition Classes
    ParquetPartitionReaderBase
  21. def copyDataRange(range: CopyRange, in: FSDataInputStream, out: HostMemoryOutputStream, copyBuffer: Array[Byte]): Long
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  22. var currentFileHostBuffers: Option[HostMemoryBuffersWithMetaDataBase]
    Attributes
    protected
    Definition Classes
    MultiFileCloudPartitionReaderBase
  23. def currentMetricsValues(): Array[CustomTaskMetric]
    Definition Classes
    PartitionReader
  24. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  25. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  26. val execMetrics: Map[String, GpuMetric]
  27. def fileSystemBytesRead(): Long
    Attributes
    protected
    Definition Classes
    MultiFileReaderFunctions
    Annotations
    @nowarn()
  28. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  29. def get(): ColumnarBatch
    Definition Classes
    FilePartitionReaderBase → PartitionReader
  30. def getBatchRunner(tc: TaskContext, file: PartitionedFile, origFile: Option[PartitionedFile], conf: Configuration, filters: Array[Filter]): Callable[HostMemoryBuffersWithMetaDataBase]

    File reading logic in a Callable that will run in a thread pool.

    tc

    task context to use

    file

    file to be read

    origFile

    optional original unmodified file if replaced with Alluxio

    conf

    configuration

    filters

    push down filters

    returns

    Callable[HostMemoryBuffersWithMetaDataBase]

    Definition Classes
    MultiFileCloudParquetPartitionReader → MultiFileCloudPartitionReaderBase
  31. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  32. final def getFileFormatShortName: String

    File format short name used for logging and other things to uniquely identify which file format is being used.

    returns

    the file format short name

    Definition Classes
    MultiFileCloudParquetPartitionReader → MultiFileCloudPartitionReaderBase
  33. def getParquetOptions(readDataSchema: StructType, clippedSchema: MessageType, useFieldId: Boolean): ParquetOptions
    Definition Classes
    ParquetPartitionReaderBase
  34. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  35. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  36. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. var isDone: Boolean
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  38. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  39. val isSchemaCaseSensitive: Boolean
  40. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  41. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  42. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  43. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  44. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  45. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  46. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  47. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  48. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  49. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  50. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  51. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  52. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  53. val metrics: Map[String, GpuMetric]
    Definition Classes
    ScanWithMetrics
  54. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  55. def next(): Boolean
    Definition Classes
    MultiFileCloudPartitionReaderBase → PartitionReader
  56. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  57. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  58. def populateCurrentBlockChunk(blockIter: BufferedIterator[BlockMetaData], maxReadBatchSizeRows: Int, maxReadBatchSizeBytes: Long, readDataSchema: StructType): Seq[BlockMetaData]
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  59. def readBatches(fileBufsAndMeta: HostMemoryBuffersWithMetaDataBase): Iterator[ColumnarBatch]

    Decode HostMemoryBuffers on the GPU.

    fileBufsAndMeta

    the file HostMemoryBuffer read from a PartitionedFile

    returns

    Iterator[ColumnarBatch]

    Definition Classes
    MultiFileCloudParquetPartitionReader → MultiFileCloudPartitionReaderBase
  60. def readPartFile(blocks: Seq[BlockMetaData], clippedSchema: MessageType, filePath: Path): (HostMemoryBuffer, Long, Seq[BlockMetaData])
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  61. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  62. implicit def toBlockMetaData(block: DataBlockBase): BlockMetaData

    Conversions used by the multithreaded reader and the coalescing reader.

    Definition Classes
    ParquetPartitionReaderBase
  63. implicit def toBlockMetaDataSeq(blocks: Seq[DataBlockBase]): Seq[BlockMetaData]
    Definition Classes
    ParquetPartitionReaderBase
  64. def toCudfColumnNames(readDataSchema: StructType, fileSchema: MessageType, isCaseSensitive: Boolean, useFieldId: Boolean): Seq[String]

    Takes case sensitivity into consideration when getting the data reading column names before sending the parquet-formatted buffer to cudf. Also clips the column names if useFieldId is true.

    readDataSchema

    Spark schema to read

    fileSchema

    the schema of the dumped parquet-formatted buffer, with unmatched columns already removed

    isCaseSensitive

    whether the column name matching is case sensitive

    useFieldId

    whether spark.sql.parquet.fieldId.read.enabled is enabled

    returns

    a sequence of column names following the order of readDataSchema

    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  65. implicit def toDataBlockBase(blocks: Seq[BlockMetaData]): Seq[DataBlockBase]
    Definition Classes
    ParquetPartitionReaderBase
  66. def toString(): String
    Definition Classes
    AnyRef → Any
  67. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  68. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  69. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  70. def writeFooter(out: OutputStream, blocks: Seq[BlockMetaData], schema: MessageType): Unit
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
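
As a worked illustration of the footer-memory estimate described for calculateExtraMemoryForParquetFooter above, here is a sketch of the arithmetic only, not the plugin's implementation; estimateExtraFooterBytes is a hypothetical name.

  // Worst-case extra footer bytes when combining files: each column chunk has two
  // offset fields (file_offset and data_page_offset) that may each grow by up to
  // 8 bytes once the combined offsets become larger.
  def estimateExtraFooterBytes(numCols: Int, numBlocks: Int): Int = {
    val numColumnChunks = numCols * numBlocks
    val offsetFieldsPerChunk = 2
    val worstCaseBytesPerField = 8
    numColumnChunks * offsetFieldsPerChunk * worstCaseBytesPerField
  }

  // Example: 50 columns across 20 combined row groups -> 50 * 20 * 2 * 8 = 16000 extra bytes.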

Inherited from MultiFileReaderFunctions

Inherited from FilePartitionReaderBase

Inherited from ScanWithMetrics

Inherited from Logging

Inherited from PartitionReader[ColumnarBatch]

Inherited from Closeable

Inherited from AutoCloseable

Inherited from AnyRef

Inherited from Any
