org.apache.spark.sql.delta.commands

OptimizeExecutor

class OptimizeExecutor extends DeltaCommand with SQLMetricsReporting with Serializable

Optimize job which compacts small files into larger files to reduce the number of files and potentially allow more efficient reads.
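The compaction step can be pictured as a bin-packing pass: small files are grouped into bins up to a target output size, and each bin is rewritten as one larger file. The sketch below is a hypothetical, simplified illustration of that idea; `FileInfo`, `binPack`, and the greedy strategy are made up for illustration and are not the actual Delta implementation.

```scala
// Hypothetical sketch (not the actual Delta implementation) of the idea
// behind compaction: greedily pack small files into bins of at most
// `targetSize` bytes, then rewrite each bin as a single larger file.
case class FileInfo(path: String, size: Long)

def binPack(files: Seq[FileInfo], targetSize: Long): Seq[Seq[FileInfo]] = {
  val bins = Seq.newBuilder[Seq[FileInfo]]
  var current = Vector.empty[FileInfo]
  var currentSize = 0L
  for (f <- files.sortBy(_.size)) {
    if (current.nonEmpty && currentSize + f.size > targetSize) {
      bins += current // bin is full: emit it and start a new one
      current = Vector.empty
      currentSize = 0L
    }
    current :+= f
    currentSize += f.size
  }
  if (current.nonEmpty) bins += current
  bins.result()
}
```

Fewer, larger files mean fewer tasks and less per-file open/close overhead at read time, which is the payoff the class summary refers to.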


Instance Constructors

  1. new OptimizeExecutor(sparkSession: SparkSession, deltaLog: DeltaLog, partitionPredicate: Seq[Expression], zOrderByColumns: Seq[String])

    sparkSession

    Spark environment reference.

    deltaLog

    Delta table that is being optimized.

    partitionPredicate

    List of partition predicates used to select the subset of files to optimize.
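As a rough illustration of how a partition predicate narrows the scope of the job, the hypothetical snippet below filters candidate files by their partition values. `PartFile` and `selectFiles` are made-up names; the real executor evaluates Catalyst `Expression`s against the partition values of `AddFile` entries.

```scala
// Made-up illustration: a partition predicate restricts OPTIMIZE to the
// files whose partition values satisfy it.
case class PartFile(path: String, partitionValues: Map[String, String])

def selectFiles(files: Seq[PartFile], predicate: Map[String, String] => Boolean): Seq[PartFile] =
  files.filter(f => predicate(f.partitionValues))

val allFiles = Seq(
  PartFile("date=2024-01-01/part-0.parquet", Map("date" -> "2024-01-01")),
  PartFile("date=2024-01-02/part-1.parquet", Map("date" -> "2024-01-02"))
)
// Analogous to: OPTIMIZE tbl WHERE date = '2024-01-01'
val selected = selectFiles(allFiles, _.get("date").contains("2024-01-01"))
```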

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##: Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def buildBaseRelation(spark: SparkSession, txn: OptimisticTransaction, actionType: String, rootPath: Path, inputLeafFiles: Seq[String], nameToAddFileMap: Map[String, AddFile]): HadoopFsRelation

    Build a base relation of files that need to be rewritten as part of an update/delete/merge operation.

    Attributes
    protected
    Definition Classes
    DeltaCommand
  6. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.CloneNotSupportedException]) @native()
  7. def commitLarge(spark: SparkSession, txn: OptimisticTransaction, actions: Iterator[Action], op: Operation, context: Map[String, String], metrics: Map[String, String]): Long

    Create a large commit on the Delta log by directly writing an iterator of FileActions to the LogStore. This function only commits the next possible version and will not check whether the commit is retry-able. If the next version has already been committed, then this function will fail. This bypasses all optimistic concurrency checks. We assume that transaction conflicts should be rare because this method is typically used to create new tables (e.g. CONVERT TO DELTA) or apply some commands which rarely receive other transactions (e.g. CLONE/RESTORE).

    Attributes
    protected
    Definition Classes
    DeltaCommand
  8. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  9. def equals(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.Throwable])
  11. def generateCandidateFileMap(basePath: Path, candidateFiles: Seq[AddFile]): Map[String, AddFile]

    Generates a map of file names to add file entries for operations where we will need to rewrite files such as delete, merge, update. We expect file names to be unique, because each file contains a UUID.

    Attributes
    protected
    Definition Classes
    DeltaCommand
  12. final def getClass(): Class[_ <: AnyRef]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  13. def getDeltaLog(spark: SparkSession, path: Option[String], tableIdentifier: Option[TableIdentifier], operationName: String): DeltaLog

    Utility method to return the DeltaLog of an existing Delta table, referred to by either the given path or the given table identifier.

    spark

    SparkSession reference to use.

    path

    Table location. Expects a non-empty tableIdentifier or path.

    tableIdentifier

    Table identifier. Expects a non-empty tableIdentifier or path.

    operationName

    Operation that is getting the DeltaLog, used in error messages.

    returns

    DeltaLog of the table

    Attributes
    protected
    Definition Classes
    DeltaCommand
    Exceptions thrown

    AnalysisException If no Delta table exists at the given path/identifier, or if neither path nor tableIdentifier is provided.

  14. def getMetric(name: String): Option[SQLMetric]

    Returns the metric with name registered for the given transaction if it exists.

    Definition Classes
    SQLMetricsReporting
  15. def getMetricsForOperation(operation: Operation): Map[String, String]

    Get the metrics for an operation from the collected SQL metrics, filtering them according to the metric parameters for that operation.

    Definition Classes
    SQLMetricsReporting
  16. def getTouchedFile(basePath: Path, filePath: String, nameToAddFileMap: Map[String, AddFile]): AddFile

    Find the AddFile record corresponding to the file that was read as part of a delete/update/merge operation.

    filePath

    The path to a file. Can be either absolute or relative.

    nameToAddFileMap

    Map generated through generateCandidateFileMap()

    Attributes
    protected
    Definition Classes
    DeltaCommand
  17. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  18. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  19. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  20. def isCatalogTable(analyzer: Analyzer, tableIdent: TableIdentifier): Boolean

    Use the analyzer to determine whether the provided TableIdentifier refers to a path-based table.

    analyzer

    The session state analyzer to call

    tableIdent

    Table identifier to check for being path-based.

    returns

    true if the table is defined in a metastore, false if it is a path-based table.

    Definition Classes
    DeltaCommand
  21. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  22. def isPathIdentifier(tableIdent: TableIdentifier): Boolean

    Checks whether the given identifier can refer to a Delta table's path.

    tableIdent

    Table identifier to check.

    Attributes
    protected
    Definition Classes
    DeltaCommand
  23. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  24. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  25. def logConsole(line: String): Unit
    Definition Classes
    DatabricksLogging
  26. def logDebug(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  27. def logDebug(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  28. def logError(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  29. def logError(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  30. def logInfo(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  31. def logInfo(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  32. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  33. def logTrace(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. def logTrace(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  35. def logWarning(msg: => String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  36. def logWarning(msg: => String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  37. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  38. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  39. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  40. def optimize(): Seq[Row]
  41. def parsePredicates(spark: SparkSession, predicate: String): Seq[Expression]

    Converts string predicates into Expressions relative to a transaction.

    Attributes
    protected
    Definition Classes
    DeltaCommand
    Exceptions thrown

    AnalysisException if a non-partition column is referenced.

  42. def recordDeltaEvent(deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty, data: AnyRef = null, path: Option[Path] = None): Unit

    Used to record the occurrence of a single event or to report detailed, operation-specific statistics.

    path

    Used to log the path of the delta table when deltaLog is null.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  43. def recordDeltaOperation[A](deltaLog: DeltaLog, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: => A): A

    Used to report the duration as well as the success or failure of an operation on a deltaLog.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  44. def recordDeltaOperationForTablePath[A](tablePath: String, opType: String, tags: Map[TagDefinition, String] = Map.empty)(thunk: => A): A

    Used to report the duration as well as the success or failure of an operation on a tahoePath.

    Attributes
    protected
    Definition Classes
    DeltaLogging
  45. def recordEvent(metric: MetricDefinition, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  46. def recordFrameProfile[T](group: String, name: String)(thunk: => T): T
    Attributes
    protected
    Definition Classes
    DeltaLogging
  47. def recordOperation[S](opType: OpType, opTarget: String = null, extraTags: Map[TagDefinition, String], isSynchronous: Boolean = true, alwaysRecordStats: Boolean = false, allowAuthTags: Boolean = false, killJvmIfStuck: Boolean = false, outputMetric: MetricDefinition = null, silent: Boolean = true)(thunk: => S): S
    Definition Classes
    DatabricksLogging
  48. def recordProductEvent(metric: MetricDefinition with CentralizableMetric, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, trimBlob: Boolean = true): Unit
    Definition Classes
    DatabricksLogging
  49. def recordProductUsage(metric: MetricDefinition with CentralizableMetric, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  50. def recordUsage(metric: MetricDefinition, quantity: Double, additionalTags: Map[TagDefinition, String] = Map.empty, blob: String = null, forceSample: Boolean = false, trimBlob: Boolean = true, silent: Boolean = false): Unit
    Definition Classes
    DatabricksLogging
  51. def registerSQLMetrics(spark: SparkSession, metrics: Map[String, SQLMetric]): Unit

    Register SQL metrics for an operation by appending the supplied metrics map to the operationSQLMetrics map.

    Definition Classes
    SQLMetricsReporting
  52. def removeFilesFromPaths(deltaLog: DeltaLog, nameToAddFileMap: Map[String, AddFile], filesToRewrite: Seq[String], operationTimestamp: Long): Seq[RemoveFile]

    Provides the RemoveFile actions needed for files that are touched and must be rewritten in operations such as DELETE, UPDATE, and MERGE.

    deltaLog

    The DeltaLog of the table that is being operated on

    nameToAddFileMap

    A map generated using generateCandidateFileMap.

    filesToRewrite

    Absolute paths of the files that were touched. We will search for these in candidateFiles. Obtained as the output of the input_file_name function.

    operationTimestamp

    The timestamp of the operation

    Attributes
    protected
    Definition Classes
    DeltaCommand
  53. def resolveIdentifier(analyzer: Analyzer, identifier: TableIdentifier): LogicalPlan

    Use the analyzer to resolve the provided identifier.

    analyzer

    The session state analyzer to call

    identifier

    Table identifier to resolve.

    Attributes
    protected
    Definition Classes
    DeltaCommand
  54. final def synchronized[T0](arg0: => T0): T0
    Definition Classes
    AnyRef
  55. def toString(): String
    Definition Classes
    AnyRef → Any
  56. def updateAndCheckpoint(spark: SparkSession, deltaLog: DeltaLog, commitSize: Int, attemptVersion: Long, txnId: String): Snapshot

    Update the table now that the commit has been made, and write a checkpoint.

    Attributes
    protected
    Definition Classes
    DeltaCommand
  57. def verifyPartitionPredicates(spark: SparkSession, partitionColumns: Seq[String], predicates: Seq[Expression]): Unit
    Definition Classes
    DeltaCommand
  58. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  59. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException])
  60. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws(classOf[java.lang.InterruptedException]) @native()
  61. def withDmqTag[T](thunk: => T): T
    Attributes
    protected
    Definition Classes
    DeltaLogging
  62. def withStatusCode[T](statusCode: String, defaultMessage: String, data: Map[String, Any] = Map.empty)(body: => T): T

    Logs a message to indicate that some command is running.

    Definition Classes
    DeltaProgressReporter
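
Several of the DeltaCommand helpers above cooperate: generateCandidateFileMap indexes candidate AddFile entries by absolute path, and getTouchedFile resolves a touched path (absolute or relative to the table root) against that index. The sketch below is a simplified, hypothetical illustration of that pairing; `AddFileStub`, `candidateFileMap`, and `touchedFile` are stand-in names, and the path handling is deliberately naive compared to the real Hadoop Path-based logic.

```scala
// Simplified sketch of the generateCandidateFileMap / getTouchedFile pairing.
// AddFileStub stands in for Delta's AddFile action.
case class AddFileStub(path: String)

// Index candidates by absolute path; file names are assumed unique
// because each contains a UUID.
def candidateFileMap(basePath: String, candidates: Seq[AddFileStub]): Map[String, AddFileStub] =
  candidates.map(f => s"$basePath/${f.path}" -> f).toMap

// Resolve a touched path, absolute or relative to the table root,
// against the candidate index.
def touchedFile(basePath: String, filePath: String, nameToAddFile: Map[String, AddFileStub]): AddFileStub = {
  val absolute = if (filePath.startsWith("/")) filePath else s"$basePath/$filePath"
  nameToAddFile.getOrElse(absolute, sys.error(s"$absolute is not among the candidate files"))
}
```

removeFilesFromPaths then uses the same index to turn the resolved AddFile entries into RemoveFile actions for the commit.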

Inherited from Serializable

Inherited from SQLMetricsReporting

Inherited from DeltaCommand

Inherited from DeltaLogging

Inherited from DatabricksLogging

Inherited from DeltaProgressReporter

Inherited from Logging

Inherited from AnyRef

Inherited from Any
