case class TickTockHiveTableDataObject(id: DataObjectId, path: Option[String] = None, partitions: Seq[String] = Seq(), analyzeTableAfterWrite: Boolean = false, dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, expectedPartitionsCondition: Option[String] = None, housekeepingMode: Option[HousekeepingMode] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends TransactionalSparkTableDataObject with CanHandlePartitions with Product with Serializable
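
A minimal construction sketch in Scala (an illustration only: the import paths, the shape of Table, and the no-arg InstanceRegistry constructor are assumptions, not taken from this page; only id and table lack defaults in the signature above):

    import io.smartdatalake.config.InstanceRegistry
    import io.smartdatalake.config.SdlConfigObject.DataObjectId
    import io.smartdatalake.workflow.dataobject.{Table, TickTockHiveTableDataObject}

    // assumption: InstanceRegistry can be created with a no-arg constructor
    implicit val registry: InstanceRegistry = new InstanceRegistry()

    // assumption: Table carries an optional database and a table name
    val targetTable = Table(db = Some("default"), name = "tick_tock_example")

    // only id and table are mandatory; all other parameters keep their defaults
    val dataObject = TickTockHiveTableDataObject(
      id = DataObjectId("tick-tock-example"),
      path = Some("/data/tick_tock_example"),
      partitions = Seq("dt"),
      table = targetTable
    )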

Linear Supertypes
Serializable, Serializable, Product, Equals, CanHandlePartitions, TransactionalSparkTableDataObject, CanWriteDataFrame, TableDataObject, SchemaValidation, CanCreateDataFrame, DataObject, AtlasExportable, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new TickTockHiveTableDataObject(id: DataObjectId, path: Option[String] = None, partitions: Seq[String] = Seq(), analyzeTableAfterWrite: Boolean = false, dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, expectedPartitionsCondition: Option[String] = None, housekeepingMode: Option[HousekeepingMode] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. val acl: Option[AclDef]
  5. def addFieldIfNotExisting(writeSchema: StructType, colName: String, dataType: DataType): StructType
    Attributes
    protected
    Definition Classes
    CanCreateDataFrame
  6. val analyzeTableAfterWrite: Boolean
  7. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  8. def atlasName: String
    Definition Classes
    TableDataObject → DataObject → AtlasExportable
  9. def atlasQualifiedName(prefix: String): String
    Definition Classes
    TableDataObject → AtlasExportable
  10. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @HotSpotIntrinsicCandidate()
  11. val connectionId: Option[ConnectionId]
  12. def createEmptyPartition(partitionValues: PartitionValues)(implicit context: ActionPipelineContext): Unit

    Create an empty partition.

    Definition Classes
    TickTockHiveTableDataObject → CanHandlePartitions
  13. def createReadSchema(writeSchema: StructType)(implicit context: ActionPipelineContext): StructType

    Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove and add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.

    Definition Classes
    CanCreateDataFrame
    Annotations
    @Scaladoc()
  14. val dateColumnType: DateColumnType
  15. def deletePartitions(partitionValues: Seq[PartitionValues])(implicit context: ActionPipelineContext): Unit

    Delete given partitions. This is used to clean up partitions by housekeeping. Note: this is optional to implement.

    Definition Classes
    TickTockHiveTableDataObject → CanHandlePartitions
  16. def dropTable(implicit context: ActionPipelineContext): Unit
    Definition Classes
    TickTockHiveTableDataObject → TableDataObject
  17. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  18. val expectedPartitionsCondition: Option[String]

    Definition of partitions that are expected to exist. This is used to validate that partitions being read exist and do not return empty data. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Example: "elements['yourColName'] > 2017"

    returns

    true if the partition is expected to exist.

    Definition Classes
    TickTockHiveTableDataObject → CanHandlePartitions
  19. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory (see the companion-object sketch after this member list).

    returns

    the factory (object) for this class.

    Definition Classes
    TickTockHiveTableDataObject → ParsableFromConfig
  20. def filesystem(implicit context: ActionPipelineContext): FileSystem
  21. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  22. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handles class cast exceptions when getting objects from the instance registry.

    Attributes
    protected
    Definition Classes
    DataObject
    Annotations
    @Scaladoc()
  23. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T
    Attributes
    protected
    Definition Classes
    DataObject
  24. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit context: ActionPipelineContext): DataFrame
    Definition Classes
    TickTockHiveTableDataObject → CanCreateDataFrame
  25. def getPKduplicates(implicit context: ActionPipelineContext): DataFrame
    Definition Classes
    TableDataObject
  26. def getPKnulls(implicit context: ActionPipelineContext): DataFrame
    Definition Classes
    TableDataObject
  27. def getPKviolators(implicit context: ActionPipelineContext): DataFrame
    Definition Classes
    TableDataObject
  28. def hadoopPath(implicit context: ActionPipelineContext): Path
  29. val housekeepingMode: Option[HousekeepingMode]

    Configure a housekeeping mode to e.g. clean up, archive and compact partitions. Default is None.

    Definition Classes
    TickTockHiveTableDataObject → DataObject
  30. val id: DataObjectId

    A unique identifier for this instance.

    Definition Classes
    TickTockHiveTableDataObject → DataObject → SdlConfigObject
  31. def init(df: DataFrame, partitionValues: Seq[PartitionValues], saveModeOptions: Option[SaveModeOptions] = None)(implicit context: ActionPipelineContext): Unit

    Called during init phase for checks and initialization. If possible, don't change the system until the execution phase.

    Definition Classes
    TickTockHiveTableDataObject → CanWriteDataFrame
  32. implicit val instanceRegistry: InstanceRegistry
  33. def isDbExisting(implicit context: ActionPipelineContext): Boolean
    Definition Classes
    TickTockHiveTableDataObject → TableDataObject
  34. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  35. def isPKcandidateKey(implicit context: ActionPipelineContext): Boolean
    Definition Classes
    TableDataObject
  36. def isTableExisting(implicit context: ActionPipelineContext): Boolean
    Definition Classes
    TickTockHiveTableDataObject → TableDataObject
  37. def listPartitions(implicit context: ActionPipelineContext): Seq[PartitionValues]

    List Hive table partitions.

    Definition Classes
    TickTockHiveTableDataObject → CanHandlePartitions
    Annotations
    @Scaladoc()
  38. lazy val logger: Logger
    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
    Annotations
    @transient()
  39. val metadata: Option[DataObjectMetadata]

    Additional metadata for the DataObject

    Definition Classes
    TickTockHiveTableDataObject → DataObject
  40. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  41. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  42. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  43. val numInitialHdfsPartitions: Int
  44. val partitions: Seq[String]

    Definition of partition columns

    Definition Classes
    TickTockHiveTableDataObject → CanHandlePartitions
  45. val path: Option[String]
  46. def preWrite(implicit context: ActionPipelineContext): Unit

    Runs operations before writing to the DataObject. Note: As the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.

    Definition Classes
    TickTockHiveTableDataObject → DataObject
  47. def prepare(implicit context: ActionPipelineContext): Unit

    Prepare and test the DataObject's prerequisites.

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    TickTockHiveTableDataObject → DataObject
  48. val saveMode: SDLSaveMode
  49. val schemaMin: Option[StructType]

    An optional, minimal schema that a DataObject schema must have to pass schema validation.

    The schema validation semantics are: Schema A is valid with respect to a minimal schema B when B is a subset of A. This means: the whole column set of B is contained in the column set of A.

    • A column of B is contained in A when A contains a column with equal name and data type.
    • Column order is ignored.
    • Column nullability is ignored.
    • Duplicate columns in terms of name and data type are eliminated (set semantics).

    Note: This is mainly used by the functionality defined in CanCreateDataFrame and CanWriteDataFrame, that is, when reading or writing Spark data frames from/to the underlying data container. io.smartdatalake.workflow.action.Actions that work with files ignore the schemaMin attribute if it is defined. Additionally, schemaMin can be used to define the schema used if there is no data or the table doesn't yet exist. A small example of these semantics is sketched after this member list.

    Definition Classes
    TickTockHiveTableDataObject → SchemaValidation
  50. def streamingOptions: Map[String, String]
    Definition Classes
    CanWriteDataFrame
  51. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  52. var table: Table
    Definition Classes
    TickTockHiveTableDataObject → TableDataObject
  53. val tableSchema: StructType
    Definition Classes
    TableDataObject
  54. def toStringShort: String
    Definition Classes
    DataObject
  55. def validateSchema(df: DataFrame, schemaExpected: StructType, role: String): Unit

    Validate the schema of a given Spark DataFrame df against a given expected schema.

    df

    The data frame to validate.

    schemaExpected

    The expected schema to validate against.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Annotations
    @Scaladoc()
    Exceptions thrown

    SchemaViolationException if the schema does not validate.

  56. def validateSchemaHasPartitionCols(df: DataFrame, role: String): Unit

    Validate the schema of a given Spark DataFrame df to verify that it contains the specified partition columns.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Annotations
    @Scaladoc()
    Exceptions thrown

    SchemaViolationException if the partition columns are not included.

  57. def validateSchemaHasPrimaryKeyCols(df: DataFrame, primaryKeyCols: Seq[String], role: String): Unit

    Validate the schema of a given Spark DataFrame df to verify that it contains the specified primary key columns.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Annotations
    @Scaladoc()
    Exceptions thrown

    SchemaViolationException if the primary key columns are not included.

  58. def validateSchemaMin(df: DataFrame, role: String): Unit

    Validate the schema of a given Spark DataFrame df against schemaMin.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Annotations
    @Scaladoc()
    Exceptions thrown

    SchemaViolationException if the schemaMin does not validate.

  59. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  60. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  61. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  62. def writeDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues] = Seq(), isRecursiveInput: Boolean = false, saveModeOptions: Option[SaveModeOptions] = None)(implicit context: ActionPipelineContext): Unit

    Write a DataFrame to the DataObject.

    df

    the DataFrame to write

    partitionValues

    partition values included in the DataFrame's data

    isRecursiveInput

    if DataFrame needs this DataObject as input - special treatment might be needed in this case.

    Definition Classes
    TickTockHiveTableDataObject → CanWriteDataFrame
  63. def writeDataFrameInternal(df: DataFrame, createTableOnly: Boolean, partitionValues: Seq[PartitionValues], isRecursiveInput: Boolean, saveModeOptions: Option[SaveModeOptions])(implicit context: ActionPipelineContext): Unit

    Writes the DataFrame to HDFS/Parquet and creates the Hive table. DataFrames are repartitioned in order not to write too many small files, or only a few HDFS files that are too large.

    Annotations
    @Scaladoc()
  64. def writeStreamingDataFrame(df: DataFrame, trigger: Trigger, options: Map[String, String], checkpointLocation: String, queryName: String, outputMode: OutputMode = OutputMode.Append, saveModeOptions: Option[SaveModeOptions] = None)(implicit context: ActionPipelineContext): StreamingQuery

    Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (e.g. Kafka). A hedged sketch of the foreachBatch pattern follows after this member list.

    df

    The Streaming DataFrame to write

    trigger

    Trigger frequency for stream

    checkpointLocation

    location for checkpoints of streaming query

    Definition Classes
    CanWriteDataFrame
    Annotations
    @Scaladoc()
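
The companion-object pattern described for the factory member above can be illustrated with a self-contained toy model; FromConfigFactory and ParsableFromConfig below are simplified stand-ins defined on the spot, not the real io.smartdatalake traits:

    // simplified stand-ins for the real config-parsing traits (assumed shape)
    trait FromConfigFactory[+CO] { def fromConfig(config: Map[String, String]): CO }
    trait ParsableFromConfig[+CO] { def factory: FromConfigFactory[CO] }

    case class MyDataObject(id: String) extends ParsableFromConfig[MyDataObject] {
      // factory returns the companion object, which implements FromConfigFactory
      override def factory: FromConfigFactory[MyDataObject] = MyDataObject
    }

    object MyDataObject extends FromConfigFactory[MyDataObject] {
      override def fromConfig(config: Map[String, String]): MyDataObject =
        MyDataObject(config("id"))
    }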
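
To make the schemaMin semantics described above concrete, a small sketch with hypothetical column names; the subset check is written out by hand here rather than calling validateSchemaMin:

    import org.apache.spark.sql.types._

    // minimal schema the DataObject must at least provide
    val schemaMin = StructType(Seq(
      StructField("id", LongType),
      StructField("dt", StringType)
    ))

    // actual schema: contains every schemaMin column by name and data type, plus extras
    val actualSchema = StructType(Seq(
      StructField("dt", StringType),                  // column order is ignored
      StructField("id", LongType, nullable = false),  // nullability is ignored
      StructField("payload", StringType)              // additional columns are allowed
    ))

    // schemaMin is a subset of actualSchema, so validation would pass
    val isValid = schemaMin.fields.forall(min =>
      actualSchema.fields.exists(a => a.name == min.name && a.dataType == min.dataType))
    assert(isValid)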
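
The foreachBatch based default described for writeStreamingDataFrame can be sketched roughly as follows, using standard Structured Streaming APIs; this is not the actual trait implementation, and the rate source, noop sink, checkpoint path and query name are made up for the example:

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.streaming.{OutputMode, Trigger}

    val spark = SparkSession.builder().appName("streaming-sketch").master("local[2]").getOrCreate()

    // any streaming source would do; the built-in rate source keeps the example runnable
    val streamingDf = spark.readStream.format("rate").load()

    // stand-in for the batch write path (the writeDataFrame method of this trait)
    val writeBatch: (DataFrame, Long) => Unit =
      (batchDf, _) => batchDf.write.format("noop").mode("overwrite").save()

    val query = streamingDf.writeStream
      .trigger(Trigger.ProcessingTime("30 seconds"))
      .outputMode(OutputMode.Append())
      .queryName("tick-tock-sketch")
      .option("checkpointLocation", "/tmp/checkpoints/tick-tock-sketch")
      .foreachBatch(writeBatch)   // each micro-batch is handed to the batch write path
      .start()

    query.awaitTermination()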

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated

Inherited from Serializable

Inherited from Serializable

Inherited from Product

Inherited from Equals

Inherited from CanHandlePartitions

Inherited from TransactionalSparkTableDataObject

Inherited from CanWriteDataFrame

Inherited from TableDataObject

Inherited from SchemaValidation

Inherited from CanCreateDataFrame

Inherited from DataObject

Inherited from AtlasExportable

Inherited from SmartDataLakeLogger

Inherited from ParsableFromConfig[DataObject]

Inherited from SdlConfigObject

Inherited from AnyRef

Inherited from Any
