case class HiveTableDataObject(id: DataObjectId, path: Option[String] = None, partitions: Seq[String] = Seq(), analyzeTableAfterWrite: Boolean = false, dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, expectedPartitionsCondition: Option[String] = None, housekeepingMode: Option[HousekeepingMode] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends TableDataObject with CanWriteDataFrame with CanHandlePartitions with HasHadoopStandardFilestore with SmartDataLakeLogger with Product with Serializable
DataObject of type Hive. Provides the details needed by an Action to access Hive tables. A programmatic configuration sketch follows the parameter list below.
- id
unique name of this data object
- path
Hadoop directory for this table. If it doesn't contain scheme and authority, the connection's pathPrefix is applied. If pathPrefix is not defined or doesn't define scheme and authority, the default scheme and authority are applied. If the DataObject is only used for reading, or if the Hive table already exists, the path can be omitted. If the Hive table already exists but with a different path, a warning is issued
- partitions
partition columns for this data object
- analyzeTableAfterWrite
enable computing statistics after writing data (default = false)
- dateColumnType
type of date column
- schemaMin
An optional, minimal schema that this DataObject must have to pass schema validation on reading and writing.
- table
hive table to be written by this output
- numInitialHdfsPartitions
number of files created when writing into an empty table (otherwise the number will be derived from the existing data)
- saveMode
Spark SaveMode to use when writing files; default is "overwrite"
- acl
override the connection's permissions (ACL) for files created in this table's Hadoop directory
- connectionId
optional id of io.smartdatalake.workflow.connection.HiveTableConnection
- expectedPartitionsCondition
Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
- housekeepingMode
Optional definition of a housekeeping mode applied after every write. E.g. it can be used to clean up, archive and compact partitions. See HousekeepingMode for available implementations. Default is None.
- metadata
Additional metadata for the DataObject
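The following is a minimal sketch of registering a HiveTableDataObject programmatically in Scala. The identifier, path, database and table names are purely illustrative, and the import paths follow the typical Smart Data Lake Builder package layout but may differ between versions; in most deployments this object is declared in the HOCON configuration instead.

```scala
// Hedged sketch: programmatic registration of a HiveTableDataObject.
// Package paths and the register method are assumptions and may differ between SDLB versions.
import io.smartdatalake.config.InstanceRegistry
import io.smartdatalake.config.SdlConfigObject.DataObjectId
import io.smartdatalake.definitions.SDLSaveMode
import io.smartdatalake.workflow.dataobject.{HiveTableDataObject, Table}

implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

// "stg-airports", the path, the database and the table name are illustrative values.
val airports = HiveTableDataObject(
  id = DataObjectId("stg-airports"),
  path = Some("/data/stg/airports"),
  partitions = Seq("dt"),
  table = Table(db = Some("stg"), name = "airports"),
  numInitialHdfsPartitions = 16,
  saveMode = SDLSaveMode.Overwrite
)

instanceRegistry.register(airports)
```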
- Annotations
- @Scaladoc()
Linear Supertypes
- HiveTableDataObject
- Serializable
- Serializable
- Product
- Equals
- HasHadoopStandardFilestore
- CanHandlePartitions
- CanWriteDataFrame
- TableDataObject
- SchemaValidation
- CanCreateDataFrame
- DataObject
- AtlasExportable
- SmartDataLakeLogger
- ParsableFromConfig
- SdlConfigObject
- AnyRef
- Any
Instance Constructors
-
new
HiveTableDataObject(id: DataObjectId, path: Option[String] = None, partitions: Seq[String] = Seq(), analyzeTableAfterWrite: Boolean = false, dateColumnType: DateColumnType = DateColumnType.Date, schemaMin: Option[StructType] = None, table: Table, numInitialHdfsPartitions: Int = 16, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, acl: Option[AclDef] = None, connectionId: Option[ConnectionId] = None, expectedPartitionsCondition: Option[String] = None, housekeepingMode: Option[HousekeepingMode] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)
Value Members
-
final
def
!=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
##(): Int
- Definition Classes
- AnyRef → Any
-
final
def
==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- val acl: Option[AclDef]
-
def
addFieldIfNotExisting(writeSchema: StructType, colName: String, dataType: DataType): StructType
- Attributes
- protected
- Definition Classes
- CanCreateDataFrame
- val analyzeTableAfterWrite: Boolean
-
final
def
asInstanceOf[T0]: T0
- Definition Classes
- Any
-
def
atlasName: String
- Definition Classes
- TableDataObject → DataObject → AtlasExportable
-
def
atlasQualifiedName(prefix: String): String
- Definition Classes
- TableDataObject → AtlasExportable
-
def
clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native() @HotSpotIntrinsicCandidate()
-
def
compactPartitions(partitionValues: Seq[PartitionValues])(implicit context: ActionPipelineContext): Unit
Compact given partitions, combining smaller files into bigger ones. This is used to compact partitions by housekeeping. Note: this is optional to implement.
- Definition Classes
- HiveTableDataObject → CanHandlePartitions
- val connectionId: Option[ConnectionId]
-
def
createEmptyPartition(partitionValues: PartitionValues)(implicit context: ActionPipelineContext): Unit
Create an empty partition.
- Definition Classes
- HiveTableDataObject → CanHandlePartitions
-
def
createReadSchema(writeSchema: StructType)(implicit context: ActionPipelineContext): StructType
Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.
- Definition Classes
- CanCreateDataFrame
- Annotations
- @Scaladoc()
- val dateColumnType: DateColumnType
-
def
deletePartitions(partitionValues: Seq[PartitionValues])(implicit context: ActionPipelineContext): Unit
Delete given partitions. This is used to cleanup partitions by housekeeping. Note: this is optional to implement.
- Definition Classes
- HiveTableDataObject → CanHandlePartitions
-
def
deletePartitionsIfExisting(partitionValues: Seq[PartitionValues])(implicit context: ActionPipelineContext): Unit
Checks if a partition exists and deletes it. Note that partition values to check don't need to have a key/value defined for every partition column.
- Annotations
- @Scaladoc()
-
def
dropTable(implicit context: ActionPipelineContext): Unit
- Definition Classes
- HiveTableDataObject → TableDataObject
-
final
def
eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
val
expectedPartitionsCondition: Option[String]
Definition of partitions that are expected to exist. This is used to validate that partitions being read exist, so that missing partitions don't silently return no data. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Example: "elements['yourColName'] > 2017" (see the sketch after this member).
- returns
true if partition is expected to exist.
- Definition Classes
- HiveTableDataObject → CanHandlePartitions
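For instance, a condition of the following shape could be used (the partition column name and threshold are illustrative):

```scala
// Sketch: a Spark SQL expression evaluated against a PartitionValues instance.
// `elements` is the map of partition column -> value, as in the documented example above;
// the column name "dt" and the value are illustrative.
val expectedPartitionsCondition = Some("elements['dt'] > '20240101'")
```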
-
def
factory: FromConfigFactory[DataObject]
Returns the factory that can parse this type (that is, type CO). Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.
- returns
the factory (object) for this class.
- Definition Classes
- HiveTableDataObject → ParsableFromConfig
-
final
def
getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native() @HotSpotIntrinsicCandidate()
-
def
getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T
Handle class cast exceptions when getting objects from the instance registry.
- Attributes
- protected
- Definition Classes
- DataObject
- Annotations
- @Scaladoc()
-
def
getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T
- Attributes
- protected
- Definition Classes
- DataObject
-
def
getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit context: ActionPipelineContext): DataFrame
- Definition Classes
- HiveTableDataObject → CanCreateDataFrame
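As a hedged illustration of reading via getDataFrame (reusing the illustrative `airports` object from above; the ActionPipelineContext is normally provided by the framework inside an Action, and the PartitionValues package path is an assumption):

```scala
// Sketch: read the Hive table as a DataFrame, optionally restricted to given partitions.
// `context` (ActionPipelineContext) is assumed to be in scope; the partition value is illustrative.
import io.smartdatalake.util.hdfs.PartitionValues

val df = airports.getDataFrame(Seq(PartitionValues(Map("dt" -> "20240101"))))(context)
```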
-
def
getPKduplicates(implicit context: ActionPipelineContext): DataFrame
- Definition Classes
- TableDataObject
-
def
getPKnulls(implicit context: ActionPipelineContext): DataFrame
- Definition Classes
- TableDataObject
-
def
getPKviolators(implicit context: ActionPipelineContext): DataFrame
- Definition Classes
- TableDataObject
-
def
hadoopPath(implicit context: ActionPipelineContext): Path
- Definition Classes
- HiveTableDataObject → HasHadoopStandardFilestore
-
val
housekeepingMode: Option[HousekeepingMode]
Configure a housekeeping mode, e.g. to clean up, archive and compact partitions. Default is None.
- Definition Classes
- HiveTableDataObject → DataObject
-
val
id: DataObjectId
A unique identifier for this instance.
- Definition Classes
- HiveTableDataObject → DataObject → SdlConfigObject
-
def
init(df: DataFrame, partitionValues: Seq[PartitionValues], saveModeOptions: Option[SaveModeOptions] = None)(implicit context: ActionPipelineContext): Unit
Called during the init phase for checks and initialization. If possible, don't change the system until the execution phase.
- Definition Classes
- HiveTableDataObject → CanWriteDataFrame
- implicit val instanceRegistry: InstanceRegistry
-
def
isDbExisting(implicit context: ActionPipelineContext): Boolean
- Definition Classes
- HiveTableDataObject → TableDataObject
-
final
def
isInstanceOf[T0]: Boolean
- Definition Classes
- Any
-
def
isPKcandidateKey(implicit context: ActionPipelineContext): Boolean
- Definition Classes
- TableDataObject
-
def
isTableExisting(implicit context: ActionPipelineContext): Boolean
- Definition Classes
- HiveTableDataObject → TableDataObject
-
def
listPartitions(implicit context: ActionPipelineContext): Seq[PartitionValues]
List Hive table partitions.
- Definition Classes
- HiveTableDataObject → CanHandlePartitions
- Annotations
- @Scaladoc()
-
lazy val
logger: Logger
- Attributes
- protected
- Definition Classes
- SmartDataLakeLogger
- Annotations
- @transient()
-
val
metadata: Option[DataObjectMetadata]
Additional metadata for the DataObject.
- Definition Classes
- HiveTableDataObject → DataObject
-
def
movePartitions(partitionValues: Seq[(PartitionValues, PartitionValues)])(implicit context: ActionPipelineContext): Unit
Move given partitions. This is used to archive partitions by housekeeping. Note: this is optional to implement.
- Definition Classes
- HiveTableDataObject → CanHandlePartitions
-
final
def
ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
final
def
notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @HotSpotIntrinsicCandidate()
-
final
def
notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @HotSpotIntrinsicCandidate()
- val numInitialHdfsPartitions: Int
-
def
partitionLayout(): Option[String]
Return a String specifying the partition layout. For Hadoop the default partition layout is colname1=<value1>/colname2=<value2>/.../
- Definition Classes
- HasHadoopStandardFilestore
- Annotations
- @Scaladoc()
-
val
partitions: Seq[String]
Definition of partition columns.
- Definition Classes
- HiveTableDataObject → CanHandlePartitions
- val path: Option[String]
-
def
preWrite(implicit context: ActionPipelineContext): Unit
Runs operations before writing to the DataObject. Note: as the transformed SubFeed doesn't yet exist in Action.preWrite, no partition values can be passed as parameters as in preRead.
- Definition Classes
- HiveTableDataObject → DataObject
-
def
prepare(implicit context: ActionPipelineContext): Unit
Prepare & test the DataObject's prerequisites.
This runs during the "prepare" operation of the DAG.
- Definition Classes
- HiveTableDataObject → DataObject
- val saveMode: SDLSaveMode
-
val
schemaMin: Option[StructType]
An optional, minimal schema that a DataObject schema must have to pass schema validation.
The schema validation semantics are:
- Schema A is valid with respect to a minimal schema B when B is a subset of A, i.e. the whole column set of B is contained in the column set of A.
- A column of B is contained in A when A contains a column with equal name and data type.
- Column order is ignored.
- Column nullability is ignored.
- Duplicate columns in terms of name and data type are eliminated (set semantics).
Note: This is mainly used by the functionality defined in CanCreateDataFrame and CanWriteDataFrame, that is, when reading or writing Spark data frames from/to the underlying data container. io.smartdatalake.workflow.action.Actions that work with files ignore the schemaMin attribute if it is defined. Additionally, schemaMin can be used to define the schema used if there is no data or the table doesn't yet exist. An illustration of the subset semantics follows this member.
- Definition Classes
- HiveTableDataObject → SchemaValidation
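To make the subset semantics concrete, here is a small sketch using only standard Spark types; the check shown mirrors the documented rules but is illustrative, not the library's actual implementation.

```scala
// Illustrative check of the schemaMin subset semantics (not SDLB's actual implementation).
import org.apache.spark.sql.types._

val schemaMin = StructType(Seq(
  StructField("id", LongType),
  StructField("name", StringType)
))

// Extra columns, a different column order and different nullability are all allowed:
val dfSchema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("id", LongType, nullable = false),
  StructField("updated_at", TimestampType)
))

// Valid if every column of schemaMin exists in dfSchema with equal name and data type.
val isValid = schemaMin.fields.forall { min =>
  dfSchema.fields.exists(f => f.name == min.name && f.dataType == min.dataType)
}
```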
-
def
streamingOptions: Map[String, String]
- Definition Classes
- CanWriteDataFrame
-
final
def
synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
-
var
table: Table
- Definition Classes
- HiveTableDataObject → TableDataObject
-
val
tableSchema: StructType
- Definition Classes
- TableDataObject
-
def
toStringShort: String
- Definition Classes
- DataObject
-
def
validateSchema(df: DataFrame, schemaExpected: StructType, role: String): Unit
Validate the schema of a given Spark DataFrame df against a given expected schema.
- df
The data frame to validate.
- schemaExpected
The expected schema to validate against.
- role
role used in exception message. Set to read or write.
- Definition Classes
- SchemaValidation
- Annotations
- @Scaladoc()
- Exceptions thrown
SchemaViolationException if the schema does not validate.
-
def
validateSchemaHasPartitionCols(df: DataFrame, role: String): Unit
Validate that the schema of a given Spark DataFrame df contains the specified partition columns.
- df
The data frame to validate.
- role
role used in exception message. Set to read or write.
- Definition Classes
- CanHandlePartitions
- Annotations
- @Scaladoc()
- Exceptions thrown
SchemaViolationException if the partition columns are not included.
-
def
validateSchemaHasPrimaryKeyCols(df: DataFrame, primaryKeyCols: Seq[String], role: String): Unit
Validate that the schema of a given Spark DataFrame df contains the specified primary key columns.
- df
The data frame to validate.
- role
role used in exception message. Set to read or write.
- Definition Classes
- CanHandlePartitions
- Annotations
- @Scaladoc()
- Exceptions thrown
SchemaViolationException if the primary key columns are not included.
-
def
validateSchemaMin(df: DataFrame, role: String): Unit
Validate the schema of a given Spark DataFrame df against schemaMin.
- df
The data frame to validate.
- role
role used in exception message. Set to read or write.
- Definition Classes
- SchemaValidation
- Annotations
- @Scaladoc()
- Exceptions thrown
SchemaViolationException if schemaMin does not validate.
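A minimal usage sketch (reusing the illustrative `airports` object from above; the role string only appears in the exception message):

```scala
// Sketch: validate a DataFrame against schemaMin before writing.
// Throws SchemaViolationException if schemaMin is defined and df does not satisfy it.
airports.validateSchemaMin(df, role = "write")
```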
-
final
def
wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
final
def
wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
def
writeDataFrame(df: DataFrame, partitionValues: Seq[PartitionValues] = Seq(), isRecursiveInput: Boolean = false, saveModeOptions: Option[SaveModeOptions] = None)(implicit context: ActionPipelineContext): Unit
Write a DataFrame to the DataObject. A partitioned-write sketch follows this member.
- df
the DataFrame to write
- partitionValues
partition values included in the DataFrame's data
- isRecursiveInput
if DataFrame needs this DataObject as input - special treatment might be needed in this case.
- Definition Classes
- HiveTableDataObject → CanWriteDataFrame
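For example, a partitioned write might look like the following sketch (again assuming an ActionPipelineContext in scope; the partition column, value and `airports` object are illustrative):

```scala
// Sketch: write a DataFrame into the dt=20240101 partition of the Hive table.
// Package path of PartitionValues is an assumption.
import io.smartdatalake.util.hdfs.PartitionValues

val partitionValues = Seq(PartitionValues(Map("dt" -> "20240101")))
airports.writeDataFrame(df, partitionValues)(context)
```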
-
def
writeDataFrameToPath(df: DataFrame, path: Path, finalSaveMode: SDLSaveMode)(implicit context: ActionPipelineContext): Unit
Write a DataFrame to a specific Path with the properties of this DataObject. This is needed for compacting partitions by housekeeping. Note: this is optional to implement.
- Definition Classes
- HiveTableDataObject → CanWriteDataFrame
-
def
writeStreamingDataFrame(df: DataFrame, trigger: Trigger, options: Map[String, String], checkpointLocation: String, queryName: String, outputMode: OutputMode = OutputMode.Append, saveModeOptions: Option[SaveModeOptions] = None)(implicit context: ActionPipelineContext): StreamingQuery
Write a Spark structured streaming DataFrame. The default implementation uses foreachBatch and this trait's writeDataFrame method to write the DataFrame. Some DataObjects will override this with specific implementations (e.g. Kafka). A usage sketch follows this member.
- df
The Streaming DataFrame to write
- trigger
Trigger frequency for stream
- checkpointLocation
location for checkpoints of streaming query
- Definition Classes
- CanWriteDataFrame
- Annotations
- @Scaladoc()
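A hedged sketch of invoking the default foreachBatch-based streaming write (the checkpoint location, query name and `streamingDf` are illustrative; `context` is the ActionPipelineContext assumed to be in scope):

```scala
// Sketch: write a streaming DataFrame via the default foreachBatch implementation.
import org.apache.spark.sql.streaming.{OutputMode, Trigger}

val query = airports.writeStreamingDataFrame(
  df = streamingDf,                               // a streaming DataFrame obtained via readStream
  trigger = Trigger.ProcessingTime("1 minute"),
  options = Map[String, String](),
  checkpointLocation = "/checkpoints/stg-airports",
  queryName = "stg-airports-stream",
  outputMode = OutputMode.Append
)(context)
```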
Deprecated Value Members
-
def
finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] ) @Deprecated
- Deprecated