case class CustomDfDataObject(id: DataObjectId, creator: CustomDfCreatorConfig, schemaMin: Option[StructType] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends DataObject with CanCreateDataFrame with SchemaValidation with Product with Serializable

Generic DataObject containing a config object, e.g. used to implement a CustomAction that reads a webservice.
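
A minimal construction sketch (assumptions: CustomDfCreatorConfig accepts an optional implementation class name, InstanceRegistry can be instantiated directly and exposes a register method for single instances, and the creator class name is purely illustrative):

    implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

    // hypothetical creator implementation; any class implementing CustomDfCreator would do
    val webserviceDO = CustomDfDataObject(
      id = DataObjectId("webservice-input"),
      creator = CustomDfCreatorConfig(className = Some("com.example.MyWebserviceDfCreator"))
    )
    instanceRegistry.register(webserviceDO)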

Annotations
@Scaladoc()
Linear Supertypes
Serializable, Serializable, Product, Equals, SchemaValidation, CanCreateDataFrame, DataObject, AtlasExportable, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new CustomDfDataObject(id: DataObjectId, creator: CustomDfCreatorConfig, schemaMin: Option[StructType] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def addFieldIfNotExisting(writeSchema: StructType, colName: String, dataType: DataType): StructType
    Attributes
    protected
    Definition Classes
    CanCreateDataFrame
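    Illustrative sketch (not the library source) of what a helper with this signature conceptually does with a Spark StructType:

      import org.apache.spark.sql.types.{DataType, StringType, StructField, StructType}

      // add a column to the schema only if it is not already present
      def addFieldIfNotExistingSketch(writeSchema: StructType, colName: String, dataType: DataType): StructType =
        if (writeSchema.fieldNames.contains(colName)) writeSchema
        else writeSchema.add(StructField(colName, dataType, nullable = true))

      val withKey = addFieldIfNotExistingSketch(StructType(Nil), "key", StringType)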
  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. def atlasName: String
    Definition Classes
    DataObject → AtlasExportable
  7. def atlasQualifiedName(prefix: String): String
    Definition Classes
    AtlasExportable
  8. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @HotSpotIntrinsicCandidate()
  9. def createReadSchema(writeSchema: StructType)(implicit context: ActionPipelineContext): StructType

    Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove and add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.

    Definition Classes
    CanCreateDataFrame
    Annotations
    @Scaladoc()
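    Hedged sketch of how a subclass that exposes an additional technical column on read might override this method (the column name and the use of StringType are assumptions for illustration):

      override def createReadSchema(writeSchema: StructType)(implicit context: ActionPipelineContext): StructType = {
        // the column only exists when reading, so extend the write schema to obtain the read schema
        addFieldIfNotExisting(writeSchema, "_inputFileName", StringType)
      }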
  10. val creator: CustomDfCreatorConfig
  11. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  12. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    CustomDfDataObject → ParsableFromConfig
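    Sketch of the typical pattern: the companion object implements FromConfigFactory and is returned by factory. The fromConfig signature and the extract helper below are assumptions based on that pattern, not copied from this page:

      object CustomDfDataObject extends FromConfigFactory[DataObject] {
        override def fromConfig(config: Config)(implicit instanceRegistry: InstanceRegistry): CustomDfDataObject = {
          // parse the HOCON config section into a CustomDfDataObject instance
          extract[CustomDfDataObject](config)
        }
      }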
  13. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  14. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handle class cast exceptions when getting objects from the instance registry.

    Attributes
    protected
    Definition Classes
    DataObject
    Annotations
    @Scaladoc()
  15. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T
    Attributes
    protected
    Definition Classes
    DataObject
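    Hedged usage sketch: a connection-based DataObject resolving its connection by id, so that a mismatched connection type surfaces as a descriptive error rather than a raw ClassCastException (the connection type and the connectionId field are illustrative, not part of CustomDfDataObject):

      private val connection: SftpFileRefConnection = getConnection[SftpFileRefConnection](connectionId)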
  16. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit context: ActionPipelineContext): DataFrame
    Definition Classes
    CustomDfDataObject → CanCreateDataFrame
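    Hedged usage sketch, reusing the webserviceDO instance from the construction example above and assuming an implicit ActionPipelineContext is in scope (as it is inside actions):

      val df: DataFrame = webserviceDO.getDataFrame()   // no partition filter
      df.printSchema()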
  17. def housekeepingMode: Option[HousekeepingMode]

    Configure a housekeeping mode to e.g. clean up, archive and compact partitions. Default is None.

    Definition Classes
    DataObject
    Annotations
    @Scaladoc()
  18. val id: DataObjectId

    A unique identifier for this instance.

    Definition Classes
    CustomDfDataObject → DataObject → SdlConfigObject
  19. implicit val instanceRegistry: InstanceRegistry
  20. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  21. lazy val logger: Logger
    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
    Annotations
    @transient()
  22. val metadata: Option[DataObjectMetadata]

    Additional metadata for the DataObject.

    Definition Classes
    CustomDfDataObject → DataObject
  23. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  24. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  25. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  26. val schemaMin: Option[StructType]

    An optional, minimal schema that a DataObject schema must have to pass schema validation.

    The schema validation semantics are: Schema A is valid with respect to a minimal schema B when B is a subset of A, i.e. the whole column set of B is contained in the column set of A.

    • A column of B is contained in A when A contains a column with equal name and data type.
    • Column order is ignored.
    • Column nullability is ignored.
    • Duplicate columns in terms of name and data type are eliminated (set semantics).

    Note: This is mainly used by the functionality defined in CanCreateDataFrame and CanWriteDataFrame, that is, when reading or writing Spark data frames from/to the underlying data container. io.smartdatalake.workflow.action.Actions that work with files ignore the schemaMin attribute if it is defined. Additionally, schemaMin can be used to define the schema to use if there is no data or the table does not yet exist.

    Definition Classes
    CustomDfDataObject → SchemaValidation
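    Illustrative sketch of the semantics described above, restated in plain Spark (not the library's code): order and nullability are ignored, only (name, data type) pairs matter:

      import org.apache.spark.sql.types.StructType

      def satisfiesMinSchema(actual: StructType, minimal: StructType): Boolean = {
        // set semantics over (name, dataType) pairs: minimal must be a subset of actual
        val actualCols = actual.fields.map(f => (f.name, f.dataType)).toSet
        minimal.fields.map(f => (f.name, f.dataType)).toSet.subsetOf(actualCols)
      }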
  27. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  28. def toStringShort: String
    Definition Classes
    DataObject
  29. def validateSchema(df: DataFrame, schemaExpected: StructType, role: String): Unit

    Validate the schema of a given Spark Data Frame df against a given expected schema.

    df

    The data frame to validate.

    schemaExpected

    The expected schema to validate against.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Annotations
    @Scaladoc()
    Exceptions thrown

    SchemaViolationException if the schema does not validate.

  30. def validateSchemaMin(df: DataFrame, role: String): Unit

    Validate the schema of a given Spark Data Frame df against schemaMin.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Annotations
    @Scaladoc()
    Exceptions thrown

    SchemaViolationException if the schemaMin does not validate.
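
    Hedged usage sketch: validating a DataFrame before writing; a SchemaViolationException is expected if schemaMin is defined and df is missing one of its columns:

      webserviceDO.validateSchemaMin(df, role = "write")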

  31. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  32. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  33. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated
