case class AccessTableDataObject(id: DataObjectId, path: String, schemaMin: Option[StructType] = None, table: Table, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends TableDataObject with Product with Serializable

DataObject of type JDBC / Access. Provides an Action with access to an Access database. The functionality is handled separately from JdbcTableDataObject to avoid problems with net.ucanaccess.jdbc.UcanaccessDriver.
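
The following is a minimal construction sketch in Scala. Import paths, the shape of Table, and all sample values (id, path, table name) are assumptions for illustration and may differ between smartdatalake versions.

    // Hypothetical construction sketch; package paths and the Table signature are assumptions.
    import io.smartdatalake.config.InstanceRegistry
    import io.smartdatalake.config.SdlConfigObject.DataObjectId
    import io.smartdatalake.workflow.dataobject.{AccessTableDataObject, Table}

    implicit val instanceRegistry: InstanceRegistry = new InstanceRegistry()

    val accessDO = AccessTableDataObject(
      id = DataObjectId("my-access-table"),                    // unique id referenced by Actions
      path = "/data/source/mydb.accdb",                        // path to the Access database file
      table = Table(db = Some("default"), name = "customers")  // table to read; Table shape is assumed
    )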

Annotations
@Scaladoc()
Linear Supertypes
Serializable, Serializable, Product, Equals, TableDataObject, SchemaValidation, CanCreateDataFrame, DataObject, AtlasExportable, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new AccessTableDataObject(id: DataObjectId, path: String, schemaMin: Option[StructType] = None, table: Table, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def addFieldIfNotExisting(writeSchema: StructType, colName: String, dataType: DataType): StructType
    Attributes
    protected
    Definition Classes
    CanCreateDataFrame
  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. def atlasName: String
    Definition Classes
    TableDataObject → DataObject → AtlasExportable
  7. def atlasQualifiedName(prefix: String): String
    Definition Classes
    TableDataObject → AtlasExportable
  8. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @HotSpotIntrinsicCandidate()
  9. def createReadSchema(writeSchema: StructType)(implicit context: ActionPipelineContext): StructType

    Creates the read schema based on a given write schema.

    Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove and add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase (see the sketch after this entry).

    Definition Classes
    CanCreateDataFrame
    Annotations
    @Scaladoc()
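    A standalone illustration of the idea above (not SDL's implementation, and not the behaviour of AccessTableDataObject): derive a read schema from a write schema by appending a technical column that only appears on read. The column name is hypothetical.

      import org.apache.spark.sql.types.{StringType, StructField, StructType}

      // Sketch: a real DataObject would do this inside createReadSchema,
      // e.g. using the protected helper addFieldIfNotExisting listed above.
      def readSchemaFor(writeSchema: StructType): StructType = {
        val technicalCol = "ingestion_file"                    // hypothetical column added on read
        if (writeSchema.fieldNames.contains(technicalCol)) writeSchema
        else StructType(writeSchema.fields :+ StructField(technicalCol, StringType, nullable = true))
      }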
  10. def dropTable(implicit context: ActionPipelineContext): Unit
    Definition Classes
    AccessTableDataObject → TableDataObject
  11. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  12. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    AccessTableDataObject → ParsableFromConfig
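    A schematic sketch of the companion-object-as-factory pattern described above, using a simplified, hypothetical factory trait (the real FromConfigFactory offers a richer API):

      import com.typesafe.config.Config

      // Simplified, hypothetical stand-in for FromConfigFactory, only to show the pattern.
      trait SimpleConfigFactory[A] {
        def fromConfig(config: Config): A
      }

      // A parsable class points `factory` at its companion object ...
      case class MyDataObject(id: String, path: String) {
        def factory: SimpleConfigFactory[MyDataObject] = MyDataObject
      }

      // ... and the companion object knows how to build an instance from its config section.
      object MyDataObject extends SimpleConfigFactory[MyDataObject] {
        override def fromConfig(config: Config): MyDataObject =
          MyDataObject(config.getString("id"), config.getString("path"))
      }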
  13. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  14. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handle class cast exception when getting objects from instance registry

    Handle class cast exception when getting objects from instance registry

    Attributes
    protected
    Definition Classes
    DataObject
    Annotations
    @Scaladoc()
  15. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T
    Attributes
    protected
    Definition Classes
    DataObject
  16. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit context: ActionPipelineContext): DataFrame
    Definition Classes
    AccessTableDataObject → CanCreateDataFrame
  17. def getDataFrameByFramework(partitionValues: Seq[PartitionValues] = Seq())(implicit context: ActionPipelineContext): DataFrame
  18. def getPKduplicates(implicit context: ActionPipelineContext): DataFrame
    Definition Classes
    TableDataObject
  19. def getPKnulls(implicit context: ActionPipelineContext): DataFrame
    Definition Classes
    TableDataObject
  20. def getPKviolators(implicit context: ActionPipelineContext): DataFrame
    Definition Classes
    TableDataObject
  21. def housekeepingMode: Option[HousekeepingMode]

    Configure a housekeeping mode to e.g. clean up, archive, or compact partitions.

    Configure a housekeeping mode to e.g. clean up, archive, or compact partitions. Default is None.

    Definition Classes
    DataObject
    Annotations
    @Scaladoc()
  22. val id: DataObjectId

    A unique identifier for this instance.

    A unique identifier for this instance.

    Definition Classes
    AccessTableDataObject → DataObject → SdlConfigObject
  23. implicit val instanceRegistry: InstanceRegistry
  24. def isDbExisting(implicit context: ActionPipelineContext): Boolean
    Definition Classes
    AccessTableDataObject → TableDataObject
  25. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  26. def isPKcandidateKey(implicit context: ActionPipelineContext): Boolean
    Definition Classes
    TableDataObject
  27. def isTableExisting(implicit context: ActionPipelineContext): Boolean
    Definition Classes
    AccessTableDataObject → TableDataObject
  28. lazy val logger: Logger
    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
    Annotations
    @transient()
  29. val metadata: Option[DataObjectMetadata]

    Additional metadata for the DataObject

    Additional metadata for the DataObject

    Definition Classes
    AccessTableDataObject → DataObject
  30. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  31. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  32. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  33. val path: String
  34. val schemaMin: Option[StructType]

    An optional, minimal schema that a DataObject schema must have to pass schema validation.

    An optional, minimal schema that a DataObject schema must have to pass schema validation.

    The schema validation semantics are: Schema A is valid with respect to a minimal schema B when B is a subset of A. This means: the whole column set of B is contained in the column set of A.

    • A column of B is contained in A when A contains a column with equal name and data type.
    • Column order is ignored.
    • Column nullability is ignored.
    • Duplicate columns in terms of name and data type are eliminated (set semantics).

    Note: This is mainly used by the functionality defined in CanCreateDataFrame and CanWriteDataFrame, that is, when reading or writing Spark data frames from/to the underlying data container. io.smartdatalake.workflow.action.Actions that work with files ignore the schemaMin attribute if it is defined. Additionally, schemaMin can be used to define the schema to use if there is no data or the table does not yet exist (a standalone sketch of the validation semantics follows this entry).

    Definition Classes
    AccessTableDataObject → SchemaValidation
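    A standalone sketch of these semantics (not SDL's implementation): schema A satisfies minimal schema B if every column of B occurs in A with the same name and data type, ignoring order and nullability.

      import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

      def satisfiesMinSchema(a: StructType, b: StructType): Boolean = {
        val colsA = a.fields.map(f => (f.name, f.dataType)).toSet   // set semantics: duplicates collapse
        b.fields.forall(f => colsA.contains((f.name, f.dataType)))
      }

      val actual  = StructType(Seq(
        StructField("id", IntegerType), StructField("name", StringType), StructField("extra", StringType)))
      val minimal = StructType(Seq(
        StructField("name", StringType), StructField("id", IntegerType)))
      assert(satisfiesMinSchema(actual, minimal))   // extra columns and different order are fine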
  35. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  36. var table: Table
    Definition Classes
    AccessTableDataObject → TableDataObject
  37. val tableSchema: StructType
    Definition Classes
    TableDataObject
  38. def toStringShort: String
    Definition Classes
    DataObject
  39. def validateSchema(df: DataFrame, schemaExpected: StructType, role: String): Unit

    Validate the schema of a given Spark Data Frame df against a given expected schema.

    Validate the schema of a given Spark Data Frame df against a given expected schema.

    df

    The data frame to validate.

    schemaExpected

    The expected schema to validate against.

    role

    The role used in the exception message; set to read or write.

    Definition Classes
    SchemaValidation
    Annotations
    @Scaladoc()
    Exceptions thrown

    SchemaViolationException if df does not validate against schemaExpected.

  40. def validateSchemaMin(df: DataFrame, role: String): Unit

    Validate the schema of a given Spark Data Frame df against schemaMin.

    Validate the schema of a given Spark Data Frame df against schemaMin.

    df

    The data frame to validate.

    role

    The role used in the exception message; set to read or write.

    Definition Classes
    SchemaValidation
    Annotations
    @Scaladoc()
    Exceptions thrown

    SchemaViolationException if df does not validate against schemaMin.
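
    A hedged usage sketch: the accessDO instance is the one from the construction example near the top of this page, and the input path is hypothetical. The call either returns silently or throws SchemaViolationException.

      import org.apache.spark.sql.{DataFrame, SparkSession}

      val spark = SparkSession.builder().master("local[*]").appName("schema-check").getOrCreate()
      val df: DataFrame = spark.read.parquet("/tmp/input")   // hypothetical input data

      // throws SchemaViolationException if df lacks a column required by schemaMin
      accessDO.validateSchemaMin(df, role = "write")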

  41. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  42. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  43. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated
