case class AirbyteDataObject(id: DataObjectId, config: Config, streamName: String, cmd: ParsableScriptDef, incrementalCursorFields: Seq[String] = Seq(), schemaMin: Option[StructType] = None, metadata: Option[DataObjectMetadata] = None) extends DataObject with CanCreateDataFrame with CanCreateIncrementalOutput with SchemaValidation with SmartDataLakeLogger with Product with Serializable

Limitations: Connectors only have access to locally mounted directories.

id

DataObject identifier

config

Configuration for the source

streamName

The stream name to read. Must match an entry of the catalog of the source.

cmd

Command to launch the Airbyte connector. Normally this is of type DockerRunScript.

incrementalCursorFields

Some sources need a specification of the cursor field for incremental mode
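Since this class is ParsableFromConfig, it is normally declared in the SDLB HOCON configuration rather than instantiated in code. A minimal sketch, assuming a GitHub source connector; the object id, stream, cursor field and the DockerRunScript parameters shown here are illustrative assumptions, not values from this page:

```hocon
dataObjects {
  # hypothetical object id and connector; all values are illustrative only
  ext-github-issues {
    type = AirbyteDataObject
    streamName = "issues"                    # must match an entry of the source's catalog
    incrementalCursorFields = ["updated_at"] # cursor field for incremental mode
    config {
      # connector-specific source configuration, passed to the connector
      repository = "my-org/my-repo"
    }
    cmd {
      type = DockerRunScript
      # exact DockerRunScript parameters are an assumption; see its own documentation
      image = "airbyte/source-github"
    }
  }
}
```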

Linear Supertypes
Serializable, Serializable, Product, Equals, SchemaValidation, CanCreateIncrementalOutput, CanCreateDataFrame, DataObject, AtlasExportable, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new AirbyteDataObject(id: DataObjectId, config: Config, streamName: String, cmd: ParsableScriptDef, incrementalCursorFields: Seq[String] = Seq(), schemaMin: Option[StructType] = None, metadata: Option[DataObjectMetadata] = None)

    id

    DataObject identifier

    config

    Configuration for the source

    streamName

    The stream name to read. Must match an entry of the catalog of the source.

    cmd

    Command to launch the Airbyte connector. Normally this is of type DockerRunScript.

    incrementalCursorFields

    Some sources need a specification of the cursor field for incremental mode

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def addFieldIfNotExisting(writeSchema: StructType, colName: String, dataType: DataType): StructType
    Attributes
    protected
    Definition Classes
    CanCreateDataFrame
  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. def atlasName: String
    Definition Classes
    DataObject → AtlasExportable
  7. def atlasQualifiedName(prefix: String): String
    Definition Classes
    AtlasExportable
  8. final val catalogFilename: String("catalog.json")
  9. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @HotSpotIntrinsicCandidate()
  10. val cmd: ParsableScriptDef
  11. val config: Config
  12. final val configFilename: String("config.json")
  13. final val containerConfigDir: String("/mnt/config")
  14. def createReadSchema(writeSchema: StructType)(implicit context: ActionPipelineContext): StructType

    Creates the read schema based on a given write schema. Normally this is the same, but some DataObjects can remove & add columns on read (e.g. KafkaTopicDataObject, SparkFileDataObject). In these cases we have to break the DataFrame lineage and create a dummy DataFrame in the init phase.

    Definition Classes
    CanCreateDataFrame
  15. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  16. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    AirbyteDataObject → ParsableFromConfig
  17. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  18. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handle class cast exceptions when getting objects from the instance registry.

    Attributes
    protected
    Definition Classes
    DataObject
  19. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T
    Attributes
    protected
    Definition Classes
    DataObject
  20. def getDataFrame(partitionValues: Seq[PartitionValues] = Seq())(implicit context: ActionPipelineContext): DataFrame
    Definition Classes
    AirbyteDataObject → CanCreateDataFrame
  21. def getState: Option[String]

    Return the state of the last increment, or empty if no increment was processed.

    Definition Classes
    AirbyteDataObject → CanCreateIncrementalOutput
  22. def housekeepingMode: Option[HousekeepingMode]

    Configure a housekeeping mode to e.g. clean up, archive and compact partitions. Default is None.

    Definition Classes
    DataObject
  23. val id: DataObjectId

    A unique identifier for this instance.

    Definition Classes
    AirbyteDataObject → DataObject → SdlConfigObject
  24. val incrementalCursorFields: Seq[String]
  25. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  26. implicit val jsonFormats: Formats
  27. lazy val logger: Logger
    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
    Annotations
    @transient()
  28. val metadata: Option[DataObjectMetadata]

    Additional metadata for the DataObject.

    Definition Classes
    AirbyteDataObject → DataObject
  29. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  30. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  31. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  32. def prepare(implicit context: ActionPipelineContext): Unit

    Prepare & test the DataObject's prerequisites.

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    AirbyteDataObject → DataObject
  33. val schemaMin: Option[StructType]

    An optional, minimal schema that a DataObject schema must have to pass schema validation.

    The schema validation semantics are: Schema A is valid with respect to a minimal schema B when B is a subset of A, i.e. the whole column set of B is contained in the column set of A.

    • A column of B is contained in A when A contains a column with equal name and data type.
    • Column order is ignored.
    • Column nullability is ignored.
    • Duplicate columns in terms of name and data type are eliminated (set semantics).

    Note: This is mainly used by the functionality defined in CanCreateDataFrame and CanWriteDataFrame, that is, when reading or writing Spark data frames from/to the underlying data container. io.smartdatalake.workflow.action.Actions that work with files ignore the schemaMin attribute if it is defined. Additionally, schemaMin can be used to define the schema if there is no data or the table doesn't yet exist.

    Definition Classes
    AirbyteDataObject → SchemaValidation
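    The subset semantics described for schemaMin can be sketched in plain Scala. This is a self-contained illustration, not the library's implementation: columns are modeled as (name, dataType) tuples instead of Spark StructField, which is an assumption made for brevity.

    ```scala
    // Minimal sketch of the schemaMin subset semantics: schema A is valid
    // w.r.t. minimal schema B when every column of B (by name and data type)
    // is contained in A; column order, nullability and duplicates are
    // ignored (set semantics).
    object SchemaMinCheck {
      type Column = (String, String) // (name, dataType)

      def isValidAgainstMin(actual: Seq[Column], schemaMin: Seq[Column]): Boolean =
        schemaMin.toSet.subsetOf(actual.toSet)

      def main(args: Array[String]): Unit = {
        val actual    = Seq(("id", "string"), ("ts", "timestamp"), ("payload", "string"))
        val schemaMin = Seq(("ts", "timestamp"), ("id", "string")) // order ignored
        println(isValidAgainstMin(actual, schemaMin))
        println(isValidAgainstMin(actual, Seq(("id", "int"))))    // data type must match
      }
    }
    ```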
  34. def setState(state: Option[String])(implicit context: ActionPipelineContext): Unit

    To implement incremental processing this function is called to initialize the DataObject with its state from the last increment. The state is just a string; its semantics are internal to the DataObject. Note that this method is called on initialization of the SmartDataLakeBuilder job (init phase) and, for streaming execution, after every execution of an Action involving this DataObject (postExec).

    state

    Internal state of last increment. If None then the first increment (may be a full increment) is delivered.

    Definition Classes
    AirbyteDataObject → CanCreateIncrementalOutput
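    The getState/setState contract can be sketched as the state hand-off performed around an incremental read. This is a hedged outline, not code from the library: `loadState` and `persistState` are hypothetical helpers, and an implicit ActionPipelineContext is assumed to be in scope.

    ```scala
    // Hedged sketch of the incremental contract: the state of the previous
    // increment is injected before reading, and the new state is persisted
    // afterwards so the next run continues where this one stopped.
    implicit val context: ActionPipelineContext = ???
    val airbyteDO: AirbyteDataObject = ???

    val previousState: Option[String] = loadState() // None => first (possibly full) increment
    airbyteDO.setState(previousState)               // init phase

    val df = airbyteDO.getDataFrame()               // reads only the new increment
    // ... process df ...

    airbyteDO.getState.foreach(persistState)        // postExec: remember for the next run
    ```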
  35. final val stateFilename: String("state.json")
  36. val streamName: String
  37. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  38. def toStringShort: String
    Definition Classes
    DataObject
  39. def validateSchema(df: DataFrame, schemaExpected: StructType, role: String): Unit

    Validate the schema of a given Spark Data Frame df against a given expected schema.

    df

    The data frame to validate.

    schemaExpected

    The expected schema to validate against.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schema does not validate.

  40. def validateSchemaMin(df: DataFrame, role: String): Unit

    Validate the schema of a given Spark Data Frame df against schemaMin.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    SchemaValidation
    Exceptions thrown

    SchemaViolationException if the schemaMin does not validate.

  41. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  42. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  43. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated
