case class SFtpFileRefDataObject(id: DataObjectId, path: String, connectionId: ConnectionId, partitions: Seq[String] = Seq(), partitionLayout: Option[String] = None, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, expectedPartitionsCondition: Option[String] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry) extends FileRefDataObject with CanCreateInputStream with CanCreateOutputStream with SmartDataLakeLogger with Product with Serializable

Connects to SFtp files. Needs the Java library "com.hierynomus % sshj % 0.21.1". The following authentication mechanisms are supported: -> public/private key: the private key must be saved in ~/.ssh, and the public key must be registered on the server. -> user/password authentication: user and password are taken from two variables set as parameters. These variables can come from clear text (CLEAR), a file (FILE) or an environment variable (ENV).
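In Smart Data Lake's HOCON configuration, such a data object is typically declared as sketched below; all ids, paths and values are illustrative assumptions, not taken from this page:

```hocon
dataObjects {
  sftp-src {                         # hypothetical data object id
    type = SFtpFileRefDataObject
    connectionId = sftp-con          # id of an sftp connection defined elsewhere
    path = "/data/incoming/"
    partitions = [year, month]
    partitionLayout = "year=%year%/month=%month%/"
    saveMode = Overwrite
  }
}
```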

partitionLayout

The partition layout defines how partition values can be extracted from the path. Use "%<colname>%" as a token to extract the value for a partition column. With "%<colname:regex>%" a regex can be given to limit the search. This is especially useful if there is no character to delimit the last token from the rest of the path, or between two tokens.
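The token mechanics can be sketched as a small re-implementation. This is illustrative only, not SDL's actual code; the default per-token pattern `[^/]+` and all names are assumptions:

```scala
import scala.util.matching.Regex

// Token syntax: "%<colname>%" or "%<colname:regex>%" (illustrative re-implementation).
val tokenPattern: Regex = "%([A-Za-z0-9_]+)(?::(.*?))?%".r

// Translate a layout such as "year=%year%/month=%month:[0-9]{2}%/" into a
// regex with one capture group per partition column.
def layoutToRegex(layout: String): (Regex, Seq[String]) = {
  val cols = tokenPattern.findAllMatchIn(layout).map(_.group(1)).toSeq
  val sb = new StringBuilder
  var last = 0
  for (m <- tokenPattern.findAllMatchIn(layout)) {
    sb.append(Regex.quote(layout.substring(last, m.start)))
    // Without an explicit regex, match anything up to the next path separator (assumption).
    sb.append("(" + Option(m.group(2)).getOrElse("[^/]+") + ")")
    last = m.end
  }
  sb.append(Regex.quote(layout.substring(last)))
  (sb.toString.r, cols)
}

// Match the layout against the beginning of a file path and extract the values.
def extractPartitionValues(layout: String, path: String): Option[Map[String, String]] = {
  val (regex, cols) = layoutToRegex(layout)
  regex.findPrefixMatchOf(path).map { m =>
    cols.zipWithIndex.map { case (col, i) => col -> m.group(i + 1) }.toMap
  }
}
```

For example, the layout `"year=%year%/month=%month:[0-9]{2}%/"` applied to `"year=2023/month=07/data.csv"` would yield the partition values `year=2023, month=07` (assuming a token's regex contains no capture groups of its own).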

saveMode

Overwrite or Append new data.

expectedPartitionsCondition

Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.
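Based on the example expression documented for this attribute, such a condition could appear in configuration as follows (the partition column name is illustrative):

```hocon
expectedPartitionsCondition = "elements['year'] > 2017"
```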

Annotations
@Scaladoc()
Linear Supertypes
Serializable, Serializable, Product, Equals, CanCreateOutputStream, CanCreateInputStream, FileRefDataObject, FileDataObject, CanHandlePartitions, DataObject, AtlasExportable, SmartDataLakeLogger, ParsableFromConfig[DataObject], SdlConfigObject, AnyRef, Any

Instance Constructors

  1. new SFtpFileRefDataObject(id: DataObjectId, path: String, connectionId: ConnectionId, partitions: Seq[String] = Seq(), partitionLayout: Option[String] = None, saveMode: SDLSaveMode = SDLSaveMode.Overwrite, expectedPartitionsCondition: Option[String] = None, metadata: Option[DataObjectMetadata] = None)(implicit instanceRegistry: InstanceRegistry)

    partitionLayout

    The partition layout defines how partition values can be extracted from the path. Use "%<colname>%" as a token to extract the value for a partition column. With "%<colname:regex>%" a regex can be given to limit the search. This is especially useful if there is no character to delimit the last token from the rest of the path, or between two tokens.

    saveMode

    Overwrite or Append new data.

    expectedPartitionsCondition

    Optional definition of partitions expected to exist. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Default is to expect all partitions to exist.

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def atlasName: String
    Definition Classes
    DataObject → AtlasExportable
  6. def atlasQualifiedName(prefix: String): String
    Definition Classes
    AtlasExportable
  7. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @HotSpotIntrinsicCandidate()
  8. val connectionId: ConnectionId
  9. def createInputStream(path: String)(implicit context: ActionPipelineContext): InputStream
    Definition Classes
    SFtpFileRefDataObject → CanCreateInputStream
  10. def createOutputStream(path: String, overwrite: Boolean)(implicit context: ActionPipelineContext): OutputStream

    Create an OutputStream for a given path that the Action can use to write data into.

    Definition Classes
    SFtpFileRefDataObject → CanCreateOutputStream
  11. def deleteAll(implicit context: ActionPipelineContext): Unit

    Delete all data. This is used to implement SaveMode.Overwrite.

    Definition Classes
    FileRefDataObject
    Annotations
    @Scaladoc()
  12. def deleteFileRefs(fileRefs: Seq[FileRef])(implicit context: ActionPipelineContext): Unit

    Delete given files. This is used to clean up files after they are processed.

    Definition Classes
    SFtpFileRefDataObject → FileRefDataObject
  13. def endWritingOutputStreams(partitionValues: Seq[PartitionValues])(implicit context: ActionPipelineContext): Unit

    This is called after all output streams have been written. It is used e.g. to make sure empty partitions are created as well.

    Definition Classes
    SFtpFileRefDataObject → CanCreateOutputStream
  14. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  15. val expectedPartitionsCondition: Option[String]

    Definition of partitions that are expected to exist. This is used to validate that partitions being read exist, so that reads do not silently return no data. Define a Spark SQL expression that is evaluated against a PartitionValues instance and returns true or false. Example: "elements['yourColName'] > 2017"

    returns

    true if partition is expected to exist.

    Definition Classes
    SFtpFileRefDataObject → CanHandlePartitions
  16. def extractPartitionValuesFromPath(filePath: String)(implicit context: ActionPipelineContext): PartitionValues

    Extract partition values from a given file path.

    Attributes
    protected
    Definition Classes
    FileRefDataObject
    Annotations
    @Scaladoc()
  17. def factory: FromConfigFactory[DataObject]

    Returns the factory that can parse this type (that is, type CO).

    Typically, implementations of this method should return the companion object of the implementing class. The companion object in turn should implement FromConfigFactory.

    returns

    the factory (object) for this class.

    Definition Classes
    SFtpFileRefDataObject → ParsableFromConfig
  18. val fileName: String

    Definition of fileName. Default is an asterisk to match everything. This is concatenated with the partition layout to search for files.

    Definition Classes
    FileRefDataObject
    Annotations
    @Scaladoc()
  19. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  20. def getConnection[T <: Connection](connectionId: ConnectionId)(implicit registry: InstanceRegistry, ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T

    Handle class cast exceptions when getting objects from the instance registry.

    Attributes
    protected
    Definition Classes
    DataObject
    Annotations
    @Scaladoc()
  21. def getConnectionReg[T <: Connection](connectionId: ConnectionId, registry: InstanceRegistry)(implicit ct: ClassTag[T], tt: scala.reflect.api.JavaUniverse.TypeTag[T]): T
    Attributes
    protected
    Definition Classes
    DataObject
  22. def getFileRefs(partitionValues: Seq[PartitionValues])(implicit context: ActionPipelineContext): Seq[FileRef]

    List files for given partition values.

    partitionValues

    List of partition values to be filtered. If empty all files in root path of DataObject will be listed.

    returns

    List of FileRefs

    Definition Classes
    SFtpFileRefDataObject → FileRefDataObject
  23. def getPartitionString(partitionValues: PartitionValues)(implicit context: ActionPipelineContext): Option[String]

    Get partition values formatted by the partition layout.

    Definition Classes
    FileRefDataObject
    Annotations
    @Scaladoc()
  24. def getPath(implicit context: ActionPipelineContext): String

    Method for subclasses to override the base path for this DataObject. This is for instance needed if pathPrefix is defined in a connection.

    Definition Classes
    FileRefDataObject
    Annotations
    @Scaladoc()
  25. def getSearchPaths(partitionValues: Seq[PartitionValues])(implicit context: ActionPipelineContext): Seq[(PartitionValues, String)]

    Prepare paths to be searched.

    Attributes
    protected
    Definition Classes
    FileRefDataObject
    Annotations
    @Scaladoc()
  26. def housekeepingMode: Option[HousekeepingMode]

    Configure a housekeeping mode to e.g. clean up, archive and compact partitions. Default is None.

    Definition Classes
    DataObject
    Annotations
    @Scaladoc()
  27. val id: DataObjectId

    A unique identifier for this instance.

    Definition Classes
    SFtpFileRefDataObject → DataObject → SdlConfigObject
  28. implicit val instanceRegistry: InstanceRegistry
  29. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  30. def listPartitions(implicit context: ActionPipelineContext): Seq[PartitionValues]

    List partitions on the data object's root path.

    Definition Classes
    SFtpFileRefDataObject → CanHandlePartitions
    Annotations
    @Scaladoc()
  31. lazy val logger: Logger
    Attributes
    protected
    Definition Classes
    SmartDataLakeLogger
    Annotations
    @transient()
  32. val metadata: Option[DataObjectMetadata]

    Additional metadata for the DataObject.

    Definition Classes
    SFtpFileRefDataObject → DataObject
  33. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  34. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  35. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  36. val partitionLayout: Option[String]

    Definition of the partition layout. Use %<partitionColName>% as a placeholder and * for globs in the layout. Note: if you have globs in the partition layout, it is not possible to write files to this DataObject. Note: if this is a directory, you must add a final backslash to the partition layout.

    Definition Classes
    SFtpFileRefDataObject → FileRefDataObject
  37. val partitions: Seq[String]

    Definition of partition columns.

    Definition Classes
    SFtpFileRefDataObject → CanHandlePartitions
  38. val path: String

    The root path of the files that are handled by this DataObject.

    Definition Classes
    SFtpFileRefDataObject → FileDataObject
  39. def prepare(implicit context: ActionPipelineContext): Unit

    Prepare & test the DataObject's prerequisites.

    This runs during the "prepare" operation of the DAG.

    Definition Classes
    SFtpFileRefDataObject → FileDataObject → DataObject
  40. def relativizePath(filePath: String)(implicit context: ActionPipelineContext): String

    Make a given path relative to this DataObject's base path.

    Definition Classes
    SFtpFileRefDataObject → FileDataObject
  41. val saveMode: SDLSaveMode

    Overwrite or Append new data. When writing partitioned data, this applies only to the partitions concerned.

    Definition Classes
    SFtpFileRefDataObject → FileRefDataObject
  42. val separator: Char

    Default separator for paths.

    Attributes
    protected
    Definition Classes
    FileDataObject
    Annotations
    @Scaladoc()
  43. def startWritingOutputStreams(partitionValues: Seq[PartitionValues] = Seq())(implicit context: ActionPipelineContext): Unit

    This is called before any output stream is created to initialize writing. It is used to apply the SaveMode, e.g. deleting existing partitions.

    Definition Classes
    SFtpFileRefDataObject → CanCreateOutputStream
  44. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  45. def toStringShort: String
    Definition Classes
    DataObject
  46. def translateFileRefs(fileRefs: Seq[FileRef])(implicit context: ActionPipelineContext): Seq[FileRefMapping]

    Given some FileRefs for another DataObject, translate the paths to the root path of this DataObject.

    Definition Classes
    FileRefDataObject
    Annotations
    @Scaladoc()
  47. def validateSchemaHasPartitionCols(df: DataFrame, role: String): Unit

    Validate that the schema of a given Spark DataFrame df contains the specified partition columns.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Annotations
    @Scaladoc()
    Exceptions thrown

    SchemaViolationException if the partition columns are not included.

  48. def validateSchemaHasPrimaryKeyCols(df: DataFrame, primaryKeyCols: Seq[String], role: String): Unit

    Validate that the schema of a given Spark DataFrame df contains the specified primary key columns.

    df

    The data frame to validate.

    role

    role used in exception message. Set to read or write.

    Definition Classes
    CanHandlePartitions
    Annotations
    @Scaladoc()
    Exceptions thrown

    SchemaViolationException if the primary key columns are not included.

  49. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  50. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  51. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

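The write lifecycle of CanCreateOutputStream (startWritingOutputStreams, then one createOutputStream per file, then endWritingOutputStreams) and the read side of CanCreateInputStream can be sketched with an in-memory stand-in. InMemoryFileStore and its simplified signatures are hypothetical, not part of the SDL API:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, InputStream, OutputStream}
import scala.collection.mutable

// Hypothetical in-memory stand-in mimicking the stream contracts; not the SDL implementation.
object InMemoryFileStore {
  private val files = mutable.Map[String, Array[Byte]]()

  // Called once before any output stream is created; here we emulate SaveMode.Overwrite.
  def startWritingOutputStreams(): Unit = files.clear()

  // One output stream per file; the content is stored on close.
  def createOutputStream(path: String): OutputStream = new ByteArrayOutputStream() {
    override def close(): Unit = { super.close(); files(path) = toByteArray }
  }

  // Called once after all output streams have been written (e.g. to create empty partitions).
  def endWritingOutputStreams(): Unit = ()

  def createInputStream(path: String): InputStream = new ByteArrayInputStream(files(path))
}

// Usage: write a file following the lifecycle, then read it back.
InMemoryFileStore.startWritingOutputStreams()
val out = InMemoryFileStore.createOutputStream("data/part-0001.csv")
out.write("a,b\n1,2\n".getBytes("UTF-8"))
out.close()
InMemoryFileStore.endWritingOutputStreams()
val bytes = InMemoryFileStore.createInputStream("data/part-0001.csv").readAllBytes()
```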
Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated
