org.apache.flinkx.api

DataStream

class DataStream[T] extends AnyRef

Annotations
@Public()
Linear Supertypes
AnyRef, Any

Instance Constructors

  1. new DataStream(stream: flink.streaming.api.datastream.DataStream[T])
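
    The Scala DataStream is a thin wrapper around the Java DataStream. A minimal construction sketch (the Java stream here is a placeholder):

      import org.apache.flinkx.api._

      // a Java DataStream obtained from the Java API (placeholder)
      val javaStream: org.apache.flink.streaming.api.datastream.DataStream[String] = ???
      val wrapped: DataStream[String] = new DataStream(javaStream)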

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def addSink(fun: (T) ⇒ Unit): DataStreamSink[T]

    Adds the given sink to this DataStream. Only streams with sinks added will be executed once the StreamExecutionEnvironment.execute(...) method is called.

  5. def addSink(sinkFunction: SinkFunction[T]): DataStreamSink[T]

    Adds the given sink to this DataStream. Only streams with sinks added will be executed once the StreamExecutionEnvironment.execute(...) method is called.
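
    Example (a minimal sketch; assumes the flinkx imports below provide the Scala environment and implicit TypeInformation instances):

      import org.apache.flinkx.api._
      import org.apache.flinkx.api.serializers._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      val stream = env.fromElements("a", "b", "c")

      // function-style sink; nothing runs until execute() is called
      stream.addSink(element => println(element))
      env.execute("addSink example")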

  6. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  7. def assignAscendingTimestamps(extractor: (T) ⇒ Long): DataStream[T]

    Assigns timestamps to the elements in the data stream and periodically creates watermarks to signal event time progress.

    This method is a shortcut for data streams where the element timestamps are known to be monotonically ascending within each parallel stream. In that case, the system can generate watermarks automatically and perfectly by tracking the ascending timestamps.

    For cases where the timestamps are not monotonically increasing, use the more general methods assignTimestampsAndWatermarks(AssignerWithPeriodicWatermarks) and assignTimestampsAndWatermarks(AssignerWithPunctuatedWatermarks).

    Annotations
    @PublicEvolving()
  8. def assignTimestampsAndWatermarks(watermarkStrategy: WatermarkStrategy[T]): DataStream[T]

    Assigns timestamps to the elements in the data stream and generates watermarks to signal event time progress. The given WatermarkStrategy is used to create a TimestampAssigner and an org.apache.flink.api.common.eventtime.WatermarkGenerator.

    For each event in the data stream, the TimestampAssigner#extractTimestamp(T, long) method is called to assign an event timestamp, and the WatermarkGenerator#onEvent(T, long, WatermarkOutput) method will be called.

    Periodically (as defined by ExecutionConfig#getAutoWatermarkInterval()), the WatermarkGenerator#onPeriodicEmit(WatermarkOutput) method will be called.

    Common watermark generation patterns can be found as static methods in the org.apache.flink.api.common.eventtime.WatermarkStrategy class.
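
    Example (a minimal sketch; Click and the clicks stream are hypothetical, and the 5-second out-of-orderness bound is arbitrary):

      import java.time.Duration
      import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}

      case class Click(user: String, timestampMillis: Long)

      val withWatermarks = clicks.assignTimestampsAndWatermarks(
        WatermarkStrategy
          .forBoundedOutOfOrderness[Click](Duration.ofSeconds(5))
          .withTimestampAssigner(new SerializableTimestampAssigner[Click] {
            override def extractTimestamp(e: Click, recordTs: Long): Long = e.timestampMillis
          })
      )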

  9. def broadcast(broadcastStateDescriptors: MapStateDescriptor[_, _]*): BroadcastStream[T]

    Sets the partitioning of the DataStream so that the output elements are broadcast to every parallel instance of the next operation. In addition, it implicitly creates as many broadcast states as the specified descriptors, which can be used to store the elements of the stream.

    broadcastStateDescriptors

    the descriptors of the broadcast states to create.

    returns

    A BroadcastStream which can be used in the DataStream.connect(BroadcastStream) to create a BroadcastConnectedStream for further processing of the elements.
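
    Example (a minimal sketch; the rules stream and state name are hypothetical):

      import org.apache.flink.api.common.state.MapStateDescriptor
      import org.apache.flink.api.common.typeinfo.BasicTypeInfo

      val ruleDescriptor = new MapStateDescriptor[String, String](
        "rules", BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO)

      // every parallel instance of the downstream operator sees all rule elements
      val broadcastRules = rulesStream.broadcast(ruleDescriptor)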

    Annotations
    @PublicEvolving()
  10. def broadcast: DataStream[T]

    Sets the partitioning of the DataStream so that the output tuples are broadcast to every parallel instance of the next component.

  11. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @HotSpotIntrinsicCandidate()
  12. def coGroup[T2](otherStream: DataStream[T2]): CoGroupedStreams[T, T2]

    Creates a co-group operation. See CoGroupedStreams for an example of how the keys and window can be specified.
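
    Example (a minimal sketch; left and right are hypothetical streams of a case class with a key field, and implicit TypeInformation instances are assumed in scope):

      import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
      import org.apache.flink.streaming.api.windowing.time.Time

      // pair up the elements of both sides that share a key within a 10-second window
      val coGrouped = left
        .coGroup(right)
        .where(_.key)
        .equalTo(_.key)
        .window(TumblingEventTimeWindows.of(Time.seconds(10)))
        .apply((ls, rs) => (ls.size, rs.size))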

  13. def connect[R](broadcastStream: BroadcastStream[R]): BroadcastConnectedStream[T, R]

    Creates a new BroadcastConnectedStream by connecting the current DataStream or KeyedStream with a BroadcastStream.

    The latter can be created using the broadcast(MapStateDescriptor[]) method.

    The resulting stream can be further processed using the broadcastConnectedStream.process(myFunction) method, where myFunction can be either a org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction or a org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction, depending on whether the current stream is a KeyedStream.

    broadcastStream

    The broadcast stream with the broadcast state to be connected with this stream.

    returns

    The BroadcastConnectedStream.
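
    Example (a minimal sketch continuing the broadcast(...) example above; the enrichment logic is illustrative):

      import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction
      import org.apache.flink.util.Collector

      val enriched = events
        .connect(broadcastRules)
        .process(new BroadcastProcessFunction[String, String, String] {
          override def processElement(
              value: String,
              ctx: BroadcastProcessFunction[String, String, String]#ReadOnlyContext,
              out: Collector[String]): Unit =
            out.collect(value + "/" + ctx.getBroadcastState(ruleDescriptor).get("current"))

          override def processBroadcastElement(
              rule: String,
              ctx: BroadcastProcessFunction[String, String, String]#Context,
              out: Collector[String]): Unit =
            ctx.getBroadcastState(ruleDescriptor).put("current", rule)
        })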

    Annotations
    @PublicEvolving()
  14. def connect[T2](dataStream: DataStream[T2]): ConnectedStreams[T, T2]

    Creates a new ConnectedStreams by connecting DataStream outputs of different type with each other. The DataStreams connected using this operator can be used with CoFunctions.

  15. def countWindowAll(size: Long): AllWindowedStream[T, GlobalWindow]

    Windows this DataStream into tumbling count windows.

    Note: This operation can be inherently non-parallel since all elements have to pass through the same operator instance. (Only for special cases, such as aligned time windows, is it possible to perform this operation in parallel.)

    size

    The size of the windows in number of elements.

  16. def countWindowAll(size: Long, slide: Long): AllWindowedStream[T, GlobalWindow]

    Windows this DataStream into sliding count windows.

    Note: This operation can be inherently non-parallel since all elements have to pass through the same operator instance. (Only for special cases, such as aligned time windows, is it possible to perform this operation in parallel.)

    size

    The size of the windows in number of elements.

    slide

    The slide interval in number of elements.
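
    Example (a minimal sketch over a hypothetical numeric stream):

      // tumbling count windows of 100 elements
      val tumblingSums = numbers.countWindowAll(100).reduce(_ + _)

      // windows of 100 elements, evaluated every 10 elements
      val slidingSums = numbers.countWindowAll(100, 10).reduce(_ + _)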

  17. def dataType: TypeInformation[T]

    Returns the TypeInformation for the elements of this DataStream.

  18. def disableChaining(): DataStream[T]

    Turns off chaining for this operator so thread co-location will not be used as an optimization. Chaining can be turned off for the whole job via StreamExecutionEnvironment.disableOperatorChaining(); however, this is not advised, for performance reasons.

    Annotations
    @PublicEvolving()
  19. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  20. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  21. def executeAndCollect(jobExecutionName: String, limit: Int): List[T]

    Triggers the distributed execution of the streaming dataflow and returns the elements of the given DataStream as a list.

    The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.

  22. def executeAndCollect(limit: Int): List[T]

    Triggers the distributed execution of the streaming dataflow and returns the elements of the given DataStream as a list.

    The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.

  23. def executeAndCollect(jobExecutionName: String): CloseableIterator[T]

    Triggers the distributed execution of the streaming dataflow and returns an iterator over the elements of the given DataStream.

    The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.

    IMPORTANT The returned iterator must be closed to free all cluster resources.

  24. def executeAndCollect(): CloseableIterator[T]

    Triggers the distributed execution of the streaming dataflow and returns an iterator over the elements of the given DataStream.

    The DataStream application is executed in the regular distributed manner on the target environment, and the events from the stream are polled back to this application process and thread through Flink's REST API.

    IMPORTANT The returned iterator must be closed to free all cluster resources.
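
    Example (a minimal sketch; stream is a hypothetical DataStream):

      // bounded collection into a local List
      val firstTen = stream.executeAndCollect(limit = 10)

      // unbounded variant: iterate, and always close to free cluster resources
      val it = stream.executeAndCollect()
      try {
        while (it.hasNext) println(it.next())
      } finally {
        it.close()
      }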

  25. def executionConfig: ExecutionConfig

    Returns the execution config.

  26. def executionEnvironment: StreamExecutionEnvironment

    Returns the StreamExecutionEnvironment associated with this data stream.

  27. def filter(fun: (T) ⇒ Boolean): DataStream[T]

    Creates a new DataStream that contains only the elements satisfying the given filter predicate.

  28. def filter(filter: FilterFunction[T]): DataStream[T]

    Creates a new DataStream that contains only the elements satisfying the given filter predicate.

  29. def flatMap[R](fun: (T) ⇒ TraversableOnce[R])(implicit arg0: TypeInformation[R]): DataStream[R]

    Creates a new DataStream by applying the given function to every element and flattening the results.

  30. def flatMap[R](fun: (T, Collector[R]) ⇒ Unit)(implicit arg0: TypeInformation[R]): DataStream[R]

    Creates a new DataStream by applying the given function to every element and flattening the results.

  31. def flatMap[R](flatMapper: FlatMapFunction[T, R])(implicit arg0: TypeInformation[R]): DataStream[R]

    Creates a new DataStream by applying the given function to every element and flattening the results.
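
    Example (a minimal sketch; lines is a hypothetical DataStream[String] and implicit TypeInformation instances are assumed in scope):

      import org.apache.flink.util.Collector

      // function form: one input element expands into zero or more outputs
      val words = lines.flatMap(_.split("\\s+"))

      // Collector form: emit elements explicitly
      val nonEmpty = lines.flatMap[String] { (line, out: Collector[String]) =>
        if (line.nonEmpty) out.collect(line)
      }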

  32. def forward: DataStream[T]

    Sets the partitioning of the DataStream so that the output tuples are forwarded to the local subtask of the next component (whenever possible).

  33. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  34. def getSideOutput[X](tag: OutputTag[X])(implicit arg0: TypeInformation[X]): DataStream[X]
    Annotations
    @PublicEvolving()
  35. def global: DataStream[T]

    Sets the partitioning of the DataStream so that the output values all go to the first instance of the next processing operator. Use this setting with care since it might cause a serious performance bottleneck in the application.

    Annotations
    @PublicEvolving()
  36. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  37. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  38. def iterate[R, F](stepFunction: (ConnectedStreams[T, F]) ⇒ (DataStream[F], DataStream[R]), maxWaitTimeMillis: Long)(implicit arg0: TypeInformation[F]): DataStream[R]

    Initiates an iterative part of the program that creates a loop by feeding back data streams. To create a streaming iteration the user needs to define a transformation that creates two DataStreams. The first one is the output that will be fed back to the start of the iteration and the second is the output stream of the iterative part.

    The input stream of the iterate operator and the feedback stream will be treated as a ConnectedStreams where the input is connected with the feedback stream.

    This allows the user to distinguish standard input from feedback inputs.

    stepfunction: initialStream => (feedback, output)

    The user must set the max waiting time for the iteration head. If no data is received within the set time, the stream terminates. If this parameter is set to 0, the iteration sources will run indefinitely, so the job must be killed to stop.

    Annotations
    @PublicEvolving()
  39. def iterate[R](stepFunction: (DataStream[T]) ⇒ (DataStream[T], DataStream[R]), maxWaitTimeMillis: Long = 0): DataStream[R]

    Initiates an iterative part of the program that creates a loop by feeding back data streams. To create a streaming iteration the user needs to define a transformation that creates two DataStreams. The first one is the output that will be fed back to the start of the iteration and the second is the output stream of the iterative part.

    stepfunction: initialStream => (feedback, output)

    A common pattern is to use output splitting to create the feedback and output DataStreams. Please see the side-output support of ProcessFunction on the DataStream.

    By default a DataStream with iteration will never terminate, but the user can use the maxWaitTime parameter to set a max waiting time for the iteration head. If no data is received within the set time, the stream terminates.

    Parallelism of the feedback stream must match the parallelism of the original stream. Please refer to the setParallelism method for parallelism modification.
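
    Example (a minimal sketch over a hypothetical DataStream[Long]; implicit TypeInformation instances are assumed in scope):

      // decrement positive values through the feedback loop; emit them once they reach zero
      val result = source.iterate(
        (iteration: DataStream[Long]) => {
          val feedback = iteration.filter(_ > 0).map(_ - 1)
          val output   = iteration.filter(_ <= 0)
          (feedback, output)
        },
        maxWaitTimeMillis = 5000
      )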

    Annotations
    @PublicEvolving()
  40. def javaStream: flink.streaming.api.datastream.DataStream[T]

    Gets the underlying java DataStream object.

  41. def join[T2](otherStream: DataStream[T2]): JoinedStreams[T, T2]

    Creates a join operation. See JoinedStreams for an example of how the keys and window can be specified.
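
    Example (a minimal sketch, following the same where/equalTo/window shape as coGroup above; names and window size are hypothetical):

      import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
      import org.apache.flink.streaming.api.windowing.time.Time

      val joined = left
        .join(right)
        .where(_.key)
        .equalTo(_.key)
        .window(TumblingEventTimeWindows.of(Time.seconds(10)))
        .apply((l, r) => (l, r))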

  42. def keyBy[K](fun: KeySelector[T, K])(implicit arg0: TypeInformation[K]): KeyedStream[T, K]

    Groups the elements of a DataStream by the given K key to be used with grouped operators like grouped reduce or grouped aggregations.

  43. def keyBy[K](fun: (T) ⇒ K)(implicit arg0: TypeInformation[K]): KeyedStream[T, K]

    Groups the elements of a DataStream by the given K key to be used with grouped operators like grouped reduce or grouped aggregations.
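
    Example (a minimal sketch; Event and the events stream are hypothetical, and an implicit TypeInformation for the key type is assumed in scope):

      case class Event(userId: String, amount: Double)

      val byUser = events.keyBy(_.userId)
      val totals = byUser.reduce((a, b) => Event(a.userId, a.amount + b.amount))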

  44. def map[R](mapper: MapFunction[T, R])(implicit arg0: TypeInformation[R]): DataStream[R]

    Creates a new DataStream by applying the given function to every element of this DataStream.

  45. def map[R](fun: (T) ⇒ R)(implicit arg0: TypeInformation[R]): DataStream[R]

    Creates a new DataStream by applying the given function to every element of this DataStream.
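
    Example (a minimal sketch over a hypothetical numeric stream):

      val doubled = numbers.map(_ * 2)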

  46. def minResources: ResourceSpec

    Returns the minimum resources of this operation.

    Annotations
    @PublicEvolving()
  47. def name(name: String): DataStream[T]

    Sets the name of the current data stream. This name is used by the visualization and logging during runtime.

    returns

    The named operator

  48. def name: String

    Gets the name of the current data stream. This name is used by the visualization and logging during runtime.

    returns

    Name of the stream.

  49. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  50. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  51. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  52. def parallelism: Int

    Returns the parallelism of this operation.

  53. def partitionCustom[K](partitioner: Partitioner[K], fun: (T) ⇒ K)(implicit arg0: TypeInformation[K]): DataStream[T]

    Partitions a DataStream on the key returned by the selector, using a custom partitioner. This method takes the key selector to get the key to partition on, and a partitioner that accepts the key type.

    Note: This method works only on single field keys, i.e. the selector cannot return tuples of fields.
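
    Example (a minimal sketch; the hash-based partitioner body is illustrative):

      import org.apache.flink.api.common.functions.Partitioner

      val byHash = new Partitioner[String] {
        override def partition(key: String, numPartitions: Int): Int =
          math.abs(key.hashCode % numPartitions)
      }

      val partitioned = events.partitionCustom(byHash, _.userId)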

  54. def preferredResources: ResourceSpec

    Returns the preferred resources of this operation.

    Annotations
    @PublicEvolving()
  55. def print(sinkIdentifier: String): DataStreamSink[T]

    Writes a DataStream to the standard output stream (stdout). For each element of the DataStream the result of AnyRef.toString() is written.

    sinkIdentifier

    The string to prefix the output with.

    returns

    The closed DataStream.

    Annotations
    @PublicEvolving()
  56. def print(): DataStreamSink[T]

    Writes a DataStream to the standard output stream (stdout). For each element of the DataStream the result of AnyRef.toString() is written.

    Annotations
    @PublicEvolving()
  57. def printToErr(sinkIdentifier: String): DataStreamSink[T]

    Writes a DataStream to the standard error stream (stderr). For each element of the DataStream the result of AnyRef.toString() is written.

    sinkIdentifier

    The string to prefix the output with.

    returns

    The closed DataStream.

    Annotations
    @PublicEvolving()
  58. def printToErr(): DataStreamSink[T]

    Writes a DataStream to the standard error stream (stderr). For each element of the DataStream the result of AnyRef.toString() is written.

    returns

    The closed DataStream.

    Annotations
    @PublicEvolving()
  59. def process[R](processFunction: ProcessFunction[T, R])(implicit arg0: TypeInformation[R]): DataStream[R]

    Applies the given ProcessFunction on the input stream, thereby creating a transformed output stream.

    The function will be called for every element in the stream and can produce zero or more output elements.

    processFunction

    The ProcessFunction that is called for each element in the stream.
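
    Example (a minimal sketch; routes unparsable lines to a side output, which also illustrates getSideOutput; implicit TypeInformation instances are assumed in scope):

      import org.apache.flink.streaming.api.functions.ProcessFunction
      import org.apache.flink.util.Collector

      val errors = OutputTag[String]("errors")

      val parsed = lines.process(new ProcessFunction[String, Int] {
        override def processElement(
            value: String,
            ctx: ProcessFunction[String, Int]#Context,
            out: Collector[Int]): Unit =
          try out.collect(value.trim.toInt)
          catch { case _: NumberFormatException => ctx.output(errors, value) }
      })

      val badLines = parsed.getSideOutput(errors)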

    Annotations
    @PublicEvolving()
  60. def rebalance: DataStream[T]

    Sets the partitioning of the DataStream so that the output tuples are distributed evenly to the next component.

  61. def rescale: DataStream[T]

    Sets the partitioning of the DataStream so that the output tuples are distributed evenly to a subset of instances of the downstream operation.

    The subset of downstream operations to which the upstream operation sends elements depends on the degree of parallelism of both the upstream and downstream operation. For example, if the upstream operation has parallelism 2 and the downstream operation has parallelism 4, then one upstream operation would distribute elements to two downstream operations while the other upstream operation would distribute to the other two downstream operations. If, on the other hand, the downstream operation has parallelism 2 while the upstream operation has parallelism 4 then two upstream operations will distribute to one downstream operation while the other two upstream operations will distribute to the other downstream operations.

    In cases where the different parallelisms are not multiples of each other, one or several downstream operations will have a differing number of inputs from upstream operations.

    Annotations
    @PublicEvolving()
  62. def setBufferTimeout(timeoutMillis: Long): DataStream[T]

    Sets the maximum time frequency (ms) for the flushing of the output buffer. By default the output buffers flush only when they are full.

    timeoutMillis

    The maximum time between two output flushes.

    returns

    The operator with buffer timeout set.

  63. def setDescription(description: String): DataStream[T]

    Sets the description of this data stream.

    The description is used in the JSON plan and the web UI, but not in logging and metrics, where only the name is available. A description is expected to provide detailed information about this operation, while a name is expected to be simpler, providing summary information only, so that logging messages and metric tags stay user-friendly without losing useful detail for debugging.

    returns

    The operator with new description

    Annotations
    @PublicEvolving()
  64. def setMaxParallelism(maxParallelism: Int): DataStream[T]
  65. def setParallelism(parallelism: Int): DataStream[T]

    Sets the parallelism of this operation. This must be at least 1.

  66. def setUidHash(hash: String): DataStream[T]

    Sets a user-provided hash for this operator. This will be used AS IS to create the JobVertexID.

    The user-provided hash is an alternative to the generated hashes and is used when identifying an operator through the default hash mechanics fails (e.g. because of changes between Flink versions).

    Important: this should only be used as a workaround or for troubleshooting. The provided hash needs to be unique per transformation and job; otherwise, job submission will fail. Furthermore, you cannot assign a user-specified hash to intermediate nodes in an operator chain, and trying to do so will make your job fail.

    hash

    the user provided hash for this operator.

    returns

    The operator with the user provided hash.

    Annotations
    @PublicEvolving()
  67. def shuffle: DataStream[T]

    Sets the partitioning of the DataStream so that the output tuples are shuffled to the next component.

    Annotations
    @PublicEvolving()
  68. def sinkTo(sink: Sink[T]): DataStreamSink[T]

    Adds the given sink to this DataStream. Only streams with sinks added will be executed once the StreamExecutionEnvironment.execute(...) method is called.
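
    Example (a minimal sketch using the unified FileSink; assumes the flink-connector-files dependency, and the output path is illustrative):

      import org.apache.flink.api.common.serialization.SimpleStringEncoder
      import org.apache.flink.connector.file.sink.FileSink
      import org.apache.flink.core.fs.Path

      val fileSink = FileSink
        .forRowFormat(new Path("/tmp/out"), new SimpleStringEncoder[String]("UTF-8"))
        .build()

      lines.sinkTo(fileSink)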

  69. def sinkTo(sink: Sink[T, _, _, _]): DataStreamSink[T]

    Adds the given sink to this DataStream. Only streams with sinks added will be executed once the StreamExecutionEnvironment.execute(...) method is called.

  70. def slotSharingGroup(slotSharingGroup: SlotSharingGroup): DataStream[T]

    Sets the slot sharing group of this operation. Parallel instances of operations that are in the same slot sharing group will be co-located in the same TaskManager slot, if possible.

    Operations inherit the slot sharing group of input operations if all input operations are in the same slot sharing group and no slot sharing group was explicitly specified.

    Initially an operation is in the default slot sharing group. An operation can be put into the default group explicitly by setting the slot sharing group to "default".

    slotSharingGroup

    The slot sharing group, containing its name and resource spec.

    Annotations
    @PublicEvolving()
  71. def slotSharingGroup(slotSharingGroup: String): DataStream[T]

    Sets the slot sharing group of this operation. Parallel instances of operations that are in the same slot sharing group will be co-located in the same TaskManager slot, if possible.

    Operations inherit the slot sharing group of input operations if all input operations are in the same slot sharing group and no slot sharing group was explicitly specified.

    Initially an operation is in the default slot sharing group. An operation can be put into the default group explicitly by setting the slot sharing group to "default".

    slotSharingGroup

    The slot sharing group name.

    Annotations
    @PublicEvolving()
  72. def startNewChain(): DataStream[T]

    Starts a new task chain beginning at this operator. This operator will not be chained (thread co-located for increased performance) to any previous tasks even if possible.

    Annotations
    @PublicEvolving()
  73. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  74. def toString(): String
    Definition Classes
    AnyRef → Any
  75. def transform[R](operatorName: String, operator: OneInputStreamOperator[T, R])(implicit arg0: TypeInformation[R]): DataStream[R]

    Transforms the DataStream by using a custom OneInputStreamOperator.

    R

    the type of elements emitted by the operator

    operatorName

    name of the operator, for logging purposes

    operator

    the object containing the transformation logic

    Annotations
    @PublicEvolving()
  76. def uid(uid: String): DataStream[T]

    Sets an ID for this operator.

    The specified ID is used to assign the same operator ID across job submissions (for example when starting a job from a savepoint).

    Important: this ID needs to be unique per transformation and job. Otherwise, job submission will fail.

    uid

    The unique user-specified ID of this transformation.

    returns

    The operator with the specified ID.
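
    Example (a minimal sketch; combining uid with name and setParallelism is a common pattern, and the enrichment map is illustrative):

      val enriched = events
        .map(e => e.copy(amount = e.amount * 1.1))
        .uid("enrich-v1")   // stable across submissions; savepoint-friendly
        .name("enrich")     // shown in the web UI and logs
        .setParallelism(4)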

    Annotations
    @PublicEvolving()
  77. def union(dataStreams: DataStream[T]*): DataStream[T]

    Creates a new DataStream by merging DataStream outputs of the same type with each other. The DataStreams merged using this operator will be transformed simultaneously.
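
    Example (a minimal sketch; all three streams carry the same element type):

      val all = streamA.union(streamB, streamC)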

  78. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  79. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  80. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  81. def windowAll[W <: Window](assigner: WindowAssigner[_ >: T, W]): AllWindowedStream[T, W]

    Windows this data stream into an AllWindowedStream, which evaluates windows over a non-keyed stream. Elements are put into windows by a WindowAssigner; the grouping of elements is done by window.

    A org.apache.flink.streaming.api.windowing.triggers.Trigger can be defined to specify when windows are evaluated. However, WindowAssigners have a default Trigger that is used if a Trigger is not specified.

    Note: This operation can be inherently non-parallel since all elements have to pass through the same operator instance. (Only for special cases, such as aligned time windows, is it possible to perform this operation in parallel.)

    assigner

    The WindowAssigner that assigns elements to windows.

    returns

    The trigger windows data stream.
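
    Example (a minimal sketch; counts elements per minute of processing time, necessarily at parallelism 1; implicit TypeInformation instances are assumed in scope):

      import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows
      import org.apache.flink.streaming.api.windowing.time.Time

      val perMinute = events
        .map(_ => 1L)
        .windowAll(TumblingProcessingTimeWindows.of(Time.minutes(1)))
        .reduce(_ + _)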

    Annotations
    @PublicEvolving()
  82. def writeToSocket(hostname: String, port: Integer, schema: SerializationSchema[T]): DataStreamSink[T]

    Writes the DataStream to a socket as a byte array. The format of the output is specified by a SerializationSchema.

    Annotations
    @PublicEvolving()
  83. def writeUsingOutputFormat(format: OutputFormat[T]): DataStreamSink[T]

    Writes a DataStream using the given OutputFormat.

    Annotations
    @PublicEvolving()

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated
  2. def getExecutionConfig: ExecutionConfig

    Returns the execution config.

    Annotations
    @deprecated @PublicEvolving()
    Deprecated
  3. def getExecutionEnvironment: StreamExecutionEnvironment

    Returns the StreamExecutionEnvironment associated with the current DataStream.

    returns

    associated execution environment

    Annotations
    @deprecated @PublicEvolving()
    Deprecated
  4. def getName: String

    Gets the name of the current data stream. This name is used by the visualization and logging during runtime.

    returns

    Name of the stream.

    Annotations
    @deprecated @PublicEvolving()
    Deprecated
  5. def getParallelism: Int

    Returns the parallelism of this operation.

    Annotations
    @deprecated @PublicEvolving()
    Deprecated
  6. def getType(): TypeInformation[T]

    Returns the TypeInformation for the elements of this DataStream.

    Annotations
    @deprecated @PublicEvolving()
    Deprecated
  7. def keyBy(firstField: String, otherFields: String*): KeyedStream[T, Tuple]

    Groups the elements of a DataStream by the given field expressions to be used with grouped operators like grouped reduce or grouped aggregations.

    Annotations
    @deprecated
    Deprecated

    use DataStream.keyBy(KeySelector) instead

  8. def keyBy(fields: Int*): KeyedStream[T, Tuple]

    Groups the elements of a DataStream by the given key positions (for tuple/array types) to be used with grouped operators like grouped reduce or grouped aggregations.

    Annotations
    @deprecated
    Deprecated

    use DataStream.keyBy(KeySelector) instead

  9. def partitionCustom[K](partitioner: Partitioner[K], field: String)(implicit arg0: TypeInformation[K]): DataStream[T]

    Partitions a POJO DataStream on the specified key fields using a custom partitioner. This method takes the key expression to partition on, and a partitioner that accepts the key type.

    Note: This method works only on single field keys.

    Annotations
    @deprecated
    Deprecated

    Use partitionCustom(partitioner: Partitioner[K], fun: (T) ⇒ K) instead

  10. def partitionCustom[K](partitioner: Partitioner[K], field: Int)(implicit arg0: TypeInformation[K]): DataStream[T]

    Partitions a tuple DataStream on the specified key fields using a custom partitioner. This method takes the key position to partition on, and a partitioner that accepts the key type.

    Note: This method works only on single field keys.

    Annotations
    @deprecated
    Deprecated

    Use partitionCustom(partitioner: Partitioner[K], fun: (T) ⇒ K) instead
