class StreamExecutionEnvironment extends AnyRef
- Annotations
- @Public()
Instance Constructors
- new StreamExecutionEnvironment(javaEnv: flink.streaming.api.environment.StreamExecutionEnvironment)
Value Members
- final def !=(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- final def ##(): Int
  - Definition Classes: AnyRef → Any
- final def ==(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def addDefaultKryoSerializer(type: Class[_], serializerClass: Class[_ <: Serializer[_]]): Unit
  Adds a new Kryo default serializer to the Runtime.
  - type: The class of the types serialized with the given serializer.
  - serializerClass: The class of the serializer to use.
- def addDefaultKryoSerializer[T <: Serializer[_] with Serializable](type: Class[_], serializer: T): Unit
  Adds a new Kryo default serializer to the Runtime. Note that the serializer instance must be serializable (as defined by java.io.Serializable), because it may be distributed to the worker nodes by Java serialization.
  - type: The class of the types serialized with the given serializer.
  - serializer: The serializer to use.
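For illustration, a minimal sketch with a hypothetical event type and Kryo serializer (MyEvent and MyEventSerializer are placeholders, not part of the Flink API):

```scala
import com.esotericsoftware.kryo.{Kryo, Serializer}
import com.esotericsoftware.kryo.io.{Input, Output}
import org.apache.flink.streaming.api.scala._

// Hypothetical domain type and serializer, for illustration only.
class MyEvent(var id: Long) { def this() = this(0L) }

class MyEventSerializer extends Serializer[MyEvent] with Serializable {
  override def write(kryo: Kryo, output: Output, e: MyEvent): Unit = output.writeLong(e.id)
  override def read(kryo: Kryo, input: Input, clazz: Class[MyEvent]): MyEvent =
    new MyEvent(input.readLong())
}

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Class-based registration: the serializer is instantiated on the workers.
env.addDefaultKryoSerializer(classOf[MyEvent], classOf[MyEventSerializer])

// Instance-based registration: the instance itself must be Serializable.
env.addDefaultKryoSerializer(classOf[MyEvent], new MyEventSerializer)
```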
- def addSource[T](function: (SourceContext[T]) ⇒ Unit)(implicit arg0: TypeInformation[T]): DataStream[T]
  Creates a DataStream using a user-defined source function for arbitrary source functionality.
- def addSource[T](function: SourceFunction[T])(implicit arg0: TypeInformation[T]): DataStream[T]
  Creates a DataStream using a user-defined source function for arbitrary source functionality. By default, sources have a parallelism of 1. To enable parallel execution, the user-defined source should implement ParallelSourceFunction or extend RichParallelSourceFunction. In these cases the resulting source will have the parallelism of the environment. To change this afterwards, call DataStreamSource.setParallelism(int).
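A minimal sketch of the function-based variant, emitting a fixed range and then finishing (the values are arbitrary):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Non-parallel source (parallelism 1) that emits 0..99 and finishes.
val numbers: DataStream[Long] = env.addSource { ctx =>
  var i = 0L
  while (i < 100) {
    ctx.collect(i)
    i += 1
  }
}
```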
- final def asInstanceOf[T0]: T0
  - Definition Classes: Any
- def clearJobListeners(): Unit
  Clears all registered JobListeners.
  - Annotations: @PublicEvolving()
- def clone(): AnyRef
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native() @HotSpotIntrinsicCandidate()
- def configure(configuration: ReadableConfig): Unit
  Sets all relevant options contained in the ReadableConfig, such as org.apache.flink.streaming.api.environment.StreamPipelineOptions#TIME_CHARACTERISTIC. It will reconfigure the StreamExecutionEnvironment, org.apache.flink.api.common.ExecutionConfig and org.apache.flink.streaming.api.environment.CheckpointConfig.
  It will change the value of a setting only if a corresponding option was set in the configuration. If a key is not present, the current value of a field will remain untouched.
  - configuration: a configuration to read the values from
  - Annotations: @PublicEvolving()
- def configure(configuration: ReadableConfig, classLoader: ClassLoader): Unit
  Sets all relevant options contained in the ReadableConfig, such as org.apache.flink.streaming.api.environment.StreamPipelineOptions#TIME_CHARACTERISTIC. It will reconfigure the StreamExecutionEnvironment, org.apache.flink.api.common.ExecutionConfig and org.apache.flink.streaming.api.environment.CheckpointConfig.
  It will change the value of a setting only if a corresponding option was set in the configuration. If a key is not present, the current value of a field will remain untouched.
  - configuration: a configuration to read the values from
  - classLoader: a class loader to use when loading classes
  - Annotations: @PublicEvolving()
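A minimal sketch; the option keys shown are assumptions about which keys the environment understands in a given Flink version, and any other supported option works the same way:

```scala
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

val conf = new Configuration()
// Example keys; anything understood by the environment, ExecutionConfig,
// or CheckpointConfig can be set this way.
conf.setString("pipeline.max-parallelism", "128")
conf.setString("execution.checkpointing.interval", "10 s")

// Only keys present in `conf` are applied; everything else stays untouched.
env.configure(conf, getClass.getClassLoader)
```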
- def createInput[T](inputFormat: InputFormat[T, _])(implicit arg0: TypeInformation[T]): DataStream[T]
  Generic method to create an input data stream with a specific input format. Since all data streams need specific information about their types, this method needs to determine the type of the data produced by the input format. It will attempt to determine the data type by reflection, unless the input format implements the ResultTypeQueryable interface.
  - Annotations: @PublicEvolving()
- def disableOperatorChaining(): StreamExecutionEnvironment
  Disables operator chaining for streaming operators. Operator chaining allows non-shuffle operations to be co-located in the same thread, fully avoiding serialization and de-serialization.
  - Annotations: @PublicEvolving()
- def enableChangelogStateBackend(enabled: Boolean): StreamExecutionEnvironment
  Enables the changelog for the current state backend. The changelog allows operators to persist state changes in a very fine-grained manner. Currently, the changelog only applies to keyed state, so non-keyed operator state and channel state are persisted as usual. The "state" here refers to keyed state. Details are as follows:
  - Stateful operators write state changes to the log (logging the state), in addition to applying them to the state tables in RocksDB or the in-memory hash table.
  - An operator can acknowledge a checkpoint as soon as the changes in the log have reached the durable checkpoint storage.
  - The state tables are persisted periodically, independently of the checkpoints. We call this the materialization of the state on the checkpoint storage.
  - Once the state is materialized on checkpoint storage, the state changelog can be truncated to the corresponding point.
  This establishes a way to drastically reduce the checkpoint interval for streaming applications across state backends. For more details, see FLIP-158.
  If this method is not called explicitly, no preference for enabling the changelog is expressed, and the setting can be supplied at the different config levels (job/local/cluster) instead.
  - enabled: true to explicitly enable the changelog for the state backend, false to explicitly disable it.
  - returns: This StreamExecutionEnvironment itself, to allow chaining of function calls.
  - Annotations: @PublicEvolving()
  - See also: #isChangelogStateBackendEnabled()
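Usage is a single toggle; a brief sketch:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.util.TernaryBoolean

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Explicitly opt in to changelog-based state persistence (FLIP-158).
env.enableChangelogStateBackend(true)

// Reads back TRUE/FALSE, or UNDEFINED when never set explicitly.
val status: TernaryBoolean = env.isChangelogStateBackendEnabled
```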
- def enableCheckpointing(interval: Long): StreamExecutionEnvironment
  Enables checkpointing for the streaming job. The distributed state of the streaming dataflow will be periodically snapshotted. In case of a failure, the streaming dataflow will be restarted from the latest completed checkpoint.
  The job draws checkpoints periodically, at the given interval. The program will use CheckpointingMode.EXACTLY_ONCE mode. The state will be stored in the configured state backend.
  NOTE: Checkpointing iterative streaming dataflows is not properly supported at the moment. For that reason, iterative jobs will not be started if used with enabled checkpointing. To override this mechanism, use the enableCheckpointing variant that additionally takes a CheckpointingMode and a boolean force flag.
  - interval: Time interval between state checkpoints in milliseconds.
- def enableCheckpointing(interval: Long, mode: CheckpointingMode): StreamExecutionEnvironment
  Enables checkpointing for the streaming job. The distributed state of the streaming dataflow will be periodically snapshotted. In case of a failure, the streaming dataflow will be restarted from the latest completed checkpoint.
  The job draws checkpoints periodically, at the given interval. The system uses the given CheckpointingMode for the checkpointing ("exactly once" vs "at least once"). The state will be stored in the configured state backend.
  NOTE: Checkpointing iterative streaming dataflows is not properly supported at the moment. For that reason, iterative jobs will not be started if used with enabled checkpointing. To override this mechanism, use the enableCheckpointing variant that additionally takes a CheckpointingMode and a boolean force flag.
  - interval: Time interval between state checkpoints in milliseconds.
  - mode: The checkpointing mode, selecting between "exactly once" and "at least once" guarantees.
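A minimal sketch (the interval and pause values are arbitrary):

```scala
import org.apache.flink.streaming.api.CheckpointingMode
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Checkpoint every 10 s with exactly-once guarantees.
env.enableCheckpointing(10000L, CheckpointingMode.EXACTLY_ONCE)

// Further tuning goes through the checkpoint config.
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(500L)
```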
- final def eq(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- def equals(arg0: Any): Boolean
  - Definition Classes: AnyRef → Any
- def execute(jobName: String): JobExecutionResult
  Triggers the program execution. The environment will execute all parts of the program that have resulted in a "sink" operation. Sink operations are for example printing results or forwarding them to a message queue.
  The program execution will be logged and displayed with the provided name.
  - returns: The result of the job execution, containing elapsed time and accumulators.
- def execute(): JobExecutionResult
  Triggers the program execution. The environment will execute all parts of the program that have resulted in a "sink" operation. Sink operations are for example printing results or forwarding them to a message queue.
  The program execution will be logged and displayed with a generated default name.
  - returns: The result of the job execution, containing elapsed time and accumulators.
- def executeAsync(jobName: String): JobClient
  Triggers the program execution asynchronously. The environment will execute all parts of the program that have resulted in a "sink" operation. Sink operations are for example printing results or forwarding them to a message queue.
  The program execution will be logged and displayed with the provided name.
  ATTENTION: The caller of this method is responsible for managing the lifecycle of the returned JobClient. This means calling JobClient#close() at the end of its usage. Otherwise, there may be resource leaks depending on the JobClient implementation.
  - returns: A JobClient that can be used to communicate with the submitted job, completed once submission succeeds.
  - Annotations: @PublicEvolving()
- def executeAsync(): JobClient
  Triggers the program execution asynchronously. The environment will execute all parts of the program that have resulted in a "sink" operation. Sink operations are for example printing results or forwarding them to a message queue.
  The program execution will be logged and displayed with a generated default name.
  ATTENTION: The caller of this method is responsible for managing the lifecycle of the returned JobClient. This means calling JobClient#close() at the end of its usage. Otherwise, there may be resource leaks depending on the JobClient implementation.
  - returns: A JobClient that can be used to communicate with the submitted job, completed once submission succeeds.
  - Annotations: @PublicEvolving()
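A minimal sketch of both submission styles (the job name is a placeholder):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.fromElements(1, 2, 3).print() // a sink, so there is something to execute

// Blocking submission: returns once the job has finished.
val result = env.execute("example-job")
println(s"net runtime: ${result.getNetRuntime} ms")

// Asynchronous alternative (re-define the pipeline first, since executing
// clears the registered transformations); the caller owns the JobClient:
// val client = env.executeAsync("example-job")
// try println(s"submitted: ${client.getJobID}") finally client.close()
```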
- def fromCollection[T](data: Iterator[T])(implicit arg0: TypeInformation[T]): DataStream[T]
  Creates a DataStream from the given Iterator.
  Note that this operation will result in a non-parallel data source, i.e. a data source with a parallelism of one.
- def fromCollection[T](data: Seq[T])(implicit arg0: TypeInformation[T]): DataStream[T]
  Creates a DataStream from the given non-empty Seq. The elements need to be serializable because the framework may move the elements into the cluster if needed.
  Note that this operation will result in a non-parallel data source, i.e. a data source with a parallelism of one.
- def fromElements[T](data: T*)(implicit arg0: TypeInformation[T]): DataStream[T]
  Creates a DataStream that contains the given elements. The elements must all be of the same type.
  Note that this operation will result in a non-parallel data source, i.e. a data source with a parallelism of one.
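A brief sketch of these element-based sources (the values are arbitrary):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Both create non-parallel sources (parallelism 1).
val letters: DataStream[String] = env.fromElements("a", "b", "c")
val numbers: DataStream[Int]    = env.fromCollection(Seq(1, 2, 3))
```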
- def fromParallelCollection[T](data: SplittableIterator[T])(implicit arg0: TypeInformation[T]): DataStream[T]
  Creates a DataStream from the given SplittableIterator.
- def fromSequence(from: Long, to: Long): DataStream[Long]
  Creates a new data stream that contains a sequence of numbers (longs) and is useful for testing and for cases that just need a stream of N events of any kind.
  The generated source splits the sequence into as many parallel sub-sequences as there are parallel source readers. Each sub-sequence will be produced in order. If the parallelism is limited to one, the source will produce one sequence in order.
  This source is always bounded. Even for very long sequences (for example over the entire domain of long integer values), you may consider executing the application in a streaming manner, because the end bound is simply very far away.
  Use fromSource(Source, WatermarkStrategy, String) together with a NumberSequenceSource if you require more control over the created sources, for example to set a WatermarkStrategy.
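A brief sketch:

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Bounded stream of 1..1000, split across the parallel source readers.
val seq: DataStream[Long] = env.fromSequence(1L, 1000L)
seq.map(_ * 2).print()
```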
- def fromSource[T](source: Source[T, _ <: SourceSplit, _], watermarkStrategy: WatermarkStrategy[T], sourceName: String)(implicit arg0: TypeInformation[T]): DataStream[T]
  Creates a DataStream using a Source.
  - Annotations: @Experimental()
- def getBufferTimeout: Long
  Gets the default buffer timeout set for this environment.
- def getCachedFiles: List[Tuple2[String, DistributedCacheEntry]]
  Gets the registered cached files.
- def getCheckpointConfig: CheckpointConfig
  Gets the checkpoint config, which defines values like the checkpoint interval, the delay between checkpoints, etc.
- def getCheckpointingMode: CheckpointingMode
- final def getClass(): Class[_]
  - Definition Classes: AnyRef → Any
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- def getConfig: ExecutionConfig
  Gets the config object.
- def getConfiguration: ReadableConfig
  Gives read-only access to the underlying configuration of this environment.
  Note that the returned configuration might not be complete. It only contains options that were used to initialize the environment, or options that are not represented in dedicated configuration classes such as ExecutionConfig or CheckpointConfig.
  Use configure to set options that are specific to this environment.
  - Annotations: @Internal()
- def getDefaultSavepointDirectory: Path
  Gets the default savepoint directory for this Job.
  - Annotations: @PublicEvolving()
  - See also: #setDefaultSavepointDirectory(Path)
- def getExecutionPlan: String
  Creates the plan with which the system will execute the program, and returns it as a String using a JSON representation of the execution data flow graph. Note that this needs to be called before the plan is executed.
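A brief sketch (a sink must be defined first so there is a plan to render):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.fromElements(1, 2, 3).map(_ * 2).print()

// JSON representation of the dataflow graph, before execution.
println(env.getExecutionPlan)
```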
- def getJavaEnv: flink.streaming.api.environment.StreamExecutionEnvironment
  - returns: the wrapped Java environment
- def getJobListeners: List[JobListener]
  Gets the configured JobListeners.
  - Annotations: @PublicEvolving()
- def getMaxParallelism: Int
  Returns the maximum degree of parallelism defined for the program.
  The maximum degree of parallelism specifies the upper limit for dynamic scaling. It also defines the number of key groups used for partitioned state.
- def getParallelism: Int
  Returns the default parallelism for this execution environment. Note that this value can be overridden by individual operations using DataStream#setParallelism(int).
- def getRestartStrategy: RestartStrategyConfiguration
  Returns the specified restart strategy configuration.
  - returns: The restart strategy configuration to be used
  - Annotations: @PublicEvolving()
- def getStateBackend: StateBackend
  Returns the state backend that defines how to store and checkpoint state.
  - Annotations: @PublicEvolving()
- def getStreamGraph(clearTransformations: Boolean): StreamGraph
  Getter of the org.apache.flink.streaming.api.graph.StreamGraph of the streaming job, with the option to clear previously registered transformations. Clearing the transformations allows, for example, to not re-execute the same operations when calling execute() multiple times.
  - clearTransformations: Whether or not to clear previously registered transformations
  - returns: The StreamGraph representing the transformations
  - Annotations: @Internal()
- def getStreamGraph: StreamGraph
  Getter of the org.apache.flink.streaming.api.graph.StreamGraph of the streaming job. This call clears previously registered transformations.
  - returns: The StreamGraph representing the transformations
  - Annotations: @Internal()
- def getStreamTimeCharacteristic: TimeCharacteristic
  Gets the time characteristic.
  - returns: The time characteristic.
  - Annotations: @PublicEvolving()
  - See also: #setStreamTimeCharacteristic
- def getWrappedStreamExecutionEnvironment: flink.streaming.api.environment.StreamExecutionEnvironment
  Getter of the wrapped org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.
  - returns: The encased ExecutionEnvironment
  - Annotations: @Internal()
- def hashCode(): Int
  - Definition Classes: AnyRef → Any
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- def isChangelogStateBackendEnabled: TernaryBoolean
  Gets the enable status of the changelog for the state backend.
  - returns: a TernaryBoolean for the enable status of the changelog for the state backend. May be TernaryBoolean#UNDEFINED if the user never specified it by calling enableChangelogStateBackend(boolean).
  - Annotations: @PublicEvolving()
- def isForceUnalignedCheckpoints: Boolean
  Returns whether unaligned checkpoints are force-enabled.
- final def isInstanceOf[T0]: Boolean
  - Definition Classes: Any
- def isUnalignedCheckpointsEnabled: Boolean
  Returns whether unaligned checkpoints are enabled.
- final def ne(arg0: AnyRef): Boolean
  - Definition Classes: AnyRef
- final def notify(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- final def notifyAll(): Unit
  - Definition Classes: AnyRef
  - Annotations: @native() @HotSpotIntrinsicCandidate()
- def readFile[T](inputFormat: FileInputFormat[T], filePath: String, watchType: FileProcessingMode, interval: Long)(implicit arg0: TypeInformation[T]): DataStream[T]
  Reads the contents of the user-specified path based on the given FileInputFormat. Depending on the provided FileProcessingMode, the source may periodically monitor the path for new data every `interval` ms (FileProcessingMode.PROCESS_CONTINUOUSLY), or process the data currently in the path once and exit (FileProcessingMode.PROCESS_ONCE). In addition, if the path contains files not to be processed, the user can specify a custom FilePathFilter. As a default implementation you can use FilePathFilter.createDefaultFilter().
  NOTES ON CHECKPOINTING: If the watchType is set to FileProcessingMode#PROCESS_ONCE, the source monitors the path once, creates the FileInputSplits to be processed, forwards them to the downstream readers to read the actual data, and exits, without waiting for the readers to finish reading. This implies that no more checkpoint barriers are going to be forwarded after the source exits, so there are no checkpoints after that point.
  - inputFormat: The input format used to create the data stream
  - filePath: The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path")
  - watchType: The mode in which the source should operate, i.e. monitor the path and react to new data, or process once and exit
  - interval: In the case of periodic path monitoring, this specifies the interval (in millis) between consecutive path scans
  - returns: The data stream that represents the data read from the given file
  - Annotations: @PublicEvolving()
- def readFile[T](inputFormat: FileInputFormat[T], filePath: String)(implicit arg0: TypeInformation[T]): DataStream[T]
  Reads the given file with the given input format. The file path should be passed as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path").
- def readTextFile(filePath: String, charsetName: String): DataStream[String]
  Creates a data stream that represents the Strings produced by reading the given file line-wise. The character set with the given name will be used to read the files.
- def readTextFile(filePath: String): DataStream[String]
  Creates a DataStream that represents the Strings produced by reading the given file line-wise. The file will be read with the system's default character set.
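A minimal sketch of the one-shot and monitored variants (the paths and scan interval are placeholders):

```scala
import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.source.FileProcessingMode
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// One-shot read with the system's default charset.
val once: DataStream[String] = env.readTextFile("file:///tmp/input")

// Monitored read: re-scan the path every 60 s and pick up new data.
val format = new TextInputFormat(new Path("file:///tmp/input"))
val watched: DataStream[String] =
  env.readFile(format, "file:///tmp/input", FileProcessingMode.PROCESS_CONTINUOUSLY, 60000L)
```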
- def registerCachedFile(filePath: String, name: String, executable: Boolean): Unit
  Registers a file at the distributed cache under the given name. The file will be accessible from any user-defined function in the (distributed) runtime under a local path. Files may be local files (which will be distributed via BlobServer), or files in a distributed file system. The runtime will copy the files temporarily to a local cache, if needed.
  The org.apache.flink.api.common.functions.RuntimeContext can be obtained inside UDFs via org.apache.flink.api.common.functions.RichFunction#getRuntimeContext() and provides access to the org.apache.flink.api.common.cache.DistributedCache via org.apache.flink.api.common.functions.RuntimeContext#getDistributedCache().
  - filePath: The path of the file, as a URI (e.g. "file:///some/path" or "hdfs://host:port/and/path")
  - name: The name under which the file is registered.
  - executable: Flag indicating whether the file should be executable
- def registerCachedFile(filePath: String, name: String): Unit
  Registers a file at the distributed cache under the given name. The file will be accessible from any user-defined function in the (distributed) runtime under a local path. Files may be local files (which will be distributed via BlobServer), or files in a distributed file system. The runtime will copy the files temporarily to a local cache, if needed.
  The org.apache.flink.api.common.functions.RuntimeContext can be obtained inside UDFs via org.apache.flink.api.common.functions.RichFunction#getRuntimeContext() and provides access to the org.apache.flink.api.common.cache.DistributedCache via org.apache.flink.api.common.functions.RuntimeContext#getDistributedCache().
  - filePath: The path of the file, as a URI (e.g. "file:///some/path" or "hdfs://host:port/and/path")
  - name: The name under which the file is registered.
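A minimal sketch (the path and registration name are placeholders):

```scala
import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

env.registerCachedFile("hdfs://host:port/config/rules.txt", "rules")

env.fromElements("a", "b").map(new RichMapFunction[String, String] {
  override def map(value: String): String = {
    // Local copy of the registered file, provided via the distributed cache.
    val file = getRuntimeContext.getDistributedCache.getFile("rules")
    s"$value:${file.length()}"
  }
}).print()
```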
- def registerJobListener(jobListener: JobListener): Unit
  Registers a JobListener in this environment. The JobListener will be notified of specific job status changes.
  - Annotations: @PublicEvolving()
- def registerSlotSharingGroup(slotSharingGroup: SlotSharingGroup): StreamExecutionEnvironment
  Registers a slot sharing group with its resource spec.
  Note that a slot sharing group hints to the scheduler that the grouped operators CAN be deployed into a shared slot. There is no guarantee that the scheduler always deploys the grouped operators together. In cases where the grouped operators are deployed into separate slots, the slot resources will be derived from the specified group requirements.
  - slotSharingGroup: which contains the name and its resource spec.
  - Annotations: @PublicEvolving()
- def registerType(typeClass: Class[_]): Unit
  Registers the given type with the serialization stack. If the type is eventually serialized as a POJO, then the type is registered with the POJO serializer. If the type ends up being serialized with Kryo, then it will be registered at Kryo to make sure that only tags are written.
- def registerTypeWithKryoSerializer(clazz: Class[_], serializer: Class[_ <: Serializer[_]]): Unit
  Registers the given type with the serializer at the KryoSerializer.
- def registerTypeWithKryoSerializer[T <: Serializer[_] with Serializable](clazz: Class[_], serializer: T): Unit
  Registers the given type with the serializer at the KryoSerializer. Note that the serializer instance must be serializable (as defined by java.io.Serializable), because it may be distributed to the worker nodes by Java serialization.
- def setBufferTimeout(timeoutMillis: Long): StreamExecutionEnvironment
  Sets the maximum time frequency (milliseconds) for the flushing of the output buffers. By default the output buffers flush frequently to provide low latency and to aid a smooth developer experience. Setting the parameter can result in three logical modes:
  - A positive integer triggers flushing periodically at that interval (in milliseconds)
  - 0 triggers flushing after every record, thus minimizing latency
  - -1 triggers flushing only when the output buffer is full, thus maximizing throughput
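A brief sketch of the three modes:

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

env.setBufferTimeout(100L) // flush at least every 100 ms
env.setBufferTimeout(0L)   // flush after every record: lowest latency
env.setBufferTimeout(-1L)  // flush only on full buffers: highest throughput
```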
- def setDefaultSavepointDirectory(savepointDirectory: Path): StreamExecutionEnvironment
  Sets the default savepoint directory, where savepoints will be written to if none is explicitly provided when triggered.
  - returns: This StreamExecutionEnvironment itself, to allow chaining of function calls.
  - Annotations: @PublicEvolving()
  - See also: #getDefaultSavepointDirectory()
- def setDefaultSavepointDirectory(savepointDirectory: URI): StreamExecutionEnvironment
  Sets the default savepoint directory, where savepoints will be written to if none is explicitly provided when triggered.
  - returns: This StreamExecutionEnvironment itself, to allow chaining of function calls.
  - Annotations: @PublicEvolving()
  - See also: #getDefaultSavepointDirectory()
- def setDefaultSavepointDirectory(savepointDirectory: String): StreamExecutionEnvironment
  Sets the default savepoint directory, where savepoints will be written to if none is explicitly provided when triggered.
  - returns: This StreamExecutionEnvironment itself, to allow chaining of function calls.
  - Annotations: @PublicEvolving()
  - See also: #getDefaultSavepointDirectory()
- def setMaxParallelism(maxParallelism: Int): Unit
  Sets the maximum degree of parallelism defined for the program. The maximum degree of parallelism specifies the upper limit for dynamic scaling. It also defines the number of key groups used for partitioned state.
- def setParallelism(parallelism: Int): Unit
  Sets the parallelism for operations executed through this environment. Setting a parallelism of x here will cause all operators (such as join, map, reduce) to run with x parallel instances. This value can be overridden by specific operations using DataStream#setParallelism(int).
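A brief sketch (the values are arbitrary):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

env.setParallelism(4)      // default for all operators in this environment
env.setMaxParallelism(128) // upper bound for rescaling; also the key-group count

// Individual operators may still override the default:
env.fromSequence(1L, 100L).map(_ + 1).setParallelism(2).print()
```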
- def setRestartStrategy(restartStrategyConfiguration: RestartStrategyConfiguration): Unit
  Sets the restart strategy configuration. The configuration specifies which restart strategy will be used for the execution graph in case of a restart.
  - restartStrategyConfiguration: Restart strategy configuration to be set
  - Annotations: @PublicEvolving()
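A minimal sketch using a fixed-delay strategy (the attempt count and delay are arbitrary):

```scala
import org.apache.flink.api.common.restartstrategy.RestartStrategies
import org.apache.flink.api.common.time.Time
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Restart up to 3 times, waiting 10 s between attempts.
env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, Time.seconds(10)))
```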
- def setRuntimeMode(executionMode: RuntimeExecutionMode): StreamExecutionEnvironment
  Sets the runtime execution mode for the application (see RuntimeExecutionMode). This is equivalent to setting "execution.runtime-mode" in your application's configuration file.
  We recommend users NOT set this in code but instead set "execution.runtime-mode" on the command line when submitting the application. Keeping the application code configuration-free allows for more flexibility, as the same application can be executed in any execution mode.
  - executionMode: the desired execution mode.
  - returns: The execution environment of your application.
  - Annotations: @PublicEvolving()
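A brief sketch:

```scala
import org.apache.flink.api.common.RuntimeExecutionMode
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Prefer setting execution.runtime-mode at submission time over hard-coding.
env.setRuntimeMode(RuntimeExecutionMode.BATCH)
```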
- def setStateBackend(backend: StateBackend): StreamExecutionEnvironment
  Sets the state backend that defines how to store operator state. It defines the data structures that hold state during execution (for example hash tables, RocksDB, or other data stores).
  State managed by the state backend includes both keyed state that is accessible on KeyedStream, as well as state maintained directly by the user code that implements org.apache.flink.streaming.api.checkpoint.CheckpointedFunction.
  The org.apache.flink.runtime.state.hashmap.HashMapStateBackend maintains state in heap memory, as objects. It is lightweight without extra dependencies, but is limited to JVM heap memory.
  In contrast, the EmbeddedRocksDBStateBackend stores its state in an embedded RocksDB instance. This state backend can store very large state that exceeds memory and spills to local disk. All key/value state (including windows) is stored in the key/value index of RocksDB.
  In both cases, fault tolerance is managed via the job's org.apache.flink.runtime.state.CheckpointStorage, which configures how and where state backends persist during a checkpoint.
  - returns: This StreamExecutionEnvironment itself, to allow chaining of function calls.
  - Annotations: @PublicEvolving()
  - See also: #getStateBackend()
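A minimal sketch pairing a heap state backend with durable checkpoint storage (the storage URI is a placeholder):

```scala
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Keep working state on the JVM heap...
env.setStateBackend(new HashMapStateBackend())

// ...and persist checkpoints to a durable location.
env.getCheckpointConfig.setCheckpointStorage("hdfs://host:port/flink/checkpoints")
```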
- def socketTextStream(hostname: String, port: Int, delimiter: Char = '\n', maxRetry: Long = 0): DataStream[String]
  Creates a new DataStream that contains the strings received from a socket. Received strings are decoded by the system's default character set. The maximum retry interval is specified in seconds; in case of a temporary service outage, reconnection is initiated every second.
  - Annotations: @PublicEvolving()
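A brief sketch (the host, port, and retry values are placeholders):

```scala
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// Newline-delimited text from a socket; keep retrying for up to 60 s.
val lines: DataStream[String] = env.socketTextStream("localhost", 9999, '\n', 60L)
lines.print()
```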
- final def synchronized[T0](arg0: ⇒ T0): T0
  - Definition Classes: AnyRef
- def toString(): String
  - Definition Classes: AnyRef → Any
- final def wait(arg0: Long, arg1: Int): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
- final def wait(arg0: Long): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... ) @native()
- final def wait(): Unit
  - Definition Classes: AnyRef
  - Annotations: @throws( ... )
Deprecated Value Members
- def finalize(): Unit
  - Attributes: protected[lang]
  - Definition Classes: AnyRef
  - Annotations: @throws( classOf[java.lang.Throwable] ) @Deprecated
  - Deprecated
- def generateSequence(from: Long, to: Long): DataStream[Long]
  Creates a new DataStream that contains a sequence of numbers. This source is a parallel source. If you manually set the parallelism to 1, the emitted elements are in order.
  - Annotations: @deprecated
  - Deprecated
- def getNumberOfExecutionRetries: Int
  Gets the number of times the system will try to re-execute failed tasks. A value of "-1" indicates that the system default value (as defined in the configuration) should be used.
  - Annotations: @PublicEvolving()
  - Deprecated: This method will be replaced by getRestartStrategy. The FixedDelayRestartStrategyConfiguration contains the number of execution retries.
- def setNumberOfExecutionRetries(numRetries: Int): Unit
  Sets the number of times that failed tasks are re-executed. A value of zero effectively disables fault tolerance. A value of "-1" indicates that the system default value (as defined in the configuration) should be used.
  - Annotations: @PublicEvolving()
  - Deprecated: This method will be replaced by setRestartStrategy(). The FixedDelayRestartStrategyConfiguration contains the number of execution retries.