Stream
A stream producing output of type O and which may evaluate F effects.
- '''Purely functional''': a value of type Stream[F, O] describes an effectful computation.
A function that returns a Stream[F, O] builds a description of an effectful computation,
but does not perform it. The methods of the Stream class derive new descriptions from others.
This is similar to how effect types like cats.effect.IO and monix.Task build descriptions of
computations.
- '''Pull''': to evaluate a stream, a consumer pulls values from it, by repeatedly performing one pull step at a time.
Each step is an F-effectful computation that may yield some O values (or none), and a stream from which to continue pulling.
The consumer controls the evaluation of the stream, which effectful operations are performed, and when.
- '''Non-Strict''': stream evaluation only pulls from the stream a prefix large enough to compute its results.
Thus, although a stream may yield an unbounded number of values or, after successfully yielding several values,
either raise an error or hang up and never yield any value, the consumer need not reach those points of failure.
For the same reason, in general, no effect in F is evaluated unless and until the consumer needs it.
- '''Abstract''': a stream need not be a plain finite list of fixed effectful computations in F.
It can also represent an input or output connection through which data incrementally arrives.
It can represent an effectful computation, such as reading the system's time, that can be re-evaluated
as often as the consumer of the stream requires.
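The "description vs. execution" distinction can be sketched in plain Scala, without fs2. The Effect type and DescriptionDemo object below are hypothetical stand-ins for illustration, not fs2 API:

```scala
// Minimal sketch (not fs2's implementation): an effect value is only a
// description -- a suspended computation that runs when explicitly asked to.
final case class Effect[A](run: () => A) {
  def map[B](f: A => B): Effect[B] = Effect(() => f(run()))
}

object DescriptionDemo {
  var sideEffects = 0
  // Building and transforming the description performs nothing yet:
  val prog: Effect[Int] = Effect(() => { sideEffects += 1; 21 }).map(_ * 2)
}
```

Only calling prog.run() evaluates the suspended computation; until then sideEffects stays at 0, mirroring how a Stream[F, O] performs nothing until a consumer pulls from it.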
=== Special properties for streams ===
There are some special properties or cases of streams:
- A stream is '''finite''' if we can reach the end after a limited number of pull steps,
which may yield a finite number of values. It is '''empty''' if it terminates and yields no values.
- A '''singleton''' stream is a stream that ends after yielding one single value.
- A '''pure''' stream is one in which the F is Pure, which indicates that it evaluates no effects.
- A '''never''' stream is a stream that never terminates and never yields any value.
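These cases can be illustrated with the standard library's LazyList, whose values behave much like the elements of a pure stream. The StreamCases object and its members are illustrative names only:

```scala
// Stand-ins for the special cases, using the standard library's LazyList.
object StreamCases {
  val empty: LazyList[Int]     = LazyList.empty    // finite and empty
  val singleton: LazyList[Int] = LazyList(42)      // ends after one value
  val finite: LazyList[Int]    = LazyList(1, 2, 3) // ends after three values
  val unbounded: LazyList[Int] = LazyList.from(0)  // never terminates...

  // ...yet, being non-strict, a consumer only pulls the prefix it needs:
  val prefix: List[Int] = unbounded.take(3).toList
}
```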
== Pure Streams and operations ==
We can sometimes think of streams, naively, as lists of O elements with F-effects.
This is particularly true for '''pure''' streams, which are instances of Stream which use the Pure effect type.
We can convert every ''pure and finite'' stream into a List[O] using the .toList method.
Also, we can convert pure ''infinite'' streams into instances of the Stream[O] class from the Scala standard library.
A method of the Stream class is '''pure''' if it can be applied to pure streams. Such methods are identified
in that their signature includes no type-class constraint (or implicit parameter) on the F type.
Pure methods in Stream[F, O] can be projected ''naturally'' to methods in the List class, which means
that applying the stream's method and converting the result to a list gets the same result as
first converting the stream to a list, and then applying list methods.
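This projection can be spot-checked with the standard library, using LazyList as a stand-in for a pure stream (the PureProjection object is illustrative only):

```scala
// Applying a pure method and then converting to a list gives the same
// result as converting to a list first and applying the List method.
object PureProjection {
  val s: LazyList[Int] = LazyList(1, 2, 3, 4, 5)

  val viaStream: List[Int] = s.map(_ * 2).takeWhile(_ < 8).toList
  val viaList: List[Int]   = s.toList.map(_ * 2).takeWhile(_ < 8)
}
```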
Some methods that project directly to list are map, filter, takeWhile, etc.
There are other methods, like exists or find, that in the List class return a value or an Option,
but their stream counterparts return an (either empty or singleton) stream.
Other methods, like zipWithPrevious, have a more complicated but still pure translation to list methods.
== Type-Class instances and laws of the Stream Operations ==
Laws (using infix syntax):
append forms a monoid in conjunction with empty:
- empty append s == s and s append empty == s
- (s1 append s2) append s3 == s1 append (s2 append s3)
And cons is consistent with using ++ to prepend a single chunk:
- s.cons(c) == Stream.chunk(c) ++ s
Stream.raiseError propagates until being caught by handleErrorWith:
- Stream.raiseError(e) handleErrorWith h == h(e)
- Stream.raiseError(e) ++ s == Stream.raiseError(e)
- Stream.raiseError(e) flatMap f == Stream.raiseError(e)
Stream forms a monad with emit and flatMap:
- Stream.emit >=> f == f (left identity)
- f >=> Stream.emit === f (right identity - note weaker equality notion here)
- (f >=> g) >=> h == f >=> (g >=> h) (associativity)
where Stream.emit(a) is defined as chunk(Chunk.singleton(a)) and
f >=> g is defined as a => a flatMap f flatMap g
The monad is the list-style sequencing monad:
- (a ++ b) flatMap f == (a flatMap f) ++ (b flatMap f)
- Stream.empty flatMap f == Stream.empty
== Technical notes ==
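Since the monad is list-style sequencing, the sequencing laws can be spot-checked against Scala's List monad, the archetypal sequencing monad (an analogy, not fs2 code; SequencingLaws is an illustrative name):

```scala
// The sequencing laws, checked for List.
object SequencingLaws {
  val a = List(1, 2)
  val b = List(3)
  def f(i: Int): List[Int] = List(i, i * 10)

  // (a ++ b) flatMap f == (a flatMap f) ++ (b flatMap f)
  val distributes = (a ++ b).flatMap(f) == a.flatMap(f) ++ b.flatMap(f)
  // empty flatMap f == empty
  val emptyLaw    = List.empty[Int].flatMap(f) == List.empty[Int]
  // emit >=> f == f (left identity), with List(_) playing the role of emit
  val leftId      = List(2).flatMap(f) == f(2)
}
```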
''Note:'' since the chunk structure of the stream is observable, and
s flatMap Stream.emit produces a stream of singleton chunks,
the right identity law uses a weaker notion of equality, ===, which
normalizes both sides with respect to chunk structure:
(s1 === s2) = normalize(s1) == normalize(s2)
where == is full equality (a == b iff f(a) is identical to f(b) for all f)
and normalize(s) can be defined as s.flatMap(Stream.emit), which just
produces a singly-chunked stream from any input stream s.
For instance, for a stream s and a function f: A => B,
- the result of s.map(f) is a Stream with the same chunking as s; whereas...
- the result of s.flatMap(x => Stream.emit(f(x))) is a Stream structured as a sequence of singleton chunks.
The latter is using the definition of map that is derived from the Monad instance.
This is not unlike equality for maps or sets, which is defined by which elements they contain,
not by how these are spread between a tree's branches or a hashtable's buckets.
However, a Stream structure can be observed through the chunks method,
so two streams "equal" under that notion may give different results through this method.
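A small model makes the two equalities concrete. Here a chunked stream is modeled as a List of chunks; Chunked, mapC, and normalize are illustrative names, not fs2 API:

```scala
// Model a chunked stream as a list of chunks.
object ChunkEquality {
  type Chunked[A] = List[List[A]]

  // map preserves the chunk structure...
  def mapC[A, B](s: Chunked[A])(f: A => B): Chunked[B] = s.map(_.map(f))

  // ...while flatMap(emit) rebuilds the stream out of singleton chunks.
  def normalize[A](s: Chunked[A]): Chunked[A] = s.flatten.map(a => List(a))

  val s: Chunked[Int] = List(List(1, 2), List(3))
  val mapped     = mapC(s)(_ + 1)    // chunking preserved
  val normalized = normalize(mapped) // singleton chunks
}
```

mapped and normalized differ under full equality == (their chunk structures differ) but agree once both sides are normalized, which is exactly the weaker === used in the right identity law.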
''Note:'' For efficiency, the [[Stream.map]] function operates on an entire
chunk at a time and preserves chunk structure, which differs from
the map derived from the monad (s map f == s flatMap (f andThen Stream.emit))
which would produce singleton chunks. In particular, if f throws errors, the
chunked version will fail on the first ''chunk'' with an error, while
the unchunked version will fail on the first ''element'' with an error.
Exceptions in pure code like this are strongly discouraged.
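The difference in failure behaviour can be sketched with a chunk model in plain Scala (MapFailure and its members are illustrative names, not fs2 API):

```scala
import scala.util.Try

// Model a stream as a list of chunks and compare where a throwing `f` fails.
object MapFailure {
  def f(i: Int): Int = if (i == 2) throw new RuntimeException("boom") else i

  val chunks: List[List[Int]] = List(List(1, 2), List(3))

  // Chunk-at-a-time map: the whole first chunk fails as one step,
  // so even the element before the bad one is never emitted.
  val chunked: List[Try[List[Int]]] = chunks.map(c => Try(c.map(f)))

  // Element-at-a-time map: each element is its own step, so the
  // elements before the failing one are still emitted.
  val unchunked: List[Try[Int]] = chunks.flatten.map(i => Try(f(i)))
}
```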
Value members
Methods
Implicitly added by InvariantOps
Lifts this stream to the specified effect type.
- Example
- {{{
scala> import cats.effect.IO
scala> Stream(1, 2, 3).covary[IO]
res0: Stream[IO,Int] = Stream(..)
}}}
Implicitly added by InvariantOps
Synchronously sends values through p.
If p fails, then the resulting stream will fail. If p halts, the evaluation will halt too.
Note that observe will only output full chunks of O that are known to be successfully processed
by p. So if p terminates/fails in the middle of chunk processing, the chunk will not be available
in the resulting stream.
Note that if your pipe can be represented by an O => F[Unit], evalTap will provide much greater performance.
- Example
- {{{
scala> import cats.effect.{ContextShift, IO}
scala> implicit val cs: ContextShift[IO] = IO.contextShift(scala.concurrent.ExecutionContext.Implicits.global)
scala> Stream(1, 2, 3).covary[IO].observe(_.showLinesStdOut).map(_ + 1).compile.toVector.unsafeRunSync()
res0: Vector[Int] = Vector(2, 3, 4)
}}}
Implicitly added by InvariantOps
Send chunks through p, allowing up to maxQueued pending chunks before blocking s.
def observeEither[L, R](left: (F, L) => Unit, right: (F, R) => Unit)(F: Concurrent[F], ev: O <:< Either[L, R]): Stream[F, Either[L, R]]
Implicitly added by InvariantOps
Observes this stream of Either[L, R] values with two pipes, one that
observes left values and another that observes right values.
If either of left or right fails, then the resulting stream will fail.
If either halts, the evaluation will halt too.
Implicitly added by InvariantOps
Gets a projection of this stream that allows converting it to a Pull in a number of ways.
Implicitly added by InvariantOps
Repeatedly invokes using, running the resultant Pull each time, halting when a pull
returns None instead of Some(nextStream).
@deprecated("2.0.2", "Use .to(Chunk) instead")
Implicitly added by PureOps
Runs this pure stream and returns the emitted elements in a chunk. Note: this method is only available on pure streams.
Implicitly added by PureOps
Runs this pure stream and returns the emitted elements in a list. Note: this method is only available on pure streams.
Implicitly added by PureOps
Runs this pure stream and returns the emitted elements in a vector. Note: this method is only available on pure streams.
Implicitly added by PureTo
Runs this pure stream and returns the emitted elements in a collection of the specified type. Note: this method is only available on pure streams.
@deprecated("2.0.2", "Use .to(Chunk) instead")
Implicitly added by FallibleOps
Runs this fallible stream and returns the emitted elements in a chunk. Note: this method is only available on fallible streams.
Implicitly added by FallibleOps
Runs this fallible stream and returns the emitted elements in a list. Note: this method is only available on fallible streams.
Implicitly added by FallibleOps
Runs this fallible stream and returns the emitted elements in a vector. Note: this method is only available on fallible streams.
Implicitly added by FallibleTo
Runs this fallible stream and returns the emitted elements in a collection of the specified type. Note: this method is only available on fallible streams.
Appends
s2 to the end of this stream.- Example
- {{{
scala> (Stream(1,2,3) ++ Stream(4,5,6)).toList
res0: List[Int] = List(1, 2, 3, 4, 5, 6)
}}}
If this stream is infinite, then the result is equivalent to this.
Equivalent to val o2Memoized = o2; _.map(_ => o2Memoized).
- Example
- {{{
scala> Stream(1,2,3).as(0).toList
res0: List[Int] = List(0, 0, 0)
}}}
Returns a stream of O values wrapped in Right until the first error, which is emitted wrapped in Left.
- Example
- {{{
scala> (Stream(1,2,3) ++ Stream.raiseError[cats.effect.IO] (new RuntimeException) ++ Stream(4,5,6)).attempt.compile.toList.unsafeRunSync()
res0: List[Either[Throwable,Int] ] = List(Right(1), Right(2), Right(3), Left(java.lang.RuntimeException))
}}}
rethrow is the inverse of attempt, with the caveat that anything after the first failure is discarded.
def attempts[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](delays: Stream[F2, FiniteDuration])(evidence$1: Timer[F2]): Stream[F2, Either[Throwable, O]]
Retries on failure, returning a stream of attempts that can
be manipulated with standard stream operations such as take, collectFirst and interruptWhen.
Note: The resulting stream does not automatically halt at the
first successful attempt. Also see retry.
def broadcast[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](evidence$2: Concurrent[F2]): Stream[F2, Stream[F2, O]]
Returns a stream of streams where each inner stream sees all elements of the
source stream (after the inner stream has started evaluation).
For example, src.broadcast.take(2) results in two
inner streams, each of which sees every element of the source.
Alias for through(Broadcast(1)).
def broadcastTo[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](pipes: (F2, O) => Unit*)(evidence$3: Concurrent[F2]): Stream[F2, Unit]
Like broadcast but instead of providing a stream of sources, runs each pipe.
The pipes are run concurrently with each other. Hence, the parallelism factor is equal
to the number of pipes.
Each pipe may have a different implementation, if required; for example one pipe may
process elements while another may send elements for processing to another machine.
Each pipe is guaranteed to see all O pulled from the source stream, unlike broadcast,
where workers see only the elements after the start of each worker evaluation.
Note: the resulting stream will not emit values, even if the pipes do.
If you need to emit Unit values, consider using broadcastThrough.
Note: Elements are pulled as chunks from the source and the next chunk is pulled when all
workers are done with processing the current chunk. This behaviour may slow down processing
of incoming chunks by faster workers.
If this is not desired, consider using the prefetch and prefetchN combinators on workers
to compensate for slower workers.
- Value Params
- pipes
-
Pipes that will concurrently process the work.
def broadcastTo[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](maxConcurrent: Int)(pipe: (F2, O) => Unit)(evidence$4: Concurrent[F2]): Stream[F2, Unit]
Variant of broadcastTo that broadcasts to maxConcurrent instances of a single pipe.
def broadcastThrough[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](pipes: (F2, O) => O2*)(evidence$5: Concurrent[F2]): Stream[F2, O2]
Alias for through(Broadcast.through(pipes)).
def broadcastThrough[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](maxConcurrent: Int)(pipe: (F2, O) => O2)(evidence$6: Concurrent[F2]): Stream[F2, O2]
Variant of broadcastTo that broadcasts to maxConcurrent instances of the supplied pipe.
Behaves like the identity function, but requests n elements at a time from the input.
- Example
- {{{
scala> import cats.effect.IO
scala> val buf = new scala.collection.mutable.ListBuffer[String]
scala> Stream.range(0, 100).covary[IO] .
| evalMap(i => IO { buf += s">$i"; i }).
| buffer(4).
| evalMap(i => IO { buf += s"<$i"; i }).
| take(10).
| compile.toVector.unsafeRunSync()
res0: Vector[Int] = Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> buf.toList
res1: List[String] = List(>0, >1, >2, >3, <0, <1, <2, <3, >4, >5, >6, >7, <4, <5, <6, <7, >8, >9, >10, >11, <8, <9)
}}}
Behaves like the identity stream, but emits no output until the source is exhausted.
- Example
- {{{
scala> import cats.effect.IO
scala> val buf = new scala.collection.mutable.ListBuffer[String]
scala> Stream.range(0, 10).covary[IO] .
| evalMap(i => IO { buf += s">$i"; i }).
| bufferAll.
| evalMap(i => IO { buf += s"<$i"; i }).
| take(4).
| compile.toVector.unsafeRunSync()
res0: Vector[Int] = Vector(0, 1, 2, 3)
scala> buf.toList
res1: List[String] = List(>0, >1, >2, >3, >4, >5, >6, >7, >8, >9, <0, <1, <2, <3)
}}}
Behaves like the identity stream, but requests elements from its
input in blocks that end whenever the predicate switches from true to false.
- Example
- {{{
scala> import cats.effect.IO
scala> val buf = new scala.collection.mutable.ListBuffer[String]
scala> Stream.range(0, 10).covary[IO] .
| evalMap(i => IO { buf += s">$i"; i }).
| bufferBy(_ % 2 == 0).
| evalMap(i => IO { buf += s"<$i"; i }).
| compile.toVector.unsafeRunSync()
res0: Vector[Int] = Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
scala> buf.toList
res1: List[String] = List(>0, >1, <0, <1, >2, >3, <2, <3, >4, >5, <4, <5, >6, >7, <6, <7, >8, >9, <8, <9)
}}}
Emits only elements that are distinct from their immediate predecessors,
using natural equality for comparison.
- Example
- {{{
scala> Stream(1,1,2,2,2,3,3).changes.toList
res0: List[Int] = List(1, 2, 3)
}}}
Emits only elements that are distinct from their immediate predecessors
according to f, using natural equality for comparison.
Note that f is called for each element in the stream multiple times
and hence should be fast (e.g., an accessor). It is not intended to be
used for computationally intensive conversions. For such conversions,
consider something like: src.map(o => (o, f(o))).changesBy(_._2).map(_._1)
- Example
- {{{
scala> Stream(1,1,2,4,6,9).changesBy(_ % 2).toList
res0: List[Int] = List(1, 2, 9)
}}}
Collects all output chunks into a single chunk and emits it at the end of the
source stream. Note: if more than 2^32-1 elements are collected, this operation
will fail.
- Example
- {{{
scala> (Stream(1) ++ Stream(2, 3) ++ Stream(4, 5, 6)).chunkAll.toList
res0: List[Chunk[Int] ] = List(Chunk(1, 2, 3, 4, 5, 6))
}}}
Outputs all chunks from the source stream.
- Example
- {{{
scala> (Stream(1) ++ Stream(2, 3) ++ Stream(4, 5, 6)).chunks.toList
res0: List[Chunk[Int] ] = List(Chunk(1), Chunk(2, 3), Chunk(4, 5, 6))
}}}
Outputs chunks with a limited maximum size, splitting as necessary.
- Example
- {{{
scala> (Stream(1) ++ Stream(2, 3) ++ Stream(4, 5, 6)).chunkLimit(2).toList
res0: List[Chunk[Int] ] = List(Chunk(1), Chunk(2, 3), Chunk(4, 5), Chunk(6))
}}}
Outputs chunks of size larger than N.
Chunks from the source stream are split as necessary.
If allowFewerTotal is true and the whole stream is smaller than N,
its elements are still emitted as a smaller chunk.
- Example
- {{{
scala> (Stream(1,2) ++ Stream(3,4) ++ Stream(5,6,7)).chunkMin(3).toList
res0: List[Chunk[Int] ] = List(Chunk(1, 2, 3, 4), Chunk(5, 6, 7))
}}}
Outputs chunks of size n.
Chunks from the source stream are split as necessary.
If allowFewer is true, the last chunk that is emitted may have fewer than n elements.
- Example
- {{{
scala> Stream(1,2,3).repeat.chunkN(2).take(5).toList
res0: List[Chunk[Int] ] = List(Chunk(1, 2), Chunk(3, 1), Chunk(2, 3), Chunk(1, 2), Chunk(3, 1))
}}}
Filters and maps simultaneously. Calls collect on each chunk in the stream.
- Example
- {{{
scala> Stream(Some(1), Some(2), None, Some(3), None, Some(4)).collect { case Some(i) => i }.toList
res0: List[Int] = List(1, 2, 3, 4)
}}}
Emits the first element of the stream for which the partial function is defined.
- Example
- {{{
scala> Stream(None, Some(1), Some(2), None, Some(3)).collectFirst { case Some(i) => i }.toList
res0: List[Int] = List(1)
}}}
Like collect but terminates as soon as the partial function is undefined.
- Example
- {{{
scala> Stream(Some(1), Some(2), Some(3), None, Some(4)).collectWhile { case Some(i) => i }.toList
res0: List[Int] = List(1, 2, 3)
}}}
def compile[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), G <: ([_$12] =>> Any), O2 >: O](compiler: Compiler[F2, G]): CompileOps[F2, G, O2]
Gets a projection of this stream that allows converting it to an F[..] in a number of ways.
- Example
- {{{
scala> import cats.effect.IO
scala> val prg: IO[Vector[Int] ] = Stream.eval(IO(1)).append(Stream(2,3,4)).compile.toVector
scala> prg.unsafeRunSync()
res2: Vector[Int] = Vector(1, 2, 3, 4)
}}}
def concurrently[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](that: Stream[F2, O2])(F: Concurrent[F2]): Stream[F2, O]
Runs the supplied stream in the background as elements from this stream are pulled.
The resulting stream terminates upon termination of this stream. The background stream will
be interrupted at that point. Early termination of that does not terminate the resulting stream.
Any errors that occur in either this or that stream result in the overall stream terminating
with an error.
Upon finalization, the resulting stream will interrupt the background stream and wait for it to be
finalized.
This method is equivalent to this mergeHaltL that.drain, just more efficient for this and that evaluation.
- Example
- {{{
scala> import cats.effect.{ContextShift, IO}
scala> implicit val cs: ContextShift[IO] = IO.contextShift(scala.concurrent.ExecutionContext.Implicits.global)
scala> val data: Stream[IO,Int] = Stream.range(1, 10).covary[IO]
scala> Stream.eval(fs2.concurrent.SignallingRef[IO,Int](0)).flatMap(s => Stream(s).concurrently(data.evalMap(s.set))).flatMap(_.discrete).takeWhile(_ < 9, true).compile.last.unsafeRunSync()
res0: Option[Int] = Some(9)
}}}
Prepends a chunk onto the front of this stream.
- Example
- {{{
scala> Stream(1,2,3).cons(Chunk(-1, 0)).toList
res0: List[Int] = List(-1, 0, 1, 2, 3)
}}}
Prepends a chunk onto the front of this stream.
- Example
- {{{
scala> Stream(1,2,3).consChunk(Chunk.vector(Vector(-1, 0))).toList
res0: List[Int] = List(-1, 0, 1, 2, 3)
}}}
Prepends a single value onto the front of this stream.
- Example
- {{{
scala> Stream(1,2,3).cons1(0).toList
res0: List[Int] = List(0, 1, 2, 3)
}}}
Lifts this stream to the specified effect and output types.
- Example
- {{{
scala> import cats.effect.IO
scala> Stream.empty.covaryAll[IO,Int]
res0: Stream[IO,Int] = Stream(..)
}}}
Lifts this stream to the specified output type.
- Example
- {{{
scala> Stream(Some(1), Some(2), Some(3)).covaryOutput[Option[Int] ]
res0: Stream[Pure,Option[Int] ] = Stream(..)
}}}
def debounce[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](d: FiniteDuration)(F: Concurrent[F2], timer: Timer[F2]): Stream[F2, O]
Debounce the stream with a minimum period of d between each element.
Use-case: if this is a stream of updates about external state, we may want to refresh (side-effectful)
once every 'd' milliseconds, and every time we refresh we only care about the latest update.
- Returns
-
A stream whose values are an in-order, not necessarily strict, subsequence of this stream,
and whose evaluation will force a delay d between emitting each element.
The exact subsequence depends on the chunk structure of this stream and the timing at which elements arrive.
- Example
- {{{
scala> import scala.concurrent.duration._, cats.effect.{ContextShift, IO, Timer}
scala> implicit val cs: ContextShift[IO] = IO.contextShift(scala.concurrent.ExecutionContext.Implicits.global)
scala> implicit val timer: Timer[IO] = IO.timer(scala.concurrent.ExecutionContext.Implicits.global)
scala> val s = Stream(1, 2, 3) ++ Stream.sleep_[IO](500.millis) ++ Stream(4, 5) ++ Stream.sleep_[IO](10.millis) ++ Stream(6)
scala> val s2 = s.debounce(100.milliseconds)
scala> s2.compile.toVector.unsafeRunSync()
res0: Vector[Int] = Vector(3, 6)
}}}
Logs the elements of this stream as they are pulled.
By default, toString is called on each element and the result is printed
to standard out. To change formatting, supply a value for the formatter
param. To change the destination, supply a value for the logger param.
This method does not change the chunk structure of the stream. To debug the
chunk structure, see debugChunks.
Logging is not done in F because this operation is intended for debugging,
including pure streams.
- Example
- {{{
scala> Stream(1, 2).append(Stream(3, 4)).debug(o => s"a: $o").toList
a: 1
a: 2
a: 3
a: 4
res0: List[Int] = List(1, 2, 3, 4)
}}}
Like debug but logs chunks as they are pulled instead of individual elements.
- Example
- {{{
scala> Stream(1, 2, 3).append(Stream(4, 5, 6)).debugChunks(c => s"a: $c").buffer(2).debugChunks(c => s"b: $c").toList
a: Chunk(1, 2, 3)
b: Chunk(1, 2)
a: Chunk(4, 5, 6)
b: Chunk(3, 4)
b: Chunk(5, 6)
res0: List[Int] = List(1, 2, 3, 4, 5, 6)
}}}
def delayBy[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](d: FiniteDuration)(evidence$8: Timer[F2]): Stream[F2, O]
Returns a stream that when run, sleeps for duration d and then pulls from this stream.
Alias for sleep_[F](d) ++ this.
Skips the first element that matches the predicate.
- Example
- {{{
scala> Stream.range(1, 10).delete(_ % 2 == 0).toList
res0: List[Int] = List(1, 3, 4, 5, 6, 7, 8, 9)
}}}
def balanceAvailable[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](evidence$9: Concurrent[F2]): Stream[F2, Stream[F2, O]]
Like balance but uses an unlimited chunk size.
Alias for through(Balance(Int.MaxValue)).
def balance[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](chunkSize: Int)(evidence$10: Concurrent[F2]): Stream[F2, Stream[F2, O]]
Returns a stream of streams where each inner stream sees an even portion of the
elements of the source stream relative to the number of inner streams taken from
the outer stream. For example, src.balance(chunkSize).take(2) results in two
inner streams, each of which sees roughly half of the elements of the source stream.
The chunkSize parameter specifies the maximum chunk size from the source stream
that should be passed to an inner stream. For completely fair distribution of elements,
use a chunk size of 1. For best performance, use a chunk size of Int.MaxValue.
See fs2.concurrent.Balance.apply for more details.
Alias for through(Balance(chunkSize)).
def balanceTo[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](chunkSize: Int)(pipes: (F2, O) => Unit*)(evidence$11: Concurrent[F2]): Stream[F2, Unit]
Like balance but instead of providing a stream of sources, runs each pipe.
The pipes are run concurrently with each other. Hence, the parallelism factor is equal
to the number of pipes.
Each pipe may have a different implementation, if required; for example one pipe may
process elements while another may send elements for processing to another machine.
Each pipe is guaranteed to see all O pulled from the source stream, unlike broadcast,
where workers see only the elements after the start of each worker evaluation.
Note: the resulting stream will not emit values, even if the pipes do.
If you need to emit Unit values, consider using balanceThrough.
- Value Params
- chunkSize
-
max size of chunks taken from the source stream
- pipes
-
pipes that will concurrently process the work
def balanceTo[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](chunkSize: Int, maxConcurrent: Int)(pipe: (F2, O) => Unit)(evidence$12: Concurrent[F2]): Stream[F2, Unit]
Variant of balanceTo that broadcasts to maxConcurrent instances of a single pipe.
- Value Params
- chunkSize
-
max size of chunks taken from the source stream
- maxConcurrent
-
maximum number of pipes to run concurrently
- pipe
-
pipe to use to process elements
def balanceThrough[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](chunkSize: Int)(pipes: (F2, O) => O2*)(evidence$13: Concurrent[F2]): Stream[F2, O2]
Alias for
through(Balance.through(chunkSize)(pipes). def balanceThrough[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](chunkSize: Int, maxConcurrent: Int)(pipe: (F2, O) => O2)(evidence$14: Concurrent[F2]): Stream[F2, O2]
Variant of balanceThrough that takes the number of concurrent pipes required and a single pipe.
- Value Params
- chunkSize
-
max size of chunks taken from the source stream
- maxConcurrent
-
maximum number of pipes to run concurrently
- pipe
-
pipe to use to process elements
Removes all output values from this stream.
Often used with merge to run one side of the merge for its effect
while getting outputs from the opposite side of the merge.
- Example
- {{{
scala> import cats.effect.IO
scala> Stream.eval(IO(println("x"))).drain.compile.toVector.unsafeRunSync()
res0: Vector[INothing] = Vector()
}}}
Drops n elements of the input, then echoes the rest.
- Example
- {{{
scala> Stream.range(0,10).drop(5).toList
res0: List[Int] = List(5, 6, 7, 8, 9)
}}}
Drops the last element.
- Example
- {{{
scala> Stream.range(0,10).dropLast.toList
res0: List[Int] = List(0, 1, 2, 3, 4, 5, 6, 7, 8)
}}}
Drops the last element if the predicate evaluates to true.
- Example
- {{{
scala> Stream.range(0,10).dropLastIf(_ > 5).toList
res0: List[Int] = List(0, 1, 2, 3, 4, 5, 6, 7, 8)
}}}
Outputs all but the last n elements of the input.
This is a '''pure''' stream operation: if s is a finite pure stream, then s.dropRight(n).toList
is equal to s.toList.reverse.drop(n).reverse.
- Example
- {{{
scala> Stream.range(0,10).dropRight(5).toList
res0: List[Int] = List(0, 1, 2, 3, 4)
}}}
Like dropWhile, but drops the first value which tests false.
- Example
- {{{
scala> Stream.range(0,10).dropThrough(_ != 4).toList
res0: List[Int] = List(5, 6, 7, 8, 9)
}}}
'''Pure:''' if this is a finite pure stream, then this.dropThrough(p).toList is equal to this.toList.dropWhile(p).drop(1)
Drops elements from the head of this stream until the supplied predicate returns false.
- Example
- {{{
scala> Stream.range(0,10).dropWhile(_ != 4).toList
res0: List[Int] = List(4, 5, 6, 7, 8, 9)
}}}
'''Pure:''' this operation maps directly to List.dropWhile
def either[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](that: Stream[F2, O2])(evidence$15: Concurrent[F2]): Stream[F2, Either[O, O2]]
Like [[merge]], but tags each output with the branch it came from.
- Example
- {{{
scala> import scala.concurrent.duration._, cats.effect.{ContextShift, IO, Timer}
scala> implicit val cs: ContextShift[IO] = IO.contextShift(scala.concurrent.ExecutionContext.Implicits.global)
scala> implicit val timer: Timer[IO] = IO.timer(scala.concurrent.ExecutionContext.Implicits.global)
scala> val s1 = Stream.awakeEvery[IO](1000.millis).scan(0)((acc, _) => acc + 1)
scala> val s = s1.either(Stream.sleep_[IO](500.millis) ++ s1).take(10)
scala> s.take(10).compile.toVector.unsafeRunSync()
res0: Vector[Either[Int,Int] ] = Vector(Left(0), Right(0), Left(1), Right(1), Left(2), Right(2), Left(3), Right(3), Left(4), Right(4))
}}}
Alias for flatMap(o => Stream.eval(f(o))).
- Example
- {{{
scala> import cats.effect.IO
scala> Stream(1,2,3,4).evalMap(i => IO(println(i))).compile.drain.unsafeRunSync()
res0: Unit = ()
}}}
Note this operator will de-chunk the stream back into chunks of size 1, which has performance
implications. For maximum performance, evalMapChunk is available, however, with caveats.
def evalMapChunk[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](f: O => F2[O2])(evidence$16: Applicative[F2]): Stream[F2, O2]
Like evalMap, but operates on chunks for performance. This means this operator is not lazy on every single element, but rather on the chunks.
For instance, evalMap would only print twice in the following example (note the take(2)):
- Example
- {{{
scala> import cats.effect.IO
scala> Stream(1,2,3,4).evalMap(i => IO(println(i))).take(2).compile.drain.unsafeRunSync()
res0: Unit = ()
}}}
But with evalMapChunk, it will print 4 times:
{{{
scala> Stream(1,2,3,4).evalMapChunk(i => IO(println(i))).take(2).compile.drain.unsafeRunSync()
res0: Unit = ()
}}}
def evalMapAccumulate[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), S, O2](s: S)(f: (S, O) => F2[(S, O2)]): Stream[F2, (S, O2)]
Like [[Stream#mapAccumulate]], but accepts a function returning an F[_].
- Example
- {{{
scala> import cats.effect.IO
scala> Stream(1,2,3,4).covary[IO].evalMapAccumulate(0)((acc,i) => IO((i, acc + i))).compile.toVector.unsafeRunSync()
res0: Vector[(Int, Int)] = Vector((1,1), (2,3), (3,5), (4,7))
}}}
def evalMapFilter[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](f: O => F2[Option[O2]]): Stream[F2, O2]
Effectfully maps and filters the elements of the stream depending on the optionality of the result of applying the effectful function f.
- Example
- {{{
scala> import cats.effect.IO, cats.syntax.all._
scala> Stream(1, 2, 3, 4, 5).evalMapFilter(n => IO((n * 2).some.filter(_ % 4 == 0))).compile.toList.unsafeRunSync()
res0: List[Int] = List(4, 8)
}}}
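Ignoring the effect, the map-and-filter behaviour reduces to flattening Options on Lists; a pure sketch (the helper name is illustrative, not fs2 API):

```scala
// Pure model of evalMapFilter: apply f to each element, keep only the Somes.
def mapFilterList[A, B](xs: List[A])(f: A => Option[B]): List[B] =
  xs.flatMap(a => f(a).toList)
```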
def evalScan[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](z: O2)(f: (O2, O) => F2[O2]): Stream[F2, O2]
Like [[Stream#scan]], but accepts a function returning an F[_].
- Example
- {{{
scala> import cats.effect.IO
scala> Stream(1,2,3,4).covary[IO].evalScan(0)((acc,i) => IO(acc + i)).compile.toVector.unsafeRunSync()
res0: Vector[Int] = Vector(0, 1, 3, 6, 10)
}}}
def evalTap[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](f: O => F2[O2])(evidence$17: Functor[F2]): Stream[F2, O]
Like observe, but observes with a function O => F[_] instead of a pipe.
Not as powerful as observe, since not all pipes can be represented by O => F[_], but much faster.
Alias for evalMap(o => f(o).as(o)).
def evalTapChunk[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](f: O => F2[O2])(evidence$18: Functor[F2], evidence$19: Applicative[F2]): Stream[F2, O]
Alias for evalMapChunk(o => f(o).as(o)).
Emits true as soon as a matching element is received, else false if no input matches.
'''Pure''': this operation maps to List.exists.
- Returns
-
Either a singleton stream or a never stream:
- If this yields an element x for which p(x), then the result is a singleton stream with the value true, reached after pulling up to the first match.
- If this is a finite stream with no matching element, then the result is a singleton stream with the value false.
- If this is an infinite stream none of whose elements satisfies p, then the result is a never stream. - Example
- {{{
scala> Stream.range(0,10).exists(_ == 4).toList
res0: List[Boolean] = List(true)
scala> Stream.range(0,10).exists(_ == 10).toList
res1: List[Boolean] = List(false)
}}}
Emits only inputs which match the supplied predicate.
This is a '''pure''' operation that maps directly to List.filter.
- Example
- {{{
scala> Stream.range(0,10).filter(_ % 2 == 0).toList
res0: List[Int] = List(0, 2, 4, 6, 8)
}}}
def evalFilter[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](f: O => F2[Boolean])(evidence$20: Functor[F2]): Stream[F2, O]
Like filter, but allows filtering based on an effect.
Note: the result stream will consist of chunks that are empty or 1-element long.
If you want to operate on chunks after using it, consider buffering, e.g. by using buffer.
def evalFilterAsync[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](maxConcurrent: Int)(f: O => F2[Boolean])(evidence$21: Concurrent[F2]): Stream[F2, O]
Like filter, but allows filtering based on an effect, with up to maxConcurrent concurrently running effects.
The ordering of emitted elements is unchanged.
def evalFilterNot[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](f: O => F2[Boolean])(evidence$22: Functor[F2]): Stream[F2, O]
Like filterNot, but allows filtering based on an effect.
Note: the result stream will consist of chunks that are empty or 1-element long.
If you want to operate on chunks after using it, consider buffering, e.g. by using buffer.
def evalFilterNotAsync[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](maxConcurrent: Int)(f: O => F2[Boolean])(evidence$23: Concurrent[F2]): Stream[F2, O]
Like filterNot, but allows filtering based on an effect, with up to maxConcurrent concurrently running effects.
The ordering of emitted elements is unchanged.
Like filter, but the predicate f depends on the previously emitted and current elements.
- Example
- {{{
scala> Stream(1, -1, 2, -2, 3, -3, 4, -4).filterWithPrevious((previous, current) => previous < current).toList
res0: List[Int] = List(1, 2, 3, 4)
}}}
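The subtlety in the example above is that "previous" refers to the last emitted element, not the immediately preceding input (which is why -1 does not reset the comparison). A pure List model of this behaviour (illustrative helper, not the fs2 implementation):

```scala
// Pure model of filterWithPrevious: the predicate compares each element with
// the most recently *emitted* element; skipped elements do not become "previous".
def filterWithPreviousList[A](xs: List[A])(f: (A, A) => Boolean): List[A] =
  xs match {
    case Nil => Nil
    case h :: t =>
      // The head is always emitted and seeds the "previous" value.
      val (kept, _) = t.foldLeft((List(h), h)) { case ((acc, last), x) =>
        if (f(last, x)) (x :: acc, x) else (acc, last)
      }
      kept.reverse
  }
```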
Emits the first input (if any) which matches the supplied predicate.
- Example
- {{{
scala> Stream.range(1,10).find(_ % 2 == 0).toList
res0: List[Int] = List(2)
}}}
'''Pure''': if s is a finite pure stream, s.find(p).toList is equal to s.toList.find(p).toList, where the second toList turns the Option into a List.
Creates a stream whose elements are generated by applying f to each output of the source stream and concatenating all of the results.
- Example
- {{{
scala> Stream(1, 2, 3).flatMap { i => Stream.chunk(Chunk.seq(List.fill(i)(i))) }.toList
res0: List[Int] = List(1, 2, 2, 3, 3, 3)
}}}
Flattens a stream of streams into a single stream by concatenating each stream.
See parJoin and parJoinUnbounded for concurrent flattening of 'n' streams.
Folds all inputs using an initial value z and the supplied binary operator, and emits a single-element stream.
- Example
- Example
- {{{
scala> Stream(1, 2, 3, 4, 5).fold(0)(_ + _).toList
res0: List[Int] = List(15)
}}}
Folds all inputs using the supplied binary operator, and emits a single-element stream, or the empty stream if the input is empty, or the never stream if the input is non-terminating.
- Example
- {{{
scala> Stream(1, 2, 3, 4, 5).fold1(_ + _).toList
res0: List[Int] = List(15)
}}}
Alias for map(f).foldMonoid.
- Example
- {{{
scala> Stream(1, 2, 3, 4, 5).foldMap(_ => 1).toList
res0: List[Int] = List(5)
}}}
Folds this stream with the monoid for O.
- Returns
-
Either a singleton stream or a never stream:
- If this is a finite stream, the result is a singleton stream. If this is empty, the yielded value is the empty value of the Monoid instance.
- If this is a non-terminating stream, whether or not it yields any value, the result is equivalent to Stream.never: it never terminates nor yields any value. - Example
- {{{
scala> Stream(1, 2, 3, 4, 5).foldMonoid.toList
res0: List[Int] = List(15)
}}}
Emits false and halts as soon as a non-matching element is received; or emits a single true value if it reaches the stream end and every input before that matches the predicate; or hangs without emitting values if the input is infinite and all inputs match the predicate.
- Returns
-
Either a singleton or a never stream:
- '''If''' this yields an element x for which ¬ p(x), '''then''' the result is a singleton stream with the value false. Pulling from the result performs all the effects needed until reaching the counterexample x.
- If this is a finite stream with no counterexamples of p, '''then''' the result is a singleton stream with the value true. Pulling from it will perform all the effects of this.
- If this is an infinite stream and all its elements satisfy p, then the result is a never stream. Pulling from that stream will pull all effects from this. - Example
- {{{
scala> Stream(1, 2, 3, 4, 5).forall(_ < 10).toList
res0: List[Boolean] = List(true)
}}}
Partitions the input into a stream of chunks according to a discriminator function.
Each chunk in the source stream is grouped using the supplied discriminator function and the results of the grouping are emitted each time the discriminator function changes values.
Note: there is no limit to how large a group can become. To limit the group size, use groupAdjacentByLimit.
- Example
- {{{
scala> Stream("Hello", "Hi", "Greetings", "Hey").groupAdjacentBy(_.head).toList.map { case (k,vs) => k -> vs.toList }
res0: List[(Char,List[String])] = List((H,List(Hello, Hi)), (G,List(Greetings)), (H,List(Hey)))
}}}
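Only adjacent equal keys are grouped; the same key reappearing after a different one starts a fresh group (hence the two separate 'H' groups in the example). A pure List sketch of that behaviour (illustrative helper, not fs2 internals):

```scala
// Pure model of groupAdjacentBy: fold from the right, extending the current
// group only while the discriminator value stays the same.
def groupAdjacentByList[A, K](xs: List[A])(f: A => K): List[(K, List[A])] =
  xs.foldRight(List.empty[(K, List[A])]) { (x, acc) =>
    val k = f(x)
    acc match {
      case (k2, group) :: rest if k2 == k => (k, x :: group) :: rest // extend group
      case _                              => (k, List(x)) :: acc     // start new group
    }
  }
```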
Like groupAdjacentBy but limits the size of emitted chunks.
- Example
- {{{
scala> Stream.range(0, 12).groupAdjacentByLimit(3)(_ / 4).toList
res0: List[(Int,Chunk[Int])] = List((0,Chunk(0, 1, 2)), (0,Chunk(3)), (1,Chunk(4, 5, 6)), (1,Chunk(7)), (2,Chunk(8, 9, 10)), (2,Chunk(11)))
}}}
def groupWithin[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](n: Int, d: FiniteDuration)(timer: Timer[F2], F: Concurrent[F2]): Stream[F2, Chunk[O]]
Divides this stream into groups of elements received within a time window, or limited by the number of elements, whichever happens first.
Empty groups, which can occur if no elements can be pulled from upstream in a given time window, will not be emitted.
Note: a time window starts each time downstream pulls.
def handleErrorWith[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](h: Throwable => Stream[F2, O2]): Stream[F2, O2]
If this terminates with Stream.raiseError(e), invoke h(e).
- Example
- {{{
scala> Stream(1, 2, 3).append(Stream.raiseError[cats.effect.IO](new RuntimeException)).handleErrorWith(_ => Stream(0)).compile.toList.unsafeRunSync()
res0: List[Int] = List(1, 2, 3, 0)
}}}
Emits the first element of this stream (if non-empty) and then halts.
- Example
- {{{
scala> Stream(1, 2, 3).head.toList
res0: List[Int] = List(1)
}}}
def hold[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](initial: O2)(F: Concurrent[F2]): Stream[F2, Signal[F2, O2]]
Converts a discrete stream to a signal. Returns a single-element stream.
The resulting signal is initially initial, and is updated with the latest value produced by the source. If the source stream is empty, the resulting signal will always be initial.
def holdOption[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](evidence$24: Concurrent[F2]): Stream[F2, Signal[F2, Option[O2]]]
Like hold but does not require an initial value, and hence all output elements are wrapped in Some.
def holdResource[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](initial: O2)(F: Concurrent[F2]): Resource[F2, Signal[F2, O2]]
Like hold but returns a Resource rather than a single-element stream.
def holdOptionResource[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](evidence$25: Concurrent[F2]): Resource[F2, Signal[F2, Option[O2]]]
Like holdResource but does not require an initial value, and hence all output elements are wrapped in Some.
def interleave[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](that: Stream[F2, O2]): Stream[F2, O2]
Deterministically interleaves elements, starting on the left, terminating when the end of either branch is reached naturally.
- Example
- {{{
scala> Stream(1, 2, 3).interleave(Stream(4, 5, 6, 7)).toList
res0: List[Int] = List(1, 4, 2, 5, 3, 6)
}}}
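For finite pure streams, interleave corresponds to zipping and flattening on Lists, which also explains why it stops at the shorter side:

```scala
// Pure model of interleave: pairwise zip, then flatten each pair.
// The trailing elements of the longer side are dropped, because zip
// terminates at the shorter input.
def interleaveList[A](xs: List[A], ys: List[A]): List[A] =
  xs.zip(ys).flatMap { case (a, b) => List(a, b) }
```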
def interleaveAll[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](that: Stream[F2, O2]): Stream[F2, O2]
Deterministically interleaves elements, starting on the left, terminating when the ends of both branches are reached naturally.
- Example
- {{{
scala> Stream(1, 2, 3).interleaveAll(Stream(4, 5, 6, 7)).toList
res0: List[Int] = List(1, 4, 2, 5, 3, 6, 7)
}}}
def interruptAfter[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](duration: FiniteDuration)(evidence$26: Concurrent[F2], evidence$27: Timer[F2]): Stream[F2, O]
Interrupts this stream after the specified duration has passed.
def interruptWhen[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](haltWhenTrue: Stream[F2, Boolean])(F2: Concurrent[F2]): Stream[F2, O]
Lets through the s2 branch as long as the s1 branch is false, listening asynchronously for the left branch to become true.
This halts as soon as either branch halts.
Consider using the overload that takes a Signal, Deferred or F[Either[Throwable, Unit]].
def interruptWhen[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](haltWhenTrue: Deferred[F2, Either[Throwable, Unit]])(evidence$28: Concurrent[F2]): Stream[F2, O]
Alias for interruptWhen(haltWhenTrue.get).
def interruptWhen[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](haltWhenTrue: Signal[F2, Boolean])(evidence$29: Concurrent[F2]): Stream[F2, O]
Alias for interruptWhen(haltWhenTrue.discrete).
def interruptWhen[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](haltOnSignal: F2[Either[Throwable, Unit]])(F2: Concurrent[F2]): Stream[F2, O]
Interrupts the stream when haltOnSignal finishes its evaluation.
def interruptScope[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](evidence$30: Concurrent[F2]): Stream[F2, O]
Creates a scope that may be interrupted by calling scope#interrupt.
Emits the specified separator between every pair of elements in the source stream.
- Example
- {{{
scala> Stream(1, 2, 3, 4, 5).intersperse(0).toList
res0: List[Int] = List(1, 0, 2, 0, 3, 0, 4, 0, 5)
}}}
Returns the last element of this stream, if non-empty.
- Example
- {{{
scala> Stream(1, 2, 3).last.toList
res0: List[Option[Int]] = List(Some(3))
}}}
Returns the last element of this stream, if non-empty, otherwise the supplied fallback value.
- Example
- {{{
scala> Stream(1, 2, 3).lastOr(0).toList
res0: List[Int] = List(3)
scala> Stream.empty.lastOr(0).toList
res1: List[Int] = List(0)
}}}
def lines[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](out: PrintStream)(F: Sync[F2], ev: O <:< String): Stream[F2, Unit]
Writes this stream of strings to the supplied PrintStream.
Note: printing to the PrintStream is performed synchronously.
Use linesAsync(out, blocker) if synchronous writes are a concern.
def linesAsync[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](out: PrintStream, blocker: Blocker)(F: Sync[F2], cs: ContextShift[F2], ev: O <:< String): Stream[F2, Unit]
Writes this stream of strings to the supplied PrintStream.
Note: printing to the PrintStream is performed on the supplied blocking execution context.
Applies the specified pure function to each input and emits the result.
- Example
- {{{
scala> Stream("Hello", "World!").map(_.size).toList
res0: List[Int] = List(5, 6)
}}}
Maps a running total according to S and the input with the function f.
- Example
- {{{
scala> Stream("Hello", "World").mapAccumulate(0)((l, s) => (l + s.length, s.head)).toVector
res0: Vector[(Int, Char)] = Vector((5,H), (10,W))
}}}
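The standard library has no mapAccumulate, but the pure behaviour is a foldLeft that threads a state while emitting (state, output) pairs; a sketch with illustrative names:

```scala
// Pure model of mapAccumulate: thread state S through the list, emitting the
// updated state paired with each produced output.
def mapAccumulateList[S, A, B](xs: List[A])(s: S)(f: (S, A) => (S, B)): List[(S, B)] =
  xs.foldLeft((s, Vector.empty[(S, B)])) { case ((st, out), a) =>
    val (st2, b) = f(st, a)
    (st2, out :+ ((st2, b)))
  }._2.toList
```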
def mapAsync[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](maxConcurrent: Int)(f: O => F2[O2])(evidence$31: Concurrent[F2]): Stream[F2, O2]
Alias for parEvalMap.
def mapAsyncUnordered[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](maxConcurrent: Int)(f: O => F2[O2])(evidence$32: Concurrent[F2]): Stream[F2, O2]
Alias for parEvalMapUnordered.
Applies the specified pure function to each chunk in this stream.
- Example
- {{{
scala> Stream(1, 2, 3).append(Stream(4, 5, 6)).mapChunks { c => val ints = c.toInts; for (i <- 0 until ints.values.size) ints.values(i) = 0; ints }.toList
res0: List[Int] = List(0, 0, 0, 0, 0, 0)
}}}
Behaves like the identity function but halts the stream on an error and does not return the error.
- Example
- {{{
scala> (Stream(1,2,3) ++ Stream.raiseError[cats.effect.IO](new RuntimeException) ++ Stream(4, 5, 6)).mask.compile.toList.unsafeRunSync()
res0: List[Int] = List(1, 2, 3)
}}}
def switchMap[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](f: O => Stream[F2, O2])(F2: Concurrent[F2]): Stream[F2, O2]
Like flatMap but interrupts the inner stream when new elements arrive in the outer stream.
The implementation will try to preserve chunks like merge.
Finalizers of each inner stream are guaranteed to run before the next inner stream starts.
When the outer stream stops gracefully, the currently running inner stream will continue to run.
When an inner stream terminates or is interrupted, nothing happens until the next element arrives in the outer stream (i.e. the outer stream holds the stream open during this time, or else the stream terminates).
When either the inner or outer stream fails, the entire stream fails and the finalizer of the inner stream runs before the outer one.
def merge[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](that: Stream[F2, O2])(F2: Concurrent[F2]): Stream[F2, O2]
Interleaves the two inputs nondeterministically. The output stream halts after BOTH s1 and s2 terminate normally, or in the event of an uncaught failure on either s1 or s2. Has the property that merge(Stream.empty, s) == s and merge(raiseError(e), s) will eventually terminate with raiseError(e), possibly after emitting some elements of s first.
The implementation always tries to pull one chunk from each side before waiting for it to be consumed by the resulting stream. As such, there may be up to two chunks (one from each stream) waiting to be processed while the resulting stream is processing elements.
Also note that if either side produces an empty chunk, the processing on that side continues, without the downstream needing to consume the result.
If either side does not emit anything (i.e. as a result of drain), that side will continue to run even when the resulting stream has not asked for more data.
Note that even though this is equivalent to Stream(this, that).parJoinUnbounded, this implementation is a little more efficient.
- Example
- {{{
scala> import scala.concurrent.duration._, cats.effect.{ContextShift, IO, Timer}
scala> implicit val cs: ContextShift[IO] = IO.contextShift(scala.concurrent.ExecutionContext.Implicits.global)
scala> implicit val timer: Timer[IO] = IO.timer(scala.concurrent.ExecutionContext.Implicits.global)
scala> val s1 = Stream.awakeEvery[IO](500.millis).scan(0)((acc, _) => acc + 1)
scala> val s = s1.merge(Stream.sleep_[IO](250.millis) ++ s1)
scala> s.take(6).compile.toVector.unsafeRunSync()
res0: Vector[Int] = Vector(0, 0, 1, 1, 2, 2)
}}}
def mergeHaltBoth[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](that: Stream[F2, O2])(evidence$33: Concurrent[F2]): Stream[F2, O2]
Like merge, but halts as soon as either branch halts.
def mergeHaltL[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](that: Stream[F2, O2])(evidence$34: Concurrent[F2]): Stream[F2, O2]
Like merge, but halts as soon as the s1 branch halts.
Note: it is not guaranteed that the last element of the stream will come from s1.
def mergeHaltR[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](that: Stream[F2, O2])(evidence$35: Concurrent[F2]): Stream[F2, O2]
Like merge, but halts as soon as the s2 branch halts.
Note: it is not guaranteed that the last element of the stream will come from s2.
Emits each output wrapped in a Some and emits a None at the end of the stream.
s.noneTerminate.unNoneTerminate == s
- Example
- {{{
scala> Stream(1,2,3).noneTerminate.toList
res0: List[Option[Int]] = List(Some(1), Some(2), Some(3), None)
}}}
def onComplete[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](s2: => Stream[F2, O2]): Stream[F2, O2]
Runs s2 after this, regardless of errors during this, then reraises any errors encountered during this.
Note: this should not be used for resource cleanup! Use bracket or onFinalize instead.
- Example
- {{{
scala> Stream(1, 2, 3).onComplete(Stream(4, 5)).toList
res0: List[Int] = List(1, 2, 3, 4, 5)
}}}
def onFinalize[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](f: F2[Unit])(F2: Applicative[F2]): Stream[F2, O]
Runs the supplied effectful action at the end of this stream, regardless of how the stream terminates.
def onFinalizeWeak[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](f: F2[Unit])(F2: Applicative[F2]): Stream[F2, O]
Like onFinalize but does not introduce a scope, allowing finalization to occur after subsequent appends or other scope-preserving transformations.
Scopes can be manually introduced via scope if desired.
Example use case:
a.concurrently(b).onFinalizeWeak(f).compile.resource.use(g)
In this example, use of onFinalize would result in b shutting down before g is run, because onFinalize creates a scope whose lifetime is extended over the compiled resource. By using onFinalizeWeak instead, f is attached to the scope governing concurrently.
def onFinalizeCase[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](f: ExitCase[Throwable] => F2[Unit])(F2: Applicative[F2]): Stream[F2, O]
Like onFinalize but provides the reason for finalization as an ExitCase[Throwable].
def onFinalizeCaseWeak[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](f: ExitCase[Throwable] => F2[Unit])(F2: Applicative[F2]): Stream[F2, O]
Like onFinalizeCase but does not introduce a scope, allowing finalization to occur after subsequent appends or other scope-preserving transformations.
Scopes can be manually introduced via scope if desired.
See onFinalizeWeak for more details on semantics.
def parEvalMap[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](maxConcurrent: Int)(f: O => F2[O2])(evidence$36: Concurrent[F2]): Stream[F2, O2]
Like evalMap, but will evaluate effects in parallel, emitting the results downstream in the same order as the input stream. The number of concurrent effects is limited by the maxConcurrent parameter.
See parEvalMapUnordered if there is no requirement to retain the order of the original stream.
- Example
- {{{
scala> import cats.effect.{ContextShift, IO}
scala> implicit val cs: ContextShift[IO] = IO.contextShift(scala.concurrent.ExecutionContext.Implicits.global)
scala> Stream(1,2,3,4).covary[IO].parEvalMap(2)(i => IO(println(i))).compile.drain.unsafeRunSync()
res0: Unit = ()
}}}
def parEvalMapUnordered[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](maxConcurrent: Int)(f: O => F2[O2])(evidence$37: Concurrent[F2]): Stream[F2, O2]
Like evalMap, but will evaluate effects in parallel, emitting the results downstream. The number of concurrent effects is limited by the maxConcurrent parameter.
See parEvalMap if retaining the original order of the stream is required.
- Example
- {{{
scala> import cats.effect.{ContextShift, IO}
scala> implicit val cs: ContextShift[IO] = IO.contextShift(scala.concurrent.ExecutionContext.Implicits.global)
scala> Stream(1,2,3,4).covary[IO].parEvalMapUnordered(2)(i => IO(println(i))).compile.drain.unsafeRunSync()
res0: Unit = ()
}}}
def parJoin[F2 <: ([_$57] =>> Any), O2](maxOpen: Int)(ev: O <:< Stream[F2, O2], ev2: F[Any] <:< F2[Any], F2: Concurrent[F2]): Stream[F2, O2]
Nondeterministically merges a stream of streams (outer) into a single stream, opening at most maxOpen streams at any point in time.
The outer stream is evaluated and each resulting inner stream is run concurrently, up to maxOpen streams. Once this limit is reached, evaluation of the outer stream is paused until one or more inner streams finish evaluating.
When the outer stream stops gracefully, all inner streams continue to run, resulting in a stream that will stop when all inner streams finish their evaluation.
When the outer stream fails, evaluation of all inner streams is interrupted and the resulting stream will fail with the same failure.
When any of the inner streams fail, the outer stream and all other inner streams are interrupted, resulting in a stream that fails with the error of the stream that caused the initial failure.
Finalizers on each inner stream are run at the end of the inner stream, concurrently with other stream computations.
Finalizers on the outer stream are run after all inner streams have been pulled from the outer stream but not before all inner streams terminate -- hence finalizers on the outer stream will run AFTER the LAST finalizer on the very last inner stream.
Finalizers on the returned stream are run after the outer stream has finished and all open inner streams have finished.
- Value Params
- maxOpen
-
Maximum number of open inner streams at any time. Must be > 0.
def parJoinUnbounded[F2 <: ([_$67] =>> Any), O2](ev: O <:< Stream[F2, O2], ev2: F[Any] <:< F2[Any], F2: Concurrent[F2]): Stream[F2, O2]
Like parJoin but races all inner streams simultaneously.
def parZip[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](that: Stream[F2, O2])(evidence$38: Concurrent[F2]): Stream[F2, (O, O2)]
Concurrent zip.
It combines elements pairwise and in order like zip, but instead of pulling from the left stream and then from the right stream, it evaluates both pulls concurrently.
The resulting stream terminates when either stream terminates.
The concurrency is bounded following a model of successive races: both sides start evaluation of a single element concurrently, and whichever finishes first waits for the other to catch up and the resulting pair to be emitted, at which point the process repeats. This means that no branch is allowed to get ahead by more than one element.
Notes:
- Effects within each stream are executed in order; they are only concurrent with respect to each other.
- The output of parZip is guaranteed to be the same as zip, although the order in which effects are executed differs.
def parZipWith[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O, O3, O4](that: Stream[F2, O3])(f: (O2, O3) => O4)(evidence$39: Concurrent[F2]): Stream[F2, O4]
Like parZip, but combines elements pairwise with a function instead of tupling them.
def pauseWhen[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](pauseWhenTrue: Stream[F2, Boolean])(F2: Concurrent[F2]): Stream[F2, O]
Pauses this stream when pauseWhenTrue emits true, resuming when false is emitted.
def pauseWhen[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](pauseWhenTrue: Signal[F2, Boolean])(evidence$40: Concurrent[F2]): Stream[F2, O]
Alias for pauseWhen(pauseWhenTrue.discrete).
def prefetchN[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](n: Int)(evidence$42: Concurrent[F2]): Stream[F2, O]
Behaves like identity, but starts fetches of up to n chunks in parallel with downstream consumption, enabling processing on either side of the prefetchN to run in parallel.
def rechunkRandomlyWithSeed[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](minFactor: Double, maxFactor: Double)(seed: Long): Stream[F2, O]
Rechunks the stream such that output chunks are within [inputChunk.size * minFactor, inputChunk.size * maxFactor].
The pseudo random generator is deterministic based on the supplied seed.
def rechunkRandomly[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](minFactor: Double, maxFactor: Double)(evidence$43: Sync[F2]): Stream[F2, O]
Rechunks the stream such that output chunks are within [inputChunk.size * minFactor, inputChunk.size * maxFactor].
Reduces this stream with the Semigroup for O.
- Example
- {{{
scala> Stream("The", "quick", "brown", "fox").intersperse(" ").reduceSemigroup.toList
res0: List[String] = List(The quick brown fox)
}}}
Repartitions the input with the function f. On each step f is applied to the input and all elements but the last of the resulting sequence are emitted. The last element is then appended to the next input using the Semigroup S.
- Example
- {{{
scala> Stream("Hel", "l", "o Wor", "ld").repartition(s => Chunk.array(s.split(" "))).toList
res0: List[String] = List(Hello, World)
}}}
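The carry-over mechanics can be sketched on plain Lists, specialised to String with concatenation standing in for the Semigroup combine (illustrative helper, not fs2 internals):

```scala
// Pure sketch of repartition for Strings: on each input, combine it with the
// carried-over remainder, apply f, emit all but the last part, and carry the
// last part forward. The final remainder is emitted at the end.
def repartitionList(xs: List[String])(f: String => List[String]): List[String] = {
  val (out, rem) = xs.foldLeft((Vector.empty[String], Option.empty[String])) {
    case ((acc, carry), s) =>
      val parts = f(carry.fold(s)(_ + s))
      if (parts.isEmpty) (acc, None)
      else (acc ++ parts.init, Some(parts.last))
  }
  (out ++ rem).toList
}
```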
Repeat this stream an infinite number of times.
s.repeat == s ++ s ++ s ++ ...
- Example
- {{{
scala> Stream(1,2,3).repeat.take(8).toList
res0: List[Int] = List(1, 2, 3, 1, 2, 3, 1, 2)
}}}
Repeat this stream a given number of times.
s.repeatN(n) == s ++ s ++ s ++ ... (n times)
- Example
- {{{
scala> Stream(1,2,3).repeatN(3).take(100).toList
res0: List[Int] = List(1, 2, 3, 1, 2, 3, 1, 2, 3)
}}}
def rethrow[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](ev: O <:< Either[Throwable, O2], rt: RaiseThrowable[F2]): Stream[F2, O2]
Converts a Stream[F,Either[Throwable,O]] to a Stream[F,O], which emits right values and fails upon the first Left(t).
Preserves chunkiness.
- Example
- {{{
scala> Stream(Right(1), Right(2), Left(new RuntimeException), Right(3)).rethrow[cats.effect.IO, Int].handleErrorWith(_ => Stream(-1)).compile.toList.unsafeRunSync()
res0: List[Int] = List(-1)
}}}
Left fold which outputs all intermediate results.
- Example
- {{{
scala> Stream(1,2,3,4).scan(0)(_ + _).toList
res0: List[Int] = List(0, 1, 3, 6, 10)
}}}
More generally:
Stream().scan(z)(f) == Stream(z)
Stream(x1).scan(z)(f) == Stream(z, f(z,x1))
Stream(x1,x2).scan(z)(f) == Stream(z, f(z,x1), f(f(z,x1),x2))
etc.
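For pure streams these are exactly the laws of List.scanLeft, which can be verified directly:

```scala
// scan on a pure stream corresponds to List.scanLeft: the seed is emitted
// first, followed by every intermediate fold result.
def scanList[A, B](xs: List[A])(z: B)(f: (B, A) => B): List[B] =
  xs.scanLeft(z)(f)
```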
Like [[scan]], but uses the first element of the stream as the seed.
- Example
- {{{
scala> Stream(1,2,3,4).scan1(_ + _).toList
res0: List[Int] = List(1, 3, 6, 10)
}}}
Like [[scan1]], but uses the implicitly available Semigroup[O2] to combine elements.
- Example
- {{{
scala> Stream(1,2,3,4).scan1Semigroup.toList
res0: List[Int] = List(1, 3, 6, 10)
}}}
Like
scan, but f is applied to each chunk of the source stream.
The resulting chunk is emitted and the result of the chunk is used in the
next invocation of f.
Many stateful pipes can be implemented efficiently (i.e., supporting fusion) with this method.
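The chunk-at-a-time stateful transform can be sketched over plain Lists, modeling a stream as a list of chunks; the name scanChunksModel is illustrative, and this is a semantic sketch only, not the fs2 implementation (which works in terms of Chunk and Pull):

```scala
// Semantic sketch of scanChunks: thread a state S through the chunks,
// emitting one output chunk per input chunk. A stream is modeled here
// as a List of chunks (List[List[O]]).
def scanChunksModel[S, O, O2](chunks: List[List[O]], init: S)(
    f: (S, List[O]) => (S, List[O2])
): List[List[O2]] =
  chunks
    .foldLeft((init, List.empty[List[O2]])) { case ((s, acc), c) =>
      val (s2, out) = f(s, c)
      (s2, out :: acc)
    }
    ._2
    .reverse

// Example stateful pipe: a running sum per element, with the partial
// sum carried across chunk boundaries in the state.
val summed = scanChunksModel(List(List(1, 2), List(3)), 0) { (s, c) =>
  val out = c.scanLeft(s)(_ + _).tail
  (out.lastOption.getOrElse(s), out)
}
```

Because the state is threaded across chunks rather than across individual elements, the per-chunk function can use fast chunk-level operations internally, which is what makes fusion-friendly stateful pipes possible.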
def scanChunksOpt[S, O2 >: O, O3](init: S)(f: S => Option[Chunk[O2] => (S, Chunk[O3])]): Stream[F, O3]
More general version of
scanChunks where the current state (i.e., S) can be inspected
to determine if another chunk should be pulled or if the stream should terminate.
Termination is signaled by returning None from f. Otherwise, a function which consumes
the next chunk is returned wrapped in Some.- Example
- {{{
scala> def take[F[_], O](s: Stream[F, O], n: Int): Stream[F, O] =
|   s.scanChunksOpt(n) { n => if (n <= 0) None else Some((c: Chunk[O]) => if (c.size < n) (n - c.size, c) else (0, c.take(n))) }
scala> take(Stream.range(0,100), 5).toList
res0: List[Int] = List(0, 1, 2, 3, 4)
}}}
Alias for
map(f).scanMonoid.- Example
- {{{
scala> Stream("a", "aa", "aaa", "aaaa").scanMap(_.length).toList
res0: List[Int] = List(0, 1, 3, 6, 10)
}}}
Folds this stream with the monoid for
O while emitting all intermediate results.- Example
- {{{
scala> Stream(1, 2, 3, 4).scanMonoid.toList
res0: List[Int] = List(0, 1, 3, 6, 10)
}}}
Introduces an explicit scope.
Scopes are normally introduced automatically, when using bracket or similar
operations that acquire resources and run finalizers. Manual scope introduction
is useful when using onFinalizeWeak/onFinalizeCaseWeak, where no scope
is introduced.
def showLines[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](out: PrintStream)(F: Sync[F2], showO: Show[O2]): Stream[F2, Unit]
Writes this stream to the supplied
PrintStream, converting each element to a String via Show.
Note: printing to the PrintStream is performed synchronously.
Use showLinesAsync(out, blocker) if synchronous writes are a concern.
def showLinesAsync[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](out: PrintStream, blocker: Blocker)(evidence$44: Sync[F2], evidence$45: ContextShift[F2], evidence$46: Show[O2]): Stream[F2, Unit]
Writes this stream to the supplied
PrintStream, converting each element to a String via Show.
Note: printing to the PrintStream is performed on the supplied blocking execution context.
def showLinesStdOut[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](F: Sync[F2], showO: Show[O2]): Stream[F2, Unit]
Writes this stream to standard out, converting each element to a
String via Show.
Note: printing to standard out is performed synchronously.
Use showLinesStdOutAsync(blockingEc) if synchronous writes are a concern.
def showLinesStdOutAsync[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O](blocker: Blocker)(evidence$47: Sync[F2], evidence$48: ContextShift[F2], evidence$49: Show[O2]): Stream[F2, Unit]
Writes this stream to standard out, converting each element to a
String via Show.
Note: printing to the PrintStream is performed on the supplied blocking execution context.
Groups inputs in fixed size chunks by passing a "sliding window"
of size n over them. If the input contains less than or equal to n elements, only one chunk of this size will be emitted.- Throws
- scala.IllegalArgumentException
- Example
- {{{
scala> Stream(1, 2, 3, 4).sliding(2).toList
res0: List[scala.collection.immutable.Queue[Int] ] = List(Queue(1, 2), Queue(2, 3), Queue(3, 4))
}}}
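The standard library's `sliding` on collections has the same step-1 windowing behavior and can serve as a reference point for the example above:

```scala
// stdlib parallel to Stream.sliding: windows of size 2 advancing one
// element at a time over List(1, 2, 3, 4).
val windows: List[List[Int]] = List(1, 2, 3, 4).sliding(2).toList
```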
def spawn[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](evidence$50: Concurrent[F2]): Stream[F2, Fiber[F2, Unit]]
Starts this stream and cancels it as finalization of the returned stream.
Breaks the input into chunks where the delimiter matches the predicate.
The delimiter does not appear in the output. Two adjacent delimiters in the
input result in an empty chunk in the output.
- Example
- {{{
scala> Stream.range(0, 10).split(_ % 4 == 0).toList
res0: List[Chunk[Int] ] = List(empty, Chunk(1, 2, 3), Chunk(5, 6, 7), Chunk(9))
}}}
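The split semantics can be modeled over a plain List; the helper name splitModel is illustrative, and this is a sketch of the behavior on this input rather than the fs2 implementation:

```scala
// Semantic model of split: break a sequence at elements matching the
// predicate. Delimiters are dropped, and a delimiter at the start (or
// two adjacent delimiters) yields an empty segment, as in the example.
def splitModel[O](xs: List[O])(p: O => Boolean): List[List[O]] =
  xs.foldRight(List(List.empty[O])) { (o, acc) =>
    if (p(o)) List.empty[O] :: acc
    else (o :: acc.head) :: acc.tail
  }

val segments = splitModel((0 until 10).toList)(_ % 4 == 0)
```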
Emits all elements of the input except the first one.
- Example
- {{{
scala> Stream(1,2,3).tail.toList
res0: List[Int] = List(2, 3)
}}}
Emits the first
n elements of this stream.- Example
- {{{
scala> Stream.range(0,1000).take(5).toList
res0: List[Int] = List(0, 1, 2, 3, 4)
}}}
Emits the last
n elements of the input.- Example
- {{{
scala> Stream.range(0,1000).takeRight(5).toList
res0: List[Int] = List(995, 996, 997, 998, 999)
}}}
Like takeWhile, but emits the first value which tests false.
- Example
- {{{
scala> Stream.range(0,1000).takeThrough(_ != 5).toList
res0: List[Int] = List(0, 1, 2, 3, 4, 5)
}}}
Emits the longest prefix of the input for which all elements test true according to
f.- Example
- {{{
scala> Stream.range(0,1000).takeWhile(_ != 5).toList
res0: List[Int] = List(0, 1, 2, 3, 4)
}}}
def through[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2](f: Stream[F, O] => Stream[F2, O2]): Stream[F2, O2]
Transforms this stream using the given
Pipe.- Example
- {{{
scala> Stream("Hello", "world").through(text.utf8Encode).toVector.toArray
res0: Array[Byte] = Array(72, 101, 108, 108, 111, 119, 111, 114, 108, 100)
}}}
def through2[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2, O3](s2: Stream[F2, O2])(f: (Stream[F, O], Stream[F2, O2]) => Stream[F2, O3]): Stream[F2, O3]
Transforms this stream and
s2 using the given Pipe2.
def timeout[F2 >: ([x] =>> F[x]) <: ([x] =>> Any)](timeout: FiniteDuration)(evidence$51: Concurrent[F2], evidence$52: Timer[F2]): Stream[F2, O]
Fails this stream with a TimeoutException if it does not complete within given
timeout.
def translate[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), G <: ([_$78] =>> Any)](u: FunctionK[F2, G]): Stream[G, O]
Translates effect type from
F to G using the supplied FunctionK.
Note: the resulting stream is not interruptible in all cases. To get an interruptible
stream, use translateInterruptible instead, which requires a Concurrent[G] instance.
def translateInterruptible[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), G <: ([_$79] =>> Any)](u: FunctionK[F2, G])(evidence$53: Concurrent[G]): Stream[G, O]
Translates effect type from
F to G using the supplied FunctionK.
Converts the input to a stream of 1-element chunks.
- Example
- {{{
scala> (Stream(1,2,3) ++ Stream(4,5,6)).unchunk.chunks.toList
res0: List[Chunk[Int] ] = List(Chunk(1), Chunk(2), Chunk(3), Chunk(4), Chunk(5), Chunk(6))
}}}
Filters any 'None'.
- Example
- {{{
scala> Stream(Some(1), Some(2), None, Some(3), None).unNone.toList
res0: List[Int] = List(1, 2, 3)
}}}
Halts the input stream at the first
None.- Example
- {{{
scala> Stream(Some(1), Some(2), None, Some(3), None).unNoneTerminate.toList
res0: List[Int] = List(1, 2)
}}}
def zipAll[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O, O3](that: Stream[F2, O3])(pad1: O2, pad2: O3): Stream[F2, (O2, O3)]
Deterministically zips elements, terminating when the ends of both branches
are reached naturally, padding the left branch with pad1 and padding the right branch
with pad2 as necessary.- Example
- {{{
scala> Stream(1,2,3).zipAll(Stream(4,5,6,7))(0,0).toList
res0: List[(Int,Int)] = List((1,4), (2,5), (3,6), (0,7))
}}}
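The standard library's collections expose the same padding behavior, which can serve as a reference point for the example above:

```scala
// stdlib parallel to Stream.zipAll: the shorter side is padded with the
// given element until both sides are exhausted.
val zipped: List[(Int, Int)] = List(1, 2, 3).zipAll(List(4, 5, 6, 7), 0, 0)
```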
def zipAllWith[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O, O3, O4](that: Stream[F2, O3])(pad1: O2, pad2: O3)(f: (O2, O3) => O4): Stream[F2, O4]
Deterministically zips elements with the specified function, terminating
when the ends of both branches are reached naturally, padding the left
branch with pad1 and padding the right branch with pad2 as necessary.- Example
- {{{
scala> Stream(1,2,3).zipAllWith(Stream(4,5,6,7))(0, 0)(_ + _).toList
res0: List[Int] = List(5, 7, 9, 7)
}}}
Deterministically zips elements, terminating when the end of either branch is reached naturally.
- Example
- {{{
scala> Stream(1, 2, 3).zip(Stream(4, 5, 6, 7)).toList
res0: List[(Int,Int)] = List((1,4), (2,5), (3,6))
}}}
Like
zip, but selects the right values only.
Useful with timed streams; the example below will emit a number every 100 milliseconds.
- Example
- {{{
scala> import scala.concurrent.duration._, cats.effect.{ContextShift, IO, Timer}
scala> implicit val cs: ContextShift[IO] = IO.contextShift(scala.concurrent.ExecutionContext.Implicits.global)
scala> implicit val timer: Timer[IO] = IO.timer(scala.concurrent.ExecutionContext.Implicits.global)
scala> val s = Stream.fixedDelay(100.millis) zipRight Stream.range(0, 5)
scala> s.compile.toVector.unsafeRunSync()
res0: Vector[Int] = Vector(0, 1, 2, 3, 4)
}}}
Like
zip, but selects the left values only.
Useful with timed streams; the example below will emit a number every 100 milliseconds.
- Example
- {{{
scala> import scala.concurrent.duration._, cats.effect.{ContextShift, IO, Timer}
scala> implicit val cs: ContextShift[IO] = IO.contextShift(scala.concurrent.ExecutionContext.Implicits.global)
scala> implicit val timer: Timer[IO] = IO.timer(scala.concurrent.ExecutionContext.Implicits.global)
scala> val s = Stream.range(0, 5) zipLeft Stream.fixedDelay(100.millis)
scala> s.compile.toVector.unsafeRunSync()
res0: Vector[Int] = Vector(0, 1, 2, 3, 4)
}}}
def zipWith[F2 >: ([x] =>> F[x]) <: ([x] =>> Any), O2 >: O, O3, O4](that: Stream[F2, O3])(f: (O2, O3) => O4): Stream[F2, O4]
Deterministically zips elements using the specified function,
terminating when the end of either branch is reached naturally.
- Example
- {{{
scala> Stream(1, 2, 3).zipWith(Stream(4, 5, 6, 7))(_ + _).toList
res0: List[Int] = List(5, 7, 9)
}}}
Zips the elements of the input stream with its indices, and returns the new stream.
- Example
- {{{
scala> Stream("The", "quick", "brown", "fox").zipWithIndex.toList
res0: List[(String,Long)] = List((The,0), (quick,1), (brown,2), (fox,3))
}}}
Zips each element of this stream with the next element wrapped into Some.
The last element is zipped with None.- Example
- {{{
scala> Stream("The", "quick", "brown", "fox").zipWithNext.toList
res0: List[(String,Option[String] )] = List((The,Some(quick)), (quick,Some(brown)), (brown,Some(fox)), (fox,None))
}}}
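For a pure sequence, the behavior can be modeled with stdlib zips; the helper name zipWithNextModel is illustrative, and this is a sketch only:

```scala
// Model of zipWithNext on a pure sequence: pair each element with the
// following element wrapped in Some; the last element pairs with None.
def zipWithNextModel[A](xs: List[A]): List[(A, Option[A])] =
  xs.zip(xs.drop(1).map(Some(_)) :+ None)
```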
Zips each element of this stream with the previous element wrapped into Some.
The first element is zipped with None.- Example
- {{{
scala> Stream("The", "quick", "brown", "fox").zipWithPrevious.toList
res0: List[(Option[String] ,String)] = List((None,The), (Some(The),quick), (Some(quick),brown), (Some(brown),fox))
}}}
Zips each element of this stream with its previous and next element wrapped into Some.
The first element is zipped with None as the previous element,
the last element is zipped with None as the next element.- Example
- {{{
scala> Stream("The", "quick", "brown", "fox").zipWithPreviousAndNext.toList
res0: List[(Option[String] ,String,Option[String] )] = List((None,The,Some(quick)), (Some(The),quick,Some(brown)), (Some(quick),brown,Some(fox)), (Some(brown),fox,None))
}}}
Zips the input with a running total according to
S, up to but not including the current element. Thus the initial z value is the first emitted to the output:- See also
- Example
- {{{
scala> Stream("uno", "dos", "tres", "cuatro").zipWithScan(0)(_ + _.length).toList
res0: List[(String,Int)] = List((uno,0), (dos,3), (tres,6), (cuatro,10))
}}}
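For pure streams this corresponds to zipping the input with its own scanLeft, whose extra final element is simply dropped by the zip; the helper name zipWithScanModel is illustrative, and this is a reference sketch rather than the fs2 implementation:

```scala
// Model of zipWithScan: pair each element with the running total
// computed up to but not including it, i.e. zip with scanLeft's output
// (scanLeft emits the seed first, so element i pairs with the total of
// elements 0 until i).
def zipWithScanModel[A, S](xs: List[A], z: S)(f: (S, A) => S): List[(A, S)] =
  xs.zip(xs.scanLeft(z)(f))

val totals =
  zipWithScanModel(List("uno", "dos", "tres", "cuatro"), 0)(_ + _.length)
```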
Zips the input with a running total according to
S, including the current element. Thus the initial z value is the first emitted to the output:- See also
- Example
- {{{
scala> Stream("uno", "dos", "tres", "cuatro").zipWithScan1(0)(_ + _.length).toList
res0: List[(String, Int)] = List((uno,3), (dos,6), (tres,10), (cuatro,16))
}}}