compress
Provides utilities for compressing/decompressing byte streams.
Methods
def deflate[F[_]](level: Int, nowrap: Boolean, bufferSize: Int, strategy: Int): Pipe[F, Byte, Byte]
Returns a Pipe that deflates (compresses) its input elements using a java.util.zip.Deflater with the parameters level, nowrap and strategy.

Value Params:
- bufferSize: size of the internal buffer that is used by the compressor. Default size is 32 KB.
- level: the compression level (0-9)
- nowrap: if true then use GZIP compatible compression
- strategy: compression strategy -- see java.util.zip.Deflater for details
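The pipe wraps java.util.zip.Deflater, so its parameters map directly onto that class. A minimal sketch of the underlying API in plain Scala (no fs2; the object name DeflateDemo is hypothetical), showing how level, nowrap and strategy are applied:

```scala
import java.io.ByteArrayOutputStream
import java.util.zip.Deflater

object DeflateDemo {
  // Compress a byte array the way the deflate pipe does internally:
  // a Deflater configured with level, nowrap and strategy, drained
  // through a fixed-size working buffer.
  def deflate(input: Array[Byte], level: Int, nowrap: Boolean, strategy: Int): Array[Byte] = {
    val deflater = new Deflater(level, nowrap)
    deflater.setStrategy(strategy)
    deflater.setInput(input)
    deflater.finish()
    val out = new ByteArrayOutputStream()
    val buf = new Array[Byte](32 * 1024) // 32 KB, matching the pipe's default bufferSize
    while (!deflater.finished()) {
      val n = deflater.deflate(buf)
      out.write(buf, 0, n)
    }
    deflater.end()
    out.toByteArray
  }

  def main(args: Array[String]): Unit = {
    val data = Array.fill(1000)('a'.toByte)
    val compressed = deflate(data, level = 9, nowrap = false, strategy = Deflater.DEFAULT_STRATEGY)
    println(s"${data.length} bytes -> ${compressed.length} bytes")
  }
}
```

Highly repetitive input like the run of 'a' bytes above compresses to a small fraction of its original size.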
def inflate[F[_]](nowrap: Boolean, bufferSize: Int)(ev: RaiseThrowable[F]): Pipe[F, Byte, Byte]
Returns a Pipe that inflates (decompresses) its input elements using a java.util.zip.Inflater with the parameter nowrap.

Value Params:
- bufferSize: size of the internal buffer that is used by the decompressor. Default size is 32 KB.
- nowrap: if true then support GZIP compatible decompression
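Since inflate is the inverse of deflate, a round trip through the underlying java.util.zip.Deflater and java.util.zip.Inflater should restore the original bytes. A plain-Scala sketch (no fs2; InflateDemo is a hypothetical name, and the default zlib framing, i.e. nowrap = false, is used):

```scala
import java.io.ByteArrayOutputStream
import java.util.zip.{Deflater, Inflater}

object InflateDemo {
  // Compress with a default Deflater (zlib-wrapped, nowrap = false).
  def compress(input: Array[Byte]): Array[Byte] = {
    val deflater = new Deflater()
    deflater.setInput(input)
    deflater.finish()
    val out = new ByteArrayOutputStream()
    val buf = new Array[Byte](32 * 1024)
    while (!deflater.finished()) out.write(buf, 0, deflater.deflate(buf))
    deflater.end()
    out.toByteArray
  }

  // Decompress with a matching Inflater (nowrap = false: expects the zlib header).
  def decompress(compressed: Array[Byte]): Array[Byte] = {
    val inflater = new Inflater()
    inflater.setInput(compressed)
    val out = new ByteArrayOutputStream()
    val buf = new Array[Byte](32 * 1024)
    while (!inflater.finished()) out.write(buf, 0, inflater.inflate(buf))
    inflater.end()
    out.toByteArray
  }

  def main(args: Array[String]): Unit = {
    val msg = "inflate restores exactly what deflate produced".getBytes("UTF-8")
    val restored = decompress(compress(msg))
    assert(java.util.Arrays.equals(restored, msg))
    println("round trip ok")
  }
}
```

The nowrap flags on both sides must agree: a stream deflated with nowrap = true cannot be inflated with nowrap = false, and vice versa.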
def gzip[F[_]](bufferSize: Int): Pipe[F, Byte, Byte]
Returns a pipe that incrementally compresses input into the GZIP format by delegating to java.util.zip.GZIPOutputStream. Output is compatible with the GNU utils gunzip utility, as well as really anything else that understands GZIP. Note, however, that the GZIP format is not "stable" in the sense that all compressors will produce identical output given identical input. Part of the header seeding is arbitrary and chosen by the compression implementation. For this reason, the exact bytes produced by this pipe will differ in insignificant ways from the exact bytes produced by a tool like the GNU utils gzip.

Value Params:
- bufferSize: The buffer size which will be used to page data from the OutputStream back into chunks. This will be the chunk size of the output stream. You should set it to be equal to the size of the largest chunk in the input stream. Setting this to a size which is smaller than the chunks in the input stream will result in performance degradation of roughly 50-75%.
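The "not stable" caveat is easy to see against the underlying java.util.zip.GZIPOutputStream the pipe delegates to: the exact header bytes (e.g. the OS field and modification time) vary by implementation, but the two-byte GZIP magic number is fixed by the format. A plain-Scala sketch (GzipDemo is a hypothetical name):

```scala
import java.io.ByteArrayOutputStream
import java.util.zip.GZIPOutputStream

object GzipDemo {
  // Compress via GZIPOutputStream, as the gzip pipe does internally.
  // The second constructor argument is the internal buffer size.
  def gzipCompress(input: Array[Byte]): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val gz = new GZIPOutputStream(bos, 32 * 1024)
    gz.write(input)
    gz.close() // flushes remaining data and writes the GZIP trailer
    bos.toByteArray
  }

  def main(args: Array[String]): Unit = {
    val out = gzipCompress("gzip output varies by implementation".getBytes("UTF-8"))
    // The GZIP magic number is fixed by the format: 0x1f 0x8b.
    assert((out(0) & 0xff) == 0x1f && (out(1) & 0xff) == 0x8b)
    println(s"gzip output: ${out.length} bytes")
  }
}
```

Any GZIP consumer checks only those magic bytes and the framing, which is why byte-for-byte differences between this pipe and the GNU gzip tool are insignificant in practice.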
def gunzip[F[_]](bufferSize: Int)(ev: RaiseThrowable[F]): Pipe[F, Byte, Byte]
Returns a pipe that incrementally decompresses input according to the GZIP format. Any errors in decompression will be sequenced as exceptions into the output stream. The implementation of this pipe delegates directly to GZIPInputStream. Despite this, decompression is still handled in a streaming and async fashion without any thread blockage. Under the surface, this is handled by enqueueing chunks into a special type of byte array InputStream which throws exceptions when exhausted rather than blocking. These signal exceptions are caught by the pipe and treated as an async suspension. Thus, there are no issues with arbitrarily-framed data and chunk boundaries. Also note that there is almost no performance impact from these exceptions, due to the way that the JVM handles throw/catch.

The chunk size here is actually really important. If you set it to be too small, then there will be insufficient buffer space for GZIPInputStream to read the GZIP header preamble. This can result in repeated, non-progressing async suspensions. This case is caught internally and will be raised as an exception (NonProgressiveDecompressionException) within the output stream. Under normal circumstances, you shouldn't have to worry about this. Just, uh, don't set the buffer size to something tiny. Matching the input stream largest chunk size, or roughly 8 KB (whichever is larger) is a good rule of thumb.

Value Params:
- bufferSize: The bounding size of the input buffer. This should roughly match the size of the largest chunk in the input stream. The chunk size in the output stream will be determined by double this value.
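The delegation target can be sketched in plain Scala (no fs2; GunzipDemo is a hypothetical name). Note that with an ordinary blocking InputStream, as here, a small bufferSize merely costs performance; the non-progressing case described above arises only in the pipe's non-blocking setup:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import java.util.zip.{GZIPInputStream, GZIPOutputStream}

object GunzipDemo {
  // Decompress via GZIPInputStream, as the gunzip pipe does internally.
  // bufferSize is passed straight to the GZIPInputStream constructor.
  def gunzip(compressed: Array[Byte], bufferSize: Int): Array[Byte] = {
    val in = new GZIPInputStream(new ByteArrayInputStream(compressed), bufferSize)
    val out = new ByteArrayOutputStream()
    val buf = new Array[Byte](bufferSize)
    var n = in.read(buf)
    while (n != -1) {
      out.write(buf, 0, n)
      n = in.read(buf)
    }
    in.close()
    out.toByteArray
  }

  def main(args: Array[String]): Unit = {
    val original = "decompressed via GZIPInputStream".getBytes("UTF-8")
    // Produce a GZIP payload to round-trip.
    val bos = new ByteArrayOutputStream()
    val gz = new GZIPOutputStream(bos)
    gz.write(original)
    gz.close()
    val restored = gunzip(bos.toByteArray, bufferSize = 8 * 1024)
    assert(java.util.Arrays.equals(restored, original))
    println("gunzip ok")
  }
}
```

The 8 KB buffer here matches the rule of thumb above: at least the largest input chunk, or roughly 8 KB, whichever is larger.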