Age | Commit message | Author |
|
This previously caused an infinite loop, since the decompressor would
just keep asking the underlying source for more data.
In practice this happened because an HTTP server served a
response to a HEAD request (for which curl will not retrieve any body or
call our write callback function) with Content-Encoding: br, so we ended
up decompressing nothing at all and looping forever.
This adds a test to make sure none of our compression methods do that
again, as well as patching the HTTP client to never feed empty data
into a compression algorithm (they absolutely have the right to
throw CompressionError on unexpectedly short streams!).
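A minimal sketch of the client-side guard, assuming illustrative names
(`encoding`, `body`, and a `decompress(method, data)` helper are not
the actual identifiers):
    // Do not hand an empty body to a decompressor: for a HEAD request
    // (or any bodyless response) there is simply nothing to decode.
    if (encoding.empty() || body.empty())
        return std::move(body);
    return decompress(encoding, body);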
Reported on Matrix: https://matrix.to/#/!lymvtcwDJ7ZA9Npq:lix.systems/$8BWQR_zKxCQDJ40C5NnDo4bQPId3pZ_aoDj2ANP7Itc?via=lix.systems&via=matrix.org&via=tchncs.de
Change-Id: I027566e280f0f569fdb8df40e5ecbf46c211dad1
|
|
The lint did it :3
Change-Id: I2d9f276b01ebbf14101de4257ea13e44ff6fe0a0
|
|
This:
- Converts a bunch of C-style casts into C++ casts.
- Removes some very silly pointer subtraction code (which is no more or
  less busted on i686 than it was before).
- Fixes some "technically UB" that never had to be UB in the first
  place.
- Makes Finally follow the noexcept status of the inner function (see
  the sketch below). Maybe in the future we should ban the wrapped
  function from not being noexcept, but that is not today.
- Makes various locally-used exceptions inherit from std::exception.
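A sketch of the Finally change, illustrative shape only and not the
verbatim class: the destructor's noexcept specification simply mirrors
that of the stored callable.
    #include <utility>

    template<typename Fn>
    class Finally
    {
        Fn fn;
    public:
        explicit Finally(Fn fn) : fn(std::move(fn)) {}
        Finally(const Finally &) = delete;
        // May throw only if calling the wrapped function may throw;
        // otherwise the destructor stays noexcept as before.
        ~Finally() noexcept(noexcept(fn())) { fn(); }
    };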
Change-Id: I22e66972602604989b5e494fd940b93e0e6e9297
|
|
The sole remaining user of this function can use makeDecompressionSource
instead, which also makes the sinkToSource in the caller unnecessary.
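Roughly the shape of the change at the call site (function names are
taken from the message above; the exact signatures and the old helper
name are assumptions):
    // previously: round-trip through sinkToSource
    // (`decompressTo` is a hypothetical name for the old sink-based API)
    //
    //     auto source = sinkToSource([&](Sink & sink) {
    //         decompressTo(method, inner, sink);
    //     });
    //
    // now: stream straight from the inner source
    auto source = makeDecompressionSource(method, inner);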
Change-Id: I4258227b5dbbb735a75b477d8a57007bfca305e9
|
|
If we've consumed the entire input, that doesn't actually mean we're
done decompressing - there might be more output left. This worked (?)
in most cases because the input and output sizes are pretty comparable,
but sometimes they're not and then things get very funny.
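As an illustration of the loop shape this implies, here is a standalone
sketch (brotli used purely as an example; this is not the project's
decompression source):
    #include <brotli/decode.h>
    #include <stdexcept>
    #include <string>

    std::string brotliDecompress(const std::string & in)
    {
        auto * state = BrotliDecoderCreateInstance(nullptr, nullptr, nullptr);
        if (!state) throw std::runtime_error("brotli init failed");
        const uint8_t * next_in = reinterpret_cast<const uint8_t *>(in.data());
        size_t avail_in = in.size();
        uint8_t buf[64 * 1024];
        std::string out;
        // Loop until the *decoder* says it is finished; `avail_in == 0`
        // is not a termination condition, because output may still be
        // pending after all input has been consumed.
        while (!BrotliDecoderIsFinished(state)) {
            uint8_t * next_out = buf;
            size_t avail_out = sizeof(buf);
            auto res = BrotliDecoderDecompressStream(
                state, &avail_in, &next_in, &avail_out, &next_out, nullptr);
            out.append(reinterpret_cast<char *>(buf), sizeof(buf) - avail_out);
            if (res == BROTLI_DECODER_RESULT_ERROR
                || (res == BROTLI_DECODER_RESULT_NEEDS_MORE_INPUT && avail_in == 0)) {
                BrotliDecoderDestroyInstance(state);
                throw std::runtime_error("corrupt or truncated brotli stream");
            }
        }
        BrotliDecoderDestroyInstance(state);
        return out;
    }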
Change-Id: I73435a654a911b8ce25119f713b80706c5783c1b
|
|
Change-Id: Iac7f24d79e24417436b9b5cbefd6af051aeea0a6
|
|
Change-Id: I9579dd08f7bd0f927bde9d3128515b0cee15f320
|
|
Change-Id: Ic1f68e6af658e94ef7922841dd3ad4c69551ef56
|
|
Copies part of the changes of ac89bb064aeea85a62b82a6daf0ecca7190a28b7
Change-Id: I9ce601875cd6d4db5eb1132d7835c5bab9f126d8
|
|
The `write` name is ambiguous and could lead to some funny bugs like
https://github.com/NixOS/nix/pull/8173#issuecomment-1500009480. So
rename it to the more explicit `writeUnbuffered`.
Besides, this method shouldn't be (and isn't) used outside of the class
implementation, so mark it `protected`.
This makes it more symmetrical to `BufferedSource`, which uses a
protected `readUnbuffered` method.
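The resulting shape, as a sketch (the real class has more to it, and
the exact signatures here are assumptions):
    struct BufferedSink : virtual Sink
    {
        // Public entry point: buffers `data` and flushes full buffers
        // through writeUnbuffered().
        void operator () (std::string_view data) override;

    protected:
        // The actual unbuffered write, implemented by subclasses; no
        // longer callable (or shadowable by accident) from outside.
        virtual void writeUnbuffered(std::string_view data) = 0;
    };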
|
|
These were needed back in the pre-C++11 era because we didn't have
move semantics. But now we do.
|
|
Based on @dtzWill's #2276
|
|
* libstore: `bz2` should not be linked
* libutil: `zlib.h` should not be included
Signed-off-by: Pamplemousse <xav.maso@gmail.com>
|
|
This function doesn't support all compression methods (it is missing
'none' and 'br'), so it shouldn't be exposed.
Also restore the original decompress() as a wrapper around
makeDecompressionSink().
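A sketch of what such a wrapper looks like (StringSink and the exact
signatures are assumptions, not the verbatim code):
    std::string decompress(const std::string & method, std::string_view in)
    {
        StringSink ssink;
        auto sink = makeDecompressionSink(method, ssink);
        (*sink)(in);
        sink->finish();
        return std::move(ssink.s);
    }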
|
|
The S3 store relies on being able to decompress things with an empty
method, because it passes the value of the Content-Encoding header
directly to decompress().
If the file is not compressed, this confuses the decompression routine.
This caused NixOS/nixpkgs#120120.
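A hedged sketch of the required behaviour (names are illustrative;
decompress() as referenced above):
    // An empty or "none" Content-Encoding means the object was stored
    // uncompressed, so pass the body through instead of treating the
    // method as unknown.
    std::string decode(const std::string & contentEncoding, std::string && body)
    {
        if (contentEncoding.empty() || contentEncoding == "none")
            return std::move(body);
        return decompress(contentEncoding, body);   // e.g. "xz", "br"
    }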
|
|
This has been broken since faa31f40846f7a4dbc2487d000b112a6aef69d1b.
|
|
|
Closes #3256
|
|
This didn't work anymore since decompression was only done in the
non-coroutine case.
Decompressors are now sinks, just like compressors.
Also fixed a bug in bzip2 API handling (we have to handle BZ_RUN_OK
rather than BZ_OK), which we didn't notice because there was a missing
'throw':
if (ret != BZ_OK)
CompressionError("error while compressing bzip2 file");
|
|
Bzip2's 'avail_in' parameter is declared as an unsigned int, so
assigning a size_t length to it led to silent truncation.
Fixes #2111.
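A sketch of the corresponding fix, feeding bzip2 chunks that fit into
an unsigned int (the surrounding compression-sink context is assumed:
`bz_stream`, `Sink` and `CompressionError` come from that code, and the
chunk handling shown here is illustrative):
    void feed(bz_stream & strm, std::string_view data, Sink & sink)
    {
        char outbuf[64 * 1024];
        while (!data.empty()) {
            // Never assign a size_t length to avail_in directly.
            size_t chunk = std::min<size_t>(
                data.size(), std::numeric_limits<unsigned int>::max());
            strm.next_in = const_cast<char *>(data.data());
            strm.avail_in = static_cast<unsigned int>(chunk);
            while (strm.avail_in) {
                strm.next_out = outbuf;
                strm.avail_out = sizeof(outbuf);
                if (BZ2_bzCompress(&strm, BZ_RUN) != BZ_RUN_OK)
                    throw CompressionError("error while compressing bzip2 file");
                sink({outbuf, sizeof(outbuf) - strm.avail_out});
            }
            data.remove_prefix(chunk);
        }
    }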
|
|
This allows decompression to happen in O(1) memory.
|
|
by default.
|
|
some comments about possible improvements wrt memory usage/threading.
|
|
the case of Hydra, where the overhead of single-threaded encoding is
more noticeable: most of the time spent in "Sending inputs"/"Receiving
outputs" is due to compression, while the actual upload to the binary
cache seems to be negligible.
|
|
* Look for both 'brotli' and 'bro' as the external command, since
  upstream has renamed it in newer versions. If neither is found, the
  current runtime behavior is preserved: try to find 'bro' on PATH.
* Limit the amount handed to BrotliEncoderCompressStream so that
  interrupts are processed in a timely manner (see the sketch below).
  Testing shows negligible performance impact. (Other compression
  sinks don't seem to require this.)
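A sketch of the bounded hand-off; checkInterrupt() is the assumed
interrupt hook, and the 128 KiB bound plus the surrounding names
(`state`, `data`, `nextSink`) are illustrative:
    const size_t maxChunk = 128 * 1024;
    const uint8_t * next_in = reinterpret_cast<const uint8_t *>(data.data());
    size_t left = data.size();
    uint8_t outbuf[64 * 1024];
    while (left > 0) {
        checkInterrupt();                       // stay responsive on huge inputs
        size_t avail_in = std::min(left, maxChunk);
        size_t given = avail_in;
        uint8_t * next_out = outbuf;
        size_t avail_out = sizeof(outbuf);
        if (!BrotliEncoderCompressStream(state, BROTLI_OPERATION_PROCESS,
                &avail_in, &next_in, &avail_out, &next_out, nullptr))
            throw CompressionError("error while compressing brotli file");
        left -= given - avail_in;               // bytes the encoder consumed
        nextSink({reinterpret_cast<char *>(outbuf), sizeof(outbuf) - avail_out});
        // (finishing with BROTLI_OPERATION_FINISH is handled separately)
    }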
|
|
Relevant RFC: NixOS/rfcs#4
$ ag -l | xargs sed -i -e "/\"/s/’/'/g;/\"/s/‘/'/g"
|
|
Fixes #1285.
|
|
For example, if we call brotli with an empty input, it shouldn't read
from the caller's stdin.
|
|
You can now set the store parameter "text-compression=br" to compress
textual files in the binary cache (i.e. narinfo and logs) using
Brotli. This sets the Content-Encoding header; the extension of
compressed files is unchanged.
You can separately specify the compression of log files using
"log-compression=br". This is useful when you don't want to compress
narinfo files for backward compatibility.
|
|
nix: src/libutil/compression.cc:142: virtual nix::XzSink::~XzSink(): Assertion `finished' failed.
|
|
Build logs on cache.nixos.org are compressed using Brotli (since this
allows them to be decompressed automatically by Chrome and Firefox),
so it's handy if "nix log" can decompress them.
|
|
This reverts commit f78126bfd6b6c8477fcdbc09b2f98772dbe9a1e7. There
really is no need for such a massive change...
|
|
As a side effect, this ensures that signatures are propagated when
copying paths between stores.
Also refactored import/export to make use of this.
|
|
|