Age | Commit message | Author |
|
There was a bug report about a potential call to `memcpy` with a null
pointer which is not reproducible:
https://git.lix.systems/lix-project/lix/issues/492
This occurred in `src/libstore/filetransfer.cc` in `InnerSource::read`.
To ensure that this doesn't happen, an early return is added before
calling `memcpy` if the length of the data to be copied is 0.
This change also adds a test that ensures that when `InnerSource::read`
is called with an empty file, it throws an `EndOfFile` exception.
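A minimal sketch of the shape of the fix (illustrative names, not the actual Lix code):
```cpp
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <stdexcept>
#include <string>

// Illustrative stand-ins, not the real Lix types.
struct EndOfFile : std::runtime_error { using std::runtime_error::runtime_error; };

struct InnerSourceSketch {
    std::string buffered; // data received from the transfer so far

    size_t read(char * data, size_t len)
    {
        if (buffered.empty())
            throw EndOfFile("transfer finished"); // empty file: nothing left to read

        size_t n = std::min(len, buffered.size());
        if (n == 0)
            return 0; // early return: never reach memcpy with a length of 0

        std::memcpy(data, buffered.data(), n);
        buffered.erase(0, n);
        return n;
    }
};
```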
Change-Id: Ia18149bee9a3488576c864f28475a3a0c9eadfbb
|
|
This splits `ignoreException` into `ignoreExceptionExceptInterrupt`
(which ignores all exceptions except `Interrupt`, which indicates a
SIGINT/CTRL-C) and `ignoreExceptionInDestructor` (which ignores all
exceptions, so that destructors do not throw exceptions).
This prevents many cases where Nix ignores CTRL-C entirely.
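Roughly the shape of the two helpers (simplified sketch; the real ones live in Lix's error handling code):
```cpp
#include <exception>
#include <iostream>

// Hypothetical stand-in for the exception thrown on SIGINT.
struct Interrupted : std::exception {};

// Both helpers are meant to be called from inside a catch block.

// Ignores everything except Interrupted, so CTRL-C keeps propagating.
inline void ignoreExceptionExceptInterrupt()
{
    try {
        throw;
    } catch (const Interrupted &) {
        throw;
    } catch (const std::exception & e) {
        std::cerr << "error (ignored): " << e.what() << "\n";
    } catch (...) {
        std::cerr << "unknown error (ignored)\n";
    }
}

// Ignores everything, for use in destructors only: letting an exception
// escape a destructor during unwinding would terminate the process.
inline void ignoreExceptionInDestructor() noexcept
{
    try {
        try {
            throw;
        } catch (const std::exception & e) {
            std::cerr << "error (ignored): " << e.what() << "\n";
        } catch (...) {
            std::cerr << "unknown error (ignored)\n";
        }
    } catch (...) {
        // even the logging must not be allowed to throw here
    }
}
```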
See: https://github.com/NixOS/nix/issues/7245
Upstream-PR: https://github.com/NixOS/nix/pull/11618
Change-Id: Ie7d2467eedbe840d1b9fa2e88a4e88e4ab26a87b
|
|
This caused an infinite loop before since it would just keep asking the
underlying source for more data.
In practice this happened because an HTTP server served a
response to a HEAD request (for which curl will not retrieve any body or
call our write callback function) with Content-Encoding: br, leading to
decompressing nothing at all and going into an infinite loop.
This adds a test to make sure none of our compression methods do that
again, as well as just patching the HTTP client to never feed empty data
into a compression algorithm (since they absolutely have the right to
throw CompressionError on unexpectedly-short streams!).
Reported on Matrix: https://matrix.to/#/!lymvtcwDJ7ZA9Npq:lix.systems/$8BWQR_zKxCQDJ40C5NnDo4bQPId3pZ_aoDj2ANP7Itc?via=lix.systems&via=matrix.org&via=tchncs.de
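The guard itself is tiny; a sketch with placeholder types rather than the real sink interfaces:
```cpp
#include <string_view>

// Placeholder for the real decompression sink, which may rightfully throw
// CompressionError on an unexpectedly short or empty stream.
struct DecompressionSinkSketch { void operator()(std::string_view) {} };

struct TransferSinkSketch {
    DecompressionSinkSketch decompress;

    void writeBody(std::string_view chunk)
    {
        // A HEAD response with Content-Encoding: br delivers no body at all;
        // feeding that "nothing" to the decoder is what used to loop forever.
        if (chunk.empty())
            return;
        decompress(chunk);
    }
};
```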
Change-Id: I027566e280f0f569fdb8df40e5ecbf46c211dad1
|
|
This didn't really feel so worth it afterwards, but I did untangle a
bunch of stuff that should not have been tangled.
The general gist of this change is that variant bullshit was causing a
bunch of compile time, and it seems like the only way to deal with
variant induced compile time is to keep variant types out of headers.
Explicit template instantiation seems to do nothing for them.
I also seem to have gotten some back-end time improvement from
explicitly instantiating regex, but I don't know why. There is no
corresponding front-end time improvement from it: regex is still at the
top of the sinners list.
**** Templates that took longest to instantiate:
15231 ms: std::basic_regex<char>::_M_compile (28 times, avg 543 ms)
15066 ms: std::__detail::_Compiler<std::regex_traits<char>>::_Compiler (28 times, avg 538 ms)
12571 ms: std::__detail::_Compiler<std::regex_traits<char>>::_M_disjunction (28 times, avg 448 ms)
12454 ms: std::__detail::_Compiler<std::regex_traits<char>>::_M_alternative (28 times, avg 444 ms)
12225 ms: std::__detail::_Compiler<std::regex_traits<char>>::_M_term (28 times, avg 436 ms)
11363 ms: nlohmann::basic_json<>::parse<const char *> (21 times, avg 541 ms)
10628 ms: nlohmann::basic_json<>::basic_json (109 times, avg 97 ms)
10134 ms: std::__detail::_Compiler<std::regex_traits<char>>::_M_atom (28 times, avg 361 ms)
Back-end time before messing with the regex:
**** Function sets that took longest to compile / optimize:
8076 ms: void boost::io::detail::put<$>(boost::io::detail::put_holder<$> cons... (177 times, avg 45 ms)
4382 ms: std::_Rb_tree<$>::_M_erase(std::_Rb_tree_node<$>*) (1247 times, avg 3 ms)
3137 ms: boost::stacktrace::detail::to_string_impl_base<boost::stacktrace::de... (137 times, avg 22 ms)
2896 ms: void boost::io::detail::mk_str<$>(std::__cxx11::basic_string<$>&, ch... (177 times, avg 16 ms)
2304 ms: std::_Rb_tree<$>::_M_get_insert_hint_unique_pos(std::_Rb_tree_const_... (210 times, avg 10 ms)
2116 ms: bool std::__detail::_Compiler<$>::_M_expression_term<$>(std::__detai... (112 times, avg 18 ms)
2051 ms: std::_Rb_tree_iterator<$> std::_Rb_tree<$>::_M_emplace_hint_unique<$... (244 times, avg 8 ms)
2037 ms: toml::result<$> toml::detail::sequence<$>::invoke<$>(toml::detail::l... (93 times, avg 21 ms)
1928 ms: std::__detail::_Compiler<$>::_M_quantifier() (28 times, avg 68 ms)
1859 ms: nlohmann::json_abi_v3_11_3::detail::serializer<$>::dump(nlohmann::js... (41 times, avg 45 ms)
1824 ms: std::_Function_handler<$>::_M_manager(std::_Any_data&, std::_Any_dat... (973 times, avg 1 ms)
1810 ms: std::__detail::_BracketMatcher<$>::_BracketMatcher(std::__detail::_B... (112 times, avg 16 ms)
1793 ms: nix::fetchers::GitInputScheme::fetch(nix::ref<$>, nix::fetchers::Inp... (1 times, avg 1793 ms)
1759 ms: std::_Rb_tree<$>::_M_get_insert_unique_pos(std::__cxx11::basic_strin... (281 times, avg 6 ms)
1722 ms: bool nlohmann::json_abi_v3_11_3::detail::parser<$>::sax_parse_intern... (19 times, avg 90 ms)
1677 ms: boost::io::basic_altstringbuf<$>::overflow(int) (194 times, avg 8 ms)
1674 ms: std::__cxx11::basic_string<$>::_M_mutate(unsigned long, unsigned lon... (249 times, avg 6 ms)
1660 ms: std::_Rb_tree_node<$>* std::_Rb_tree<$>::_M_copy<$>(std::_Rb_tree_no... (304 times, avg 5 ms)
1599 ms: bool nlohmann::json_abi_v3_11_3::detail::parser<$>::sax_parse_intern... (19 times, avg 84 ms)
1568 ms: void std::__detail::_Compiler<$>::_M_insert_bracket_matcher<$>(bool) (112 times, avg 14 ms)
1541 ms: std::__shared_ptr<$>::~__shared_ptr() (531 times, avg 2 ms)
1539 ms: nlohmann::json_abi_v3_11_3::detail::serializer<$>::dump_escaped(std:... (41 times, avg 37 ms)
1471 ms: void std::__detail::_Compiler<$>::_M_insert_character_class_matcher<... (112 times, avg 13 ms)
After messing with the regex (notice std::__detail::_Compiler vanishes
here, but I don't know why):
**** Function sets that took longest to compile / optimize:
8054 ms: void boost::io::detail::put<$>(boost::io::detail::put_holder<$> cons... (177 times, avg 45 ms)
4313 ms: std::_Rb_tree<$>::_M_erase(std::_Rb_tree_node<$>*) (1217 times, avg 3 ms)
3259 ms: boost::stacktrace::detail::to_string_impl_base<boost::stacktrace::de... (137 times, avg 23 ms)
3045 ms: void boost::io::detail::mk_str<$>(std::__cxx11::basic_string<$>&, ch... (177 times, avg 17 ms)
2314 ms: std::_Rb_tree<$>::_M_get_insert_hint_unique_pos(std::_Rb_tree_const_... (207 times, avg 11 ms)
1923 ms: std::_Rb_tree_iterator<$> std::_Rb_tree<$>::_M_emplace_hint_unique<$... (216 times, avg 8 ms)
1817 ms: bool nlohmann::json_abi_v3_11_3::detail::parser<$>::sax_parse_intern... (18 times, avg 100 ms)
1816 ms: toml::result<$> toml::detail::sequence<$>::invoke<$>(toml::detail::l... (93 times, avg 19 ms)
1788 ms: nlohmann::json_abi_v3_11_3::detail::serializer<$>::dump(nlohmann::js... (40 times, avg 44 ms)
1749 ms: std::_Rb_tree<$>::_M_get_insert_unique_pos(std::__cxx11::basic_strin... (278 times, avg 6 ms)
1724 ms: std::__cxx11::basic_string<$>::_M_mutate(unsigned long, unsigned lon... (248 times, avg 6 ms)
1697 ms: boost::io::basic_altstringbuf<$>::overflow(int) (194 times, avg 8 ms)
1684 ms: nix::fetchers::GitInputScheme::fetch(nix::ref<$>, nix::fetchers::Inp... (1 times, avg 1684 ms)
1680 ms: std::_Rb_tree_node<$>* std::_Rb_tree<$>::_M_copy<$>(std::_Rb_tree_no... (303 times, avg 5 ms)
1589 ms: bool nlohmann::json_abi_v3_11_3::detail::parser<$>::sax_parse_intern... (18 times, avg 88 ms)
1483 ms: non-virtual thunk to boost::wrapexcept<$>::~wrapexcept() (181 times, avg 8 ms)
1447 ms: nlohmann::json_abi_v3_11_3::detail::serializer<$>::dump_escaped(std:... (40 times, avg 36 ms)
1441 ms: std::__shared_ptr<$>::~__shared_ptr() (496 times, avg 2 ms)
1420 ms: boost::stacktrace::basic_stacktrace<$>::init(unsigned long, unsigned... (137 times, avg 10 ms)
1396 ms: boost::basic_format<$>::~basic_format() (194 times, avg 7 ms)
1290 ms: std::__cxx11::basic_string<$>::_M_replace_cold(char*, unsigned long,... (231 times, avg 5 ms)
1258 ms: std::vector<$>::~vector() (354 times, avg 3 ms)
1222 ms: std::__cxx11::basic_string<$>::_M_replace(unsigned long, unsigned lo... (231 times, avg 5 ms)
1194 ms: std::_Rb_tree<$>::_M_get_insert_hint_unique_pos(std::_Rb_tree_const_... (49 times, avg 24 ms)
1186 ms: bool tao::pegtl::internal::sor<$>::match<$>(std::integer_sequence<$>... (1 times, avg 1186 ms)
1149 ms: std::__detail::_Executor<$>::_M_dfs(std::__detail::_Executor<$>::_Ma... (70 times, avg 16 ms)
1123 ms: toml::detail::sequence<$>::invoke(toml::detail::location&) (69 times, avg 16 ms)
1110 ms: nlohmann::json_abi_v3_11_3::basic_json<$>::json_value::destroy(nlohm... (55 times, avg 20 ms)
1079 ms: std::_Function_handler<$>::_M_manager(std::_Any_data&, std::_Any_dat... (541 times, avg 1 ms)
1033 ms: nlohmann::json_abi_v3_11_3::detail::lexer<$>::scan_number() (20 times, avg 51 ms)
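For reference, the "keep variant types out of headers" pattern looks roughly like this (illustrative types, not the actual Lix ones):
```cpp
// parser-result.hh -- the header exposes only an opaque handle, no <variant>
#pragma once
#include <memory>
#include <string>

class ParseResult {
public:
    ParseResult(std::string value);
    ParseResult(int error);
    ~ParseResult();
    bool ok() const;
private:
    struct Impl;                // holds the std::variant
    std::unique_ptr<Impl> impl; // only the .cc ever sees the variant type
};

// parser-result.cc -- the variant (and its compile-time cost) stays here
#include <variant>

struct ParseResult::Impl {
    std::variant<std::string, int> v;
};

ParseResult::ParseResult(std::string value) : impl(new Impl{std::move(value)}) {}
ParseResult::ParseResult(int error) : impl(new Impl{error}) {}
ParseResult::~ParseResult() = default;
bool ParseResult::ok() const { return std::holds_alternative<std::string>(impl->v); }
```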
Change-Id: I10af282bcd4fc39c2d3caae3453e599e4639c70b
|
|
This:
- Converts a bunch of C style casts into C++ casts.
- Removes some very silly pointer subtraction code (which is no more or
less busted on i686 than it began)
- Fixes some "technically UB" that never had to be UB in the first
place.
- Makes finally follow the noexcept status of the inner function (see the
sketch after this list). Maybe in the future we should require the inner
function to be noexcept, but that is not today.
- Makes various locally-used exceptions inherit from std::exception.
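A sketch of the Finally part of this (simplified, not the real class):
```cpp
#include <utility>

// The cleanup guard's destructor is only noexcept if the wrapped callable is,
// so an exception from a throwing cleanup can propagate instead of terminating.
template<typename Fn>
class FinallySketch {
    Fn fn;
public:
    explicit FinallySketch(Fn fn) : fn(std::move(fn)) {}
    FinallySketch(const FinallySketch &) = delete;
    ~FinallySketch() noexcept(noexcept(std::declval<Fn &>()())) { fn(); }
};

// usage: the lambda is noexcept, so the destructor is too
// FinallySketch releaseLock([]() noexcept { /* unlock */ });
```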
Change-Id: I22e66972602604989b5e494fd940b93e0e6e9297
|
|
This was introduced in I0fc80718eb7e02d84cc4b5d5deec4c0f41116134 and went
unnoticed since it only appears in gcc builds.
Change-Id: I1de80ce2a8fab63efdca7ca0de2a302ceb118267
|
|
Change-Id: I0fc80718eb7e02d84cc4b5d5deec4c0f41116134
|
|
without this we will not be able to get rid of makeDecompressionSink,
which in turn will be necessary to get rid of sourceToSink (since the
libarchive archive wrapper *must* be a Source due to api limitations)
Change-Id: Iccd3d333ba2cbcab49cb5a1d3125624de16bce27
|
|
even the transfer function is not all that necessary since there aren't
that many users, but we'll keep it for now. we could've kept both names
but we also kind of want to use `download` for something else very soon
Change-Id: I005e403ee59de433e139e37aa2045c26a523ccbf
|
|
Fixes a compiler error that looks like:
error: could not convert '[...]' from 'future<void>' to 'future<nix::FileTransferResult>'
Change-Id: I4aeadfeba0dadfdf133f25e6abce90ede7a86ca6
|
|
foo.
Change-Id: I7d7db22f68046d2ecf3b594b4ee6fd9c9dac4be1
|
|
* changes:
util.hh: Delete remaining file and clean up headers
util.hh: Move nativeSystem to local-derivation-goal.cc
util.hh: Move stuff to types.hh
util.cc: Delete remaining file
util.{hh,cc}: Move ignoreException to error.{hh,cc}
util.{hh,cc}: Split out namespaces.{hh,cc}
util.{hh,cc}: Split out users.{hh,cc}
util.{hh,cc}: Split out strings.{hh,cc}
util.{hh,cc}: Split out unix-domain-socket.{hh,cc}
util.{hh,cc}: Split out child.{hh,cc}
util.{hh,cc}: Split out current-process.{hh,cc}
util.{hh,cc}: Split out processes.{hh,cc}
util.{hh,cc}: Split out file-descriptor.{hh,cc}
util.{hh,cc}: Split out file-system.{hh,cc}
util.{hh,cc}: Split out terminal.{hh,cc}
util.{hh,cc}: Split out environment-variables.{hh,cc}
|
|
while refactoring the curl wrapper we inadvertently broke the immutable
flake protocol, because the immutable flake protocol accumulates headers
across the entire redirect chain instead of using only the headers given
in the final response of the chain. this is a problem because Some Known
Providers Of Flake Infrastructure set rel=immutable link headers only in
the penultimate entry of the redirect chain, and curl does not regard it
as worth returning to us via its response header enumeration mechanisms.
fixes https://git.lix.systems/lix-project/lix/issues/358
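one way to sketch the fix: collect headers ourselves from every response in the chain via a header callback instead of relying on curl's enumeration (illustrative only; the real parsing is more careful):
```cpp
#include <curl/curl.h>
#include <map>
#include <string>

// Accumulates headers from *all* responses in the redirect chain, so a
// rel=immutable Link header on an intermediate 3xx response is not lost.
struct HeaderBag {
    std::multimap<std::string, std::string> all; // never reset between responses
};

static size_t headerCallbackSketch(char * buffer, size_t size, size_t nitems, void * userdata)
{
    auto & bag = *static_cast<HeaderBag *>(userdata);
    std::string line(buffer, size * nitems);
    if (auto colon = line.find(':'); colon != std::string::npos)
        bag.all.emplace(line.substr(0, colon), line.substr(colon + 1));
    return size * nitems;
}

// install with:
//   curl_easy_setopt(handle, CURLOPT_HEADERFUNCTION, headerCallbackSketch);
//   curl_easy_setopt(handle, CURLOPT_HEADERDATA, &bag);
```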
Change-Id: I645c3932b465cde848bd6a3565925a1e3cbcdda0
|
|
Change-Id: Ic1f68e6af658e94ef7922841dd3ad4c69551ef56
|
|
Change-Id: I8fd3f3b50c15ede29d489066b4e8d99c2c4636a6
|
|
121edecf654ec084274ba1a779c7140082f4115d added a new state field to
carry over content encoding settings from transfer to sink creation, but
never actually set that field.
Change-Id: I714b2efe745561e851b78a4791479b3501db8c72
|
|
it's no longer used. it really shouldn't have existed this long since it
was just a mashup of both std::promise and std::packaged_task in a shape
that makes composition unnecessarily difficult. all but a single case of
Callback pattern calls were fully synchronous anyway, and even this sole
outlier was by far not important enough to justify the extra complexity.
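the replacement pattern, roughly (names are illustrative):
```cpp
#include <future>
#include <memory>
#include <string>

struct ResultSketch { std::string data; };

// Instead of Callback<ResultSketch>, hand the caller a std::future and keep
// the matching std::promise on the worker side.
std::future<ResultSketch> enqueueSketch()
{
    auto promise = std::make_shared<std::promise<ResultSketch>>();
    auto future = promise->get_future();

    // worker side, later:
    //   promise->set_value(ResultSketch{std::move(body)});
    // or on failure:
    //   promise->set_exception(std::current_exception());

    return future;
}

// caller side: auto result = enqueueSketch().get(); // rethrows any stored exception
```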
Change-Id: I208aec4572bf2501cdbd0f331f27d505fca3a62f
|
|
also add a few more tests for exception propagation behavior. using
packaged_tasks and futures (which only allow a single call to a few
of their methods) introduces error paths that weren't there before.
Change-Id: I42ca5236f156fefec17df972f6e9be45989cf805
|
|
returning 0 from the callback for errors signals successful transfer if
the source returned no data even though the exception we've just caught
clearly disagrees. while this is not all that important (since the only
viable cause of such errors will be dataCallback, and the sole instance
of it being used already takes care of exceptions) we can just do this.
Change-Id: I2bb150eff447121d82e8e3aa4e00057c40523ac6
|
|
this will be necessary if we want download() to return a source instead
of consuming a sink, which will in turn be needed to remove coroutines.
Change-Id: I34ec241e9bbc5d32fbcd243b244e29c3757533aa
|
|
not doing this will cause transfers that had their readers disappear to
linger. with lingering transfers the curl thread can't shut down, which
will cause nix itself to not shut down until the transfer finishes some
other way (most likely network timeouts). also add a new test for this.
Change-Id: Id2401b3ac85731c824db05918d4079125be25b57
|
|
only decompress the response once all data has been received (in the
fully buffered case), or at least outside of the curl wrapper itself
(in the receive-to-sink case). unfortunately this means we will have
to duplicate decompression logic for these two cases for time being,
but once the curl wrapper has been rewritten to return a real future
or Source we can deduplicate this logic again. the curl wrapper will
have to turn into a proper Source first and use decompression source
logic which also does not currently exist—only decompression *sinks*.
Change-Id: I66bc692f07d9b9e69fe10689ee73a2de8d65e35c
|
|
this is highly questionable. single-arg download calls will misbehave
with it set, and two-arg download calls will just overwrite it. being
an implementation detail this should not have been in the API at all.
Change-Id: I613772951ee03d8302366085f06a53601d13f132
|
|
this lets each implementation of FileTransfer (of which currently only
the one exists at all) implement appropriate handling for its internal
behaviours that are not otherwise exposed. in curl this lets us switch
the buffer-full handling method from "block the entire curl thread" to
"pause just the one transfer", move the non-libcurl body decompression
out of the actual curl wrapper (which will let us eventually morph the
curl wrapper into an actual source of Sources), and some other things.
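the per-transfer pause for the buffer-full case looks roughly like this (sketch, not the actual code):
```cpp
#include <curl/curl.h>

// Returning CURL_WRITEFUNC_PAUSE pauses only this transfer; curl re-delivers
// the same data to the callback once the handle is unpaused.
size_t writeCallbackSketch(char * data, size_t size, size_t nmemb, void * userdata)
{
    auto * receiverFull = static_cast<bool *>(userdata);
    if (*receiverFull)
        return CURL_WRITEFUNC_PAUSE;
    // ... hand (data, size * nmemb) to the receiving buffer here ...
    return size * nmemb;
}

// later, on the curl thread, once the reader has drained its buffer:
//   curl_easy_pause(handle, CURLPAUSE_CONT);
```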
Change-Id: Id6d3593cde6b4915aab3e90a43b175c103cc3f18
|
|
just accumulate error data into result.data as we would for successful
transfers without a dataCallback. errorSink and data would contain the
same data in error cases anyway, so splitting them is not very useful.
Change-Id: I00e449866454389ac6a564ab411c903fd357dabf
|
|
this was broken in 75b62e52600a44b42693944b50638bf580a2c86e.
Change-Id: If8583e802afbcde822623036bf41a9708fbc7c8d
|
|
don't reimplement header parsing. this was only really needed due to the
ancient github bug we no longer care about, everything else we have done
in custom code can also be done using curl itself. doing this also fixes
possible sources of header smuggling (because the header function didn't
unfold headers and we'd trim them before parsing, which would've made us
read contents of one header as a fully formed header in itself). this is
a slight behavior change because we now honor only the first instance of
a given header where previous behavior was to honor either the last or a
combination of all of them (accept-ranges was logical-or'd by accident).
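with curl's own header API (curl >= 7.83) the lookup becomes something like this sketch:
```cpp
#include <curl/curl.h>
#include <optional>
#include <string>

// curl unfolds and trims the value for us; index 0 honors only the first
// instance of the header, matching the new behavior described above.
std::optional<std::string> getHeaderSketch(CURL * handle, const char * name)
{
    struct curl_header * h = nullptr;
    if (curl_easy_header(handle, name, 0, CURLH_HEADER, -1, &h) == CURLHE_OK)
        return std::string(h->value);
    return std::nullopt;
}
```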
Change-Id: I93cb93ddb91ab98c8991f846014926f6ef039fdb
|
|
this was a workaround for a *github* bug that happened *in 2015*.
not only is github no longer buggy, it shouldn't have been nix's
responsibility to work around these bugs like this to begin with.
while we're at it we'll also remove another workaround—again for
github specifically and again for etag handling—from 2021 that's
also not needed any more. future workarounds for serverside bugs
should probably come with an expiration date that mutates into a
build warning after a while, otherwise this *will* happen again.
Change-Id: I74f739ae3e36d40350f78bebcb5869aa8cc9adcd
|
|
the previous solution to the wakeup problem (adding a pipe and passing
it as an additional fd to curl_multi_wait) worked, but there have been
builtin alternatives for this since 2020. not only do these save code,
they're also a lot more likely to work natively on windows when needed
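the builtin mechanism is curl_multi_poll plus curl_multi_wakeup (curl >= 7.66); a sketch of the worker loop:
```cpp
#include <curl/curl.h>
#include <atomic>

void workerLoopSketch(CURLM * multi, std::atomic<bool> & quit)
{
    while (!quit) {
        int running = 0;
        curl_multi_perform(multi, &running);

        int numfds = 0;
        // Sleeps up to 1000 ms, but returns immediately when another thread
        // calls curl_multi_wakeup(multi) -- this replaces the old self-pipe.
        curl_multi_poll(multi, nullptr, 0, 1000, &numfds);
    }
}

// from another thread, e.g. when enqueueing a transfer or shutting down:
//   curl_multi_wakeup(multi);
```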
Change-Id: Iab751b900997110a8d15de45ea3ab0c42f7e5973
|
|
the oldest version checked for here is 7.47, which was released in
2016. it's probably safe to say that we do not need these any more
Change-Id: I003411f6b2ce6d56f7ca337390df3ea86bd59a99
|
|
* some things are marked noexcept yet the linter thinks they can throw.
Maybe they can't throw in practice, but I would rather not have the UB
possibility in pretty obvious cold paths.
* various default-case-missing complaints
* a fair pile of casts from integer to character, which are in fact
deliberate.
* an instance of <https://clang.llvm.org/extra/clang-tidy/checks/bugprone/move-forwarding-reference.html>
* bugprone-not-null-terminated-result on handing a string to curl in
chunks of bytes. our usage is fine.
* reassigning a unique_ptr by CRIMES instead of using release(), then
using release() and ignoring the result. wild. let's use release() for
its intended purpose (see the sketch after this list).
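the unique_ptr part, generically (not the actual call site):
```cpp
#include <memory>

struct TransferItemSketch { /* ... */ };

// An API that takes ownership of a raw pointer gets it via release(),
// rather than via casts or by reading the pointer out some other way.
void handOffSketch(void (*takeOwnership)(TransferItemSketch *),
                   std::unique_ptr<TransferItemSketch> item)
{
    takeOwnership(item.release()); // item no longer owns the object
}
```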
Change-Id: Ic3e7affef12383576213a8a7c8145c27e662513d
|
|
Once this commit lands, we are even more visible in analytics FWIW.
Change-Id: Id7e0c162315d0f191edbea9cb5fb82ce363704b9
Signed-off-by: Raito Bezarius <raito@lix.systems>
|
|
These now have equivalents in the standard lib in C++20. This change was
performed with a custom clang-tidy check which I will submit later.
Executed like so:
ninja -C build && run-clang-tidy -checks='-*,nix-*' -load=build/libnix-clang-tidy.so -p .. -fix ../tests | tee -a clang-tidy-result
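As an example of the kind of replacement (the helper name here is illustrative):
```cpp
#include <string_view>

// before (helper name illustrative): hasPrefix(path, "/nix/store/")
// after, using the C++20 standard library directly:
bool isInStoreSketch(std::string_view path)
{
    return path.starts_with("/nix/store/");
}
```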
Change-Id: I62679e315ff9e7ce72a40b91b79c3e9fc01b27e9
|
|
Copies part of the changes of ac89bb064aeea85a62b82a6daf0ecca7190a28b7
Change-Id: I9ce601875cd6d4db5eb1132d7835c5bab9f126d8
|
|
Cleanup `fmt.hh`
(cherry picked from commit 47a1dbb4b8e7913cbb9b4d604728b912e76e4ca0)
Change-Id: Id076a45cb39652f437fe3f8bda10c310a9894777
|
|
std::move(state->data) and data.empty() were called in a loop, and
could run with no other threads intervening. Accessing moved objects
is undefined behavior, and could cause a crash.
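Sketch of the hazardous pattern and the safer shape (not the exact code):
```cpp
#include <condition_variable>
#include <mutex>
#include <string>

struct StateSketch {
    std::mutex mutex;
    std::condition_variable cv;
    std::string data;
    bool done = false;
};

std::string takeChunkSketch(StateSketch & state)
{
    std::unique_lock lock(state.mutex);
    state.cv.wait(lock, [&] { return state.done || !state.data.empty(); });

    // Move the buffered data out and leave the member in a known-empty state,
    // instead of calling data.empty() on a moved-from string next iteration.
    std::string chunk = std::move(state.data);
    state.data.clear();
    return chunk;
}
```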
|
|
Previously, for tarball flakes, we recorded the original URL of the
tarball flake, rather than the URL to which it ultimately
redirects. Thus, a flake URL like
http://example.org/patchelf-latest.tar that redirects to
http://example.org/patchelf-<revision>.tar was not really usable. We
couldn't record the redirected URL, because sites like GitHub redirect
to CDN URLs that we can't rely on to be stable.
So now we use the redirected URL only if the server returns the
`x-nix-is-immutable` or `x-amz-meta-nix-is-immutable` headers in its
response.
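In other words, roughly (sketch):
```cpp
#include <map>
#include <string>

// Only treat the final, redirected URL as a stable ("locked") URL when the
// server explicitly says so via one of these headers.
bool isImmutableRedirectSketch(const std::map<std::string, std::string> & headers)
{
    return headers.count("x-nix-is-immutable") > 0
        || headers.count("x-amz-meta-nix-is-immutable") > 0;
}
```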
|
|
|
|
This provides a platform-independent way to configure the SSL
certificates file in the Nix daemon. Previously we provided
instructions for overriding the environment variable in launchd, but
that obviously doesn't work with systemd. Now we can just tell users
to add
ssl-cert-file = /etc/ssl/my-certificate-bundle.crt
to their nix.conf.
|
|
Remove FormatOrString and remaining uses of format()
|
|
|
|
Don't show the users the raw (possibly compressed) error message, as not
everyone is able to decompress brotli in their brain.
|
|
Check interrupts even when download stalled
|
|
|
|
|
|
|
|
tl;dr: With this 1 line change I was able to get a 1.5x speedup on
1 Gbit/s WAN connections by enabling zstd compression in nginx.
Also, nix already supported all common compression formats for HTTP
transfers, but webservers usually only enable them if they are advertised
through the Accept-Encoding header.
This pull request makes nix advertise content compression support for
zstd, br, gzip and deflate.
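On the nix side this is roughly the following (sketch; an empty CURLOPT_ACCEPT_ENCODING string asks curl to advertise everything it supports):
```cpp
#include <curl/curl.h>

// An empty string makes curl send an Accept-Encoding header listing every
// algorithm it was built with (zstd, br, gzip, deflate) and decompress the
// response transparently.
void advertiseCompressionSketch(CURL * handle)
{
    curl_easy_setopt(handle, CURLOPT_ACCEPT_ENCODING, "");
}
```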
It's particularly useful for adding transparent compression to binary
caches that serve packages from the host nix store, in particular
nix-serve, nix-serve-ng and harmonia.
So far I have tried gzip, brotli and zstd; only zstd brought me
performance improvements on 1 Gbit/s WAN connections.
The following nginx configuration was used in combination with the
[zstd module](https://github.com/tokers/zstd-nginx-module) and
[harmonia](https://github.com/nix-community/harmonia/)
```nix
{
services.nginx.virtualHosts."cache.yourhost.com" = {
locations."/".extraConfig = ''
proxy_pass http://127.0.0.1:5000;
proxy_set_header Host $host;
proxy_redirect http:// https://;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
zstd on;
zstd_types application/x-nix-archive;
'';
};
}
```
For testing I unpacked a linux kernel tarball to the nix store using
this command `nix-prefetch-url --unpack https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.1.8.tar.gz`.
Before:
```console
$ nix build && rm -rf /tmp/hello && time ./result/bin/nix copy --no-check-sigs --from https://cache.thalheim.io --to 'file:///tmp/hello?compression=none' '/nix/store/j42mahch5f0jvfmayhzwbb88sw36fvah-linux-6.1.8.tar.gz'
warning: Git tree '/scratch/joerg/nix' is dirty
real 0m18,375s
user 0m2,889s
sys 0m1,558s
```
After:
```console
$ nix build && rm -rf /tmp/hello && time ./result/bin/nix copy --no-check-sigs --from https://cache.thalheim.io --to 'file:///tmp/hello?compression=none' '/nix/store/j42mahch5f0jvfmayhzwbb88sw36fvah-linux-6.1.8.tar.gz'
real 0m11,884s
user 0m4,130s
sys 0m1,439s
```
Signed-off-by: Jörg Thalheim <joerg@thalheim.io>
Update src/libstore/filetransfer.cc
Co-authored-by: Théophane Hufschmitt <7226587+thufschmitt@users.noreply.github.com>
|
|
These are purely related to NIX_PATH / -I command line parsing, so put
them in libexpr.
|
|
|
|
Remove the `verify TLS: Nix CA file = 'blah'` message that Nix used to print when fetching anything, as it's both useless (`libcurl` prints the same info in its logs) and misleading (it gives the impression that a new TLS connection is being established, which might not be the case because of multiplexing; see #7011).
|