This is to assert that callback functions should never throw (since
the context in which they're called may not be able to handle the
exception).
Also, make Callback movable but uncopyable.
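Both properties, never throwing and single ownership, can be captured in the type itself. The following is a minimal sketch, not Nix's actual Callback implementation: a std::function-based wrapper whose copy operations are deleted and whose invocation terminates rather than letting an exception escape.

    #include <exception>
    #include <functional>
    #include <utility>

    // Illustrative sketch of a move-only, no-throw callback wrapper.
    template<typename T>
    class Callback
    {
        std::function<void(T)> fun;

    public:
        explicit Callback(std::function<void(T)> fun) : fun(std::move(fun)) {}

        // Movable but uncopyable: a pending callback has exactly one
        // owner and is invoked at most once.
        Callback(Callback &&) = default;
        Callback & operator=(Callback &&) = default;
        Callback(const Callback &) = delete;
        Callback & operator=(const Callback &) = delete;

        // The calling context (e.g. a download worker thread) may not
        // be able to handle an exception, so none may escape.
        void operator()(T result) noexcept
        {
            try {
                fun(std::move(result));
            } catch (...) {
                std::terminate();
            }
        }
    };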
This reverts commit 78fa47a7f08a4cb6ee7061bf0bd86a40e1d6dc91.
This flag:
* Disables substituters.
* Sets the tarball-ttl to infinity (ensuring e.g. that the flake
registry and any downloaded flakes are considered current).
* Disables retrying downloads and sets the connection timeout to the
minimum. (So it doesn't completely disable downloads at the moment.)
(cherry picked from commit 8ea842260b4fd93315d35c5ba94b1ff99ab391d8)
Once we've started writing data to a Sink, we can't restart a download
request, because then we end up writing duplicate data to the
Sink. Therefore we shouldn't handle retries in Downloader but at a
higher level (in particular, in copyStorePath()).
Fixes #2952.
(cherry picked from commit a67cf5a3585c41dd9f219a2c7aa9cf67fa69520b)
Once we've started writing data to a Sink, we can't restart a download
request, because then we end up writing duplicate data to the
Sink. Therefore we shouldn't handle retries in Downloader but at a
higher level (in particular, in copyStorePath()).
Fixes #2952.
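The shape of the fix, sketched with illustrative names rather than Nix's actual signatures: the caller restarts the whole operation with a fresh sink, so no bytes are ever written twice.

    #include <chrono>
    #include <thread>

    // Illustrative caller-level retry loop. Each attempt creates its
    // own sink internally, so a restart never duplicates data.
    template<typename Fn>
    void withRetries(unsigned int maxAttempts, Fn && attempt)
    {
        for (unsigned int n = 1; ; ++n) {
            try {
                attempt();
                return;
            } catch (...) {
                if (n >= maxAttempts) throw;
                // Back off, then redo the download from the beginning.
                std::this_thread::sleep_for(std::chrono::milliseconds(250 * n));
            }
        }
    }

In this scheme copyStorePath() would wrap one complete download attempt in withRetries() instead of asking the Downloader to resume mid-stream.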
This flag (see the sketch after this list):
* Disables substituters.
* Sets the tarball-ttl to infinity (ensuring e.g. that the flake
registry and any downloaded flakes are considered current).
* Disables retrying downloads and sets the connection timeout to the
minimum. (So it doesn't completely disable downloads at the moment.)
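Conceptually the flag is just a bundle of setting overrides applied at startup; a minimal sketch with hypothetical setting names (Nix's real option names may differ):

    #include <limits>

    // Hypothetical settings object; the field names are illustrative.
    struct Settings {
        bool useSubstitutes = true;
        unsigned int tarballTtl = 3600;     // seconds
        unsigned int downloadAttempts = 5;
        unsigned int connectTimeout = 0;    // 0 = library default
    };

    void applyOfflineFlag(Settings & settings)
    {
        settings.useSubstitutes = false;                                // no substituters
        settings.tarballTtl = std::numeric_limits<unsigned int>::max(); // cached downloads stay fresh
        settings.downloadAttempts = 1;                                  // no retries
        settings.connectTimeout = 1;                                    // minimum timeout
    }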
This ensures that commands like 'nix flake info /my/nixpkgs' don't
copy a gigabyte of crap to ~/.cache/nix.
Fixes #60.
Otherwise, we just keep asking the substituter for other .narinfo
files, which can take a very long time due to retries/timeouts.
Don't say "download" when we mean "upload".
This reduces memory consumption of
nix copy --from https://cache.nixos.org --to ~/my-nix /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79
from 176 MiB to 82 MiB. (The remaining memory is probably due to xz
decompression overhead.)
Issue https://github.com/NixOS/nix/issues/1681.
Issue https://github.com/NixOS/nix/issues/1969.
Some servers, such as Artifactory, allow uploading with PUT and Basic auth.
This lets 'nix copy' upload binaries to those servers.
Worked on together with @adelbertc.
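For illustration, this is roughly what such an upload looks like at the libcurl level; the URL, file name, and credentials below are placeholders, and Nix's uploader wires in its own read callbacks:

    #include <curl/curl.h>
    #include <cstdio>

    // Sketch: upload one file with HTTP PUT and Basic auth.
    int main()
    {
        curl_global_init(CURL_GLOBAL_ALL);
        CURL * curl = curl_easy_init();
        FILE * f = std::fopen("example.nar.xz", "rb"); // placeholder payload
        if (!curl || !f) return 1;

        curl_easy_setopt(curl, CURLOPT_URL,
            "https://artifactory.example.com/repo/example.nar.xz");
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);                     // PUT
        curl_easy_setopt(curl, CURLOPT_READDATA, f);                    // default read callback reads the FILE*
        curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long) CURLAUTH_BASIC); // Basic auth
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:password");       // placeholder credentials

        CURLcode res = curl_easy_perform(curl);

        std::fclose(f);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return res == CURLE_OK ? 0 : 1;
    }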
Relevant RFC: NixOS/rfcs#4
$ ag -l | xargs sed -i -e "/\"/s/’/'/g;/\"/s/‘/'/g"
This is necessary for serving log files to browsers.
This reverts commit f78126bfd6b6c8477fcdbc09b2f98772dbe9a1e7. There
really is no need for such a massive change...
The fact that queryPathInfo() is synchronous meant that we needed a
thread for every concurrent binary cache lookup, even though they end
up being handled by the same download thread. Requiring hundreds of
threads is not a good idea. So now there is an asynchronous version of
queryPathInfo() that takes a callback function to process the
result. Similarly, enqueueDownload() now takes a callback rather than
returning a future.
Thus, a command like
nix path-info --store https://cache.nixos.org/ -r /nix/store/slljrzwmpygy1daay14kjszsr9xix063-nixos-16.09beta231.dccf8c5
that returns 4941 paths now takes 1.87s using only 2 threads (the main
thread and the downloader thread). (This is with a prewarmed
CloudFront.)
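Sketched with simplified types (Nix's real signatures differ in detail), the change turns the blocking call into an enqueue-plus-callback, so a single download thread can service any number of outstanding lookups:

    #include <functional>
    #include <memory>
    #include <string>

    struct ValidPathInfo; // opaque here; carries the .narinfo metadata

    // Before: the caller blocks, so each concurrent lookup needs its
    // own thread even though one download thread does the real work.
    // std::shared_ptr<ValidPathInfo> queryPathInfo(const std::string & path);

    // After: the caller registers a continuation that the download
    // thread invokes when the response arrives; no extra threads.
    void queryPathInfoAsync(
        const std::string & path,
        std::function<void(std::shared_ptr<ValidPathInfo>)> callback);

enqueueDownload() changes shape the same way: instead of returning a std::future, it takes a callback that the worker thread fires on completion.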
The binary cache store can now use HTTP/2 to do lookups. This is much
more efficient than HTTP/1.1 due to multiplexing: we can issue many
requests in parallel over a single TCP connection. Thus it's no longer
necessary to use a bunch of concurrent TCP connections (25 by
default).
For example, downloading 802 .narinfo files from
https://cache.nixos.org/, using a single TCP connection, takes 11.8s
with HTTP/1.1, but only 0.61s with HTTP/2.
This did require a fairly substantial rewrite of the Downloader class
to use the curl multi interface, because otherwise curl wouldn't be
able to do multiplexing for us. As a bonus, we get connection reuse
even with HTTP/1.1. All downloads are now handled by a single worker
thread. Clients call Downloader::enqueueDownload() to tell the worker
thread to start the download, getting a std::future to the result.
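A minimal sketch of the curl multi usage (the URLs are illustrative): one worker thread drives every transfer, and with HTTP/2 enabled curl multiplexes them over a single connection.

    #include <curl/curl.h>

    int main()
    {
        curl_global_init(CURL_GLOBAL_ALL);
        CURLM * multi = curl_multi_init();
        // Allow many transfers to share one HTTP/2 connection.
        curl_multi_setopt(multi, CURLMOPT_PIPELINING, (long) CURLPIPE_MULTIPLEX);

        const char * urls[] = {
            "https://cache.nixos.org/example1.narinfo", // illustrative
            "https://cache.nixos.org/example2.narinfo",
        };
        for (const char * url : urls) {
            CURL * h = curl_easy_init();
            curl_easy_setopt(h, CURLOPT_URL, url);
            curl_easy_setopt(h, CURLOPT_HTTP_VERSION, (long) CURL_HTTP_VERSION_2TLS);
            curl_multi_add_handle(multi, h);
        }

        // Single-threaded event loop: drive all transfers to completion.
        int running = 0;
        do {
            curl_multi_perform(multi, &running);
            curl_multi_wait(multi, nullptr, 0, 100, nullptr);
        } while (running > 0);

        // Reap the finished easy handles.
        CURLMsg * msg;
        int queued;
        while ((msg = curl_multi_info_read(multi, &queued))) {
            if (msg->msg == CURLMSG_DONE) {
                curl_multi_remove_handle(multi, msg->easy_handle);
                curl_easy_cleanup(msg->easy_handle);
            }
        }

        curl_multi_cleanup(multi);
        curl_global_cleanup();
        return 0;
    }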
This makes us more robust against 500 errors from CloudFront or S3
(assuming the 500 error isn't cached by CloudFront...).
Also, test HttpBinaryCacheStore in addition to LocalBinaryCacheStore.
This is to allow store-specific configuration,
e.g. s3://my-cache?compression=bzip2&secret-key=/path/to/key.
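Illustrative parsing, not Nix's actual parser: everything after the '?' splits into key=value pairs that the store backend can consult.

    #include <map>
    #include <string>

    // Sketch: split "s3://my-cache?compression=bzip2&secret-key=/path/to/key"
    // into a parameter map; the part before '?' is the base store URI.
    std::map<std::string, std::string> parseStoreParams(const std::string & uri)
    {
        std::map<std::string, std::string> params;
        auto q = uri.find('?');
        if (q == std::string::npos) return params;
        std::string rest = uri.substr(q + 1);
        size_t pos = 0;
        while (pos < rest.size()) {
            auto amp = rest.find('&', pos);
            if (amp == std::string::npos) amp = rest.size();
            auto pair = rest.substr(pos, amp - pos);
            auto eq = pair.find('=');
            if (eq != std::string::npos)
                params[pair.substr(0, eq)] = pair.substr(eq + 1);
            pos = amp + 1;
        }
        return params;
    }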
This re-implements the binary cache database in C++, allowing it to be
used by other Store backends, in particular the S3 backend.
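In essence the cache is a small SQLite table mapping store paths to narinfo metadata. A hedged sketch with a simplified schema (Nix's actual schema has more columns):

    #include <sqlite3.h>
    #include <cstdio>

    // Sketch: open the cache database and make sure the table exists.
    int main()
    {
        sqlite3 * db;
        if (sqlite3_open("binary-cache-v1.sqlite", &db) != SQLITE_OK) return 1;

        const char * schema =
            "CREATE TABLE IF NOT EXISTS NARs ("
            "  storePath TEXT PRIMARY KEY,"
            "  narInfo   TEXT" /* serialized metadata; simplified */
            ");";
        char * err = nullptr;
        if (sqlite3_exec(db, schema, nullptr, nullptr, &err) != SQLITE_OK) {
            std::fprintf(stderr, "sqlite error: %s\n", err);
            sqlite3_free(err);
        }
        sqlite3_close(db);
        return 0;
    }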
This allows readFile() to indicate that a file doesn't exist, and
might eliminate some large string copying.
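One way to express that contract (illustrative, not necessarily the signature Nix chose): return an optional instead of throwing, so callers can distinguish "missing" from "error" without exceptions.

    #include <fstream>
    #include <optional>
    #include <sstream>
    #include <string>

    // Sketch: absence is represented in the return type rather than
    // by throwing.
    std::optional<std::string> readFileMaybe(const std::string & path)
    {
        std::ifstream in(path, std::ios::binary);
        if (!in) return std::nullopt;
        std::ostringstream ss;
        ss << in.rdbuf();
        return ss.str();
    }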
http://hydra.nixos.org/build/33279996
The public key can be derived from the secret key, so there's no need
for the user to supply it separately.
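Nix's signing keys are Ed25519 (via libsodium), where the public half can be extracted directly from the secret key; a minimal sketch:

    #include <sodium.h>

    // Sketch: derive the Ed25519 public key from the secret key, so
    // the user only has to supply the secret key file.
    bool derivePublicKey(
        const unsigned char sk[crypto_sign_SECRETKEYBYTES],
        unsigned char pk[crypto_sign_PUBLICKEYBYTES])
    {
        if (sodium_init() < 0) return false;
        // The Ed25519 secret key embeds the public half; this extracts it.
        return crypto_sign_ed25519_sk_to_pk(pk, sk) == 0;
    }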