Age | Commit message | Author
|
The default is 1000 ms, but we can hit it a lot if we don't have a
direct link to AWS (e.g. when using a VPN).
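Presumably this refers to aws-sdk-cpp's ClientConfiguration::connectTimeoutMs, whose default is 1000 ms. A sketch of raising it (the 5000 ms value is illustrative):

    Aws::Client::ClientConfiguration config;
    config.connectTimeoutMs = 5000; // the 1000 ms default is too tight over a VPN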
|
|
This enables using HTTP for S3 requests, for debugging or for
implementations that don't have HTTPS configured. This is not a problem
for binary caches, since they should not contain sensitive information.
Both package signatures and AWS auth already protect against tampering.
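For example (hypothetical cache URI and store path, assuming the store parameter is named 'scheme'):

    nix copy --to 's3://my-cache?scheme=http' $storePath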
|
|
Since we're not using multi-part uploads at the moment, we can drop
this patch.
|
|
|
|
The use of TransferManager has several issues, including that it
doesn't allow setting a Content-Encoding without a patch and that it
doesn't handle exceptions in worker threads (causing termination on
memory allocation failure).
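A plain PutObjectRequest, by contrast, can set the Content-Encoding directly and reports failures synchronously. A minimal sketch against the aws-sdk-cpp API (bucket/key/stream/client names are illustrative):

    #include <aws/s3/S3Client.h>
    #include <aws/s3/model/PutObjectRequest.h>

    Aws::S3::Model::PutObjectRequest request;
    request.SetBucket(bucketName);
    request.SetKey(key);
    request.SetContentEncoding("br"); // not possible with an unpatched TransferManager
    request.SetBody(stream);          // a seekable std::iostream

    auto outcome = client->PutObject(request);
    if (!outcome.IsSuccess())
        throw std::runtime_error(std::string("AWS error: ")
            + outcome.GetError().GetMessage().c_str());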
Fixes #2493.
|
|
Since the callback is global we can't refer to 'path' in it. This
could cause a segfault or printing of arbitrary data.
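The hazard, sketched abstractly (illustrative code, not the original):

    #include <functional>
    #include <iostream>
    #include <string>

    static std::function<void()> progressCallback; // global, may fire on another thread

    void upload(const std::string & path)
    {
        // 'path' is destroyed when upload() returns, but the global
        // callback can run afterwards -> dangling reference
        progressCallback = [&path]() { std::cerr << path << "\n"; };
    }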
|
|
This meant that making a typo in an s3:// URI would cause a bucket to
be created. Also it didn't handle eventual consistency very well. Now
it's up to the user to create the bucket.
|
|
TransferManager allocates a lot of memory (50 MiB by default), and it
might leak, though I'm not sure about that. In any case it was causing
OOMs in hydra-queue-runner. So allocate only one TransferManager per
S3BinaryCacheStore.
Hopefully fixes https://github.com/NixOS/hydra/issues/586.
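A sketch of the shared-instance pattern (field and helper names are illustrative; the configuration calls follow the documented aws-sdk-cpp TransferManager setup):

    #include <aws/core/utils/threading/Executor.h>
    #include <aws/s3/S3Client.h>
    #include <aws/transfer/TransferManager.h>

    // per-store state: one executor + one TransferManager, created lazily
    std::shared_ptr<Aws::Utils::Threading::PooledThreadExecutor> executor;
    std::shared_ptr<Aws::Transfer::TransferManager> transferManager;

    std::shared_ptr<Aws::Transfer::TransferManager>
    getTransferManager(std::shared_ptr<Aws::S3::S3Client> client)
    {
        if (!transferManager) {
            executor = std::make_shared<Aws::Utils::Threading::PooledThreadExecutor>(4);
            Aws::Transfer::TransferManagerConfiguration config(executor.get());
            config.s3Client = client;
            transferManager = Aws::Transfer::TransferManager::Create(config);
        }
        return transferManager;
    }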
|
|
This callback is executed on a different thread, so exceptions thrown
from the callback are not caught:
Aug 08 16:25:48 chef hydra-queue-runner[11967]: terminate called after throwing an instance of 'nix::Error'
Aug 08 16:25:48 chef hydra-queue-runner[11967]: what(): AWS error: failed to upload 's3://nix-cache/19dbddlfb0vp68g68y19p9fswrgl0bg7.ls'
Therefore, just check the transfer status after it completes. Also
include the S3 error message in the exception.
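A sketch of the synchronous check (the TransferManager calls are the real aws-sdk-cpp API; the exception type is illustrative):

    auto handle = transferManager->UploadFile(stream, bucketName, key,
        "application/octet-stream", Aws::Map<Aws::String, Aws::String>());
    handle->WaitUntilFinished(); // blocks on this thread, so throwing is safe
    if (handle->GetStatus() != Aws::Transfer::TransferStatus::COMPLETED)
        throw std::runtime_error(std::string("AWS error: failed to upload '")
            + key.c_str() + "': " + handle->GetLastError().GetMessage().c_str());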
|
|
Fixes https://github.com/NixOS/nix/issues/2333 and https://github.com/NixOS/nixpkgs/issues/44337.
|
|
This didn't work anymore since decompression was only done in the
non-coroutine case.
Decompressors are now sinks, just like compressors.
Also fixed a bug in bzip2 API handling (we have to handle BZ_RUN_OK
rather than BZ_OK), which we didn't notice because there was a missing
'throw':
if (ret != BZ_OK)
CompressionError("error while compressing bzip2 file");
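The corrected check tests the right status code and actually throws:

    if (ret != BZ_RUN_OK)
        throw CompressionError("error while compressing bzip2 file");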
|
|
This works for uploading but not for downloading.
|
|
runner logs.
|
|
This reduces memory consumption of
nix copy --from file://... --to ~/my-nix /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79
from 514 MiB to 18 MiB for an uncompressed binary cache, and from 192
MiB to 53 MiB for a bzipped binary cache. It may also be faster
because fetching can happen concurrently with decompression/writing.
Continuation of 48662d151bdf4a38670897beacea9d1bd750376a.
Issue https://github.com/NixOS/nix/issues/1681.
|
|
|
|
|
|
|
|
case the nix command is interrupted.
|
|
5Gb.
|
|
|
|
|
|
This allows specifying the AWS configuration profile to use. E.g.
nix copy --from s3://my-cache?profile=aws-dev-account /nix/store/cf3isrlqavvd5w7rpky1fa8j9lcnlggm-...
|
|
As far as we're concerned, not being able to access a file just means
the file is missing. Plus, AWS explicitly goes out of its way to
return a 403 if the file is missing and the requester doesn't have
permission to list the bucket.
Also getting rid of an old hack that Eelco said was only relevant
to an older AWS SDK.
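In aws-sdk-cpp terms, the resulting check looks roughly like this (a sketch; the surrounding control flow is illustrative):

    auto err = outcome.GetError().GetErrorType();
    if (err == Aws::S3::S3Errors::NO_SUCH_KEY
        || err == Aws::S3::S3Errors::ACCESS_DENIED)
        return false; // treat both as "the file is missing"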
|
|
Relevant RFC: NixOS/rfcs#4
$ ag -l | xargs sed -i -e "/\"/s/’/'/g;/\"/s/‘/'/g"
|
|
Recently aws-sdk-cpp quietly switched to using S3 virtual host URIs
(https://github.com/aws/aws-sdk-cpp/commit/69d9c53882), i.e. it sends
requests to http://<bucket>.<region>.s3.amazonaws.com rather than
http://<region>.s3.amazonaws.com/<bucket>. However this interacts
badly with curl connection reuse. For example, if we do the following:
1) Check whether a bucket exists using GetBucketLocation.
2) If it doesn't, create it using CreateBucket.
3) Do operations on the bucket.
then 3) will fail for a minute or so with a NoSuchBucket exception,
presumably because the server being hit is a fallback for cases when
buckets don't exist.
Disabling the use of virtual hosts ensures that 3) succeeds
immediately. (I don't know what S3's consistency guarantees are for
bucket creation, but in practice buckets appear to be available
immediately.)
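Virtual addressing is turned off via the S3Client constructor's useVirtualAddressing flag (a sketch; the credentials/config variables are assumed to exist):

    auto client = std::make_shared<Aws::S3::S3Client>(
        credentials, clientConfig,
        Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::Never,
        false /* useVirtualAddressing */);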
|
|
|
|
This is returned by recent versions. Also handle NO_SUCH_KEY even
though the library doesn't actually return that at the moment.
|
|
Newer versions of aws-sdk-cpp call CalculateDelayBeforeNextRetry()
even for non-retriable errors (like NoSuchKey), which causes log spam
in hydra-queue-runner.
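A sketch of one possible workaround (the class name is illustrative; the overridden method and AWSError::ShouldRetry() are real aws-sdk-cpp APIs):

    class QuietRetryStrategy : public Aws::Client::DefaultRetryStrategy
    {
    public:
        using DefaultRetryStrategy::DefaultRetryStrategy;

        long CalculateDelayBeforeNextRetry(
            const Aws::Client::AWSError<Aws::Client::CoreErrors> & error,
            long attemptedRetries) const override
        {
            if (!error.ShouldRetry()) return 0; // e.g. NoSuchKey: skip the noisy path
            return DefaultRetryStrategy::CalculateDelayBeforeNextRetry(error, attemptedRetries);
        }
    };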
|
|
|
|
|
|
The typical use is to inherit Config and add Setting<T> members:
#include <iostream>
#include <string>
// Config, Setting and readConfigFile come from the new config header.

class MyClass : private Config
{
    Setting<int> foo{this, 123, "foo", "the number of foos to use"};
    Setting<std::string> bar{this, "blabla", "bar", "the name of the bar"};

public:
    MyClass() : Config(readConfigFile("/etc/my-app.conf"))
    {
        std::cout << foo << "\n"; // will print 123 unless overridden
    }
};
Currently, this is used by Store and its subclasses for store
parameters. You now get a warning if you specify a non-existent store
parameter in a store URI.
|
|
|
|
|
|
|
|
So if "text-compression=br", the .ls file in S3 will get a
Content-Encoding of "br". Brotli appears to compress better than xz
for this kind of file and is natively supported by browsers.
|
|
This is necessary for serving log files to browsers.
|
|
You can now set the store parameter "text-compression=br" to compress
textual files in the binary cache (i.e. narinfo and logs) using
Brotli. This sets the Content-Encoding header; the extension of
compressed files is unchanged.
You can separately specify the compression of log files using
"log-compression=br". This is useful when you don't want to compress
narinfo files for backward compatibility.
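For example, compressing only logs (hypothetical cache URI and store path):

    nix copy --to 's3://my-cache?log-compression=br' $storePath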
|
|
|
|
|
|
Fixes the problem mentioned in e6a61b8da788efbbbb0eb690c49434b6b5fc9741
See #1135
|
|
|
|
|
|
This adds support for s3:// URIs in all places where Nix allows URIs,
e.g. in builtins.fetchurl, builtins.fetchTarball, <nix/fetchurl.nix>
and NIX_PATH. It allows fetching resources from private S3 buckets,
using credentials obtained from the standard places (i.e. AWS_*
environment variables, ~/.aws/credentials and the EC2 metadata
server). This may not be super-useful in general, but since we already
depend on aws-sdk-cpp, it's a cheap feature to add.
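For example (hypothetical bucket):

    nix-prefetch-url s3://my-private-bucket/source.tar.gz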
|
|
Because config.h can #define things like _FILE_OFFSET_BITS=64 and not
every compilation unit includes config.h, we currently compile half of
Nix with _FILE_OFFSET_BITS=64 and the other half with _FILE_OFFSET_BITS
unset. This causes major havoc with the Settings class on e.g. 32-bit
ARM, where different compilation units disagree about the struct layout.
E.g.:
diff --git a/src/libstore/globals.cc b/src/libstore/globals.cc
--- a/src/libstore/globals.cc
+++ b/src/libstore/globals.cc
@@ -166,6 +166,8 @@ void Settings::update()
     _get(useSubstitutes, "build-use-substitutes");
+    fprintf(stderr, "at Settings::update(): &useSubstitutes = %p\n", &nix::settings.useSubstitutes);
     _get(buildUsersGroup, "build-users-group");
diff --git a/src/libstore/remote-store.cc b/src/libstore/remote-store.cc
--- a/src/libstore/remote-store.cc
+++ b/src/libstore/remote-store.cc
@@ -138,6 +138,8 @@ void RemoteStore::initConnection(Connection & conn)
 void RemoteStore::setOptions(Connection & conn)
 {
+    fprintf(stderr, "at RemoteStore::setOptions(): &useSubstitutes = %p\n", &nix::settings.useSubstitutes);
     conn.to << wopSetOptions
Gave me:
at Settings::update(): &useSubstitutes = 0xb6e5c5cb
at RemoteStore::setOptions(): &useSubstitutes = 0xb6e5c5c7
That was not a fun one to debug!
|
|
This is required now.
|
|
|
|
It failed with
AWS error uploading ‘6gaxphsyhg66mz0a00qghf9nqf7majs2.ls.xz’: Unable to parse ExceptionName: MissingContentLength Message: You must provide the Content-Length HTTP header.
possibly because the istringstream_nocopy introduced in
0d2ebb4373e509521f27a6e8f16bfd39d05b2188 doesn't supply the seek
method that the AWS library expects. So bring back the old version,
but only for S3BinaryCacheStore.
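I.e., for S3 we pay for a copy to get a seekable stream (a sketch, assuming a PutObjectRequest as in the upload sketch above; the SDK seeks to compute Content-Length):

    // std::stringstream copies 'data' but supports seeking, which the
    // AWS SDK needs in order to set the Content-Length header
    auto stream = std::make_shared<std::stringstream>(data);
    request.SetBody(stream);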
|
|
This reverts commit f78126bfd6b6c8477fcdbc09b2f98772dbe9a1e7. There
really is no need for such a massive change...
|
|
|
|
This cuts hydra-queue-runner's peak memory usage by about a third.
|