Age | Commit message | Author |
|
Fixes #2359.
|
|
Fixes #2361.
|
|
Fun fact: rules with multiple targets don't work properly with 'make
-j'. For example, a rule like
a b: c
touch a b
is equivalent to
a: c
touch a b
b: c
touch a b
so with 'make -j', the 'touch' command will be run twice. See
e.g. https://stackoverflow.com/questions/2973445/gnu-makefile-rule-generating-a-few-targets-from-a-single-source-file.
|
|
update config/config.{sub,guess}
|
|
ignore when listxattr fails with ENODATA
|
|
Just
curl 'http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD' > config/config.sub
curl 'http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD' > config/config.guess
Those files are 5 years old and failed to guess newer architectures (e.g. "ppc64-linux").
|
|
This happens on CIFS and means the remote filesystem has no extended
attributes.
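For illustration, a minimal sketch of that handling, assuming the Linux <sys/xattr.h> interface; the helper name is hypothetical, not the actual Nix code:
    #include <sys/xattr.h>
    #include <cerrno>
    #include <stdexcept>
    #include <string>

    // Hypothetical helper: treat ENODATA from llistxattr (seen on CIFS)
    // as "no extended attributes" instead of a fatal error.
    static bool hasExtendedAttrs(const std::string & path)
    {
        ssize_t size = llistxattr(path.c_str(), nullptr, 0);
        if (size < 0) {
            if (errno == ENODATA) return false;  // remote fs has no xattrs
            throw std::runtime_error("querying extended attributes of '" + path + "' failed");
        }
        return size > 0;
    }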
|
|
|
|
TransferManager allocates a lot of memory (50 MiB by default), and it
might leak, but I'm not sure about that. In any case, it was causing
OOMs in hydra-queue-runner. So allocate only one TransferManager per
S3BinaryCacheStore.
Hopefully fixes https://github.com/NixOS/hydra/issues/586.
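Roughly, a sketch of the idea, assuming the AWS C++ SDK's TransferManagerConfiguration / TransferManager::Create API; the struct and method names below are illustrative, not the actual S3BinaryCacheStore code:
    #include <aws/s3/S3Client.h>
    #include <aws/transfer/TransferManager.h>
    #include <memory>
    #include <mutex>

    // Illustrative cache: one TransferManager per store object, created on
    // first use and reused for every upload, instead of one per PUT.
    struct S3StoreState
    {
        std::mutex lock;
        std::shared_ptr<Aws::Transfer::TransferManager> transferManager;

        std::shared_ptr<Aws::Transfer::TransferManager> getTransferManager(
            const std::shared_ptr<Aws::S3::S3Client> & client,
            Aws::Utils::Threading::Executor * executor)
        {
            std::lock_guard<std::mutex> guard(lock);
            if (!transferManager) {
                Aws::Transfer::TransferManagerConfiguration config(executor);
                config.s3Client = client;
                transferManager = Aws::Transfer::TransferManager::Create(config);
            }
            return transferManager;
        }
    };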
|
|
Also, add $path/bin to $PATH even if it doesn't exist. This makes
'man' work properly (since it looks for ../share/man relative to $PATH
entries).
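As an illustrative sketch (hypothetical helper, not the actual code), the entry can simply be prepended to $PATH without checking that it exists:
    #include <stdlib.h>
    #include <string>

    // Hypothetical helper: prepend "<path>/bin" to PATH without checking
    // that the directory exists; man(1) then finds <path>/share/man via
    // the ../share/man lookup relative to that PATH entry.
    void prependBinToPath(const std::string & path)
    {
        const char * old = getenv("PATH");
        std::string newPath = path + "/bin" + (old && *old ? ":" + std::string(old) : "");
        setenv("PATH", newPath.c_str(), 1);
    }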
|
|
This callback is executed on a different thread, so exceptions thrown
from the callback are not caught:
Aug 08 16:25:48 chef hydra-queue-runner[11967]: terminate called after throwing an instance of 'nix::Error'
Aug 08 16:25:48 chef hydra-queue-runner[11967]: what(): AWS error: failed to upload 's3://nix-cache/19dbddlfb0vp68g68y19p9fswrgl0bg7.ls'
Therefore, just check the transfer status after it completes. Also
include the S3 error message in the exception.
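A sketch of that approach, assuming the AWS C++ SDK's UploadFile / WaitUntilFinished / GetStatus / GetLastError API; the wrapper function is hypothetical, not the exact Nix code:
    #include <aws/transfer/TransferManager.h>
    #include <memory>
    #include <stdexcept>
    #include <string>

    // Hypothetical wrapper: wait for the upload on this thread and check
    // the status here, instead of throwing from the SDK's async callback
    // (which runs on another thread, where the exception can't be caught).
    void uploadOrThrow(Aws::Transfer::TransferManager & tm,
        const std::shared_ptr<Aws::IOStream> & stream,
        const Aws::String & bucket, const Aws::String & key)
    {
        auto handle = tm.UploadFile(stream, bucket, key,
            "application/octet-stream", {});

        handle->WaitUntilFinished();

        if (handle->GetStatus() != Aws::Transfer::TransferStatus::COMPLETED)
            throw std::runtime_error(
                "AWS error: failed to upload 's3://" + std::string(bucket.c_str())
                + "/" + std::string(key.c_str()) + "': "
                + std::string(handle->GetLastError().GetMessage().c_str()));
    }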
|
|
Revert "progress-bar: re-draw last update if nothing new for 1sec."
|
|
|
|
Fixes https://github.com/NixOS/nix/issues/2333 and https://github.com/NixOS/nixpkgs/issues/44337.
|
|
This didn't work anymore since decompression was only done in the
non-coroutine case.
Decompressors are now sinks, just like compressors.
Also fixed a bug in bzip2 API handling (we have to handle BZ_RUN_OK
rather than BZ_OK), which we didn't notice because there was a missing
'throw':
if (ret != BZ_OK)
CompressionError("error while compressing bzip2 file");
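The corrected check, per the description above, is roughly:
    // BZ2_bzCompress() returns BZ_RUN_OK (not BZ_OK) on success in BZ_RUN mode.
    if (ret != BZ_RUN_OK)
        throw CompressionError("error while compressing bzip2 file");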
|
|
|
|
|
|
It adds a new operation, cmdAddToStoreNar, that does the same thing as
the corresponding nix-daemon operation, i.e. it calls addToStore(). This
replaces cmdImportPaths, which has the major issue that it sends the
NAR first and the store path second, thus requiring us to store the
incoming NAR either in memory or on disk until we decide what to do
with it.
For example, this reduces the memory usage of
$ nix copy --to 'ssh://localhost?remote-store=/tmp/nix' /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79
from 267 MiB to 12 MiB.
Probably fixes #1988.
|
|
|
|
|
|
|
|
This is primarily useful for testing since it removes the need to have
SSH working.
|
|
This is primarily useful for testing, e.g.
$ nix copy --to 'ssh://localhost?remote-store=/tmp/nix' ...
|
|
2.1 release notes: Add note about s3-compatible stores
|
|
|
|
|
|
Fix symlink leak in restricted eval mode
|
|
Allows selectively adding environment variables to pure shells.
|
|
In EvalState::checkSourcePath, the path is checked against the list of
allowed paths first and later it's checked again *after* resolving
symlinks.
The symlink resolution is done via canonPath, which also strips out
"../" and "./". However, after the canonicalisation, the error message
pointing out that the path is not allowed reveals the symlink target.
Even if we suppressed that message, symlink targets could still be leaked
when the target doesn't exist (in that case the error is thrown inside
canonPath).
So instead, we now call canonPath() without symlink resolution first,
check against the list of allowed paths, and only then resolve symlinks
and check the allowed paths again.
The first call to canonPath() should get rid of all the "../" and "./",
so in theory the only way to leak a symlink is if the attacker is able to
put a symlink in one of the paths allowed by restricted evaluation mode.
I don't think that case is part of the threat model, because if the
attacker can write to such a path, the attack surface is already much
larger.
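A simplified sketch of this two-pass approach; isAllowed() is a hypothetical stand-in for the check against the allowed-path list, and canonPath(path, resolveSymlinks) is assumed to behave as in Nix's util code:
    #include <stdexcept>
    #include <string>

    using Path = std::string;
    bool isAllowed(const Path & p);                                // hypothetical stand-in
    Path canonPath(const Path & p, bool resolveSymlinks = false);  // as in Nix's util code

    Path checkSourcePath(const Path & path_)
    {
        // Pass 1: strip "./" and "../" WITHOUT resolving symlinks, then check.
        Path abspath = canonPath(path_);
        if (!isAllowed(abspath))
            throw std::runtime_error("access to path '" + abspath + "' is forbidden in restricted mode");

        // Pass 2: only now resolve symlinks and check again, so the error
        // above never has to reveal a symlink target.
        Path realPath = canonPath(abspath, true);
        if (!isAllowed(realPath))
            throw std::runtime_error("access to path '" + realPath + "' is forbidden in restricted mode");

        return realPath;
    }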
Signed-off-by: aszlig <aszlig@nix.build>
|
|
Includes documentation and a test.
|
|
Works for uploading but not for downloading.
|
|
Removes unused variable from `nix-build/nix-shell`
|
|
This particular `shell` variable wasn't used, since a new one was
declared in the only branch of the `if` that used a `shell` variable.
It could realistically confuse developers into thinking `$SHELL` is
used in some situations.
|
|
|
|
|
|
Fedora 27 provides an incompatible version of Boost (1.64.0).
|
|
This fixes 'error 10 while decompressing xz file'.
https://hydra.nixos.org/build/78308551
|
|
In some Boost versions, coroutines don't propagate exceptions
properly, causing Nix to fail with the exception 'coroutine has
finished'.
|
|
|
|
https://hydra.nixos.org/build/73991153
|
|
copyPathsToStore: honour keep-going
|
|
|
|
parser.y: fix assoc of -> and < > <= >=
|
|
|
|
prim_foldlStrict: call forceValue() before value is copied
|
|
The parser allowed the senseless `a > b > c` but disallowed `a -> b -> c`,
which seems valid (`->` is right-associative, so it should parse as
`a -> (b -> c)`, while the comparison operators are non-associative).
It might have been a typo.
|
|
forceValue() was called after the value had been copied, effectively
forcing only one of the copies and leaving the other copy unevaluated.
This resulted in the same lazy value being evaluated more than once
(though the number of such hits is not big).
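To make the bug class concrete, here is a generic illustration in plain C++ (not the actual prim_foldlStrict code): a lazily memoised value that is copied before it is forced gets computed once per copy.
    #include <functional>
    #include <iostream>
    #include <optional>

    // Generic illustration: memoisation lives in each copy of the object,
    // so copying an unforced value duplicates the thunk.
    struct Lazy {
        std::function<int()> thunk;
        std::optional<int> value;
        int force() {
            if (!value) value = thunk();   // memoised only in *this* copy
            return *value;
        }
    };

    int main() {
        int evaluations = 0;
        Lazy a{[&] { ++evaluations; return 42; }, {}};

        Lazy b = a;   // copy made *before* forcing: b keeps its own thunk
        a.force();
        b.force();
        std::cout << evaluations << "\n";  // 2; forcing 'a' before the copy would give 1
    }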
|
|
Not ready for this yet: it causes the prompt to disappear in nix repl
and, more generally, can overwrite non-progress-bar messages.
This reverts commit 44de71a39624d86d6744062ee36f57170024c9a0.
|
|
Before:
$ command time nix-prefetch-url https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.17.6.tar.xz
1.19user 1.02system 0:41.96elapsed 5%CPU (0avgtext+0avgdata 182720maxresident)k
After:
1.38user 1.05system 0:39.73elapsed 6%CPU (0avgtext+0avgdata 16204maxresident)k
Note however that addToStore() can still take a lot of memory
(e.g. RemoteStore::addToStore() is constant space, but
LocalStore::addToStore() isn't; that's fixed by
https://github.com/edolstra/nix/commit/c94b4fc7ee0c7b322a5f3c7ee784063b47a11d98
though).
Fixes #1400.
|
|
Apparently, on macOS, 'long' != 'int64_t'.
https://hydra.nixos.org/build/77100756
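The point is type identity rather than width: on macOS, int64_t is (as far as I know) typedef'd to long long, so 'long' and 'int64_t' are distinct 64-bit types and code written against one doesn't match the other. For example:
    #include <cstdint>
    #include <type_traits>

    // Both are 64 bits on LP64 macOS, but they are different types:
    // int64_t is 'long long' there, so overloads/specialisations written
    // for int64_t don't accept a 'long' (and vice versa).
    static_assert(sizeof(long) == sizeof(std::int64_t), "same width");
    // std::is_same_v<long, std::int64_t> -> false on macOS, true on Linux x86_64.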
|