Fixes #3684.
|
Not a regular git revert, as there have been many merges and other changes in the meantime.
|
Needed so that we can include it as a logger in loggers.cc without
adding a dependency on nix.
This also requires moving names.hh to libutil to prevent a circular
dependency between libmain and libexpr.
|
|
Instead, `Hash` uses `std::optional<HashType>`. In the future, we may
also make `Hash` itself require a known hash type, encouraging people to
use `std::optional<Hash>` instead.
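A minimal sketch of the distinction (illustrative only; the names below are not the actual libutil declarations):

  #include <optional>
  #include <vector>

  enum class HashType { MD5, SHA1, SHA256, SHA512 };

  struct Hash
  {
      std::optional<HashType> type;        // nullopt = hash type not (yet) known
      std::vector<unsigned char> bytes;

      bool hasKnownType() const { return type.has_value(); }
  };

  // The longer-term direction mentioned above: make Hash always carry a
  // known type, and express "maybe no hash at all" as std::optional<Hash>.
  void check(const std::optional<Hash> & expectedHash)
  {
      if (expectedHash && expectedHash->hasKnownType()) {
          // ... verify against *expectedHash ...
      }
  }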
|
(cherry picked from commit 2c692a3b144523bca68dd6de618124ba6c9bb332)
|
This does a few enums; the rest will be handled in subsequent commits.
|
|
(cherry picked from commit 0b013a54dc570395bed887369f8dd622b8ce337b)
|
|
Starting with ba87b08f8529e4d9f8c58d8c625152058ceadb75, getEnv() returns a
std::optional, which means these getEnv() != "" conditions no longer fire
when the variables are not defined.
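For illustration, a sketch of an optional-returning getEnv() and the explicit presence check it encourages (the exact libutil signature is assumed here, not quoted):

  #include <cstdio>
  #include <cstdlib>
  #include <optional>
  #include <string>

  // Sketch: return std::nullopt for unset variables, so "unset" and
  // "set to the empty string" become distinguishable.
  std::optional<std::string> getEnv(const std::string & key)
  {
      char * value = std::getenv(key.c_str());
      if (!value) return std::nullopt;
      return std::string(value);
  }

  int main()
  {
      // Explicit presence check instead of comparing against "".
      if (auto remote = getEnv("NIX_REMOTE"))
          std::printf("NIX_REMOTE = %s\n", remote->c_str());
  }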
|
Most functions now take a StorePath argument rather than a Path (which
is just an alias for std::string). The StorePath constructor ensures
that the path is syntactically correct (i.e. it looks like
<store-dir>/<base32-hash>-<name>). Similarly, functions like
buildPaths() now take a StorePathWithOutputs, rather than abusing Path
by adding a '!<outputs>' suffix.

Note that the StorePath type is implemented in Rust. This involves
some hackery to allow Rust values to be used directly in C++, via a
helper type whose destructor calls the Rust type's drop()
function. The main issue is the dynamic nature of C++ move semantics:
after we have moved a Rust value, we should not call the drop function
on the original value. So when we move a value, we set the original
value to bitwise zero, and the destructor only calls drop() if the
value is not bitwise zero. This should be sufficient for most types.

Also lots of minor cleanups to the C++ API to make it more modern
(e.g. using std::optional and std::string_view in some places).
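A simplified sketch of the move/drop pattern described above; the type and the extern "C" function are illustrative stand-ins, not the actual Nix/Rust FFI declarations:

  #include <cstring>

  extern "C" void rust_storepath_drop(void * self);   // hypothetical Rust-side drop()

  struct RustStorePath
  {
      unsigned char raw[16] = {};   // opaque bytes owned by Rust; size must match the Rust type

      RustStorePath() = default;

      RustStorePath(RustStorePath && other) noexcept
      {
          std::memcpy(raw, other.raw, sizeof(raw));
          // Zero the moved-from value so its destructor knows not to call drop().
          std::memset(other.raw, 0, sizeof(other.raw));
      }

      ~RustStorePath()
      {
          static const unsigned char zeroes[sizeof(raw)] = {};
          // Only values that haven't been moved out of (i.e. are not all-zero)
          // still own a Rust value and must be dropped.
          if (std::memcmp(raw, zeroes, sizeof(raw)) != 0)
              rust_storepath_drop(raw);
      }
  };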
|
Also, fetchGit now runs in O(1) memory since we pipe the output of
'git archive' directly into unpackTarball() (rather than first reading
it all into memory).
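Roughly the streaming shape (a sketch, not the actual fetchGit code; unpackChunk stands in for feeding unpackTarball()):

  #include <cstdio>
  #include <functional>
  #include <stdexcept>
  #include <string>
  #include <string_view>

  static void streamGitArchive(
      const std::string & rev,
      std::function<void(std::string_view)> unpackChunk)
  {
      // Simplified: no argument quoting, no working-directory handling.
      std::string cmd = "git archive " + rev;
      FILE * pipe = popen(cmd.c_str(), "r");
      if (!pipe) throw std::runtime_error("cannot run git archive");

      char buf[64 * 1024];
      size_t n;
      // O(1) memory: one chunk at a time goes to the unpacker.
      while ((n = fread(buf, 1, sizeof(buf), pipe)) > 0)
          unpackChunk(std::string_view(buf, n));

      if (pclose(pipe) != 0)
          throw std::runtime_error("git archive failed");
  }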
|
These are all symlinks to 'nix' now, reducing the installed size by
about 1.7 MiB.
|
|
Before:
  $ command time nix-prefetch-url https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.17.6.tar.xz
  1.19user 1.02system 0:41.96elapsed 5%CPU (0avgtext+0avgdata 182720maxresident)k

After:
  1.38user 1.05system 0:39.73elapsed 6%CPU (0avgtext+0avgdata 16204maxresident)k

Note however that addToStore() can still take a lot of memory
(e.g. RemoteStore::addToStore() is constant space, but
LocalStore::addToStore() isn't; that's fixed by
https://github.com/edolstra/nix/commit/c94b4fc7ee0c7b322a5f3c7ee784063b47a11d98
though).
Fixes #1400.
|
|
EvalState contains a few counters (e.g. nrValues) that increase quickly
enough that they end up being misinterpreted as pointers by the garbage
collector. Moving EvalState to the heap makes them invisible to the
garbage collector.

This reduces the max RSS doing 100 evaluations of
nixos.tests.firefox.x86_64-linux.drvPath from 455 MiB to 292 MiB.

Note: ideally, allocations would be much further up in the 64-bit
address space to reduce the odds of an integer being misinterpreted as
a pointer. Maybe we can use some linker magic to move the .bss segment
to a higher address.
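A rough illustration of the effect, assuming a Boehm-style conservative collector that scans globals and the stack for roots but not ordinary malloc'd memory (the struct below is a stand-in, not the real EvalState):

  struct Counters
  {
      unsigned long nrValues = 0;   // grows fast; can collide with plausible heap addresses
      unsigned long nrEnvs = 0;
  };

  // As a global (or on the stack) these bytes are scanned as potential roots:
  //   static Counters counters;

  // In ordinary heap memory the conservative scan never sees them:
  Counters * counters = new Counters();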
|
|
All plugins in plugin-files will be dlopened, allowing them to
statically construct instances of the various Register* types Nix
supports.
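The static-construction trick looks roughly like this (RegisterThing is an illustrative stand-in for the Register* types, not an actual Nix class):

  #include <functional>
  #include <string>
  #include <utility>
  #include <vector>

  struct RegisterThing
  {
      using Factory = std::function<void()>;

      // Global registry that dlopen'ed plugins append to at load time.
      static std::vector<std::pair<std::string, Factory>> & registry()
      {
          static std::vector<std::pair<std::string, Factory>> r;
          return r;
      }

      RegisterThing(std::string name, Factory f)
      {
          registry().emplace_back(std::move(name), std::move(f));
      }
  };

  // In a plugin's shared object: this static object's constructor runs when
  // the .so is dlopened, so the entry appears in the registry with no
  // further action needed from the host.
  static RegisterThing myPluginEntry("my-plugin", [] { /* set things up */ });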
|
|
Also, random cleanup to argument handling.
|
|
Fixes #1568.
|
|
Relevant RFC: NixOS/rfcs#4
$ ag -l | xargs sed -i -e "/\"/s/’/'/g;/\"/s/‘/'/g"
|
|
Also simplify the Hash API.
Fixes #1437.
|
|
For example, if we call brotli with an empty input, it shouldn't read
from the caller's stdin.
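One common way to get that behaviour, sketched here as an assumption about the shape of the fix rather than the actual code: in the forked child, point stdin at /dev/null before exec'ing the compressor when there is nothing to feed it.

  #include <fcntl.h>
  #include <unistd.h>

  // Hypothetical helper run in the child between fork() and exec().
  static void detachChildStdin()
  {
      int fd = open("/dev/null", O_RDONLY);
      if (fd == -1) _exit(1);
      if (dup2(fd, STDIN_FILENO) == -1) _exit(1);   // stdin now yields EOF immediately
      close(fd);
  }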
|
|
This reverts commit f78126bfd6b6c8477fcdbc09b2f98772dbe9a1e7. There
really is no need for such a massive change...
|
The binary cache store can now use HTTP/2 to do lookups. This is much
more efficient than HTTP/1.1 due to multiplexing: we can issue many
requests in parallel over a single TCP connection. Thus it's no longer
necessary to use a bunch of concurrent TCP connections (25 by
default).

For example, downloading 802 .narinfo files from
https://cache.nixos.org/, using a single TCP connection, takes 11.8s
with HTTP/1.1, but only 0.61s with HTTP/2.

This did require a fairly substantial rewrite of the Downloader class
to use the curl multi interface, because otherwise curl wouldn't be
able to do multiplexing for us. As a bonus, we get connection reuse
even with HTTP/1.1. All downloads are now handled by a single worker
thread. Clients call Downloader::enqueueDownload() to tell the worker
thread to start the download, getting a std::future to the result.
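In outline, the enqueue/worker pattern might look like the sketch below (illustrative only; error handling and shutdown are omitted, and the real Downloader differs in detail):

  #include <curl/curl.h>

  #include <future>
  #include <map>
  #include <mutex>
  #include <string>
  #include <thread>
  #include <vector>

  struct Downloader
  {
      struct Request { std::string url; std::promise<std::string> promise; };
      struct Transfer { std::string body; std::promise<std::string> promise; };

      std::mutex mutex;
      std::vector<Request> incoming;
      std::thread worker{[this] { run(); }};   // single worker thread; joining omitted

      std::future<std::string> enqueueDownload(const std::string & url)
      {
          std::lock_guard<std::mutex> lock(mutex);
          incoming.push_back({url, {}});
          return incoming.back().promise.get_future();
      }

      static size_t writeCb(char * data, size_t size, size_t nmemb, void * userp)
      {
          static_cast<std::string *>(userp)->append(data, size * nmemb);
          return size * nmemb;
      }

      void run()
      {
          curl_global_init(CURL_GLOBAL_ALL);   // normally done once per process
          CURLM * multi = curl_multi_init();
          // Allow many transfers to share one connection as HTTP/2 streams.
          curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);

          std::map<CURL *, Transfer> active;

          for (;;) {
              {   // Attach newly enqueued requests to the multi handle.
                  std::lock_guard<std::mutex> lock(mutex);
                  for (auto & req : incoming) {
                      CURL * easy = curl_easy_init();
                      auto & t = active[easy];
                      t.promise = std::move(req.promise);
                      curl_easy_setopt(easy, CURLOPT_URL, req.url.c_str());
                      curl_easy_setopt(easy, CURLOPT_HTTP_VERSION, (long) CURL_HTTP_VERSION_2TLS);
                      curl_easy_setopt(easy, CURLOPT_PIPEWAIT, 1L);   // prefer multiplexing over new connections
                      curl_easy_setopt(easy, CURLOPT_WRITEFUNCTION, writeCb);
                      curl_easy_setopt(easy, CURLOPT_WRITEDATA, &t.body);
                      curl_multi_add_handle(multi, easy);
                  }
                  incoming.clear();
              }

              int running = 0;
              curl_multi_perform(multi, &running);
              curl_multi_wait(multi, nullptr, 0, 100, nullptr);

              // Fulfil the futures of finished transfers.
              int left = 0;
              while (CURLMsg * msg = curl_multi_info_read(multi, &left)) {
                  if (msg->msg != CURLMSG_DONE) continue;
                  auto it = active.find(msg->easy_handle);
                  it->second.promise.set_value(std::move(it->second.body));
                  curl_multi_remove_handle(multi, msg->easy_handle);
                  curl_easy_cleanup(msg->easy_handle);
                  active.erase(it);
              }
          }
      }
  };

A client then calls enqueueDownload(url) and waits on the returned future, e.g. with .get().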
|
This allows readFile() to indicate that a file doesn't exist, and
might eliminate some large string copying.
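One possible shape of such an interface, purely as a hedged sketch (not the actual libutil readFile()): report a missing file to the caller and stream the contents into a sink instead of building one large string.

  #include <fcntl.h>
  #include <unistd.h>

  #include <cerrno>
  #include <functional>
  #include <stdexcept>
  #include <string>
  #include <string_view>

  // Returns false if the file does not exist; otherwise feeds the contents
  // to 'sink' chunk by chunk.
  bool readFile(const std::string & path, std::function<void(std::string_view)> sink)
  {
      int fd = open(path.c_str(), O_RDONLY);
      if (fd == -1) {
          if (errno == ENOENT) return false;
          throw std::runtime_error("cannot open " + path);
      }
      char buf[64 * 1024];
      ssize_t n;
      while ((n = read(fd, buf, sizeof(buf))) > 0)
          sink(std::string_view(buf, size_t(n)));
      close(fd);
      if (n < 0) throw std::runtime_error("error reading " + path);
      return true;
  }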
|