Age | Commit message | Author |
|
|
Since Symbol is just an integer, passing it by const reference is
never advantageous.
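a minimal sketch of the reasoning (the struct below is illustrative, not the real
definition): a 4-byte value is cheaper to copy than to reach through a reference.
    #include <cstdint>

    struct Symbol { uint32_t id; };                         // small, trivially copyable

    uint32_t get(Symbol s)            { return s.id; }      // by value: a register copy
    uint32_t getRef(const Symbol & s) { return s.id; }      // by const ref: pointer plus indirection, no gain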
|
|
after #6218, `Symbol` no longer confers a uniqueness invariant on the
string it wraps; it is now possible to create multiple symbols that
compare equal but whose string contents have different addresses. this
guarantee is now only provided by `SymbolIdx`, leaving `Symbol` only as
a string wrapper that knows about the intricacies of how symbols need to
be formatted for output.
this change renames `SymbolIdx` to `Symbol` to restore the previous
semantics of `Symbol` to that name. we also keep the wrapper type and
rename it to `SymbolStr` instead of returning plain strings from lookups
into the symbol table because symbols are formatted for output in many
places. theoretically we do not need `SymbolStr`, only a function that
formats a string for output as a symbol, but having to wrap every symbol
that appears in a message into e.g. `formatSymbol()` is error-prone and
inconvenient.
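an illustrative sketch of the split (hypothetical definitions, not the actual
code): `Symbol` is the table index that carries the uniqueness guarantee, while
`SymbolStr` is the formatting-aware wrapper returned by symbol-table lookups.
    #include <cstdint>
    #include <ostream>
    #include <string_view>

    struct Symbol {                       // formerly SymbolIdx: unique per distinct string
        uint32_t id;
        bool operator==(Symbol other) const { return id == other.id; }
    };

    struct SymbolStr {                    // formerly Symbol: knows how symbols are printed
        std::string_view s;
        friend std::ostream & operator<<(std::ostream & os, SymbolStr sym)
        {
            return os << sym.s;           // the real type also quotes non-identifier names
        }
    };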
|
|
this slightly increases the amount of memory used for any given symbol, but this
increase is more than made up for if the symbol is referenced more than once in
the EvalState that holds it. on average every symbol should be referenced at
least twice (once to introduce a binding, once to use it), so we expect no
increase in memory on average.
symbol tables are limited to 2³² entries like position tables, and similar
arguments apply to why overflow is not likely: 2³² symbols would require as many
string instances (at 24 bytes each) and map entries (at 24 bytes or more each,
assuming that the map holds on average at most one item per bucket as the docs
say). a full symbol table would require at least 192GB of memory just for
symbols, which is well out of reach. (an ofborg eval of nixpkgs today creates
fewer than a million symbols!)
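a back-of-the-envelope check of that figure, using the per-entry sizes assumed
above:
    #include <cstdint>

    constexpr std::uint64_t entries   = std::uint64_t{1} << 32;        // a full table
    constexpr std::uint64_t perSymbol = 24 /* string */ + 24 /* map entry */;
    static_assert(entries * perSymbol == std::uint64_t{192} << 30,     // 192 GiB
                  "a full symbol table needs at least 192 GiB");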
|
|
PosTable deduplicates origin information, so using symbols for paths is no
longer necessary. moving away from path Symbols also reduces the usage of
symbols for things that are not keys in attribute sets, which will become
important in the future when we turn symbols into indices as well.
|
|
Pos objects are somewhat wasteful as they duplicate the origin file name and
input type for each object. on files that produce more than one Pos when parsed
this is a sizeable waste of memory (one pointer per Pos). the same goes for
ptr<Pos> on 64 bit machines: parsing enough source to require 8 bytes to locate
a position would need at least 8GB of input and 64GB of expression memory. it's
not likely that we'll hit that any time soon, so we can use a uint32_t index to
locate positions instead.
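an illustrative sketch of the layout change (names and fields are approximate,
not the real types): positions shrink to a 32-bit index, and the origin data
lives once in the table rather than in every Pos.
    #include <cstdint>
    #include <string>
    #include <vector>

    struct Origin { std::string file; /* input type, contents, ... */ };

    struct PosIdx { uint32_t idx; };                  // 4 bytes instead of a full Pos

    struct PosTable {
        struct Pos { uint32_t line, column; const Origin * origin; };
        std::vector<Origin> origins;                  // one entry per parsed file
        std::vector<Pos> positions;
        Pos operator[](PosIdx p) const { return positions[p.idx]; }
    };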
|
|
when we introduce position and symbol tables we'll need to do lookups to turn
indices into those tables into actual positions/symbols. having the error
functions as members of EvalState will avoid a lot of churn for adding lookups
into the tables for each caller.
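purely illustrative (names are hypothetical, not the real API): with the error
helper as a member, only EvalState needs to know how to turn an index back into
a printable position.
    #include <cstdint>
    #include <stdexcept>
    #include <string>
    #include <vector>

    struct PosIdx { uint32_t idx; };

    struct EvalState {
        std::vector<std::string> positions;           // stand-in for the real position table

        [[noreturn]] void throwTypeError(PosIdx pos, const std::string & msg)
        {
            // the index -> position lookup happens here, not at every call site
            throw std::runtime_error(msg + " at " + positions[pos.idx]);
        }
    };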
|
|
we don't *need* symbols here. the only advantage they have over strings is
making call-counting slightly faster, but that's a diagnostic feature and thus
needn't be optimized.
this also fixes a move bug that previously didn't show up: PrimOp structs were
accessed after being moved from, which technically invalidates them. previously
the names remained valid because Symbol copies on move, but strings are
invalidated. we now copy the entire primop struct instead of moving it, since
primop registration happens only once and is not performance-sensitive.
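a generic illustration of the bug pattern described above (not the actual
PrimOp type or registration code):
    #include <string>
    #include <utility>
    #include <vector>

    struct PrimOp { std::string name; /* arity, function pointer, ... */ };

    std::vector<PrimOp> registered;

    void registerPrimOp(PrimOp p)
    {
        registered.push_back(std::move(p));
        // BUG: p.name was read here after the move; a copying Symbol happened to
        // keep it valid, a moved-from std::string does not.
        // FIX: push_back(p) (a copy) or read the name before moving -- registration
        // happens once, so the extra copy does not matter.
    }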
|
|
Impure derivations are derivations that can produce a different result
every time they're built. Example:
  stdenv.mkDerivation {
    name = "impure";
    __impure = true; # marks this derivation as impure
    outputHashAlgo = "sha256";
    outputHashMode = "recursive";
    buildCommand = "date > $out";
  };
Some important characteristics:
* This requires the 'impure-derivations' experimental feature.
* Impure derivations are not "cached". Thus, running "nix-build" on
the example above multiple times will cause a rebuild every time.
* They are implemented similarly to CA derivations, i.e. the output is
moved to a content-addressed path in the store. The difference is
that we don't register a realisation in the Nix database.
* Pure derivations are not allowed to depend on impure derivations. In
the future fixed-output derivations will be allowed to depend on
impure derivations, thus forming an "impurity barrier" in the
dependency graph.
* When sandboxing is enabled, impure derivations can access the
network in the same way as fixed-output derivations. In relaxed
sandboxing mode, they can access the local filesystem.
|
|
only expose `builtins.fetchClosure` when the corresponding experimental feature is enabled.
This allows writing fallback code like
  if builtins ? fetchClosure then
    builtins.fetchClosure { ... }
  else
    builtins.storePath ...
|
|
I gather decoding happens on demand, so I don't think this should
have any performance implications one way or the other.
|
|
these functions are called a whole lot, and they're all comparatively small.
always inlining them gives ~0.7% performance boost on eval.
before:
Benchmark 1: nix flakes search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 6.935 s ± 0.052 s [User: 5.852 s, System: 0.853 s]
Range (min … max): 6.808 s … 7.026 s 20 runs
Benchmark 2: nix flakes eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 329.8 ms ± 2.7 ms [User: 299.0 ms, System: 30.8 ms]
Range (min … max): 326.6 ms … 336.5 ms 20 runs
Benchmark 3: nix flakes eval --raw --impure --file expr.nix
Time (mean ± σ): 2.655 s ± 0.038 s [User: 2.364 s, System: 0.220 s]
Range (min … max): 2.574 s … 2.737 s 20 runs
after:
Benchmark 1: nix flakes search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 6.912 s ± 0.036 s [User: 5.823 s, System: 0.856 s]
Range (min … max): 6.849 s … 6.980 s 20 runs
Benchmark 2: nix flakes eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 325.1 ms ± 2.5 ms [User: 293.2 ms, System: 31.8 ms]
Range (min … max): 322.2 ms … 332.8 ms 20 runs
Benchmark 3: nix flakes eval --raw --impure --file expr.nix
Time (mean ± σ): 2.636 s ± 0.024 s [User: 2.352 s, System: 0.226 s]
Range (min … max): 2.574 s … 2.681 s 20 runs
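the exact mechanism isn't shown above; one common way to force inlining of such
small hot accessors, assuming GCC or Clang (the attribute name is
compiler-specific), is a minimal sketch like this:
    struct Value {
        int internalType;

        // force inlining regardless of the optimizer's own heuristics
        [[gnu::always_inline]] inline int type() const { return internalType; }
    };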
|
|
the vast majority of envs are of this size.
before:
Benchmark 1: nix flakes search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 6.946 s ± 0.041 s [User: 5.875 s, System: 0.835 s]
Range (min … max): 6.834 s … 7.005 s 20 runs
Benchmark 2: nix flakes eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 330.3 ms ± 2.5 ms [User: 299.2 ms, System: 30.9 ms]
Range (min … max): 327.5 ms … 337.7 ms 20 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.671 s ± 0.035 s [User: 2.370 s, System: 0.232 s]
Range (min … max): 2.597 s … 2.749 s 20 runs
after:
Benchmark 1: nix flakes search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 6.935 s ± 0.052 s [User: 5.852 s, System: 0.853 s]
Range (min … max): 6.808 s … 7.026 s 20 runs
Benchmark 2: nix flakes eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 329.8 ms ± 2.7 ms [User: 299.0 ms, System: 30.8 ms]
Range (min … max): 326.6 ms … 336.5 ms 20 runs
Benchmark 3: nix flakes eval --raw --impure --file expr.nix
Time (mean ± σ): 2.655 s ± 0.038 s [User: 2.364 s, System: 0.220 s]
Range (min … max): 2.574 s … 2.737 s 20 runs
|
|
This is useful whenever we want to evaluate something to a store path
(e.g. in get-drvs.cc).
Extracted from the lazy-trees branch (where we can require that a
store path must come from a store source tree accessor).
|
|
This was introduced in #6174. However, fetch{url,Tarball} are legacy
and we shouldn't have an undocumented attribute that does the same
thing as one that already exists ('sha256').
|
|
This switches addPath from `printStorePath` to `toRealPath`.
|
|
Also use std::string_view in a few more places.
|
|
we'll retain the old coerceToString interface that returns a string, but callers
that don't need the returned value to outlive the Value it came from can save
copies by using the new interface instead. for values that weren't stringy we'll
pass a new buffer argument that'll be used for storage and shouldn't be
inspected.
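a hypothetical sketch of the two shapes (simplified, not the real signatures):
the view returned by the new interface may point into the Value's own storage
or into the caller-supplied buffer, so it must not outlive either.
    #include <string>
    #include <string_view>

    struct Value;                                             // stands in for the real Value

    // old interface: always returns an owned copy
    std::string coerceToString(Value & v);

    // new interface: no copy when the value already owns string storage;
    // otherwise the result is built inside `buf`, which callers should not inspect
    std::string_view coerceToString(Value & v, std::string & buf);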
|
|
once a string has been forced we already have dynamic storage allocated for it,
so we can easily reuse that storage instead of copying.
|
|
keeping it as a simple data member means it won't be scanned by the GC, so
eventually the GC will collect a cache that is still referenced (resulting in
use-after-free of cache elements).
fixes #5962
|
|
- Make passing the position to `forceValue` mandatory;
  this way we remind people that the position is
  important for better error messages
- Add pos to all `forceValue` calls
|
|
optimize primops and utils by caching more and copying less
|
|
improve parser performance a bit
|
|
gives about 1% improvement on system eval, a bit less on nix search.
# before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 7.419 s ± 0.045 s [User: 6.362 s, System: 0.794 s]
Range (min … max): 7.335 s … 7.517 s 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.921 s ± 0.023 s [User: 2.626 s, System: 0.210 s]
Range (min … max): 2.883 s … 2.957 s 20 runs
# after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 7.370 s ± 0.059 s [User: 6.333 s, System: 0.791 s]
Range (min … max): 7.286 s … 7.541 s 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.891 s ± 0.033 s [User: 2.606 s, System: 0.210 s]
Range (min … max): 2.823 s … 2.958 s 20 runs
|
|
when given a string, yacc will copy the entire input to a newly allocated
location so that it can add a second terminating NUL byte. since the
parser is a very internal thing to EvalState we can ensure that having
two terminating NUL bytes is always possible without copying, and have
the parser itself merely check that the expected NULs are present.
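a minimal sketch of the trick (hypothetical helper, not the actual parser
plumbing): read the source into a buffer that already ends in two NUL bytes, so
the lexer can verify they are there instead of yacc re-copying the whole input.
    #include <istream>
    #include <iterator>
    #include <string>

    std::string readSourceForParsing(std::istream & in)
    {
        std::string s{std::istreambuf_iterator<char>{in}, std::istreambuf_iterator<char>{}};
        s.append("\0\0", 2);          // the parser merely checks that both NULs are present
        return s;
    }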
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 572.4 ms ± 2.3 ms [User: 563.4 ms, System: 8.6 ms]
Range (min … max): 566.9 ms … 579.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 381.7 ms ± 1.0 ms [User: 348.3 ms, System: 33.1 ms]
Range (min … max): 380.2 ms … 387.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.936 s ± 0.005 s [User: 2.715 s, System: 0.221 s]
Range (min … max): 2.923 s … 2.946 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 571.7 ms ± 2.4 ms [User: 563.3 ms, System: 8.0 ms]
Range (min … max): 566.7 ms … 579.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 376.6 ms ± 1.0 ms [User: 345.8 ms, System: 30.5 ms]
Range (min … max): 374.5 ms … 379.1 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.922 s ± 0.006 s [User: 2.707 s, System: 0.215 s]
Range (min … max): 2.906 s … 2.934 s 50 runs
|
|
there are a few symbols in primops we can create once and pick them out of
EvalState afterwards instead of creating them every time we need them. this
gives almost 1% speedup to an uncached nix search.
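an illustrative sketch of that technique (stand-in types and hypothetical
member names): intern the hot names once as EvalState members so primops can
reuse them instead of re-creating them per call.
    #include <cstdint>
    #include <string>
    #include <unordered_map>

    struct Symbol { uint32_t id; };

    struct SymbolTable {
        std::unordered_map<std::string, uint32_t> ids;
        Symbol create(const std::string & s)
        {
            auto it = ids.emplace(s, static_cast<uint32_t>(ids.size())).first;
            return Symbol{it->second};
        }
    };

    struct EvalState {
        SymbolTable symbols;
        const Symbol sName  = symbols.create("name");    // created once here...
        const Symbol sValue = symbols.create("value");
        // ...so primops use state.sName instead of symbols.create("name") each time
    };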
|
|
Previously you had to remember to call value->attrs->sort() after
populating value->attrs. Now there is a BindingsBuilder helper that
wraps Bindings and ensures that sort() is called before you can use
it.
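A usage sketch against the described helper (this assumes the nix eval headers,
and the exact method names may differ from the real API):
    void mkExampleAttrs(EvalState & state, Value & v)
    {
        auto attrs = state.buildBindings(2);        // expected number of attributes
        attrs.alloc("name").mkString("example");
        attrs.alloc("value").mkInt(1);
        v.mkAttrs(attrs);                           // only sorted Bindings ever reach v
    }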
|