libexpr: extend `Value::print` to allow limited depth
|
This is done in roughly the same way builtin functions are documented.
Also auto-link experimental features for primops, subsuming PR #8371.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
|
Motivation
`PathSet` is not correct because string contexts have other forms
(`Built` and `DrvDeep`) that are not rendered as plain store paths.
Instead of wrongly using `PathSet`, or "stringly typed" using
`StringSet`, use `std::set<StringContextElem>`.
-----
In support of this change, `NixStringContext` is now defined as
`std::set<StringContextElem>` rather than `std::vector<StringContextElem>`. The
old definition was just used by a `getContext` method which was only
used by the eval cache. It can be deleted altogether since the types are
now unified and the preexisting `copyContext` function already suffices.
Summarizing the previous paragraph:
Old:
- `value/context.hh`: `NixStringContext = std::vector<StringContextElem>`
- `value.hh`: `NixStringContext Value::getContext(...)`
- `value.hh`: `copyContext(...)`
New:
- `value/context.hh`: `NixStringContext = std::set<StringContextElem>`
- `value.hh`: `copyContext(...)`
----
The string representation of string context elements no longer contains
the store dir. The diff of `src/libexpr/tests/value/context.cc` should
make clear what the new representation is, so we recommend reviewing
that file first. This was done for two reasons:
Less API churn:
`Value::mkString` and friends did not take a `Store` before. But if
`NixStringContextElem::{parse, to_string}` *do* take a store (as they
did before), then we cannot have the `Value` functions use them (in
order to work with the fully-structured `NixStringContext`) without
adding that argument.
That would have meant a lot of churn threading the store through, and this
diff is already large enough, so the easier and less invasive thing to
do was simply make the element `parse` and `to_string` functions not
take the `Store` reference, and the easiest way to do that was to simply
drop the store dir.
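To make the new representation concrete, a hedged illustration (the hash is a dummy; the updated tests in `src/libexpr/tests/value/context.cc` are the authoritative examples):
```cpp
#include <string_view>

// Illustrative only: a "built" context element as rendered before and after
// this change. The new form can be produced without a Store reference.
constexpr std::string_view before =
    "!out!/nix/store/g1w7hy3qg1w7hy3qg1w7hy3qg1w7hy3q-example.drv";
constexpr std::string_view after =
    "!out!g1w7hy3qg1w7hy3qg1w7hy3qg1w7hy3q-example.drv";
```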
Space usage:
Dropping the `/nix/store/` (or similar) from the internal representation
will save space in the heap of the Nix program being interpreted. If
the heap contains many strings with non-trivial contexts, the saving
could add up to something significant.
----
The eval cache version is bumped.
The eval cache serialization uses `NixStringContextElem::{parse,
to_string}`, and since those functions are changed per the above, that
means the on-disk representation is also changed.
This is simply done by changing the name used for the eval cache
from `eval-cache-v4` to `eval-cache-v5`.
----
To avoid some duplication `EvalCache::mkPathString` is added to abstract
over the simple case of turning a store path to a string with just that
string in the context.
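A minimal sketch of what such a helper does; the name is taken from the text above, but the signature and placement here are illustrative rather than the actual `EvalCache` declaration:
```cpp
#include "store-api.hh"
#include "value/context.hh"

// Hedged sketch: render a store path as a string whose context contains
// exactly that path (as an Opaque element).
std::pair<std::string, NixStringContext> mkPathString(
    const Store & store, const StorePath & path)
{
    NixStringContext context;
    context.insert(NixStringContextElem {
        NixStringContextElem::Opaque { .path = path },
    });
    return { store.printStorePath(path), context };
}
```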
Context
This PR picks up where #7543 left off. That one introduced the fully
structured `NixStringContextElem` data type, but kept `PathSet context`
as an awkward middle ground between internal `char[][]` interpreter heap
string contexts and `NixStringContext` fully parsed string contexts.
The infelicity of `PathSet context` was specifically called out during
Nix team group review, but it was agreed that fixing it could be left
as future work. This is that future work.
A possible follow-up step would be to get rid of the `char[][]`
evaluator heap representation, too, but it is not yet clear how to do
that. To use `NixStringContextElem` there we would need to get the STL
containers to GC pointers in the GC build, and I am not sure how to do
that.
----
PR #7543 effectively writes the inverse of `mkPathString`,
`mkOutputString`, and one more such function for the `DrvDeep` case. I
would like that PR to have property tests ensuring it is actually the
inverse as expected.
This PR sets things up nicely so that reworking that PR to be in that
more elegant and better tested way is possible.
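A sketch of the kind of round-trip property test meant here, using rapidcheck with gtest as the existing context tests do; it exercises the `parse`/`to_string` pair and assumes an `rc::Arbitrary<NixStringContextElem>` instance and an equality operator are available:
```cpp
#include <rapidcheck/gtest.h>

#include "value/context.hh"

namespace nix {

// Property: rendering an arbitrary context element and parsing it back
// yields the same element, i.e. parse is the inverse of to_string.
RC_GTEST_PROP(
    NixStringContextElemTest,
    prop_round_trip,
    (const NixStringContextElem & o))
{
    RC_ASSERT(NixStringContextElem::parse(o.to_string()) == o);
}

}
```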
Co-authored-by: Théophane Hufschmitt <7226587+thufschmitt@users.noreply.github.com>
|
* Finish converting existing comments for internal API docs
99% of this was just reformatting existing comments. Only two exceptions:
- Expanded upon `BuildResult::status` compat note
- Split up file-level `symbol-table.hh` doc comments to get
per-definition docs
Also fixed a few whitespace goofs, turning leading tabs to spaces and
removing trailing spaces.
Picking up from #8133
* Fix two things from comments
* Use triple-backtick not indent for `dumpPath`
* Convert GNU-style `\`..'` quotes to markdown style in API docs
This will render correctly.
|
This introduces the SourcePath type from lazy-trees as an abstraction
for accessing files from inputs that may not be materialized in the
real filesystem (e.g. Git repositories). Currently, however, it's just
a wrapper around CanonPath, so it shouldn't change any behaviour. (On
lazy-trees, SourcePath is a <InputAccessor, CanonPath> tuple.)
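A rough sketch of the abstraction, with illustrative method names rather than the actual interface:
```cpp
#include <string>

#include "canon-path.hh"
#include "util.hh"

// Hedged sketch: today this only forwards to the real filesystem via the
// wrapped CanonPath; on lazy-trees the same calls would dispatch through an
// InputAccessor instead of touching the filesystem directly.
struct SourcePath
{
    CanonPath path;

    std::string readFile() const { return nix::readFile(path.abs()); }
    bool pathExists() const { return nix::pathExists(path.abs()); }
    std::string to_string() const { return path.abs(); }
};
```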
|
`///@file` makes them show up in the internal API docs. A tiny few were
missing `#pragma once`.
|
This reverts commit a75b7ba30f1e4f8b15e810fd18e63ee9552e0815, reversing
changes made to 9af16c5f742300e831a2cc400e43df1e22f87f31.
|
Prior to this change, we had a bunch of ad-hoc string manipulation code
scattered around. This made it hard to figure out what the data model
for string contexts is.
Now, we still store string contexts most of the time as encoded strings
--- I was wary of the performance implications of changing that --- but
whenever we parse them we do so only through the
`NixStringContextElem::parse` method, which handles all cases. This
creates a data type that is very similar to `DerivedPath` but:
- Represents the funky `=<drvpath>` case as properly distinct from the
others.
- Only encodes a single output, no wildcards and no set, for the
"built" case.
(I would like to deprecate `=<path>`, after which we are in spitting
distance of `DerivedPath` and could maybe get away with fewer types, but
that is another topic for another day.)
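For orientation, a hedged sketch of the shape of that data type; the real definitions live in `libexpr`'s `value/context.hh` and differ in detail:
```cpp
#include <string>
#include <variant>

#include "path.hh"   // StorePath, from libstore

struct NixStringContextElem
{
    // Plain store path: encoded simply as the path itself.
    struct Opaque { StorePath path; };

    // The funky "=<drvpath>" case: depend on the derivation and the whole
    // closure of its outputs ("DrvDeep").
    struct DrvDeep { StorePath drvPath; };

    // "!<output>!<drvpath>": one named output of a derivation. Unlike
    // DerivedPath::Built there are no wildcards and no output *set*.
    struct Built { StorePath drvPath; std::string output; };

    std::variant<Opaque, DrvDeep, Built> raw;
};
```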
|
Makes `printValueAsJSON` not copy paths to the store for `nix eval
--json`, `nix-instantiate --eval --json` and `nix-env --json`.
Fixes https://github.com/NixOS/nix/issues/5612
|
after #6218 `Symbol` no longer confers a uniqueness invariant on the
string it wraps, it is now possible to create multiple symbols that
compare equal but whose string contents have different addresses. this
guarantee is now only provided by `SymbolIdx`, leaving `Symbol` only as
a string wrapper that knows about the intricacies of how symbols need to
be formatted for output.
this change renames `SymbolIdx` to `Symbol` to restore the previous
semantics of `Symbol` to that name. we also keep the wrapper type and
rename it to `SymbolStr` instead of returning plain strings from lookups
into the symbol table because symbols are formatted for output in many
places. theoretically we do not need `SymbolStr`, only a function that
formats a string for output as a symbol, but having to wrap every symbol
that appears in a message into e.g. `formatSymbol()` is error-prone and
inconvenient.
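A rough sketch of the resulting split; the real classes differ in detail:
```cpp
#include <cstdint>
#include <ostream>
#include <string>

// Hedged sketch: Symbol is only a 32-bit id handed out by the symbol table,
// so equality compares ids and says nothing about string addresses.
// SymbolStr is what table lookups return; it knows how to print the name the
// way Nix source expects (quoted when it is not a plain identifier).
class Symbol
{
    uint32_t id = 0;                  // index into the table, 0 = "no symbol"
    friend class SymbolTable;
public:
    bool operator==(const Symbol & other) const { return id == other.id; }
};

class SymbolStr
{
    const std::string * s = nullptr;
    friend class SymbolTable;
public:
    const std::string & str() const { return *s; }
    friend std::ostream & operator<<(std::ostream & os, const SymbolStr & sym);
};
```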
|
this slightly increases the amount of memory used for any given symbol, but this
increase is more than made up for if the symbol is referenced more than once in
the EvalState that holds it. on average every symbol should be referenced at
least twice (once to introduce a binding, once to use it), so we expect no
increase in memory on average.
symbol tables are limited to 2³² entries like position tables, and similar
arguments apply to why overflow is not likely: 2³² symbols would require as many
string instances (at 24 bytes each) and map entries (at 24 bytes or more each,
assuming that the map holds on average at most one item per bucket as the docs
say). a full symbol table would require at least 192GB of memory just for
symbols, which is well out of reach. (an ofborg eval of nixpkgs today creates
less than a million symbols!)
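The bound quoted above, written out (the per-entry sizes are the assumptions stated in this message, not measurements):
```cpp
#include <cstdint>

constexpr uint64_t maxSymbols = uint64_t(1) << 32;               // table limit
constexpr uint64_t perSymbol  = 24 /* std::string */ + 24 /* map entry */;

// 2^32 entries at 48 bytes each is exactly 192 GiB.
static_assert(maxSymbols * perSymbol == uint64_t(192) * 1024 * 1024 * 1024,
    "2^32 * 48 bytes is 192 GiB");
```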
|
Pos objects are somewhat wasteful as they duplicate the origin file name and
input type for each object. on files that produce more than one Pos when parsed
this is a sizeable waste of memory (one pointer per Pos). the same goes for
ptr<Pos> on 64 bit machines: parsing enough source to require 8 bytes to locate
a position would need at least 8GB of input and 64GB of expression memory. it's
not likely that we'll hit that any time soon, so we can use a uint32_t index to
locate positions instead.
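A hedged sketch of the indexing scheme; names and layout are illustrative:
```cpp
#include <cstdint>
#include <string>
#include <vector>

struct PosIdx { uint32_t id = 0; };              // 0 = "no position"

class PosTable
{
    std::vector<std::string> origins;            // file name / input type, stored once
    struct Record { uint32_t origin, line, column; };
    std::vector<Record> positions;

public:
    uint32_t addOrigin(std::string origin)
    {
        origins.push_back(std::move(origin));
        return origins.size() - 1;
    }

    PosIdx add(uint32_t origin, uint32_t line, uint32_t column)
    {
        positions.push_back({origin, line, column});
        return PosIdx { uint32_t(positions.size()) };   // ids are 1-based
    }

    // Resolve lazily; only error paths ever need the full position.
    std::string show(PosIdx p) const
    {
        auto & r = positions[p.id - 1];
        return origins[r.origin] + ":" + std::to_string(r.line)
            + ":" + std::to_string(r.column);
    }
};
```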
|
Fixes #6243.
|
I gather decoding happens on demand, so I don't think this should
have any perf implications one way or the other.
|
We now memoize on Bindings / list element vectors rather than Values,
so that e.g. two Values that point to the same Bindings will be
printed only once.
|
Fixes #6157.
|
constructing an ostringstream for non-string concats (like integer addition) is
a small constant cost that we can avoid. for string concats we can keep all the
string temporaries we get from coerceToString and concatenate them in one go,
which saves a lot of intermediate temporaries and copies in ostringstream. we
can also avoid copying the concatenated string again by directly allocating it
in GC memory and moving ownership of the concatenated string into the target
value.
saves about 2% on system eval.
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.837 s ± 0.031 s [User: 2.562 s, System: 0.191 s]
Range (min … max): 2.796 s … 2.892 s 20 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.790 s ± 0.035 s [User: 2.532 s, System: 0.187 s]
Range (min … max): 2.722 s … 2.836 s 20 runs
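A hedged sketch of the string-concatenation path described above (simplified; the real evaluator code differs). It assumes a Boehm-GC build, where `GC_MALLOC_ATOMIC` is appropriate because string bytes contain no pointers for the collector to scan:
```cpp
#include <algorithm>
#include <string>
#include <vector>

#include <gc/gc.h>

#include "value.hh"

// `parts` stands in for the temporaries obtained from coerceToString; the
// result is built exactly once, directly in GC-managed memory, and ownership
// of the buffer moves into the target value without another copy.
void mkConcatenatedString(Value & v, const std::vector<std::string> & parts)
{
    size_t len = 0;
    for (auto & p : parts) len += p.size();

    char * buf = (char *) GC_MALLOC_ATOMIC(len + 1);
    char * out = buf;
    for (auto & p : parts)
        out = std::copy(p.begin(), p.end(), out);
    *out = '\0';

    v.mkString(buf);   // stores the pointer; no intermediate ostringstream
}
```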
|
Previously you had to remember to call value->attrs->sort() after
populating value->attrs. Now there is a BindingsBuilder helper that
wraps Bindings and ensures that sort() is called before you can use
it.
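Roughly how the helper is used; the exact calls are from memory and may differ in detail:
```cpp
#include "eval.hh"

// Hedged example: build { x = 1; y = true; } without ever touching
// value->attrs->sort() directly; finishing the builder is what sorts.
void mkExampleAttrs(EvalState & state, Value & v)
{
    auto attrs = state.buildBindings(2);
    attrs.alloc("x").mkInt(1);
    attrs.alloc("y").mkBool(true);
    v.mkAttrs(attrs);
}
```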
|
This function is very useful in nixpkgs, but its implementation in Nix
itself is rather slow due to it requiring a lot of attribute set and
list appends.
|
- This change applies to builtins.toXML and its inner workings
- Proof of concept:
```nix
let e = builtins.toXML e; in e
```
- Before:
```
$ nix-instantiate --eval poc.nix
error: infinite recursion encountered
```
- After:
```
$ nix-instantiate --eval poc.nix
error: infinite recursion encountered
at /data/github/kamadorueda/nix/poc.nix:1:9:
1| let e = builtins.toXML e; in e
|
```
|
When working on some more complex Nix code, there are sometimes rather
unhelpful or misleading error messages, especially if coerce-errors are
thrown.
This patch is a first step towards improving that. I'm happy to file
more changes after that, but I'd like to gather some feedback first.
To summarize, this patch does the following things:
* Attrsets (a.k.a. `Bindings` in `libexpr`) now have a `Pos`. This is
helpful e.g. to identify which attribute-set in `listToAttrs` is
invalid.
* The `Value` struct has a new method named `determinePos` which tries
to guess the position of a value and falls back to a default if that's
not possible (see the sketch after this list).
This can be used to provide better messages if a coercion fails.
* The new `determinePos`-API is used by `builtins.concatMap` now. With
that change, Nix shows the exact position in the error where a wrong
value was returned by the lambda.
To make sure it's still obvious that `concatMap` is the problem,
another stack-frame was added.
* The changes described above can be added to every other `primop`, but
first I'd like to get some feedback about the overall approach.
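A hedged sketch of the `determinePos` idea; simplified, and the real method covers more cases:
```cpp
#include "nixexpr.hh"   // Pos
#include "value.hh"

// Guess a position from the value's own payload, falling back to the
// position supplied by the caller when nothing better is known.
Pos Value::determinePos(const Pos & pos) const
{
    switch (internalType) {
        case tAttrs:  return *attrs->pos;       // the new Bindings position
        case tLambda: return lambda.fun->pos;   // where the lambda was written
        default:      return pos;               // no better guess available
    }
}
```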
|
Move clearValue inside Value
mkInt instead of setInt
mkBool instead of setBool
mkString instead of setString
mkPath instead of setPath
mkNull instead of setNull
mkAttrs instead of setAttrs
mkList instead of setList*
mkThunk instead of setThunk
mkApp instead of setApp
mkLambda instead of setLambda
mkBlackhole instead of setBlackhole
mkPrimOp instead of setPrimOp
mkPrimOpApp instead of setPrimOpApp
mkExternal instead of setExternal
mkFloat instead of setFloat
Add note that the static mk* functions should be removed eventually
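A tiny hedged example of the renamed methods in use:
```cpp
#include "value.hh"

// Hedged example: the mk* members listed above replace the old set* spellings.
void mkExamples()
{
    Value vInt, vBool, vNull;
    vInt.mkInt(42);
    vBool.mkBool(true);
    vNull.mkNull();
}
```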
|
And Value::type to Value::internalType, such that type() can be used in
the next commit to get the new ValueType
|
This is an implementation detail and shouldn't be used. Use normalType()
and the various is<Type> functions instead
|
This will be useful to abstract over the ValueType implementation
details
Make use of it already to replace the showType(ValueType) function
|
We want to *trace* the 'Value *' arrays, not garbage-collect them!
Otherwise the vectors/maps can end up pointing to nowhere.
Fixes #3377. Closes #3384.
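A hedged sketch of the distinction, assuming Boehm GC's C++ allocators: `gc_allocator` memory is itself collectable, so a backing array referenced only from memory the collector does not scan can be freed while still in use, whereas `traceable_allocator` memory is scanned but only freed explicitly:
```cpp
#include <gc/gc_allocator.h>
#include <vector>

#include "value.hh"   // Value

// What we want: the backing arrays of Value * are *traced* by the collector,
// so the Values they point to stay alive, but the arrays themselves are never
// collected out from under the std::vector that owns them.
typedef std::vector<Value *, traceable_allocator<Value *>> ValueVector;

// What went wrong: with gc_allocator the backing storage is itself
// collectable, so it could vanish while the container still pointed at it.
// typedef std::vector<Value *, gc_allocator<Value *>> ValueVector;
```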
|