# Candelabra
This folder contains the source code for Candelabra as a Cargo workspace.
First, set up the dependencies as detailed in the root of the repository.
## Building
Building is done with Cargo as normal: `cargo build`. This places the executable in `./target/debug/candelabra-cli`.
This step is not necessary if you are using the testing VM; in that case, replace `cargo run` with `candelabra-cli` in all commands below.
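A typical build-and-run session might look like the following sketch. The `--help` flag is an assumption for illustration; check the binary's actual options.

```shell
# Build the workspace; the binary lands in ./target/debug/ as described above.
cargo build

# Invoke the resulting executable directly (flag shown is a guess; see its help output).
./target/debug/candelabra-cli --help
```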
## Creating cost models
To build and view a cost model, first pick an implementation to look at:
- primrose_library::VecMap
- primrose_library::VecSet
- std::vec::Vec
- std::collections::BTreeSet
- std::collections::BTreeMap
- primrose_library::SortedVecSet
- std::collections::LinkedList
- primrose_library::SortedVecMap
- primrose_library::SortedVec
- std::collections::HashMap
- std::collections::HashSet
To view the cost model for a single implementation, run `just cost-model <impl>`.
Alternatively, run `just cost-models` to view models for all implementations.
This will clear the cache before running.
Cost models are also saved to `target/candelabra/benchmark_results` as JSON files. To analyse your built cost models, copy them to `../analysis/current/candelabra/benchmark_results` and see the README in `../analysis/`.
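The copy step might look like this; directory names are taken from the paths above, and the `mkdir -p` is a defensive assumption in case the destination does not exist yet.

```shell
# Copy freshly built cost models into the analysis area (paths from this README).
mkdir -p ../analysis/current/candelabra/benchmark_results
cp target/candelabra/benchmark_results/*.json \
   ../analysis/current/candelabra/benchmark_results/
```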
## Profiling applications
To profile an application in the `tests/` directory and display the results, run `just profile <project>`.
Profiling info is also saved to `target/candelabra/profiler_info/` as JSON.
## Selecting containers
To print the estimated cost of using each implementation in a project, run `just select <project>`.
Alternatively, run `just selections` to run selection for all test projects.
You can add `--compare` to either of these commands to also benchmark the project with every assignment of implementations, and print out the results.
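As a sketch, the selection commands with and without comparison might look like this; placing `--compare` after the recipe arguments is an assumption based on how `just` usually forwards extra arguments to recipes.

```shell
# Estimated cost of each implementation, for one test project:
just select <project>

# Estimates plus benchmarks of every implementation assignment:
just select <project> --compare

# The same flag works when selecting for all test projects at once:
just selections --compare
```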
## Running the full test suite
To run everything we did, from scratch:
```
$ just cost-models # approx 10m
$ just selections --compare 2>&1 | tee ../analysis/current/log # approx 1hr 30m
```