# Candelabra

This folder contains the source code for Candelabra as a Cargo workspace.
First, set up the dependencies as detailed in the root of the repository.

## Building

Build with Cargo as normal: `cargo build`. This places the executable at `./target/debug/candelabra-cli`.
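
If you want an optimised binary (useful when timings matter), Cargo's standard `--release` flag applies as usual:

```
$ cargo build --release   # optimised binary at ./target/release/candelabra-cli
```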

## Creating cost models

To build and view cost models, first find an implementation to look at:

```
$ cargo run -- list-library
[...]
Available container implementations:
  primrose_library::VecMap
  primrose_library::VecSet
  std::vec::Vec
  std::collections::BTreeSet
  std::collections::BTreeMap
  primrose_library::SortedVecSet
  std::collections::LinkedList
  primrose_library::SortedVecMap
  primrose_library::SortedVec
  std::collections::HashMap
  std::collections::HashSet
```

To view the cost model for a single implementation, run `cargo run -- cost-model <impl>`.
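
For example, using one of the implementations listed above:

```
$ cargo run -- cost-model std::vec::Vec
```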

Alternatively, run `just cost-model <impl>` to look at a single implementation, or `just cost-models` to view models for all implementations.
The latter will clear the cache before running.

Cost models are also saved to `target/candelabra/benchmark_results` as JSON files. To analyse your built cost models, copy them to `../analysis/current/candelabra/benchmark_results` and see the README in `../analysis/`.
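
A minimal sketch of that copy step, assuming the results are plain `.json` files directly inside that directory:

```
$ cp target/candelabra/benchmark_results/*.json ../analysis/current/candelabra/benchmark_results/
```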

## Profiling applications

To profile an application in the `tests/` directory and display the results, run `cargo run -- --manifest-path tests/Cargo.toml -p <project> profile`.
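
For example, with a hypothetical project named `example_stack` (substitute one of the actual projects under `tests/`):

```
$ cargo run -- --manifest-path tests/Cargo.toml -p example_stack profile
```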

Alternatively, run `just profile <project>`.

Profiling info is also saved to `target/candelabra/profiler_info/` as JSON.

## Selecting containers

To print the estimated cost of using each implementation in a project, run `cargo run -- --manifest-path tests/Cargo.toml -p <project> select`.

Alternatively, run `just select <project>` for a single project, or `just selections` to run selection for all test projects.
You can add `--compare` to any of these commands to also benchmark the project with every assignment of implementations, and print out the results.
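
For example, a sketch of a selection run with comparison for the same hypothetical `example_stack` project (placing `--compare` after the subcommand is an assumption; adjust if the CLI expects it elsewhere):

```
$ cargo run -- --manifest-path tests/Cargo.toml -p example_stack select --compare
```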