\todo{Introduction}

\section{Modifications to Primrose}

%% API
In order to facilitate integration with Primrose, we refactored large parts of its code to support being called as an API, rather than only through the command line.
This also required updating the older code to a newer edition of Rust and improving error handling throughout.

%% Mapping trait
As suggested in the original paper, we added the ability to ask for associative container types: ones that map a key to a value.
We did this by adding a new \code{Mapping} trait to the library and updating the type-checking and analysis code both to support multiple type variables in container type declarations and to be aware of the operations available on mappings.
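A minimal sketch of what such a trait might look like follows; the method names here are illustrative rather than Primrose's exact API:

\begin{verbatim}
// Sketch of an associative container trait with two type variables.
// Names are illustrative, not Primrose's exact API.
pub trait Mapping<K, V> {
    /// Insert a key-value pair, returning the old value if present.
    fn insert(&mut self, key: K, value: V) -> Option<V>;
    /// Look up the value associated with a key.
    fn get(&self, key: &K) -> Option<&V>;
    /// Remove a key, returning its value if present.
    fn remove(&mut self, key: &K) -> Option<V>;
    /// Number of stored pairs.
    fn len(&self) -> usize;
}
\end{verbatim}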

Operations on mapping implementations can be modelled and checked against constraints in the same way as those on regular containers.
Mappings are modelled in Rosette as a list of key-value pairs.
\code{src/crates/library/src/hashmap.rs} shows how mapping container types can be declared and how operations on them are modelled.

\todo{add and list library types}

We also added new syntax to the language for defining properties that only make sense for mappings (\code{dictProperty}); however, this ended up unused.

%% Resiliency, etc
While performing integration testing, we found and fixed several other issues with the existing code:

\begin{enumerate}
\item Only push and pop operations could be modelled in properties without raising an error during type-checking.
\item The Rosette code generated for properties using other operations would be incorrect.
\item Some trait methods used mutable borrows unnecessarily, making it difficult or impossible to write safe Rust using them (illustrated in the sketch after this list).
\item The generated code would perform an unnecessary heap allocation for every created container, which could affect performance.
\end{enumerate}
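
To illustrate the third issue, a hypothetical example: a read-only query such as \code{contains} should take a shared borrow, as callers holding only a \code{&} reference otherwise cannot use it.

\begin{verbatim}
// Hypothetical illustration: a read-only query taking &mut self
// cannot be called through a shared reference.
trait Before<T> {
    fn contains(&mut self, x: &T) -> bool; // unnecessary mutable borrow
}

trait After<T> {
    fn contains(&self, x: &T) -> bool; // shared borrow suffices
}

fn count_hits<T, C: After<T>>(c: &C, xs: &[T]) -> usize {
    // This only type-checks with the `After` signature.
    xs.iter().filter(|&x| c.contains(x)).count()
}
\end{verbatim}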

We also added a requirement for all \code{Container}s and \code{Mapping}s to implement \code{IntoIterator} and \code{FromIterator}, so that every implementation allows iterating over its elements and being built from an iterator.
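A sketch of how this requirement can be expressed, assuming a simplified version of the \code{Container} trait:

\begin{verbatim}
// Simplified sketch: iteration support as supertrait bounds.
pub trait Container<T>: IntoIterator<Item = T> + FromIterator<T> {
    fn len(&self) -> usize;
    fn contains(&self, x: &T) -> bool;
    fn insert(&mut self, x: T);
}

// Vec<T> already implements IntoIterator and FromIterator, so only
// the container operations themselves need providing.
impl<T: PartialEq> Container<T> for Vec<T> {
    fn len(&self) -> usize { self.as_slice().len() }
    fn contains(&self, x: &T) -> bool { self.as_slice().contains(x) }
    fn insert(&mut self, x: T) { self.push(x) }
}
\end{verbatim}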

\section{Building cost models}

%% Benchmarker crate
In order to benchmark container types, we use a separate crate (\code{src/crates/candelabra-benchmarker}) containing benchmarking code for each trait in the Primrose library.
When benchmarks need to be run for an implementation, we dynamically generate a new crate which runs all benchmark methods appropriate for that implementation (\code{src/crates/candelabra/src/cost/benchmark.rs}).

As Rust's generics are monomorphised, our generic benchmarking code is compiled as if it were written for each concrete container type, so the use of generics does not distort the benchmark results.

Each benchmark is first run in a `warmup' loop for a fixed amount of time (currently 500ms), then for a fixed number of measured iterations (currently 50).
This is important because we use every observation when fitting our cost models, so varying the number of iterations would change the curve's fit.
We repeat each benchmark at $n$ values ranging from $64$ to $65{,}536$.

Each benchmark we run corresponds to one container operation.
For most operations, we prepare a container of size $n$ and run the operation once per iteration.
For certain operations which are commonly amortised (\code{insert}, \code{push}, and \code{pop}), we instead run the operation itself $n$ times and divide each data point by $n$.
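A sketch of the amortised pattern follows; the function is illustrative, not our benchmarker's actual code:

\begin{verbatim}
use std::time::Instant;

// Illustrative sketch: time n pushes onto a Vec and report the
// mean cost per operation. Not the benchmarker's actual code.
fn bench_push_amortised(n: usize) -> f64 {
    let mut v: Vec<usize> = Vec::new();
    let start = Instant::now();
    for i in 0..n {
        v.push(i);
    }
    // Discourage the optimiser from eliding the pushes.
    std::hint::black_box(&v);
    start.elapsed().as_nanos() as f64 / n as f64
}
\end{verbatim}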

Our benchmarker crate outputs every observation in a format similar to that of Criterion (a popular benchmarking crate for Rust).
Our main program then parses this output and uses least squares to fit a polynomial to the data.
We initially tried other approaches to fitting a curve to the data; however, they all overfitted, making them more sensitive to benchmarking noise.
As operations on most common data structures have polynomial or logarithmic complexity, we believe that least squares fitting is good enough to capture the cost of most operations.
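Concretely, given observations $(n_i, t_i)$ of operation time against container size, and some choice of basis functions $\phi_j$ (for example $1$, $n$, and $\log n$; purely illustrative here), least squares selects the coefficients $\beta_j$ minimising the squared error:
\[
\hat{\beta} = \arg\min_{\beta} \sum_{i} \Big( t_i - \sum_{j} \beta_j \, \phi_j(n_i) \Big)^2
\]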

\todo{variable coefficients, which ones we tried}

\section{Profiling}

We implement profiling using a \code{ProfilerWrapper} type (\code{src/crates/library/src/profiler.rs}), which takes the `inner' container implementation as a type parameter.
We then implement every Primrose trait that the inner container implements, counting the number of times each operation is called.
We also check the length of the container after every insertion operation and track the maximum.

This tracking is done per instance, and recorded when the instance goes out of scope and its \code{Drop} implementation is called.
We write the counts of each operation and the maximum size of the collection to a location specified by an environment variable; a constant generic parameter on the wrapper allows us to match up container types to their profiler outputs.
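
A self-contained sketch of this pattern follows, with a simplified trait and hypothetical field names rather than the real \code{profiler.rs}:

\begin{verbatim}
// Simplified sketch of the profiling wrapper pattern; the real
// implementation lives in src/crates/library/src/profiler.rs.
pub trait Container<T> {
    fn insert(&mut self, x: T);
    fn len(&self) -> usize;
}

pub struct ProfilerWrapper<const SITE: usize, C> {
    inner: C,
    insert_count: usize,
    max_len: usize,
}

impl<const SITE: usize, T, C: Container<T>> Container<T>
    for ProfilerWrapper<SITE, C>
{
    fn insert(&mut self, x: T) {
        self.insert_count += 1;                          // count the call
        self.inner.insert(x);                            // forward it
        self.max_len = self.max_len.max(self.inner.len()); // track max size
    }
    fn len(&self) -> usize {
        self.inner.len()
    }
}

impl<const SITE: usize, C> Drop for ProfilerWrapper<SITE, C> {
    fn drop(&mut self) {
        // The real implementation writes to a file named by an
        // environment variable; printing stands in for that here.
        eprintln!("site {}: {} inserts, max len {}",
                  SITE, self.insert_count, self.max_len);
    }
}
\end{verbatim}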

When we want to profile a program, we pick any valid inner implementation for each selection site and use that candidate, wrapped in our profiler, as the concrete implementation for that site.
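Continuing the sketch above, a hypothetical selection site might then be instantiated during a profiling run as:

\begin{verbatim}
// Hypothetical: selection site 3, with Vec as the arbitrary
// valid candidate, wrapped by the profiler.
type Site3<T> = ProfilerWrapper<3, Vec<T>>;
\end{verbatim}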

This approach has the advantage of giving us information on each individual collection allocated, rather than only statistics for the type as a whole.
For example, if one instance of a container type is used very differently from the rest, this will be visible in a way it would not be with a normal profiling tool.
Although the wrapper adds some overhead, this does not matter, as we do not measure the program's execution time while profiling.

\section{Selection and Codegen}

%% Selection Algorithm incl Adaptiv

%% Generated code (opaque types)

%% Implementation w/ const generics

\section{Miscellaneous Concerns}

\todo{Justify Rust as language}

\todo{Explain cargo's role in rust projects \& how it is integrated}

%% get project metadata from cargo
%% available benchmarks and source directories
%% works with most projects
%% could be expanded to run as cargo command

\todo{Caching and stuff}

\todo{Ease of use}

%% parse minimal amount of information from criterion benchmark
%% most common benchmarking tool, closest there is to a standard
%% should be easy to adapt if/when cargo ships proper benchmarking support