This chapter outlines the design of our container selection system (Candelabra), and justifies our design decisions.

We first describe our aims and priorities for the system, and illustrate its usage with an example.

We then provide an overview of the container selection process and of each part within it.

We leave detailed discussion of implementation for chapter \ref{chap:implementation}.

\section{Aims \& Usage}

As mentioned previously, we aim to create an all-in-one solution for container selection that can select based on both functional and non-functional requirements.
Flexibility is a high priority: it should be easy to add new container implementations, and to integrate our system into existing applications.
Our system should also be able to scale to larger programs, and remain convenient for developers to use.

We chose to implement our system as a Rust CLI, and to work on programs also written in Rust.
We chose Rust both for the expressivity of its type system, and its focus on speed and low-level control.
However, most of the techniques we use are not tied to Rust in particular, and so should be possible to generalise to other languages.

We require the user to provide their own benchmarks, which should be representative of a typical application run; without these, we have no consistent way to evaluate speed.

Users specify their functional requirements by listing the required traits and properties they need for a given container type.
Traits are Rust's primary method of abstraction, and are similar to interfaces in object-oriented languages, or typeclasses in functional languages.
Properties are specified in a lisp-like DSL as a predicate on a model of the container.

For example, Listing \ref{lst:selection_example} shows code from our test case based on the sieve of Eratosthenes (\code{src/tests/prime_sieve} in the source artifacts).
Here we request two container types: \code{Sieve} and \code{Primes}.
The first must implement the \code{Container} and \code{Stack} traits, and must satisfy the \code{lifo} property. This property is defined at the top of the listing as applying only to \code{Stack}s, and requires that for any \code{x}, pushing \code{x} and then popping from the container returns \code{x}.

The second container type, \code{Primes}, need only implement the \code{Container} trait, and must satisfy the \code{ascending} property.
This property requires that for all consecutive \code{x, y} pairs in the container, \code{x <= y}.

\begin{figure}
\begin{lstlisting}[caption=Container type definitions for prime\_sieve,label={lst:selection_example}]
/*SPEC*
property lifo<T> {
    \c <: (Stack) -> (forall \x -> ((equal? (pop ((op-push c) x))) x))
}

property ascending<T> {
    \c -> ((for-all-consecutive-pairs c) leq?)
}


type Sieve<S> = {c impl (Container, Stack) | (lifo c)}
type Primes<S> = {c impl (Container) | (ascending c)}
*ENDSPEC*/
\end{lstlisting}
\end{figure}

Once we've specified our functional requirements and provided a benchmark (\code{src/tests/prime_sieve/benches/main.rs}), we can simply run Candelabra to select a container: \code{candelabra-cli -p prime_sieve select}.
This command outputs something like Table \ref{table:selection_output}, and saves the best combination of container types to be used the next time the program is run.
Here, the generated code uses \code{Vec} as the implementation for \code{Sieve}, and \code{HashSet} as the implementation for \code{Primes}.
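
For illustration, the selection result can be expressed as little more than a pair of type aliases. The sketch below shows the general shape of the generated code; the exact output format is an implementation detail (see Chapter \ref{chap:implementation}).

\begin{lstlisting}
// Sketch only: the selection result expressed as type aliases.
// The real generated code may also need to re-export or wrap
// library types such as primrose_library::SortedVec.
pub type Sieve<S> = std::vec::Vec<S>;
pub type Primes<S> = std::collections::HashSet<S>;
\end{lstlisting}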

\begin{table}[h]
  \centering
  \begin{tabular}{|c|c|c|}
    Name & Implementation & Estimated Cost \\
    \hline
    Sieve & std::vec::Vec & 159040493883 \\
    Sieve & std::collections::LinkedList & 564583506434 \\
    Primes & primrose\_library::SortedVec & 414991320 \\
    Primes & std::collections::BTreeSet & 355962089 \\
    Primes & std::collections::HashSet & 309638677 \\
  \end{tabular}
  \caption{Example output from selection command}
  \label{table:selection_output}
\end{table}

\section{Overview of process}

Our tool integrates with Rust's packaging system (Cargo) to discover the information it needs about our project, then runs Primrose to find a list of implementations satisfying our functional requirements, drawn from a pre-built library of container implementations.

Once we have this list, we then build a 'cost model' for each candidate type. This allows us to estimate an upper bound for the runtime cost of an operation at any given collection size $n$.

We then run the user-provided benchmarks with any of the valid candidates, instrumented to track how many times each operation is performed and the maximum size the container reaches.

We combine this information with our cost models to estimate a total cost for each candidate, which is an upper bound on the total time taken for all container operations.
At this point, we also check whether an 'adaptive' container would be better, by checking whether one implementation performs better at low $n$ and another at high $n$.

Finally, we pick the implementation with the minimum cost, and generate code which sets the container type to use that implementation.

Our solution requires little user intervention, integrates well with existing workflows, and the time it takes scales linearly with the number of container types in a given project.

We now go into more detail on how each step works, although we leave some specifics until Chapter \ref{chap:implementation}.

\section{Functional requirements}

%% Explain role in entire process
As described in Chapter \ref{chap:background}, any implementation we pick must satisfy the program's functional requirements.
To do this, we integrate Primrose \citep{qin_primrose_2023} as a first step.

Primrose allows users to specify both the traits they require in an implementation (essentially the API and methods available), and what properties must be satisfied.

Each container type that we want to select an implementation for is bound by a list of traits and a list of properties (lines 11 and 12 in Listing \ref{lst:selection_example}).

%% Short explanation of selection method
In brief, Primrose works by:
\begin{itemize}
\item finding all implementations in the container library that implement all required traits;
\item translating each specified property into a Rosette expression;
\item modelling, for each implementation, the behaviour of each operation in Rosette, and checking that the required properties always hold.
\end{itemize}

We use the code provided with the Primrose paper, with minor modifications elaborated on in Chapter \ref{chap:implementation}.

At this stage, we have a list of implementations for each container type we are selecting. The command \code{candelabra-cli candidates} will show this output, as in Table \ref{table:candidates_prime_sieve}.

\begin{table}[h]
  \centering
  \begin{tabular}{|c|c|}
    Type & Implementation \\
    \hline
    Primes & primrose\_library::EagerSortedVec \\
    Primes & std::collections::HashSet \\
    Primes & std::collections::BTreeSet \\
    Sieve & std::collections::LinkedList \\
    Sieve & std::vec::Vec \\
  \end{tabular}
  \caption{Usable implementations by container type for \code{prime_sieve}}
  \label{table:candidates_prime_sieve}
\end{table}

%% Abstraction over backend
Although we use Primrose in our implementation, the rest of our system isn't dependent on it, and it would be relatively simple to use a different approach for selecting based on functional requirements.

\section{Cost Models}

Now that we have a list of possible implementations, we need to understand the performance characteristics of each of them.
We use an approach similar to CollectionSwitch \citep{costa_collectionswitch_2018}, which assumes that the main factor in how long an operation takes is the current size of the collection.

%% Benchmarks
An implementation has a separate cost model for each operation, which we obtain by executing the operation repeatedly on collections of various sizes.

For example, to build a cost model for \code{Vec::contains}, we would create several \code{Vec}s of varying sizes, and find the average execution time $t$ of \code{contains} at each.
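
As a sketch of this measurement step, the core loop might look like the following; the sizes, repetition count, and use of \code{black_box} are illustrative rather than the exact code used.

\begin{lstlisting}
use std::time::Instant;

// Sketch: time Vec::contains on collections of increasing size.
// The real benchmarking harness and size schedule may differ.
fn measure_contains() -> Vec<(usize, f64)> {
    let mut results = Vec::new();
    for n in [10usize, 100, 1_000, 10_000, 100_000] {
        let v: Vec<usize> = (0..n).collect();
        let reps = 1_000;
        let start = Instant::now();
        for i in 0..reps {
            // black_box stops the compiler optimising the call away
            std::hint::black_box(v.contains(&(i % n)));
        }
        let avg_ns = start.elapsed().as_nanos() as f64 / reps as f64;
        results.push((n, avg_ns));
    }
    results
}
\end{lstlisting}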

%% Linear Regression
We then perform regression, using the collection size $n$ to predict $t$.
In the case of \code{Vec::contains}, we would expect the fitted function to be roughly linear.

In our implementation, we fit a function of the form $x_0 + x_1 n + x_2 n^2 + x_3 \log_2 n$, using ordinary least-squares fitting.
Whilst we could use a more complex technique, in practice this is good enough: most common operations are polynomial at worst, and more complex models risk overfitting.
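
Evaluating a fitted model is then a single expression. A minimal sketch, assuming the four coefficients are stored in fitting order (the real representation may differ):

\begin{lstlisting}
/// Coefficients for x0 + x1*n + x2*n^2 + x3*log2(n) (sketch type).
struct CostModel {
    coeffs: [f64; 4],
}

impl CostModel {
    /// Estimated cost of one operation at collection size n.
    fn estimate(&self, n: f64) -> f64 {
        let [x0, x1, x2, x3] = self.coeffs;
        // log2 is undefined at 0, so clamp n to at least 1
        x0 + x1 * n + x2 * n * n + x3 * n.max(1.0).log2()
    }
}
\end{lstlisting}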

%% Limitations
This method works well for many operations and structures, although it has notable limitations.

In particular, implementations which defer work from one operation to another produce extremely inconsistent timings.
For example, \code{LazySortedVec} (provided by Primrose) inserts new elements at the end by default, and delays sorting the list until its contents are read (such as by \code{contains}).

We were unable to work around this, and so we have removed these variants from our container library.
A potential solution could be to perform untimed 'warmup' operations before each operation, but this is complex because it requires some understanding of what operations will cause work to be deferred.

At the end of this stage, we are able to reason about the relative cost of operations between implementations.
These models are cached for as long as our container library remains the same, as they are independent of what program the user is currently working on.

\section{Profiling applications}

We now need to collect information about how the user's application uses its container types.

%% Data Collected
As mentioned above, the ordering of operations can have a large effect on container performance.
Unfortunately, tracking every container operation in order quickly becomes infeasible, so we settle for tracking the count of each operation and the maximum size of each collection instance.

Every instance or allocation of the collection is tracked separately, and results are collated after profiling.
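
A minimal sketch of the kind of instrumented wrapper this implies is shown below; the real wrapper covers the full container API, and the names here are assumptions.

\begin{lstlisting}
use std::collections::HashMap;

// Sketch: a profiling wrapper that counts operations and tracks
// the maximum size reached. Only two operations are shown, and
// counting through &mut self is a simplification; interior
// mutability would avoid it.
struct Profiled<T> {
    inner: Vec<T>,
    op_counts: HashMap<&'static str, u64>,
    max_n: usize,
}

impl<T: PartialEq> Profiled<T> {
    fn push(&mut self, x: T) {
        *self.op_counts.entry("push").or_insert(0) += 1;
        self.inner.push(x);
        self.max_n = self.max_n.max(self.inner.len());
    }

    fn contains(&mut self, x: &T) -> bool {
        *self.op_counts.entry("contains").or_insert(0) += 1;
        self.inner.contains(x)
    }
}
\end{lstlisting}
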
%% Segmentation
Results with sufficiently close maximum $n$ values are grouped into partitions, where each partition stores the average count of each operation, and a weight indicating how common results in that partition were.
This serves three purposes.

The first is to compress the data, which speeds up processing and stops us running out of memory in more complex programs.
The second is to capture the fact that the number of operations will likely depend on the size of the container.
The third is to aid in searching for adaptive containers, which we will elaborate on later.
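
In data terms, each partition can be thought of as roughly the following record (a sketch; the field names are assumptions):

\begin{lstlisting}
use std::collections::HashMap;

/// One partition of profiling results (sketch).
struct Partition {
    /// Average maximum container size of results in this partition.
    avg_max_n: f64,
    /// Average count of each operation across these results.
    avg_op_counts: HashMap<String, f64>,
    /// How many container instances fell into this partition.
    weight: f64,
}
\end{lstlisting}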

\section{Selection process}

%% Selection process
Once we have an estimate of how long each operation may take (from our cost models), and how often we use each operation (from our profiling information), we combine these to estimate the total cost of each implementation.
For each implementation, our total cost estimate is:

$$
\sum_{o \in \textrm{ops}} \; \sum_{(r_o, N, W) \in \textrm{partitions}} C_o(N) \cdot r_o \cdot W
$$

\begin{itemize}
\item $C_o(N)$ is the cost estimated by the cost model for operation $o$ at collection size $N$
\item $r_o$ is the average count of a given operation in a partition
\item $N$ is the average maximum N value in a partition
\item $W$ is the weight of a partition, representing how many allocations fell into this partition
\end{itemize}

Essentially, we scale an estimated worst-case cost of each operation by how frequently we think we will encounter it.
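
This sum transcribes directly into code. A sketch, reusing the \code{CostModel} and \code{Partition} types from the earlier sketches:

\begin{lstlisting}
use std::collections::HashMap;

/// Estimated total cost of one implementation (sketch).
/// `models` maps each operation name to its fitted cost model.
fn total_cost(
    models: &HashMap<String, CostModel>,
    partitions: &[Partition],
) -> f64 {
    let mut total = 0.0;
    for p in partitions {
        for (op, r_o) in &p.avg_op_counts {
            let c = models[op].estimate(p.avg_max_n); // C_o(N)
            total += c * r_o * p.weight;              // C_o(N) * r_o * W
        }
    }
    total
}
\end{lstlisting}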

\section{Adaptive containers}

In many cases, the maximum size of a container type varies greatly between program runs.
In these cases, it may be desirable to start off with one container type, and switch to another if the container grows sufficiently large.

For example, if a program requires a set, then for small sizes it may be best to keep a sorted list and use binary search for \code{contains} operations.
But when the size of the container grows, the cost of doing \code{contains} may grow high enough that using a \code{HashSet} would actually be faster.

Adaptive containers attempt to address this need by starting off with one implementation (the 'low' or 'before' implementation), and switching to a new implementation (the 'high' or 'after' implementation) once the size of the container passes a certain threshold.

This is similar to systems such as CoCo \citep{hutchison_coco_2013} and to the work of \"{O}sterlund \citep{osterlund_dynamically_2013}.
However, we decide when to switch container implementation before the program is run, rather than while it is running.
We also do so in a way that requires no knowledge of the implementation internals.

%% Adaptive container detection
We sort our list of partitions by ascending container size and attempt to find a split point.
If we can split the partitions into two contiguous groups such that everything to the left performs best with one implementation, and everything to the right with another, then we should be able to switch implementations around that $n$ value.

In practice, finding the correct threshold is more difficult: we must take into account the cost of transforming from one implementation to another.
If we adapt our container too early, we may spend more work adapting it than we would have saved by simply keeping the low implementation.
If we adapt too late, we have more data to move and less of the program gets to take advantage of the new implementation.

We choose the relatively simple strategy of switching halfway between the two partitions on either side of the split.
Our cost models let us estimate how expensive switching implementations will be, which we compare against how much we save by switching to the after implementation.
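
A sketch of the split search itself, over precomputed per-partition costs (the real search must also account for the estimated conversion cost discussed above):

\begin{lstlisting}
/// Find a split index where the best implementation changes (sketch).
/// costs[i][j] = estimated cost of implementation j on partition i,
/// with partitions sorted by ascending average maximum size.
fn find_split(costs: &[Vec<f64>]) -> Option<usize> {
    // Index of the cheapest implementation for one partition.
    let best = |row: &Vec<f64>| -> usize {
        row.iter()
            .enumerate()
            .min_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(i, _)| i)
            .unwrap()
    };
    for split in 1..costs.len() {
        let (lo, hi) = costs.split_at(split);
        let (low_best, high_best) = (best(&lo[0]), best(&hi[0]));
        // Accept only if every partition left of the split prefers one
        // implementation and every partition right prefers another.
        if low_best != high_best
            && lo.iter().all(|r| best(r) == low_best)
            && hi.iter().all(|r| best(r) == high_best)
        {
            // The threshold lies between partitions split-1 and split.
            return Some(split);
        }
    }
    None
}
\end{lstlisting}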