 Tasks.org                       | 123
 thesis/parts/design.tex         |  67
 thesis/parts/implementation.tex |  85
 3 files changed, 214 insertions(+), 61 deletions(-)
diff --git a/Tasks.org b/Tasks.org
index af64ccf..213b779 100644
--- a/Tasks.org
+++ b/Tasks.org
@@ -155,3 +155,126 @@ Ideas:
* Writing
+** TODO Abstract
+
+** TODO Introduction
+
+*** TODO Introduce problem
+
+**** TODO Container types common in programs
+
+**** TODO Functionally identical implementations
+
+**** TODO Large difference in performance
+
+*** TODO Motivate w/ effectiveness claims
+
+*** TODO Overview of aims & approach
+
+**** TODO Scalability to larger projects
+
+**** TODO Ease of integration into existing projects
+
+**** TODO Ease of adding new container types
+
+**** TODO Flexibility of selection
+
+*** TODO Overview of results
+
+** TODO Background
+
+*** TODO Introduce problem
+
+*** TODO Functional vs non-functional requirements
+
+*** TODO Existing approaches & their shortfalls
+
+*** TODO Lead to next chapter
+
+** TODO Design
+
+*** TODO Usage Example
+
+*** TODO Primrose Integration
+
+**** TODO Explain role in entire process
+
+**** TODO Short explanation of selection method
+
+**** TODO Abstraction over backend
+
+*** TODO Building cost models
+
+**** TODO Benchmarks
+
+**** TODO Linear Regression
+
+**** TODO Limitations
+
+*** TODO Profiling applications
+
+**** TODO Data collected
+
+**** TODO Segmentation
+
+**** TODO Limitations w/ pre-benchmark steps
+
+*** TODO Selection process & adaptive containers
+
+**** TODO Selection process
+
+**** TODO Adaptive container detection
+
+**** TODO Code generation
+
+** TODO Implementation
+
+*** TODO Modifications to Primrose
+
+**** TODO API
+
+**** TODO Mapping trait
+
+**** TODO Resiliency, etc
+
+*** TODO Integration w/ Cargo
+
+**** TODO Metadata fetching
+
+**** TODO Caching of build dependencies
+
+*** TODO Running Benchmarks
+
+**** TODO Benchmarker crate
+
+**** TODO Code generation
+
+*** TODO Profiling wrapper
+
+**** TODO Use of Drop
+
+**** TODO Generics and stuff
+
+*** TODO Selection / Codegen
+
+**** TODO Generated code (opaque types)
+
+**** TODO Selection Algorithm incl Adaptive
+
+**** TODO Implementation w/ const generics
+
+*** TODO Misc Concerns
+
+**** TODO Justify Rust as language
+
+**** TODO Explain cargo's role in rust projects & how it is integrated
+
+**** TODO Caching and stuff
+
+**** TODO Ease of use
+
+** TODO Results
+
+** TODO Analysis
+
+** TODO Conclusion
diff --git a/thesis/parts/design.tex b/thesis/parts/design.tex
index 03e281d..6c3afab 100644
--- a/thesis/parts/design.tex
+++ b/thesis/parts/design.tex
@@ -14,16 +14,16 @@ We chose to implement our system as a Rust CLI, and limit it to selecting contai
We require the user to provide their own benchmarks, which should be representative of a typical application run.
The user can specify their functional requirements by listing the required traits, and specifying properties that must always hold in a lisp-like language.
-This part of the process is handled by Primrose\parencite{qin_primrose_2023}, with only minor modifications to integrate with the rest of our system.
-For example, take the below code from our test case based on the sieve of Eratosthenes (\code{src/tests/prime\_sieve} in the source artifacts).
+For example, Listing \ref{lst:selection_example} shows code from our test case based on the sieve of Eratosthenes (\code{src/tests/prime\_sieve} in the source artifacts).
Here we request two container types: \code{Sieve} and \code{Primes}.
The first must implement the \code{Container} and \code{Stack} traits, and must satisfy the \code{lifo} property. This property is defined at the top as only being applicable to \code{Stack}s, and requires that for any \code{x}, pushing \code{x} then popping from the container returns \code{x}.
The second container type, \code{Primes}, must only implement the \code{Container} trait, and must satisfy the \code{ascending} property.
This property requires that at any point, for all consecutive \code{x, y} pairs in the container, \code{x $\leq$ y}.
-\begin{lstlisting}
+\begin{figure}
+\begin{lstlisting}[caption=Container type definitions for prime\_sieve,label={lst:selection_example}]
/*SPEC*
property lifo<T> {
\c <: (Stack) -> (forall \x -> ((equal? (pop ((push c) x))) x))
@@ -38,6 +38,7 @@ type Sieve<S> = {c impl (Container, Stack) | (lifo c)}
type Primes<S> = {c impl (Container) | (ascending c)}
*ENDSPEC*/
\end{lstlisting}
+\end{figure}
Once we've specified our functional requirements and provided a benchmark (\code{src/tests/prime\_sieve/benches/main.rs}), we can simply run candelabra to select a container:
@@ -52,11 +53,15 @@ Finally, it picks the container with the minimum cost, and creates a new Rust fi
Our tool requires little user intervention, integrates well with existing workflows, and the time it takes scales linearly with the number of container types in a given project.
-\section{Selection Process}
+\section{Selecting for functional requirements}
-We now describe the design of our selection process in detail, and justify our choices.
+%% Explain role in entire process
+Before we can select the fastest container, we first need to find a list of candidates which satisfy the program's functional requirements.
+Primrose allows users to specify both the traits they require in an implementation (essentially the API and methods available), and what properties must be satisfied.
-As mentioned above, the first stage of our process is to satisfy functional requirements, which we do using code from Primrose\parencite{qin_primrose_2023}.
+Each container type that we want to select an implementation for is bound by a list of traits and a list of properties (lines 11 and 12 in Listing \ref{lst:selection_example}).
+
+%% Short explanation of selection method
The exact internals are beyond the scope of this paper, but in brief this works by:
\begin{itemize}
\item Finding all implementations in the container library that implement all required traits
@@ -64,67 +69,65 @@ The exact internals are beyond the scope of this paper, but in brief this works
\item Modelling, for each such implementation, the behaviour of each operation in Rosette, and checking that the required properties always hold
\end{itemize}
-We use the code provided with the Primrose paper, with minor modifications elaborated on in \ref{chap:implementation}.
-
-Once a list of functionally close enough implementations have been found, selection is done by:
+We use the code provided with the Primrose paper, with minor modifications elaborated on in Chapter \ref{chap:implementation}.
-\begin{itemize}
-\item For each operation of each implementation, build a cost model which can estimate the 'cost' of that operation at any given container size $n$
-\item Profile the program, tracking operation frequency and container sizes
-\item Combining the two to create an estimate of the relative cost of each implementation
-\end{itemize}
+%% Abstraction over backend
+\todo{Abstraction over backend}
-\subsection{Cost Models}
+\section{Cost Models}
We use an approach similar to CollectionSwitch\parencite{costa_collectionswitch_2018}, which assumes that the main factor in how long an operation takes is the current size of the collection.
+%% Benchmarks
+Each operation has a separate cost model, which we build by executing the operation repeatedly on collections of various sizes.
For example, to build a cost model for \code{Vec::contains}, we would create several \code{Vec}s of varying sizes, and find the average execution time $t$ of \code{contains} at each.
+%% Linear Regression
We then perform linear regression, using the collection size $n$ to predict $t$.
In the case of \code{Vec::contains}, we would expect the resulting polynomial to be roughly linear.
+Once we have this data, we fit a polynomial to it.
+Whilst we could use a more complex technique, in practice this is good enough: very few common operations are above $O(n^3)$, and factors such as logarithms are usually 'close enough'.
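+
+To make this concrete, the sketch below shows one way such a cost model could be built: time an operation at several collection sizes, then fit a polynomial to the measurements by solving the least-squares normal equations. It is illustrative only; the sizes, repetition counts, and names are placeholders rather than the values our tool actually uses.
+
+\begin{lstlisting}
+use std::time::Instant;
+
+// Cost model: estimated time (in ns) of one operation, as a polynomial in
+// the collection size n. coeffs[i] is the coefficient of n^i.
+struct CostModel {
+    coeffs: Vec<f64>,
+}
+
+impl CostModel {
+    fn estimate(&self, n: f64) -> f64 {
+        // Evaluate the polynomial using Horner's method.
+        self.coeffs.iter().rev().fold(0.0, |acc, c| acc * n + c)
+    }
+}
+
+// Average time of Vec::contains (worst case: element absent) at each size.
+// Building the Vec is setup and stays outside the timed section.
+fn measure_vec_contains(sizes: &[usize], reps: u32) -> Vec<(f64, f64)> {
+    sizes.iter().map(|&n| {
+        let v: Vec<usize> = (0..n).collect();
+        let start = Instant::now();
+        for _ in 0..reps {
+            std::hint::black_box(v.contains(std::hint::black_box(&n)));
+        }
+        let avg_ns = start.elapsed().as_nanos() as f64 / reps as f64;
+        (n as f64, avg_ns)
+    }).collect()
+}
+
+// Least-squares polynomial fit via the normal equations, solved with
+// Gauss-Jordan elimination (no pivoting; adequate for a sketch).
+fn fit_polynomial(points: &[(f64, f64)], degree: usize) -> CostModel {
+    let m = degree + 1;
+    let mut a = vec![vec![0.0; m + 1]; m]; // augmented matrix [X^T X | X^T y]
+    for &(x, y) in points {
+        let pow: Vec<f64> = (0..m).map(|i| x.powi(i as i32)).collect();
+        for r in 0..m {
+            for c in 0..m {
+                a[r][c] += pow[r] * pow[c];
+            }
+            a[r][m] += pow[r] * y;
+        }
+    }
+    for i in 0..m {
+        let pivot = a[i][i];
+        for c in i..=m {
+            a[i][c] /= pivot;
+        }
+        for r in (0..m).filter(|&r| r != i) {
+            let factor = a[r][i];
+            for c in i..=m {
+                let delta = factor * a[i][c];
+                a[r][c] -= delta;
+            }
+        }
+    }
+    CostModel { coeffs: (0..m).map(|i| a[i][m]).collect() }
+}
+
+fn main() {
+    let sizes = [10, 100, 1_000, 10_000, 50_000];
+    let data = measure_vec_contains(&sizes, 100);
+    let model = fit_polynomial(&data, 3);
+    println!("estimated cost at n = 5000: {:.0} ns", model.estimate(5000.0));
+}
+\end{lstlisting}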
+
+%% Limitations
This method works well for many operations and structures, although it has notable limitations.
For example, the container implementation \code{LazySortedVec} (provided by Primrose) inserts new elements at the end by default, and only sorts them when an operation that relies on the order is called. This means that during benchmarking, the sorting work deferred from setup is paid by the first timed call to an order-dependent operation, skewing its measured cost.
-were unable to work around this, although a potential later solution could be to perform untimed 'warmup' operations before each operation.
+We were unable to work around this, although a potential solution could be to perform untimed 'warmup' operations before each operation.
This is complex, however, as it requires some understanding of which operations will have deferred work waiting for them.
-Once we have the data, we fit a polynomial to the data.
-Whilst we could use a more complex technique, in practice this is good enough: Very few common operations are above $O(n^3)$, and factors such as logarithms are usually 'close enough'.
-
-We cache this data for as long as the implementation is unchanged.
+\todo{Find a place for this}
Whilst it would be possible to share this data across computers, micro-architecture can have a large effect on collection performance\parencite{jung_brainy_2011}, so we calculate it on demand.
-\subsection{Profiling}
+\section{Profiling applications}
+%% Data Collected
As mentioned above, the ordering of operations can have a large effect on container performance.
Unfortunately, tracking every container operation in order quickly becomes infeasible, so we settle for tracking the count of each operation and the size of the collection.
Every instance of the collection is tracked separately, and results are collated after profiling.
+
+%% Segmentation
+
Results with close enough $n$ values are sorted into partitions, where each partition holds the average count of each operation and a weight indicating how common results in that partition were.
This is done both to compress the data and to allow us to search for adaptive containers later.
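+
+As an illustration, the sketch below shows the rough shape of the data this step produces, with one possible partitioning rule (grouping instances by the order of magnitude of $n$). The type names, fields, and bucketing rule are hypothetical, not the ones our implementation uses.
+
+\begin{lstlisting}
+use std::collections::HashMap;
+
+// Profiling output for a single collection instance: its (average) size and
+// how many times each operation was called on it.
+#[derive(Debug)]
+struct UsageInfo {
+    avg_n: f64,
+    op_counts: HashMap<String, u64>,
+}
+
+// A partition of instances with similar n: the average count of each
+// operation, plus a weight saying how many instances fell into this group.
+#[derive(Debug)]
+struct Partition {
+    avg_n: f64,
+    avg_op_counts: HashMap<String, f64>,
+    weight: usize,
+}
+
+// Group results whose n values share an order of magnitude, then average
+// the operation counts within each group.
+fn partition(results: &[UsageInfo]) -> Vec<Partition> {
+    let mut buckets: HashMap<u32, Vec<&UsageInfo>> = HashMap::new();
+    for r in results {
+        let bucket = r.avg_n.max(1.0).log10().round() as u32;
+        buckets.entry(bucket).or_default().push(r);
+    }
+    buckets.into_values().map(|group| {
+        let weight = group.len();
+        let avg_n = group.iter().map(|r| r.avg_n).sum::<f64>() / weight as f64;
+        let mut avg_op_counts: HashMap<String, f64> = HashMap::new();
+        for r in &group {
+            for (op, &count) in &r.op_counts {
+                *avg_op_counts.entry(op.clone()).or_insert(0.0)
+                    += count as f64 / weight as f64;
+            }
+        }
+        Partition { avg_n, avg_op_counts, weight }
+    }).collect()
+}
+
+fn main() {
+    let results = vec![
+        UsageInfo { avg_n: 12.0, op_counts: HashMap::from([("insert".into(), 12), ("contains".into(), 40)]) },
+        UsageInfo { avg_n: 15.0, op_counts: HashMap::from([("insert".into(), 15), ("contains".into(), 55)]) },
+        UsageInfo { avg_n: 9_800.0, op_counts: HashMap::from([("insert".into(), 9_800), ("contains".into(), 3)]) },
+    ];
+    for p in partition(&results) {
+        println!("{p:?}");
+    }
+}
+\end{lstlisting}
+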
-\todo{deal with not taking into account 'preparatory' operations during benchmarks}
+%% Limitations w/ pre-benchmark steps
-\todo{Combining}
+\todo{not taking into account 'preparatory' operations during benchmarks}
-\subsection{Adaptive Containers}
+\section{Selection process \& adaptive containers}
+%% Selection process
+
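+One way to combine the cost models with the profiling results is to estimate, for each candidate implementation, the cost of each operation at a partition's average $n$, scaled by how often that operation was called and by the partition's weight, and then pick the candidate with the smallest total. The sketch below outlines this idea, reusing the hypothetical \code{CostModel} and \code{Partition} types from the earlier listings; it is not our actual selection code.
+
+\begin{lstlisting}
+use std::collections::HashMap;
+
+// Estimated cost of one candidate implementation over a whole profiled run,
+// using the illustrative CostModel and Partition types sketched earlier.
+fn estimate_cost(
+    op_models: &HashMap<String, CostModel>, // one cost model per operation
+    partitions: &[Partition],               // profiling summary
+) -> f64 {
+    partitions.iter().map(|p| {
+        let per_instance: f64 = p.avg_op_counts.iter().map(|(op, count)| {
+            match op_models.get(op) {
+                Some(model) => model.estimate(p.avg_n) * count,
+                None => 0.0, // operation never benchmarked: ignore it here
+            }
+        }).sum();
+        // Weight by how many profiled instances fell into this partition.
+        per_instance * p.weight as f64
+    }).sum()
+}
+
+// Pick the candidate implementation with the lowest estimated cost.
+fn select<'a>(
+    candidates: &'a HashMap<String, HashMap<String, CostModel>>,
+    partitions: &[Partition],
+) -> Option<&'a str> {
+    candidates.iter()
+        .min_by(|a, b| estimate_cost(a.1, partitions)
+            .total_cmp(&estimate_cost(b.1, partitions)))
+        .map(|(name, _)| name.as_str())
+}
+\end{lstlisting}
+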
+%% Adaptive container detection
Adaptive containers are implemented using const generics and a wrapper type.
They are detected by finding the best implementation for each partition, sorting the partitions by $n$, and checking whether we can split them into two halves such that a different implementation is best on each side.
We then check whether the cost saving is greater than the cost of a clear operation plus $n$ insert operations.
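+
+The sketch below shows a wrapper in this spirit: a const generic parameter carries the switch point chosen during selection, and crossing it triggers the one-off migration (a clear plus $n$ inserts) into the second backend. The concrete backends and the threshold here are placeholders rather than generated output.
+
+\begin{lstlisting}
+use std::collections::HashSet;
+use std::hash::Hash;
+
+// Illustrative adaptive container: starts as a Vec (cheap at small sizes)
+// and migrates its contents to a HashSet once it grows past THRESHOLD.
+enum Adaptive<T: Eq + Hash, const THRESHOLD: usize> {
+    Small(Vec<T>),
+    Large(HashSet<T>),
+}
+
+impl<T: Eq + Hash, const THRESHOLD: usize> Adaptive<T, THRESHOLD> {
+    fn new() -> Self {
+        Adaptive::Small(Vec::new())
+    }
+
+    fn insert(&mut self, item: T) {
+        match self {
+            Adaptive::Small(v) => {
+                v.push(item);
+                if v.len() > THRESHOLD {
+                    // One-off migration cost: clear the old backend and
+                    // re-insert every element into the new one.
+                    let migrated: HashSet<T> = std::mem::take(v).into_iter().collect();
+                    *self = Adaptive::Large(migrated);
+                }
+            }
+            Adaptive::Large(s) => {
+                s.insert(item);
+            }
+        }
+    }
+
+    fn contains(&self, item: &T) -> bool {
+        match self {
+            Adaptive::Small(v) => v.contains(item),
+            Adaptive::Large(s) => s.contains(item),
+        }
+    }
+}
+
+fn main() {
+    let mut c: Adaptive<u32, 1024> = Adaptive::new();
+    for i in 0..2_000 {
+        c.insert(i);
+    }
+    // The backend switched transparently once the size passed 1024.
+    assert!(c.contains(&1_999));
+}
+\end{lstlisting}
+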
-\subsection{Associative Collections for Primrose}
-
-We add a new mapping trait to primrose to express KV maps
-
-\todo{add and list library types}
-
-the constraint solver has been updated to allow properties on dicts (dictproperty), but this was unused.
+%% Code generation
-\todo{Summary}
diff --git a/thesis/parts/implementation.tex b/thesis/parts/implementation.tex
index f5d8d1f..5009836 100644
--- a/thesis/parts/implementation.tex
+++ b/thesis/parts/implementation.tex
@@ -1,42 +1,69 @@
\todo{Introduction}
-\todo{Primrose adaptations}
+\section{Modifications to Primrose}
-\section{Tooling integration}
+%% API
-get project metadata from cargo
-available benchmarks and source directories
-works with most projects
-could be expanded to run as cargo command
+%% Mapping trait
-parse minimal amount of information from criterion benchmark
-most common benchmarking tool, closest there is to a standard
-should be easy to adapt if/when cargo ships proper benchmarking support
+%% We add a new mapping trait to primrose to express KV maps
-benchmarking of container implementations also outputs this format, but without using criterion harness
-allows more flexibility, ie pre-processing results
+%% \todo{add and list library types}
-each trait has its own set of benchmarks, which run different workloads
-benchmarker trait doesn't have Ns
-example benchmarks for hashmap and vec
+%% the constraint solver has been updated to allow properties on dicts (dictproperty), but this was unused.
-\code{candelabra::cost::benchmark} generates code which just calls \code{candelabra\_benchmarker} methods
-Ns are set there, and vary from [...]
+%% Resiliency, etc
-fitting done with least squares in \code{candelabra::cost::fit}
-list other methods tried
-simple, which helps 'smooth out' noisy benchmark results
+\section{Cost models}
-profiler type in \code{primrose\_library::profiler}
-wraps an 'inner' implementation and implements whatever operations it does, keeping track of number of calls
-on drop, creates new file in folder specified by env variable
+%% Benchmarker crate
-primrose results generated in \code{primrose::codegen}, which is called in \code{candelabra::profiler}
-picks the first valid candidate - performance doesn't really matter for this case
-each drop generates a file, so we get details of every individual collection allocated
+%% each trait has its own set of benchmarks, which run different workloads
+%% benchmarker trait doesn't have Ns
+%% example benchmarks for hashmap and vec
-estimate a cost for each candidate: op(avg\_n) * op\_times for each op
-pick the smallest one
+%% Code generation
-\todo{Caching}
-\todo{Other implementation details?}
+%% \code{candelabra::cost::benchmark} generates code which just calls \code{candelabra\_benchmarker} methods
+%% Ns are set there, and vary from [...]
+
+%% fitting done with least squares in \code{candelabra::cost::fit}
+%% list other methods tried
+%% simple, which helps 'smooth out' noisy benchmark results
+
+\section{Profiling}
+
+%% profiler type in \code{primrose\_library::profiler}
+%% wraps an 'inner' implementation and implements whatever operations it does, keeping track of number of calls
+%% on drop, creates new file in folder specified by env variable
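+
+As a sketch of the approach described in the notes above (the type name, tracked operations, and environment variable are illustrative, not the actual \code{primrose\_library::profiler} type), the wrapper forwards each call to an inner container while counting it, then writes the counts out when it is dropped:
+
+\begin{lstlisting}
+use std::env;
+use std::fs::File;
+use std::io::Write;
+use std::time::{SystemTime, UNIX_EPOCH};
+
+// Wraps an inner container, counting operations and the largest size seen.
+struct ProfilerWrapper<C> {
+    inner: C,
+    max_n: usize,
+    insert_count: u64,
+    contains_count: u64,
+}
+
+impl<T: PartialEq> ProfilerWrapper<Vec<T>> {
+    fn new() -> Self {
+        ProfilerWrapper { inner: Vec::new(), max_n: 0, insert_count: 0, contains_count: 0 }
+    }
+
+    fn insert(&mut self, item: T) {
+        self.insert_count += 1;
+        self.inner.push(item);
+        self.max_n = self.max_n.max(self.inner.len());
+    }
+
+    fn contains(&mut self, item: &T) -> bool {
+        self.contains_count += 1;
+        self.inner.contains(item)
+    }
+}
+
+impl<C> Drop for ProfilerWrapper<C> {
+    fn drop(&mut self) {
+        // One file per collection instance, written to the directory named
+        // by an environment variable (the variable name here is made up).
+        let Ok(dir) = env::var("PROFILE_OUT_DIR") else { return };
+        let stamp = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .expect("clock before epoch")
+            .as_nanos();
+        if let Ok(mut file) = File::create(format!("{dir}/{stamp}.profile")) {
+            let _ = writeln!(file, "max_n={} insert={} contains={}",
+                self.max_n, self.insert_count, self.contains_count);
+        }
+    }
+}
+
+fn main() {
+    let mut v: ProfilerWrapper<Vec<u32>> = ProfilerWrapper::new();
+    for i in 0..100 {
+        v.insert(i);
+    }
+    assert!(v.contains(&42));
+    // The profile file (if PROFILE_OUT_DIR is set) is written when v drops.
+}
+\end{lstlisting}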
+
+%% primrose results generated in \code{primrose::codegen}, which is called in \code{candelabra::profiler}
+%% picks the first valid candidate - performance doesn't really matter for this case
+%% each drop generates a file, so we get details of every individual collection allocated
+
+\section{Selection and Codegen}
+
+%% Generated code (opaque types)
+
+%% Selection Algorithm incl Adaptive
+
+%% Implementation w/ const generics
+
+\section{Misc Concerns}
+
+\todo{Justify Rust as language}
+
+\todo{Explain cargo's role in rust projects \& how it is integrated}
+
+%% get project metadata from cargo
+%% available benchmarks and source directories
+%% works with most projects
+%% could be expanded to run as cargo command
+
+\todo{Caching and stuff}
+
+\todo{Ease of use}
+
+%% parse minimal amount of information from criterion benchmark
+%% most common benchmarking tool, closest there is to a standard
+%% should be easy to adapt if/when cargo ships proper benchmarking support