author    Aria Shrimpton <me@aria.rip>    2024-03-27 17:10:21 +0000
committer Aria Shrimpton <me@aria.rip>    2024-03-27 17:10:21 +0000
commit    2f75ce401867feaddce578e09be542407c327f48 (patch)
tree      9865e39aa815787a17fcaccd155bbe25d5e93f70 /thesis/parts/results.tex
parent    98340e2ddf76a50b7341444a161362b1d01beb22 (diff)
analysis & write up cost models
Diffstat (limited to 'thesis/parts/results.tex')
-rw-r--r--  thesis/parts/results.tex  |  79
1 file changed, 56 insertions(+), 23 deletions(-)
diff --git a/thesis/parts/results.tex b/thesis/parts/results.tex
index 2f06d06..9d0b5b4 100644
--- a/thesis/parts/results.tex
+++ b/thesis/parts/results.tex
@@ -1,55 +1,88 @@
+In this chapter, we present our results from benchmarking our system.
+We examine the cost models produced for certain operations in detail, comparing them to the expected asymptotics of each operation.
+We then compare the selections made by our system to the actual optimal selections for a variety of test cases.
+This includes examining when adaptive containers are suggested, and how effective they are.
+
%% * Testing setup, benchmarking rationale
\section{Testing setup}
%% ** Specs and VM setup
-In order to ensure consistent results and reduce the chance of outliers, all benchmarks were run on a KVM virtual machine on server hardware.
+In order to ensure consistent results and reduce the effect of other running processes, all benchmarks were run on a KVM virtual machine on server hardware.
We used 4 cores of an Intel Xeon E5-2687W v4 CPU and 4 GiB of RAM.
%% ** Reproducibility
The VM was managed and provisioned using NixOS, meaning it can be easily reproduced with the exact software we used.
+Instructions on how to do so are in the supplementary materials.
+The most important software versions are listed below.
+
+\begin{itemize}
+\item Linux 6.1.64
+\item Rust nightly 2024-01-25
+\item LLVM 17.0.6
+\item Racket 8.10
+\end{itemize}
\section{Cost models}
-We start by looking at our generated cost models, and comparing them both to the observations they are based on, and what we expect from asymptotic analysis.
-As we build a total of 51 cost models from our library, we will not examine all of them.
-We look at ones for the most common operations, and group them by containers that are commonly selected together.
+We start by examining some of our generated cost models, comparing them both to the observations they are based on and to what we expect from asymptotic analysis.
+As we build a total of 77 cost models from our library, we will not examine them all in detail.
+We look at models of the most common operations, and group them by containers that are commonly selected together.
\subsection{Insertion operations}
Starting with the \code{insert} operation, Figure \ref{fig:cm_insert} shows how the estimated cost changes with the size of the container.
-The lines correspond to our fitted curves, while the points indicate the raw observations they are drawn from.
-To help readability, we group these into regular \code{Container} implementations, and our associative key-value \code{Mapping} implementations.
+The lines correspond to our fitted curves, while the points indicate the raw observations from which these curves were fitted.
\begin{figure}[h!]
\centering
- \includegraphics[width=10cm]{assets/insert.png}
+ \includegraphics[width=15cm]{assets/insert.png}
\caption{Estimated cost of insert operation by implementation}
\label{fig:cm_insert}
\end{figure}
-For \code{Vec}, we see that insertion is incredibly cheap, and gets slightly cheaper as the size of the container increases.
-This is to be expected, as Rust's Vector implementation grows by a multiple whenever it reaches its maximum capacity, so we would expect amortised inserts to require less resizes as $n$ increases.
+Starting with the operation on a \code{Vec}, we see that insertion is very cheap, and gets slightly cheaper as the size of the container increases.
+This roughly agrees with the expected amortised $O(1)$ time of inserts on a \code{Vec}.
+However, we also note a sharply increasing curve when $n$ is small, and a slight `bump' around $n=35,000$.
+The former appears to be in line with the observations, and is likely explained by the fixed growth factor of Rust's \code{Vec} implementation: resizes are proportionally more frequent while the vector is small.
+The latter appears to diverge from the observations, and may indicate poor fitting.
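+To illustrate why resizes matter less as $n$ grows, consider the sketch below (illustrative only; the exact capacity values are an implementation detail of the standard library), which prints each reallocation as elements are pushed:
+
+\begin{verbatim}
+fn main() {
+    let mut v: Vec<u64> = Vec::new();
+    let mut last = v.capacity();
+    for i in 0..100_000 {
+        v.push(i);
+        // A reallocation (and a copy of the whole buffer) happens only
+        // when capacity is exhausted; because capacity grows by a fixed
+        // factor, reallocations become exponentially rarer as n grows.
+        if v.capacity() != last {
+            println!("len = {:>6}: capacity {} -> {}",
+                     v.len(), last, v.capacity());
+            last = v.capacity();
+        }
+    }
+}
+\end{verbatim}
+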
\code{LinkedList} has a more stable, but significantly slower insertion.
-This is likely because it requires a heap allocation for every item inserted, no matter the current size.
-This would also explain why data points appear spread out more, as it can be hard to predict the performance of kernel calls, even on systems with few other processes running.
+This is likely because it requires a heap allocation for every item inserted, no matter the current size.
+This would also explain why the data points appear more spread out, as allocation latency is less predictable (the allocator must occasionally request more memory from the kernel), even on systems with few other processes running.
+Notably, insertion appears to start getting cheaper past $n=24,000$, although this is only weakly suggested by the observations.
It's unsurprising that these two implementations are the cheapest, as they have no ordering or uniqueness guarantees, unlike our other implementations.
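+
+The gap between the two is dominated by allocation behaviour, which can be seen with a microbenchmark sketch like the following (illustrative only, and not part of our benchmarking harness):
+
+\begin{verbatim}
+use std::collections::LinkedList;
+use std::time::Instant;
+
+fn main() {
+    const N: u64 = 1_000_000;
+
+    let start = Instant::now();
+    let mut vec = Vec::new();
+    for i in 0..N {
+        vec.push(i); // amortised: only occasional reallocations
+    }
+    println!("Vec:        {:?}", start.elapsed());
+
+    let start = Instant::now();
+    let mut list = LinkedList::new();
+    for i in 0..N {
+        list.push_back(i); // one fresh heap allocation per node
+    }
+    println!("LinkedList: {:?}", start.elapsed());
+}
+\end{verbatim}
+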
-\code{HashSet} insertions are the next most expensive, however the cost appears to rise as the size of the collection goes up.
-This is likely due to hash collisions being more likely as the size of the collection increases.
+The \code{SortedVec} family of containers (\code{SortedVec}, \code{SortedVecSet}, and \code{SortedVecMap}) all exhibit roughly logarithmic growth.
+This is expected, as internally all of these containers perform a binary search to determine where the new element should go, which is $O(\lg n)$ time.
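+As an illustration, an insert on a sorted vector might look like the following sketch (hypothetical code; \code{sorted\_insert} is not the actual implementation, which also handles uniqueness and key-value variants):
+
+\begin{verbatim}
+fn sorted_insert(v: &mut Vec<u64>, x: u64) {
+    // Binary search finds the insertion point in O(lg n) comparisons.
+    let idx = v.binary_search(&x).unwrap_or_else(|i| i);
+    // The insert shifts the tail of the array, but this is a single
+    // memmove, which appears cheap relative to the comparisons.
+    v.insert(idx, x);
+}
+\end{verbatim}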
+
+\code{SortedVecMap} exhibits roughly the same shape as its siblings, but with a slightly higher growth rate.
+This pattern is shared across all of the \code{*Map} types we examine, and could be explained by the increased size of each element reducing the effectiveness of the cache.
+
+\code{VecMap} and \code{VecSet} both have a significantly higher, roughly linear, growth rate.
+Both of these implementations scan through the existing array before each insertion to check for existing keys, so a linear growth rate is expected.
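+A hypothetical sketch of this strategy (\code{vecmap\_insert} is illustrative, not our actual implementation):
+
+\begin{verbatim}
+fn vecmap_insert(entries: &mut Vec<(u64, u64)>, key: u64, value: u64) {
+    // A full scan for an existing key costs O(n) on every insert.
+    if let Some(entry) = entries.iter_mut().find(|(k, _)| *k == key) {
+        entry.1 = value; // key already present: overwrite the value
+    } else {
+        entries.push((key, value)); // key absent: append a new pair
+    }
+}
+\end{verbatim}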
+
+\code{HashSet} and \code{HashMap} insertions are much less expensive, and mostly constant, with only slight growth at very large $n$ values.
+This is what we expect for hash-based collections, with the slight growth likely due to hash collisions becoming more frequent as the size of the collection increases.
-\code{BTreeSet} insertions are also expensive, however the cost appears to level out as the collection size goes up (a logarithmic curve).
-It's important to note that Rust's \code{BTreeSet}s are not based on binary tree search, but instead a more general tree search originally proposed by R Bayer and E McCreight\citep{bayer_organization_1970}, where each node contains $B-1$ to $2B-1$ elements in an array.
+\code{BTreeSet} has similar behaviour, but settles at a larger value overall.
+\code{BTreeMap} appears to grow more rapidly, and costs more overall.
+It's important to note that Rust's \code{BTreeSet}s are not binary search trees, but instead a more general form of tree originally proposed by \cite{bayer_organization_1970}, where each node contains $B-1$ to $2B-1$ elements in an array.
+The standard library documentation~\citep{rust_documentation_team_btreemap_2024} states that search is expected to take $O(B \lg n)$ comparisons.
+Since both of these implementations require searching the collection before inserting, the close-to-logarithmic growth makes sense.
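+As a rough worked example (assuming the standard library's internal branching factor $B=6$, which is not part of its public interface): for $n = 10^6$ elements the tree is roughly $\log_B n \approx 7.7$ levels deep, and scanning a node takes at most $2B-1 = 11$ comparisons, giving on the order of $85$ comparisons per search, against $\lg n \approx 20$ for an idealised binary search tree.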
-Our two mapping types, \code{BTreeMap} and \code{HashMap}, mimic the behaviour of their set counterparts.
+\subsubsection{Small $n$ values}
-Our two outlier containers, \code{SortedUniqueVec} and \code{SortedVec}, both have a substantially higher insertion cost which grows roughly linearly.
-Internally, both of these containers perform a binary search to determine where the new element should go.
-This would suggest we should see a roughly logarithmic complexity.
-However, as we will be inserting most elements near the middle of a list, we will on average be copying half the list every time.
-This could explain why we see a roughly linear growth.
+Whilst our main figures for insertion operations indicate a clear winner within each category, looking at small $n$ values reveals more complexity.
+Figure \ref{fig:cm_insert_small_n} shows the cost models for insert operations on different set implementations at smaller $n$ values.
-\todo{This explanation could be better}
+\todo{Explain this}
+
+\begin{figure}[h!]
+ \centering
+ \includegraphics[width=15cm]{assets/insert_small_n.png}
+  \caption{Estimated cost of insert operation on set implementations, at small $n$ values}
+ \label{fig:cm_insert_small_n}
+\end{figure}
\subsection{Contains operations}
@@ -63,7 +96,7 @@ This is desirable assuming that \code{contains} operations are actually randomly
\begin{figure}[h!]
\centering
- \includegraphics[width=10cm]{assets/contains.png}
+ \includegraphics[width=15cm]{assets/contains.png}
\caption{Estimated cost of \code{contains} operation by implementation}
\label{fig:cm_contains}
\end{figure}