Diffstat (limited to 'thesis/parts/results.tex')
-rw-r--r--  thesis/parts/results.tex  90
1 file changed, 84 insertions, 6 deletions
diff --git a/thesis/parts/results.tex b/thesis/parts/results.tex
index 47c7c98..17b6088 100644
--- a/thesis/parts/results.tex
+++ b/thesis/parts/results.tex
@@ -1,9 +1,87 @@
-\todo{Selection of benchmarks}
-\todo{Testing setup}
-\todo{Justification of tested elements}
+%% * Testing setup, benchmarking rationale
+\section{Testing setup}
-\todo{Produced cost models}
+%% ** Specs and VM setup
+To ensure consistent results and reduce the chance of outliers, all benchmarks were run on a KVM virtual machine on server hardware.
+We used 4 cores of an Intel Xeon E5-2687W v4 CPU and 4 GiB of RAM.
-\todo{Estimated costs}
+%% ** Reproducibility
+The VM was managed and provisioned using NixOS, meaning it can easily be reproduced with the exact software versions we used.
-\todo{Real benchmark results}
+\section{Cost models}
+
+We start by looking at our generated cost models, comparing them both to the observations they are based on and to what we expect from asymptotic analysis.
+Since we build a total of 51 cost models from our library, we will not examine all of them here.
+Instead, we look at those for the most common operations, grouping them by containers that are commonly selected together.
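+
+%% Illustrative form only -- the exact basis terms here are an assumption.
+To make this comparison concrete, each cost model maps a container size $n$ to an estimated cost for a single operation.
+As an illustration (the basis terms below are an assumption, not necessarily those our fitting procedure uses), such a model might take the form
+\begin{equation*}
+  C(n) = c_0 + c_1 n + c_2 n^2 + c_3 \log_2 n,
+\end{equation*}
+with the coefficients $c_i$ fitted to the benchmark observations, so that the dominant term can be compared directly against the expected asymptotic complexity.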
+
+%% ** Insertion operations
+Starting with the \code{insert} operation, Figure \ref{fig:cm_insert} shows how the estimated cost changes with the size of the container.
+To help readability, we group these into regular \code{Container} implementations and our associative key-value \code{Mapping} implementations.
+
+\begin{figure}[h]
+ \centering
+ \includegraphics[width=10cm]{assets/insert_containers.png}
+ \par\centering\rule{11cm}{0.5pt}
+ \includegraphics[width=10cm]{assets/insert_mappings.png}
+  \caption{Estimated cost of the \code{insert} operation on \code{Container} implementations (top) and \code{Mapping} implementations (bottom)}
+ \label{fig:cm_insert}
+\end{figure}
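+
+%% Illustrative sketch only -- not our actual benchmarking harness.
+The observations these models are fitted to come from benchmarking each operation at a range of container sizes.
+A rough sketch of the idea behind a single measurement (illustrative names, with a standard library set standing in for our own implementations) might look like:
+\begin{verbatim}
+use std::collections::HashSet;
+use std::time::Instant;
+
+// Illustrative only: time one insert into a set already holding
+// `n` elements, averaged over `reps` repetitions.
+fn time_insert_at_size(n: usize, reps: u32) -> f64 {
+    let mut total_ns = 0.0;
+    for _ in 0..reps {
+        // Pre-fill the container to the target size.
+        let mut set: HashSet<usize> = (0..n).collect();
+        let start = Instant::now();
+        set.insert(n); // the operation under measurement
+        total_ns += start.elapsed().as_nanos() as f64;
+    }
+    total_ns / reps as f64
+}
+\end{verbatim}
+Repeating this over a range of sizes yields observations of the kind the cost models are fitted to.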
+
+%% ** Contains operations
+
+%% ** Comment on some bad/weird ones
+
+%% ** Conclusion
+
+%% * Predictions
+\section{Selections}
+
+%% ** Chosen benchmarks
+Our test cases broadly fall into two categories: example cases, which just repeat a few operations many times, and ``real'' cases, which are implementations of common algorithms and solutions to programming puzzles.
+We expect the results from our example cases to be relatively unsurprising, while our real cases are more complex and harder to predict.
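+
+%% Illustrative sketch only -- not the benchmark's exact code.
+For instance, an example case amounts to little more than a loop over one or two operations; a sketch of the shape of the \code{example\_sets} case listed below (illustrative only) is:
+\begin{verbatim}
+use std::collections::HashSet;
+
+// Illustrative sketch: repeatedly insert into and query a set,
+// mirroring the shape of the example_sets test case.
+fn example_sets(n: usize) -> usize {
+    let mut set = HashSet::new();
+    for i in 0..n {
+        set.insert(i % 1000);
+    }
+    (0..n).filter(|&i| set.contains(&(i % 1000))).count()
+}
+\end{verbatim}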
+
+Most of our real cases are solutions to puzzles from Advent of Code~\parencite{wastl_advent_2015}, a popular collection of programming challenges.
+Table \ref{table:test_cases} lists and briefly describes our test cases.
+
+\begin{table}[h]
+ \centering
+  \begin{tabular}{|l|l|}
+    \hline
+    Name & Description \\
+    \hline
+    example\_sets & Repeated insert and contains on a set \\
+    example\_stack & Repeated push and pop from a stack \\
+    example\_mapping & Repeated insert and get from a mapping \\
+    prime\_sieve & Sieve of Eratosthenes algorithm \\
+    aoc\_2021\_09 & Flood-fill-like algorithm (Advent of Code 2021, Day 9) \\
+    aoc\_2022\_08 & Simple 2D raycasting (AoC 2022, Day 8) \\
+    aoc\_2022\_09 & Simple 2D soft-body simulation (AoC 2022, Day 9) \\
+    aoc\_2022\_14 & Simple 2D particle simulation (AoC 2022, Day 14) \\
+    \hline
+  \end{tabular}
+
+ \caption{Our test applications}
+ \label{table:test_cases}
+\end{table}
+
+%% ** Effect of selection on benchmarks (spread in execution time)
+
+%% ** Summarise predicted versus actual
+
+%% ** Evaluate performance
+
+%% ** Comment on distribution of best implementation
+
+%% ** Surprising ones / Explain failures
+
+%% * Performance of adaptive containers
+\section{Adaptive containers}
+
+%% ** Find where adaptive containers get suggested
+
+%% ** Comment on relative performance speedup
+
+%% ** Suggest future improvements?
+
+%% * Selection time / developer experience
+\section{Selection time}
+
+%% ** Mention speedup versus naive brute force