%% * Testing setup, benchmarking rationale
\section{Testing setup}

%% ** Specs and VM setup
To ensure consistent results and reduce the chance of outliers, all benchmarks were run in a KVM virtual machine on server hardware.
The VM was allocated 4 cores of an Intel Xeon E5-2687W v4 CPU and 4 GiB of RAM.

%% ** Reproducibility
The VM was managed and provisioned using NixOS, meaning the exact software environment we used can be reproduced easily.
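
For illustration, a VM of this shape can be described declaratively.
The following is a minimal sketch using the standard NixOS QEMU virtualisation options; it is not our exact configuration.

\begin{verbatim}
# Minimal sketch, not our exact configuration.
{ config, pkgs, ... }:
{
  # Options from NixOS's QEMU virtualisation module.
  virtualisation.cores = 4;         # vCPUs exposed to the guest
  virtualisation.memorySize = 4096; # guest RAM, in MiB
}
\end{verbatim}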

\section{Cost models}

We start by looking at our generated cost models, comparing them both to the observations they are based on and to the behaviour we expect from asymptotic analysis.
As we build a total of 51 cost models from our library, we will not examine all of them.
Instead, we look at the models for the most common operations, and group them by containers that are commonly selected together.
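
To recap, each cost model estimates the cost of a single operation as a function of the container size $n$, fitted to benchmark observations.
As a purely illustrative sketch (the basis functions here are an assumption, not a statement of our fitting procedure), such a model might take the form
\[
C_{\mathit{insert}}(n) \approx \beta_0 + \beta_1 n + \beta_2 n^2 + \beta_3 \log_2 n,
\]
where each coefficient $\beta_i$ is fitted against the observed benchmark costs, for example by least squares.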

%% ** Insertion operations
Starting with the \code{insert} operation, Figure \ref{fig:cm_insert} shows how the estimated cost changes with the size of the container.
To help readability, we group these into regular \code{Container} implementations and our associative key-value \code{Mapping} implementations; a sketch of both interface shapes follows the figure.

\begin{figure}[h]
  \centering
  \includegraphics[width=10cm]{assets/insert_containers.png}
  \par\centering\rule{11cm}{0.5pt}
  \includegraphics[width=10cm]{assets/insert_mappings.png}
  \caption{Estimated cost of insert operation on \code{Container} implementations and \code{Mapping} implementations}
  \label{fig:cm_insert}
\end{figure}
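
To make this distinction concrete, the Rust sketch below shows hypothetical shapes for the two interface families.
These are illustrative only, and differ from our library's actual trait definitions.

\begin{verbatim}
// Hypothetical interface shapes (illustrative only; not our
// library's actual trait definitions).
trait Container<T> {
    fn insert(&mut self, value: T);
    fn contains(&self, value: &T) -> bool;
    fn len(&self) -> usize;
}

trait Mapping<K, V> {
    fn insert(&mut self, key: K, value: V);
    fn get(&self, key: &K) -> Option<&V>;
}
\end{verbatim}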

%% ** Contains operations

%% ** Comment on some bad/weird ones

%% ** Conclusion

%% * Predictions
\section{Selections}

%% ** Chosen benchmarks
Our test cases broadly fall into two categories: example cases, which simply repeat a few operations many times, and `real' cases, which are implementations of common algorithms and solutions to programming puzzles.
We expect the results from our example cases to be relatively unsurprising, while our real cases are more complex and harder to predict.

Most of our real cases are solutions to puzzles from Advent of Code \parencite{wastl_advent_2015}, a popular collection of programming challenges.
Table \ref{table:test_cases} lists and briefly describes our test cases; a sketch of one example case follows the table.

\begin{table}[h]
  \centering
  \begin{tabular}{|c|l|}
    \hline
    Name & Description \\
    \hline
    example\_sets & Repeated insert and contains operations on a set. \\
    example\_stack & Repeated push and pop operations on a stack. \\
    example\_mapping & Repeated insert and get operations on a mapping. \\
    prime\_sieve & Sieve of Eratosthenes algorithm. \\
    aoc\_2021\_09 & Flood-fill-like algorithm (Advent of Code 2021, Day 9). \\
    aoc\_2022\_08 & Simple 2D raycasting (AoC 2022, Day 8). \\
    aoc\_2022\_09 & Simple 2D soft-body simulation (AoC 2022, Day 9). \\
    aoc\_2022\_14 & Simple 2D particle simulation (AoC 2022, Day 14). \\
    \hline
  \end{tabular}

  \caption{Our test applications}
  \label{table:test_cases}
\end{table}
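
To make the shape of the example cases concrete, the following is a minimal sketch in the spirit of example\_sets, using a standard library set purely for illustration; the real case runs against our container interface inside a benchmark harness.

\begin{verbatim}
// Minimal sketch in the spirit of `example_sets` (illustrative only;
// the real case uses our container interface and a benchmark harness).
use std::collections::HashSet;

fn main() {
    let n: u64 = 100_000;
    let mut set = HashSet::new();

    // Repeatedly insert values...
    for i in 0..n {
        set.insert(i.wrapping_mul(7) % n);
    }

    // ...then repeatedly test membership.
    let hits = (0..n).filter(|i| set.contains(i)).count();
    println!("{hits} of {n} lookups hit");
}
\end{verbatim}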

%% ** Effect of selection on benchmarks (spread in execution time)

%% ** Summarise predicted versus actual

%% ** Evaluate performance

%% ** Comment on distribution of best implementation

%% ** Surprising ones / Explain failures

%% * Performance of adaptive containers
\section{Adaptive containers}

%% ** Find where adaptive containers get suggested

%% ** Comment on relative performance speedup

%% ** Suggest future improvements?

%% * Selection time / developer experience
\section{Selection time}

%% ** Mention speedup versus naive brute force