Diffstat (limited to 'thesis/parts/results.tex')
-rw-r--r--  thesis/parts/results.tex  30
1 file changed, 30 insertions, 0 deletions
diff --git a/thesis/parts/results.tex b/thesis/parts/results.tex
index b07e581..6915b12 100644
--- a/thesis/parts/results.tex
+++ b/thesis/parts/results.tex
@@ -115,6 +115,8 @@ Future improvements could address the overfitting problems some operations had,
%% * Predictions
\section{Selections}
+\subsection{Benchmarks}
+
%% ** Chosen benchmarks
Our test cases broadly fall into two categories: example cases, which simply repeat a few operations many times, and our 'real' cases, which implement common algorithms and solutions to programming puzzles.
We expect the results from our example cases to be relatively unsurprising, while our real cases are more complex and harder to predict.
@@ -142,10 +144,34 @@ Table \ref{table:test_cases} lists and briefly describes our test cases.
\end{table}
%% ** Effect of selection on benchmarks (spread in execution time)
+Table \ref{table:benchmark_spread} shows the difference in benchmark results between the slowest and the fastest possible assignment of containers.
+Even in our example projects, the wrong choice of container can slow a program down substantially.
+
+\begin{table}[h]
+\centering
+\begin{tabular}{|c|c|c|}
+  \hline
+  Project & Best-to-worst difference (s) & Maximum slowdown ($\times$) \\
+  \hline
+ aoc\_2021\_09 & 29.685 & 4.75 \\
+ aoc\_2022\_08 & 0.036 & 2.088 \\
+ aoc\_2022\_09 & 10.031 & 132.844 \\
+ aoc\_2022\_14 & 0.293 & 2.036 \\
+ prime\_sieve & 28.408 & 18.646 \\
+ example\_mapping & 0.031 & 1.805 \\
+ example\_sets & 0.179 & 12.65 \\
+ example\_stack & 1.931 & 8.454 \\
+  \hline
+\end{tabular}
+\caption{Spread in total benchmark results by project}
+\label{table:benchmark_spread}
+\end{table}
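As an illustrative sketch (not part of the thesis tooling), the two columns of the spread table can be derived from the total benchmark time of each candidate container assignment; the timings below are hypothetical:

```python
# Illustrative only: deriving the spread-table columns from per-assignment
# benchmark totals. All timings here are hypothetical, not thesis data.

def spread_stats(totals):
    """totals: total benchmark time in seconds for each candidate
    container assignment of one project."""
    best, worst = min(totals), max(totals)
    difference = worst - best   # "difference between best and worst" column
    slowdown = worst / best     # "maximum slowdown" column
    return difference, slowdown

# Hypothetical totals for four assignments of one project:
difference, slowdown = spread_stats([0.259, 0.295, 1.374, 2.190])
print(f"difference = {difference:.3f} s, slowdown = {slowdown:.3f}x")
# → difference = 1.931 s, slowdown = 8.456x
```

The absolute difference shows how much wall-clock time is at stake, while the ratio shows relative cost; both are reported because a large ratio on a tiny benchmark (e.g. aoc\_2022\_09) can matter less in practice than a moderate ratio on a long-running one.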
+
%% ** Summarise predicted versus actual
+\subsection{Prediction accuracy}
%% ** Evaluate performance
+\subsection{Evaluation}
%% ** Comment on distribution of best implementation
@@ -154,6 +180,8 @@ Table \ref{table:test_cases} lists and briefly describes our test cases.
%% * Performance of adaptive containers
\section{Adaptive containers}
+\todo{Get adaptive containers working reliably}
+
%% ** Find where adaptive containers get suggested
%% ** Comment on relative performance speedup
@@ -163,4 +191,6 @@ Table \ref{table:test_cases} lists and briefly describes our test cases.
%% * Selection time / developer experience
\section{Selection time}
+\todo{Report selection time results}
+
%% ** Mention speedup versus naive brute force