author	Aria Shrimpton <me@aria.rip>	2024-03-11 00:45:24 +0000
committer	Aria Shrimpton <me@aria.rip>	2024-03-11 00:45:24 +0000
commit	761e2ac49069ecf5eedbffe5a34d6d6ba69a2928 (patch)
tree	93af95ad726c84cb5a965ed42721ec6aac089323 /thesis
parent	87839cbbaf707d0315af989064141267517088fc (diff)
more writing whatever
Diffstat (limited to 'thesis')
-rw-r--r--	thesis/parts/results.tex	|	30
1 files changed, 30 insertions, 0 deletions
diff --git a/thesis/parts/results.tex b/thesis/parts/results.tex
index b07e581..6915b12 100644
--- a/thesis/parts/results.tex
+++ b/thesis/parts/results.tex
@@ -115,6 +115,8 @@ Future improvements could address the overfitting problems some operations had,
%% * Predictions
\section{Selections}
+\subsection{Benchmarks}
+
%% ** Chosen benchmarks
Our test cases broadly fall into two categories: example cases, which simply repeat a few operations many times, and 'real' cases, which are implementations of common algorithms and solutions to programming puzzles.
We expect the results from our example cases to be relatively unsurprising, while our real cases are more complex and harder to predict.
@@ -142,10 +144,34 @@ Table \ref{table:test_cases} lists and briefly describes our test cases.
\end{table}
%% ** Effect of selection on benchmarks (spread in execution time)
+Table \ref{table:benchmark_spread} shows the difference in benchmark results between the slowest and the fastest possible assignment of containers.
+Even in our example projects, we see that the wrong choice of container can slow down our programs substantially.
+
+\begin{table}[h]
+\centering
+\begin{tabular}{|c|c|c|}
+  Project & Total difference between best and worst assignments (s) & Maximum slowdown (×) \\
+ \hline
+ aoc\_2021\_09 & 29.685 & 4.75 \\
+ aoc\_2022\_08 & 0.036 & 2.088 \\
+ aoc\_2022\_09 & 10.031 & 132.844 \\
+ aoc\_2022\_14 & 0.293 & 2.036 \\
+ prime\_sieve & 28.408 & 18.646 \\
+ example\_mapping & 0.031 & 1.805 \\
+ example\_sets & 0.179 & 12.65 \\
+ example\_stack & 1.931 & 8.454 \\
+\end{tabular}
+\caption{Spread in total benchmark results by project}
+\label{table:benchmark_spread}
+\end{table}
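+
+The quantities in Table \ref{table:benchmark_spread} can be stated explicitly (a sketch of the assumed definitions; the set of candidate container assignments $A$ and the total benchmark time $t(a)$ under assignment $a$ are notation introduced here for illustration):
+
+% Assumed definitions of the spread metrics, written out for clarity.
+\begin{equation*}
+  \text{difference} = \max_{a \in A} t(a) - \min_{a \in A} t(a),
+  \qquad
+  \text{slowdown} = \frac{\max_{a \in A} t(a)}{\min_{a \in A} t(a)}
+\end{equation*}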
+
%% ** Summarise predicted versus actual
+\subsection{Prediction accuracy}
%% ** Evaluate performance
+\subsection{Evaluation}
%% ** Comment on distribution of best implementation
@@ -154,6 +180,8 @@ Table \ref{table:test_cases} lists and briefly describes our test cases.
%% * Performance of adaptive containers
\section{Adaptive containers}
+\todo{Get adaptive containers working and evaluate them}
+
%% ** Find where adaptive containers get suggested
%% ** Comment on relative performance speedup
@@ -163,4 +191,6 @@ Table \ref{table:test_cases} lists and briefly describes our test cases.
%% * Selection time / developer experience
\section{Selection time}
+\todo{selection time}
+
%% ** Mention speedup versus naive brute force