Diffstat (limited to 'thesis/parts/implementation.tex')
-rw-r--r-- thesis/parts/implementation.tex | 43
1 file changed, 30 insertions(+), 13 deletions(-)
diff --git a/thesis/parts/implementation.tex b/thesis/parts/implementation.tex
index 43ed2ba..d175d63 100644
--- a/thesis/parts/implementation.tex
+++ b/thesis/parts/implementation.tex
@@ -4,24 +4,41 @@
\section{Tooling integration}
-As well as a standalone compiler, the Rust toolchain provides an official package manager, which is widely adopted.
-We interface with cargo through the \code{cargo metadata} command, which allows us to work seamlessly with virtually any Rust project.
+We retrieve project metadata using the \code{cargo metadata} command.
+This gives us the available benchmarks and the project's source directories, and works with most Rust projects without further configuration.
+In future, this integration could be expanded so that our tool runs as a cargo subcommand.
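+
+A minimal sketch of this metadata retrieval, using the \code{cargo_metadata} crate (the actual implementation may interface with cargo differently; the code here is illustrative only):
+\begin{verbatim}
+// Sketch: list benchmark targets and the build cache location.
+// Assumes cargo_metadata 0.18, where `Target::kind` is a Vec<String>.
+use cargo_metadata::MetadataCommand;
+
+fn main() -> Result<(), cargo_metadata::Error> {
+    let meta = MetadataCommand::new().exec()?;
+
+    // Benchmark targets and their source files, per package.
+    for pkg in &meta.packages {
+        for target in &pkg.targets {
+            if target.kind.iter().any(|k| k == "bench") {
+                println!("bench `{}` at {}", target.name, target.src_path);
+            }
+        }
+    }
+
+    // The build cache, where benchmark results are later written.
+    println!("build cache: {}", meta.target_directory);
+    Ok(())
+}
+\end{verbatim}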
-From cargo, we retrieve information about the user's project source directories, the available benchmarks and tests, and the location of the build cache.
+We parse only a minimal amount of information from each criterion benchmark.
+Criterion is the most common benchmarking tool for Rust, and the closest thing there is to a standard.
+If or when cargo ships proper benchmarking support of its own, it should be easy to adapt our code to it.
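+
+A minimal sketch of this parsing, assuming criterion's \code{estimates.json} layout (the on-disk format is criterion's internal detail, which is exactly why we read as little of it as possible):
+\begin{verbatim}
+use serde::Deserialize;
+
+// Only the fields we need; serde ignores everything else in
+// criterion's estimates.json.
+#[derive(Deserialize)]
+struct Estimates {
+    mean: Estimate,
+}
+
+#[derive(Deserialize)]
+struct Estimate {
+    point_estimate: f64, // nanoseconds
+}
+
+fn mean_runtime(path: &std::path::Path)
+    -> Result<f64, Box<dyn std::error::Error>> {
+    let raw = std::fs::read_to_string(path)?;
+    let parsed: Estimates = serde_json::from_str(&raw)?;
+    Ok(parsed.mean.point_estimate)
+}
+\end{verbatim}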
-\section{Cost model generation}
+\section{Cost model generation}
+
+Our benchmarking of container implementations also outputs results in this format, but without using the criterion harness.
+This allows us more flexibility, for instance pre-processing results before writing them out.
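+
+For illustration, writing results in the same shape after pre-processing might look as follows; the record type mirrors the minimal parser above, not criterion's full schema:
+\begin{verbatim}
+use serde::Serialize;
+
+#[derive(Serialize)]
+struct Estimates { mean: Estimate }
+#[derive(Serialize)]
+struct Estimate { point_estimate: f64 }
+
+// After pre-processing (e.g. averaging repeated runs), write the
+// result where the parser expects to find it.
+fn write_result(dir: &std::path::Path, mean_ns: f64)
+    -> Result<(), Box<dyn std::error::Error>> {
+    std::fs::create_dir_all(dir)?;
+    let est = Estimates { mean: Estimate { point_estimate: mean_ns } };
+    std::fs::write(dir.join("estimates.json"),
+                   serde_json::to_string(&est)?)?;
+    Ok(())
+}
+\end{verbatim}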
-Generation of cost models is implemented in three locations:
+Each trait has its own set of benchmarks, which run different workloads against an implementation.
+The benchmarker trait itself does not fix the container sizes (the values of $n$); these are supplied when the benchmarks are run.
+A sketch of this, with example workloads for a hashmap and a vec, is given below.
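+
+A hypothetical sketch of this shape; the real trait and method names in \code{candelabra_benchmarker} may differ:
+\begin{verbatim}
+use std::time::{Duration, Instant};
+
+// The trait fixes *which* workloads exist; the size n is an
+// argument, so the harness decides which sizes to measure.
+pub trait Benchmark<T: Default> {
+    fn workload(container: &mut T, n: usize);
+
+    fn run(n: usize) -> Duration {
+        let mut c = T::default();
+        let start = Instant::now();
+        Self::workload(&mut c, n);
+        start.elapsed()
+    }
+}
+
+// Example insertion workloads for a vec and a hashmap.
+struct InsertN;
+
+impl Benchmark<Vec<usize>> for InsertN {
+    fn workload(v: &mut Vec<usize>, n: usize) {
+        for i in 0..n { v.push(i); }
+    }
+}
+
+impl Benchmark<std::collections::HashMap<usize, usize>> for InsertN {
+    fn workload(m: &mut std::collections::HashMap<usize, usize>, n: usize) {
+        for i in 0..n { m.insert(i, i); }
+    }
+}
+\end{verbatim}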
-\begin{itemize}
-\item The \code{candelabra_benchmarker} crate, which provides code for benchmarking anything that implements the primrose library's traits
-\item The \code{candelabra::cost::benchmark} module, which generates, runs, and parses benchmarks for container types in the primrose library
-\item The \code{candelabra::cost::fit} module, which fits a linear regression model to a set of observations and allows for
-\end{itemize}
+\code{candelabra::cost::benchmark} generates code which simply calls \code{candelabra_benchmarker} methods for each container type.
+The values of $n$ are set there, and vary from [...]
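+
+The generated harness might look roughly as follows, reusing the hypothetical names from the sketch above; the concrete sizes are illustrative, not the ones we actually use:
+\begin{verbatim}
+fn main() {
+    // The sizes live here, in the generated code, not in the trait.
+    const NS: &[usize] = &[10, 100, 1_000, 10_000]; // illustrative
+
+    for &n in NS {
+        let t = <InsertN as Benchmark<Vec<usize>>>::run(n);
+        println!("Vec insert, n = {}: {:?}", n, t);
+    }
+}
+\end{verbatim}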
-\todo{Fitting of cost models}
-\todo{Profiling wrapper}
-\todo{Selection and comparison code}
+Fitting of cost models is done with least squares regression in \code{candelabra::cost::fit}.
+This method is simple, which helps `smooth out' noisy benchmark results.
+\todo{List other methods tried}
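+
+Concretely, given observed mean times $t_i$ at sizes $n_i$ and a set of basis functions $\phi_j$ (for instance $1, n, n^2, \log n$; the exact basis is an implementation choice not fixed here), least squares chooses coefficients
+\[
+\hat{\beta} = \arg\min_{\beta} \sum_i \Big( t_i - \sum_j \beta_j \phi_j(n_i) \Big)^2 ,
+\]
+so the fitted cost model for an operation is $c(n) = \sum_j \hat{\beta}_j \phi_j(n)$.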
+
+The profiler type lives in \code{primrose_library::profiler}.
+It wraps an `inner' implementation, exposing the same operations while keeping track of the number of calls to each.
+On drop, it creates a new file in a folder specified by an environment variable and writes its counts there.
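+
+A minimal sketch of the wrapper idea; the field names, the counted operations, and the \code{PROFILER_OUT_DIR} variable are all illustrative, not the real interface:
+\begin{verbatim}
+use std::sync::atomic::{AtomicUsize, Ordering};
+
+// Unique suffix so every dropped collection gets its own file.
+static FILE_ID: AtomicUsize = AtomicUsize::new(0);
+
+pub struct ProfilerWrapper<T> {
+    inner: Vec<T>, // the wrapped 'inner' implementation
+    pushes: usize, // one call counter per operation
+    gets: usize,
+}
+
+impl<T> ProfilerWrapper<T> {
+    pub fn new() -> Self {
+        Self { inner: Vec::new(), pushes: 0, gets: 0 }
+    }
+
+    pub fn push(&mut self, v: T) {
+        self.pushes += 1;   // record the call...
+        self.inner.push(v); // ...then delegate
+    }
+
+    pub fn get(&mut self, i: usize) -> Option<&T> {
+        self.gets += 1;
+        self.inner.get(i)
+    }
+}
+
+impl<T> Drop for ProfilerWrapper<T> {
+    fn drop(&mut self) {
+        // On drop, write the counts to a new file in the directory
+        // named by the environment variable.
+        if let Ok(dir) = std::env::var("PROFILER_OUT_DIR") {
+            let id = FILE_ID.fetch_add(1, Ordering::Relaxed);
+            let stats = format!("push={} get={} n={}",
+                                self.pushes, self.gets, self.inner.len());
+            let _ = std::fs::write(format!("{}/{}.txt", dir, id), stats);
+        }
+    }
+}
+\end{verbatim}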
+
+The profiling versions of user code are generated in \code{primrose::codegen}, which is called from \code{candelabra::profiler}.
+We simply pick the first valid candidate, since performance does not particularly matter at this stage.
+Each drop generates a file, so we get details of every individual collection allocated.
+\todo{Immediately aggregate these into summary statistics, for speed}
+\todo{Mention benchmark repetition}
+
+From the profiler output we estimate a cost for each candidate: we evaluate each operation's fitted cost model at the average observed size, multiply by the number of times that operation was called, and sum over all operations, as formalised below.
+We then pick the candidate with the smallest estimated cost.
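+
+Writing $N_o$ for the number of calls to operation $o$ recorded by the profiler, $\bar{n}$ for the average collection size, and $c_o$ for the fitted cost model of operation $o$ on the candidate, this is
+\[
+C = \sum_{o \in \mathrm{ops}} c_o(\bar{n}) \cdot N_o .
+\]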
\todo{Caching}
\todo{Other implementation details?}