author     Aria Shrimpton <me@aria.rip>    2024-02-12 15:19:48 +0000
committer  Aria Shrimpton <me@aria.rip>    2024-02-12 15:19:48 +0000
commit     00173ffcb99511e7bc8a4f476a9dfff87d3b1c4e (patch)
tree       2702cc8de4c579defaadf637845874473611cbaa /thesis/parts/implementation.tex
parent     eecdde83e1bcccee7cdfa4af8191ec1aa727acb7 (diff)
make outline / update tasks
Diffstat (limited to 'thesis/parts/implementation.tex')
-rw-r--r--  thesis/parts/implementation.tex  85
1 file changed, 56 insertions(+), 29 deletions(-)
diff --git a/thesis/parts/implementation.tex b/thesis/parts/implementation.tex
index f5d8d1f..5009836 100644
--- a/thesis/parts/implementation.tex
+++ b/thesis/parts/implementation.tex
@@ -1,42 +1,69 @@
\todo{Introduction}
-\todo{Primrose adaptations}
+\section{Modifications to Primrose}
-\section{Tooling integration}
+%% API
-get project metadata from cargo
-available benchmarks and source directories
-works with most projects
-could be expanded to run as cargo command
+%% Mapping trait
-parse minimal amount of information from criterion benchmark
-most common benchmarking tool, closest there is to a standard
-should be easy to adapt if/when cargo ships proper benchmarking support
+%% We add a new mapping trait to Primrose to express key-value maps
-benchmarking of container implementations also outputs this format, but without using criterion harness
-allows more flexibility, ie pre-processing results
+%% \todo{add and list library types}
-each trait has its own set of benchmarks, which run different workloads
-benchmarker trait doesn't have Ns
-example benchmarks for hashmap and vec
+%% The constraint solver has been updated to allow properties on dicts (dictproperty), though this is currently unused.
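+As an illustration, a key-value mapping trait could look roughly like the
+sketch below; the names and bounds here are illustrative, not the actual
+Primrose API.
+\begin{verbatim}
+use std::collections::HashMap;
+use std::hash::Hash;
+
+/// Illustrative key-value mapping trait.
+pub trait Mapping<K, V> {
+    fn insert(&mut self, key: K, value: V) -> Option<V>;
+    fn get(&self, key: &K) -> Option<&V>;
+    fn remove(&mut self, key: &K) -> Option<V>;
+}
+
+/// The standard HashMap satisfies such a trait directly.
+impl<K: Hash + Eq, V> Mapping<K, V> for HashMap<K, V> {
+    fn insert(&mut self, key: K, value: V) -> Option<V> { HashMap::insert(self, key, value) }
+    fn get(&self, key: &K) -> Option<&V> { HashMap::get(self, key) }
+    fn remove(&mut self, key: &K) -> Option<V> { HashMap::remove(self, key) }
+}
+\end{verbatim}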
-\code{candelabra::cost::benchmark} generates code which just calls \code{candelabra\_benchmarker} methods
-Ns are set there, and vary from [...]
+%% Resiliency, etc
-fitting done with least squares in \code{candelabra::cost::fit}
-list other methods tried
-simple, which helps 'smooth out' noisy benchmark results
+\section{Cost models}
-profiler type in \code{primrose\_library::profiler}
-wraps an 'inner' implementation and implements whatever operations it does, keeping track of number of calls
-on drop, creates new file in folder specified by env variable
+%% Benchmarker crate
-primrose results generated in \code{primrose::codegen}, which is called in \code{candelabra::profiler}
-picks the first valid candidate - performance doesn't really matter for this case
-each drop generates a file, so we get details of every individual collection allocated
+%% each trait has its own set of benchmarks, which run different workloads
+%% benchmarker trait doesn't have Ns
+%% example benchmarks for hashmap and vec
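+As a sketch, a benchmark for a single trait operation might look like the
+following: the container is pre-filled by the caller (which is what fixes
+N outside the benchmarker), and only the operation itself is timed. The
+function name and shape are illustrative, not the actual benchmarker API.
+\begin{verbatim}
+use std::time::{Duration, Instant};
+
+/// Time repeated `contains` queries against a pre-filled container.
+fn bench_contains(haystack: &[u64], needles: &[u64]) -> Duration {
+    let start = Instant::now();
+    for n in needles {
+        std::hint::black_box(haystack.contains(n));
+    }
+    start.elapsed()
+}
+\end{verbatim}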
-estimate a cost for each candidate: op(avg\_n) * op\_times for each op
-pick the smallest one
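+The selection step can then be sketched as: for each candidate, evaluate
+the fitted cost model of every operation at the average observed n,
+weight by the profiled call counts, and keep the cheapest candidate.
+A minimal sketch, with assumed types:
+\begin{verbatim}
+use std::collections::HashMap;
+
+/// Estimated cost of one candidate: sum over ops of model(avg_n) * count.
+fn estimate_cost(
+    models: &HashMap<String, Box<dyn Fn(f64) -> f64>>, // op -> fitted cost model
+    op_counts: &HashMap<String, u64>,                  // op -> profiled call count
+    avg_n: f64,
+) -> f64 {
+    op_counts
+        .iter()
+        .map(|(op, count)| models[op](avg_n) * *count as f64)
+        .sum()
+}
+\end{verbatim}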
+%% Code generation
-\todo{Caching}
-\todo{Other implementation details?}
+%% \code{candelabra::cost::benchmark} generates code which just calls \code{candelabra\_benchmarker} methods
+%% Ns are set there, and vary from [...]
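+For illustration, the generation step can be thought of as emitting a
+small harness like the one below for each candidate; the
+\code{candelabra\_benchmarker} call here is a stand-in, not the real API.
+\begin{verbatim}
+/// Emit a main() that benchmarks one container type at each chosen N.
+fn generate_harness(container_ty: &str, ns: &[usize]) -> String {
+    let mut src = String::from("fn main() {\n");
+    for n in ns {
+        src.push_str(&format!(
+            "    candelabra_benchmarker::run::<{container_ty}>({n});\n"
+        ));
+    }
+    src.push_str("}\n");
+    src
+}
+\end{verbatim}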
+
+%% fitting done with least squares in \code{candelabra::cost::fit}
+%% list other methods tried
+%% simple, which helps 'smooth out' noisy benchmark results
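+As a minimal example of the approach, an ordinary least-squares fit of a
+straight-line cost model looks like this (the real fit may use a
+higher-degree model):
+\begin{verbatim}
+/// Fit cost(n) ~ a + b*n to (n, cost) samples by least squares.
+fn fit_line(samples: &[(f64, f64)]) -> (f64, f64) {
+    let k = samples.len() as f64;
+    let (sx, sy, sxx, sxy) = samples.iter().fold(
+        (0.0, 0.0, 0.0, 0.0),
+        |(sx, sy, sxx, sxy), &(x, y)| (sx + x, sy + y, sxx + x * x, sxy + x * y),
+    );
+    let b = (k * sxy - sx * sy) / (k * sxx - sx * sx);
+    let a = (sy - b * sx) / k;
+    (a, b)
+}
+\end{verbatim}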
+
+\section{Profiling}
+
+%% profiler type in \code{primrose\_library::profiler}
+%% wraps an 'inner' implementation and implements whatever operations it does, keeping track of number of calls
+%% on drop, creates new file in folder specified by env variable
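+The wrapper can be sketched as below; the operation set, counter layout,
+and environment variable name are all assumptions for illustration.
+\begin{verbatim}
+use std::sync::atomic::{AtomicU64, Ordering};
+
+static FILE_ID: AtomicU64 = AtomicU64::new(0);
+
+/// Forwards to the inner Vec, counts calls, dumps counts on Drop.
+struct ProfilingVec<T> {
+    inner: Vec<T>,
+    pushes: u64,
+}
+
+impl<T> ProfilingVec<T> {
+    fn push(&mut self, value: T) {
+        self.pushes += 1;
+        self.inner.push(value);
+    }
+}
+
+impl<T> Drop for ProfilingVec<T> {
+    fn drop(&mut self) {
+        if let Ok(dir) = std::env::var("PROFILER_OUT_DIR") {
+            let id = FILE_ID.fetch_add(1, Ordering::Relaxed);
+            let path = std::path::Path::new(&dir).join(format!("{id}.json"));
+            let _ = std::fs::write(
+                path,
+                format!("{{\"n\":{},\"pushes\":{}}}", self.inner.len(), self.pushes),
+            );
+        }
+    }
+}
+\end{verbatim}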
+
+%% primrose results generated in \code{primrose::codegen}, which is called in \code{candelabra::profiler}
+%% picks the first valid candidate - performance doesn't really matter for this case
+%% each drop generates a file, so we get details of every individual collection allocated
+
+\section{Selection and Codegen}
+
+%% Generated code (opaque types)
+
+%% Selection algorithm, incl. adaptive containers
+
+%% Implementation w/ const generics
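+As one illustration of how an adaptive container and a const-generic
+switch-over threshold can fit together (this is not the generated code
+itself):
+\begin{verbatim}
+use std::collections::HashSet;
+use std::hash::Hash;
+
+/// Starts as a Vec, switches to a HashSet past THRESHOLD elements.
+enum AdaptiveSet<T, const THRESHOLD: usize> {
+    Small(Vec<T>),
+    Large(HashSet<T>),
+}
+
+impl<T: Hash + Ord, const THRESHOLD: usize> AdaptiveSet<T, THRESHOLD> {
+    fn insert(&mut self, value: T) {
+        if let AdaptiveSet::Small(v) = self {
+            if v.len() >= THRESHOLD {
+                // switch implementations once the threshold is crossed
+                *self = AdaptiveSet::Large(v.drain(..).collect());
+            }
+        }
+        match self {
+            AdaptiveSet::Small(v) => {
+                if !v.contains(&value) { v.push(value); }
+            }
+            AdaptiveSet::Large(s) => { s.insert(value); }
+        }
+    }
+}
+
+// Codegen could then expose an opaque alias so user code never names the
+// concrete choice, e.g.: type FastSet<T> = AdaptiveSet<T, 1024>;
+\end{verbatim}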
+
+\section{Miscellaneous Concerns}
+
+\todo{Justify Rust as the implementation language}
+
+\todo{Explain cargo's role in Rust projects \& how it is integrated}
+
+%% get project metadata from cargo
+%% available benchmarks and source directories
+%% works with most projects
+%% could be expanded to run as cargo command
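+For example, project metadata can be pulled from cargo with the stable
+\code{cargo metadata --format-version 1} interface; benchmark targets
+show up in its JSON output as targets of kind \code{bench}.
+\begin{verbatim}
+use std::process::Command;
+
+/// Run `cargo metadata` in the project and return its JSON output.
+fn cargo_metadata_json(project_dir: &str) -> std::io::Result<String> {
+    let out = Command::new("cargo")
+        .args(["metadata", "--format-version", "1", "--no-deps"])
+        .current_dir(project_dir)
+        .output()?;
+    Ok(String::from_utf8_lossy(&out.stdout).into_owned())
+}
+\end{verbatim}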
+
+\todo{Caching and other implementation details}
+
+\todo{Ease of use}
+
+%% parse minimal amount of information from criterion benchmark
+%% most common benchmarking tool, closest there is to a standard
+%% should be easy to adapt if/when cargo ships proper benchmarking support
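+As an example of the minimal parsing involved, criterion writes an
+\code{estimates.json} per benchmark under \code{target/criterion/...},
+from which only the mean point estimate is needed; the exact layout is
+criterion's and may change between versions. A sketch, assuming the
+\code{serde\_json} crate:
+\begin{verbatim}
+/// Pull the mean point estimate (in ns) out of criterion's estimates.json.
+fn mean_estimate_ns(estimates_json: &str) -> Option<f64> {
+    let v: serde_json::Value = serde_json::from_str(estimates_json).ok()?;
+    v.get("mean")?.get("point_estimate")?.as_f64()
+}
+\end{verbatim}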