\todo{Introduction}
\todo{Primrose adaptations}
\section{Tooling integration}
We retrieve project metadata from cargo: the available benchmarks and the source directories. This works with most projects without extra configuration, and could be expanded to run as a cargo subcommand.
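As a rough illustration of this step (a sketch only, not the actual candelabra code; it assumes the \code{serde_json} crate and shells out to \code{cargo metadata} rather than using a helper crate such as \code{cargo_metadata}):
\begin{verbatim}
use std::process::Command;

/// Illustrative helper (not the real candelabra API): ask cargo for project
/// metadata and pull out the names of the benchmark targets.
fn benchmark_targets(manifest_dir: &str) -> Vec<String> {
    let out = Command::new("cargo")
        .args(["metadata", "--format-version", "1", "--no-deps"])
        .current_dir(manifest_dir)
        .output()
        .expect("failed to run cargo metadata");
    let meta: serde_json::Value =
        serde_json::from_slice(&out.stdout).expect("cargo metadata was not valid JSON");

    let mut benches = Vec::new();
    for package in meta["packages"].as_array().into_iter().flatten() {
        for target in package["targets"].as_array().into_iter().flatten() {
            // Benchmark targets are reported with kind ["bench"].
            let is_bench = target["kind"]
                .as_array()
                .map_or(false, |k| k.iter().any(|v| v.as_str() == Some("bench")));
            if is_bench {
                if let Some(name) = target["name"].as_str() {
                    benches.push(name.to_string());
                }
            }
        }
    }
    benches
}
\end{verbatim}
The same JSON also lists each target's source path, which is where the source directories come from.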
We parse a minimal amount of information from the criterion benchmark output. Criterion is the most common benchmarking tool for Rust, and the closest thing there is to a standard, so this should be easy to adapt if/when cargo ships proper benchmarking support. Our own benchmarking of container implementations outputs the same format, but without using the criterion harness; this allows more flexibility, i.e.\ pre-processing results.
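For reference, a minimal sketch of reading the mean timing from a criterion-style result directory (field names follow criterion's usual \code{estimates.json} layout; it assumes \code{serde} with the derive feature and \code{serde_json}, and the real parsing code may differ):
\begin{verbatim}
use serde::Deserialize;
use std::{fs, path::Path};

/// Subset of criterion's estimates.json that we care about; treat this as an
/// illustrative sketch rather than a complete schema.
#[derive(Deserialize)]
struct Estimates {
    mean: Estimate,
}

#[derive(Deserialize)]
struct Estimate {
    point_estimate: f64, // nanoseconds per iteration
}

/// Read the mean execution time (in ns) from one benchmark result directory,
/// e.g. target/criterion/<benchmark>/new/.
fn mean_ns(result_dir: &Path) -> Option<f64> {
    let raw = fs::read(result_dir.join("estimates.json")).ok()?;
    let est: Estimates = serde_json::from_slice(&raw).ok()?;
    Some(est.mean.point_estimate)
}
\end{verbatim}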
Each trait has its own set of benchmarks, which run different workloads. The benchmarker trait itself does not fix the collection sizes ($n$ values): \code{candelabra::cost::benchmark} generates code which simply calls the \code{candelabra_benchmarker} methods, and the $n$ values are set there, varying from [...].
\todo{show example benchmarks for hashmap and vec}
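To make this division of responsibilities concrete, the following sketch shows the rough shape of a per-trait benchmarker and the kind of code \code{candelabra::cost::benchmark} might generate around it; the names, operations, and $n$ values here are all hypothetical:
\begin{verbatim}
use std::time::Duration;

/// Hypothetical shape of a per-trait benchmarker: one benchmark per
/// operation, taking the collection size n as a parameter so that the
/// generated code (not the trait) decides which sizes to run.
trait ContainerBenchmarker {
    fn benchmark_insert(&self, n: usize) -> Duration;
    fn benchmark_contains(&self, n: usize) -> Duration;
}

/// The kind of driver the generated benchmark code might contain:
/// call each benchmark at every chosen n and record the results.
fn run_benchmarks<B: ContainerBenchmarker>(b: &B) -> Vec<(usize, Duration, Duration)> {
    // Illustrative n values only; the real generated code sets its own range.
    [10usize, 100, 1_000, 10_000]
        .iter()
        .map(|&n| (n, b.benchmark_insert(n), b.benchmark_contains(n)))
        .collect()
}
\end{verbatim}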
Fitting is done with least squares in \code{candelabra::cost::fit}. \todo{list other methods tried} Least squares is simple, which helps `smooth out' noisy benchmark results.
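Concretely, for one operation with observed mean times $t_i$ at sizes $n_i$, least squares picks the parameters $\beta$ of the cost model $f_\beta$ that minimise the sum of squared residuals (the exact form of $f_\beta$ is left open here):
\[
\hat{\beta} = \arg\min_{\beta} \sum_{i} \left( t_i - f_{\beta}(n_i) \right)^2 .
\]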
The profiler type in \code{primrose_library::profiler} wraps an `inner' implementation and implements whatever operations it does, keeping track of the number of calls to each. On drop, it creates a new file in the folder specified by an environment variable.
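The following is an illustrative sketch of this pattern for a single wrapped type, not the real \code{primrose_library::profiler} code; the wrapper name, the counted operations, the environment variable, and the output format are all hypothetical:
\begin{verbatim}
use std::{cell::Cell, env, fs, path::PathBuf};

/// Sketch of a profiling wrapper: forwards operations to an inner container
/// and counts how often each one is called.
struct ProfilingVec<T> {
    inner: Vec<T>,
    pushes: Cell<u64>,
    contains_checks: Cell<u64>,
}

impl<T: PartialEq> ProfilingVec<T> {
    fn new() -> Self {
        Self { inner: Vec::new(), pushes: Cell::new(0), contains_checks: Cell::new(0) }
    }

    fn push(&mut self, value: T) {
        self.pushes.set(self.pushes.get() + 1);
        self.inner.push(value);
    }

    fn contains(&self, value: &T) -> bool {
        self.contains_checks.set(self.contains_checks.get() + 1);
        self.inner.contains(value)
    }
}

impl<T> Drop for ProfilingVec<T> {
    fn drop(&mut self) {
        // Assumed environment variable name, for illustration only.
        if let Ok(dir) = env::var("PROFILER_OUT_DIR") {
            let path = PathBuf::from(dir).join(format!("profile-{:p}.json", self));
            let _ = fs::write(
                &path,
                format!(
                    "{{\"len\":{},\"pushes\":{},\"contains\":{}}}",
                    self.inner.len(),
                    self.pushes.get(),
                    self.contains_checks.get()
                ),
            );
        }
    }
}
\end{verbatim}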
The primrose results are generated in \code{primrose::codegen}, which is called from \code{candelabra::profiler}. We pick the first valid candidate, since performance does not really matter for this case. Each drop generates a file, so we get details of every individual collection allocated.
\todo{immediately aggregate these into summary statistics, for speed}
\todo{mention benchmark repetition}
We then estimate a cost for each candidate: for each operation, the fitted cost at the average observed size is multiplied by the number of times that operation was called ($\mathit{op}(\mathit{avg\_n}) \times \mathit{op\_times}$), and these contributions are summed. The candidate with the smallest estimated cost is picked.
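Written out (notation chosen here for exposition, not taken from the codebase): for a candidate $c$ the estimated cost is
\[
\hat{C}(c) = \sum_{o \in \mathrm{ops}} f_{c,o}(\bar{n}) \cdot k_o ,
\]
where $f_{c,o}$ is the fitted cost model for operation $o$ on candidate $c$, $\bar{n}$ is the average collection size recorded by the profiler, and $k_o$ is the recorded number of calls to $o$; the candidate with the smallest $\hat{C}(c)$ is selected.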
\todo{Caching}
\todo{Other implementation details?}