\todo{Introduction}

\todo{Primrose adaptations}

\section{Tooling integration}

We obtain project metadata, including the available benchmarks and the source directories, from cargo. This works with most projects, and could later be expanded to run as a cargo subcommand. A minimal sketch of this step is given at the end of this section.

We parse only a minimal amount of information from each criterion benchmark. Criterion is the most common benchmarking tool for Rust, and the closest thing there is to a standard, so this should be easy to adapt if and when cargo ships proper benchmarking support.

Our benchmarking of the container implementations also outputs this format, but without using the criterion harness. This allows more flexibility, such as pre-processing the results. Each trait has its own set of benchmarks, which run different workloads, and the benchmarker trait itself does not fix the collection sizes ($n$ values). \todo{Example benchmarks for HashMap and Vec} \code{candelabra::cost::benchmark} generates code which simply calls the \code{candelabra_benchmarker} methods; the $n$ values are set there, and vary from [...]. This split is sketched at the end of this section.

Fitting is done with least squares in \code{candelabra::cost::fit}. \todo{List other methods tried} This approach is simple, and helps `smooth out' noisy benchmark results.

The profiler type in \code{primrose_library::profiler} wraps an `inner' implementation and implements whatever operations it does, keeping track of the number of calls to each. On drop, it creates a new file in the folder specified by an environment variable. The primrose results are generated in \code{primrose::codegen}, which is called from \code{candelabra::profiler}; it picks the first valid candidate, since performance does not really matter for this case. Because each drop generates a file, we get details of every individual collection allocated. \todo{Immediately aggregate these into summary statistics, for speed} \todo{Mention benchmark repetition}

We then estimate a cost for each candidate: for every operation, the fitted cost at the average observed size, $\mathit{op}(\bar{n})$, is multiplied by the number of times that operation was called, $\mathit{op\_times}$, and the candidate with the smallest total is picked.

\todo{Caching}

\todo{Other implementation details?}
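As an illustration of the metadata step, the sketch below shells out to \code{cargo metadata}; the flags are standard cargo, but this is only an assumption about how the information could be gathered, and the JSON deserialisation (e.g.\ with \code{serde_json} or the \code{cargo_metadata} crate) is elided.

\begin{verbatim}
// Sketch: ask cargo for workspace metadata (bench targets, source dirs).
// This is illustrative only; it is not how candelabra itself is written.
use std::process::Command;

fn main() -> std::io::Result<()> {
    // `cargo metadata` emits a JSON description of the workspace, which
    // includes every target (benchmarks among them) and each package's
    // manifest path, from which the source directories can be derived.
    let output = Command::new("cargo")
        .args(["metadata", "--format-version", "1", "--no-deps"])
        .output()?;
    assert!(output.status.success(), "cargo metadata failed");

    // A real tool would deserialise this JSON and pull out the targets
    // whose kind is "bench"; here we only show that the data is available.
    let json = String::from_utf8_lossy(&output.stdout);
    println!("received {} bytes of workspace metadata", json.len());
    Ok(())
}
\end{verbatim}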
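The split between the benchmarker trait and the generated driver could look roughly like the following; the trait, its methods, and the sizes are illustrative placeholders rather than the actual \code{candelabra_benchmarker} API.

\begin{verbatim}
use std::time::{Duration, Instant};

/// Illustrative benchmarker for one container trait's workload. Note that
/// it does not fix the collection sizes: the generated driver supplies them.
pub trait ContainerBenchmarker {
    type Container;
    /// Build a container holding `n` elements.
    fn setup(&mut self, n: usize) -> Self::Container;
    /// Run this trait's workload against the prepared container.
    fn workload(&mut self, container: &mut Self::Container);
}

/// The kind of driver the benchmark generator could emit: it simply calls
/// the benchmarker methods for each chosen `n` and records
/// (n, elapsed time) pairs for the fitting step.
pub fn run_benchmarks<B: ContainerBenchmarker>(
    bench: &mut B,
) -> Vec<(usize, Duration)> {
    // Placeholder sizes; the generated code sets its own range of n.
    const NS: &[usize] = &[10, 100, 1_000, 10_000];
    NS.iter()
        .map(|&n| {
            let mut container = bench.setup(n);
            let start = Instant::now();
            bench.workload(&mut container);
            (n, start.elapsed())
        })
        .collect()
}
\end{verbatim}

Keeping the harness this thin is what makes it straightforward to pre-process the raw results before fitting.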
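Written out, the least-squares fit in \code{candelabra::cost::fit} chooses coefficients for a cost model built from some set of basis functions $\phi_j$ (the particular basis mentioned below is only an example):
\[
  \hat{\beta} = (\Phi^{\top}\Phi)^{-1}\Phi^{\top}t
  \quad\text{minimises}\quad
  \sum_i \Bigl(t_i - \sum_j \beta_j\,\phi_j(n_i)\Bigr)^{2},
\]
where $(n_i, t_i)$ are the benchmark observations (collection size and measured time), $\phi_j$ might be $1$, $n$, $n^2$, $n\log n$, and $\Phi_{ij} = \phi_j(n_i)$. Averaging over many noisy observations in this way is what `smooths out' the benchmark results.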
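The profiling wrapper could be structured roughly as below; the tracked operations, the output format, and the environment variable name are placeholders, not the actual \code{primrose_library::profiler} interface.

\begin{verbatim}
use std::cell::Cell;
use std::fs;
use std::path::PathBuf;
use std::sync::atomic::{AtomicU64, Ordering};

// Gives each dropped collection its own output file.
static FILE_ID: AtomicU64 = AtomicU64::new(0);

pub struct ProfilerWrapper<T> {
    inner: Vec<T>,              // the wrapped 'inner' implementation
    push_count: Cell<u64>,      // one counter per tracked operation
    contains_count: Cell<u64>,
    max_n: Cell<usize>,         // track sizes so an average n can be derived
}

impl<T: PartialEq> ProfilerWrapper<T> {
    pub fn new() -> Self {
        Self {
            inner: Vec::new(),
            push_count: Cell::new(0),
            contains_count: Cell::new(0),
            max_n: Cell::new(0),
        }
    }

    // Every forwarded operation bumps its counter, then delegates.
    pub fn push(&mut self, value: T) {
        self.push_count.set(self.push_count.get() + 1);
        self.inner.push(value);
        self.max_n.set(self.max_n.get().max(self.inner.len()));
    }

    pub fn contains(&self, value: &T) -> bool {
        self.contains_count.set(self.contains_count.get() + 1);
        self.inner.contains(value)
    }
}

impl<T> Drop for ProfilerWrapper<T> {
    // On drop, write this collection's counts to a fresh file in the
    // directory named by an environment variable (name assumed here).
    fn drop(&mut self) {
        if let Ok(dir) = std::env::var("PROFILER_OUT_DIR") {
            let id = FILE_ID.fetch_add(1, Ordering::Relaxed);
            let path = PathBuf::from(dir).join(format!("profile-{id}.csv"));
            let line = format!(
                "{},{},{}\n",
                self.max_n.get(),
                self.push_count.get(),
                self.contains_count.get()
            );
            let _ = fs::write(path, line);
        }
    }
}
\end{verbatim}

Writing the file in \code{Drop} is what produces one record per individual collection allocated, hence the need to aggregate these records afterwards.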
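Finally, the selection rule described above can be written as follows; the notation is ours and simply restates the description:
\[
  \mathrm{cost}(c) = \sum_{o \in \mathrm{ops}} C_{c,o}(\bar{n}) \cdot k_o,
\]
where $C_{c,o}$ is the fitted cost of operation $o$ on candidate $c$, $\bar{n}$ is the average collection size observed during profiling, and $k_o$ is the number of times $o$ was called. The candidate with the smallest $\mathrm{cost}(c)$ is chosen.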