author    Aria Shrimpton <me@aria.rip>  2024-02-01 01:52:41 +0000
committer Aria Shrimpton <me@aria.rip>  2024-02-01 01:52:41 +0000
commit    7ce81d97d9b04d537907704e6883c65ba52f56e2 (patch)
tree      e01b816b20aad98519f112e1739cb487297e1186
parent    05f63fc9f2277cce210a322b700f84040fcbd763 (diff)
minor thesis fixes
-rw-r--r--  thesis/Justfile                  6
-rw-r--r--  thesis/parts/implementation.tex  6
2 files changed, 6 insertions, 6 deletions
diff --git a/thesis/Justfile b/thesis/Justfile
index 2805cb0..4e71af3 100644
--- a/thesis/Justfile
+++ b/thesis/Justfile
@@ -1,10 +1,10 @@
 default: build
 
 build:
-    cd thesis/; latexmk -bibtex -pdf
+    latexmk -bibtex -pdf
 
 watch:
-    cd thesis/; latexmk -bibtex -pdf -pvc
+    latexmk -bibtex -pdf -pvc
 
 clean:
-    cd thesis/; latexmk -c
+    latexmk -c
diff --git a/thesis/parts/implementation.tex b/thesis/parts/implementation.tex
index 4faa8a8..884f675 100644
--- a/thesis/parts/implementation.tex
+++ b/thesis/parts/implementation.tex
@@ -20,14 +20,14 @@ each trait has its own set of benchmarks, which run different workloads
benchmarker trait doesn't have Ns
example benchmarks for hashmap and vec
-\code{candelabra::cost::benchmark} generates code which just calls candelabra_benchmarker methods
+\code{candelabra::cost::benchmark} generates code which just calls \code{candelabra\_benchmarker} methods
Ns are set there, and vary from [...]
fitting done with least squares in \code{candelabra::cost::fit}
list other methods tried
simple, which helps 'smooth out' noisy benchmark results
-profiler type in \code{primrose_library::profiler}}
+profiler type in \code{primrose\_library::profiler}
wraps an 'inner' implementation and implements whatever operations it does, keeping track of number of calls
on drop, creates new file in folder specified by env variable
@@ -37,7 +37,7 @@ each drop generates a file, so we get details of every individual collection all
\todo{immediately aggregate these into summary statistics, for speed}
\todo{mention benchmark repetition}
-estimate a cost for each candidate: op(avg_n) * op_times for each op
+estimate a cost for each candidate: op(avg\_n) * op\_times for each op
pick the smallest one
\todo{update for nsplit stuff}
\todo{mention difficulties with lazy vecs}
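
The fitting step mentioned in the notes above ("fitting done with least squares in \code{candelabra::cost::fit}") could look roughly like the following minimal sketch. It assumes the simplest possible model, cost(n) = a + b * n, fitted by ordinary least squares; the actual basis functions and fitting code used by candelabra may differ.

// Minimal sketch: fit cost(n) = a + b * n to benchmark results by ordinary
// least squares. The linear model form is an assumption; candelabra::cost::fit
// may use a different set of basis functions.
fn fit_linear(ns: &[f64], costs: &[f64]) -> (f64, f64) {
    assert_eq!(ns.len(), costs.len());
    let m = ns.len() as f64;
    let sum_x: f64 = ns.iter().sum();
    let sum_y: f64 = costs.iter().sum();
    let sum_xy: f64 = ns.iter().zip(costs).map(|(x, y)| x * y).sum();
    let sum_xx: f64 = ns.iter().map(|x| x * x).sum();
    // Closed-form least-squares solution for a single predictor.
    let b = (m * sum_xy - sum_x * sum_y) / (m * sum_xx - sum_x * sum_x);
    let a = (sum_y - b * sum_x) / m;
    (a, b)
}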
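
The profiler type described in the notes (\code{primrose_library::profiler}) wraps an inner implementation, counts calls to each operation, and writes a file on drop. A rough sketch of that idea is below; the struct, operations, and the environment variable name are illustrative assumptions, not the actual primrose_library API.

use std::env;
use std::fs::File;
use std::io::Write;
use std::sync::atomic::{AtomicUsize, Ordering};

// Gives each dropped wrapper its own output file.
static FILE_ID: AtomicUsize = AtomicUsize::new(0);

// Illustrative wrapper: forwards two example operations to an inner Vec,
// counting how often each is called.
struct ProfilerWrapper<T> {
    inner: Vec<T>, // stand-in for the wrapped collection implementation
    push_calls: usize,
    contains_calls: usize,
}

impl<T: PartialEq> ProfilerWrapper<T> {
    fn new() -> Self {
        Self { inner: Vec::new(), push_calls: 0, contains_calls: 0 }
    }

    fn push(&mut self, value: T) {
        self.push_calls += 1; // record the call, then forward it
        self.inner.push(value);
    }

    fn contains(&mut self, value: &T) -> bool {
        self.contains_calls += 1;
        self.inner.contains(value)
    }
}

impl<T> Drop for ProfilerWrapper<T> {
    fn drop(&mut self) {
        // Output directory comes from an environment variable (name assumed).
        if let Ok(dir) = env::var("PROFILER_OUT_DIR") {
            let id = FILE_ID.fetch_add(1, Ordering::Relaxed);
            let path = format!("{dir}/profile-{}-{id}.txt", std::process::id());
            if let Ok(mut f) = File::create(path) {
                let _ = writeln!(f, "push={} contains={}", self.push_calls, self.contains_calls);
            }
        }
    }
}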
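
The selection step in the notes ("estimate a cost for each candidate: op(avg_n) * op_times for each op, pick the smallest one") could amount to something like the sketch below, again with assumed type and field names and a fitted linear per-operation model.

use std::collections::HashMap;

// Hypothetical sketch of candidate selection: estimate each candidate's total
// cost as the sum over operations of fitted_cost_of_op(avg_n) * times_called,
// then pick the candidate with the smallest estimate.
struct OpUsage {
    op: &'static str,
    avg_n: f64, // average collection size observed for this op
    times: f64, // number of calls observed by the profiler
}

struct Candidate {
    name: &'static str,
    // Fitted (a, b) coefficients per op, i.e. cost(n) = a + b * n.
    model: HashMap<&'static str, (f64, f64)>,
}

impl Candidate {
    fn estimated_cost(&self, usage: &[OpUsage]) -> f64 {
        usage
            .iter()
            .map(|u| {
                let (a, b) = self.model.get(u.op).copied().unwrap_or((0.0, 0.0));
                (a + b * u.avg_n) * u.times // op(avg_n) * op_times
            })
            .sum()
    }
}

fn pick_best<'a>(candidates: &'a [Candidate], usage: &[OpUsage]) -> &'a Candidate {
    candidates
        .iter()
        .min_by(|x, y| {
            x.estimated_cost(usage)
                .partial_cmp(&y.estimated_cost(usage))
                .expect("cost estimates should not be NaN")
        })
        .expect("at least one candidate")
}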