author     Aria Shrimpton <me@aria.rip>    2024-03-19 23:45:14 +0000
committer  Aria Shrimpton <me@aria.rip>    2024-03-19 23:45:14 +0000
commit     db7c7a0bb4b96635737c4351d641aa87318865bf (patch)
tree       4ab058c62a5537fffff5225323516f0bf2f1c457 /thesis/parts/background.tex
parent     4f34dc852c94f36e972799cfe87257ed547af906 (diff)
improve citation style & bibliography
Diffstat (limited to 'thesis/parts/background.tex')
-rw-r--r--  thesis/parts/background.tex | 10
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/thesis/parts/background.tex b/thesis/parts/background.tex
index a4383bc..f705aad 100644
--- a/thesis/parts/background.tex
+++ b/thesis/parts/background.tex
@@ -77,7 +77,7 @@ This means that developers are forced to guess based on their knowledge of the u
\subsection{Rules-based approaches}
One approach to the container selection problem is to allow the developer to make the choice initially, but use some tool to detect poor choices.
-Chameleon\parencite{shacham_chameleon_2009} uses this approach.
+Chameleon\citep{shacham_chameleon_2009} uses this approach.
It first collects statistics from program benchmarks using a ``semantic profiler''.
This includes the space used by collections over time and the counts of each operation performed.
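For illustration only, the data such a semantic profiler gathers can be pictured as a per-collection record of operation counts plus size samples over time. The Rust sketch below is a hypothetical stand-in (Chameleon itself instruments Java programs, and none of these names come from the tool):

use std::collections::HashMap;

// Hypothetical sketch of the kind of per-collection statistics a
// Chameleon-style semantic profiler might record; not Chameleon's own code.
#[derive(Default)]
struct CollectionStats {
    // How many times each operation (e.g. "insert", "contains") was called.
    op_counts: HashMap<&'static str, u64>,
    // Samples of the collection's length taken as the program runs.
    size_samples: Vec<usize>,
}

impl CollectionStats {
    fn record_op(&mut self, op: &'static str, current_len: usize) {
        *self.op_counts.entry(op).or_insert(0) += 1;
        self.size_samples.push(current_len);
    }

    // Peak observed size is one input a selection rule could key on.
    fn max_size(&self) -> usize {
        self.size_samples.iter().copied().max().unwrap_or(0)
    }
}

fn main() {
    let mut stats = CollectionStats::default();
    stats.record_op("insert", 1);
    stats.record_op("contains", 1);
    println!("peak size: {}", stats.max_size());
}

Selection rules can then be matched against summaries like these (operation mix and peak size) to flag a poor container choice.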
@@ -94,13 +94,13 @@ This results in selection rules being more restricted than they otherwise could
For instance, a rule cannot suggest a \code{HashSet} instead of a \code{LinkedList} as the two are not semantically identical.
Chameleon has no way of knowing if doing so will break the program's functionality and so it does not make the suggestion.
-CoCo \parencite{hutchison_coco_2013} and work by \"{O}sterlund \parencite{osterlund_dynamically_2013} use similar techniques, but work as the program runs.
+CoCo \citep{hutchison_coco_2013} and work by \"{O}sterlund \citep{osterlund_dynamically_2013} use similar techniques, but work as the program runs.
This works well for programs with different phases of execution, such as loading and then working on data.
However, the overhead from profiling and from checking rules may not be worth the improvements in other programs, where access patterns are roughly the same throughout.
\subsection{ML-based approaches}
-Brainy\parencite{jung_brainy_2011} gathers statistics similarly, however it uses machine learning (ML) for selection instead of programmed rules.
+Brainy\citep{jung_brainy_2011} gathers statistics similarly, however it uses machine learning (ML) for selection instead of programmed rules.
ML has the advantage of being able to detect patterns a human may not be aware of.
For example, Brainy takes into account statistics from hardware counters, which are difficult for a human to reason about.
@@ -108,7 +108,7 @@ This also makes it easier to add new collection implementations, as rules do not
\subsection{Estimate-based approaches}
-CollectionSwitch\parencite{costa_collectionswitch_2018} is an online solution which adapts as the program runs and new information becomes available.
+CollectionSwitch\citep{costa_collectionswitch_2018} is an online solution which adapts as the program runs and new information becomes available.
First, a performance model is built for each container implementation.
This gives an estimate of some cost for each operation at a given collection size.
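As an illustration of what such a model can look like (the functional form here is ours, not necessarily the one CollectionSwitch fits), the cost of operation $o$ on implementation $c$ at collection size $n$ could be approximated by a curve whose coefficients are estimated from benchmark runs:

\[
  C_{c,o}(n) \approx a_{c,o} + b_{c,o} \cdot n + d_{c,o} \cdot \log n
\]

The expected cost of a workload on implementation $c$ could then be estimated by weighting each $C_{c,o}(n)$ by the number of times operation $o$ is observed at run time.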
@@ -131,7 +131,7 @@ However, it does not take the collection size into account.
Most of the approaches we have highlighted focus on non-functional requirements, and use programming language features to enforce functional requirements.
We will now examine tools which focus on container selection based on functional requirements.
-Primrose \parencite{qin_primrose_2023} is one such tool, which uses a model-based approach.
+Primrose \citep{qin_primrose_2023} is one such tool, which uses a model-based approach.
It allows the application developer to specify semantic requirements using a Domain-Specific Language (DSL), and syntactic requirements using Rust's traits.
The semantic requirements are expressed as a list of predicates, each representing a semantic property.
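To make the split concrete, the Rust sketch below shows how syntactic requirements can be phrased as a trait that every candidate container must implement; the Container trait, its methods, and the example function are hypothetical stand-ins rather than Primrose's actual interface or DSL. A semantic requirement, for example that elements are never duplicated, would be stated separately as a predicate and used to filter the candidates.

use std::collections::HashSet;
use std::hash::Hash;

// Hypothetical syntactic interface: any type offering these methods is a
// syntactic candidate for selection. (Not Primrose's real trait.)
trait Container<T> {
    fn insert(&mut self, value: T);
    fn contains(&self, value: &T) -> bool;
    fn len(&self) -> usize;
}

// One possible candidate: the standard library's HashSet.
impl<T: Eq + Hash> Container<T> for HashSet<T> {
    fn insert(&mut self, value: T) { HashSet::insert(self, value); }
    fn contains(&self, value: &T) -> bool { HashSet::contains(self, value) }
    fn len(&self) -> usize { HashSet::len(self) }
}

// Application code written against the trait, so that selection can swap in
// any implementation that also satisfies the stated semantic predicates.
fn count_unique<T, C: Container<T>>(items: Vec<T>, mut store: C) -> usize {
    for item in items {
        if !store.contains(&item) {
            store.insert(item);
        }
    }
    store.len()
}

fn main() {
    println!("{}", count_unique(vec![1, 2, 2, 3], HashSet::new()));
}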