Diffstat (limited to 'thesis/parts/background.tex')
-rw-r--r--  thesis/parts/background.tex | 58
1 file changed, 56 insertions, 2 deletions
diff --git a/thesis/parts/background.tex b/thesis/parts/background.tex
index 7e412fe..ca2a670 100644
--- a/thesis/parts/background.tex
+++ b/thesis/parts/background.tex
@@ -33,7 +33,7 @@ Depending on the application however, the choice of concrete implementation can
\subsection{Chameleon}
-Chameleon is one paper which attempts to solve the container selection problem.
+Chameleon\parencite{shacham_chameleon_2009} is one paper which attempts to solve the container selection problem.
It works on Java programs, and requires both a runtime library and a modified garbage collector.
First, it runs the program normally, and collects data on the collections used via a ``semantic profiler''.
@@ -58,8 +58,62 @@ Although users are able to add rules, Chameleon still requires effort in order f
It is also limited to patterns that the developer is able to formalise, such as the above rule for indexing a linked list.
In many cases, there may be patterns that could be used to suggest a better option, but that the developer does not see or cannot formalise.
-Finally, Chameleon assuems that all implementations are semantically identical.
+Finally, Chameleon assumes that all implementations are semantically identical.
In other words, the program will function the same no matter which one is used.
This results in selection rules needing to be more restricted than they otherwise could be.
For instance, a rule cannot suggest a \code{HashSet} instead of a \code{LinkedList}, as the two are not semantically identical.
Chameleon has no way of knowing if doing so will break the program's functionality, and so it does not make a suggestion.
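+
+A minimal Java illustration of this (our own example, not taken from the Chameleon paper) shows how swapping a \code{LinkedList} for a \code{HashSet} changes a program's observable behaviour, even though both implement \code{Collection}:
+
+\begin{verbatim}
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.LinkedList;
+
+public class SemanticsExample {
+    public static void main(String[] args) {
+        // Both types implement Collection, so swapping them compiles...
+        Collection<Integer> list = new LinkedList<>();
+        Collection<Integer> set = new HashSet<>();
+
+        for (int x : new int[] {1, 2, 2, 3}) {
+            list.add(x);
+            set.add(x);
+        }
+
+        // ...but their observable behaviour differs:
+        System.out.println(list); // [1, 2, 2, 3] (duplicates kept, insertion order preserved)
+        System.out.println(set);  // e.g. [1, 2, 3] (duplicates dropped, iteration order unspecified)
+    }
+}
+\end{verbatim}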
+
+\subsection{Brainy}
+
+%% - uses ai model to predict based on target microarchitecture, and runtime behaviour
+%% - uses access patterns, etc.
+%% - also assumes semantically identical set of candidates
+%% - uses application generator for training data
+%% - focuses on the performance difference between microarchitectures
+%% - intended to be run at each install site
+
+Brainy\parencite{jung_brainy_2011} attempts to solve the container selection problem using machine learning.
+
+Similar to Chameleon, Brainy runs the program on developer-provided input and collects statistics on how its collections are used.
+Unlike Chameleon's, these statistics also include hardware counters, such as cache utilisation and branch misprediction rate.
+
+This profiling information is then fed to an ML model, which predicts which of the implementations it was trained on is likely to be the most performant for that specific program and microarchitecture.
+
+Of the existing literature, Brainy appears to be the only method which directly accounts for hardware factors.
+The authors propose that their tool can be run at install-time for each target system, and then used by developers or by applications integrated with it to select the best data structure for that hardware.
+This allows it to compensate for performance differences between hardware configurations: for instance, the size of the cache may affect the performance of a growable list compared to a linked list.
+The paper itself demonstrates the effectiveness of this, stating that ``On average, 43\% of the randomly generated applications have different optimal data structures [across different architectures]''.
+
+The model itself is trained on a dataset of randomly generated applications, each consisting of a random sequence of collection operations.
+This is intended to avoid overfitting on specific applications, as a large number of applications with different characteristics can be generated.
+However, the applications generated are unlikely to be representative of real applications.
+In practice, real programs usually repeat certain patterns of operations, meaning the next operation is never truly random.
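+
+A simplified sketch of this style of workload generation might look like the following Java snippet; the operation set and the uniform weighting are illustrative, not taken from the Brainy paper:
+
+\begin{verbatim}
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Random;
+
+// Purely illustrative: each operation is chosen independently at random,
+// unlike real programs, which tend to repeat structured patterns of operations.
+public class RandomWorkload {
+    public static void main(String[] args) {
+        Random rng = new Random(42);
+        List<Integer> container = new ArrayList<>();
+
+        for (int i = 0; i < 10_000; i++) {
+            switch (rng.nextInt(3)) {
+                case 0 -> container.add(rng.nextInt(1_000));
+                case 1 -> container.contains(rng.nextInt(1_000));
+                case 2 -> {
+                    if (!container.isEmpty()) {
+                        container.remove(rng.nextInt(container.size()));
+                    }
+                }
+            }
+        }
+        System.out.println("final size: " + container.size());
+    }
+}
+\end{verbatim}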
+
+Like the other approaches mentioned, Brainy chooses from a fixed list of candidate containers at each site.
+However, this list is selected based on the usage at each call site, meaning it is somewhat aware of the semantics required by each usage.
+The set of alternative data structures is decided based on the original data structure (vector, list, set) and whether the order is ever used.
+This allows for a bigger pool of containers to choose from: for instance, a vector can also be swapped for a set in some circumstances.
+Even so, the approach is still limited in the semantics it can identify; for instance, it cannot differentiate a stack or queue from any other type of list.
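+
+As a purely illustrative sketch of this kind of rule (these are not Brainy's actual candidate sets, and other semantic differences such as duplicate handling are ignored), the selection might look something like:
+
+\begin{verbatim}
+import java.util.EnumSet;
+
+// Illustrative only: widen the candidate pool when iteration order is never observed.
+public class CandidateSets {
+    enum Kind { VECTOR, LIST, SET }
+
+    static EnumSet<Kind> candidatesFor(Kind original, boolean orderEverUsed) {
+        if (orderEverUsed) {
+            // Order is observed, so only order-preserving structures
+            // are considered safe replacements.
+            return original == Kind.SET
+                    ? EnumSet.of(Kind.SET)
+                    : EnumSet.of(Kind.VECTOR, Kind.LIST);
+        }
+        // Order is never observed, so a set joins the candidate pool too.
+        return EnumSet.of(Kind.VECTOR, Kind.LIST, Kind.SET);
+    }
+}
+\end{verbatim}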
+
+While Brainy achieves significant improvements in program performance (``average performance improvements of 27\% and 33\% on both microarchitectures''), it is subject to limitations similar to Chameleon's.
+
+\subsection{CollectionSwitch}
+
+%% - online selection - uses library so easier to integrate
+%% - collects access patterns, size patterns, etc.
+%% - performance model is built beforehand for each concrete implementation, with a cost model used to estimate the relative performance of each based on observed usage
+%% - switches underlying implementation dynamically
+%% - also able to decide size thresholds where the implementation should be changed and do this
+%% - doesn't require specific knowledge of the implementations, although does still assume all are semantically equivalent
+
+CollectionSwitch\parencite{costa_collectionswitch_2018} takes a different approach to the container selection problem, adapting as the program runs and new information becomes available.
+
+First, a performance model is built for each container implementation.
+This is done by performing each operation many times in succession, varying the length of the collection.
+These measurements are used to fit a polynomial, which estimates the cost of each operation on a collection of size $n$.
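+As a sketch of this idea (the notation is ours, not the paper's), the cost of an operation on a given implementation might be modelled as a low-degree polynomial in the collection size $n$:
+\[ C_{\mathrm{op}}(n) \approx a_0 + a_1 n + a_2 n^2 \]
+where the coefficients $a_i$ are fitted to the benchmark measurements for that implementation and operation.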
+
+The estimated total cost of each candidate implementation is then calculated for each individual instance, based on the operations observed over its lifetime.
+If switching to another implementation would reduce the average total cost by more than a certain threshold, CollectionSwitch will start using that implementation for newly allocated instances, and may also switch existing instances over to it.
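+In the same sketch notation, the estimated total cost of an instance under implementation $i$ is
+\[ C_i = \sum_{\mathrm{op}} C^{i}_{\mathrm{op}}(n_{\mathrm{op}}) \]
+where the sum ranges over the operations observed on that instance and $n_{\mathrm{op}}$ is the collection's size when each operation occurred.
+Roughly, a switch from the current implementation $c$ to a candidate $j$ is triggered when the average of $C_c - C_j$ across instances exceeds the configured threshold.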
+
+By deriving its cost models from benchmarks, CollectionSwitch manages to be more flexible than rule-based approaches such as Chameleon.