In this chapter we provide an overview of the container selection problem and its effect on program correctness and performance.
We then survey approaches taken by modern programming languages and by the existing literature.
Finally, we identify the gaps in that literature and explain what we aim to contribute.

\section{Container Selection}

The vast majority of programs make extensive use of collection data types --- types designed to hold many instances of other data types.
This includes structures like fixed-size arrays, growable lists, and key-value mappings.

In many languages, the standard library provides a variety of collections, forcing us to choose which is best.
Consider the Rust types \code{Vec<T>}, a dynamic array, and \code{HashSet<T>}, a hash-based set.
If we care about ordering, or about preserving duplicates, then we must use \code{Vec<T>}.
But if we do not, and our program calls \code{contains} frequently, then \code{HashSet<T>} is likely to be more performant.
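As a concrete illustration of this trade-off, consider membership tests on both types (a minimal sketch; the sizes and values are illustrative):

```rust
use std::collections::HashSet;

fn main() {
    let vec: Vec<u32> = (0..1000).collect();
    let set: HashSet<u32> = (0..1000).collect();

    // Vec::contains scans every element: O(n) per query.
    assert!(vec.contains(&999));
    // HashSet::contains hashes the key: O(1) expected per query.
    assert!(set.contains(&999));

    // But a HashSet discards ordering and duplicates:
    let with_dups = vec![1, 1, 2];
    let deduped: HashSet<u32> = with_dups.iter().copied().collect();
    assert_eq!(with_dups.len(), 3);
    assert_eq!(deduped.len(), 2);
}
```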

We refer to this problem as container selection, and say that we must satisfy both functional and non-functional requirements.

\subsection{Functional requirements}

The functional requirements tell us how the container will be used and how it must behave.

Continuing with our previous example, we can see that \code{Vec} and \code{HashSet} implement different methods.
\code{Vec} implements \code{.get(index)} while \code{HashSet} does not --- indexed access would not make sense for an unordered collection.
If we try to swap \code{Vec} for \code{HashSet}, the resulting program will likely not compile.

We will call the operations a container implements the ``syntactic properties'' of the container.
In object-oriented programming, we might say the container must implement an interface; in Rust, we would say it implements a trait.

However, syntactic properties alone are not always enough to select an appropriate container.
Suppose our program only requires a container to provide \code{.insert(value)} and \code{.len()}.
Both \code{Vec} and \code{HashSet} satisfy these requirements, but we might rely on \code{.len()} counting duplicate insertions.
In this case, \code{HashSet} would give us different behaviour, causing our program to behave incorrectly.
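To sketch this, suppose we capture the syntactic requirements as a hypothetical Rust trait (the trait and its names are illustrative, not from any real library); both types satisfy it, yet disagree semantically:

```rust
use std::collections::HashSet;
use std::hash::Hash;

// Hypothetical trait naming only the syntactic properties we need.
trait Container<T> {
    fn insert(&mut self, value: T);
    fn len(&self) -> usize;
}

impl<T> Container<T> for Vec<T> {
    fn insert(&mut self, value: T) { self.push(value); }
    fn len(&self) -> usize { Vec::len(self) }
}

impl<T: Eq + Hash> Container<T> for HashSet<T> {
    fn insert(&mut self, value: T) { HashSet::insert(self, value); }
    fn len(&self) -> usize { HashSet::len(self) }
}

// Both implementations type-check against the trait, but disagree on
// what len() reports once duplicates are inserted.
fn count_inserted<C: Container<u32>>(c: &mut C, items: &[u32]) -> usize {
    for &x in items { c.insert(x); }
    c.len()
}

fn main() {
    let items = [1, 1, 2];
    assert_eq!(count_inserted(&mut Vec::new(), &items), 3); // duplicates kept
    assert_eq!(count_inserted(&mut HashSet::new(), &items), 2); // duplicates dropped
}
```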

Therefore we also say that a container implementation has ``semantic properties''.
Intuitively, these are the conditions the container upholds.
For a \code{HashSet}, this includes that there are never any duplicates; for a \code{Vec}, that insertion order is preserved.

\subsection{Non-functional requirements}

While meeting the functional requirements should ensure our program runs correctly, we also want to choose the ``best'' type that we can.
Here we will consider ``best'' as striking a balance between runtime and memory usage.

Prior work has shown that properly considering container selection can give substantial performance improvements.
For instance, \cite{l_liu_perflint_2009} found and suggested fixes for ``hundreds of suboptimal patterns in a set of large C++ benchmarks,'' with one such case improving performance by 17\%.
Similarly, \cite{jung_brainy_2011} achieves an average speedup of 27--33\% on real-world applications and libraries.

If we can find a set of types that satisfy our functional requirements, one obvious solution is to benchmark the program with each of these implementations in place and see which performs best.
This works so long as our benchmarks are roughly representative of real-world inputs.

Unfortunately, this technique scales poorly to larger applications.
With $s$ container selection sites and roughly $k$ candidate implementations per site, the number of combinations we must benchmark grows exponentially, as $k^s$.
This quickly becomes infeasible, so we must find other selection methods.
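A back-of-the-envelope sketch of this growth, with purely illustrative figures:

```rust
fn main() {
    // With s container selection sites and k candidate implementations
    // per site, exhaustive benchmarking requires k^s builds and runs.
    let (k, s): (u64, u32) = (5, 10); // hypothetical figures
    assert_eq!(k.pow(s), 9_765_625); // nearly ten million combinations
}
```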

\section{Prior Literature}

In this section, we first outline the methods for container selection available in current programming languages.
We then examine existing solutions from the literature, noting the limitations of each.

\subsection{Approaches in common programming languages}

Modern programming languages broadly take one of two approaches to container selection.

Some languages, usually higher-level ones, provide built-in structures as the default, with implementations that perform well for the vast majority of use cases.
One popular example is Python, which uses dynamic arrays as its built-in list implementation.
This approach prioritises developer ergonomics: programmers do not need to think about how these structures are implemented.
Other implementations are usually available, but are used only when needed, at some cost to code readability.

In other languages, collections are provided as part of a standard library, or must be written by the user.
Java provides growable lists in its standard library, as does Rust (with some macros to make their use easier).
In both cases, the ``blessed'' collection implementations are not special --- users can implement their own.

Often interfaces, or their closest equivalent, are used to group ``similar'' collections.
In Java, ordered collections implement the interface \code{List<E>}, while similar interfaces exist for \code{Set<E>}, \code{Queue<E>}, and so on.
This means that when the developer chooses a type, the compiler enforces the syntactic requirements of the collection, and the writer of the implementation ``promises'' they have met the semantic requirements.

Whilst Java's approach is the most expressive, both approaches either put the choice entirely on the developer or remove the choice altogether.
Developers are therefore forced to guess based on their knowledge of the underlying implementations, or simply to pick the most common implementation.

\subsection{Rules-based approaches}

One approach to the container selection problem is to let the developer make the initial choice, but use a tool to detect poor choices.

Chameleon\parencite{shacham_chameleon_2009} is one example of this.
It first collects statistics from program benchmarks using a ``semantic profiler''.
These include the space used by collections over time and the counts of each operation performed.
The statistics are tracked per individual collection allocated, and then aggregated by ``allocation context'' --- the call stack at the point where the allocation occurred.

These aggregated statistics are then passed to a rules engine, which uses a set of rules to suggest places a different container type might improve performance.
This results in a flexible engine for providing suggestions, which can be extended with new rules and types as necessary.

To satisfy functional requirements, Chameleon only suggests new types that behave identically to the existing type.
This forces the selection rules to be more restrictive than they otherwise could be.
For instance, a rule cannot suggest a \code{HashSet} instead of a \code{LinkedList}, as the two are not semantically identical.
Chameleon has no way of knowing if doing so will break the program's functionality, and so it does not make a suggestion.
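A minimal sketch of such a rule, with hypothetical statistics and thresholds (not Chameleon's actual rule set, which operates on Java collections):

```rust
// Aggregated per-allocation-context statistics (hypothetical shape).
struct Stats {
    index_accesses: u64,
    total_ops: u64,
}

// A rule may only propose a semantically identical replacement,
// e.g. LinkedList -> ArrayList, never LinkedList -> HashSet.
fn suggest(stats: &Stats) -> Option<&'static str> {
    if stats.index_accesses * 2 > stats.total_ops {
        Some("ArrayList") // indexed access dominates: arrays win
    } else {
        None
    }
}

fn main() {
    let indexed = Stats { index_accesses: 900, total_ops: 1000 };
    let sequential = Stats { index_accesses: 10, total_ops: 1000 };
    assert_eq!(suggest(&indexed), Some("ArrayList"));
    assert_eq!(suggest(&sequential), None);
}
```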

A similar rules-based approach is used by \cite{l_liu_perflint_2009} for the C++ standard library.
\cite{hutchison_coco_2013} and \cite{osterlund_dynamically_2013} use similar techniques, but apply them while the program runs.
This works well for programs with distinct phases of execution, but incurs a runtime overhead.

\subsection{ML-based approaches}

\cite{jung_brainy_2011} uses a machine learning approach with a similar statistics collection step, additionally drawing on hardware performance counters, to predict the best container implementation for a given application and machine.

\cite{thomas_framework_2005} also uses an ML approach, but focuses on parallel algorithms rather than data structures, and does not take hardware counters into account.

\subsection{Estimate-based approaches}

CollectionSwitch\parencite{costa_collectionswitch_2018} is an online solution, which adapts as the program runs and new information becomes available.

First, a performance model is built for each container implementation.
This is done by performing each operation many times in succession, varying the length of the collection.
The resulting data is used to fit a polynomial giving an estimated cost for each operation at a given collection size $n$.

These per-operation estimates are then combined with the observed frequency of each operation to give cost estimates for each collection type, operation, and ``cost dimension'' (time and space).
Rules then decide when switching to a new implementation is worthwhile, based on these cost estimates and defined thresholds.
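A minimal sketch of this style of cost estimation, assuming quadratic cost polynomials and hypothetical fitted coefficients (not the paper's exact model):

```rust
// Fitted cost polynomial c0 + c1*n + c2*n^2 for one operation.
struct OpCost {
    coeffs: [f64; 3],
}

impl OpCost {
    fn at(&self, n: f64) -> f64 {
        self.coeffs[0] + self.coeffs[1] * n + self.coeffs[2] * n * n
    }
}

// Total estimated cost: each operation's cost at size n, weighted by
// how often that operation was observed.
fn estimate(costs: &[(OpCost, u64)], n: f64) -> f64 {
    costs.iter().map(|(c, freq)| c.at(n) * (*freq as f64)).sum()
}

fn main() {
    // contains: linear for a list, near-constant for a hash set
    // (hypothetical fitted coefficients).
    let list_contains = OpCost { coeffs: [10.0, 2.0, 0.0] };
    let hash_contains = OpCost { coeffs: [50.0, 0.0, 0.0] };
    let (n, freq) = (1000.0, 500);
    let list_cost = estimate(&[(list_contains, freq)], n);
    let hash_cost = estimate(&[(hash_contains, freq)], n);
    // A switch would be suggested once this gap exceeds a threshold.
    assert!(hash_cost < list_cost);
}
```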

By generating a cost model based on benchmarks, CollectionSwitch manages to be more flexible than other rules-based approaches such as Chameleon.
It expects applications to use Java's \code{List}, \code{Set}, or \code{Map} interfaces, which express enough functional requirements for most problems.

\subsection{Functional requirements}

Most of the approaches highlighted above have focused on non-functional requirements, and used programming language features to enforce functional requirements.
By contrast, Primrose \parencite{qin_primrose_2023} focuses on the functional requirements of container selection.

It allows the application developer to specify semantic requirements using a DSL, and syntactic requirements using Rust's traits.

A semantic property is simply a predicate acting on an abstract model of the container type.
Similarly, each implementation provides an abstract version of its operations acting on this model.
An SMT solver then checks if a given implementation will always meet the conditions required by the predicate(s).
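To give the flavour, here is a semantic property expressed as an executable predicate over an abstract model (illustrative only; Primrose expresses predicates in its own DSL and checks them with an SMT solver, not at runtime):

```rust
// A semantic property: no element appears twice in the abstract model.
fn unique(model: &[i32]) -> bool {
    model.iter().enumerate().all(|(i, x)| !model[..i].contains(x))
}

// Abstract version of a set-like insert, acting on the model
// (a hypothetical stand-in for an implementation's modelled operation).
fn set_insert(model: &mut Vec<i32>, x: i32) {
    if !model.contains(&x) {
        model.push(x);
    }
}

fn main() {
    let mut model = Vec::new();
    for x in [1, 1, 2, 3, 2] {
        set_insert(&mut model, x);
    }
    // This implementation's abstract operations preserve the property.
    assert!(unique(&model));
    assert_eq!(model, vec![1, 2, 3]);
}
```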

Developers must then choose which of these implementations will work best for their non-functional requirements.

This allows developers to express any combination of semantic requirements, rather than being limited to a fixed set of common ones as in Java's interfaces.
It can also be extended with new implementations as needed, although this does require modelling the semantics of the new implementation.

\cite{franke_collection_2022} also uses the idea of refinement types, but is limited to properties defined by the library authors.

\section{Contributions}