In this chapter we provide an overview of the problem of container selection and its effect on program correctness and performance.
We then provide an overview of approaches from modern programming languages and existing literature.
Finally, we explain how our system is novel and how it addresses weaknesses in the existing literature.

\section{Container Selection}

The vast majority of programs will make extensive use of collection data types --- types intended to hold multiple instances of other types.

In many languages, the standard library provides a variety of collections, with users able to choose which is best for their program.
This saves users a lot of time; however, selecting the best type is not always straightforward.

Consider a program which needs to store and query a set of numbers, and doesn't care about ordering or duplicates.
If the number of items ($n$) is small enough, it might be fastest to use a dynamic array and scan through it each time we want to check whether a number is present.
On the other hand, if the set we deal with is much larger, we may want the constant-time lookups provided by hash sets, at the cost of higher per-operation overhead and memory usage.
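As a concrete sketch of this trade-off in Rust (a minimal example, assuming we only need membership queries):

\begin{verbatim}
use std::collections::HashSet;

fn main() {
    let nums = [3, 1, 4, 1, 5, 9];

    // Dynamic array: every membership query is a linear scan.
    let vec: Vec<i32> = nums.to_vec();
    assert!(vec.contains(&4)); // O(n)

    // Hash set: expected constant-time lookups, at the cost of hashing
    // every element and using more memory per element.
    let set: HashSet<i32> = nums.iter().copied().collect();
    assert!(set.contains(&4)); // O(1) expected
}
\end{verbatim}

For small $n$, the linear scan is often faster in practice, since it avoids hashing and works well with the cache.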

In this case, two factors drive our decision: our functional requirements (we don't care about ordering or duplicates) and our non-functional requirements (we want our program to be fast).

\subsection{Functional requirements}

Functional requirements tell us how the container will be used and how it must behave.
Continuing with our previous example, we'll compare Rust's \code{Vec} type (a dynamic array) with its \code{HashSet} type.

Note that the two types provide different methods: \code{Vec} implements \code{.get(index)}, while \code{HashSet} does not; since \code{HashSet}s aren't ordered, positional access doesn't make sense.
If we were building a program that needed an ordered collection, replacing \code{Vec} with \code{HashSet} probably wouldn't compile.
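A minimal illustration of this difference:

\begin{verbatim}
use std::collections::HashSet;

fn main() {
    let v: Vec<i32> = vec![10, 20, 30];
    let s: HashSet<i32> = v.iter().copied().collect();

    // Vec supports positional access...
    assert_eq!(v.get(1), Some(&20));

    // ...but HashSet has no positional equivalent; it only supports
    // lookup by value.
    assert!(s.contains(&20));
}
\end{verbatim}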

We will call the operations a container provides the ``syntactic properties'' of the container.
In object-oriented programming, we might say they must implement an ``interface'', while in Rust, we could say that they implement a ``trait''.

However, syntactic properties alone are not always enough to select an appropriate container.
Suppose our program only requires a container to have \code{.insert(value)} and \code{.len()}.
Both \code{Vec} and \code{HashSet} will satisfy these requirements, but we might rely on \code{.len()} including duplicates.
In this case, \code{HashSet} would give us different behaviour, causing our program to behave incorrectly.
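A small sketch makes this concrete: the program's notion of ``length'' silently changes once duplicates are discarded.

\begin{verbatim}
use std::collections::HashSet;

fn main() {
    let items = [1, 2, 2, 3];

    let mut v: Vec<i32> = Vec::new();
    let mut s: HashSet<i32> = HashSet::new();
    for &x in &items {
        v.push(x);   // Vec keeps duplicates (push is its insert-at-end)
        s.insert(x); // HashSet silently discards the second 2
    }

    assert_eq!(v.len(), 4);
    assert_eq!(s.len(), 3); // code relying on len() counting duplicates breaks
}
\end{verbatim}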

Therefore we also say that a container implementation has ``semantic properties''.
Intuitively, these are the conditions that the container upholds.
For a \code{HashSet}, this would include that there are never any duplicates, whereas for a \code{Vec} it would include that ordering is preserved.

\subsection{Non-functional requirements}

While meeting the functional requirements should ensure our program runs correctly, we also want to choose the ``best'' type that we can, striking a balance between runtime and memory usage.

Prior work has demonstrated that proper container selection can result in substantial performance improvements.
\cite{l_liu_perflint_2009} found and suggested fixes for ``hundreds of suboptimal patterns in a set of large C++ benchmarks,'' with one such case improving performance by 17\%.
Similarly, \cite{jung_brainy_2011} demonstrates an average increase in speed of 27--33\% on real-world applications and libraries using a similar approach.

If we can find a selection of types that satisfy our functional requirements, then one obvious solution is to benchmark the program with each of these implementations in place and determine which works best.
This will work so long as our benchmarks are roughly representative of ``real-world'' inputs.

Unfortunately, this technique scales poorly for larger applications.
As the number of containers we must select grows, the number of possible combinations grows exponentially (provided each container has roughly the same number of candidate implementations).
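For example, with $m$ functionally suitable candidates for each of $k$ container choices, exhaustive benchmarking requires $m^k$ runs: a modest program with $k = 5$ choices and $m = 4$ candidates each already needs $4^5 = 1024$ benchmark runs.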
This quickly becomes infeasible, so we must explore other selection methods.

\section{Prior literature}

In this section we outline the methods for container selection available within current programming languages and in the existing literature, along with their limitations.

\subsection{Approaches in common programming languages}

Modern programming languages broadly take one of two approaches to container selection.

Some languages, usually higher-level ones, recommend built-in structures as the default, using implementations that perform well enough for the vast majority of use-cases.
One popular example is Python, which uses dynamic arrays as its built-in list implementation.
This approach prioritises developer ergonomics: programmers do not need to think about how these are implemented.
Often other implementations are possible, but are used only when needed and come at the cost of code readability.

In other languages, collections are given as part of a standard library or must be written by the user.
Java comes with growable lists as part of its standard library, as does Rust.
In both cases, the standard library implementation is not special --- users can implement their own and use them in the same ways.

Often interfaces, or their closest equivalent, are used to abstract over ``similar'' collections.
In Java, ordered collections implement the interface \code{List<E>}, with similar interfaces for \code{Set<E>}, \code{Queue<E>}, etc.

This allows most code to be implementation-agnostic; however, the developer must still choose a concrete implementation at some point.
In practice, developers are forced to guess based on their knowledge of the underlying implementations, or simply pick the most common option.
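The same pattern can be sketched in Rust with traits; the \code{Container} trait below is illustrative rather than taken from any particular library. Generic code stays implementation-agnostic, but constructing the container still commits to a concrete type:

\begin{verbatim}
trait Container<T> {
    fn insert(&mut self, value: T);
    fn len(&self) -> usize;
}

impl<T> Container<T> for Vec<T> {
    fn insert(&mut self, value: T) { self.push(value); }
    fn len(&self) -> usize { Vec::len(self) }
}

// Generic code is implementation-agnostic...
fn fill(c: &mut impl Container<u32>) {
    for i in 0..10 { c.insert(i); }
}

fn main() {
    // ...but a concrete implementation must still be chosen here.
    let mut c: Vec<u32> = Vec::new();
    fill(&mut c);
    assert_eq!(c.len(), 10);
}
\end{verbatim}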

\subsection{Rules-based approaches}

One approach to the container selection problem is to allow the developer to make the choice initially, but use some tool to detect poor choices.
Chameleon\parencite{shacham_chameleon_2009} uses this approach.

It first collects statistics from program benchmarks using a ``semantic profiler''.
This includes the space used by collections over time and the counts of each operation performed.
These statistics are tracked per individual collection allocated, and then aggregated by ``allocation context'' --- the call stack at the point where the allocation occurred.

These aggregated statistics are passed to a rules engine, which uses a set of rules to suggest container types which might improve performance.
This results in a flexible engine for providing suggestions which can be extended with new rules and types as necessary.
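To give a flavour of the approach, a rule might look something like the following sketch; the statistics, thresholds, and suggestions are hypothetical rather than taken from Chameleon itself.

\begin{verbatim}
// Hypothetical usage statistics for one allocation context.
struct UsageStats {
    max_size: usize,
    contains_calls: u64,
}

// A hand-written rule suggesting an alternative implementation.
fn suggest(stats: &UsageStats) -> Option<&'static str> {
    if stats.contains_calls > 1_000 && stats.max_size > 1_024 {
        // Many membership tests on a large collection: prefer a
        // hash-based structure over repeated linear scans.
        Some("HashSet")
    } else if stats.max_size <= 16 {
        // Collections that stay tiny rarely benefit from hashing.
        Some("Vec")
    } else {
        None
    }
}
\end{verbatim}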

However, adding new implementations requires the developer to write new suggestion rules.
This can be difficult as it requires the developer to understand all of the existing implementations' performance characteristics.

To satisfy functional requirements, Chameleon only suggests new types that behave identically to the existing type.
This results in selection rules being more restricted than they otherwise could be.
For instance, a rule cannot suggest a \code{HashSet} instead of a \code{LinkedList} as the two are not semantically identical.
Chameleon has no way of knowing if doing so will break the program's functionality and so it does not make the suggestion.

CoCo \parencite{hutchison_coco_2013} and work by \"{O}sterlund \parencite{osterlund_dynamically_2013} use similar techniques, but work as the program runs.
This works well for programs with different phases of execution, such as loading and then working on data.
However, the overhead from profiling and from checking rules may not be worth the improvements in other programs, where access patterns are roughly the same throughout.

\subsection{ML-based approaches}

Brainy\parencite{jung_brainy_2011} gathers statistics in a similar manner; however, it uses machine learning (ML) for selection instead of programmed rules.

ML has the advantage of being able to detect patterns a human may not be aware of.
For example, Brainy takes into account statistics from hardware counters, which are difficult for a human to reason about.
This also makes it easier to add new collection implementations, as rules do not need to be written by hand.

\subsection{Estimate-based approaches}

CollectionSwitch\parencite{costa_collectionswitch_2018} is an online solution which adapts as the program runs and new information becomes available.

First, a performance model is built for each container implementation.
This gives an estimate of some cost for each operation at a given collection size.
We call the measure of cost the ``cost dimension''.
Examples of cost dimensions include memory usage and execution time.

This is combined with profiling information to give cost estimates for each collection type and cost dimension.
Switching between container types is then done based on the potential change in each cost dimension.
For instance, we may choose to switch if we reduce the estimated space cost by more than 20\%, so long as the estimated time cost doesn't increase by more than 20\%.
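One way to formalise this estimate (our notation, not CollectionSwitch's own) is to combine the profiled frequency $f_o$ of each operation $o$ with the modelled per-operation cost $c_{i,d}(o, n)$ of implementation $i$ in cost dimension $d$ at the observed collection size $n$:

\[ C_{i,d} = \sum_{o} f_o \cdot c_{i,d}(o, n) \]

A switch from the current implementation $i$ to a candidate $j$ is then triggered when, say, $C_{j,\mathrm{space}} < 0.8\,C_{i,\mathrm{space}}$ while $C_{j,\mathrm{time}} \le 1.2\,C_{i,\mathrm{time}}$.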

By generating a cost model from benchmarks, CollectionSwitch manages to be more flexible than rule-based approaches.
As with ML-based approaches, adding new implementations requires little extra work, with the added advantage that no model needs to be re-trained.

A similar approach is used by \cite{l_liu_perflint_2009} for the C++ standard library.
It focuses on measuring the cost and frequency of more fine-grained operations, such as list resizing.
However, it does not take the collection size into account.

\subsection{Functional requirements-based approaches}

Most of the approaches we have highlighted focus on non-functional requirements, and use programming language features to enforce functional requirements.
We will now examine tools which focus on container selection based on functional requirements.

Primrose \parencite{qin_primrose_2023} is one such tool, which uses a model-based approach.
It allows the application developer to specify semantic requirements using a Domain-Specific Language (DSL), and syntactic requirements using Rust's traits.

The semantic requirements are expressed as a list of predicates, each representing a semantic property.
Predicates act on an abstract model of the container type.
Each implementation also specifies the conditions it upholds using an abstract model.
A constraint solver then checks if a given implementation will always meet the conditions required by the predicate(s).
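For illustration, the ``no duplicates'' property from earlier might be written as a predicate over the abstract model of a container $c$ (our notation, not Primrose's concrete DSL):

\[ \forall x.\; \mathit{count}(c, x) \le 1 \]

An implementation whose model guarantees this after every operation, such as a hash set, satisfies the predicate; the model of a \code{Vec} does not, so the solver would not offer it for this requirement.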

This allows developers to express any combination of semantic requirements, rather than limiting them to common ones (as in Java's approach).
It can also be extended with new implementations as needed, though this does require modelling the semantics of the new implementation.

\cite{franke_collection_2022} uses a similar idea, but is limited to properties defined by the library authors and implemented on the container implementations.

To select the final container implementation, both tools rely on benchmarking each candidate.
As we note above, this scales poorly.

\section{Contribution}

We aim to create a container selection method that addresses both functional and non-functional requirements.

Users should be able to specify their functional requirements in a way that is expressive enough for most use cases, and easy to integrate with existing projects.
It should also be easy to add new container implementations, and we should be able to use it on large projects without selection time becoming an issue.

We focus on offline container selection (done before the program is compiled); however, we also attempt to detect when changing implementations at runtime is desirable.