\section{Evaluation}\label{sec:evaluation}

We generate \totaltestcases{} test-cases and provide them to three HLS tools: Vivado HLS, LegUp HLS and Intel i++. 
We use the same test-cases across all tools for a fair comparison (except for the HLS directives, which have tool-specific syntax).
We tested three versions of Vivado HLS (v2018.3, v2019.1 and v2019.2), one version of Intel i++ (18.1), and one version of LegUp (4.0). 
LegUp 7.5 is GUI-based and therefore not amenable to scripting; however, we were able to manually reproduce in LegUp 7.5 all the bugs we found in LegUp 4.0.
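
As an illustration of this tool-specific syntax, the sketch below shows a hypothetical kernel (not taken from our test suite) annotated with the same loop-unrolling request in two forms: Vivado HLS expects the pragma inside the loop body, whereas Intel i++ expects it immediately before the loop. LegUp instead takes most such settings from its Tcl configuration file. The function name and unroll factor here are illustrative only, and the exact directive spelling varies between tool versions.
\begin{minted}{c}
// Hypothetical kernel, for illustration only.
void add_one(int x[8]) {
  // Intel i++ form, written before the loop:
  // #pragma unroll 2
  for (int i = 0; i < 8; i++) {
#pragma HLS unroll factor=2 // Vivado HLS form, inside the loop body
    x[i] = x[i] + 1;
  }
}
\end{minted}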


\subsection{Results across different HLS tools}

\definecolor{vivado}{HTML}{7fc97f}
\definecolor{intel}{HTML}{beaed4}
\definecolor{legup}{HTML}{fdc086}
\begin{figure}
  \centering
    \begin{tikzpicture}[scale=0.61]
      \draw (-7.2,7.0) rectangle (7.2,0.7);
      \fill[intel,fill opacity=0.5] (2.5,4.4) ellipse (3.75 and 1.5);
      \fill[vivado,fill opacity=0.5] (-2.5,4.4) ellipse (3.75 and 1.5);
      \fill[legup,fill opacity=0.5] (0,3) ellipse (3.75 and 1.5);
      \draw[white, thick] (2.5,4.4) ellipse (3.75 and 1.5);
      \draw[white, thick] (-2.5,4.4) ellipse (3.75 and 1.5);
      \draw[white, thick] (0,3) ellipse (3.75 and 1.5);
      \node[align=center, anchor=south west] at (-6.4,6) {\textcolor{vivado}{\bf Xilinx Vivado HLS v2019.1}};
      \node[anchor=south east] at (6.4,6) {\textcolor{intel}{\bf Intel i++ 18.1}};
      \node at (4,1.6) {\textcolor{legup}{\bf LegUp 4.0}};

      \node at (1.8,3.5) {1 (\textcolor{red}{1})};
      \node at (-1.8,3.5) {4 (\textcolor{red}{0})};
      \node at (4.0,4.5) {26 (\textcolor{red}{540})};
      \node at (-4.0,4.5) {79 (\textcolor{red}{20})};
      \node at (0,2.1) {162 (\textcolor{red}{6})};
      \node at (0,4.9) {0 (\textcolor{red}{5})};
      \node at (0,3.9) {0 (\textcolor{red}{0})};
      \node at (-6,1.4) {5856};
    \end{tikzpicture}
\caption{The number of failures per tool out of \totaltestcases{} test-cases. Overlapping regions mean that the same test-cases failed in multiple tools. The numbers in parentheses report how many test-cases timed out.}\label{fig:existing_tools}
\end{figure}

Figure~\ref{fig:existing_tools} shows a Venn diagram of our results. 
We see that 167 (2.5\%), 83 (1.2\%) and 27 (0.4\%) test-cases fail in LegUp, Vivado HLS and Intel i++ respectively. 
Despite having the lowest failure rate, Intel i++ has by far the highest time-out rate (540 test-cases), owing to its remarkably long compilation times; some of these time-outs could be masking additional failures.
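These per-tool totals can be read off the Venn diagram by summing the failure counts of the regions inside each tool's ellipse:
\begin{align*}
\text{LegUp:}\quad & 162 + 4 + 1 + 0 = 167,\\
\text{Vivado HLS:}\quad & 79 + 4 + 0 + 0 = 83,\\
\text{Intel i++:}\quad & 26 + 1 + 0 + 0 = 27.
\end{align*}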
Note that the absolute numbers here do not necessarily correspond to the number of bugs in the tools, because a single bug in a language feature that appears frequently in our test suite could cause many programs to crash or fail.
Hence, we reduce many of the failing test-cases in an effort to identify unique bugs; these are summarised in the table below, which also lists the unique bugs we found in Bambu HLS.

\begin{table}[h]
\centering
\begin{tabular}{lr}\toprule
    \textbf{Tool} & \textbf{Unique Bugs}\\
    \midrule
    Xilinx Vivado HLS v2019.1 & $\ge 2$\\
    LegUp HLS & $\ge 3$\\
    Intel i++ & $\ge 1$\\
    Bambu HLS & $\ge 2$\\
    \bottomrule
  \end{tabular}
\end{table}

We write `$\ge$' above to emphasise that all the bug counts are lower bounds -- we did not have time to go through the rather arduous test-case reduction process for every failure.

\subsection{Results across versions of an HLS tool}

\definecolor{ribbon1}{HTML}{8dd3c7}
\definecolor{ribbon2}{HTML}{b3de69}
\definecolor{ribbon3}{HTML}{bebada}
\definecolor{ribbon4}{HTML}{fb8072}
\definecolor{ribbon5}{HTML}{80b1d3}
\definecolor{ribbon6}{HTML}{fdb462}
\begin{figure}
  \centering
  \begin{tikzpicture}[xscale=1.25]
    \draw[white, fill=ribbon1] (-1.0,4.1) -- (0.0,4.1000000000000005) to [out=0,in=180] (2.0,4.1000000000000005) to [out=0,in=180] (4.0,4.1000000000000005) -- (6.0,4.1000000000000005) -- %(7.55,3.325) -- 
    (6.0,2.5500000000000003) -- (4.0,2.5500000000000003) to [out=180,in=0] (2.0,2.5500000000000003) to [out=180,in=0] (0.0,2.5500000000000003) -- (-1.0,2.55) -- cycle;
    \draw[white, fill=ribbon2] (-1.0,2.55) -- (0.0,2.5500000000000003) to [out=0,in=180] (2.0,1.8) to [out=0,in=180] (4.0,1.55) -- (6.0,1.55) -- %(7.3,0.9) -- 
    (6.0,0.25) -- (4.0,0.25) to [out=180,in=0] (2.0,0.5) to [out=180,in=0] (0.0,1.25) -- (-1.0,1.25) -- cycle;
    \draw[white, fill=ribbon3] (-1.0,1.25) -- (0.0,1.25) to [out=0,in=180] (2.0,2.5500000000000003) to [out=0,in=180] (4.0,0.25) -- (6.0,0.25) -- %(6.05,0.225) -- 
    (6.0,0.2) -- (4.0,0.2) to [out=180,in=0] (2.0,2.5) to [out=180,in=0] (0.0,1.2000000000000002) -- (-1.0,1.2) -- cycle;
    \draw[white, fill=ribbon4] (-1.0,0.5) -- (0.0,0.5) to [out=0,in=180] (2.0,2.5) to [out=0,in=180] (4.0,0.2) -- (6.0,0.2) -- %(6.2,0.1) --
    (6.0,0.0) -- (4.0,0.0) to [out=180,in=0] (2.0,2.3000000000000003) to [out=180,in=0] (0.0,0.30000000000000004) -- (-1.0,0.3) -- cycle;
    \draw[white, fill=ribbon5] (-1.0,1.2) -- (0.0,1.2000000000000002) to [out=0,in=180] (2.0,0.5) to [out=0,in=180] (4.0,2.5500000000000003) -- (6.0,2.5500000000000003) -- %(6.2,2.45) --
    (6.0,2.35) -- (4.0,2.35) to [out=180,in=0] (2.0,0.30000000000000004) to [out=180,in=0] (0.0,1.0) -- (-1.0,1.0) -- cycle;
    \draw[white, fill=ribbon6] (-1.0,0.3) -- (0.0,0.30000000000000004) to [out=0,in=180] (2.0,0.30000000000000004) to [out=0,in=180] (4.0,2.35) -- (6.0,2.35) -- %(6.3,2.2) -- 
    (6.0,2.0500000000000003) -- (4.0,2.0500000000000003) to [out=180,in=0] (2.0,0.0) to [out=180,in=0] (0.0,0.0) -- (-1.0,0.0) -- cycle;

    \draw[white, fill=black] (-0.4,4.1) rectangle (0.0,1.0);
    \draw[white, fill=black] (1.8,4.1) rectangle (2.2,2.3);
    \draw[white, fill=black] (3.8,4.1) rectangle (4.2,2.05);

    \node at (-0.2,4.5) {v2018.3};
    \node at (2,4.5) {v2019.1};
    \node at (4,4.5) {v2019.2};
    %\node at (2,5) {Vivado HLS};

    \node at (5.5,3.325) {31};
    \node at (5.5,0.9) {26};
    \node at (5.5,2.2) {6};

    \node[white] at (-0.2,1.2) {62};
    \node[white] at (2,2.5) {36};
    \node[white] at (4,2.25) {41};
  \end{tikzpicture}
  \caption{A Sankey diagram that tracks \vivadotestcases{} test-cases through three different versions of Vivado HLS. The ribbons collect the test-cases that pass and fail together. The black bars are labelled with the total number of test-case failures per version. The 3573 test-cases that pass in all three versions are not depicted.
  }\label{fig:sankey_diagram}
\end{figure}

Besides comparing the reliability of different HLS tools, we also investigated the reliability of Vivado HLS over time. Figure~\ref{fig:sankey_diagram} shows the results of giving \vivadotestcases{} test-cases to Vivado HLS v2018.3, v2019.1 and v2019.2. 
Test-cases that pass and fail in the same versions are grouped together into a ribbon. 
For instance, the topmost ribbon represents the 31 test-cases that fail in all three versions of Vivado HLS. Other ribbons can be seen weaving in and out; these indicate that bugs were fixed or reintroduced in the various versions. We see that Vivado HLS v2018.3 had the most test-case failures (62).
Interestingly, the blue ribbon shows that there are test-cases that fail in v2018.3, pass in v2019.1, but then fail again in v2019.2; a bug fixed in one release is evidently not guaranteed to stay fixed in later releases.
As in our Venn diagram, the absolute numbers here do not necessarily correspond to the number of actual bugs, but since each ribbon corresponds to at least one unique bug, we can deduce that there must be at least six unique bugs in Vivado HLS.





\subsection{Some specific bugs found}

We now describe three more of the bugs we found: one crash bug in LegUp and one miscompilation bug each in Intel i++ and Bambu HLS. As in Example~\ref{ex:vivado_miscomp}, each bug was first reduced automatically using \creduce{}, and then reduced further manually to obtain a minimal test-case. Reducing the Intel i++ failure was particularly arduous because of that tool's remarkably long compilation times.

\begin{example}[A crash bug in LegUp]
The program shown below leads to an internal compiler error (a failed assertion) in both LegUp 4.0 and 7.5. 
\begin{minted}{c}
int a[2][2][1] = {{{0},{1}},{{0},{0}}}; // all zeroes except a[0][1][0]
int main() { a[0][1][0] = 1; }          // store 1 to that same element
\end{minted}
It initialises a 3D array with zeroes, except for \texttt{a[0][1][0]}, which is set to one; the main function then assigns one to that same element. The bug only appears when function inlining is disabled (\texttt{NO\_INLINE}). An assertion failure counts as a crash of the tool, and in this case it means that the bug is caught inside LegUp before an incorrect design can be produced.


\end{example}

\begin{figure}
\begin{minted}{c}
static volatile int a[9][1][7];
int main() {
  int tmp = 1;
  for (int b = 0; b < 2; b++) {
    a[0][0][0] = 3;
    a[0][0][0] = a[0][0][0]; // redundant self-assignment; not removable, as a is volatile
  }
  for (int i = 0; i < 9; i++)
    for (int k = 0; k < 7; k++)
      tmp ^= a[i][0][k]; // XOR-fold the whole array into tmp
  return tmp;
}
\end{minted}
\caption{Miscompilation bug in Intel i++.  The design should return 2, since \code{3 \^{} 1 = 2}, but Intel i++ returns 0 instead.}\label{fig:eval:intel:mismatch}
\end{figure}

\begin{figure}
\begin{minted}{c}
static int b = 0x10000;
static volatile short a = 0;

int result() {
  a++;                  // increment the volatile global
  b = (b >> 8) & 0x100; // 0x10000 >> 8 == 0x100; 0x100 & 0x100 == 0x100
  return b;
}
\end{minted}
\caption{Miscompilation bug in Bambu HLS.  As the value of \texttt{b} is shifted to the right by 8 and then masked, the output should be \texttt{0x100}.  However, the design generated by Bambu outputs 0.}\label{fig:eval:bambu:mismatch}
\end{figure}


\begin{example}[A miscompilation bug in Intel i++]
Figure~\ref{fig:eval:intel:mismatch} shows a miscompilation bug that we found in Intel i++.  Intel i++ appears either to miss the assignment of 3 to \texttt{a[0][0][0]} in the first \code{for} loop, or to apply an optimisation that analyses the array incorrectly; either way, the generated design returns the wrong value.
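The expected result can be checked by hand: after the first loop, only \texttt{a[0][0][0]} is non-zero, so the XOR-fold computes
\[ 1 \oplus 3 \oplus \underbrace{0 \oplus \cdots \oplus 0}_{62\ \text{zero elements}} = 2. \]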
\end{example}

\begin{example}[A miscompilation bug in Bambu HLS]
Figure~\ref{fig:eval:bambu:mismatch} shows a miscompilation bug that we found in Bambu HLS. The program shifts \texttt{b} right by 8 bits and masks the result, so \texttt{result} should return \texttt{0x100}; the design generated by Bambu returns 0 instead.
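The expected value follows directly from the shift and mask:
\[ \mathtt{0x10000} \gg 8 = \mathtt{0x100}, \qquad \mathtt{0x100} \mathbin{\&} \mathtt{0x100} = \mathtt{0x100}. \]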
\end{example}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End: