\section{Evaluation}\label{sec:evaluation}

We generate \totaltestcases{} test-cases and provide them to four HLS tools: Vivado HLS, LegUp HLS, Intel i++, and Bambu. We use the same test-cases across all tools for a fair comparison (except the HLS directives, which have tool-specific syntax). We were able to test three different versions of Vivado HLS (v2018.3, v2019.1 and v2019.2), one version of Intel i++ (included in Quartus Lite 18.1), one version of LegUp (4.0) and two versions of Bambu (v0.9.7 and v0.9.7-dev). We tested only one version of LegUp HLS because although LegUp 7.5 is available, it is GUI-based and not amenable to scripting; nevertheless, the bugs we found in LegUp 4.0 were reproduced manually in LegUp 7.5.
% LegUp and Vivado HLS were run under Linux, while the Intel HLS Compiler was run under Windows.

\subsection{Results across different HLS tools}

\definecolor{intel}{HTML}{66c2a5}
\definecolor{vivado}{HTML}{fc8d62}
\definecolor{legup}{HTML}{8da0cb}
\definecolor{bambu}{HTML}{e78ac3}
\definecolor{bambudev}{HTML}{bf8be8}
\definecolor{timeout}{HTML}{ef4c4c}

\begin{figure}
\centering
\begin{tikzpicture}[scale=0.61,yscale=0.9]
\draw (-7.2,7.0) rectangle (7.2,0.7);
\fill[vivado,fill opacity=0.5] (0.9,4.4) ellipse (3.3 and 1.5);
\fill[intel,fill opacity=0.5] (-4.5,4.8) ellipse (2.0 and 1.3);
\fill[bambu,fill opacity=0.5] (2,3) ellipse (3 and 1.5);
\fill[bambudev,fill opacity=0.5] (5,3.3) ellipse (1.3 and 0.8);
\fill[legup,fill opacity=0.5] (-2.5,3) ellipse (3.75 and 1.5);
\draw[white, thick] (0.9,4.4) ellipse (3.3 and 1.5);
\draw[white, thick] (-4.5,4.8) ellipse (2.0 and 1.3);
\draw[white, thick] (2,3) ellipse (3 and 1.5);
\draw[white, thick] (5,3.3) ellipse (1.3 and 0.8);
\draw[white, thick] (-2.5,3) ellipse (3.75 and 1.5);
\node[align=center, anchor=south west] at (-0.5,6) {\textcolor{vivado}{\bf Xilinx Vivado HLS v2019.1}};
\node[anchor=south west] at (-6.4,6)
{\textcolor{intel}{\bf Intel i++ 18.1}}; \node at (-3,1.1) {\textcolor{legup}{\bf LegUp 4.0}}; \node at (2.3,1.1) {\textcolor{bambu}{\bf Bambu 0.9.7}}; \node[align=center] at (5.8,1.7) {\textcolor{bambudev}{\bf Bambu} \\ \textcolor{bambudev}{\bf 0.9.7-dev}}; \node at (-3.5,2.5) {\small 159}; \node at (-5,5) {\small 26}; \node at (-4,3.9) {\small 1}; \node at (-1.5,3.8) {\small 4}; \node at (0,2.5) {\small 3}; \node at (2.5,2.3) {\small 902}; \node at (2.5,3.8) {\small 9}; \node at (0,5) {\small 70}; \node at (4.5,3.3) {\small 4}; \node at (5.5,3.3) {\small 13}; \node at (-6,1.4) {5509}; \end{tikzpicture} \caption{The number of failures per tool out of \totaltestcases{} test-cases. Overlapping regions mean that the same test-cases failed in multiple tools.}\label{fig:existing_tools} \end{figure} Figure~\ref{fig:existing_tools} shows an Euler diagram of our results. We see that 918 (13.7\%), 167 (2.5\%), 83 (1.2\%) and 26 (0.4\%) test-cases fail in Bambu, LegUp, Vivado HLS and Intel i++ respectively. The bugs we reported to the Bambu developers were fixed during our testing campaign, so we also tested the development branch of Bambu (0.9.7-dev) with the bug fixes, and found only 17 (0.25\%) failing test-cases remained. Although i++ has a low failure rate, it has the highest time-out rate (540 test-cases) due to its remarkably long compilation time. No other tool had more than 20 time-outs. Note that the absolute numbers here do not necessarily correspond to the number of bugs in the tools, because a single bug in a language feature that appears frequently in our test suite could cause many failures. Moreover, we are reluctant to draw conclusions about the relative reliability of each tool by comparing the number of failures, because these numbers are so sensitive to the parameters of the randomly generated test suite we used. In other words, we can confirm the \emph{presence} of bugs, but cannot deduce the \emph{number} of them (nor their importance). 
We have reduced several of the failing test-cases in an effort to identify particular bugs, and our findings are summarised in Table~\ref{tab:bugsummary}. We emphasise that the bug counts here are lower bounds -- we did not have time to go through the arduous test-case reduction process for every failure. Figures~\ref{fig:eval:legup:crash}, \ref{fig:eval:intel:mismatch}, and~\ref{fig:eval:bambu:mismatch} present three of the bugs we found. As in Example~\ref{ex:vivado_miscomp}, each bug was first reduced automatically using \creduce{}, and then further reduced manually to obtain a minimal test-case.

\begin{figure}
\begin{minted}{c}
int a[2][2][1] = {{{0},{1}},{{0},{0}}};
int main() { a[0][1][0] = 1; }
\end{minted}
\caption{This program leads to an internal compiler error (an unhandled assertion in this case) in LegUp 4.0. It initialises a 3D array (all zeroes except \texttt{a[0][1][0]}) and then assigns to that same element. The bug only appears when function inlining is disabled (\texttt{NO\_INLINE}), thus confirming the effectiveness of generating random directives.}
\label{fig:eval:legup:crash}
\end{figure}
\begin{figure}
\begin{minted}{c}
static volatile int a[9][1][7];
int main() {
  int tmp = 1;
  for (int b = 0; b < 2; b++) {
    a[0][0][0] = 3;
    a[0][0][0] = a[0][0][0];
  }
  for (int i = 0; i < 9; i++)
    for (int k = 0; k < 7; k++)
      tmp ^= a[i][0][k];
  return tmp;
}
\end{minted}
\caption{This program miscompiles in Intel i++. It should return 2 because \code{3 \^{} 1 = 2}, but Intel i++ generates a design that returns 0 instead. Perhaps the assignment to 3 in the first for-loop is being overlooked.}\label{fig:eval:intel:mismatch}
\end{figure}

\begin{figure}
\begin{minted}{c}
static int b = 0x10000;
static volatile short a = 0;
int main() {
  a++;
  b = (b >> 8) & 0x100;
  return b;
}
\end{minted}
\caption{This program miscompiles in Bambu. As the value of \texttt{b} is shifted to the right by 8, the output should be \texttt{0x100}, but Bambu generates a design that returns 0. The increment operation on \texttt{a} appears unrelated, but is necessary to trigger the bug.}\label{fig:eval:bambu:mismatch}
\end{figure}
\begin{table}[t]
\centering
\caption{A summary of the bugs we found.}
\label{tab:bugsummary}
\begin{tabular}{llll}\toprule
  \textbf{Tool} & \textbf{Bug type} & \textbf{Details} & \textbf{Status} \\
  \midrule
  Vivado HLS & miscompile & Fig.~\ref{fig:vivado_bug1} & reported, confirmed \\
  Vivado HLS & miscompile & online* & reported \\
  LegUp HLS & crash & Fig.~\ref{fig:eval:legup:crash} & reported, fixed \\
  LegUp HLS & crash & online* & reported, fixed \\
  LegUp HLS & miscompile & online* & reported, confirmed \\
  Intel i++ & miscompile & Fig.~\ref{fig:eval:intel:mismatch} & reported \\
  Bambu HLS & miscompile & Fig.~\ref{fig:eval:bambu:mismatch} & reported, confirmed, fixed \\
  Bambu HLS & miscompile & online* & reported, confirmed, fixed \\
  \bottomrule
\end{tabular} \\
\vphantom{\large A}*See \url{https://ymherklotz.github.io/fuzzing-hls/} for detailed bug reports
\end{table}
\subsection{Results across versions of an HLS tool} \definecolor{ribbon1}{HTML}{8dd3c7} \definecolor{ribbon2}{HTML}{b3de69} \definecolor{ribbon3}{HTML}{bebada} \definecolor{ribbon4}{HTML}{fb8072} \definecolor{ribbon5}{HTML}{80b1d3} \definecolor{ribbon6}{HTML}{fdb462} \begin{figure} \centering \begin{tikzpicture}[xscale=1.25,yscale=0.85] \draw[white, fill=ribbon1] (-1.0,4.1) -- (0.0,4.1) to [out=0,in=180] (2.0,4.1) to [out=0,in=180] (4.0,4.1) -- (6.0,4.1) -- %(7.55,3.325) -- (6.0,2.55) -- (4.0,2.55) to [out=180,in=0] (2.0,2.55) to [out=180,in=0] (0.0,2.55) -- (-1.0,2.55) -- cycle; \draw[white, fill=ribbon2] (-1.0,2.55) -- (0.0,2.55) to [out=0,in=180] (1.8,1.8) -- (2.2,1.8) to [out=0,in=180] (4.0,1.55) -- (6.0,1.55) -- %(7.3,0.9) -- (6.0,0.25) -- (4.0,0.25) to [out=180,in=0] (2.2,0.5) -- (1.8,0.5) to [out=180,in=0] (0.0,1.25) -- (-1.0,1.25) -- cycle; \draw[white, fill=ribbon3] (-1.0,1.25) -- (0.0,1.25) to [out=0,in=180] (1.8,2.55) -- (2.2,2.55) to [out=0,in=180] (4.0,0.25) -- (6.0,0.25) -- %(6.05,0.225) -- (6.0,0.2) -- (4.0,0.2) to [out=180,in=0] (2.2,2.5) -- (1.8,2.5) to [out=180,in=0] (0.0,1.2) -- (-1.0,1.2) -- cycle; \draw[white, fill=ribbon4] (-1.0,0.5) -- (0.0,0.5) to [out=0,in=180] (1.8,2.5) -- (2.2,2.5)to [out=0,in=180] (4.0,0.2) -- (6.0,0.2) -- %(6.2,0.1) -- (6.0,0.0) -- (4.0,0.0) to [out=180,in=0] (2.2,2.3) -- (1.8,2.3) to [out=180,in=0] (0.0,0.3) -- (-1.0,0.3) -- cycle; \draw[white, fill=ribbon5] (-1.0,1.2) -- (0.0,1.2) to [out=0,in=180] (1.8,0.5) -- (2.2,0.5) to [out=0,in=180] (4.0,2.55) -- (6.0,2.55) -- %(6.2,2.45) -- (6.0,2.35) -- (4.0,2.35) to [out=180,in=0] (2.2,0.3) -- (1.8,0.3) to [out=180,in=0] (0.0,1.0) -- (-1.0,1.0) -- cycle; \draw[white, fill=ribbon6] (-1.0,0.3) -- (0.0,0.3) to [out=0,in=180] (1.8,0.3) -- (2.2,0.3) to [out=0,in=180] (4.0,2.35) -- (6.0,2.35) -- %(6.3,2.2) -- (6.0,2.05) -- (4.0,2.05) to [out=180,in=0] (2.2,0.0) -- (1.8,0.0) to [out=180,in=0] (0.0,0.0) -- (-1.0,0.0) -- cycle; \draw[white, fill=black] (-0.4,4.1) rectangle (0.0,1.0); 
\draw[white, fill=black] (1.8,4.1) rectangle (2.2,2.3);
\draw[white, fill=black] (4.0,4.1) rectangle (4.4,2.05);
\node at (-0.2,4.5) {v2018.3};
\node at (2,4.5) {v2019.1};
\node at (4.2,4.5) {v2019.2};
%\node at (2,5) {Vivado HLS};
\node at (5.5,3.325) {31};
\node at (5.5,0.9) {26};
\node at (5.5,2.2) {6};
\node[white] at (-0.2,1.2) {62};
\node[white] at (2,2.5) {36};
\node[white] at (4.2,2.25) {41};
\end{tikzpicture}
\caption{A Sankey diagram that tracks \vivadotestcases{} test-cases through three different versions of Vivado HLS. The ribbons collect the test-cases that pass and fail together. The black bars are labelled with the total number of test-case failures per version. The 3573 test-cases that pass in all three versions are not depicted.}\label{fig:sankey_diagram}
\end{figure}

Besides studying the reliability of different HLS tools, we also studied the reliability of Vivado HLS over time. Figure~\ref{fig:sankey_diagram} shows the results of giving \vivadotestcases{} test-cases to Vivado HLS v2018.3, v2019.1 and v2019.2. Test-cases that pass and fail in the same versions are grouped together into a ribbon. For instance, the topmost ribbon represents the 31 test-cases that fail in all three versions of Vivado HLS. Other ribbons can be seen weaving in and out; these indicate that bugs were fixed or reintroduced in the various versions. Interestingly, the blue ribbon shows that there are test-cases that fail in v2018.3, pass in v2019.1, and then fail again in v2019.2! As in our Euler diagram, the numbers do not necessarily correspond to the number of actual bugs, though we can observe that there must be at least six unique bugs in Vivado HLS, given that each ribbon corresponds to at least one unique bug. This method of identifying unique bugs is similar to the ``correcting commits'' metric introduced by Chen et al.~\cite{chen16_empir_compar_compil_testin_techn}.
In general, it seems that each new release of Vivado HLS fixes many existing bugs but also introduces new ones; moreover, a bug that was fixed in one version is not guaranteed to remain fixed in subsequent versions.

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End: