path: root/eval.tex
authorYann Herklotz <ymh15@ic.ac.uk>2020-09-15 08:57:28 +0000
committeroverleaf <overleaf@localhost>2020-09-15 09:01:53 +0000
commit4daefd9bed7dea9500b3cc626266fd979fb2edcc (patch)
tree138282b20b6de440a2cd0468a744364e00e4b26a /eval.tex
parentebaf38ee9fa9ed6a74230c93fa2d15df44521c9a (diff)
Update on Overleaf.
Diffstat (limited to 'eval.tex')
-rw-r--r-- eval.tex | 69
1 file changed, 40 insertions(+), 29 deletions(-)
diff --git a/eval.tex b/eval.tex
index 1708519..c3c66fc 100644
--- a/eval.tex
+++ b/eval.tex
@@ -13,9 +13,9 @@
\draw[white] (-4.4,4.4) ellipse (3.75 and 2.75); % making the
\draw[white] (-10.2,4.4) ellipse (3.75 and 2.75); % outlines
\draw[white] (-7.3,2) ellipse (3.75 and 2.75); % fully opaque
- \node[align=center] at (-10.2,6.3) {\Large\textsf{\textbf{Xilinx Vivado HLS}} \\ \textsf{\textbf{(all versions)}}};
- \node at (-4.4,6.3) {\Large\textsf{\textbf{Intel HLS Compiler}}};
- \node at (-7.3,0) {\Large\textsf{\textbf{LegUp}}};
+ \node[align=center] at (-10.2,6.3) {\Large\textsf{\textbf{Xilinx Vivado HLS}} \\ \Large\textsf{\textbf{2019.1}}};
+ \node at (-4.4,6.3) {\Large\textsf{\textbf{Intel i++}}};
+ \node at (-7.3,0) {\Large\textsf{\textbf{LegUp 4.0}}};
\node at (-5.5,3) {\Huge 1 (\textcolor{red}{1})};
\node at (-9.1,3) {\Huge 4 (\textcolor{red}{0})};
@@ -27,7 +27,7 @@
\node at (-13.6,-0.5) {\Huge 5856};
\end{tikzpicture}
}
-\caption{A Venn diagram showing the number of failures in each tool out of 6700 test cases that were run. Overlapping regions mean that the test cases failed in all those tools. The numbers in parentheses represent the number of test cases that timed out.}\label{fig:existing_tools}
+\caption{A Venn diagram showing the number of failures in each tool out of 6700 test-cases that were run. Overlapping regions mean that the test-cases failed in all those tools. The numbers in parentheses represent the number of test-cases that timed out.}\label{fig:existing_tools}
\end{figure}
\begin{table}
@@ -37,20 +37,20 @@
\midrule
Xilinx Vivado HLS (all versions) & $\ge 2$\\
LegUp HLS & $\ge 3$\\
- Intel HLS Compiler & $\ge 1$\\
+ Intel i++ & $\ge 1$\\
\bottomrule
\end{tabular}
\caption{Unique bugs found in each tool.}
\label{tab:unique_bugs}
\end{table}
-We generate 6700 test cases and provide them to three HLS tools: Vivado HLS, LegUp HLS and Intel HLS.
-We use the same test cases across all tools for fair comparison.
-We were able to test three different versions of Vivado HLS (v2018.3,v2019.1 and v2019.2).
+We generate 6700 test-cases and provide them to three HLS tools: Vivado HLS, LegUp HLS and Intel i++.
+We use the same test-cases across all tools for fair comparison.
+We were able to test three different versions of Vivado HLS (v2018.3, v2019.1 and v2019.2).
We were only able to test one version of LegUp: 4.0.
At the time of writing, LegUp 7.5 is still GUI-based, and therefore we could not script our tests.
-However, we are able to reproduce bugs found in LegUp 4.0 in LegUp 7.5.
-Finally, we tested one version of Intel HLS (vXXXX.X).
+However, we were able to manually reproduce bugs found in LegUp 4.0 in LegUp 7.5.
+Finally, we tested one version of Intel i++ (vXXXX.X).
% Three different tools were tested, including three different versions of Vivado HLS. We were only able to test one version of LegUp HLS (version 4.0), because although LegUp 7.5 is available, it is GUI-based and not amenable to scripting. However, bugs we found in LegUp 4.0 were reproduced manually in LegUp 7.5.
% LegUp and Vivado HLS were run under Linux, while the Intel HLS Compiler was run under Windows.
@@ -58,8 +58,8 @@ Finally, we tested one version of Intel HLS (vXXXX.X).
\subsection{Results across different HLS tools}
Figure~\ref{fig:existing_tools} shows a Venn diagram of our results.
-We see that 167 (2.5\%), 83 (1.2\%) and 26 (0.4\%) test-cases fail in LegUp, Vivado HLS and Intel HLS respectively.
-Despite Intel HLS having the lowet failure rate, it has the highest time-out rate with 540 programs, because of its long compilation time.
+We see that 167 (2.5\%), 83 (1.2\%) and 26 (0.4\%) test-cases fail in LegUp, Vivado HLS and Intel i++ respectively.
+Despite i++ having the lowest failure rate, it has the highest time-out rate (540 test-cases), because of its remarkably long compilation time.
% We remark that although the Intel HLS Compiler had the smallest number of confirmed test-case failures, it had the most time-outs (which could be masking additional failures)
Note that the absolute numbers here do not necessarily correspond to the number of bugs found.
Multiple programs could crash or fail due to the same bug.
@@ -68,9 +68,9 @@ We write `$\ge$' in the table to indicate that all the bug counts are lower boun
\subsection{Results across versions of an HLS tool}
-Besides comparing the reliability of different HLS tools, we also investigated the reliability of Vivado HLS over time. Figure~\ref{fig:sankey_diagram} shows the results of giving 3645 test cases to Vivado HLS 2018.3, 2019.1 and 2019.2.
-Test cases that pass and fail in the same tools are grouped together into a ribbon.
-For instance, the topmost ribbon represents the 31 test-cases that fail in all three versions of Vivado HLS. Other ribbons can be seen weaving in and out; these indicate that bugs were fixed or reintroduced in the various versions. The diagram demonstrates that Vivado HLS 2018.3 contains the most failing test cases compared to the other versions, having 62 test cases fail in total. %Interestingly, Vivado HLS 2019.1 and 2019.2 have a different number of failing test cases, meaning feature improvements that introduced bugs as well as bug fixes between those minor versions.
+Besides comparing the reliability of different HLS tools, we also investigated the reliability of Vivado HLS over time. Figure~\ref{fig:sankey_diagram} shows the results of giving 3645 test-cases to Vivado HLS 2018.3, 2019.1 and 2019.2.
+Test-cases that pass and fail in the same tools are grouped together into a ribbon.
+For instance, the topmost ribbon represents the 31 test-cases that fail in all three versions of Vivado HLS. Other ribbons can be seen weaving in and out; these indicate that bugs were fixed or reintroduced in the various versions. The diagram demonstrates that Vivado HLS 2018.3 contains the most failing test-cases of the three versions, with 62 test-cases failing in total. %Interestingly, Vivado HLS 2019.1 and 2019.2 have a different number of failing test cases, meaning feature improvements that introduced bugs as well as bug fixes between those minor versions.
Interestingly, as an indicator of the reliability of HLS tools, the blue ribbon shows that there are test-cases that fail in v2018.3, pass in v2019.1, but then fail again in v2019.2.
\definecolor{ribbon1}{HTML}{8dd3c7}
@@ -112,17 +112,18 @@ Interestingly, as an indicator of reliability of HLS tools, the blue ribbon show
\node[white] at (2,2.5) {36};
\node[white] at (4,2.25) {41};
\end{tikzpicture}
- \caption{A Sankey diagram that tracks 3645 test cases through three different versions of Vivado HLS. The ribbons collect the test cases that pass and fail together. The 3573 test cases that pass in all three versions are not depicted.
+ \caption{A Sankey diagram that tracks 3645 test-cases through three different versions of Vivado HLS. The ribbons collect the test-cases that pass and fail together. The 3573 test-cases that pass in all three versions are not depicted.
}\label{fig:sankey_diagram}
\end{figure}
% \NR{Why are there missing numbers in the ribbons?}
-As in our Venn diagram, the absolute numbers in Figure~\ref{fig:sankey_diagram} do not necessary correspond to the number of bugs. However, we can deduce from this diagram that there must be at least six unique bugs in Vivado HLS, given that a ribbon must contain at least one unique bug. \YH{Contradicts value of 3 in Table~\ref{tab:unique_bugs}, maybe I can change that to 6?} \JW{I'd leave it as-is personally; we have already put a `$\ge$' symbol in the table, so I think it's fine.}
+As in our Venn diagram, the absolute numbers in Figure~\ref{fig:sankey_diagram} do not necessarily correspond to the number of bugs. However, we can deduce from this diagram that there must be at least six unique bugs in Vivado HLS, given that a ribbon must contain at least one unique bug.
+%\YH{Contradicts value of 3 in Table~\ref{tab:unique_bugs}, maybe I can change that to 6?} \JW{I'd leave it as-is personally; we have already put a `$\ge$' symbol in the table, so I think it's fine.}
It can also be seen that Vivado HLS v2018.3 must have at least four individual bugs, two of which were fixed and two of which remained in Vivado HLS v2019.1. However, the release of v2019.1 also introduced new bugs of its own. % Finally, for version 2019.2 of Vivado HLS, there seems to be a bug that was reintroduced which was also present in Vivado 2018.3, in addition to a new bug. In general it seems like each release of Vivado HLS will have new bugs present, however, will also contain many previous bug fixes. However, it cannot be guaranteed that a bug that was previously fixed will remain fixed in future versions as well.
\subsection{Some specific bugs found}
-This section describes some of the bugs that were found in the various tools that were tested. We describe two bugs in LegUp and one in Vivado HLS; in each case, the bug was first reduced automatically using \creduce{}, and then reduced further manually to achieve the minimal test case. Although we did find test-case failures in the Intel HLS Compiler, the very long compilation times for that tool meant that we did not have time to reduce any of the failures down to an example that is minimal enough to present here.
+This section describes some of the bugs that were found in the various tools that were tested. We describe two bugs in LegUp and one in Vivado HLS; in each case, the bug was first reduced automatically using \creduce{}, and then reduced further manually to achieve the minimal test-case. Although we did find test-case failures in Intel i++, the very long compilation times for that tool meant that we did not have time to reduce any of the failures down to an example that is minimal enough to present here.
\subsubsection{LegUp assertion error}
@@ -134,12 +135,14 @@ This shows that there is a bug in one of the compilation passes in LegUp, howeve
\begin{minted}{c}
int a[2][2][1] = {{{0},{1}},{{0},{0}}};
-int main() { a[0][1][0] = 1; }
+int main() {
+ a[0][1][0] = 1;
+}
\end{minted}
\caption{This program causes an assertion failure in LegUp HLS when \texttt{NO\_INLINE} is set.}\label{fig:eval:legup:assert}
\end{figure}
-The buggy test case has to do with initialisation and assignment to a three-dimensional array, for which the above piece of code is the minimal example. However, in addition to that it requires the \texttt{NO\_INLINE} flag to be set, which disables function inlining. The code initialises the array with zeroes except for \texttt{a[0][1][0]}, which is set to one. Then the main function assigns one to that same location. This code on its own should not actually produce a result and should just terminate by returning 0, which is also what the design that LegUp generates does when the \texttt{NO\_INLINE} flag is turned off.
+The buggy test-case concerns initialisation of, and assignment to, a three-dimensional array, for which the code above is the minimal example. In addition, it requires the \texttt{NO\_INLINE} flag to be set, which disables function inlining. The code initialises the array with zeroes except for \texttt{a[0][1][0]}, which is set to one. The main function then assigns one to that same location. On its own, this code should have no observable effect beyond terminating with return value 0, which is indeed what the design that LegUp generates does when the \texttt{NO\_INLINE} flag is turned off.
%The following code also produces an assertion error in LegUp, which is a different one this time. This bug was not discovered during the main test runs of 10 thousand test cases, but beforehand, which meant that we disabled unions from being generated. However, this bug also requires the \texttt{volatile} keyword which seems to be the reason for quite a few mismatches in LegUp and Vivado.
%
@@ -153,7 +156,7 @@ The buggy test case has to do with initialisation and assignment to a three-dime
\subsubsection{LegUp miscompilation}
-The test case in Figure~\ref{fig:eval:legup:wrong} produces an incorrect Verilog in LegUp 4.0, which means that the results of RTL simulation is different to the C execution.
+The test-case in Figure~\ref{fig:eval:legup:wrong} produces incorrect Verilog in LegUp 4.0, meaning that the result of RTL simulation differs from that of C execution.
\begin{figure}
\begin{minted}{c}
@@ -162,19 +165,23 @@ int b = 1;
int main() {
int d = 1;
- if (d + a) b || 1;
- else b = 0;
+ if (d + a)
+ b || 1;
+ else
+ b = 0;
return b;
}
\end{minted}
\caption{An output mismatch: LegUp HLS returns 0 but the correct result is 1.}\label{fig:eval:legup:wrong}
\end{figure}
-In the code above, \texttt{b} has value 1 when run in GCC, but has value 0 when run with LegUp 4.0. If the \texttt{volatile} keyword is removed from \texttt{a}, then the Verilog produces the correct result. As \texttt{a} and \texttt{d} are constants, the \code{if} statement should always produce go into the \texttt{true} branch, meaning \texttt{b} should never be set to 0. The \texttt{true} branch of the \code{if} statement only executes an expression which is not assigned to any variable, meaning the initial state of all variables should not change. However, LegUp HLS generates a design which enters the \texttt{else} branch instead and assigns \texttt{b} to be 0. The cause of this bug seems to be the \texttt{volatile} keyword and the analysis that is performed to simplify the \code{if} statement.
+In the code above, \texttt{b} has value 1 when run in GCC, but has value 0 when run with LegUp 4.0. If the \texttt{volatile} keyword is removed from \texttt{a}, then the Verilog produces the correct result. As \texttt{a} and \texttt{d} are constants, the \code{if} statement should always go into the \texttt{true} branch, meaning \texttt{b} should never be set to 0. The \texttt{true} branch of the \code{if} statement only executes an expression that is not assigned to any variable, so the initial state of all variables should not change. However, LegUp HLS generates a design that enters the \texttt{else} branch instead and assigns \texttt{b} to be 0. The cause of this bug seems to be the use of the \texttt{volatile} keyword, which interferes with the analysis that attempts to simplify the \code{if} statement.
\subsubsection{Vivado HLS miscompilation}
-Figure~\ref{fig:eval:vivado:mismatch} shows code that does not output the right value when compiled with all Vivado HLS versions, as it returns \texttt{0x0} with Vivado HLS whereas it should be returning \texttt{0xF}. This test case is much longer compared to the other test cases that were reduced and could not be made any smaller, as everything in the code seems to be necessary to trigger the bug.
+Figure~\ref{fig:eval:vivado:mismatch} shows code that is miscompiled by all versions of Vivado HLS that we tested.
+It returns \texttt{0x0} with Vivado HLS, instead of \texttt{0xF}. This test-case is much larger than the other test-cases that were reduced.
+We could not reduce this program any further, as everything in the code appears to be necessary to trigger the bug.
The array \texttt{a} is initialised to all zeroes, as are the other global variables \texttt{g} and \texttt{c}, so as not to introduce any undefined behaviour. However, \texttt{g} is also given the \texttt{volatile} keyword, which ensures that the variable is not optimised away. The function \texttt{d} accumulates each value \texttt{b} that it is passed into a hash stored in \texttt{c}. Each \texttt{b} is eight bits wide, so function \texttt{e} calls \texttt{d} seven times for some of the bytes in the 64-bit value of \texttt{f} that it is passed. Finally, in the main function, the array is partially initialised with a \code{for} loop, after which \texttt{e} is called twice, once on the volatile variable \texttt{g} and once on a constant. Interestingly, the second call, with the constant, is also necessary to trigger the bug.
@@ -184,7 +191,9 @@ volatile unsigned int g = 0;
int a[256] = {0};
int c = 0;
-void d(char b) { c = (c & 4095) ^ a[(c ^ b) & 15]; }
+void d(char b) {
+ c = (c & 4095) ^ a[(c ^ b) & 15];
+}
void e(long f) {
d(f); d(f >> 8); d(f >> 16); d(f >> 24);
@@ -192,12 +201,14 @@ void e(long f) {
}
int main() {
- for (int i = 0; i < 56; i++) a[i] = i;
- e(g); e(-2L);
+ for (int i = 0; i < 56; i++)
+ a[i] = i;
+ e(g);
+ e(-2L);
return c;
}
\end{minted}
-\caption{An output mismatch where GCC returns \texttt{0xF}, whereas Vivado HLS return \texttt{0x0}.}\label{fig:eval:vivado:mismatch}
+\caption{An output mismatch: Vivado HLS returns \texttt{0x0} but the correct result is \texttt{0xF}.}\label{fig:eval:vivado:mismatch}
\end{figure}