author     Yann Herklotz <ymh15@ic.ac.uk>  2020-09-15 08:57:28 +0000
committer  overleaf <overleaf@localhost>   2020-09-15 09:01:53 +0000
commit     4daefd9bed7dea9500b3cc626266fd979fb2edcc (patch)
tree       138282b20b6de440a2cd0468a744364e00e4b26a /conclusion.tex
parent     ebaf38ee9fa9ed6a74230c93fa2d15df44521c9a (diff)
Update on Overleaf.
Diffstat (limited to 'conclusion.tex')
-rw-r--r--  conclusion.tex | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/conclusion.tex b/conclusion.tex
index 273b705..e027b27 100644
--- a/conclusion.tex
+++ b/conclusion.tex
@@ -1,11 +1,11 @@
\section{Conclusion}
-We have shown how existing fuzzing tools can be modified so that their outputs are compatible with HLS tools. We have used this testing framework to run 6,700 test cases through three different HLS tools, including 3,645 test cases through three different version of Vivado HLS to show how bugs are fixed and introduced. In total, we found at least 6 individual and unique bugs in all the tools, which have been reduced, analysed, and reported to the tool vendors. These bugs include crashes as well as instances of generated designs not behaving in the same way as the original code.
+We have shown how existing fuzzing tools can be modified so that their outputs are compatible with HLS tools. We have used this testing framework to run 6,700 test-cases through three different HLS tools, and 3,645 test-cases through three different versions of Vivado HLS to show how bugs are fixed and introduced. In total, we found at least 6 unique bugs across the tools. These bugs include crashes as well as instances of generated designs not behaving in the same way as the original code.
One can always question how much the bugs found by fuzzers really \emph{matter}, given that they are usually found by combining language features in ways that are vanishingly unlikely to occur `in the wild'~\cite{marcozzi+19}. This question is especially pertinent in our particular context of HLS tools, which are well known to restrict the language features they handle. Nevertheless, we would argue that any error in an HLS tool is worth identifying, because it has the potential to cause problems, either now or in the future. And when HLS tools \emph{do} go wrong (or indeed any sort of compiler for that matter), it is particularly infuriating for end-users, because it is so difficult to identify whether the fault lies with the tool or with the program it has been given to compile.
Further work could be done on supporting more HLS tools, especially ones that claim to prove that their output is correct before terminating. This could give an indication of how effective these proofs are, and how often they are actually able to complete their equivalence proofs during compilation within a feasible timescale.
-As HLS is becoming increasingly relied upon, it is important to make sure that HLS tools are also reliable. We hope that this work further motivates the need for rigorous engineering of HLS tools, either by validating that each output the tool produces is correct or by proving the HLS tool itself correct once and for all.
+As HLS is becoming increasingly relied upon, it is important to make sure that HLS tools are also reliable. We hope that this work further motivates the need for rigorous engineering of HLS tools, either by validating that each output the tool produces is correct or by proving the HLS tool itself correct once and for all.
%%% Local Variables:
%%% mode: latex