From dca94ceaf47afd42298fea1812c8549ca7f55462 Mon Sep 17 00:00:00 2001
From: John Wickerson
Date: Wed, 18 Nov 2020 13:02:05 +0000
Subject: Update on Overleaf.

---
 introduction.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/introduction.tex b/introduction.tex
index 9f9903a..d753bae 100644
--- a/introduction.tex
+++ b/introduction.tex
@@ -18,7 +18,7 @@
 This undermines any reasoning conducted at the software level. And there are reasons to doubt that HLS tools actually \emph{do} preserve equivalence. First, just like conventional compilers, HLS tools are large pieces of software that perform a series of complex analyses and transformations.
 Second, unlike conventional compilers, HLS tools target HDL, which has superficial syntactic similarities to software languages but a subtly different semantics, making it easy to introduce behavioural mismatches between the output of the HLS tool and its input.
 %Hence, the premise of this work is: Can we trust these compilers to translate high-level languages like C/C++ to HDL correctly?
-These doubts have been realised by
+\JW{These doubts have been shown to be justified by... / These doubts have been borne out by... }
 For instance, \citet{lidbury15_many_core_compil_fuzzin} had to abandon their attempt to fuzz-test Altera's (now Intel's) OpenCL compiler since it ``either crashed or emitted an internal compiler error'' on so many of their test inputs.
 More recently, Du et al.~\cite{du+20} fuzz-tested three commercial HLS tools using Csmith~\cite{yang11_findin_under_bugs_c_compil}, and despite restricting the generated programs to the C fragment explicitly supported by all the tools, they still found that on average 2.5\% of test cases generated a design that did not match the behaviour of the input.
 %
--