path: root/introduction.tex
author    John Wickerson <j.wickerson@imperial.ac.uk>  2020-11-18 13:04:32 +0000
committer overleaf <overleaf@localhost>                2020-11-18 13:37:59 +0000
commit    fbf4644b30635c890b5ce1ec2e7b4b2482f67c01 (patch)
tree      685fb5f56c462d04380ad1267c8a63d6dfe63e1d /introduction.tex
parent    ac0c5651fb855d28eaeed5fb71dbf8f654944ae8 (diff)
Update on Overleaf.
Diffstat (limited to 'introduction.tex')
-rw-r--r--  introduction.tex  |  2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/introduction.tex b/introduction.tex
index d753bae..1ea640a 100644
--- a/introduction.tex
+++ b/introduction.tex
@@ -18,7 +18,7 @@ This undermines any reasoning conducted at the software level.
And there are reasons to doubt that HLS tools actually \emph{do} preserve equivalence. First, just like conventional compilers, HLS tools are large pieces of software that perform a series of complex analyses and transformations.
Second, unlike conventional compilers, HLS tools target HDL, which has superficial syntactic similarities to software languages but a subtly different semantics, making it easy to introduce behavioural mismatches between the output of the HLS tool and its input.
%Hence, the premise of this work is: Can we trust these compilers to translate high-level languages like C/C++ to HDL correctly?
-\JW{These doubts have been shown to be justified by... / These doubts have been borne out by... }
+\JW{These doubts have been shown to be justified by... / These doubts have been borne out by / corroborated... }
For instance, \citet{lidbury15_many_core_compil_fuzzin} had to abandon their attempt to fuzz-test Altera's (now Intel's) OpenCL compiler since it ``either crashed or emitted an internal compiler error'' on so many of their test inputs.
More recently, \citet{du+20} fuzz-tested three commercial HLS tools using Csmith~\cite{yang11_findin_under_bugs_c_compil}, and despite restricting the generated programs to the C fragment explicitly supported by all the tools, they still found that on average 2.5\% of test cases generated a design that did not match the behaviour of the input.
%