path: root/introduction.tex
author: John Wickerson <j.wickerson@imperial.ac.uk> 2020-11-18 13:02:05 +0000
committer: overleaf <overleaf@localhost> 2020-11-18 13:03:35 +0000
commit: dca94ceaf47afd42298fea1812c8549ca7f55462 (patch)
tree: 73ad0101052c4e2f3caf3b704a7a0ee32acbb25c /introduction.tex
parent: 32a81d31ecb5cae1e4c464dcc6f7aefc8d1610ab (diff)
Update on Overleaf.
Diffstat (limited to 'introduction.tex')
-rw-r--r--  introduction.tex | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/introduction.tex b/introduction.tex
index 9f9903a..d753bae 100644
--- a/introduction.tex
+++ b/introduction.tex
@@ -18,7 +18,7 @@ This undermines any reasoning conducted at the software level.
And there are reasons to doubt that HLS tools actually \emph{do} preserve equivalence. First, just like conventional compilers, HLS tools are large pieces of software that perform a series of complex analyses and transformations.
Second, unlike conventional compilers, HLS tools target an HDL, which has superficial syntactic similarities to software languages but subtly different semantics, making it easy to introduce behavioural mismatches between the output of the HLS tool and its input.
%Hence, the premise of this work is: Can we trust these compilers to translate high-level languages like C/C++ to HDL correctly?
-These doubts have been realised by
+\JW{These doubts have been shown to be justified by... / These doubts have been borne out by... }
For instance, \citet{lidbury15_many_core_compil_fuzzin} had to abandon their attempt to fuzz-test Altera's (now Intel's) OpenCL compiler since it ``either crashed or emitted an internal compiler error'' on so many of their test inputs.
More recently, \citet{du+20} fuzz-tested three commercial HLS tools using Csmith~\cite{yang11_findin_under_bugs_c_compil}, and despite restricting the generated programs to the C fragment explicitly supported by all the tools, they still found that on average 2.5\% of test cases generated a design that did not match the behaviour of the input.
%