| author | Yann Herklotz <git@yannherklotz.com> | 2020-11-19 22:47:13 +0000 |
|---|---|---|
| committer | Yann Herklotz <git@yannherklotz.com> | 2020-11-19 22:47:13 +0000 |
| commit | 0159982015b96224fd5a67ab57cd61d72b625177 | |
| tree | 821308db62d7b67e04e813e37ce2c40ea5db24f1 /introduction.tex | |
| parent | 9ef3587cdb6f58e8f28c6b8ac98ca43255db0d6b | |
Add unpublished citation
Diffstat (limited to 'introduction.tex')
-rw-r--r-- | introduction.tex | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/introduction.tex b/introduction.tex
index 59e0fce..656d17c 100644
--- a/introduction.tex
+++ b/introduction.tex
@@ -21,7 +21,7 @@ Indeed, there are reasons to doubt that HLS tools actually \emph{do} always pres
 %Other reasons are more specific: For instance, Vivado HLS has been shown to apply pipelining optimisations incorrectly\footnote{\url{https://bit.ly/vivado-hls-pipeline-bug}} or to silently generate wrong code should the programmer stray outside the fragment of C that it supports.\footnote{\url{https://bit.ly/vivado-hls-pointer-bug}}
 Meanwhile, \citet{lidbury15_many_core_compil_fuzzin} had to abandon their attempt to fuzz-test Altera's (now Intel's) OpenCL compiler since it ``either crashed or emitted an internal compiler error'' on so many of their test inputs.
-And more recently, Du et al.~\cite{du+20} fuzz-tested three commercial HLS tools using Csmith~\cite{yang11_findin_under_bugs_c_compil}, and despite restricting the generated programs to the C fragment explicitly supported by all the tools, they still found that on average 2.5\% of test cases generated a design that did not match the behaviour of the input.
+And more recently, \citet{du_fuzzin_high_level_synth_tools} fuzz-tested three commercial HLS tools using Csmith~\cite{yang11_findin_under_bugs_c_compil}, and despite restricting the generated programs to the C fragment explicitly supported by all the tools, they still found that on average 2.5\% of test cases generated a design that did not match the behaviour of the input.
 \paragraph{Existing workarounds}