commit 546590b262efafc33133059f1184268dd25d89fe (patch)
Author:    John Wickerson <j.wickerson@imperial.ac.uk>  2021-03-26 21:16:22 +0000
Committer: overleaf <overleaf@localhost>  2021-03-28 14:43:30 +0000
tree   f379f428c06d1fafe2e0932d03cc500d8a40c5de /related.tex
parent b16f3242909e90bdb547ab1500d46f5e38527981 (diff)
Update on Overleaf.
Diffstat (limited to 'related.tex')
-rw-r--r--  related.tex | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/related.tex b/related.tex
index b2fd442..b2edec2 100644
--- a/related.tex
+++ b/related.tex
@@ -1,14 +1,14 @@
\section{Related Work}
-The only other work of which we are aware on fuzzing HLS tools is that by Lidbury et al. \cite{lidbury15_many_core_compil_fuzzin}, who tested several OpenCL compilers, including an HLS compiler from Altera (now Intel). They were only able to subject that compiler to superficial testing because so many of the test-cases they generated led to it crashing. In comparison to our work: where Lidbury et al. generated target-independent OpenCL programs that could be used to test HLS tools and conventional compilers alike, we specifically generate programs that are tailored for HLS (e.g. with HLS-specific pragmas and only including supported constructs) with the aim of testing the HLS tools more deeply. Another difference is that where we test using sequential C programs, they test using highly concurrent OpenCL programs, and thus have to go to great lengths to ensure that any discrepancies observed between compilers cannot be attributed to the inherent nondeterminism of concurrency.
+The only other work of which we are aware on fuzzing HLS tools is that by Lidbury et al. \cite{lidbury15_many_core_compil_fuzzin}, who tested several OpenCL compilers, including an HLS compiler from Altera (now Intel). They were only able to subject that compiler to superficial testing because so many of the test-cases \JW{Need to audit `test case' vs `test-case' throughout. I don't mind which we use; I mildly prefer `test-case' because it's less ambiguous when parsing.} they generated led to it crashing. In comparison to our work: where Lidbury et al. generated target-independent OpenCL programs that could be used to test HLS tools and conventional compilers alike, we specifically generate programs that are tailored for HLS (e.g. with HLS-specific pragmas and only including supported constructs) with the aim of testing the HLS tools more deeply. Another difference is that where we test using sequential C programs, they test using highly concurrent OpenCL programs, and thus have to go to great lengths to ensure that any discrepancies observed between compilers cannot be attributed to the inherent nondeterminism of concurrency.
-Other stages of the FPGA toolchain have been subjected to fuzzing. Herklotz et al.~\cite{verismith} tested several FPGA synthesis tools using randomly generated Verilog programs. Where they concentrated on the RTL-to-netlist stage of hardware design, we focus our attention on the earlier C-to-RTL stage.
+Other stages of the FPGA toolchain have been subjected to fuzzing. In previous work~\cite{verismith}, we tested several FPGA synthesis tools using randomly generated Verilog programs. Where that work concentrated on the RTL-to-netlist stage of hardware design, this work focuses on the C-to-RTL stage.
Several authors have taken steps toward more rigorously engineered HLS tools that may be more resilient to testing campaigns such as ours.
The Handel-C compiler by Perna and Woodcock~\cite{perna12_mechan_wire_wise_verif_handel_c_synth} has been mechanically proven correct, at least in part, using the HOL theorem prover; however, the tool does not support C as input directly, so it is not amenable to our C-based fuzzing approach.
Ramanathan et al.~\cite{ramanathan+17} proved their implementation of C atomic operations in LegUp correct up to a bound using model checking; however, our testing campaign is not applicable to their implementation because we do not generate concurrent C programs.
In the SPARK HLS tool~\cite{gupta03_spark}, some compiler passes, such as scheduling, are mechanically validated during compilation~\cite{chouksey20_verif_sched_condit_behav_high_level_synth}; unfortunately, this tool is no longer available.
-Finally, the Catapult C HLS tool~\cite{mentor20_catap_high_level_synth} is designed to produce an output netlist only if it can mechanically prove that netlist equivalent to the input program; it should therefore never produce wrong RTL. In future work, we intend to test Catapult C alongside Vivado HLS, LegUp, and Intel i++.
+Finally, the Catapult C HLS tool~\cite{mentor20_catap_high_level_synth} is designed to produce an output netlist only if it can mechanically prove that netlist equivalent to the input program; it should therefore never produce wrong RTL. In future work, we intend to test Catapult C alongside Vivado HLS, LegUp, Intel i++, and Bambu.
%more prevalent these were prioritised.
% JW: We're not really sure that LegUp is more prevalent than Catapult HLS. Indeed, it probably isn't!
%\JW{Obvious reader question at this point: why not test that claim by giving our Csmith test cases to Catapult C too? Can we address that here? No worries if not; but shall we try and do that after the deadline anyway?}\YH{Yes, definitely, it would be great to get an idea of how Catapult C performs, and I currently have it installed already. I have added a small sentence for that now, but let me know if I should mention this in the conclusion instead though. }
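
As an aside, the test-cases "tailored for HLS" described in the hunk above might look something like the following minimal C sketch. This is a hypothetical illustration rather than a program from the paper's test suite: the pragma shown follows Vivado HLS conventions, and the function and variable names are invented.

#include <stdint.h>

/* Hypothetical fuzzer-style test-case: sequential C restricted to
 * constructs that HLS tools support (no recursion, no dynamic
 * allocation), annotated with an HLS-specific pragma. */
uint32_t kernel(uint32_t a[8]) {
  uint32_t acc = 0;
  for (int i = 0; i < 8; i++) {
#pragma HLS unroll
    acc += a[i] * 3u;
  }
  return acc;
}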