author    Yann Herklotz <git@yannherklotz.com>    2021-03-30 18:48:23 +0100
committer Yann Herklotz <git@yannherklotz.com>    2021-03-30 18:48:23 +0100
commit    0f40e13fab830957ac055e076055280cdb82efff (patch)
tree      45853a4552e3535f5ae8993a12d16537f984d23e /related.tex
parent    4eb24224015629936727e37f60738ac412578f50 (diff)
Fix use of test-case
Diffstat (limited to 'related.tex')
-rw-r--r--   related.tex   4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/related.tex b/related.tex
index b2edec2..208f1fc 100644
--- a/related.tex
+++ b/related.tex
@@ -1,6 +1,6 @@
\section{Related Work}
-The only other work of which we are aware on fuzzing HLS tools is that by Lidbury et al. \cite{lidbury15_many_core_compil_fuzzin}, who tested several OpenCL compilers, including an HLS compiler from Altera (now Intel). They were only able to subject that compiler to superficial testing because so many of the test-cases \JW{Need to audit `test case' vs `test-case' throughout. I don't mind which we use; I mildly prefer `test-case' because it's less ambiguous when parsing.} they generated led to it crashing. In comparison to our work: where Lidbury et al. generated target-independent OpenCL programs that could be used to test HLS tools and conventional compilers alike, we specifically generate programs that are tailored for HLS (e.g. with HLS-specific pragmas and only including supported constructs) with the aim of testing the HLS tools more deeply. Another difference is that where we test using sequential C programs, they test using highly concurrent OpenCL programs, and thus have to go to great lengths to ensure that any discrepancies observed between compilers cannot be attributed to the inherent nondeterminism of concurrency.
+The only other work of which we are aware on fuzzing HLS tools is that by Lidbury et al. \cite{lidbury15_many_core_compil_fuzzin}, who tested several OpenCL compilers, including an HLS compiler from Altera (now Intel). They were only able to subject that compiler to superficial testing because so many of the test-cases they generated led to it crashing. In comparison to our work: where Lidbury et al. generated target-independent OpenCL programs that could be used to test HLS tools and conventional compilers alike, we specifically generate programs that are tailored for HLS (e.g. with HLS-specific pragmas and only including supported constructs) with the aim of testing the HLS tools more deeply. Another difference is that where we test using sequential C programs, they test using highly concurrent OpenCL programs, and thus have to go to great lengths to ensure that any discrepancies observed between compilers cannot be attributed to the inherent nondeterminism of concurrency.
Other stages of the FPGA toolchain have been subjected to fuzzing. In previous work~\cite{verismith}, we tested several FPGA synthesis tools using randomly generated Verilog programs. Where that work concentrated on the RTL-to-netlist stage of hardware design, this work focuses on the C-to-RTL stage.
@@ -11,7 +11,7 @@ In the SPARK HLS tool~\cite{gupta03_spark}, some compiler passes, such as schedu
Finally, the Catapult C HLS tool~\cite{mentor20_catap_high_level_synth} is designed only to produce an output netlist if it can mechanically prove it equivalent to the input program; it should therefore never produce wrong RTL. In future work, we intend to test Catapult C alongside Vivado HLS, LegUp, Intel i++, and Bambu.
%more prevalent these were prioritised.
% JW: We're not really sure that LegUp is more prevalent than Catapult HLS. Indeed, it probably isn't!
-%\JW{Obvious reader question at this point: why not test that claim by giving our Csmith test cases to Catapult C too? Can we address that here? No worries if not; but shall we try and do that after the deadline anyway?}\YH{Yes, definitely, it would be great to get an idea of how Catapult C performs, and I currently have it installed already. I have added a small sentence for that now, but let me know if I should mention this in the conclusion instead though. }
+%\JW{Obvious reader question at this point: why not test that claim by giving our Csmith test-cases to Catapult C too? Can we address that here? No worries if not; but shall we try and do that after the deadline anyway?}\YH{Yes, definitely, it would be great to get an idea of how Catapult C performs, and I currently have it installed already. I have added a small sentence for that now, but let me know if I should mention this in the conclusion instead though. }
%%% Local Variables:
%%% mode: latex