author	n.ramanathan14 <n.ramanathan14@imperial.ac.uk>	2020-09-14 20:06:48 +0000
committer	overleaf <overleaf@localhost>	2020-09-14 20:07:05 +0000
commit	2a7c2703bc85575622d051ca6e38e9e2717325cf (patch)
tree	9e3e6babe98a1fa1ff28e44c0720eed97b69ab50
parent	92cb76e238f1d5773443c80cf8fcadff6e3892bd (diff)
Update on Overleaf.
-rw-r--r--method-new.tex7
-rw-r--r--related.tex3
2 files changed, 6 insertions, 4 deletions
diff --git a/method-new.tex b/method-new.tex
index 0e471f3..7dcee51 100644
--- a/method-new.tex
+++ b/method-new.tex
@@ -174,13 +174,14 @@ These crashes can range from compilation errors and failing assertions to segmentation faults
Both programs that exhibit mismatching return values and that crash the HLS tools are provided to the reduction stage to further minimise the programs and identify the root cause(s).
\paragraph{Tool-specific trivia}
-Vivado HLS implement C and RTL co-simulation by default.
+Vivado HLS implements C and RTL simulation by default.
We were only able to test one version of LegUp HLS (4.0).
LegUp 7.5 is GUI-based and hence was not amenable to scripting at the time of testing.
-For both these tools, we set a 5-minute and 2-hour for C simulation and RTL simulation respectively.
+For all tools, we set a 5-minute time-out for C simulation and a 2-hour time-out for RTL simulation.
There are cases in which it takes more than 2 hours to generate the RTL and simulate it.
In such cases, we do not count them as failures or crashes.
-When testing Intel HLS, we place three time-outs since its execution is generally slower than other HLS tools: a \ref{XX}-minute time-out for C compilation, a \ref{XX}-hour time out for HLS compilation and a \ref{XX}-hour time-out for co-simulation.
+% When testing Intel HLS, we place three time-outs since its execution is generally slower than other HLS tools: a \ref{XX}-minute time-out for C compilation, a \ref{XX}-hour time out for HLS compilation and a \ref{XX}-hour time-out for co-simulation.
% And the number of timeouts placed has increased to 4. The first timeout is set when compiling the C++ program for the CPU, returning an executable once finished. The second timeout is placed when running the executable to get the C++ result. The third timeout, which has been given the longest period, covers synthesising the design and generating the co-simulation executable. Finally, running the co-simulation executable requires the fourth timeout. The test case is discarded at any timeout period if the task is not finished within the time limit.
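The per-stage time-outs described in this hunk could be enforced in a harness along the following lines. This is a minimal sketch, assuming GNU coreutils `timeout` (which exits with status 124 on a time-out); the simulated commands here are placeholders, not the real Vivado HLS, LegUp or Intel HLS invocations.

```shell
#!/usr/bin/env bash
# Sketch: enforce a per-stage time budget; a time-out is recorded
# separately and not counted as a failure or crash.

CSIM_LIMIT=300     # 5-minute C-simulation budget, in seconds
RTLSIM_LIMIT=7200  # 2-hour RTL-simulation budget, in seconds

run_stage() {
  # $1 = time limit in seconds, remaining args = command to run
  local limit=$1; shift
  timeout "$limit" "$@"
}

# A stage that finishes within its budget (placeholder command)
run_stage "$CSIM_LIMIT" true && echo "csim: ok"

# A stage that exceeds its budget: GNU timeout returns 124
run_stage 1 sleep 5
if [ $? -eq 124 ]; then
  echo "rtlsim: time-out (not counted as a failure or crash)"
fi
```

A real harness would substitute the tool-specific simulation commands and log the 124 exit status into the time-out bucket rather than the crash bucket.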
diff --git a/related.tex b/related.tex
index 1e4b0eb..811c326 100644
--- a/related.tex
+++ b/related.tex
@@ -4,7 +4,8 @@ The only other work of which we are aware on fuzzing HLS tools is that by Lidbur
Other stages of the FPGA toolchain have been subjected to fuzzing. Herklotz et al.~\cite{verismith} tested several FPGA synthesis tools using randomly generated Verilog programs. Whereas they concentrated on the RTL-to-netlist stage of hardware design, we focus our attention on the earlier C-to-RTL stage.
-Several authors have taken steps toward more rigorously engineered HLS tools that may be more resilient to testing campaigns such as ours. One such example might be the Handel-C compiler by Perna and Woodcock~\cite{perna12_mechan_wire_wise_verif_handel_c_synth} which has been mechanically proven correct, at least in part, using the HOL theorem prover. However, the Handel-C language relies upon the user to perform the parallelisation that is necessary for good hardware performance, and in any case, the tool does not support would require many more checks to avoid data races. Another example is an HLS tool called SPARK~\cite{gupta03_spark}, which contains passes that are mechanically validated during compilation~\cite{chouksey20_verif_sched_condit_behav_high_level_synth}, such as the scheduling algorithm. Unfortunately the tools are not readily available yet to test properly, or do not support C as input directly. Finally, the HLS tool Catapult C~\cite{mentor20_catap_high_level_synth} is designed only to produce an output netlist if it can mechanically prove it equivalent to the input program. It should therefore never produce wrong RTL. Future work will address testing Catapult-C as well, however, as Vivado HLS, LegUp HLS and Intel HLS are more prevalent these were prioritised. \JW{Obvious reader question at this point: why not test that claim by giving our Csmith test cases to Catapult C too? Can we address that here? No worries if not; but shall we try and do that after the deadline anyway?}\YH{Yes, definitely, it would be great to get an idea of how Catapult C performs, and I currently have it installed already. I have added a small sentence for that now, but let me know if I should mention this in the conclusion instead though. }
+Several authors have taken steps toward more rigorously engineered HLS tools that may be more resilient to testing campaigns such as ours. One such example might be the Handel-C compiler by Perna and Woodcock~\cite{perna12_mechan_wire_wise_verif_handel_c_synth} which has been mechanically proven correct, at least in part, using the HOL theorem prover. However, the Handel-C language relies upon the user to perform the parallelisation that is necessary for good hardware performance, and in any case, the tool does not support C as input directly. %would require many more checks to avoid data races.
+Another example is an HLS tool called SPARK~\cite{gupta03_spark}, which contains passes that are mechanically validated during compilation~\cite{chouksey20_verif_sched_condit_behav_high_level_synth}, such as the scheduling algorithm. Unfortunately, these tools are either not yet readily available for proper testing or do not support C as input directly. Finally, the HLS tool Catapult C~\cite{mentor20_catap_high_level_synth} is designed only to produce an output netlist if it can mechanically prove it equivalent to the input program. It should therefore never produce wrong RTL. Future work will address testing Catapult C as well; however, as Vivado HLS, LegUp HLS and Intel HLS are more prevalent, these were prioritised. \JW{Obvious reader question at this point: why not test that claim by giving our Csmith test cases to Catapult C too? Can we address that here? No worries if not; but shall we try and do that after the deadline anyway?}\YH{Yes, definitely, it would be great to get an idea of how Catapult C performs, and I currently have it installed already. I have added a small sentence for that now, but let me know if I should mention this in the conclusion instead though. }
%%% Local Variables:
%%% mode: latex