author  Yann Herklotz <git@yannherklotz.com>  2021-02-24 12:36:21 +0000
committer  Yann Herklotz <git@yannherklotz.com>  2021-02-24 12:52:43 +0000
commit  95f15280705a5aee5215666675f666d35f5f43f4 (patch)
tree  51362f8f064fc341cdb1ea7cf96001de41eee2cd
parent  946ce4e23e55227fcf583c56da676f59ea736109 (diff)
Work on adding more position paper arguments
-rw-r--r--  main.tex  12
1 files changed, 9 insertions, 3 deletions
diff --git a/main.tex b/main.tex
index 2089652..650d02b 100644
--- a/main.tex
+++ b/main.tex
@@ -198,11 +198,17 @@ The solution to both of these points is to have a formally verified high-level s
The standard methods for checking the outputs of HLS tools are the following. First, the output could be checked by using a test bench and comparing the outputs against the model. However, this does not provide many guarantees, as many edge cases may never be tested. Second, if testing is not rigorous enough, there has been research on checking that the generated hardware design has the same functionality as the input design, where the focus is on creating translation validators~\cite{pnueli98_trans} to prove the equivalence between the design and the input code, while supporting various optimisations such as scheduling~\cite{kim04_autom_fsmd,karfa06_formal_verif_method_sched_high_synth,chouksey20_verif_sched_condit_behav_high_level_synth} or code motion~\cite{banerjee14_verif_code_motion_techn_using_value_propag,chouksey19_trans_valid_code_motion_trans_invol_loops}. However, these are not perfect solutions either, as there is no guarantee that these proofs really compose with each other. This means that an equivalence checker normally needs to handle all of the translations that the HLS tool might perform, increasing the chances that it will fail to prove equivalence.
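Stated schematically (the notation here is ours, not that of the cited works): if each pass of the HLS tool turns a program $p_{i-1}$ into $p_i$ and a separate validator certifies behavioural refinement for that pass, the end-to-end guarantee
\[
  \mathrm{beh}(p_n) \subseteq \mathrm{beh}(p_{n-1}) \subseteq \dots \subseteq \mathrm{beh}(p_0)
\]
only follows if every validator is sound and all of them agree on the semantics of each intermediate representation, which is precisely the guarantee that is usually missing.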
-\subsection{Construction of a realistic verified HLS tool}
-
The radical solution to this problem is to formally verify the complete tool, writing it in Coq~\cite{coquand86}, an interactive theorem prover. This approach has been shown to be successful: for example, \compcert{}~\cite{leroy09_formal_verif_realis_compil} is a formally verified C compiler written in Coq. The reliability of formal verification in Coq was demonstrated by CSmith~\cite{yang11_findin_under_bugs_c_compil}, a random generator of valid C programs, which found more than 300 bugs in GCC and Clang, whereas only 5 bugs were found in the unverified parts of \compcert{}, prompting the verification of those parts as well. In addition, Lutsig~\cite{loow21_lutsig}, a synthesis tool going from behavioural Verilog to a technology-mapped netlist in Verilog, was recently also proven correct.
-In this respect, we have developed our own verified HLS tool called \vericert{}~\cite{herklotz21_formal_verif_high_level_synth} based on \compcert{}, adding a Verilog back end to the compiler. We currently have a completely verified translation from a large subset of C into Verilog, which fullfills the minimum requirements for an HLS tool.
+\subsection{Will this not take too long?}
+
+One might argue that developing a formally verified tool in a theorem prover and proving correctness theorems about it would be too tedious and take too long. However, we have already developed our own verified HLS tool called \vericert{}~\cite{herklotz21_formal_verif_high_level_synth} based on \compcert{}, adding a Verilog back end to the compiler. We currently have a completely verified translation from a large subset of C into Verilog, which fulfils the minimum requirements for an HLS tool. We found that proving a translation correct normally takes $5\times$ to $10\times$ longer than writing the algorithm itself.
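+A \compcert{}-style correctness theorem for such a translation has, schematically, the following shape (a sketch in our notation rather than the exact statement proven in \vericert{}):
+\[
+  \forall\, C\; D,\quad \mathrm{HLS}(C) = \mathrm{OK}(D) \;\wedge\; \mathrm{safe}(C) \;\Longrightarrow\; \mathrm{beh}(D) \subseteq \mathrm{beh}(C),
+\]
+that is, whenever compilation succeeds and the source program $C$ does not exhibit undefined behaviour, every behaviour of the generated design $D$ is a behaviour allowed by $C$. Proving one such theorem per translation pass, and composing them transitively, accounts for much of the extra proof effort mentioned above.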
+
+This cost could be seen as worthwhile, as proving the correctness of the HLS tool establishes the absence of bugs with respect to the language semantics, meaning much less time is spent fixing bugs. In addition, even though some testing would still be required to check that the tool performs the intended translations and generates performant designs, these test benches would not have to cover every single edge case.
+
+\subsection{Current testing techniques are enough}
+
+Another argument against building a verified HLS tool is that current testing techniques for these tools are already good enough. There are many internal test benches that check the correctness of the HLS tool, as well as equivalence checkers to ensure that the final design behaves the same as the input design.
\section{Guarantees of trusted code}