author     Yann Herklotz <git@yannherklotz.com>   2021-02-25 09:50:59 +0000
committer  Yann Herklotz <git@yannherklotz.com>   2021-02-25 09:50:59 +0000
commit     29d537e1c99f862ecf69728843e334ec997d08c2 (patch)
tree       ec97c5096cdbc224a1c7fadd0c734740789f9474
parent     facb9cd3cc3af18ac9061dab5af59fa1858cae28 (diff)
More changes
-rw-r--r--  main.tex  22
1 files changed, 11 insertions, 11 deletions
diff --git a/main.tex b/main.tex
index f4a93f3..e4d81c6 100644
--- a/main.tex
+++ b/main.tex
@@ -196,19 +196,13 @@ The solution to both of these points is to have a formally verified high-level s
\section{Arguments against formalised HLS}
-\subsection{People should not be designing hardware in C to begin with.}
+\paragraph{People should not be designing hardware in C to begin with.}
In fact, formally verifying HLS of C is arguably the wrong approach, as C should not be used to design hardware in the first place, let alone hardware that needs to be reliable. Instead, there have been many efforts to formally verify the translation of high-level hardware description languages like Bluespec with K\^{o}i\-ka~\cite{bourgeat20_essen_blues}, to formalise the synthesis of Verilog into technology-mapped netlists with Lutsig~\cite{loow21_lutsig}, and to formalise circuit design in Coq itself to ease design verification~\cite{choi17_kami,singh_silver_oak}.
-However,
+However, verifying HLS is also important. Not only is HLS becoming more popular, as it requires much less design effort to produce new hardware~\cite{lahti19_are_we_there_yet}, but much of that convenience comes from the easy behavioural testing that HLS allows, which is used to ensure the correct functionality of the design. This testing assumes that the HLS tool itself is correct, so for it to remain meaningful the translation performed by the tool needs to be trustworthy as well.
-\subsection{HLS applications don't require the levels of reliability that a formally verified compiler affords.}
-
-One might argue that developing a formally verified tool in a theorem prover and proving correctness theorems about it might be too tedious and take too long. However, we have already been developed our own verified HLS tool called \vericert{}~\cite{herklotz21_formal_verif_high_level_synth} based on \compcert{}, adding a Verilog back end to the compiler. We currently have a completely verified translation from a large subset of C into Verilog, which fulfills the minimum requirements for an HLS tool. We found that it normally takes $5\times$ or $10\times$ longer to prove a translation correct compared to writing the algorithm.
-
-However, this could be seen as being beneficial, as proving the correctness of the HLS tool proves the absence of any bugs according to the language semantics, meaning much less time spent on fixing bugs. In addition to that, even though simple testing would still be required to check the tool does indeed perform simple translations, and generates performant designs, these test benches would not have to cover every single edge case. Finally, verification also forces the algorithm to deal with many different edge cases that may be hard to identify normally, and may even allow for more optimisations as one can be certain about assumptions one would usually have to make.
-
-\subsection{Existing approaches for testing or formally verifying hardware designs are sufficient for ensuring reliability.}
+\paragraph{Existing approaches for testing or formally verifying hardware designs are sufficient for ensuring reliability.}
There are two standard ways of checking the output of an HLS tool. First, the output can be checked with a test bench that compares the design's outputs against a software model. However, this provides few guarantees, as many edge cases may never be exercised. Second, if testing is not rigorous enough, there has been research on checking that the generated hardware design has the same functionality as the input design, where the focus is on creating translation validators~\cite{pnueli98_trans} to prove the equivalence between the design and the input code, while supporting various optimisations such as scheduling~\cite{kim04_autom_fsmd,karfa06_formal_verif_method_sched_high_synth,chouksey20_verif_sched_condit_behav_high_level_synth} or code motion~\cite{banerjee14_verif_code_motion_techn_using_value_propag,chouksey19_trans_valid_code_motion_trans_invol_loops}. However, these are not perfect solutions either, as there is no guarantee that the individual equivalence proofs compose with each other. This means that a single equivalence checker normally needs to handle all of the translations that the HLS tool might perform, which increases the chance that it can no longer prove equivalence at all.
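As a minimal sketch of the first approach (the function names below are placeholders rather than any particular tool's API, and the design under test is a stand-in so that the sketch is self-contained), a test bench drives the software model and a simulation of the generated design with random stimuli and compares their results, which is precisely why edge cases that are never generated slip through:

\begin{verbatim}
/* Sketch of a differential test bench.  In a real flow
 * simulated_design() would call into an RTL simulator; here it is a
 * stand-in so the sketch compiles on its own. */
#include <stdio.h>
#include <stdlib.h>

static int reference_model(int x)  { return x * x + 1; } /* software model   */
static int simulated_design(int x) { return x * x + 1; } /* stand-in for the
                                                            simulated design */

int main(void) {
    for (int i = 0; i < 100000; i++) {
        int x = rand() % 1024;                     /* random stimuli only */
        if (reference_model(x) != simulated_design(x)) {
            printf("mismatch on input %d\n", x);
            return 1;
        }
    }
    /* Passing says nothing about inputs that were never generated. */
    return 0;
}
\end{verbatim}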
@@ -218,7 +212,13 @@ Another argument against building a verified HLS tool is that current testing te
However, \citet{du21_fuzzin_high_level_synth_tools}, for example, show that on average 2.5\% of randomly generated C programs, tailored to the specific HLS tool, result in incorrect designs. This means that such test cases were not present in the internal test benches used to test the tools. In addition, as the complexity of HLS designs increases, it may not always be possible to run an equivalence checker on the whole design, as the state space might be too large.
-\subsection{Any HLS tool that is simple enough for formal verification to be feasible won't produce sufficiently optimised designs to be useful.}
+\paragraph{HLS applications don't require the levels of reliability that a formally verified compiler affords.}
+
+One might argue that developing a formally verified tool in a theorem prover and proving correctness theorems about it is too tedious and takes too long. However, we have already developed our own verified HLS tool called \vericert{}~\cite{herklotz21_formal_verif_high_level_synth}, which is based on \compcert{} and adds a Verilog back end to the compiler. We currently have a completely verified translation from a large subset of C into Verilog, which fulfils the minimum requirements for an HLS tool. We found that it normally takes $5\times$ to $10\times$ longer to prove a translation correct than to write the algorithm itself.
+
+However, this cost can be seen as beneficial: proving the HLS tool correct establishes the absence of bugs with respect to the language semantics, meaning much less time is spent fixing bugs. In addition, even though simple testing is still required to check that the tool performs the intended translations and generates performant designs, these test benches do not have to cover every single edge case. Finally, verification also forces the algorithm to deal with many edge cases that would otherwise be hard to identify, and it may even allow for more optimisations, as one can be certain about assumptions that would normally have to be made conservatively.
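To give a purely illustrative sense of the kind of C that such a subset typically covers (this example is not taken from \vericert{}'s benchmarks, and the exact subset accepted by any given tool should be checked against its documentation), a synthesisable kernel would use statically sized arrays, bounded loops and integer arithmetic:

\begin{verbatim}
/* Illustrative HLS input only: no recursion, no dynamic allocation,
 * a statically bounded loop, and integer arithmetic on a local array. */
int main(void) {
    int arr[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int sum = 0;
    for (int i = 0; i < 8; i++)
        sum += arr[i] * arr[i];   /* multiply-accumulate kernel */
    return sum;                   /* observed as the design's result */
}
\end{verbatim}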
+
+\paragraph{Any HLS tool that is simple enough for formal verification to be feasible won't produce sufficiently optimised designs to be useful.}
Another concern might be that a verified HLS tool would not be performant enough to be usable in practice. If that were the case, the verification effort would be wasted, as the tool itself would not be used.
@@ -230,7 +230,7 @@ There are many optimisations that need to be added to \vericert{} to turn it int
The main idea used to prove the scheduling correct is to split it into multiple stages. The first stage of the translation turns basic blocks into a parallel basic block representation, whereas the second
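The following sketch is purely illustrative (it is not an excerpt from the scheduler); it shows the kind of freedom that a parallel basic block representation makes explicit, namely that operations without data dependencies may be issued in the same clock cycle:

\begin{verbatim}
/* Illustration of scheduling only; the cycle numbers are hypothetical. */
int block(int a, int b, int c, int d, int e, int f) {
    int x = a + b;   /* unscheduled: cycle 1 */
    int y = c + d;   /* unscheduled: cycle 2 */
    int z = e * f;   /* unscheduled: cycle 3 */
    /* x, y and z do not depend on each other, so a scheduler may place
     * all three operations in a single cycle; the parallel basic block
     * representation records exactly this grouping. */
    return x + y + z;
}
\end{verbatim}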
-\subsection{Even a formally verified HLS tool can't give absolute guarantees about the hardware designs it produces.}
+\paragraph{Even a formally verified HLS tool can't give absolute guarantees about the hardware designs it produces.}
It is true that a verified tool may still fail at compile time, in which case none of the correctness proofs apply, as no output is produced. However, this is only a matter of putting more engineering work into the tool to make it reliable. If a test bench is available, this property is also quite simple to check, as the tool only has to be run on randomly generated inputs without ever having to execute the output.
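A minimal sketch of that check, assuming a hypothetical command-line interface (the \texttt{./hls-tool} name and its flags below are placeholders, not \vericert{}'s actual interface), inspects only the compiler's exit status and never simulates the generated Verilog:

\begin{verbatim}
/* Sketch: count how often compilation itself fails on a set of
 * generated inputs.  "./hls-tool" and its flags are placeholders. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int failures = 0;
    for (int i = 0; i < 1000; i++) {
        char cmd[128];
        /* test_<i>.c is assumed to come from a C program generator. */
        snprintf(cmd, sizeof cmd, "./hls-tool test_%d.c -o test_%d.v", i, i);
        if (system(cmd) != 0)       /* non-zero exit: compilation failed */
            failures++;
    }
    printf("%d of 1000 inputs were rejected at compile time\n", failures);
    return 0;
}
\end{verbatim}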