author    Yann Herklotz <git@yannherklotz.com> 2021-02-24 19:32:29 +0000
committer Yann Herklotz <git@yannherklotz.com> 2021-02-24 19:32:29 +0000
commit    153d42af9c630d34e39efc88acbe1ee56fd89fa8 (patch)
tree      e635ccef30d03e012c44c7477b159d5ceecf49b2
parent    3890c6e036dfbc12018341edc99c537dcd342ca5 (diff)
Move code around
-rw-r--r-- main.tex | 30
1 file changed, 14 insertions(+), 16 deletions(-)
diff --git a/main.tex b/main.tex
index fc4a0f8..a19c0fd 100644
--- a/main.tex
+++ b/main.tex
@@ -194,11 +194,11 @@ It is often assumed that because current HLS tools should transform the input sp
The solution to both of these points is a formally verified high-level synthesis tool. It not only guarantees that the output is correct, but also brings a formal specification of the input and output language semantics. These are the only parts of the compiler that need to be trusted, and if they are well specified, then the behaviour of the resulting design can be fully trusted. Moreover, if the input language semantics are taken from a tool that is already widely trusted, then there should not be any unexpected behaviour; the resulting design will either behave exactly like the input specification, or the translation will fail early at compile time.
-\section{How to prove an HLS tool correct}
+\section{Arguments for correct HLS}
-The standard methods for checking the outputs of the HLS tools are the following. First, the output could be checked by using a test bench and checking the outputs against the model. However, this does not provide many guarantees, as many edge cases may never be tested. Second, if testing is not rigorous enough, there has been research on checking that the generated hardware design has the same functionality as the input design, where the focus is on creating translation validators~\cite{pnueli98_trans} to prove the equivalence between the design and input code, while supporting various optimisations such as scheduling~\cite{kim04_autom_fsmd,karfa06_formal_verif_method_sched_high_synth,chouksey20_verif_sched_condit_behav_high_level_synth} or code motion~\cite{banerjee14_verif_code_motion_techn_using_value_propag,chouksey19_trans_valid_code_motion_trans_invol_loops}. However, these aren't perfect solutions either, as there is no guarantee that these proofs really compose with each other. This means that an equivalence checker normally needs to work for the all of the translations that the HLS tool might perform, increasing the chances that it cannot check for equivalence anymore.
+\subsection{Do HLS applications require reliability?}
-The radical solution to this problem is to formally verify the complete tool, writing the complete tool in Coq~\cite{coquand86}, an interactive theorem prover. This has been proven to be successful, for example, \compcert{}~\cite{leroy09_formal_verif_realis_compil} is a formally verified C compiler written in Coq. The reliability of formal verification in Coq was demonstrated by CSmith~\cite{yang11_findin_under_bugs_c_compil}, a random C, valid C generator, which found more than 300 bugs in GCC and Clang, whereas only 5 bugs were found in the unverified parts of \compcert{}, which prompted the verification of those parts as well. In addition to that, recently Lutsig~\cite{loow21_lutsig}, a synthesis tool going from behavioural Verilog to a technology mapped netlist in Verilog, was also proven correct.
+One might argue
\subsection{Will this not take too long?}
@@ -208,6 +208,10 @@ However, this could be seen as being beneficial, as proving the correctness of t
\subsection{Current testing techniques are enough}
+The standard methods for checking the outputs of HLS tools are the following. First, the output can be checked by running a test bench and comparing its outputs against the model. However, this does not provide many guarantees, as many edge cases may never be exercised. Second, if testing is not rigorous enough, there has been research on checking that the generated hardware design has the same functionality as the input design, where the focus is on creating translation validators~\cite{pnueli98_trans} to prove the equivalence between the design and the input code, while supporting various optimisations such as scheduling~\cite{kim04_autom_fsmd,karfa06_formal_verif_method_sched_high_synth,chouksey20_verif_sched_condit_behav_high_level_synth} or code motion~\cite{banerjee14_verif_code_motion_techn_using_value_propag,chouksey19_trans_valid_code_motion_trans_invol_loops}. However, these are not perfect solutions either, as there is no guarantee that these proofs compose with each other. This means that an equivalence checker normally needs to handle all of the translations that the HLS tool might perform, increasing the chances that it can no longer check for equivalence.
+
+The radical solution to this problem is to formally verify the complete tool by writing it in Coq~\cite{coquand86}, an interactive theorem prover. This approach has been shown to be successful: for example, \compcert{}~\cite{leroy09_formal_verif_realis_compil} is a formally verified C compiler written in Coq. The reliability of formal verification in Coq was demonstrated by Csmith~\cite{yang11_findin_under_bugs_c_compil}, a random generator of valid C programs, which found more than 300 bugs in GCC and Clang, whereas only 5 bugs were found in the unverified parts of \compcert{}, prompting the verification of those parts as well. In addition, Lutsig~\cite{loow21_lutsig}, a synthesis tool going from behavioural Verilog to a technology-mapped netlist in Verilog, was recently also proven correct.
+
Another argument against building a verified HLS tool is that current testing techniques for these tools are good enough. There are many internal test benches that ensure the correctness of the HLS tool, as well as equivalence checkers to ensure that the final design behaves the same as the input.
However, \citet{du21_fuzzin_high_level_synth_tools}, for example, show that on average 2.5\% of randomly generated C programs, tailored to the specific HLS tool, end up as incorrect designs. This means that such test cases were not present in the internal test benches used to test the tools. Moreover, with the increasing complexity of HLS designs, it may not always be possible to run an equivalence checker on the whole design, as the state space might be too large.
@@ -216,7 +220,13 @@ However, \citet{du21_fuzzin_high_level_synth_tools}, for example, show that on a
Another concern might be that a verified HLS tool might not be performant enough to be usable in practice. If that is the case, then the verification effort could be seen as wasted, since the tool would not be used.
-First of all, performing comparisons between the fully verified bits of \vericert{} and \legup{}~\cite{canis11_legup},
+First of all, performing comparisons between the fully verified parts of \vericert{} and \legup{}~\cite{canis11_legup}, we find that
+
+Finally, it is interesting to compare the performance of \vericert{} against existing HLS tools, to see how correctness guarantees affect the size and speed of the final hardware. This was done on a common compiler benchmark suite called PolyBench/C~\cite{polybench}.
+
+There are many optimisations that need to be added to \vericert{} to turn it into a viable and competitive HLS tool. The most important addition is a good scheduling implementation that supports operation chaining and properly pipelines operators. Our main focus is on implementing scheduling based on systems of difference constraints~\cite{cong06_sdc}, which is the same algorithm that \legup{} uses. We have currently implemented the scheduling algorithm, as well as a simple verifier for the translation, but are still missing the semantic preservation proofs that the verifier should produce.
+
+The main idea used to prove the scheduling correct is to split it into multiple stages. The first stage of the translation turns basic blocks into a parallel basic block representation, whereas the second
\subsection{A verified tool can still be flaky and fail}
@@ -224,8 +234,6 @@ It is true that a verified tool is still allowed to fail at compilation time, me
In addition to that, specifically for an HLS tool taking C as input, undefined behaviour allows the HLS tool to behave in any way it wishes. This becomes even more important when passing C code to a verified HLS tool, because if the input is not free of undefined behaviour, then none of the proofs will hold.
-\section{Guarantees of trusted code}
-
There are a couple of theorems one can prove about the HLS tool once it is written in Coq. The first, and most important, is that transforming the C code into Verilog is semantics-preserving, meaning the translated code will behave the same as the C code. The following backward simulation, as initially presented in \compcert{}, states the main property that we want to show:
\begin{equation*}
@@ -238,16 +246,6 @@ However, there are also assumptions that have to be made about the tool. In the
Trusting the Verilog semantics is not that easy, since it is not quite clear which semantics to take. Verilog can either be simulated or synthesised into hardware, and has different semantics based on how it is used. We therefore use the existing Verilog semantics by \citet{loow19_proof_trans_veril_devel_hol}, which are designed to model only a small part of the language, while ensuring that the behaviour for synthesis and simulation stays the same.
-\section{Performance of such a tool}
-
-Finally, it is interesting to compare the performance of \vericert{} compared to existing HLS tools, to see how correctness guarantees affect the size and speed of the final hardware. This was done on a common compiler benchmark called PolyBench/C~\cite{polybench}.
-
-\section{Improvements to \vericert{}}
-
-There are many optimisations that need to be added to \vericert{} to turn it into a viable and competitive HLS tool. First of all, the most important addition is a good scheduling implementation, which supports operation chaining and properly pipelining operators. Our main focus is trying to implement scheduling based on system of difference constraint~\cite{cong06_sdc}, which is the same algorithm which \legup{} uses. We have currently implemented the scheduling algorithm, as well as a simple verifier for the translation, but are missing the semantic preservation proofs that the verifier should produce.
-
-The main idea used to prove the scheduling correct, is to split it up into multiple stages. The first stage of the translation will turn basic blocks into a parallel basic block representation, whereas the second
-
\section{Conclusion}
In conclusion, we have shown that a verified HLS tool could achieve similar performance to an existing unverified HLS tool, while guaranteeing that the output is always correct.