authorYann Herklotz <git@yannherklotz.com>2021-02-25 08:40:22 +0000
committerYann Herklotz <git@yannherklotz.com>2021-02-25 08:40:22 +0000
commit293dc6742022713d5275e9edbb876c168d272039 (patch)
tree83692821cfd33c6abc7de515e276baa8027743d8
parent153d42af9c630d34e39efc88acbe1ee56fd89fa8 (diff)
downloadlatte21_hlstpc-293dc6742022713d5275e9edbb876c168d272039.tar.gz
latte21_hlstpc-293dc6742022713d5275e9edbb876c168d272039.zip
Update the titles
-rw-r--r--	main.tex	14
-rw-r--r--	references.bib	16
2 files changed, 23 insertions, 7 deletions
diff --git a/main.tex b/main.tex
index a19c0fd..5363671 100644
--- a/main.tex
+++ b/main.tex
@@ -194,19 +194,19 @@ It is often assumed that because current HLS tools should transform the input sp
The solution to both of these points is to have a formally verified high-level synthesis tool. It not only guarantees that the output is correct, but also comes with a formal specification of the input and output language semantics. These are the only parts of the compiler that need to be trusted, and if they are well-specified, then the behaviour of the resulting design can be fully trusted. Moreover, if the semantics of the input language are taken from a tool that is already widely trusted, then there should not be any surprises: the resulting design will either behave exactly like the input specification, or the translation will fail early at compile time.
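To make this guarantee concrete, the following is a minimal sketch of the shape such a correctness theorem could take, stated over a hypothetical translation function $\mathit{hls}$; the notation is illustrative rather than the exact statement proven by any particular tool:
\[
\forall\, C, V, B.\quad \mathit{hls}(C) = \mathtt{OK}(V) \;\land\; C \Downarrow B \;\Rightarrow\; V \Downarrow B
\]
That is, whenever compilation succeeds and the input program $C$ has observable behaviour $B$ under the input language semantics, the generated design $V$ has the same behaviour $B$ under the output language semantics.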
-\section{Arguments for correct HLS}
+\section{Arguments against formalised HLS}
-\subsection{Do HLS applications require reliability?}
+\paragraph{People should not be designing hardware in C to begin with.}
-One might argue
+One might argue that hardware designs should not be written in C in the first place, and that it is more important to focus formalisation efforts on languages closer to hardware. For example, there have been efforts to formalise a Bluespec-like language in K\^{o}ika~\cite{bourgeat20_essen_blues}, which is much closer to a high-level hardware description language.
-\subsection{Will this not take too long?}
+\paragraph{HLS applications don't require the levels of reliability that a formally verified compiler affords.}
One might argue that developing a formally verified tool in a theorem prover and proving correctness theorems about it is too tedious and takes too long. However, we have already developed our own verified HLS tool called \vericert{}~\cite{herklotz21_formal_verif_high_level_synth} based on \compcert{}, adding a Verilog back end to the compiler. We currently have a completely verified translation from a large subset of C into Verilog, which fulfils the minimum requirements for an HLS tool. We found that proving a translation correct normally takes around $5\times$ to $10\times$ longer than writing the translation itself.
However, this could be seen as beneficial: proving the correctness of the HLS tool establishes the absence of any bugs with respect to the language semantics, meaning much less time is spent fixing bugs. In addition, even though simple testing would still be required to check that the tool performs the expected translations and generates performant designs, these test benches would not have to cover every single edge case. Finally, verification also forces the algorithm to deal with many edge cases that may otherwise be hard to identify, and may even enable more optimisations, as one can be certain about assumptions that would usually have to be made conservatively.
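As a rough illustration of where this proof effort goes, the translation in a \compcert{}-based tool decomposes into a chain of passes, each proven correct by its own simulation argument; the intermediate languages named below follow the general shape of the \vericert{} pipeline, but are indicative only:
\[
C \;\longrightarrow\; \cdots \;\longrightarrow\; \textrm{RTL} \;\longrightarrow\; \textrm{HTL} \;\longrightarrow\; \textrm{Verilog}
\]
Because each arrow preserves behaviour and the per-pass simulation proofs compose transitively, adding a new optimisation only requires one additional local proof rather than re-verifying the whole compiler.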
-\subsection{Current testing techniques are enough}
+\paragraph{Existing approaches for testing or formally verifying hardware designs are sufficient for ensuring reliability.}
The standard methods for checking the outputs of HLS tools are the following. First, the output can be checked against the model using a test bench; however, this provides few guarantees, as many edge cases may never be exercised. Second, where testing is not rigorous enough, there has been research on checking that the generated hardware design has the same functionality as the input design, with the focus on creating translation validators~\cite{pnueli98_trans} to prove the equivalence between the design and the input code, while supporting various optimisations such as scheduling~\cite{kim04_autom_fsmd,karfa06_formal_verif_method_sched_high_synth,chouksey20_verif_sched_condit_behav_high_level_synth} or code motion~\cite{banerjee14_verif_code_motion_techn_using_value_propag,chouksey19_trans_valid_code_motion_trans_invol_loops}. However, these are not perfect solutions either, as there is no guarantee that the individual proofs compose with each other. This means that an equivalence checker normally needs to handle all of the translations that the HLS tool might perform, increasing the chance that it can no longer establish equivalence.
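The composition problem can be made precise with a small sketch; the refinement notation here is illustrative. Suppose one validator shows that an intermediate design $I$ refines the input code $C$, and a second shows that the final design $V$ refines $I$; writing $\sqsupseteq$ for refinement, the desired end-to-end claim is
\[
C \sqsupseteq I \;\land\; I \sqsupseteq V \;\Rightarrow\; C \sqsupseteq V,
\]
which only holds if both validators interpret $I$ under one shared formal semantics. If each checker is instead built around its own notion of equivalence, the two results need not compose, and no end-to-end guarantee follows.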
@@ -216,7 +216,7 @@ Another argument against building a verified HLS tool is that current testing te
However, \citet{du21_fuzzin_high_level_synth_tools}, for example, show that on average 2.5\% of randomly generated C programs, tailored to the specific HLS tool, result in incorrect designs. This means that such test cases were not present in the internal test benches used to test the tools. In addition, with the increasing complexity of HLS designs, it may not always be possible to run an equivalence checker on the whole design, as the state space might be too large.
-\subsection{Will the generated designs be fast enough?}
+\paragraph{Any HLS tool that is simple enough for formal verification to be feasible won't produce sufficiently optimised designs to be useful.}
Another concern might be that a verified HLS tool would not produce designs that are performant enough to be usable in practice. If that were the case, the verification effort could be seen as wasted, as the tool would never be adopted.
@@ -228,7 +228,7 @@ There are many optimisations that need to be added to \vericert{} to turn it int
The main idea used to prove the scheduling correct is to split it up into multiple stages. The first stage of the translation turns basic blocks into a parallel basic block representation, whereas the second stage then performs the scheduling itself on this parallel representation.
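Diagrammatically, the staged decomposition could look as follows (the stage names are only a sketch of the description above):
\[
\textrm{basic blocks} \;\xrightarrow{\;\textrm{stage 1}\;}\; \textrm{parallel basic blocks} \;\xrightarrow{\;\textrm{stage 2}\;}\; \textrm{scheduled design}
\]
Each stage can then be proven or validated independently against a single intermediate semantics, and the correctness of the overall scheduling pass follows by composing the two stage proofs.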
-\subsection{A Verified tool can still be flaky and fail}
+\paragraph{Even a formally verified HLS tool can't give absolute guarantees about the hardware designs it produces.}
It is true that a verified tool is still allowed to fail at compile time, in which case none of the correctness proofs apply, as no output was produced. However, making the tool robust against such failures is only a matter of further engineering work. If a test bench is available, this property is also quite simple to check, as the tool only has to be run on random inputs, without even having to execute the output.
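This limitation is visible directly in the shape of the correctness theorem sketched earlier: the guarantee is conditional on compilation returning $\mathtt{OK}(V)$. Checking this reliability property on a hypothetical test suite $\mathcal{T}$ therefore reduces to
\[
\forall\, C \in \mathcal{T}.\quad \mathit{hls}(C) \neq \mathtt{Error},
\]
which requires running only the compiler itself, never simulating or executing the generated designs.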
diff --git a/references.bib b/references.bib
index b05f07f..9b5fd27 100644
--- a/references.bib
+++ b/references.bib
@@ -244,3 +244,19 @@
location = {Monterey, CA, USA},
series = {FPGA '11}
}
+
+@inproceedings{bourgeat20_essen_blues,
+ author = {Bourgeat, Thomas and Pit-Claudel, Cl\'{e}ment and Chlipala, Adam and Arvind},
+ title = {The Essence of Bluespec: A Core Language for Rule-Based Hardware Design},
+ booktitle = {Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation},
+ year = 2020,
+  pages         = {243--257},
+ doi = {10.1145/3385412.3385965},
+ url = {https://doi.org/10.1145/3385412.3385965},
+ address = {New York, NY, USA},
+ isbn = 9781450376136,
+ location = {London, UK},
+ numpages = 15,
+ publisher = {Association for Computing Machinery},
+ series = {PLDI 2020}
+}