Diffstat (limited to 'introduction.tex')
-rw-r--r--  introduction.tex  20
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/introduction.tex b/introduction.tex
index c597e01..35d5ce7 100644
--- a/introduction.tex
+++ b/introduction.tex
@@ -6,7 +6,9 @@
\JW{A few high-level comments: \begin{enumerate} \item Create more tension from the start by making the reader doubt whether existing HLS tools are trustworthy. \item The intro currently draws quite a bit of motivation from Lidbury et al. 2015, but we should also now lean on our FPGA submission too. \item I wonder whether the paragraph `To mitigate the problems...' should be demoted to a `related work' discussion (perhaps as a subsection towards the end of the introduction). It outlines (and nicely dismisses) some existing attempts to tackle the problem, which is certainly useful motivation for your work, especially for readers already familiar with HLS, but I feel that it's not really on the critical path for understanding the paper.\end{enumerate}}
-One current approach to writing energy-efficient and high-throughput applications is to use application-specific hardware, instead of relying on a general-purpose CPU.\@ However, custom hardware designs come at the cost of having to design and produce them, which can be a tedious and error-prone process using hardware description languages (HDL) such as Verilog. Especially with the size of hardware designs growing over the years, it can become difficult to verify that the hardware design behaves in the expected way, as simulation of HDLs can be quite inefficient. Furthermore, the algorithms that are being accelerated in hardware often already have a software implementation, meaning they have to be reimplemented efficiently in a hardware description language which can be time-consuming.
+One current approach to writing energy-efficient and high-throughput applications is to use application-specific hardware instead of relying on a general-purpose CPU\@. However, custom hardware comes at the cost of having to be designed and produced, which can be a tedious and error-prone process when using hardware description languages (HDLs) such as Verilog. High-level synthesis (HLS) is becoming a viable alternative to this manual design process.
+
+Especially as hardware designs have grown in size over the years, it has become difficult to verify that a design behaves in the expected way, because simulation of HDLs can be quite inefficient. Furthermore, the algorithms being accelerated in hardware often already have a software implementation, meaning they have to be reimplemented efficiently in a hardware description language, which can be time-consuming.
%% Definition and benefits of HLS
One possible alternative to the tedious design process of custom hardware is to use high-level synthesis (HLS), which is the process of generating custom hardware, represented in an HDL, from a behavioural description, often written in a subset of C. This raises the level of abstraction, because the description of the algorithm in C is inherently untimed, meaning actions do not have to be scheduled into clock cycles manually. The higher level of abstraction makes it easier to reason about the algorithms and therefore also facilitates maintaining them. As a result, the time needed to design the hardware is reduced significantly, especially if a software description of the algorithm already exists, because there is no need to redesign it at a lower level, directly in hardware. In addition, using HLS to design the hardware has the benefit of making functional verification of the design much simpler and more efficient than at the HDL stage, since the entire software ecosystem can be mobilised for this.
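+
+To make the untimed nature of the input concrete, the following hypothetical fragment (not taken from any benchmark in this paper) is the kind of behavioural description an HLS tool accepts; the tool decides how the loads, additions and loop control are scheduled into clock cycles and emits a corresponding Verilog module. The listing is typeset with a \texttt{lstlisting} environment, assuming the \texttt{listings} package is loaded.
+\begin{lstlisting}[language=C]
+/* Hypothetical HLS input: an untimed accumulation kernel.     */
+/* No clock, reset or cycle-by-cycle timing appears here; the  */
+/* HLS tool introduces all of that when generating the HDL.    */
+int accumulate(const int *xs, int n) {
+  int sum = 0;
+  for (int i = 0; i < n; i++) {
+    sum += xs[i];
+  }
+  return sum;
+}
+\end{lstlisting}
+Crucially, such a function can be compiled, run and debugged with ordinary C tooling before any hardware is generated, which is exactly the productivity and verification benefit described above.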
@@ -20,13 +22,6 @@ However, the fact that the behaviour is preserved after HLS cannot be guaranteed
% JW: Another candidate, probably less interesting:
% https://bit.ly/intel-hls-memory-bug
-%% Current work in formal verification of HLS
-%%\NR{This is a good paragraph, but we need to relate it more to this work and why this work is different.}
-%%\NR{Focus on more high-level of "why this work is interesting"? Two key points we want to get across to the reader is that in existing works: validation is neccessary every time a new program is compiled and the verifying algorithm is not verified.}
-%%\NR{Also define words like validation, verifying algorithm (can you use the word ``verifier'',mechanisation)}
-%%\NR{Having said that, keep the text for related work section.}\YH{Added into related works.}
-To mitigate the problems about the unreliability of synthesis tool, it is often required to check the generated hardware for functional correctness. This is commonly done by simulating the design with a large test-bench, however, the guarantees are only as good as the test-bench, meaning if all the inputs were not covered, there is a risk that bugs remain in the untested code. To ensure that the hardware does indeed behave in the same way as the C code, it may therefore be necessary to prove that they are equivalent. Translation validation~\cite{pnueli98_trans} is the main method which is used to prove that the HLS translation was successful, and has been successfully applied to many HLS optimisations~\cite{kim04_autom_fsmd,karfa06_formal_verif_method_sched_high_synth,chouksey20_verif_sched_condit_behav_high_level_synth,banerjee14_verif_code_motion_techn_using_value_propag,chouksey19_trans_valid_code_motion_trans_invol_loops}. However, the main problem is that the validator itself has often not been mechanically proven correct, meaning that the implementation is quite separate from the proof. In addition to that, with large designs it may not be feasible to perform translation validation, as the state space would grow exponentially. \JW{Does this link back to Mentor's Catapult-C, which you mentioned earlier? Does Catapult-C attempt to do translation validation as part of its HLS process? And if so, can you make the point that this effort is largely ineffective because once the design is a reasonable size, the translation validation usually fails anyway?}\YH{TODO: Currently I only have a whitepaper which goes over the translation validation in a very high level, but they do actually mention some flakiness and state that the user would have to manually change the code to fix that. So I think I can actually make that point. I just realised they have a pretty funny diagram of verification $\rightarrow$ differences $\rightarrow$ adjustments $\rightarrow$ ... until it is finally verified.} A mechanically verified HLS tool would remove the need to perform simulation after the synthesis process if one has proven desirable properties about the C code. In addition to that, it would allow for the implementation of optimisation passes which are also proven correct mechanically by translation validation, thereby greatly improving the reliability of these passes.
-
\compcert{}~\cite{leroy06_formal_certif_compil_back_end} is a C compiler that has been written and formally verified in the Coq theorem prover~\cite{bertot04_inter_theor_provin_progr_devel}. Most of the compiler passes in \compcert{} have been proven correct once and for all, meaning that once the compiler is built, the proofs can be erased, as the algorithms have been shown to be correct independently of their input. However, some optimisation passes, such as software pipelining, require translation validation~\cite{tristan08_formal_verif_trans_valid}, in which case the correctness of the pass has to be checked each time the compiler is run. Even in this case, though, the validator itself is proven to accept only code that does indeed behave in the same way as the original.
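+
+The guarantee that \compcert{} provides can be paraphrased as follows, simplifying away the treatment of source programs whose behaviour is undefined: writing $\mathit{Comp}$ for the compiler and $\Downarrow$ for ``has observable behaviour'',
+\[
+  \forall S\; C\; B,\quad \mathit{Comp}(S) = \mathtt{OK}(C) \;\wedge\; C \Downarrow B \;\Longrightarrow\; S \Downarrow B,
+\]
+that is, whenever compilation succeeds, every observable behaviour of the compiled code $C$ is already a permitted behaviour of the source program $S$. A verified HLS tool aims to establish the same kind of statement, with a Verilog design in place of assembly code.
+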
%% Contributions of paper
@@ -50,6 +45,15 @@ The first section will describe the Verilog semantics that were used and extende
%%\NR{A tool-flow diagram here will be useful. I thought you had one previously?}
\NR{Do you think we have a very simple example of a program where wrong Verilog is generated in VHLS or LegUp, but not in \vericert{}?}\YH{The LegUp example actually with the shift might be a good one.}
+\subsection{Existing work}
+
+%% Current work in formal verification of HLS
+%%\NR{This is a good paragraph, but we need to relate it more to this work and why this work is different.}
+%%\NR{Focus on more high-level of "why this work is interesting"? Two key points we want to get across to the reader is that in existing works: validation is neccessary every time a new program is compiled and the verifying algorithm is not verified.}
+%%\NR{Also define words like validation, verifying algorithm (can you use the word ``verifier'',mechanisation)}
+%%\NR{Having said that, keep the text for related work section.}\YH{Added into related works.}
+To mitigate the problems arising from the unreliability of synthesis tools, it is often necessary to check the generated hardware for functional correctness. This is commonly done by simulating the design with a large test bench; however, the guarantees are only as good as the test bench, meaning that if not all inputs are covered, there is a risk that bugs remain in the untested code. To ensure that the hardware does indeed behave in the same way as the C code, it may therefore be necessary to prove that they are equivalent. Translation validation~\cite{pnueli98_trans} is the main method used to prove that an HLS translation was successful, and it has been applied to many HLS optimisations~\cite{kim04_autom_fsmd,karfa06_formal_verif_method_sched_high_synth,chouksey20_verif_sched_condit_behav_high_level_synth,banerjee14_verif_code_motion_techn_using_value_propag,chouksey19_trans_valid_code_motion_trans_invol_loops}. The main problem, however, is that the validator itself has often not been mechanically proven correct, meaning that its implementation is quite separate from its proof. Moreover, for large designs it may not be feasible to perform translation validation at all, as the state space grows exponentially. \JW{Does this link back to Mentor's Catapult-C, which you mentioned earlier? Does Catapult-C attempt to do translation validation as part of its HLS process? And if so, can you make the point that this effort is largely ineffective because once the design is a reasonable size, the translation validation usually fails anyway?}\YH{TODO: Currently I only have a whitepaper which goes over the translation validation in a very high level, but they do actually mention some flakiness and state that the user would have to manually change the code to fix that. So I think I can actually make that point. I just realised they have a pretty funny diagram of verification $\rightarrow$ differences $\rightarrow$ adjustments $\rightarrow$ ... until it is finally verified.} A mechanically verified HLS tool would remove the need to simulate the design after synthesis, provided that the desired properties have already been proven about the C code. Furthermore, it would allow for the implementation of optimisation passes that are themselves proven correct mechanically by translation validation, thereby greatly improving the reliability of these passes.
+
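+To make precise what an unverified validator leaves open, the obligation it would have to satisfy can be stated roughly as follows: writing $\mathit{validate}$ for the checking procedure, $S$ for the source program and $H$ for the generated hardware design (these symbols are illustrative, not notation used later in the paper),
+\[
+  \mathit{validate}(S, H) = \mathtt{true} \;\Longrightarrow\; H \approx S,
+\]
+where $\approx$ stands for whichever notion of behavioural equivalence or refinement the flow targets. Existing HLS flows implement the left-hand side as an algorithm but typically argue the implication only informally; a mechanically verified validator proves the implication once and for all in a theorem prover.
+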
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"