\section{Introduction}

%% Motivation for why HLS might be needed
The current approach to writing energy-efficient and high-throughput applications is to use application-specific hardware, instead of relying on and optimising for a general-purpose CPU\@. This, however, comes at the cost of having to design the custom hardware, which, when using hardware description languages (HDLs) such as Verilog, can be tedious and error-prone. Especially as the size of hardware designs has grown over the years, it has become difficult to verify that a design behaves as expected, because simulation of HDLs is quite inefficient. Furthermore, the algorithms that are accelerated in hardware often already have a software implementation, which then has to be reimplemented efficiently in an HDL, a time-consuming process.

%% Definition and benefits of HLS
One possible solution to these problems is high-level synthesis (HLS), the process of generating custom hardware, expressed in an HDL, from a behavioural description, often written in a subset of C. HLS raises the level of abstraction, because the C description of the algorithm is inherently untimed: operations do not have to be scheduled into clock cycles by the designer. This higher level of abstraction makes the algorithms easier to reason about and therefore also easier to maintain. It also reduces the time taken to design the hardware, especially if a software implementation of the algorithm already exists, because the algorithm does not have to be redesigned at a lower level directly in hardware. Another benefit of using HLS is that functional verification of the design is much simpler and more efficient than at the HDL stage, as the whole software ecosystem can be used for this purpose. Instead of having to run simulations of the hardware, the C code can simply be compiled and executed natively, since the hardware design produced by HLS should have the same behaviour. The abstraction provided by HLS therefore helps in two ways: it improves the productivity of hardware designers and it lowers the barrier to entry of hardware design for software programmers. Both of these audiences stand to benefit from the guarantees provided by verification and by correct-by-construction C-to-Verilog generation.

%% Unreliability of HLS
However, most existing tools cannot guarantee that the behaviour is preserved after HLS,\YH{Mentor's Catapult C can in some cases} meaning behavioural simulation of the hardware design still has to be performed. HLS tools are also known to be quite unreliable; for example, Intel's (formerly Altera's) OpenCL SDK compiler contained too many bugs to even be considered for random testing, as more than 25\% of the test cases failed~\cite{lidbury15_many_core_compil_fuzzin}. In addition, designers often find HLS tools fragile with respect to which language features are supported.\YH{Need citation} As HLS tools are extremely complex and can therefore incorrectly change the behaviour of the design, it is not possible to guarantee that all the properties of the code that were proven in software will also hold for the generated hardware.
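To make this concrete, below is a minimal, purely illustrative sketch (not taken from any particular benchmark or tool) of the kind of untimed behavioural description an HLS tool accepts. Its functional behaviour can be checked simply by compiling and executing it natively; with an unverified HLS tool, however, there is no guarantee that the generated hardware preserves this behaviour.

\begin{verbatim}
#include <stdio.h>

/* Illustrative only: a simple accumulation kernel written as an
 * untimed behavioural description.  An HLS tool would schedule the
 * loop into clock cycles and map it onto registers and an adder. */
int accumulate(const int *xs, int n) {
  int sum = 0;
  for (int i = 0; i < n; i++)
    sum += xs[i];
  return sum;
}

int main(void) {
  int xs[4] = {1, 2, 3, 4};
  /* Native execution acts as the functional reference: the
   * generated hardware should produce the same result (10). */
  printf("%d\n", accumulate(xs, 4));
  return 0;
}
\end{verbatim}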
%% Current work in formal verification of HLS
Therefore, work has been done on proving the equivalence between the generated hardware and the original behavioural description in C. One example of a tool that implements this is Mentor's Catapult~\cite{mentor20_catap_high_level_synth}, which tries to match the states in the register transfer level (RTL) design to states in the original C code after an unverified translation. This technique is called translation validation~\cite{pnueli98_trans}: instead of proving the translation algorithm correct once and for all, a separate checker, the verifier, is run after each translation to show that the input and the output behave in the same way for all possible inputs. Translation validation is quite effective for proving the correctness of complex optimisations such as scheduling~\cite{kim04_autom_fsmd,karfa06_formal_verif_method_sched_high_synth,chouksey20_verif_sched_condit_behav_high_level_synth} or code motion~\cite{banerjee14_verif_code_motion_techn_using_value_propag,chouksey19_trans_valid_code_motion_trans_invol_loops}; however, the validation has to be run every time a new program is synthesised. In addition, the proofs about the verifier are often not mechanised in a theorem prover, nor directly related to the actual implementation, meaning the verifier itself might be wrong and could report false positives or false negatives.

CompCert~\cite{leroy06_formal_certif_compil_back_end} is a C compiler that has been written and formally verified in the Coq theorem prover~\cite{bertot04_inter_theor_provin_progr_devel}. Most of the passes in CompCert have been proven correct directly, meaning that once the compiler is built, the proofs can be erased, as the algorithms have been shown to be correct independently of the input. Some optimisations in CompCert have nevertheless been proven using translation validation~\cite{tristan08_formal_verif_trans_valid}, in which case the check still has to be run every time the compiler is executed. The crucial difference is that, in CompCert, the verifier itself is formally verified in Coq, and is therefore proven only ever to report that the input and output are equivalent if that is actually the case.
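CompCert phrases these correctness results as simulations between the semantics of the source and target programs. As a rough sketch in our own notation (not the exact formulation used in the Coq development), one common form of forward simulation requires a relation $\sim$ between source and target states such that every source step, producing a trace of observable events $t$, is matched by one or more target steps producing the same trace and preserving the relation:
\[
  \forall S_1\, S_1'\, S_2.\;\;
  S_1 \rightarrow_{t} S_1' \;\wedge\; S_1 \sim S_2
  \;\Longrightarrow\;
  \exists S_2'.\;\; S_2 \rightarrow_{t}^{+} S_2' \;\wedge\; S_1' \sim S_2'
\]
Together with suitable conditions on initial and final states, such a simulation implies that the behaviour of the source program is preserved by the target, and the per-pass simulations compose into an end-to-end correctness theorem.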
%% Contributions of paper
In this paper we describe CoqUp, a fully verified high-level synthesis tool that adds a Verilog back end to CompCert and proves that the behaviour of the C code is preserved with respect to existing Verilog semantics. The main contributions of this paper are the following:

\begin{itemize}
\item The first mechanised and formally verified HLS flow.
\item A proof by simulation, mechanised in Coq, between CompCert's intermediate language and Verilog.
\item A description of the Verilog semantics that we integrated into CompCert's model, together with the improvements we made to them.
\item An implementation of these semantics in CompCert that generates correct-by-construction Verilog for all programs in the CHStone benchmark suite, a well-known HLS benchmark.
\end{itemize}

The first section describes the Verilog semantics that were used and extended to fit into CompCert's model. The second section then describes the HLS algorithm, together with its proof. CoqUp is open source and is hosted on GitHub\footnote{https://github.com/ymherklotz/coqup}.

\NR{Other comments:}
\NR{Is both the translator and verifier written in Coq?}
\NR{A tool-flow diagram here will be useful. I thought you had one previously?}
\NR{Do you think we have a very simple example of a program where wrong Verilog is generated in VHLS or LegUp, but not in CoqUp?}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% End: