Diffstat (limited to 'introduction.tex')
-rw-r--r--  introduction.tex  |  35
1 files changed, 15 insertions, 20 deletions
diff --git a/introduction.tex b/introduction.tex
index 2ef9218..4e69cd8 100644
--- a/introduction.tex
+++ b/introduction.tex
@@ -4,28 +4,23 @@
%\JW{A few high-level comments: \begin{enumerate} \item Create more tension from the start by making the reader doubt whether existing HLS tools are trustworthy. \item The intro currently draws quite a bit of motivation from Lidbury et al. 2015, but we should also now lean on our FPGA submission too. \item I wonder whether the paragraph `To mitigate the problems...' should be demoted to a `related work' discussion (perhaps as a subsection towards the end of the introduction). It outlines (and nicely dismisses) some existing attempts to tackle the problem, which is certainly useful motivation for your work, especially for readers already familiar with HLS, but I feel that it's not really on the critical path for understanding the paper.\end{enumerate}}
-\NR{I couldn't have subsections in comments so I have appended my writing to the bottom of this file.}\YH{The original intro is in the archive, we can maybe merge them in the future a bit.}
+%\NR{I couldn't have subsections in comments so I have appended my writing to the bottom of this file.}\YH{The original intro is in the archive, we can maybe merge them in the future a bit.}
+
+\subsection{Can you trust your high-level synthesis tool?}
As latency, throughput and energy efficiency become increasingly important, custom hardware accelerators are being designed for numerous applications~\cite{??}.
-Alas, designing these accelerators come at a cost of additional engineering effort and risk, since it typically involves designing in hardware description languages (HDL), such as Verilog.
-Designing at HDL level is not only arduous and time-consuming but also it limits the expressiveness and abstraction of computation.
-As such, high-level synthesis (HLS) is becoming an attractive alternative, since it offers abstraction by compiling high-level languages like C/C++ to HDL, whilst achieving comparable quality of results relative HDL designs~\cite{bdti,autoesl}.
-Modern HLS tools such as LegUp~\cite{}, Vivado HLS~\cite{}, and Intel i++~\cite{} can produce designs with comparable performance and energy-efficiency to those manually coded in HDL [citation needed], while offering the convenient abstractions and rich ecosystem of software development.
-
-\subsection{Can we trust our high-level synthesis tools?}
-The common starting point for most HLS tool development is to leverage an existing software compiler framework like LLVM, such as LegUp HLS, Vivado HLS and Bambu HLS~\cite{}.
-Re-using software concepts, optimisation and codebases make HLS compiliation as susceptible to bugs as any software compilers.
-These native codebases are large and they perform complex and non-trivial analyses and transformations to translate software into efficient assembly or HDL, in the case of HLS.
-Consequently, HLS tools cannot always guarantee that the generated hardware is equivalent to the input program, undermining any reasoning conducted at the software level.
-Furthermore, HDL design in itself is complex, error-prone and requires careful reasoning and formal verification to ensure behaviour, data-flow and structural correctness~\cite{}.
-The added complexity of HDL design thus increases the likelihood of HLS compilation mismatches.
-% between the software program and the generated hardware.
-% There are valid reasons to doubt that HLS tools actually \emph{do} preserve equivalence, not least because they are large pieces of software that perform complex transformations.
-
-Hence, the premise of this work is: Can we trust these compilers to translate high-level languages like C/C++ to HDL correctly?
-For instance, \citet{lidbury15_many_core_compil_fuzzin} abandoned fuzzing Altera's (now Intel's) OpenCL compiler since it ``either crashed or emitted an internal compiler error'' on a large number of their test inputs.
-Also, Du \emph{et al.}~\cite{?} fuzz tested three commercial HLS tools using Csmith~\cite{yang11_findin_under_bugs_c_compil}, while restricting the C programs to the constructs explicitly supported by the synthesis tools.
-They found that on average 2.5\% of test cases generated a design that did not match the behaviour of the input.
+Alas, designing these accelerators in a hardware description language (HDL) such as Verilog can be a tedious and error-prone process.
+An attractive alternative is high-level synthesis (HLS), in which hardware designs are automatically compiled from software written in a language like C.
+Modern HLS tools such as \legup{}~\cite{canis11_legup}, Vivado HLS~\cite{xilinx20_vivad_high_synth}, Intel i++~\cite{??}, and Bambu HLS~\cite{} can produce designs with comparable performance and energy-efficiency to those manually coded in HDL~\cite{bdti,autoesl}, while offering the convenient abstractions and rich ecosystem of software development.
+
+But existing HLS tools cannot always guarantee that the hardware designs they produce are equivalent to the software they were given.
+This undermines any reasoning conducted at the software level.
+And there are reasons to doubt that HLS tools actually \emph{do} preserve equivalence. First, just like conventional compilers, HLS tools are large pieces of software that perform a series of complex analyses and transformations.
+Second, unlike conventional compilers, HLS tools target HDL, which has superficial syntactic similarities to software languages but a subtly different semantics, making it easy to introduce behavioural mismatches between the output of the HLS tool and its input.
+%Hence, the premise of this work is: Can we trust these compilers to translate high-level languages like C/C++ to HDL correctly?
+For instance, \citet{lidbury15_many_core_compil_fuzzin} had to abandon their attempt to fuzz-test Altera's (now Intel's) OpenCL compiler since it ``either crashed or emitted an internal compiler error'' on so many of their test inputs.
+More recently, \citet{du+20} fuzz-tested three commercial HLS tools using Csmith~\cite{yang11_findin_under_bugs_c_compil}, and despite restricting the generated programs to the C fragment explicitly supported by all the tools, they still found that on average 2.5\% of test cases generated a design that did not match the behaviour of the input.
+%
Meanwhile, Xilinx's Vivado HLS has been shown to apply pipelining optimisations incorrectly\footnote{\url{https://bit.ly/vivado-hls-pipeline-bug}} or to silently generate wrong code should the programmer stray outside the fragment of C that it supports\footnote{\url{https://bit.ly/vivado-hls-pointer-bug}}.
\subsection{Existing verification workarounds}