\section{Evaluation}
\label{sec:evaluation}

Our evaluation is designed to answer the following three research questions.
\begin{description}
\item[RQ1] How fast is the hardware generated by \vericert{}, and how does this compare to existing HLS tools?
\item[RQ2] How area-efficient is the hardware generated by \vericert{}, and how does this compare to existing HLS tools?
\item[RQ3] How long does \vericert{} take to produce hardware?
\end{description}

\subsection{Experimental Setup}

\paragraph{Choice of HLS tool for comparison.} We compare \vericert{} against \legup{} 5.1 because it is open-source and hence easily accessible, but still produces hardware ``of comparable quality to a commercial high-level synthesis tool''~\cite{canis11_legup}.

\paragraph{Choice of benchmarks.} We evaluate \vericert{} using the PolyBench/C benchmark suite~\cite{polybench}, consisting of a collection of well-known numerical kernels. PolyBench/C is widely-used in the HLS context~\cite{choi+18,poly_hls_pouchet2013polyhedral,poly_hls_zhao2017,poly_hls_zuo2013}, since it consists of affine loop bounds, making it attractive for regular and streaming computation on FPGA architectures.
We chose PolyBench 4.2.1 for our experiments, which consists of 30 programs.
Of these 30 programs, three use square root functions: \texttt{correlation}, \texttt{gramschmidt} and \texttt{deriche}.
We were therefore unable to evaluate these programs, since they necessarily require \texttt{float}s.
% Interestingly, we were also unable to evaluate \texttt{cholesky} on \legup{}, since it produced an error during its HLS compilation. 
In summary, we evaluate 27 programs from the latest PolyBench suite. 

\paragraph{Configuring PolyBench for experimentation.}
We configure PolyBench's metadata and slightly modify the source code to suit our purposes.
First, we restrict PolyBench to generate only integer data types, since we do not currently support floats or doubles.
Secondly, we use PolyBench's smallest data set size for each program, to ensure that the data can reside within the on-chip memories of the FPGA, avoiding any need for off-chip memory accesses.
Furthermore, the C divide and modulo operators translate directly to the built-in Verilog divide and modulo operators. 
Unfortunately, these built-in operators are implemented as single-cycle operations, causing large penalties in clock frequency. 
To work around this issue, we use a C implementation of the divide and modulo operations, which compiles indirectly to multi-cycle operations on the FPGA. 
In addition, we initialise the input arrays and check the output arrays of all programs entirely on-chip. 
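The kind of C replacement described above can be sketched as follows. This is an illustrative sketch rather than the exact routine used in our benchmarks: a classic restoring shift-and-subtract division, which an HLS tool naturally compiles to a multi-cycle loop instead of a single-cycle combinational divider. The names \texttt{div\_sw} and \texttt{mod\_sw} are hypothetical.

```c
#include <stdint.h>

/* Illustrative sketch (not the exact benchmark code): restoring
   shift-and-subtract division.  The 32-iteration loop becomes a
   multi-cycle circuit under HLS, avoiding the clock-frequency
   penalty of a single-cycle combinational divider. */
uint32_t div_sw(uint32_t n, uint32_t d) {
    uint32_t q = 0, r = 0;
    for (int i = 31; i >= 0; i--) {
        r = (r << 1) | ((n >> i) & 1u);  /* bring down the next dividend bit */
        if (r >= d) {                    /* divisor fits: subtract and ... */
            r -= d;
            q |= 1u << i;                /* ... set the quotient bit */
        }
    }
    return q;
}

uint32_t mod_sw(uint32_t n, uint32_t d) {
    return n - div_sw(n, d) * d;         /* remainder from the quotient */
}
```

Calls to \texttt{/} and \texttt{\%} in the benchmarks are then replaced by calls to these functions.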

% For completeness, we use the full set of 24 benchmarks. We set the benchmark parameters so that all datatypes are integers (since \vericert{} only supports integers) and all datasets are `small' (to fit into the small on-chip memories). A current limitation of \vericert{}, as discussed in Section~\ref{?}, is that it does not support addition and subtraction operations involving integer literals not divisible by 4. To work around this, we lightly modified each benchmark program so that literals other than multiples of 4 are stored into variables before being added or subtracted. \JW{Any other notable changes to the benchmarks?}

\paragraph{Synthesis setup.} The Verilog generated by both \vericert{} and \legup{} is provided to Intel Quartus v16.0~\cite{quartus}, which synthesises the Verilog into a netlist and place-and-routes this netlist onto an Arria 10 FPGA device consisting of approximately 430000 LUTs.

\subsection{RQ1: How fast is \vericert{}-generated hardware?}

\begin{figure}
\begin{tikzpicture}
\begin{axis}[
    xmode=log,
    ymode=log,
    height=80mm,
    width=80mm,
    xlabel={\legup{} cycle count},
    ylabel={\vericert{} cycle count},
    xmin=1000,
    xmax=10000000,
    ymax=10000000,
    ymin=1000,
    %log ticks with fixed point,
  ]
  
\addplot[draw=none, mark=*, draw opacity=0, fill opacity=0.3]
  table [x=legupcycles, y=vericertcycles, col sep=comma]
  {results/poly.csv};

\addplot[dotted, domain=1000:10000000]{x};
%\addplot[dashed, domain=10:10000]{9.02*x};
  
\end{axis}
\end{tikzpicture}
\caption{A comparison of the cycle count of hardware designs generated by \vericert{} and by \legup{}, where the diagonal represents $y=x$.}
\label{fig:comparison_cycles}
\end{figure}

\begin{figure}
\begin{tikzpicture}
\begin{axis}[
    xmode=log,
    ymode=log,
    height=80mm,
    width=80mm,
    xlabel={\legup{} execution time (ms)},
    ylabel={\vericert{} execution time (ms)},
    xmin=10,
    xmax=100000,
    ymax=100000,
    ymin=10,
    %log ticks with fixed point,
  ]
  
\addplot[draw=none, mark=*, draw opacity=0, fill opacity=0.3]
  table [x expr={\thisrow{legupcycles}/\thisrow{legupfreqMHz}}, y expr={\thisrow{vericertcycles}/\thisrow{vericertfreqMHz}}, col sep=comma]
  {results/poly.csv};

\addplot[dotted, domain=10:100000]{x};
%\addplot[dashed, domain=10:10000]{9.02*x + 442};
  
\end{axis}
\end{tikzpicture}
\caption{A comparison of the execution time of hardware designs generated by \vericert{} and by \legup{}, where the diagonal represents $y=x$.}
\label{fig:comparison_time}
\end{figure}

Firstly, before comparing any performance metrics, it is worth highlighting that any Verilog produced by \vericert{} is guaranteed to be \emph{correct}, whilst no such guarantee is provided by \legup{}.
This guarantee in itself provides a significant leap in the reliability of HLS, compared to any other HLS tool available. 

Figure~\ref{fig:comparison_cycles} compares the cycle counts of our 27 programs as compiled by \vericert{} and \legup{} respectively. 
In most cases, the data points lie above the diagonal, showing that the \legup{}-generated hardware is faster than the \vericert{}-generated hardware.
This performance gap is mostly due to \legup{} optimisations such as scheduling and memory analysis, which are designed to exploit parallelism in the input programs.
On average, \legup{} designs are $4\times$ faster than \vericert{} designs on the PolyBench programs.
This gap does not represent the inherent performance cost of formally verifying an HLS tool.  
Rather, it is simply the gap between an unoptimised \vericert{} and an optimised \legup{}.
In fact, even without any optimisations, a few data points lie close to, or even below, the diagonal, meaning that \vericert{} is competitive with \legup{} on those programs.
We are very encouraged by these data points. 
As we extend \vericert{} with further HLS optimisations, proved correct in Coq, this gap should narrow whilst preserving our correctness guarantees.

Cycle count is only one factor in execution latency; the other is the clock frequency of the generated design, which determines how long each cycle takes. Figure~\ref{fig:comparison_time} compares the execution times of the \vericert{} and \legup{} designs. On average, \vericert{} designs are $9\times$ slower than \legup{} designs. As mentioned earlier, we modified the PolyBench programs to use C-based divide and modulo operations, avoiding the built-in Verilog operators, since Quartus interprets those as single-cycle operations, which drastically lowers clock frequency. When using the built-in Verilog operators, \vericert{}'s average clock frequency was 21MHz, compared to \legup{}'s 246MHz. By moving to the C-based approach, our average clock frequency is now 113MHz. Hence, we reduce the frequency gap from approximately $12\times$ to $2\times$. The remaining gap exists because \legup{} uses various optimisation tactics and Intel-specific IP blocks, which would require further engineering effort and testing on our side. 

% The difference in cycle counts shows the degree of  parallelism that \legup{}'s scheduling and memory system can offer. However, their Verilog generation is not guaranteed to be correct. Although the runtime LegUp outputs are tested to be correct for these programs, this does not provide any confidence on the correctness of Verilog generation of any other programs. Our Coq proof mechanisation guarantees that generated Verilog is correct for any input program that uses the subset of \compcert{} instructions that we have proven to be correct.

\subsection{RQ2: How area-efficient is \vericert{}-generated hardware?}

\begin{figure}
\begin{tikzpicture}
\begin{axis}[
    height=80mm,
    width=80mm,
    xlabel={\legup{} resource utilisation (\%)},
    ylabel={\vericert{} resource utilisation (\%)},
    xmin=0, ymin=0,
    xmax=1, ymax=30,
  ]
  
\addplot[draw=none, mark=*, draw opacity=0, fill opacity=0.3]
  table [x expr=(\thisrow{legupluts}/427200*100), y expr=(\thisrow{vericertluts}/427200*100), col sep=comma]
  {results/poly.csv};
  
%   \addplot[dashed, domain=0:1]{x};
  
\end{axis}
\end{tikzpicture}
\caption{A comparison of the resource utilisation of designs generated by \vericert{} and by \legup{}}
\label{fig:comparison_area}
\end{figure}

Figure~\ref{fig:comparison_area} compares the resource utilisation of the PolyBench programs as synthesised by \vericert{} and \legup{}.
On average, \vericert{} produces hardware that is $21\times$ larger than \legup{} for the same PolyBench programs. 
\vericert{} designs fill up to 30\% of a large FPGA chip, which indicates that unoptimised Verilog generation can be costly. 
The key reason for this behaviour is the absence of RAM inference for the \vericert{} designs. 
RAMs are dedicated hardware blocks on the FPGA that are specifically designed to implement arrays efficiently.
Synthesis tools like Quartus generally require array accesses to follow a certain template before RAM usage can be inferred.
\legup{}'s Verilog generation is tailored to enable RAM inference by Quartus.
\vericert{}'s array accesses in Verilog are more generic, allowing us to target different FPGA synthesis tools and vendors, but they do not meet Quartus's requirements.
For a fair comparison, we nevertheless chose Quartus for these experiments, because \legup{} targets Quartus efficiently. 
% Consequently, on average, \legup{} designs use $XX$ RAMs whereas \vericert{} use none. 
Improving RAM inference is part of our future plans. 

% We see that \vericert{} designs use between 1\% and 30\% of the available logic on the FPGA, averaging at around 10\%, whereas LegUp designs all use less than 1\% of the FPGA, averaging at around 0.45\%. The main reason for this is mainly because RAM is not inferred automatically for the Verilog that is generated by \vericert{}.  Other synthesis tools can infer the RAM correctly for \vericert{} output, so this issue could be solved by either using a different synthesis tool and targeting a different FPGA, or by generating the correct template which allows Quartus to identify the RAM automatically.

\subsection{RQ3: How long does \vericert{} take to produce hardware?}

\begin{figure}
\begin{tikzpicture}
\begin{axis}[
    height=80mm,
    width=80mm,
    xlabel={\legup{} compilation time (s)},
    ylabel={\vericert{} compilation time (s)},
    yticklabel style={
        /pgf/number format/fixed,
        /pgf/number format/precision=2},
    xmin=4.6,
    xmax=5.1,
    ymin=0.06,
    ymax=0.20,
  ]
  
\addplot[draw=none, mark=*, draw opacity=0, fill opacity=0.3] 
  table [x=legupcomptime, y=vericertcomptime, col sep=comma]
  {results/poly.csv};

  %\addplot[dashed, domain=4.5:5.1]{0.1273*x-0.5048};

\end{axis}
\end{tikzpicture}
\caption{A comparison of compilation time for \vericert{} and for \legup{}}
\label{fig:comparison_comptime}
\end{figure}

Figure~\ref{fig:comparison_comptime} compares the compilation times of \vericert{} and \legup{}, with each data point corresponding to one of the PolyBench/C benchmarks. On average, \vericert{} compilation is about $47\times$ faster than \legup{} compilation. \vericert{} is much faster because it omits the many complex and time-consuming HLS optimisations performed by \legup{}, such as scheduling and memory analysis. This comparison also shows that our approach does not add any significant compilation-time overhead, since the correctness proofs are established once and for all, rather than verification being invoked on every compilation.


\NR{Do we want to finish the section off with some highlights or a summary?}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "main"
%%% TeX-command-extra-options: "-shell-escape"
%%% End: