author    Yann Herklotz <git@yannherklotz.com>  2021-02-25 13:21:24 +0000
committer Yann Herklotz <git@yannherklotz.com>  2021-02-25 13:21:24 +0000
commit    8e3dfda8b700385317b9ed26a93d4f49eb2137e1 (patch)
tree      07c8fb271e0acaf69aae1fc7615bd51098c225d6 /main.tex
parent    8627e6424bcd32665c57378ff2a67fb83d30f216 (diff)
Work on clarifying sections
Diffstat (limited to 'main.tex')
 main.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/main.tex b/main.tex
index dffecb4..7e8468f 100644
--- a/main.tex
+++ b/main.tex
@@ -220,7 +220,7 @@ However, this could be seen as being beneficial, as proving the correctness of t
Another concern might be that a verified HLS tool might not be performant enough to be usable in practice. If that were the case, the verification effort could be seen as wasted, since the tool would not be used. Taking \vericert{} as an example, which does not currently include many optimisations, we found that the speed and area of the fully verified parts of \vericert{} were comparable ($1\times$--$1.5\times$) to \legup{}~\cite{canis11_legup} without LLVM optimisations and without operation chaining. With those optimisations fully turned on, \vericert{} is around $4.5\times$ slower than \legup{}, with half of the speed-up being due to LLVM.\@
-There are many optimisations that need to be added to \vericert{} to turn it into a viable and competitive HLS tool. First of all, the most important addition is a good scheduling implementation, which supports operation chaining and properly pipelines operators. Our main focus is on implementing scheduling based on systems of difference constraints~\cite{cong06_sdc}, which is the same algorithm that \legup{} uses. We have currently implemented the scheduling algorithm, as well as a simple verifier for the translation, but are missing the semantic preservation proofs that the verifier should produce. With this optimisation turned on, we are only around $2\times$ to $3\times$ slower than fully optimised \legup{}, with a slightly larger area. As scheduling is implemented using translation validation, the scheduling algorithm can be tweaked and optimised without ever having to touch the correctness proof. This shares the downside of other translation validation approaches, namely that equivalence may sometimes not be provable, but errors will at least be caught at compilation time.
+There are many optimisations that need to be added to \vericert{} to turn it into a viable and competitive HLS tool. First of all, the most important addition is a good scheduling implementation, which supports operation chaining and properly pipelines operators. Our main focus is on implementing scheduling based on systems of difference constraints~\cite{cong06_sdc}, which is the same algorithm that \legup{} uses. With this optimisation turned on, \vericert{} is only around $2\times$ to $3\times$ slower than fully optimised \legup{}, with a slightly larger area. The scheduling step is implemented using verified translation validation, meaning the scheduling algorithm can be tweaked and optimised without ever having to touch the correctness proof.
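The SDC scheduling mentioned above can be sketched as a small linear program; the following is a minimal illustrative formulation, not \vericert{}'s actual encoding, and the symbols $s_i$ (start cycle of operation $i$) and $d_i$ (its latency in cycles) are assumptions introduced here for illustration:

```latex
% Illustrative SDC scheduling formulation (a sketch, not Vericert's encoding):
%   s_i -- start cycle of operation i;  d_i -- latency of operation i in cycles.
\begin{align*}
  \text{minimise}\quad   & \textstyle\sum_i s_i \\
  \text{subject to}\quad & s_j - s_i \ge d_i
      && \text{for each data dependency } i \rightarrow j
\end{align*}
```

The key property is that every constraint is a difference of exactly two variables, which makes the constraint matrix totally unimodular, so solving the LP relaxation already yields integral start cycles without a separate integer-programming step.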
\paragraph{Even a formally verified HLS tool can't give absolute guarantees about the hardware designs it produces.}