From b3374e905b62665a84ddb75927e3a55df2bfd126 Mon Sep 17 00:00:00 2001
From: Yann Herklotz
Date: Mon, 18 Jan 2021 16:21:47 +0000
Subject: Add final reduction

---
 intro.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/intro.tex b/intro.tex
index 11fc016..c269b8e 100644
--- a/intro.tex
+++ b/intro.tex
@@ -50,7 +50,7 @@ int main() {
 \label{fig:vivado_bug1}
 \end{figure}
 
-The example above demonstrates the effectiveness of fuzzing. It seems unlikely that a human-written test-suite would discover this particular bug, given that it requires several components all to coincide -- a for-loop, shift-values accessed from an array with at least six elements, and a rather random-looking value for \code{x} -- before the bug is revealed!
+The example above demonstrates the effectiveness of fuzzing. It seems unlikely that a human-written test-suite would discover this particular bug, given that it requires several components all to coincide before the bug is revealed!
 Yet this example also begs the question: do bugs found by fuzzers really \emph{matter}, given that they are usually found by combining language features in ways that are vanishingly unlikely to happen `in the real world'~\cite{marcozzi+19}. This question is especially pertinent for our particular context of HLS tools, which are well-known to have restrictions on the language features that they handle.
 Nevertheless, although the \emph{test-cases} we generated do not resemble the programs that humans write, the \emph{bugs} that we exposed using those test-cases are real, and \emph{could also be exposed by realistic programs}.
 %Moreover, it is worth noting that HLS tools are not exclusively provided with human-written programs to compile: they are often fed programs that have been automatically generated by another compiler.
--
cgit
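
For context, a minimal sketch of the kind of reduced test case the removed
sentence describes -- a for-loop, shift amounts read from an array with at
least six elements, and a rather random-looking value for x. The variable
names, constants, array contents, and loop bound below are all invented for
illustration; the actual program is the one in the paper's figure
(fig:vivado_bug1), not this one:

    /* Hypothetical sketch of the described test-case shape; every
     * constant here is invented, not taken from the paper's figure. */
    unsigned int x = 0x5D7262D3;   /* a rather random-looking value      */
    int a[6] = {1, 1, 1, 1, 1, 1}; /* array with at least six elements   */

    int main() {
      for (int i = 0; i < 2; i++)  /* the for-loop component             */
        x = x >> a[i];             /* shift x by values read from a[]    */
      return x;                    /* result compared against the output
                                      of a reference compiler            */
    }

In a fuzzing setup of this kind, a miscompilation surfaces when the value
returned by the HLS-synthesised design differs from the value computed by an
ordinary C compiler running the same program.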