Diffstat (limited to 'chapters/background.tex')
-rw-r--r--  chapters/background.tex | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)
diff --git a/chapters/background.tex b/chapters/background.tex
index d4ec91f..3eb525a 100644
--- a/chapters/background.tex
+++ b/chapters/background.tex
@@ -45,8 +45,8 @@ SMTCoq~\cite[armand11_modul_integ_sat_smt_solver].
\subsection[sec:background:interactive]{Interactive theorem provers}
-Interactive theorem provers, on the other hand, focus on the checking proofs that are provided to
-them. These can either be written manually by the user, or automatically generated by some decision
+Interactive theorem provers, on the other hand, focus on checking proofs that are provided to them.
+These can either be written manually by the user, or automatically generated by some decision
procedure. However, these two ways of generating proofs can be combined, so that the general proof
structure can be written manually, and simpler steps inside the proof can be solved
automatically.
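
To illustrate this workflow concretely, the following is a minimal, hypothetical sketch in Lean
(rather than any particular prover discussed in this chapter): the overall proof structure is
written by hand, while a simpler arithmetic subgoal is left to the omega decision procedure. The
theorem and its name are invented purely for illustration.

    -- Manually written proof structure with an automated final step.
    theorem reorder_sum (a b c : Nat) : (a + b) + c = a + (c + b) := by
      -- manual step: rewrite the right-hand side using commutativity of addition
      rw [Nat.add_comm c b]
      -- the remaining linear-arithmetic goal is discharged automatically
      omega
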
@@ -75,7 +75,7 @@ or a custom representation of the code.
\desc{Hardware resource allocation} It is possible to select a device to target, which gives
information about how many resources are available and can be used to implement the logic. However,
-tools often assume that an unlimited amount of resources are available instead. This is resource
+tools often assume that an unlimited amount of resources is available instead. This resource
sharing can be too expensive for adders, as the logic for sharing the adder would have approximately
the same cost as adding a separate adder. However, multipliers and divider blocks should be shared,
especially as divider blocks are often implemented in logic and are therefore quite expensive.
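
To illustrate the trade-off concretely, the following hypothetical C fragment (invented for
illustration, not taken from the chapter) mixes cheap additions with expensive divisions; a typical
binding pass would give each addition its own adder, while the two divisions are good candidates
for being multiplexed onto a single shared divider.

    // Hypothetical C input to an HLS tool, written to show the sharing trade-off.
    // The two additions are cheap, so duplicating adders is usually better than
    // sharing one; the two divisions are expensive when implemented in logic, so
    // they are good candidates for being multiplexed onto a single divider.
    void scale(const int *a, const int *b, int *out, int n, int d) {
      for (int i = 0; i < n; i++) {
        int s0 = a[i] + d;   /* cheap: its own adder */
        int s1 = b[i] + d;   /* cheap: its own adder */
        int q0 = s0 / d;     /* expensive: candidate for a shared divider */
        int q1 = s1 / d;     /* expensive: candidate for a shared divider */
        out[i] = q0 + q1;
      }
    }

Whether the divider is actually shared depends on the selected target device and on how the
scheduler overlaps the two divisions.
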
@@ -91,8 +91,8 @@ could be used, such as static or dynamic scheduling, which are described further
the variables need to be bound to hardware units and registers respectively. It is often not that
practical to share many hardware units though, as this requires adding multiplexers, which are often
the same size as the hardware units themselves. It is therefore more practical to only share
-operations that take up a lot of space, such as modulo or divide circuits, as well as multiplies if
-there are not available any more on the FPGA.
+operations that take up a lot of space, such as modulo or divide circuits, as well as multipliers if
+they are no longer available on the FPGA.
\desc{Hardware description generation} Finally, the hardware description is generated from the code
that was described in the intermediate language. This is often a direct translation of the
@@ -109,7 +109,7 @@ semantics.
\subsection[sec:language-blocks]{Intermediate Language}
This section describes some possible characteristics of a language that could be used as an input to
-an \HLS\ tool. These language normally require some structure to allow for easier optimisation and
+an \HLS\ tool. These languages normally require some structure to allow for easier optimisation and
analysis passes. In particular, it is often useful to gather contiguous blocks of instructions that
do not contain any control flow into one list. This means that these instructions can safely be
rearranged by looking only at local information of the block itself, and it allows for
@@ -168,7 +168,7 @@ be tricky if there are loads and stores in the loop or if the loops are nested.
\subsection[sec:dynamic_scheduling]{Dynamic Scheduling}
On the other hand, dynamic scheduling~\cite{josipovic18_dynam_sched_high_level_synth} does not
-require the schedule to be known at compile time and it instead generates circuits using tokens to
+require the schedule to be known at compile time and instead it generates circuits using tokens to
schedule the operations in parallel at run time. Whenever the data for an operation is available,
it sends a token to the next operation, signalling that the data is ready to be read. The next
operation does not start until all the required inputs to the operation are available, and once that
@@ -248,17 +248,17 @@ further in \in{Section}[sec:wire-to-wire].
Even though a proof for the correctness of an algorithm might exist, this does not guarantee that
the algorithm itself behaves in the same way as the algorithm assumed in the proof. The
algorithm in the proof is separate from the actual implementation, meaning there could be
-various implementation bugs in the algorithm that mean that it does not behave correctly. C
-compilers are a good example of this, where many optimisations performed by the compilers have been
-proven correct, however these proofs are not linked directly to the actual implementations of these
-algorithms in GCC or Clang. Yang et al.~\cite{yang11_findin_under_bugs_c_compil} found
-more than 300 bugs in GCC and Clang, many of them being in the optimisation phases of the compiler.
-One way to link the proofs to the actual implementations in these compilers is to write the compiler
-in a language which allows for a theorem prover to check properties about the algorithms. This
-allows for the proofs to be directly linked to the algorithms, ensuring that the actual
-implementations are proven correct. Yang et al.~\cite{yang11_findin_under_bugs_c_compil}
-found that CompCert, a formally verified C Compiler, only had five bugs in all the unverified parts
-of the compiler, meaning this method of proving algorithms correct ensures a correct compiler.
+various implementation bugs in the algorithm that cause it to behave incorrectly. C compilers are a
+good example of this: many optimisations performed by the compilers have been proven correct;
+however, these proofs are not linked directly to the actual implementations of these algorithms in
+GCC or Clang. Yang et al.~\cite{yang11_findin_under_bugs_c_compil} found more than 300 bugs in GCC
+and Clang, many of them appearing in the optimisation phases of the compiler. One way to link the
+proofs to the actual implementations in these compilers is to write the compiler in a language which
+allows a theorem prover to check properties about the algorithms. This allows the proofs to be
+directly linked to the algorithms, ensuring that the actual implementations are proven correct.
+Yang et al.~\cite{yang11_findin_under_bugs_c_compil} found that CompCert, a formally verified C
+compiler, only had five bugs in all the unverified parts of the compiler, showing that this method
+of proving algorithms correct yields a far more reliable compiler.
\subsection{CompCert}
@@ -333,8 +333,8 @@ As both the input IR and output netlist format have been designed from scratch,
useful for real-world applications, as they require a different back end to be implemented from
existing compilers. In addition, it would most likely mean that the translation from a
higher-level language to the IR is unverified and could therefore contain bugs. As the resulting
-netlist also uses a custom format, it cannot be fed directly to tools that can then translate it a
-bitstream to place onto an FPGA. The reason for designing a custom IR and netlist was so that
+netlist also uses a custom format, it cannot be fed directly to tools that can then translate it to
+a bitstream to place onto an FPGA. The reason for designing a custom IR and netlist was so that
these were compatible with each other, making proofs of equivalence between the two simpler.
Finally, it is unclear whether or not a translation algorithm from the IR to the netlist format was