author    Yann Herklotz <git@yannherklotz.com>    2020-11-18 17:39:33 +0000
committer    Yann Herklotz <git@yannherklotz.com>    2020-11-18 17:39:33 +0000
commit    91621b32867110c6feb6fb97d01bfecfb388f446 (patch)
tree    1be452868eacf167b01e47e3b4eb0d3d5fdd9ed1
parent    416227d361e75d05e87e6dde469c6cce76db874d (diff)
Add notes
-rw-r--r--    proof.tex    7
-rw-r--r--    verilog.tex    2
2 files changed, 4 insertions, 5 deletions
diff --git a/proof.tex b/proof.tex
index 1de5a56..2d1a42f 100644
--- a/proof.tex
+++ b/proof.tex
@@ -1,8 +1,7 @@
\section{Proof}
\label{sec:proof}
-\NR{Now that we have adapted the Verilog semantics to the CompCert model, we are in a position to formally prove the correctness of our C to Verilog compilation.}
-This section describes the main correctness theorem that was proven and the main ideas behind the proofs.
+Now that the Verilog semantics have been adapted to the CompCert model, we are in a position to formally prove the correctness of our C to Verilog compilation. This section describes the main correctness theorem that was proven and the main ideas behind the proofs.
The main correctness theorem is the same as the correctness theorem stated in \compcert{}~\cite{leroy09_formal_verif_realis_compil}: for all Clight source programs $C$, if the translation from the source to the target Verilog code succeeds, and $C$ has correct observable behaviour $B$ when executed, then the target Verilog code will simulate with the same behaviour $B$. A behaviour is either convergent, divergent or ``going wrong'', and is associated with a trace $t$ of any external function calls. The following backwards simulation theorem describes this property, where $\Downarrow_{s}$ and $\Downarrow$ stand for simulation and execution respectively.
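As a rough illustration of the shape this statement could take (this is a sketch of one reading of the prose above, not necessarily the exact statement in the paper; $\yhfunction{HLS}$ is a placeholder name for the full C to Verilog translation function):

\begin{equation*}
\forall\ C\ V\ B,\ \yhfunction{HLS}(C) = \yhconstant{OK}(V) \land C \Downarrow B \implies V \Downarrow_{s} B.
\end{equation*}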
@@ -14,7 +13,7 @@ To prove this top-level theorem, a forward simulation has to be done between eac
\subsection{Specification}
-To simplify the proof, instead of using the translation algorithm as an assumption, as was done in the backward simulation stated above, a specification of the translation can be constructed instead which contains all the properties that are needed to prove the correctness. For example, for the translation from 3AC to HTL, the translation might be defined using the following function: $\yhfunction{transl\_{3ac}} (c) = \yhconstant{OK} (h)$, where $c$ is the 3AC input code and $h$\NR{is the output HTL code?}. However, instead we can define a relation \texttt{tr\_htl} between the 3AC and HTL code, which contains all the properties about the translation that are needed, without any of the implementation. If the following can be proven, it can then be used instead of the translation algorithm when performing the proof of correctness.
+To simplify the proof, instead of using the translation algorithm as an assumption, as was done in the backward simulation stated above, a specification of the translation can be constructed which contains all the properties that are needed to prove correctness. For example, for the translation from 3AC to HTL, the translation might be defined using the following function: $\yhfunction{transl\_3ac\_htl} (c) = \yhconstant{OK} (h)$, where $c$ is the 3AC input code and $h$ is the generated HTL code. However, we can instead define a relation \texttt{tr\_3ac\_htl} between the 3AC and HTL code, which contains all the properties about the translation that are needed, without any of the implementation. If the following can be proven, it can then be used instead of the translation algorithm when performing the proof of correctness.
\begin{equation*}
\forall\ c\ h,\ \yhfunction{transl\_3ac\_htl} (c) = \yhconstant{OK}(h) \implies \yhfunction{tr\_3ac\_htl}\ c\ h.
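Purely as an illustration of the kind of property such a relational specification can contain (the names $\yhfunction{tr\_instr}$, $\mathit{data}$ and $\mathit{ctrl}$ are hypothetical here), one clause might require that every instruction present in the 3AC code $c$ is related to a matching statement in the data-path and control logic of the HTL code $h$:

\begin{equation*}
\forall\ i\ \mathit{instr},\ c[i] = \yhconstant{Some}\ \mathit{instr} \implies \yhfunction{tr\_instr}\ \mathit{instr}\ (\mathit{data}\ h)\ (\mathit{ctrl}\ h).
\end{equation*}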
@@ -99,7 +98,7 @@ Another problem with the representation of the state as an actual register is th
Finally, to prove the backward simulation given the forward simulation, it has to be shown that if we generate hardware with a specific behaviour, then that behaviour is the only possible behaviour of the generated hardware. This only has to be performed for the final language in the chain, which is Verilog, so that the backward simulation holds for the whole chain from Clight to Verilog.
-The Verilog semantics that are used are deterministic, as the order of operation of all the constructs is defined, and there is therefore only one way to evaluate the module and only one possible behaviour. This was proven correct for the small-step semantics shown in Figure~\ref{fig:inferrence_module}
+The Verilog semantics that are used are deterministic, as the order of operation of all the constructs is defined, and there is therefore only one way to evaluate the module and only one possible behaviour. This was proven correct for the small-step semantics shown in Figure~\ref{fig:inferrence_module}.
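Determinism itself can be phrased as the following kind of lemma (an illustrative sketch, writing $\Sigma, m \longrightarrow \Sigma'$ for one small step of a module $m$): from the same starting state, any two steps of the same module reach the same state.

\begin{equation*}
\forall\ m\ \Sigma\ \Sigma_{1}\ \Sigma_{2},\ (\Sigma, m \longrightarrow \Sigma_{1}) \land (\Sigma, m \longrightarrow \Sigma_{2}) \implies \Sigma_{1} = \Sigma_{2}.
\end{equation*}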
%\subsection{Coq Mechanisation}
diff --git a/verilog.tex b/verilog.tex
index 03cafad..ae30c9c 100644
--- a/verilog.tex
+++ b/verilog.tex
@@ -22,7 +22,7 @@ An example of a rule \JWcouldcut{in the semantics} for executing an always block
\end{equation*}
\noindent which shows that, assuming the statement $s$ in the always block runs with state $\Sigma$ and produces the new state $\Sigma'$, the always block will result in the same final state. As only clocked always blocks are supported, and one step in the semantics corresponds to one clock cycle, this rule is run once per clock cycle \NRcouldcut{which is what it is defined to do}.
-\NR{The mention about small steps being one cycle and 'only clocked/synchronous \alwaysblock{} are supported' can move earlier.} \JW{Actually couldn't this rule be run multiple times per clock cycle in the case where you have multiple always blocks in your module all triggered on the same rising edge?}
+\NR{The mention about small steps being one cycle and 'only clocked/synchronous \alwaysblock{} are supported' can move earlier.} \JW{Actually couldn't this rule be run multiple times per clock cycle in the case where you have multiple always blocks in your module all triggered on the same rising edge?}\YH{Yes, it is run multiple times per clock cycle; this is controlled by the \textsc{Module} rule, which runs the always blocks sequentially. The iteration is handled by another big-step semantics over the list of always blocks: the Cons case runs the head and then the rest of the list, while the Nil case returns the state unchanged.}
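For illustration only (a sketch of the idea described in the comment above, not the paper's actual rules), running a list of always blocks sequentially can be captured by two rules, one for the empty list and one that runs the head of the list followed by the rest, threading the state through:

\begin{equation*}
\frac{}{\Sigma,\ \yhconstant{nil} \longrightarrow_{\mathit{list}} \Sigma}
\qquad
\frac{\Sigma,\ a \longrightarrow \Sigma'' \quad\quad \Sigma'',\ \mathit{as} \longrightarrow_{\mathit{list}} \Sigma'}{\Sigma,\ a :: \mathit{as} \longrightarrow_{\mathit{list}} \Sigma'}
\end{equation*}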
Two types of assignments are supported in always blocks: nonblocking and blocking assignment. Nonblocking assignment modifies the signal at the end of the timestep \JW{Assuming `timestep' and `clock cycle' are the same, I suggest we stick with `clock cycle' everywhere.} and atomically. Blocking assignment, on the other hand, assigns the variable \NRreplace{directly}{instantaneously?} in the always block for later signals to pick up. To model both these assignments, the state $\Sigma$ has to be split into two \NRreplace{parts}{sets?}: $\Gamma$, containing the current values of all variables, and $\Delta$, containing the values that will be assigned to the variables at the end of the clock cycle; we can therefore say that $\Sigma = (\Gamma, \Delta)$.~\NR{Can we say that $\Gamma$ contains instantaneous (ephemeral) updates for the current cycle and $\Delta$ contains periodical updates for the next cycle? Will be good to distinguish those two updates with the same terms across the paper, which can tie in with the small-step and big-step distinction, maybe asynchronous vs synchronous updates.}\YH{Hmm, so I don't think nonblocking and blocking tie in with big-step and small-step, but yes it would be better to use the same terms throughout the paper} The nonblocking assignment can therefore be expressed as the following:
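As a sketch of the distinction only (the paper's actual rules follow in the text; $\mathit{eval}$ is a placeholder for expression evaluation), nonblocking assignment $r \Leftarrow e$ records the new value in $\Delta$ while leaving $\Gamma$ untouched, whereas blocking assignment $r = e$ updates $\Gamma$ immediately:

\begin{equation*}
(\Gamma, \Delta),\ r \Leftarrow e \longrightarrow (\Gamma, \Delta[r \mapsto \mathit{eval}(\Gamma, e)])
\qquad\qquad
(\Gamma, \Delta),\ r = e \longrightarrow (\Gamma[r \mapsto \mathit{eval}(\Gamma, e)], \Delta)
\end{equation*}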