%\JW{I think most of the following paragraph could be chopped. We've already mentioned that we're only targeting a small fragment of Verilog, so we could mention at that point that in the fragment we target there is no discrepancy between the simulation behaviour and the synthesis behaviour. I don't know how important it is to talk about the details of evaluation order here -- does the reader really need to be aware of this in order to understand your work?}
When targeting a hardware description language such as Verilog, it is important that the simulated behaviour of the design matches the behaviour of the synthesised result on actual hardware.  The target Verilog semantics therefore only supports clocked \alwaysblock{}s, as these avoid mismatches between simulation and synthesis; combinational \alwaysblock{}s, which trigger on any change of an internal signal, may behave differently in simulation and synthesis depending on the order in which events are processed.  This restriction helps keep the generated Verilog correct and consistent.  In addition, only nonblocking assignments are used for signals that are shared between multiple \alwaysblock{}s, which prevents race conditions because all such signal updates happen deterministically at the end of the clock cycle. \NR{So the semantics allows multiple drivers then?}  Finally, a specific evaluation order for the \alwaysblock{}s is chosen; because of the restrictions above, executing the \alwaysblock{}s in this fixed order is guaranteed to produce the same behaviour as executing them in any other order.\NR{Order is not guaranteed if we have multiple drivers.}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Using these constructs it is therefore possible to describe how the hardware behaves: \alwaysblock{}s that are triggered by the clock are translated into flip-flops, whereas \alwaysblock{}s triggered by changes in any internal signal are translated into combinational logic.
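
As a brief illustrative sketch (the signal names here are hypothetical and not taken from the compiler's output), the two kinds of \alwaysblock{} look as follows:

\begin{verbatim}
// Triggered on the rising clock edge: synthesised to a flip-flop.
always @(posedge clk)
  q <= d;

// Triggered on any change of its inputs: synthesised to combinational
// logic (not supported by the target semantics described above).
always @*
  y = a & b;
\end{verbatim}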

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Semantics}

An existing operational semantics for Verilog~\cite{loow19_verif_compil_verif_proces} was adapted for the language that Vericert eventually targets.  This semantics is small-step at the level of clock cycles, since hardware does not normally terminate; within each clock cycle, however, the semantics is big-step.  This style matches the small-step operational semantics of \compcert{}'s three-address code (3AC) quite well.

%~\NR{So the semantics does not support combinational always blocks or assignments?}\YH{Currently not, as they are not needed.}
%~\NR{Does the semantics also enforce that variables can only be modified within one always block (multiple drivers not allowed)? PL community might think of data races. }\YH{Will add that.}
%~\NR{Oh, do we sequentialise always execution?}\YH{Yes}
%~\NR{Have you discussed what an association map is?}\YH{Will add a section about that.}
%~\NR{Do blocking and non-blocking assignments each have individual maps? Also, association map is used before definition in this paragraph.}
At the top level, \alwaysblock{}s describe logic that is run every time some event occurs.  The only event supported by these semantics is the rising edge of the clock, which is what is needed to implement synchronous logic.  As soon as the event occurs, the hardware is executed, meaning that if multiple \alwaysblock{}s are triggered by the event, they run in parallel.  Assigning to the same variable in multiple \alwaysblock{}s is not allowed, so that there cannot be any data races between assignments.  As the semantics should nevertheless be deterministic, we impose an order on the \alwaysblock{}s and execute them sequentially.  To preserve the fact that the statements inside the \alwaysblock{}s are executed in parallel, nonblocking assignments to variables are kept in a different association map than blocking assignments, where an association map is a mapping from a variable name to its current value.  This preserves the behaviour that blocking assignments change the value of a variable within the current clock cycle, whereas nonblocking assignments only take effect at the end of the clock cycle, in parallel.  We denote these association maps as $s = (\Gamma_{\rm r}, \Gamma_{\rm a}, \Delta_{\rm r}, \Delta_{\rm a})$, where $\Gamma_{\rm r}$ holds the current values of the registers, $\Gamma_{\rm a}$ holds the current values of the arrays, and $\Delta_{\rm r}$ and $\Delta_{\rm a}$ hold the values that the registers and arrays will take when the clock cycle ends.  Blocking assignments therefore modify $\Gamma_{\rm r}$ and $\Gamma_{\rm a}$, whereas nonblocking assignments modify $\Delta_{\rm r}$ and $\Delta_{\rm a}$.
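
As a small illustration (the register names here are hypothetical), consider the following clocked \alwaysblock{}:

\begin{verbatim}
always @(posedge clk) begin
  a = x + 1;   // blocking: updates the current map immediately
  b <= a;      // nonblocking: reads the updated a, queued for the cycle end
  c <= x;      // nonblocking: reads the unchanged x, queued for the cycle end
end
\end{verbatim}

Here the blocking assignment updates $\Gamma_{\rm r}$ immediately, so the following statement reads the new value of \texttt{a}, whereas both nonblocking assignments are recorded in $\Delta_{\rm r}$ and only become visible in $\Gamma_{\rm r}$ once the clock cycle ends.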

%~\NR{Are you referring to the C stack?}\YH{Yes}
We can now define what one step in the semantics looks like.  We first need to define the structure of the main module, which will contain the logic for the program.  In general, functions that are translated to hardware require basic handshaking signals so that the resulting circuit can be used in hardware.  Firstly, they require a clock input, so that all the sequential circuits are run at the right time.  They then require start and reset inputs, so that the hardware generated from the function can be reused multiple times.  Finally, they need finish and return signals, where finish goes high when the result is ready to be read from the return signal.  In addition, the module takes an arbitrary number of inputs that act as the arguments of the original function, so that the module can be instantiated with any number of arguments.  Besides its inputs and outputs, we also need to keep track of some internal signals and properties of the module.  Firstly, we need to keep track of the internal registers that hold the current state of the module and the current contents of the stack.  The module also records its entry point and the list of module items that declare all of the internal registers and encode the state machine that behaves in the same way as the function.  We can therefore declare it in the following way:
~\NR{We should distinguish handshaking signals of a Verilog module to arguments of a sw function. In the sw world, an argument is either value or pointer. Values are unmodifiable, and pointers can be only be modified by the function body. In contrast, handshaking signals are constantly active, interacting with external modules. These signals can also be trigger events, such as the clock.}\YH{Yes that's true, although currently we don't really support modules and therefore assume that the signals do not change.}
~\NR{Also, I don't think a Verilog module is equivalent to a Verilog function. I think they both exist in Verilog. Please double-check this before we use module and function interchangeably in the text above.}\YH{Clarified that, functions are indeed different to modules in Verilog, did not want to use them interchangeably.}
%\JW{I'd be inclined to write `$r~\mathrm{list}$' rather than $\vec{r}$, as it's a little more readable. (Assuming it's more-or-less the same thing?)}
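
As a rough sketch of this interface only, with hypothetical port and register names and a fixed example stack size, a generated module might expose the following Verilog interface:

\begin{verbatim}
module f(input clk, input reset, input start,
         input [31:0] arg0,          // arguments of the original function
         output reg finish,          // goes high when the result is ready
         output reg [31:0] ret);     // return value of the function
  reg [31:0] state;                  // current state of the state machine
  reg [31:0] stack [0:31];           // stack contents, as an unpacked array
  // ... module items encoding the state machine go here ...
endmodule
\end{verbatim}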

The two main evaluation functions are then \textit{erun}, which takes in the current state together with an expression and returns a value, and \textit{srun}, which takes the current state and a statement as input, and returns the updated state.  The inductive rules defining \textit{srun} are shown below, where $\sigma_{n} = (\Gamma_{\rm r}^{n}, \Gamma_{\rm a}^{n}, \Delta_{\rm r}^{n}, \Delta_{\rm a}^{n})$:
\NR{The evaluation function here relates to the semantics, right? Can it be confused with sw function and Verilog module?}
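
As an illustrative sketch only, using the notation above and writing $\Gamma[r \mapsto v]$ for a map update (the exact formulation in the development may differ), three representative rules can be pictured as follows:

\[
\frac{\textit{erun}\ \sigma_{0}\ c\ \texttt{true} \qquad \textit{srun}\ \sigma_{0}\ \textit{stt}\ \sigma_{1}}
     {\textit{srun}\ \sigma_{0}\ (\texttt{if}\ (c)\ \textit{stt}\ \texttt{else}\ \textit{stf})\ \sigma_{1}}\ \textsc{CondTrue}
\]

\[
\frac{\textit{erun}\ \sigma_{0}\ e\ v}
     {\textit{srun}\ (\Gamma_{\rm r}^{0}, \Gamma_{\rm a}^{0}, \Delta_{\rm r}^{0}, \Delta_{\rm a}^{0})\ (r\ \texttt{=}\ e)\ (\Gamma_{\rm r}^{0}[r \mapsto v], \Gamma_{\rm a}^{0}, \Delta_{\rm r}^{0}, \Delta_{\rm a}^{0})}\ \textsc{Blocking}
\]

\[
\frac{\textit{erun}\ \sigma_{0}\ e\ v}
     {\textit{srun}\ (\Gamma_{\rm r}^{0}, \Gamma_{\rm a}^{0}, \Delta_{\rm r}^{0}, \Delta_{\rm a}^{0})\ (r\ \texttt{<=}\ e)\ (\Gamma_{\rm r}^{0}, \Gamma_{\rm a}^{0}, \Delta_{\rm r}^{0}[r \mapsto v], \Delta_{\rm a}^{0})}\ \textsc{Nonblocking}
\]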

Taking the \textsc{CondTrue} rule as an example, this rule only applies if evaluating the condition expression with \textit{erun} yields \texttt{true}.  It also requires that the statement in the true branch of the conditional, \textit{stt}~\NR{\textit{st} or \textit{stt}?}, runs from state $\sigma_{0}$ to state $\sigma_{1}$.  If both of these premises hold~\NR{How are these conditions checked? Can \textit{erun} or \textit{srun} evaluate to false?}, we can conclude that the conditional statement as a whole also runs from state $\sigma_{0}$ to state $\sigma_{1}$.  The \textsc{Blocking} and \textsc{Nonblocking} rules are a bit more interesting, as they modify the blocking and nonblocking association maps respectively.

One main difference between our semantics and the Verilog semantics by \citet{loow19_verif_compil_verif_proces} is that there is no function for external nondeterministic effects, such as memories, inputs and outputs.  These are instead handled explicitly in the semantics, by using two-dimensional unpacked arrays to model memories and by assuming that inputs to modules cannot change.  Another difference is that partial updates to arrays are fully supported, because arrays and variables have separate association maps.  In the original semantics, if there was a blocking assignment to an array followed by a nonblocking assignment to a different region of the same array, the blocking assignment would disappear at the end of the clock cycle, because the complete array would be overwritten with the copy stored in the nonblocking association map.  In our semantics, only the values that were changed are recorded in the nonblocking assignment queue, so when the blocking and nonblocking array association maps are merged, only the indices that were actually changed by nonblocking assignments are updated in the blocking assignment map.
%\NR{I did not understand this paragraph. Discuss with Yann.}
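
The following sketch (with hypothetical names) illustrates the situation described above:

\begin{verbatim}
always @(posedge clk) begin
  mem[0] = x;    // blocking: updates the blocking array map immediately
  mem[1] <= y;   // nonblocking: only index 1 is recorded in the queue
end
\end{verbatim}

If the nonblocking map stored a complete copy of \texttt{mem}, merging it at the end of the cycle would overwrite the blocking update to index 0; recording only the changed index preserves it.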

In these semantics, module instantiations are not supported, as they can be modelled by inlining the logic that the instantiated modules would have produced.  This means that function calls are also implemented by inlining all functions.  Consequently, recursive function calls are not supported; however, these are not supported by other high-level synthesis tools either, as memory cannot be allocated dynamically from fixed-size RAM blocks.
\NR{Were these last two paragraphs referring to your assumptions or limitations or simplications you made? Discuss with Yann.}

To integrate our semantics with \compcert{}, we need to define the same kinds of states that \compcert{} uses in its semantics, namely \texttt{State}, \texttt{Callstate} and \texttt{Returnstate}.  The \texttt{Callstate} and \texttt{Returnstate} are needed because the initial state describes a call to main, and the final state carries the return value of the main function.  Even though the Verilog semantics do not support function calls, the initial call and the final return still need to be modelled so that the Verilog semantics can be related to \compcert{}'s.  Firstly, the \texttt{State} contains all the information necessary to identify the current execution state of the function or module.  In Verilog, we therefore keep track of the current module $m$, the current value of the program counter $v$, which translates to the current value of the state register $s_{t}$, the current contents of the stack frame \textit{sf}, and finally the current association maps for variables $\Gamma_{r}$ and arrays $\Gamma_{a}$.  We can therefore define the state as \texttt{State} \textit{sf} $m$ $v$ $\Gamma_{r}$ $\Gamma_{a}$.  The \texttt{Callstate} only needs to support the initial call to the function, and the \texttt{Returnstate} only needs to support the return of the main function.  As function calls are not supported in the semantics, no other calling or return state can occur in the program.
\NR{We had a lot of discussion on functions earlier in this section, but then now we claim that the Verilog semantics does not support functions. If I recall correctly, our C program can't have functions either, or we inline all functions. Aren't they contradicting? Discuss with Yann.}
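
As a sketch only (the argument lists of \texttt{Callstate} and \texttt{Returnstate} shown here are a plausible reading of the above, with $\vec{v}$ standing for the argument values, and not necessarily the exact definition), the three kinds of states can be summarised as:

\[
S ::= \texttt{State}\ \textit{sf}\ m\ v\ \Gamma_{r}\ \Gamma_{a}
  \;\mid\; \texttt{Callstate}\ \textit{sf}\ m\ \vec{v}
  \;\mid\; \texttt{Returnstate}\ \textit{sf}\ v
\]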

We then define the semantics of running the module for one clock cycle in the following way:

\YH{TODO:\@ Need to fix the last rule, as it is actually only used for a case that shouldn't ever be hit.}
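
As an illustrative sketch of its shape only (the names \textit{mirun} for running the list of module items, \textit{merge} for folding the nonblocking map into the blocking map, $\epsilon$ for an empty map, and $!$ for map lookup are placeholders, not the notation of the development), the rule for one clock cycle can be pictured as:

\[
\frac{\begin{array}{c}
\Gamma_{\rm r}\,!\,s_{t} = v \qquad
\textit{mirun}\ (\Gamma_{\rm r}, \Gamma_{\rm a}, \epsilon, \epsilon)\ \textit{body}\ (\Gamma_{\rm r}', \Gamma_{\rm a}', \Delta_{\rm r}', \Delta_{\rm a}') \\
(\textit{merge}\ \Gamma_{\rm r}'\ \Delta_{\rm r}')\,!\,s_{t} = v'
\end{array}}
{\textit{run}\ (\texttt{State}\ \textit{sf}\ m\ v\ \Gamma_{\rm r}\ \Gamma_{\rm a})\
 (\texttt{State}\ \textit{sf}\ m\ v'\ (\textit{merge}\ \Gamma_{\rm r}'\ \Delta_{\rm r}')\ (\textit{merge}\ \Gamma_{\rm a}'\ \Delta_{\rm a}'))}\ \textsc{Module}
\]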

The \textsc{Module} rule is the main rule for executing one clock cycle of the module.  It states that if the value of the $s_{t}$ register equals the program counter value of the current instruction, the value of the $s_{t}$ register in the resulting association map equals the next program counter value, and all the module items in the body step from one state to another, then the whole module steps from the former state to the latter.

%\NR{Bigger picture question: Why don't always blocks show up in the rules?}\YH{Mainly because these are at a different level of abstraction, here only statements are modelled.}

%%\input{verilog_notes}