+++
title = "How to make optimisations more predictable"
author = "Yann Herklotz"
tags = []
categories = []
backlinks = ["1f4b", "1f4"]
forwardlinks = ["3a7", "1f4b"]
zettelid = "1f4a"
+++

The way to address this issue is to make optimisations more predictable. This is quite tough, though, because it is hard to even imagine what that means. One way this could be approached is by formalising the optimisations, so that properties can be proven about them. For example, one could formalise loop pipelining in the polyhedral model \[1\], \[2\] to understand more about the algorithm. However, this would probably be done using translation validation ([\#3a7]), which is not ideal for understanding the algorithm itself.

In addition, it might be possible to define a theorem that captures the predictability of an algorithm. For example, one might imagine a situation where adding no-ops to the input of an algorithm drastically changes the output. One might therefore want a theorem stating that adding a no-op either leaves the hardware completely unchanged, or, in a weaker form, that any change it does cause only ever increases the size of the hardware.
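As a rough illustration, the no-op property could be phrased as a monotonicity statement. The following is a minimal sketch in Lean 4, where `synth`, `insertNoop`, and `size` are hypothetical stand-ins for the HLS algorithm, the no-op insertion, and some measure of hardware size (e.g. LUT or register count); none of these are taken from an existing formalisation.

```lean
-- Hypothetical sketch: `Prog` is the input program type, `HW` the
-- generated hardware type, `synth` the synthesis algorithm.
def noopMonotone {Prog HW : Type}
    (synth : Prog → HW)        -- the HLS algorithm (assumed)
    (insertNoop : Prog → Prog) -- inserts a single no-op (assumed)
    (size : HW → Nat)          -- hardware size measure (assumed)
    : Prop :=
  ∀ p : Prog,
    -- either the no-op is entirely absorbed by synthesis…
    synth (insertNoop p) = synth p ∨
    -- …or any change it causes strictly increases the hardware size
    size (synth p) < size (synth (insertNoop p))
```

A theorem of this shape would rule out the pathological case where a semantically irrelevant change to the input produces smaller or radically different hardware, which is one concrete way of pinning down "predictability".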
\[1\] J. Liu, J. Wickerson, S. Bayliss, and G. A. Constantinides, “Polyhedral-based dynamic loop pipelining for high-level synthesis,” *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems*, vol. 37, no. 9, pp. 1802–1815, Sep. 2018, doi: [10.1109/TCAD.2017.2783363].
\[2\] N. Courant and X. Leroy, “Verified code generation for the polyhedral model,” *Proc. ACM Program. Lang.*, vol. 5, no. POPL, Jan. 2021, doi: [10.1145/3434321].
[\#3a7]: /zettel/3a7
[10.1109/TCAD.2017.2783363]: https://doi.org/10.1109/TCAD.2017.2783363
[10.1145/3434321]: https://doi.org/10.1145/3434321