authorYann Herklotz <git@yannherklotz.com>2021-09-10 20:00:19 +0100
committerYann Herklotz <git@yannherklotz.com>2021-09-10 20:00:19 +0100
commit3badbecee97a15855eb9e3975eaf6150d5d24ef1 (patch)
tree8b11e404c8b45e5d8b9f02f2aa8b83d381bf3412 /content.org
parentb2124d6eb0c8bc878156b3f5b8d6d7230a0031d3 (diff)
Fix indentation
Diffstat (limited to 'content.org')
-rw-r--r--  content.org  881
1 file changed, 372 insertions(+), 509 deletions(-)
diff --git a/content.org b/content.org
index aab3a32..777ce28 100644
--- a/content.org
+++ b/content.org
@@ -386,136 +386,105 @@ these notes.
:CUSTOM_ID: nix-for-coq-development
:END:
+I first learnt about [[https://nixos.org/nix/][Nix]] because of Haskell, eventually running into cabal hell while working on
+many different projects simultaneously. These used the same dependencies but with different
+versions, which could not be handled by cabal. The common solution is to use the [[https://docs.haskellstack.org/en/stable/README/][stack]] build tool.
+This manages a database of compatible libraries called [[https://www.stackage.org/][stackage]], which contains a snapshot of
+[[https://hackage.haskell.org/][hackage]] packages that have been tested together. This works very well for packages that are on
+stackage; however, if one uses dependencies that are only on hackage or, even worse, only available
+through git, it can become hard to integrate them into stack and use them. In addition, if one
+depends on specific versions of command line tools, these can only be documented and will not be
+installed or tracked by stack.
+
+The other option is to use Nix, which is a general-purpose package manager. The feature that sets it
+apart from other package managers is that it completely isolates every package using hashes and can
+therefore seamlessly support multiple versions of the same package. This inherently eliminates the
+cabal hell problem. In addition, if you create a nix derivation (the name for a nix package) for the
+project you are working on, you can launch a shell containing the exact versions that were specified
+in the derivation to let it build. This also gives Nix the ability to act as a package manager for
+reproducible builds, instead of needing to resort to Docker. Finally, Nix provides one command,
+=nix-build=, which will build the whole project.
+
+For projects using Coq, this is exceptionally useful, as one can specify dependencies for Coq, OCaml
+and the shell using the same nix derivation, and launch a shell which contains all the right
+versions to let the project build.
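
As a taste of what this looks like, here is a minimal, hypothetical =shell.nix= for such a project;
running =nix-shell= next to it drops you into an environment with exactly these tools (attribute
names such as =dune_2= are assumptions and vary between nixpkgs versions):

#+begin_src nix
# Hypothetical shell.nix for a Coq project that extracts to OCaml.
with import <nixpkgs> {};

mkShell {
  buildInputs = [
    coq                    # the Coq proof assistant
    ocaml                  # the OCaml compiler for the extracted code
    dune_2                 # the OCaml build system
    ocamlPackages.findlib  # library lookup for OCaml
  ];
}
#+end_src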
*** Nix Basics
:PROPERTIES:
:CUSTOM_ID: nix-basics
:END:
+First, to understand nix derivations it is important to go over some Nix basics and explain the
+language itself, as it is quite different from usual package descriptions. The most interesting fact
+about the language is that it is purely functional, only using immutable data structures.
+
+The most important thing to know when learning a new language is where to look up existing functions
+and new syntax for that language. This is particularly important for Nix, as it does not have that
+big of a presence on Stack Overflow, and it can be hard to google for the Nix language, as "nix"
+often matches the abbreviation "*nix" used for Linux. I therefore tend to google for the term
+"nixpkgs" instead of "nix", which seems to give better results. The main resources are the
+following:
+
+- the amazing [[https://nixos.org/nix/manual/][Nix manual]], especially the [[https://nixos.org/nix/manual/#ch-expression-language][Nix expression chapter]],
+- [[https://nixos.org/nixos/nix-pills/index.html][Nix pills]] for a collection of useful tutorials, and finally
+- the [[https://nixos.org/nixpkgs/manual/][Nix contributors guide]] for more information about how to package derivations.
+
+Searching for anything on those sites should be much more useful than using Google.
+
+There are three main structures that need to be understood in Nix. These are
- sets (={ a = 2; b = 3; }=),
- lists (=[ 1 2 3 4 ]=), and
- functions (=pattern: body=).
+These are the structures you will come across most often; however, there are many other useful
+features Nix has that make it pleasant to work with.
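
For instance, the three can be combined freely; this made-up snippet can be evaluated with
=nix-instantiate --eval --strict=:

#+begin_src nix
{
  doubled = (x: x * 2) 21;          # applying a function, evaluates to 42
  mixed   = [ 1 "two" { n = 3; } ]; # lists can hold values of any type
}
#+end_src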
+Just like many other functional languages, Nix has =let= expressions which bind an expression to a
+name.
#+begin_src nix
let name = expr; in body
#+end_src
+It also supports importing a file, which evaluates it and inserts the resulting expression.
#+begin_src nix
import ./expression.nix;
#+end_src
+The =with= expression is also interesting, which makes all the attributes of a set available in the
+next expression, unqualified.
#+begin_src nix
with set; expr
#+end_src
+There are many other useful constructs, such as recursive sets (which allow you to refer to keys of
+the set from inside the set), inheriting (which copies the current scope into a set or =let=
+expression) or conditionals (=if c then e1 else e2=). However, this should be enough to learn about
+derivations.
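
As a quick illustration of recursive sets and conditionals (again a made-up example for
=nix-instantiate --eval --strict=):

#+begin_src nix
rec {
  a = 2;
  b = a + 1;                                # recursive set: b refers to a
  size = if b > 2 then "big" else "small";  # conditional expression
}
#+end_src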
*** Nix Integration with Coq
:PROPERTIES:
:CUSTOM_ID: nix-integration-with-coq
:END:
+I'll go through an example of how I created a nix derivation for a Coq project which extracts to
+OCaml, covering all the steps that were necessary to make it more general. Each package in nix is
+actually a derivation. These are functions that take in the whole collection of available
+derivations, select the derivations that are needed using the pattern matching that functions
+inherently do, and return a new derivation. The result is just a set containing information on how
+to build the package by defining the different stages of the build pipeline.
+
+The main function we will use is the =mkDerivation= helper function, which is a wrapper around the more
+manual =derivation= function. This function takes in a set which can be used to override various build
stages and dependencies.
+This example will build a Nix derivation for [[https://github.com/vellvm/vellvm][Vellvm]], so that it builds without any errors and
+contains all the nix packages that are required.
+The first derivation one could come up with is the following, which is just a declaration of all the
+packages that are needed. The =with= declaration can be used to bring all the members of the =<nixpkgs>=
+set into scope. We then call the =mkDerivation= function to override some of the attributes inside the
+set, such as the name (=name=) and the location of the source code (=src=). These are the only two
+required attributes for the =mkDerivation= function; however, that does not mean it will build yet.
#+begin_src nix
with import <nixpkgs> {};
@@ -531,12 +500,10 @@ build yet.
}
#+end_src
+To actually get it to build, there are a few attributes that we need to specify in addition to
+those. The first is to customise the build step using the =buildPhase= attribute. By default, the
+build will just execute =make=; however, in this project the =makefile= is actually in the =src=
+directory. We therefore have to change to that directory first before we can do anything.
#+begin_src nix
buildPhase = ''
@@ -544,13 +511,11 @@ before we can do anything.
'';
#+end_src
+This will now execute the makefile correctly; however, the build will still fail because =Vellvm= has
+a few dependencies that need to be installed first. These are described in the =README=, so we can
+find them in Nix and add them as build dependencies. Here we can specify Coq dependencies using
+=coqPackages=, OCaml dependencies using =ocamlPackages=, and finally command line tools such as the
+OCaml compiler or the OCaml build system =dune=.
#+begin_src nix
buildInputs = [ git coq ocamlPackages.menhir dune coqPackages.flocq
@@ -558,17 +523,15 @@ compiler or the OCaml build system =dune=.
coqPackages.ceres ocaml ];
#+end_src
+Finally, Nix will execute =make install= automatically at the end to install the program correctly. In
+this case, we need to set the =COQLIB= flag so that it knows where to place the compiled Coq
+theories. This can be done using the =installFlags= attribute.
#+begin_src nix
installFlags = [ "COQLIB=$(out)/lib/coq/${coq.coq-version}/" ];
#+end_src
+We then have the following Nix derivation which should download =Vellvm= and build it correctly.
#+begin_src nix
with import <nixpkgs> {};
@@ -590,17 +553,13 @@ and build it correctly.
}
#+end_src
+However, one last problem we'll have is that =coqPackages.ceres= does not actually exist in
+=coqPackages=; we were a bit too optimistic. To solve this, we can easily define a derivation for
+=ceres= from the GitHub repo and insert it as a dependency into the set. Luckily =ceres= has a nice
+makefile at the base of the repository and does not have any external dependencies, except for Coq
+itself. We can therefore define a derivation in the following way. We can use =propagatedBuildInputs=
+to define dependencies that the package needs and that derivations using this package will also
+need. In this case, any derivation using =ceres= will need Coq, otherwise it would not be useful.
#+begin_src nix
ceres = stdenv.mkDerivation {
@@ -616,9 +575,8 @@ useful.
};
#+end_src
+Finally, we can use a =let= expression to insert it as a dependency into our own derivation. We now
+have a complete nix expression that will always build =Vellvm= correctly in a containerised manner.
#+begin_src shell
nix-prefetch-url --unpack https://github.com/Lysxia/coq-ceres/archive/4e682cf97ec0006a9d5b3f98e648e5d69206b614.tar.gz
@@ -667,9 +625,9 @@ always build =Vellvm= correctly in a containerised manner.
}
#+end_src
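
The final expression is elided above, but the =let= wiring boils down to the following outline (a
sketch only: the =ceres= derivation is the one defined previously, assumed here to be saved in its own
file, and the remaining attributes were shown in the earlier snippets):

#+begin_src nix
with import <nixpkgs> {};

let
  ceres = import ./ceres.nix;  # the derivation from the previous section
in stdenv.mkDerivation {
  name = "vellvm";
  src = builtins.fetchGit { url = "https://github.com/vellvm/vellvm"; };
  buildInputs = [ git coq ocamlPackages.menhir dune coqPackages.flocq ceres ocaml ];
  # buildPhase and installFlags as shown earlier.
}
#+end_src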
+If one saves the file as =default.nix=, one can then build the nix expression using the =nix-build=
+command. This should return a binary that runs the compiled OCaml code, which was extracted from
+Coq.
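
In other words (=result= is the symlink that =nix-build= creates; the binary name is hypothetical and
depends on what =make install= puts into =$out=):

#+begin_src shell
nix-build            # builds ./default.nix and symlinks the output to ./result
ls result/bin        # inspect what was installed
./result/bin/vellvm  # hypothetical name of the extracted binary
#+end_src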
** MSR PhD Workshop on Next-Generation Cloud Infrastructure :workshop:FPGA:
:PROPERTIES:
@@ -680,85 +638,72 @@ that runs the compiled OCaml code, which was extracted from Coq.
:CUSTOM_ID: msr-phd-workshop-on-next-generation-cloud-infrastructure
:END:
+Microsoft Research held their first PhD Workshop, which focused on new technologies for the
+cloud. The main themes of the workshop were optics and custom hardware. The workshop was spread over
+two days and included a keynote by Simon Peyton Jones on how to give a research talk, talks about
+three projects that are investigating various optical solutions (Silica, Sirius and Iris), and
+finally a talk on Honeycomb by Shane Fleming, about replacing CPUs in database storage with custom
+hardware using FPGAs.
+
+The workshop also included two poster sessions, where all the attendees presented the projects they
+were currently working on. These were quite varied, coming from various universities and covering
+topics from optical circuits and storage to custom hardware and FPGAs. I presented our work on
+[[/blog/2019-06-19-verismith.html][Verismith]], our Verilog synthesis tool fuzzer.
+
+I would like to thank Shane Fleming for inviting us to the workshop, and also thank all the
+organisers of the workshop.
*** Microsoft projects
:PROPERTIES:
:CUSTOM_ID: microsoft-projects
:END:
+Four Microsoft projects were presented, with the main themes being custom hardware and optical
+electronics.
**** [[https://www.microsoft.com/en-us/research/project/honeycomb/][Honeycomb]]: Hardware Offloads for Distributed Systems
:PROPERTIES:
:CUSTOM_ID: honeycomb-hardware-offloads-for-distributed-systems
:END:
+A big problem that large cloud infrastructures like Azure suffer from is the time it takes to
+retrieve data from databases. At the moment, storage is set up by having many nodes with a CPU that
+accesses a data structure to retrieve the location of the data that it is looking for. The CPU then
+accesses the correct location in the hard drives, retrieves the data, and finally sends it to the
+correct destination.
+
+General CPUs incur a /Turing Tax/, meaning that because they are so general, they become more
+expensive and less efficient than specialised hardware. It would therefore be much more efficient to
+have custom hardware that can walk the tree structure to find the location of the data that it is
+trying to access, and then also access it.
+
+The idea is to use FPGAs to catch and process the network packet requesting data, fetch the data
+from the correct location and then return it through the network. Using an FPGA means that each
+network layer can be customised and reworked for that specific use case. Even the data structure can
+be optimised for the FPGA so that it becomes extremely fast to traverse the data and find its
+location.
**** [[https://www.microsoft.com/en-us/research/project/project-silica/][Silica]]: Optical Storage
:PROPERTIES:
:CUSTOM_ID: silica-optical-storage
:END:
+The second talk was on optical storage, which is a possible solution for archiving data. Today, all
+storage is done using either magnetic storage (tapes and hard drives), discs or flash storage
+(SSDs).
+
+- Flash storage is extremely expensive and is therefore not ideal for archiving; instead, it is fast
+  and often used as the main storage.
+
+- Discs can only store around 300 GB, and there are physical limitations which stop the storage from
+  growing, since the storage is done by engraving the disc. Even then, the lifetime of the disc is
+  not permanent, as the film on the disc becomes bluish over time.
+
+- To archive a lot of data for a long time, the current solution is to use magnetic storage, as it
+  is very cheap. However, the problem with magnetic storage is that it inherently degrades with
+  time, and therefore extra costs are incurred every few years when migrating the data to new media.
+
+We therefore need a storage solution that will not degrade with time and is compact enough to store
+a lot of data. The answer to this is /optical storage/. Data is stored by creating tiny deformities in
+the glass using a femtosecond laser, which can then be read back using LEDs. The deformities have
+different properties, such as angle and phase, which dictate the value stored at that location.
#+caption: Project silica: image of 1978 "Superman" movie encoded on silica glass. Photo by Jonathan Banks for Microsoft.
[[/images/msr_research/project_silica.jpg]]
@@ -767,33 +712,29 @@ dictate the current value at that location.
:PROPERTIES:
:CUSTOM_ID: sirius-optical-data-center-networks
:END:
+With the increased use of fiber in data centers, the cost of switching using electronic switches
+increases dramatically, because the light needs to be converted to electricity first, and then
+converted back after the switch. This incurs a large latency in the network because that process
+takes a long time.
+Instead, the incoming light can be switched directly using optical switches, which reduces the
+switching latency to a few nanoseconds. This makes it possible to have fully optical data centers.
**** [[https://www.microsoft.com/en-us/research/project/iris/][Iris]]: Optical Regional Networks
:PROPERTIES:
:CUSTOM_ID: iris-optical-regional-networks
:END:
+Finally, project Iris explores how regional and wide area cloud networks could be designed to work
+better with the growing network traffic.
*** How to Give a Research Talk
:PROPERTIES:
:CUSTOM_ID: how-to-give-a-research-talk
:END:
+Simon Peyton Jones gave the first talk, which was extremely entertaining and insightful. He really
+demonstrated how to give a good talk, while at the same time giving great advice with tips that
+applied to conference talks as well as broader talks. The slides used Comic Sans, but that only
+showed how good a talk can be if one doesn't mind that.
The main things that should be included in a talk are:
@@ -801,25 +742,20 @@ The main things that should be included in a talk are:
2. Key idea (80%).
3. There is no 3.
+The purpose of the motivation is to /wake people up/ so that they can decide if they would be
+interested in the talk. Most people will only give you two minutes before they open their phone and
+look at emails, so it is your job as the speaker to captivate them before they do that. Following
+this main structure, his advice is that introductions and acknowledgements should not come at the
+start of the talk, but should only come at the end of the talk. If the introductions are important,
these could also come after the main motivation.
+The rest of the talk should be about the key idea of the project, and it should focus on explaining
+the idea /deeply/ to really satisfy the audience. Talks that only briefly touch on many areas often
+leave the audience wanting more. However, this does not mean that the talk should be full of
+technical detail and slides containing complex and abstract formulas. Instead, examples should be
+given wherever possible, as it is much easier to convey the intuition behind the more general
+formulas. Examples also allow you to show edge cases which may show the audience why the general
+case is not as straightforward as they might think.
** Verismith :verilog:synthesis:haskell:FPGA:
:PROPERTIES:
@@ -830,85 +766,72 @@ thinking.
:CUSTOM_ID: verismith
:END:
+Most hardware is designed using a hardware description language (HDL) such as Verilog or VHDL. These
+languages allow some abstraction over the hardware that is produced so that it is more modular and
+easier to maintain. To translate these hardware descriptions to actual hardware, they can be passed
+through a synthesis tool, which generates a netlist referring to the specific hardware connections
+that will appear in the actual hardware.
+
+Furthermore, high-level synthesis (HLS) is also becoming more popular, which allows for a much more
+behavioural description of the circuit using a standard programming language such as C or
+OpenCL. However, even designs written in these languages need to be translated to an HDL such as
+Verilog or VHDL and therefore also need to rely on a synthesis tool.
+
+Fuzzing is a way to randomly test tools to check if their behaviour remains correct. This has been
+very effective at finding bugs in compilers, such as GCC and Clang. CSmith, a C fuzzer, found more
+than 300 bugs in these tools by randomly generating C programs and checking that all the C compilers
+execute these programs in the same fashion. We therefore thought it would be a good idea to test
+synthesis tools in a similar fashion and improve their reliability. There are three main sections
+that I'll go over to explain how we fuzz these tools: Verilog generation, equivalence checking and
+Verilog reduction.
*** Verilog Generation
:PROPERTIES:
:CUSTOM_ID: verilog-generation
:END:
+To test these tools, we have to first generate random Verilog which can be passed to the synthesis
+tool. There are a few important properties that we have to keep in mind though.
+First, the Verilog should always have the same behaviour no matter which synthesis tool it passes
+through. This is not always the case, as undefined values can either result in a 1 or a 0.
+Second, we have to make sure that our Verilog is actually correct and will not fail synthesis. This
+is important as we are trying to find deep bugs in the synthesis tools, and not just test their
+error reporting.
+Once we have generated the Verilog, it's time to give it to the synthesis tools to check that the
+output is correct. This is done using a formal equivalence check on the output of the synthesis
+tool.
*** Equivalence Check
:PROPERTIES:
:CUSTOM_ID: equivalence-check
:END:
+The synthesis tools output a netlist, which is a lower level description of the hardware that will
+be produced. As the design that we wrote is also just hardware, we can compare the two using the
+various equivalence checking tools that exist. This mathematically proves that the design is
+equivalent to the netlist.
+If this fails, or if the synthesis tool crashed as it was generating the netlist, we want to locate
+the cause of the bug. This can be done automatically by repeatedly reducing the design while the bug
+is still present, until the Verilog cannot be reduced any further.
*** Verilog Reduction
:PROPERTIES:
:CUSTOM_ID: verilog-reduction
:END:
+To find the cause of the bug, we want to reduce the design to a minimal representation that still
+shows the bug. This can be done by cutting the Verilog design into two and checking which half still
+contains the bug. Once we do this a few times at different levels of granularity, we finally get to
+a smaller piece of Verilog code that still triggers the bug in the synthesis tool. This is then much
+easier to analyse further and report to the tool vendors.
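
Verismith itself is written in Haskell, and the heart of this loop can be sketched generically as
follows (a simplified sketch assuming a predicate that reruns synthesis and the equivalence check on
a candidate design; this is not Verismith's actual implementation):

#+begin_src haskell
-- Keep recursing into a failing half until no smaller failing piece exists.
reduce :: (a -> Bool)  -- does this candidate still trigger the bug?
       -> (a -> [a])   -- split a design into smaller candidates, e.g. halves
       -> a -> a
reduce failing split design =
  case filter failing (split design) of
    (smaller : _) -> reduce failing split smaller
    []            -> design  -- nothing smaller fails, so this is minimal
#+end_src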
*** Results
:PROPERTIES:
:CUSTOM_ID: results
:END:
+In total, we reported 12 bugs across all the synthesis tools that we tested. A full summary of the
+bugs that were found can be seen in the [[https://github.com/ymherklotz/verismith/tree/master/bugs][GitHub repository]].
*** Resources
:PROPERTIES:
@@ -935,75 +858,66 @@ The following resources provide more context about Verismith:
:CUSTOM_ID: realistic-graphics
:END:
+To explore realistic graphics rendering, I have written two Haskell modules covering two different
+techniques that are used in graphics to achieve realism. The first project is a small lighting
+implementation using a latitude-longitude map; the second is an implementation of the median cut
+algorithm. The latter is used to deterministically sample an environment map.
+A latitude-longitude map is a 360-degree image which captures the lighting environment of a scene.
*** Mirror Ball
:PROPERTIES:
:CUSTOM_ID: mirror-ball
:END:
+To use a latitude-longitude map when lighting a sphere in the environment, the reflection vector at
+every point on the sphere is used to get its colour. As a simplification, the sphere is assumed to
+be a perfect mirror, so that one reflection vector is enough to get the right colour.
#+caption: *Figure 1*: Urban latitude and longitude map.
[[/images/realistic-graphics/urbanEM_latlong.jpg]]
+The latitude-longitude map was created by taking a photo of a mirror ball and mapping the spherical
+coordinates to a rectangle.
#+caption: *Figure 2*: Normals calculated on a sphere.
[[/images/realistic-graphics/normal.jpg]]
+The first step is to calculate the normals at every pixel using the position and size of the
+sphere. These can be visualised by setting the RGB to the XYZ of the normal at the pixel.
#+caption: *Figure 3*: Reflection vectors calculated on a sphere.
[[/images/realistic-graphics/reflect.jpg]]
+The reflection vector can then be calculated and visualised in the same way, by using the following
+formula: $r = 2 (n \cdot v) n - v$.
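
Since these modules are written in Haskell, the formula translates almost directly into code (a
minimal sketch with vectors as plain triples; not the module's actual API):

#+begin_src haskell
type Vec3 = (Double, Double, Double)

dot :: Vec3 -> Vec3 -> Double
dot (a, b, c) (x, y, z) = a * x + b * y + c * z

-- r = 2 (n . v) n - v, where n is the normal and v the view vector.
reflect :: Vec3 -> Vec3 -> Vec3
reflect n v = scale (2 * dot n v) n `sub` v
  where
    scale s (x, y, z)       = (s * x, s * y, s * z)
    sub (a, b, c) (x, y, z) = (a - x, b - y, c - z)
#+end_src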
#+caption: *Figure 4*: Final image after indexing into the latitude longitude map using reflection vectors.
[[/images/realistic-graphics/final.jpg]]
+The reflection vector can be converted to spherical coordinates, which can in turn be used to index
+into the lat-long map. The colour of the indexed pixel is then assigned to the position on the
+sphere that has that reflection vector.
*** Median Cut
:PROPERTIES:
:CUSTOM_ID: median-cut
:END:
+The median cut algorithm is a method to deterministically sample an environment map. This is
+achieved by splitting the environment map along its longest dimension so that there is equal energy
+in both halves. This is repeated recursively n times in each partition. Once there have been n
+iterations, a light is placed at the centroid of each region. Below is an example with 6 splits,
+meaning there are 2^6 = 64 partitions.
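
The core operation is finding the index at which the cumulative energy reaches half of the total. In
one dimension this can be sketched as follows (a simplification of the 2D algorithm, not the
module's actual code):

#+begin_src haskell
-- Index at which a row of energies splits into two halves of roughly
-- equal total energy.
splitPoint :: [Double] -> Int
splitPoint energies = length (takeWhile (< total / 2) (scanl1 (+) energies))
  where total = sum energies
#+end_src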
#+caption: *Figure 5*: Latitude longitude map of the Grace cathedral.
[[/images/realistic-graphics/grace_latlong.jpg]]
+The average colour of each region is assigned to the light source that was created in that region.
#+caption: *Figure 6*: After running the median cut algorithm for 6 iterations.
[[/images/realistic-graphics/median_cut6.jpg]]
+Finally, these discrete lights can be used to light diffuse objects efficiently, by only having to
+sample a few lights.
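
Concretely, once the environment has been reduced to $N$ directional lights with radiances $L_i$ and
directions $\omega_i$, the diffuse shading of a point with normal $n$ and albedo $\rho$ collapses to
the short sum $\frac{\rho}{\pi} \sum_{i=1}^{N} L_i \max(0, n \cdot \omega_i)$ (the standard
Lambertian model, given here for intuition).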
#+caption: *Figure 7*: The radiance at each individual sample.
[[/images/realistic-graphics/median_cut_radiance6.jpg]]
@@ -1036,12 +950,11 @@ in the project markdown files.
:PROPERTIES:
:CUSTOM_ID: file-organisation
:END:
+By default, Jekyll only supports blog posts that get put into a =_posts= directory, however, it is
+extensible enough to allow for different types of posts, which are called *collections* in Jekyll.
+My layout, which supports project descriptions for a portfolio and blog posts, looks like the
+following.
#+begin_src text
.
@@ -1062,8 +975,8 @@ posts, looks like the following.
:PROPERTIES:
:CUSTOM_ID: portfolio-collection
:END:
+To make Jekyll recognise the =_portfolio= directory, it has to be declared in Jekyll's configuration
+file =_config.yml=.
#+begin_src yaml
collections:
@@ -1071,9 +984,8 @@ declared in Jekyll's configuration file =_config.yml=.
output: true
#+end_src
+Jekyll will now parse and turn the markdown files into HTML. To get a coherent link to the files, it
+is a good idea to add a *permalink* to the YAML front matter like the following.
#+begin_src yaml
---
@@ -1082,22 +994,19 @@ the YAML front matter like the following.
---
#+end_src
+This means that the file will then be accessible using =https://www.website.com/portfolio/fmark/=.
*** Using Jekyll Parameters
:PROPERTIES:
:CUSTOM_ID: using-jekyll-parameters
:END:
+Now that we have generated the portfolio directory and have written the descriptions for a few
+projects, we can see how to use the Jekyll variables that are at our disposal in Liquid.
+First of all, to generate a nice view of some of the projects that you have made on the main page,
+you can use a for loop to iterate through the projects, and even limit the list to a specific
+number. This is useful when you want to show a few projects on the main page but also have a
+separate page displaying all of them.
#+begin_src liquid
{%- raw -%}
@@ -1106,8 +1015,8 @@ and also want a page displaying all the projects.
{% endraw %}
#+end_src
+By default, the projects are listed from earliest to latest, so to display the three latest
+projects, the list first has to be reversed.
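
Putting the reversal and the limit together, the loop might look something like this (=project.title=
is a hypothetical front-matter variable, and =site.portfolio= is the collection declared in
=_config.yml=):

#+begin_src liquid
{%- raw -%}
{% assign latest = site.portfolio | reverse %}
{% for project in latest limit:3 %}
  <h3>{{ project.title }}</h3>
{% endfor %}
{% endraw %}
#+end_src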
Inside the for loop, variables like
@@ -1118,15 +1027,14 @@ Inside the for loop, variables like
{% endraw %}
#+end_src
+can be used to access the variables declared in the YAML, to generate views of the projects.
*** Conclusion
:PROPERTIES:
:CUSTOM_ID: conclusion
:END:
+In conclusion, Jekyll and Liquid make it very easy to organise projects and to write the
+descriptions and blog posts using markdown.
** Noise Silencer :DSP:
:PROPERTIES:
@@ -1152,21 +1060,17 @@ original spectrum of the signal.
:CUSTOM_ID: fmark
:END:
+FMark is a markdown parser that features many extensions to [[https://github.github.com/gfm/][GFM]] (GitHub Flavoured Markdown), such
+as macros, spreadsheet functionality, and the further extensions described below. It was written in
+a test-driven manner, using purely functional F#.
*** Introduction
:PROPERTIES:
:CUSTOM_ID: introduction
:END:
+The [[https://github.com/ymherklotz/FMark][markdown parser]] is written and implemented in F#. Even though there is an existing F# module,
+[[file:TODO][FSharp.Formatting]], that supports markdown parsing and converting the output to HTML, we decided to
+write our own markdown parser. In addition to the simple parser, a
lot of extensions were added to support the features mentioned below:
- GFM parsing
@@ -1175,26 +1079,23 @@ lot of extensions were added to support the features mentioned below:
- Citations and Footnote support
- Full HTML generation
+A Visual Studio Code [[https://github.com/ymherklotz/FMark-vscode][extension]] was also developed to integrate the parser into an editor and make
+it more usable.
*** Motivation
:PROPERTIES:
:CUSTOM_ID: motivation
:END:
+The main motivation for this project was to create a more powerful version of markdown that could be
+used to easily take notes in lectures.
**** Re-implementation of the Parser
:PROPERTIES:
:CUSTOM_ID: re-implementation-of-the-parser
:END:
+Instead of using the FSharp.Formatting module, it was decided that the parser should be
+re-implemented. We found that the FSharp.Formatting module was not generic enough and did not make
+it easy to implement the extensions we wanted to add to markdown.
*** Custom Features
:PROPERTIES:
@@ -1204,21 +1105,18 @@ of the extensions we wanted to add to markdown.
:PROPERTIES:
:CUSTOM_ID: macros
:END:
+Macros were a feature that we thought should definitely be added to the parser, as they allow for
+great extensibility, especially since the markdown parser also supports full HTML pass-through. This
+means that macros can be created that contain pure HTML, which will be output verbatim, enabling the
+addition of things like text boxes or colourful boxes.
**** Spread Sheet
:PROPERTIES:
:CUSTOM_ID: spread-sheet
:END:
+Sometimes it is useful to do calculations directly in tables, as they are often used to display
+information. This makes tables more generic, avoiding having to copy all the values over whenever
+something changes.
** Emotion Classification using Images :ML:
:PROPERTIES:
@@ -1229,8 +1127,8 @@ something changes.
:CUSTOM_ID: emotion-classification-using-images
:END:
+We wrote a Convolutional Neural Network that classifies images into the six base emotions. We used
+TensorFlow and Python to design the network and train it.
** YAGE :graphics:
:PROPERTIES:
@@ -1241,24 +1139,21 @@ base emotions. We used tensorflow and python to design the network and train it.
:CUSTOM_ID: yage
:END:
+YAGE is a fully featured 2D game engine written in C++ using OpenGL to render textured quads to the
+screen. It also includes an efficient and extensible Entity Component System to handle many
+different game objects. In addition to that, it provides a simple interface to draw shapes to a
+window and efficient texturing using Sprite Batching.
*** Goal of the Engine
:PROPERTIES:
:CUSTOM_ID: goal-of-the-engine
:END:
+YAGE is a personal project that I started to learn about computer graphics, using tools like modern
+OpenGL to learn about the graphics pipeline and common optimisations that are used in a lot of game
engines.
+However, writing the complete engine has taught me a lot more in addition to that, such as efficient
+memory management by taking advantage of caching to keep load times short and the number of textures
that can be drawn high.
** Emacs as an Email Client :emacs:
@@ -1270,37 +1165,32 @@ that can be drawn high.
:CUSTOM_ID: emacs-as-an-email-client
:END:
+Emacs is a very powerful editor, so there are many benefits to using it as an email client, such as
+direct integration with org-mode for todo and task management, or the amazing editing capabilities
+of Emacs for writing emails.
+However, Emacs cannot do this natively; instead, there is great integration with a tool called =mu=.
+This tool is an indexer for your emails, and keeps track of them so that they are easily and quickly
+searchable. The author of this tool also wrote an Emacs Lisp file that queries =mu= and provides a
+user interface in Emacs to better interact with it and use it to read emails.
+=mu= requires the emails to already be on the computer though, so the first step is to download them
+using IMAP.
*** Downloading Emails
:PROPERTIES:
:CUSTOM_ID: downloading-emails
:END:
+IMAP is a protocol that can be used to download a copy of your emails from the server. A great tool
+for downloading them using IMAP is =mbsync=. On Arch Linux, this tool can be installed from the
+official repository using
#+begin_src shell
sudo pacman -S isync
#+end_src
+This command-line utility first has to be set up using a config file, usually located at
+=~/.mbsyncrc=, so that it knows where to download the emails from and how to authenticate.
The most important parts to set up in the config file are
@@ -1313,8 +1203,8 @@ The most important parts to set up in the config file are
CertificateFile /etc/ssl/certs/ca-certificates.crt
#+end_src
-to setup the account, and then the following to setup the directories
-where it should download emails to
+to set up the account, and then the following to set up the directories where it should download
+emails to
#+begin_src conf
IMAPStore gmail-remote
@@ -1333,9 +1223,8 @@ where it should download emails to
SyncState *
#+end_src
-It should then be mostly ready to download all the emails. If using two
-factor authentication, one can generate an app password which can be
-used instead of the user password.
+It should then be mostly ready to download all the emails. If using two-factor authentication, one
+can generate an app password which can be used instead of the user password.
Once =mbsync= is configured, the emails can be downloaded using
@@ -1347,16 +1236,15 @@ Once =mbsync= is configured, the emails can be downloaded using
:PROPERTIES:
:CUSTOM_ID: indexing-the-emails
:END:
-Once they are downloaded (in this case in the =~/.mail= directory), they
-have to be indexed so that they can quickly be searched using Emacs.
-This is done by using the following shell command.
+Once they are downloaded (in this case in the =~/.mail= directory), they have to be indexed so that
+they can quickly be searched from Emacs. This is done using the following shell command.
#+begin_src shell
mu index --maildir=~/.mail
#+end_src
-However, as =mu= also has an emacs-lisp plugin, the following will also
-work after it has been configured correctly in emacs.
+However, as =mu= also has an emacs-lisp plugin, the following will also work after it has been
+configured correctly in Emacs.
#+begin_src shell
emacsclient -e '(mu4e-update-index)'
@@ -1366,26 +1254,22 @@ work after it has been configured correctly in emacs.
:PROPERTIES:
:CUSTOM_ID: emacs-configuration
:END:
-To use =mu= in emacs as well, one first has to load the emacs lisp file
-using
+To use =mu= from Emacs as well, one first has to load the emacs-lisp package using
#+begin_src emacs-lisp
(require 'mu4e)
#+end_src
-After that, =mu4e= can be configured with different things like the home
-directory, and shortcuts that should be used in emacs. The full
-configuration can be seen in my Emacs configuration, which is
-[[https://github.com/ymherklotz/dotfiles/blob/master/emacs/loader.org][hosted
-on Github]]
+After that, =mu4e= can be configured with settings such as the mail directory and the shortcuts that
+should be used in Emacs. The full configuration can be seen in my Emacs configuration, which is
+[[https://github.com/ymherklotz/dotfiles/blob/master/emacs/loader.org][hosted on GitHub]].
*** Sending Emails
:PROPERTIES:
:CUSTOM_ID: sending-emails
:END:
-Sending emails from Emacs requires a different protocol which is SMTP,
-however, that is already included in Emacs. The most basic setup can be
-seen below.
+Sending emails from Emacs requires a different protocol, SMTP, but support for it is already
+included in Emacs. The most basic setup can be seen below.
#+begin_src emacs-lisp
(smtpmail-smtp-user . "email@gmail.com")
@@ -1410,12 +1294,10 @@ Emacs is now ready to be used as a full featured email client.
:CUSTOM_ID: cpu-introduction
:END:
-The best way to understand how a processor works in detail is to create
-one from scratch or, in my case, write a software simulation of one.
-This was one of our assignments for our Computer Architecture course,
-and required us to to implement a Mips I processor, as it was not too
-complex, but did implement the fundemental ideas of a processor and how
-it operates.
+The best way to understand how a processor works in detail is to create one from scratch or, in my
+case, write a software simulation of one. This was one of our assignments for our Computer
+Architecture course, which required us to implement a MIPS I processor, as it is not too complex,
+but still demonstrates the fundamental ideas of a processor and how it operates.
*** Quick introduction to processors
:PROPERTIES:
@@ -1425,48 +1307,40 @@ it operates.
:PROPERTIES:
:CUSTOM_ID: what-is-a-processor
:END:
-A processor is a digital circuit that receives instructions and exectues
-them sequentially. These instructions can be anything from branching
-instructions, that go to a different location in the code, to arithmetic
-instructions that can add two numbers together. Instructions are
-normally stored in memory and are produced by a higher level language
-such as C or C++ using a compiler. However, other languages exist as
-well, such as python, that run using an interpreter, which compiles the
-files on the fly and runs them. The image above shows a possible basic
-setup for a processor, which in this case is the Mips I processor.
+A processor is a digital circuit that receives instructions and executes them sequentially. These
+instructions can be anything from branching instructions, which jump to a different location in the
+code, to arithmetic instructions that add two numbers together. Instructions are normally stored in
+memory and are produced from a higher-level language such as C or C++ using a compiler. However,
+other languages exist as well, such as Python, that run using an interpreter, which translates the
+files on the fly and runs them. The image below shows a possible basic setup for a processor, which
+in this case is the MIPS I processor.
#+caption: Mips processor layout.
[[/images/mips_processor_pipeline.jpg]]
-A processor also has to be able to store information to be able to
-execute these instructions. Normally, processors have a small number of
-registers that are suited for this purpose. This number varies between
-architectures such as Arm, Mips and intel, but in this case the Mips
-processor has 32 registers. These registers are 32 bit wide, which means
-they can store an integer or an address to a location in memory, which
-can then be used to load data from.
+A processor also has to be able to store information in order to execute these
+instructions. Normally, processors have a small number of registers that are suited for this
+purpose. This number varies between architectures such as ARM, MIPS and Intel's x86, but in this
+case the MIPS processor has 32 registers. These registers are 32 bits wide, which means they can
+store an integer or an address of a location in memory, which can then be used to load data from.
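+
+As a concrete picture of "receiving instructions and executing them sequentially", here is a
+minimal sketch of the fetch-decode-execute loop at the heart of a software simulator. The
+instruction encoding below is made up for brevity and is not the real MIPS I encoding.
+
+#+begin_src c++
+// Toy processor simulator: 32 registers, a program counter and a word
+// addressed instruction memory (real MIPS addresses individual bytes).
+#include <array>
+#include <cstdint>
+#include <iostream>
+#include <vector>
+
+enum : uint8_t { HALT = 0, ADDI = 1, ADD = 2 };
+
+// Pack an opcode and three 8-bit fields into a 32-bit instruction word.
+uint32_t encode(uint8_t op, uint8_t a, uint8_t b, uint8_t c) {
+    return uint32_t(op) << 24 | uint32_t(a) << 16 | uint32_t(b) << 8 | c;
+}
+
+int main() {
+    std::array<uint32_t, 32> regs{};  // all registers start at zero
+    uint32_t pc = 0;                  // program counter
+
+    std::vector<uint32_t> memory = {
+        encode(ADDI, 1, 0, 5),  // r1 <- r0 + 5
+        encode(ADDI, 2, 0, 7),  // r2 <- r0 + 7
+        encode(ADD, 3, 1, 2),   // r3 <- r1 + r2
+        encode(HALT, 0, 0, 0),
+    };
+
+    for (;;) {
+        uint32_t instr = memory[pc++];  // fetch
+        uint8_t op = instr >> 24;       // decode the fields
+        uint8_t a = instr >> 16, b = instr >> 8, c = instr;
+        if (op == HALT) break;          // execute
+        if (op == ADDI) regs[a] = regs[b] + c;
+        if (op == ADD) regs[a] = regs[b] + regs[c];
+    }
+    std::cout << "r3 = " << regs[3] << '\n';  // prints 12
+}
+#+end_src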
**** Types of Processors
:PROPERTIES:
:CUSTOM_ID: types-of-processors
:END:
-There are two main types of processor architectures, RISC and CISC
-processors. RISC processors are designed around the principle that the
-actual circuit should be as simple as possible, which means that the
-instructions have to be fairly simple too. The advantage of this is that
-the circuit and the actual processor gets simpler to build and optimise,
-however, it also means that the compiler that will eventually turn a
-program into instructions, will have to be clever about optimisations it
-makes, so as to minimize the amount of instructions. Examples of a RISC
-processor is an Arm process or a MIPS processor.
-
-In contrast to this, a CISC processor is much more complex when looking
-at the architecture and has many more instructions than a RISC
-processor, that will often do multiple things at once. The advantage of
-this is that the compiler does not have to be as clever anymore, as
-there are many instructions that correspond directly with something that
-a programmer wants to do in the code, however, it means that the
-complexity of the hardware increases by a lot.
+There are two main types of processor architectures: RISC and CISC processors. RISC processors are
+designed around the principle that the actual circuit should be as simple as possible, which means
+that the instructions have to be fairly simple too. The advantage of this is that the circuit and
+the actual processor get simpler to build and optimise; however, it also means that the compiler
+that eventually turns a program into instructions has to be clever about the optimisations it
+makes, so as to minimise the number of instructions. Examples of RISC processors are the ARM and
+MIPS processors.
+
+In contrast to this, a CISC processor is much more complex when looking at the architecture and has
+many more instructions than a RISC processor, which often do multiple things at once. The advantage
+of this is that the compiler does not have to be as clever, as there are many instructions that
+correspond directly with something that a programmer wants to do in the code; however, it means
+that the complexity of the hardware increases significantly.
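+
+The difference is easiest to see in the code a compiler emits for the same source line. The
+assembly in the comments below is only illustrative of what a compiler might generate, not the
+exact output of any particular one.
+
+#+begin_src c++
+// Summing an array: total += prices[i];
+//
+// CISC (x86): a single instruction can read from memory and add in one go:
+//     add eax, dword ptr [rdi + rcx*4]
+//
+// RISC (MIPS): a separate load and add are required:
+//     lw   $t0, 0($t1)
+//     addu $v0, $v0, $t0
+int sum(const int *prices, int n) {
+    int total = 0;
+    for (int i = 0; i < n; ++i)
+        total += prices[i];
+    return total;
+}
+#+end_src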
*** C to MIPS32 Compiler
:PROPERTIES:
@@ -1480,16 +1354,16 @@ complexity of the hardware increases by a lot.
Implemented a compiler from C to MIPS assembly code, using flex and bison for the frontend. An abstract syntax tree is built by the parser and code generation is implemented as part of the AST.
#+end_summary
-The compiler was made to translate C code to MIPS assembly. It was then
-tested using =qemu= and simulating the MIPS processor to run the binary.
+The compiler was made to translate C code to MIPS assembly. It was then tested by using =qemu= to
+simulate a MIPS processor and run the binary.
**** What is a compiler?
:PROPERTIES:
:CUSTOM_ID: what-is-a-compiler
:END:
-A Compiler is a program that transforms code written in one language
-into another language. This is normally done to transform a higher level
-language such as C, into a lower level language such as assembly.
+A compiler is a program that transforms code written in one language into another language. This is
+normally done to transform a higher-level language, such as C, into a lower-level language, such as
+assembly.
#+caption: Compiler Workflow
[[/assets/img/compiler_flow.svg]]
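+
+As a flavour of how code generation as part of the AST works, here is a minimal sketch where each
+tree node knows how to emit its own MIPS. It is not the actual project code, and register
+allocation is reduced to two fixed temporaries.
+
+#+begin_src c++
+#include <iostream>
+#include <memory>
+#include <string>
+
+// Base class for expression nodes: each node emits MIPS assembly that
+// leaves its result in the given register.
+struct Expr {
+    virtual ~Expr() = default;
+    virtual void codegen(const std::string &reg) const = 0;
+};
+
+struct Constant : Expr {
+    int value;
+    explicit Constant(int v) : value(v) {}
+    void codegen(const std::string &reg) const override {
+        std::cout << "  li " << reg << ", " << value << "\n";
+    }
+};
+
+struct Add : Expr {
+    std::unique_ptr<Expr> lhs, rhs;
+    Add(std::unique_ptr<Expr> l, std::unique_ptr<Expr> r)
+        : lhs(std::move(l)), rhs(std::move(r)) {}
+    void codegen(const std::string &reg) const override {
+        lhs->codegen("$t0");  // nested expressions would clobber these,
+        rhs->codegen("$t1");  // which a real compiler has to deal with
+        std::cout << "  addu " << reg << ", $t0, $t1\n";
+    }
+};
+
+int main() {
+    // The AST a parser would build for the C expression `2 + 3`.
+    Add expr(std::make_unique<Constant>(2), std::make_unique<Constant>(3));
+    expr.codegen("$v0");  // prints the li/li/addu sequence
+}
+#+end_src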
@@ -1572,23 +1446,18 @@ The three main parts of the project were
:PROPERTIES:
:CUSTOM_ID: music-sheet-conversion
:END:
-The custom music sheet is a grid which represents different notes that
-are to be played.
+The custom music sheet is a grid which represents different notes that are to be played.
#+caption: Custom music sheet as a grid.
[[/images/A4Grid.jpg]]
-The notes are encoded in binary in each column using 5 bits which allows
-for 32 different pitches. The grid supports 32 different notes that are
-played sequentially.
+The notes are encoded in binary in each column using 5 bits, which allows for 32 different
+pitches. The grid supports 32 notes that are played sequentially.
-The grid is automatically converted by a program we wrote using
-[[https://opencv.org/][OpenCV]] called
-[[https://github.com/ymherklotz/NoteReader/][NoteReader]]. The main
-method used to detect the stave and the individual notes for simple
-sheet music was to generate a histogram of intensities vertically and
-horizontally. Below is an example of this process using twinkle twinkle
-little star as an example.
+The grid is generated automatically from ordinary sheet music by [[https://github.com/ymherklotz/NoteReader/][NoteReader]], a program we wrote
+using [[https://opencv.org/][OpenCV]]. The main method used to detect the stave and the individual notes in simple sheet
+music was to generate histograms of intensities vertically and horizontally. Below is an example of
+this process using Twinkle Twinkle Little Star.
#+caption: Horizontal histogram.
[[/images/cut_hist_horiz.jpg]]
@@ -1596,49 +1465,43 @@ little star as an example.
#+caption: Vertical histogram.
[[/images/cut_hist_vert.jpg]]
-The maximum was then used to as a threshold value to determine the
-individual notes and where the stave lines and notes were located.
+The maximum was then used as a threshold value to determine where the stave lines and the
+individual notes were located.
#+caption: Notes detected and turned grey
[[/images/pitch_detect.jpg]]
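+
+A minimal sketch of the histogram and threshold steps using the OpenCV C++ API is shown below. It
+assumes a binarised scan with dark ink on a white background and is not the actual NoteReader code.
+
+#+begin_src c++
+#include <cstdio>
+#include <opencv2/opencv.hpp>
+
+int main() {
+    // Load the scan as greyscale and invert-threshold it, so that ink
+    // pixels become 1 and the white background becomes 0.
+    cv::Mat img = cv::imread("sheet.png", cv::IMREAD_GRAYSCALE);
+    cv::Mat ink;
+    cv::threshold(img, ink, 128, 1, cv::THRESH_BINARY_INV);
+    ink.convertTo(ink, CV_32F);
+
+    // Sum the ink pixels along each row and each column: rows with sums
+    // close to the maximum are stave lines, and peaks in the column
+    // histogram mark the note positions (colHist would be scanned the
+    // same way as rowHist below).
+    cv::Mat rowHist, colHist;
+    cv::reduce(ink, rowHist, 1, cv::REDUCE_SUM, CV_32F);  // one sum per row
+    cv::reduce(ink, colHist, 0, cv::REDUCE_SUM, CV_32F);  // one sum per column
+
+    double rowMax;
+    cv::minMaxLoc(rowHist, nullptr, &rowMax);
+
+    // Rows above a fraction of the maximum are stave line candidates.
+    for (int y = 0; y < rowHist.rows; ++y)
+        if (rowHist.at<float>(y) > 0.8 * rowMax)
+            std::printf("stave line candidate at row %d\n", y);
+}
+#+end_src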
-The grid can then be generated by encoding the pitch of the note as a 5
-bit binary number and drawing the grid.
+The grid can then be generated by encoding the pitch of each note as a 5-bit binary number and
+drawing the corresponding squares.
-One problem that was encountered during the generation was that the
-detection algorithm only worked on one line at a time, and threw away
-the rest of the sheet music. In addition to that, it was hard to test
-the grid generation and the note detection at the same time. We solved
-this by first generating a file with all the notes present, then uses
-that file to generate the grid. This allowed the grid generation and
-note detection to be completely separated and independent of each other,
+One problem that was encountered during the generation was that the detection algorithm only worked
+on one line at a time, and threw away the rest of the sheet music. In addition to that, it was hard
+to test the grid generation and the note detection at the same time. We solved this by first
+generating a file with all the notes present, and then using that file to generate the grid. This
+allowed the grid generation and note detection to be completely separated and independent of each
+other,
making them much easier to test as well.
**** Real-time Note Reading
:PROPERTIES:
:CUSTOM_ID: real-time-note-reading
:END:
-The purpose of the custom grid is to allow the FPGA to read the notes
-and then play them in real time. The grid is designed so that the
-orientation can be picked up by the FPGA easily using two red markers in
-the top two corners. Now that the FPGA has the orientation and size of
-the grid, it can detect all the squares that define the notes. It goes
-through the grid columns sequentially and reads the notes as 5 bits.
+The purpose of the custom grid is to allow the FPGA to read the notes and then play them in real
+time. The grid is designed so that its orientation can be picked up easily by the FPGA using the two
+red markers in the top corners. Once the FPGA has the orientation and size of the grid, it can
+detect all the squares that define the notes. It goes through the grid columns sequentially and
+reads each note as a 5-bit number.
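+
+Decoding a single column is then a matter of assembling the five squares into a number. The sketch
+below assumes the squares have already been classified as filled or empty; the actual FPGA design
+does this on the incoming video stream instead.
+
+#+begin_src c++
+#include <array>
+#include <cstdio>
+
+int main() {
+    // One grid column, least significant bit first: filled squares are 1.
+    std::array<bool, 5> column = {true, false, true, false, false};
+
+    unsigned note = 0;
+    for (int bit = 0; bit < 5; ++bit)
+        if (column[bit])
+            note |= 1u << bit;  // set the corresponding bit
+
+    std::printf("note index = %u\n", note);  // 0..31, here 5
+}
+#+end_src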
**** Real-time Note Playing
:PROPERTIES:
:CUSTOM_ID: real-time-note-playing
:END:
-Once a note is detected from the grid, the 5 bit number has to be
-converted to a frequency so that the note can be played. The frequencies
-are stored in a look-up table in the FPGA, which can be indexed using
-the 5 bit number.
-
-A wave generator then outputs a square wave with the required frequency
-through one of the GPIO pins on the FPGA board. Using a low-pass filter
-and an amplifier, the square wave is transformed into a continuous
-sinusoidal wave. This wave is passed to a speaker so that the tone is
-heard.
+Once a note is detected from the grid, the 5-bit number has to be converted to a frequency so that
+the note can be played. The frequencies are stored in a look-up table in the FPGA, which can be
+indexed using the 5-bit number.
+
+A wave generator then outputs a square wave with the required frequency through one of the GPIO pins
+on the FPGA board. Using a low-pass filter and an amplifier, the square wave is transformed into a
+continuous sinusoidal wave. This wave is passed to a speaker so that the tone is heard.
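+
+The look-up table itself can be generated offline. The sketch below assumes standard equal
+temperament with A4 at 440 Hz, index 0 mapped to middle C, and a hypothetical 50 MHz FPGA clock;
+the actual table in the project may differ.
+
+#+begin_src c++
+#include <cmath>
+#include <cstdio>
+
+int main() {
+    const int base_midi_note = 60;  // assumption: index 0 plays middle C
+    for (int i = 0; i < 32; ++i) {  // 5 bits -> 32 pitches
+        // Equal temperament: each semitone is a factor of 2^(1/12), with
+        // MIDI note 69 (A4) fixed at 440 Hz.
+        double freq = 440.0 * std::pow(2.0, (base_midi_note + i - 69) / 12.0);
+        // A square wave at frequency f toggles every 1/(2f) seconds; with a
+        // 50 MHz clock that is 50e6 / (2f) cycles between toggles.
+        int half_period = static_cast<int>(50e6 / (2.0 * freq) + 0.5);
+        std::printf("%2d: %8.2f Hz, half period %6d cycles\n", i, freq, half_period);
+    }
+}
+#+end_src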
* Photos
:PROPERTIES: