diary at Telent Networks

Cross product

Sun, 31 Oct 2021 15:47:10 +0000

I had cause this afternoon to remember the Monad Tutorial Fallacy, which has been summarised as saying that when you finally understand monads, you lose the ability to explain them to others.

I hypothesise that the same is probably true of cross-compilation in the Nix package system, and therefore I present these notes not as a superior alternative to any of the existing documentation, but because I wrote them down to aid my understanding and now need to put them somewhere I can refer back to them.

So. Let's suppose we're building NixWRT. Most likely we're building on an x86-64 system, to produce an image which will run on a MIPS device. In the event that there are any programs in that image which generate code (which is unlikely, as we're not shipping compilers), we want them also to generate MIPS code. Thus, in the standard terminology we have:

- build: the platform we compile on (x86-64)
- host: the platform the built product runs on (MIPS)
- target: the platform any code it emits would run on (MIPS)

(This naming convention comes from Autoconf, and so we are stuck with it. To make it make sense, consider the built product rather than the build process: we are describing a thing that was built on x86-64, is hosted on MIPS, and would - if it emitted any code - emit code that runs on MIPS)
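Spelled out on an Autoconf command line, the same triplet looks something like this (a sketch; the exact triplet strings are illustrative, not copied from any particular build):

```shell
# illustrative configure invocation: compile on x86-64, for a binary that
# runs on MIPS and (if it emitted code) would emit MIPS code
./configure \
  --build=x86_64-pc-linux-gnu \
  --host=mipsel-unknown-linux-gnu \
  --target=mipsel-unknown-linux-gnu
```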

However, not all of the software we create (or depend on) will be needed on the MIPS system - some of it (e.g. build tools, compilers and other kinds of translators) will have to run on x86-64. So how do we keep track of it all?

Let's look at some examples:
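For instance, here is a sketch of how the distinction surfaces in a Nixpkgs derivation (assuming current `crossSystem`/`buildPackages` conventions; the package choices are just illustrative):

```nix
# Sketch: cross-building for MIPS, with dependencies split by where they run
let
  pkgs = import <nixpkgs> {
    crossSystem = { config = "mipsel-unknown-linux-gnu"; };
  };
in pkgs.stdenv.mkDerivation {
  pname = "example";
  version = "0.1";
  src = ./.;
  # runs at build time, so it must be an x86-64 (build platform) binary
  nativeBuildInputs = [ pkgs.buildPackages.pkg-config ];
  # linked into the image, so it must be a MIPS (host platform) library
  buildInputs = [ pkgs.zlib ];
}
```

`nativeBuildInputs` draws from `buildPackages` (things that run on the build machine), while `buildInputs` are compiled for the host; Nixpkgs' splicing machinery is what keeps the two package sets from getting mixed up.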

Why do I care about this right now? I rearranged bits of NixWRT and updated it to a recent Nixpkgs revision, causing OCaml to stop building for reasons I didn't fully understand.

So, here is what I think is happening:

Clear? If this doesn't help, I invite you to consider the possibility that cross-compilation is like a burrito.

NixWRT Nics

Sun, 21 Nov 2021 19:17:29 +0000

According to Git history I've been back on hacking NixWRT fairly regularly for a couple of months, and am pleased to be able to write that by dint of saying "screw it" to every previous attempt to write a replacement init system, I have got to the point that I can create an image that runs in Qemu which actually routes packets.

Given that it's Qemu, we don't even have to attach eth1 to any real hardware. The NixWRT VM is started with options including

  -netdev socket,id=mynet1,listen=:5133 \
  -device virtio-net-pci,netdev=mynet1

and then I start a second VM with

  qemu-system-x86_64 -enable-kvm \
    -netdev socket,id=mynet0,connect=:5133 \
    -device virtio-net-pci,netdev=mynet0 \
    -cdrom sysrescue.iso -m 2048

whose eth0 can see nixwrt's eth1, and which successfully gets a (real! globally routable!) IPv6 address from it.

At some point I should try it on some real hardware, but there are a few other things to do first. DNS would be nice, for one. So would NAT (so I can have IPv4 as well as v6) and some kind of firewalling.

In replacement init system news, I am now using shell scripts to start services where I was previously implementing them as Monit services. The codebase is in a very transitional state right now: existing services (anything defined with services.foo - q.v.) continue to be started using Monit, for the moment, but new services go into the config under the svcs key - see this dnsmasq example. Most likely I will rename this back to services once I've moved everything over.

New-style service definitions can also specify changes to the config, meaning they can require busybox applets or kernel config. This means that if service B depends on service A it doesn't have to also know what A's system requirements are.
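As a purely hypothetical sketch of what that could look like (the attribute names here are my invention for illustration, not NixWRT's actual schema - the dnsmasq example in the tree is the real thing):

```nix
# Hypothetical shape only - illustrative names, not NixWRT's real schema
{
  svcs.ntpd = {
    # shell command used to start the service
    start = "ntpd -n -p pool.ntp.org";
    # config this service requires; a dependent service B need not repeat it
    requires = {
      busybox.applets = [ "ntpd" ];
      kernel.config = { POSIX_TIMERS = "y"; };
    };
  };
}
```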