diary at Telent Netowrks

NixOS (again) - declarative VMs with QEMU

Fri, 20 Oct 2017 08:43:49 +0000

I built a new PC to sit in the study at home. This isn't going to be a blog post about that, though: it all worked the first time and so there is nothing to rant about. The new box is smaller, quieter and faster than the old one, as it should be given that the old one is about 9 years old now.

Having got it running I wanted to put some VMs on it (hey, just like last time), but this time around I want to do it slightly less ad-hoc (hey, not just like last time) so I have been playing with creating them declaratively.

Goals (and non-goals)

Prior art

The Virtualization in NixOS page on the new Wiki is thoroughly worth reading and I am very much indebted to it for ideas and even some bits of code. The author has different requirements to me and therefore has different answers in places, but I borrowed a lot. In particular you should read that if you are wondering "doesn't {nixos, nixops} do this out of the box already?"

My approach

Note to the reader: there are many snippets of code in the rest of this post. They are all extracted from the actual system at the time I write this, and provided to help explain the approach, but probably not the best place to start if you just want something you can run. If you want something you can run, look instead at telent/nixos-configs on github which at time of writing is basically the same thing, but more likely to be updated, refined, bugfixed etc than I am ever to revisit this blog post.

Describe the guests

Unlike the Wiki author, I am managing the host machine as a plain NixOS machine and not using Nixops here. So I have created a module /etc/nixos/virtual.nix and added it to my imports in /etc/nixos/configuration.nix:

  imports =
    [ # Include the results of the hardware scan.
      ./hardware-configuration.nix
      ./virtual.nix
    ];

In that module, I define the VMs I want using an attribute set bound to a local variable. I know, I should do this properly with the module config system. Some day I will.

let guests = {
      alice = {
        memory = "1g";
        diskSize = "40g";
      };
      bob = {
        memory = "1g";
        diskSize = "20g";
      };
      # (each guest also gets a vncDisplay and a netDevice, used below)
    };

Start the guest VM processes

We map over the guests variable to make a systemd service for each VM that checks it has a disk image and brings it up (or takes it down, as appropriate).

    systemd.services = lib.mapAttrs' (name: guest: lib.nameValuePair "qemu-guest-${name}" {
      wantedBy = [ "multi-user.target" ];
      script = ''
          # NB: $disks and $hda reconstructed here; see the repo for the real paths
          disks=/var/lib/guest-disks
          hda=$disks/${name}.qcow2
          sock=/run/qemu-${name}.mon.sock
          mkdir -p $disks
          if ! test -f $hda; then
            ${firstRunScript} $hda ${guest.diskSize}
          fi
          ${pkgs.qemu_kvm}/bin/qemu-kvm -m ${guest.memory} \
            -display vnc=${guest.vncDisplay} \
            -monitor unix:$sock,server,nowait \
            -netdev tap,id=net0,ifname=${guest.netDevice},script=no,downscript=no \
            -device virtio-net-pci,netdev=net0 \
            -usbdevice tablet \
            -drive file=$hda,if=virtio,boot=on
      '';
      preStop = ''
          echo 'system_powerdown' | ${pkgs.socat}/bin/socat - UNIX-CONNECT:/run/qemu-${name}.mon.sock
          sleep 10
      '';
    }) guests;

Create the guest disk images

These systemd services expect the guest machine to have a working disk image, so we need some way to create those.

The recipe for this on the Wiki creates a partition image, resizes it appropriately, then uses pkgs.vmTools.runInLinuxVM to install NixOS on it. The way it does this is somewhat low-level and to my mind uncomfortably close to Dark Arts: it manually creates /nix/store and calculates package closures and makes directories and runs Grub and ...

I took a different approach which I feel is both cleaner and and more hacky: I created a custom CD image which has a service on it that looks for a disk called vda and runs nixos-generate-config and nixos-install on it. When a new VM is needed, it boots from this virtual CD instead of from its own disk. Note that the auto-install service has no safeguards or checks - this is definitely not a CD image that you would burn onto an actual disk and leave around the office.

(I claim it's more clean because it uses the "standard" installation method, but it is definitely more hacky because it uses sed on the generated configuration.nix to enable ssh and configure grub, and we all know what happens when sed is invited to the party.)
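The sed step looks something like this — a sketch with illustrative patterns, run here against a mocked-up configuration.nix rather than a real generated one:

```shell
# mock up the two commented-out lines nixos-generate-config emits
cat > /tmp/configuration.nix <<'EOF'
  # services.openssh.enable = true;
  # boot.loader.grub.device = "/dev/sda"; # or "nodev" for efi only
EOF

# enable sshd and point grub at the virtio disk (illustrative patterns)
sed -i \
  -e 's|# services.openssh.enable = true;|services.openssh.enable = true;|' \
  -e 's|# boot.loader.grub.device = .*|boot.loader.grub.device = "/dev/vda";|' \
  /tmp/configuration.nix
```

In the real service the file being edited is the one nixos-generate-config just wrote under /mnt/etc/nixos, and this works right up until the template wording changes.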

The dangerous unattended install service is defined in nixos-auto-install-service.nix, which I'm not going to copy and paste here, but you can view on Github. In virtual.nix we write a derivation to create a NixOS config including it and build an ISO image:

    iso = system: (import <nixpkgs/nixos/lib/eval-config.nix> {
      inherit system;
      modules = [
        <nixpkgs/nixos/modules/installer/cd-dvd/installation-cd-minimal.nix>
        ./nixos-auto-install-service.nix
      ];
    }).config.system.build.isoImage;

and then we need something to create the disk image and run a QEMU which boots the ISO:

    firstRunScript = pkgs.writeScript "firstrun.sh" ''
      #!${pkgs.bash}/bin/bash
      hda=$1
      size=$2
      iso=$(echo /etc/nixos-cdrom.iso/nixos-*-linux.iso)
      PATH=/run/current-system/sw/bin:$PATH
      ${pkgs.qemu_kvm}/bin/qemu-img create -f qcow2 $hda.tmp $size
      mkdir -p /tmp/keys
      cp ${pubkey} /tmp/keys/ssh.pub
      ${pkgs.qemu_kvm}/bin/qemu-kvm -display vnc= -m 512 \
        -drive file=$hda.tmp,if=virtio \
        -drive file=fat:floppy:/tmp/keys,if=virtio,readonly \
        -drive file=$iso,media=cdrom,readonly \
        -boot order=d -serial stdio > $hda.console.log
      if grep INSTALL_SUCCESSFUL $hda.console.log ; then
        mv $hda.tmp $hda
      fi
    '';

(This is called from the systemd service defined previously, if you hadn't noticed and were wondering)

SSH keys

Eagle-eyed readers might notice the shenanigans with /tmp/keys and file=fat:floppy in that script. I didn't really want to bake my ssh public key into the ISO, because that would mean a vast amount of churn every time the key changes, so this is how we get an SSH key into the image instead. We're using a feature of QEMU that I did not previously know about - it can create a virtual FAT filesystem from the contents of a directory on the host machine.
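On the guest side, the auto-install service can then mount that synthesised volume and copy the key in. A minimal sketch - the function name and paths here are mine, not the real service's:

```shell
# copy an ssh public key from a mounted key volume into a .ssh directory
install_ssh_key() {
  keyvol=$1   # mount point of the FAT volume
  sshdir=$2   # target .ssh directory
  mkdir -p "$sshdir"
  cat "$keyvol/ssh.pub" >> "$sshdir/authorized_keys"
  chmod 700 "$sshdir"
  chmod 600 "$sshdir/authorized_keys"
}

# demo against scratch directories (the real thing mounts /dev/vdb first)
mkdir -p /tmp/keyvol
echo 'ssh-ed25519 AAAAexamplekey nobody@example' > /tmp/keyvol/ssh.pub
install_ssh_key /tmp/keyvol /tmp/root-ssh
```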


The guests are bridged onto the host LAN, because there is too much NAT in the world already and I do not wish to be the cause of more.

    networking.interfaces = lib.foldl (m: g: m // {${g} = {virtual=true; virtualType="tap";};}) {} (map (g: g.netDevice) (builtins.attrValues guests));
    networking.bridges.vbridge0.interfaces = [hostNic] ++ (map (g: g.netDevice) (builtins.attrValues guests));

A note of caution here: messing with bridges while connected via ssh is a bad idea, if your connection is through one of the interfaces you want to add to the bridge. As soon as you add eth0 (or wlp3s0 or enp0s31f6zz9pluralzalpha or whatever systemd thinks your network card should be called today) to the bridge it will lose its IP address and things will probably not Be Right until dhclient next refreshes. Learn from my mistakes: do this at the console or have some kind of backup connection.
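One cheap safeguard, sketched below (my invention, not part of the configs above): before enslaving a NIC, check whether the ssh session you are sitting in terminates on one of that interface's addresses. $SSH_CONNECTION holds "client_ip client_port server_ip server_port".

```shell
# return success if the ssh session's server-side address is in the
# given list of addresses (i.e. bridging that NIC would cut us off)
session_uses_ip() {
  conn=$1     # contents of $SSH_CONNECTION
  addrs=$2    # whitespace-separated addresses carried by the interface
  server_ip=$(echo "$conn" | awk '{print $3}')
  case " $addrs " in
    *" $server_ip "*) return 0 ;;
    *) return 1 ;;
  esac
}

# e.g.: session_uses_ip "$SSH_CONNECTION" \
#         "$(ip -o -4 addr show eth0 | awk '{print $4}' | cut -d/ -f1)"
```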

In practice

So far, It Seems To Work. There are some points you may want to note:

Building Clojure projects with Nix

Sat, 19 Aug 2017 23:23:07 +0000

Way back in May, or "the previous entry" as we like to call it in Telent Blog Time, I said

[using mvn2nix] doesn't actually work for Clojure/Leiningen projects. Look out for another blog entry soon that describes a whole completely different approach you can use there.

and then it turned out not to work, and then I thought I'd found a different approach, and then I got distracted, and then $WORK Happened, and, you know, Events. But here is an account of one way you can do it (at least, It Worked For Me) and some things I think I remember about the dead-ends I hit along the way.

Starting with the conclusion


Try it. Send feedback.

Why Leiningen didn't work

What we planned to do with Leiningen was not completely dissimilar. At "release time" we were going to

then at "deploy time",

The trouble with that was that Leiningen appears to want to do dependency resolution all over again whenever it's run, which meant we had to download not only the required jar but in some cases one or more older versions too, just so it could persuade itself it had the right versions.

Kick me harder

Then I tried converting it to use boot. On the face of it this should be much simpler because boot has a rather lovely with-cp task which lets one explicitly specify the CLASSPATH. In practice, though, boot itself wants to auto-update and install stuff in $HOME which is a problem for NixOS autobuilders which run in a context where $HOME is set to a non-existent directory.

I'm still using boot in development and to provide the scaffolding for the bit of code that creates the json file (I made it a boot task), but I can't use it to do the production build.

Next up, use NixOps to create the rest of the machine on which the app is going to run.

Building Maven packages with Nix

Wed, 10 May 2017 01:06:07 +0000

[ Note to Clojurists using Leiningen: I found, towards the end of this mvn2nix process, that the pom.xml files that lein pom creates are not actually sufficient to the task of building jars. If that's why you're reading this, this may not be the right thing to be reading. ]

Nixpkgs has some tools for building Maven projects but (as far as I can determine) absolutely no examples of their use. It took me some time to figure things out from trawling through mailing lists, not least because my Nix-fu is not (yet?) up to reading the buildMaven source and shell script and saying "hey, that's obvious". I write these notes in the hope that eddies in the space-time continuum will cause them to pop up a week ago when I needed them, but failing that, maybe they'll be useful to someone else who struggles with it.

If you have a Java project whose dependencies are managed by Maven, you will find that building a Nix derivation for it is non-trivial, because maven expects to be able to figure out the dependencies and download them as it runs. Nix doesn't appreciate this, because derivations should be pure functions of their inputs and not depend on what happens to be hosted on random web sites at any given time. Indeed, proper "pure" builds are run in a context that doesn't even allow them outbound network access.

Other languages with similar package dependency managers have a similar problem: for example, Ruby projects that use Bundler, or Node projects that use npm/yarn. The solution is also similar: we break it into two parts. First, generate a list of all the dependencies - including the transitive dependencies - so that Nix can download them into the store; second, run the build in "offline" mode and tell it to find the dependencies in the store instead of on the Internet.

Get the package list

My overpowering impression of Maven based on having used it for three days is that it is entirely composed of plugins. Great plugins have little plugins upon their backs to bite 'em, and little plugins have lesser plugins - and so, ad infinitum. In this case, the plugin is called org.nixos.mvn2nix:mvn2nix-maven-plugin and it provides a command called mvn2nix. In principle using this is a simple matter of writing (or somehow obtaining) a pom.xml file, then doing

$ mvn org.nixos.mvn2nix:mvn2nix-maven-plugin:mvn2nix # don't do this

to generate a file called project-info.json. However, in practice there are a couple of issues:

The issue is that mvn2nix generates different output for dependencies that already exist in your local repository than it does for packages it fetches, and the workaround (as described in the issue) is to run it against an empty directory:

$ mvn  -Dmaven.repo.local=$(mktemp -d -t maven)  org.nixos.mvn2nix:mvn2nix-maven-plugin:mvn2nix # mmm

                                ^ non-ideal

This is the kind of thing for which sed would be the unix-philosophy-solution, if you can handle the resulting leaning toothpick syndrome. Or you could just use Perl

$ cat project-info.json | jq . | \
 perl -pe 's,https://repo1.maven.org/maven2//,https://repo.maven.org/maven2/,g' > $$ \
     && mv $$ project-info.json # or something

This probably also constitutes a bug but I've not yet taken the time to find out if it's with mvn2nix or with my generated POM, or whether it's been reported previously. Anyway, achievement unlocked.

Writing the Nix expression.

I've done this as a standalone expression that you can drop into your project as default.nix and run nix-build on: if you were writing something to add to the packages set in the nixpkgs repo then I think the preamble would look different (caveat: don't understand, haven't tried it).

with import <nixpkgs> {};
let jar = buildMaven ./app/project-info.json;
in stdenv.mkDerivation rec {
  buildInputs = [ makeWrapper ];
  name = "my-app";
  version = "0.1.0";
  builder = writeText "install.sh" ''
    source $stdenv/setup
    mkdir -p $out/share/java $out/bin
    cp ${jar.build} $out/share/java/
    makeWrapper ${jre_headless}/bin/java $out/bin/${name} \
      --add-flags "-jar $out/share/java/${jar.build}"
  '';
}

Some points of note

I'm not a Nix expert (did I say that already?). Where this is non-idiomatic it's more likely because I don't understand the idiom than through any kind of conscious decision process.

Coming soon

As I said at the start, this doesn't actually work for Clojure/Leiningen projects. Look out for another blog entry soon that describes a whole other completely different approach you can use there.

NixOS on Linode KVM

Mon, 19 Dec 2016 21:28:08 +0000

This is much easier than you'd think by Googling - but still complicated enough to be worth writing down. Note that this description is rather on the terse side, and probably more understandable if you've done a NixOS install on "regular" hardware previously.

1) create a new Linode instance

2) select it in Linode Manager

3) make some disks: for each of these, go to 'Create a new Disk' and fill in the appropriate fields

4) now go to 'create a new configuration profile'

5) before you turn it on, go to the Settings tab and disable Lassie - otherwise it just gets confusing having it reboot when you're not expecting it

6) first time boot will be into the Rescue system, so go to the Rescue tab and click 'Reboot into Rescue Mode'

7) now connect to it using lish. You should be shown a typical boot sequence ending in something like

Welcome to Finnix!

[*] Total memory: 1998MiB, shared ramdisk: 1543MiB
[*] Finnix media found at sdh
[*] System: Intel Xeon E5-2680 v3 2.50GHz
[*] Running Linux kernel 4.1.2-finnix on x86_64
[*] Finnix 111 ready; 464 packages, 146M compressed


8) now it's time to download the ISO. From the Getting NixOS page, find the URL for the latest 64 bit Minimal Installation CD and then run something rather like

root@ttyS0:~# curl -k https://d3g5gsiof5omrk.cloudfront.net/nixos/16.09/nixos-16.09.1272.81428dd/nixos-minimal-16.09.1272.81428dd-x86_64-linux.iso |dd bs=1M of=/dev/sda
root@ttyS0:~# sync
root@ttyS0:~# halt

9) Now we can boot into the NixOS installer. Check that you still have lish running in a terminal window (reconnect to it if not), then click 'Boot' in the Dashboard. You should see a Grub menu appear: cursor down to the 'NixOS ... without modeset' option then hit TAB to edit the command. Append console=ttyS0, then hit RETURN to boot. You get a NixOS boot sequence in lish, ending at a root shell prompt.


10) This is a good time to bring up the NixOS manual in a browser tab, especially if you can't remember what you did last time

11) Create a partition table on /dev/sdb and a primary partition that fills it, then put a filesystem on it:

[root@nixos:~]# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.28.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device /dev/sdb already contains a ext4 signature.
The signature will be removed by a write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x01a28df9.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-2097151, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-2097151, default 2097151):

Created a new partition 1 of type 'Linux' and of size 1023 MiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
[  441.705223] sdb: sdb1
Syncing disks.

[root@nixos:~]# mkfs -t ext4 /dev/sdb1

(This looks verbose but it's almost all default options - what I actually typed to produce that output was n, RETURN, RETURN, RETURN, RETURN, w, Control-D)

12) mount the disks

[root@nixos:~]# mount /dev/sdc /mnt
[root@nixos:~]# mkdir /mnt/boot
[root@nixos:~]# mount /dev/sdb1 /mnt/boot

13) now you can run nixos-generate-config --root /mnt. Edit the generated /mnt/etc/nixos/configuration.nix and uncomment/amend/check the following items
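With the disk layout above, the items in question are approximately these (a sketch of typical settings, not the author's exact list):

```nix
  # grub goes in the MBR of the disk holding /boot
  boot.loader.grub.device = "/dev/sdb";
  # keep the console on the serial port so lish works after boot
  boot.kernelParams = [ "console=ttyS0" ];
  # so you can get in without lish afterwards
  services.openssh.enable = true;
```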

14) and run

[root@nixos:~]# nixos-install    # stuff happens, at length
[root@nixos:~]# umount /mnt/boot
[root@nixos:~]# umount /mnt

15) now it's time to try booting from the "hard disk": go back to the Linode Manager and create another Configuration Profile

Select the new profile in Dashboard and hit 'Reboot'. Eventually, you will see the very welcome Welcome to NixOS banner and be presented with a login prompt.

16) Login as root, using the password you set during nixos-install. Check you have network connectivity. Arrange credentials (password or ssh key or whatever) for whatever non-root user(s) you put in configuration.nix. Try sshing into the box. When you're happy it works, use C-a d to exit lish. Your work here is done.

Hotplug scripts in NixOS

Thu, 15 Dec 2016 08:46:21 +0000

Prompted partly by the new Met police online traffic incident reporting tool (but it's something I've been thinking about for a while) I bought an action camera the other day: it's a "Savfy", which is a cheap clone of the SJ4000 (which is itself a cheap clone of a Gopro).

Since I need to plug it into a USB port every day or two to recharge (battery life is a claimed 1.5 hours, haven't tested this yet) I thought it would be good to automate downloading the data off it as well. This is almost my first foray into systemd and udev, and certainly the first time I've tried it in NixOS, so I have reproduced my findings below.

First, we need to recognise when the camera is connected. It's USB mass storage, which means it shows up as a SCSI disk device (e.g. sdb). In /etc/nixos/configuration.nix we add a stanza something like this:

  services.udev = {
    path = [ "/home/dan/udev/bin" ];
    extraRules = ''
      ACTION=="add", KERNEL=="sd*[0-9]", ENV{ID_SERIAL}=="NOVATEKN_vt-DSC*", RUN+="${pkgs.systemd}/bin/systemctl --no-block start copyCamFiles@%k.service"
    '';
  };

There are a few things worth noting here.

Great. We've got the trigger, where's the service? Again in configuration.nix

  systemd.services."copyCamFiles@" = {
    bindsTo = [ "dev-%i.device" ];
    environment = {
        RSYNC = "${pkgs.rsync}/bin/rsync";
        MOUNT = "${pkgs.utillinux}/bin/mount";
        UMOUNT = "${pkgs.utillinux}/bin/umount";
    };
    serviceConfig = {
        Type = "simple";
        ExecStart = "${pkgs.bash}/bin/bash /home/dan/udev/bin/cp-actioncam.sh %I";
    };
  };

The @-sign in the service name means this isn't actually a service, it's a template (i.e. we can pass parameters to it to instantiate services). When it's invoked by udev, the parameter passed will be the device name of the newly-added partition. We export the pathnames of some utilities that the script will need, because I haven't built a nixos derivation for the script itself. simple as a service type means (I hope) that it's not started unless asked for and that nothing is going to try to restart it when it terminates.
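For the curious, the unit this generates looks roughly like the following - reconstructed by hand, with store paths elided, to show where %i and %I end up:

```ini
[Unit]
BindsTo=dev-%i.device

[Service]
Environment=RSYNC=/nix/store/...-rsync/bin/rsync
Environment=MOUNT=/nix/store/...-util-linux/bin/mount
Environment=UMOUNT=/nix/store/...-util-linux/bin/umount
Type=simple
ExecStart=/nix/store/...-bash/bin/bash /home/dan/udev/bin/cp-actioncam.sh %I
```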

Finally, here's the cp-actioncam.sh script

#!/usr/bin/env bash
# $1 is the partition device name passed in by the unit (e.g. "sdb1").
# NB: the $MP and $OUT values here are reconstructed; choose paths to suit.
set -e
MP=/media/actioncam
OUT=/home/dan/actioncam
unmount() {
    $UMOUNT $MP
    rmdir $MP
}
trap unmount 0
mkdir -p $MP $OUT
$MOUNT /dev/$1 $MP
$RSYNC -a $MP/DCIM/ $OUT/

And there you have it. There are a bunch of refinements that could profitably be made: most obviously, notifying the user somehow when the copying is finished and the device may be unplugged, and not putting root-owned files into a non-root-users home directory. But this will do for now.

Some browser tabs I can now close: