diary at Telent Networks

Shell out tour#

Wed, 08 Aug 2018 12:51:52 +0000

Nothing to show this week. I have more or less proved to my own satisfaction that I can reboot into a new image using kexec and a small C program and some shell scripts. This came at considerable personal mental cost, but that's what happens when trying to do text processing in a Bourne shell (not bash) script without falling back on awk or sed (not installed). Associative arrays would have been nice. Actually, just arrays in general would have been a help.

The C program is called writemem and is approximately the moral equivalent of cat | dd seek=N of=/dev/mem bs=1, except that it writes in blocks bigger than 1 byte. Just the kind of thing your security auditor wants to find left lying around on random systems, yeah. I can see a need for some proper thought on security posture in the near future: although no-web-interface and ssh-only-with-a-pubkey-embedded-at-build-time probably makes it less of a target than any consumer D-Linksysgear box in its default configuration, there's probably still a lot more to do on that front. The attacks we want to protect against are (1) being able to write to random locations in physical memory; (2) being able to reboot into random kernels using kexec; (3) being able to flash anything we like; (4) all of the above. Probably (4).
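To make the dd comparison concrete, here is the byte-offset write done against a scratch file instead of /dev/mem (a sketch only: the filename, offset, and payload are invented for illustration, and the real writemem does this from C with a larger block size):

```shell
# Sketch of what writemem replaces, aimed at a scratch file rather than
# /dev/mem: write stdin into the target at a byte offset, without truncating
# the rest of the file.
N=8
printf 'AAAAAAAAAAAAAAAA' > scratch.img             # 16 bytes of 'A' standing in for "memory"
printf 'KERN' | dd of=scratch.img bs=1 seek=$N conv=notrunc 2>/dev/null
# bytes 8-11 of scratch.img now read "KERN"; everything else is untouched
dd if=scratch.img bs=1 skip=$N count=4 2>/dev/null  # -> KERN
```

The bs=1 is what makes the dd version painful on a real image of any size; doing the same thing with page-sized writes is essentially the whole reason writemem exists.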

There will be one user-visible change when this stuff lands: whereas previously we produced separate files for kernel and rootfs when doing a "development" build, now we make a single agglomerated firmware image and rely on the kernel mtdsplit code to find the root filesystem. This is because step 1 of the headless upgrade procedure is to reboot into the current kernel with an additional memmap parameter, so in the case that the current kernel is running from RAM we need the original uImage to still be accessible and not to have been overwritten since boot. It also makes the build a bit more consistent between dev and production, which is a nice side effect.

First things first, though: I need to get it into a state where I can actually commit something. Last night I dreamt I was in a bacon-eating competition where the goal was to consume as much as possible during a MongoDB cluster election before a new primary was chosen, but I woke up before the contest finished. I mention this just to give you an idea of where my brain is right now, which is probably not a very good idea.

notmuch to say#

Thu, 16 Aug 2018 11:57:46 +0000

Very quick one this week, because it's all a bit mad round here right now:

If you are (as I am) using postfix to send and receive mail, and if you are (as I am) using rspamd to identify spam emails, and if you are (as I am) using notmuch to index and search it, you may have noticed (as I did) that notmuch can't add tags based on the presence of the X-Spam header that rspamd adds.

This is what I did:

1) I made postfix do local delivery through maildrop

  services.postfix = {
    extraConfig = ''
      [ ... ]
      mailbox_command = ${pkgs.maildrop}/bin/maildrop -d ''${USER}
    '';
  };

This is for NixOS, obviously; if you are not using Nix, then make the equivalent change in /etc/postfix/main.cf

maildrop will default to delivering the mail as normal if it can't find a $HOME/.mailfilter file, so once you have established that its idea of "normal" matches yours, this ought to be a safe change to make for all your users.

2) I added a .mailfilter file to run notmuch insert, which accepts a message on standard input, adds it to the notmuch maildir, and indexes it with the specified tags:

[dan@vritual:~]$ cat ~/.mailfilter
if (/^X-Spam:/)
{
  to "|/home/dan/.nix-profile/bin/notmuch insert +spam -inbox"
}

to "|/home/dan/.nix-profile/bin/notmuch insert -inbox"

I don't manage my home directory declaratively - if you are running NixOS and have home-manager or something like that, then you can make this change in a more Nixy way.

3) Profit.

In not-really-other news, I embarked on a quick let's-learn re-frame hack to build a mobile-friendly notmuch web interface. It's not quite at "works for me" level yet, and is certainly not at "you can run this on a network" level (it's running the notmuch command with user-supplied arguments ...) but there has been progress.

Gardening at night#

Wed, 22 Aug 2018 23:32:53 +0000

No, honestly. My excuse this week for no NixWRT progress is that I have been removing all the turf from the dead lawn in preparation for putting down something greener.

But I will have to get back to it soon. I've just learnt that my NixCon talk proposal has been accepted, so I had better get it into a usable state soon if only so I have something to talk about.

Epsilon has progressed marginally: the search box now has a magnifying glass icon in it.

Revisiting Clojure Nix packaging#

Fri, 31 Aug 2018 12:43:27 +0000

Slightly over a year ago I wrote about how to build a Clojure project with Nix, and this week I had occasion to refer back to it because I want to have a production-style dogfood build of Epsilon.

It's a bit simpler now than it was then, because in the intervening time Clojure has added Deps and CLI tools, which give us a simple, readable EDN file in which we can list our project's immediate dependencies (see deps.edn for Epsilon) and a function, resolve-deps, that we can call to get the transitive dependencies.

We embed this function call into a script which dumps a JSON file with the same information, and then whenever we add or update deps.edn we run

$ CLJ_CONFIG=. CLJ_CACHE=. nix-shell -p clojure jre --run \
 "clojure -R:build -C:build -Srepro  -Sforce  -m nixtract generated/nix-deps.json"

(Note that the clojure command will fail silently if it cannot find a java runtime on the path, or if the current directory contains a malformed deps.edn file. Each of these problems took me a while to find - I tell you this so that you may learn from my mistakes.)

The second half of the puzzle - build time - is not much changed from how we did it last time around, though I did take the opportunity to make it use the generic builder instead of making its own builder.sh.

Probably I should point out that this downloads all the dependencies into /nix/store as JARs and builds a classpath so the JRE can find them - it doesn't make an uberjar. This fits my current use case, because I want to run it on a NixOS box, and separate jars provide at least the possibility that some of them might be shared by more than one app. I will obviously have to come back to this if/when I need to build an uberjar for distribution.
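To illustrate the separate-jars layout (the store hashes and jar names below are invented for the example, not Epsilon's real dependencies), the build essentially joins one store path per dependency jar into a single classpath string:

```shell
# Hypothetical sketch: each dependency jar lives at its own /nix/store path,
# and the build concatenates them with ':' to produce a JRE classpath,
# rather than fusing everything into one uberjar.
jars="/nix/store/aaaa-clojure-1.9.0.jar
/nix/store/bbbb-ring-core-1.6.3.jar
/nix/store/cccc-cheshire-5.8.0.jar"
classpath=$(printf '%s' "$jars" | tr '\n' ':')
echo "$classpath"
# the app would then be launched along the lines of:
#   java -cp "$classpath" clojure.main -m some.main.ns   # entry point invented
```

Because each jar is a distinct store path, two applications depending on the same library version can share the same on-disk copy, which is the sharing possibility mentioned above.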

Also, if you are doing advanced things with CLJS external libraries (like, Node dependencies and stuff) then this will not help and you are on your own. There is a name for the (CL)JS dependency ecosystem, it's a compound word in which the first part is the collective noun for a collection of stars, and the second part rhymes with "duck"