diary at Telent Netowrks

Advent of Blogpost#

Tue, 01 Dec 2020 21:20:05 +0000

Trying something new: a blog post for every day between now and Christmas. They will necessarily be a bit shorter and even less polished than the usual ones.

Day 1: internet music streaming for people who'd rather own their music than rent it.

I have ~ 200GB of files in /srv/media/music on my desktop PC, most of it from a massive CD ripping exercise about ten years ago, and I'd like to be able to listen to it from anywhere in the house (at minimum) or on the internet (ideally). I accomplish this using Airsonic - a "free, web-based media streamer, providing ubiquitous access to your music", coupled with Audinaut - an Android app to "stream music from Subsonic-compatible servers."

Airsonic is in Nixpkgs and there's a NixOS module for it. Audinaut is in the F-Droid repo and apparently also in the Play Store.

    services.airsonic = {
      enable = true;
      virtualHost = "muisc.telent.net";
      listenAddress = "";
      contextPath = "/";
      maxMemory = 500;
    };
    services.nginx.virtualHosts."muisc.telent.net".enableACME = true;
    services.nginx.virtualHosts."muisc.telent.net".forceSSL = true;

but you should note that it relies on its own stateful database of some kind (I don't know what exactly; I haven't dug into it) for configuration, so you have to set that stuff up by hand outside of Nix.

One caveat with Airsonic is, per the documentation:

Also note that Airsonic will organize your music according to how they are organized on your disk. Unlike many other music applications, Airsonic does not organize the music according to the tag information embedded in the files. (It does, however, also read the tags for presentation and search purposes.)
Consequently, it’s recommended that the music folders you add to Airsonic are organized in an “artist/album/song” manner. There are music managers, like MediaMonkey, that can help you achieve this.

My music collection didn't quite conform to this convention. The bulk of it was in flac/artistname-albumname, a bunch of other albums were in mp3/artist/album, and then the "long tail" included files in Songbird, annabelle, very_old_mp3_player and from_usb_stick. I straightened it all out using beets, which describes itself as "the media library management system for obsessive-compulsive music geeks", but which also works very nicely as a media library management system for entirely neurotypical (as far as I know) computer geeks who have a significant collection of music just because they've been buying CDs for a long time. I configured it approximately thus:

[dan@loaclhost:~]$ cat .config/beets/config.yaml
directory: /srv/media/music-new
library: /home/dan/beets-library.db
import:
  copy: no
  move: yes
  write: no
plugins: chroma web fromfilename fetchart duplicates
match:
  strong_rec_thresh: 0.10
  preferred:
    countries: ['GB|UK', 'US']
chroma:
  auto: yes

and then cd /srv/media/music-old && beet import .. Do this inside of tmux, because it takes ages and asks a lot of questions.

Tomorrow I'll talk about sniproxy, which is how - with one static IPv4 address, a bazillion IPv6 addresses, and a mobile phone network that doesn't support IPv6 (this is the UK; none of them do) - I can make my music server visible outside of the house.

Six into 4 won't go#

Wed, 02 Dec 2020 21:45:00 +0000

Advent of Blog, day 2: publishing IPv6 services to the IPv4 net

One of the nice things about IPv6 is that with a reasonably competent ISP (I would recommend Andrews & Arnold who have so far proven to be far more than just reasonably competent) for your home internet you can have squillions of internet addresses and run public services from any box in your home network. Servers under desks, or VMs or laptops or Raspberry Pis or ... - unlike the bad old days of IPv4 you don't have to decide on one single machine to forward port 443 to and have everything running on nginx "virtual hosts" on that one box.

As long as you're accessing them from the IPv6 internet, anyway. If you're stuck with legacy internet (in my case, my mobile provider, and when in some post-covid world they finally let me back into the office, $WORK's wifi) this doesn't help and you have to go back to forwarding ports. Here's a reasonably hassle-free way to do it though: sniproxy.

[SNI Proxy] proxies incoming HTTP and TLS connections based on the hostname contained in the initial request of the TCP session. This enables HTTPS name-based virtual hosting to separate backend servers without installing the private key on the proxy machine.

How we do it:

  networking.firewall.allowedTCPPorts = [ 8443 ];
  services.sniproxy = {
    enable = true;
    config = ''
        resolver {
            mode ipv6_only
        }
        listener 0.0.0.0:8443 {
            protocol tls
            reuseport yes
            table rewrites
        }
        table rewrites {
            ^(.*\.telent\.net)$ *:443
        }
    '';
  };

What this says is: "listen on ipv4 port 8443, when you get a request for a hostname ending in `telent.net`, forward to the same hostname, port 443, but resolved using IPv6".

Job's a good'un and that wraps it up for today. If I don't think of a better idea in the next 23 hours, tomorrow's topic has no Nix in it at all and is all about MQTT-controlled Wifi Christmas lights.

Internet of Bling#

Thu, 03 Dec 2020 22:56:27 +0000

Advent of Blog, day 3: MQTT wifi Christmas lights

I had a Neopixel strip in my electronics bits box, and wanted to make some needlessly complicated Christmas lights.



Wiring a Neopixel strip to an Arduino is well-trodden territory. The capacitor smooths out the power, the resistor is in series with the first LED in the strip, and some digital output pin on the Arduino connects to the resistor.

However: this works only for 5v devices. The Wemos D1, although behaving for many purposes like an Arduino, is a 3.3v device. So, we need something to switch it up. The Adafruit page says "you must use a logic level shifter such as a 74AHCT125 or 74HCT245." but Bitsbox didn't have any, so improvisation was needed.

After some web searching to actually understand what I was sticking together, this is what I found: the 74 series of ICs are logic gates (the AND/OR/NAND/NOT etc that we learned about in secondary school electronics lessons). 74LS are TTL chips (so, power-hungry old technology), 74HC are CMOS, and 74HCT is a special version of 74HC with 74LS TTL-compatible inputs. Why is this interesting? TTL says that logic 0 is 0-0.8V, and logic 1 is 2-5.5V, so the 3.3V emitted by the D1 is nicely inside that range. CMOS says that the output voltage for a logic 1 is the supply voltage. So, all we have to do is implement the identity function using some 74HCT-family chip, and we will get a 3.3v to 5v convertor "for free". The simplest possible way to do this with components that were in stock seemed to be to get a 74HCT04 - also known as "six NOT gates in one IC" and wire two of them in series. So, the MCU's digital output pin connects to a NOT gate, which connects to another NOT gate, which connects to the resistor, which connects to the strip.

That's the hardware, pretty much.

For the software I wanted to use MQTT, mostly because I have used it before and it's low-effort in the Arduino environment. One thing to be aware of here is that the Arduino MQTT library may have a payload size limit of 128 bytes (some sources say 256), which is clearly not enough to send individual 8 bit RGB values for each of 150 LEDs in the strip, so some kind of encoding/trade-off is needed. I settled on a format that allows a small number of pixels to be defined precisely in HSV format, and has the MCU interpolate the values of all the intervening pixels. A cheap and cheesy Ruby script to send the packets looks like this:

def send_strip(speed, positions)
  IO.popen("mosquitto_pub -h hostname.lan -d --stdin-file --username username --pw password -t sensors/neoxmas", "w") do |io|
    array = [speed] + positions
    io.print array.pack("C*")
  end
end

send_strip(40,   # animation speed
           [     # pixel offset, hue (0-255), saturation (0-255), value (0-255)
             0,   9, 120, 160,
             74,  49, 120, 60,
             145, 49, 120, 60,
           ])

and the Arduino sketch to receive them and make the pixels light up is in this gist here

I know I said there would probably be no Nix in this entry, but I can't help myself. Nixpkgs support for Arduino includes only the stuff for actual Arduinos, not for other architectures that also use the IDE. To get something that will also work for ESP8266 and the like, you can do

nix-shell https://github.com/nix-community/nix-environments/archive/master.tar.gz -A arduino

It's not very declarative, but nothing I build on a breadboard is reproducible anyway so ...

In unrelated news, I am deeply indebted to leo homestuck who kindly pointed out after reading yesterday's entry that I should have been more careful writing my regex in the sniproxy config. I should have been and now I have been and I will update the blog entry to reflect this, when it's not two minutes until tomorrow.


Fri, 04 Dec 2020 22:46:56 +0000

Day 4 of Advent of Blog and already I've run out of things to talk about.

Today at work I pushed a change that caused a production bug. I'd set out to fix a different production bug (and succeeded), but when I pushed it into CI I got a complaint from Rubocop that I'd exceeded the line count limit for module size and the limit for method size, so I added another couple of changes to see if I could make it shorter.

Which I did, but ... if you don't know Rails this explanation - indeed, most of this blog post - will not make sense, and I apologise in advance. Sorry.

It was a controller. The original code had a before_action that checked if the request method was HEAD and handled it specially by sending a head response. I moved this to the beginning of the handler. Two hours later, we're getting DoubleRenderError reports in production. Reverting the commit fixed the error. I don't know the root cause yet (this was on Friday afternoon) but I speculate that there is a subclass of this controller class that overrides handle to call super but not return immediately. Or something. When the HEAD case was in a before_action this wasn't a problem because Rails knows when a before_action has rendered, and skips the rest of the response processing after that point.
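
Since I can't show the real code, here is a toy Ruby sketch (no Rails involved; every name is invented) of that speculated failure mode: the subclass calls super, which renders for HEAD, and then carries on to render again.

```ruby
# Toy model of the speculation above: rendering twice raises, as Rails does.
class DoubleRenderError < StandardError; end

class BaseController
  def initialize
    @rendered = false
  end

  def render(response)
    raise DoubleRenderError if @rendered
    @rendered = true
    response
  end

  def handle(method)
    return render(:head_response) if method == :head  # the moved HEAD check
    render(:full_response)
  end
end

class ChildController < BaseController
  def handle(method)
    super            # renders the HEAD response...
    render(:extras)  # ...then renders again, without having returned early
  end
end

BaseController.new.handle(:head)     # fine: returns after the HEAD response
begin
  ChildController.new.handle(:head)
rescue DoubleRenderError
  # exactly the production symptom
end
```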

So anyway. I screwed up by making an obviously-correct change that wasn't, and because it is in my nature as a programmer to find some reason it wasn't my fault (narrator: it was his fault) I'm directing my ire at Rubocop - or at least, at my willingness to pay too much attention to it.

From https://www.rubypigeon.com/posts/my-beef-with-rubocop/ which I would like to endorse as good general advice about automated code style warnings in any language:

I don’t dislike RuboCop. It’s well made, and a useful tool.
The problem starts when it is viewed not as a tool, but as a set of divine commandments. Thou shalt not make implicit nils explicit. Thou shalt not write long-form conditionals. Thus saith the RuboCop.
This is not necessarily a problem with RuboCop itself — it’s a problem with how people sometimes use RuboCop. One could argue that the defaults aren’t great, but they are not a problem until someone hooks them up to CI.

and from the author's lobste.rs comment -

I feel like linters pull you towards a certain level of readability. Whether they pull you up or pull you down depends on where you started at.

While I'm here and thinking about Rubocop, and because I don't often talk Ruby in this blog, if you are a Ruby programmer and your team uses Rubocop, do yourselves a favour and change the default setting for Style/BlockDelimiters from line_count_based to semantic. Here is Avdi Grimm on the topic.
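
For reference, that change in .rubocop.yml is:

```yaml
Style/BlockDelimiters:
  EnforcedStyle: semantic
```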

Static IPv6 addresses#

Sat, 05 Dec 2020 22:00:48 +0000

Advent of Blog, day 5

Short one, today. After all that writing about airsonic and sniproxy the other day, I went to put some music on yesterday and found that it didn't work any more.

A series of suppositions

I broke my network significantly last week by switching from GNOME to KDE

... Let's just take a moment to reflect on how utterly bonkers it is that switching the desktop environment should break networking, but GNOME apparently has NetworkManager as a dependency, or possibly vice versa ...

and in the course of fixing that, I seem to have enabled systemd-networkd, which had changed the way that the machine was getting an IPv6 address. Where I previously had 2001:nnn:nnn:nnn::ed9, now I have 2001:nnn:nnn:nnn::148 and 2001:nnn:nnn:nnn:xxxx:xxxx:xxxx:xxxx.

IPv6 address allocation is simultaneously much simpler and much more complicated than IPv4 was, but the short form is: you can have a global address via SLAAC or via DHCPv6 or both. Addresses from SLAAC are 64 bits of prefix (here, 2001:nnn:nnn:nnn) plus 64 bits derived from your 48 bit MAC address. Addresses from DHCP are the prefix plus whatever the DHCP server gives you when you present it with a DUID, so I'm going to assume that these ::148 and ::ed9 are from DHCP - my MAC address isn't full of zeroes - and that networkd is presenting a different DUID than NetworkManager was.
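
As an aside, the SLAAC half is computable by hand: split the MAC in two, splice ff:fe into the middle, and flip the universal/local bit. A Ruby sketch (the MAC address here is made up):

```ruby
# Derive the EUI-64 interface identifier (the low 64 bits of a SLAAC
# address) from a 48-bit MAC address.
def eui64_suffix(mac)
  bytes = mac.split(":").map { |b| b.to_i(16) }
  bytes[0] ^= 0x02                                # flip the universal/local bit
  eui = bytes[0, 3] + [0xff, 0xfe] + bytes[3, 3]  # splice ff:fe into the middle
  eui.each_slice(2).map { |hi, lo| format("%x", (hi << 8) | lo) }.join(":")
end

eui64_suffix("52:54:00:12:34:56")  # => "5054:ff:fe12:3456"
```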

There are, I guess, two possible routes to win here: find out how to make networkd present the same DUID as NetworkManager did, or reconfigure OpenWrt to accept the new one and give me the old address. I'm going for the latter approach, so, let's login to the router and use uci show dhcp to find the per-host setting


There are two things I now know to be wrong with this: the duid no longer matches the one the client presents, and the hostid doesn't correspond to the address I want.

So we can fix them both:

uci set 'dhcp.@host[0].duid=00020000ab11406665c3f2e75232'
uci set 'dhcp.@host[0].hostid=0e9d'
/etc/init.d/odhcpd stop; /etc/init.d/odhcpd start

And down/up the interface on the client to make a new request (don't use renew, it doesn't help here)

sudo networkctl down vbridge0; sleep 1; sudo networkctl up vbridge0

and now I have my local services accessible again. Yay me.

You may be wondering why I have vbridge0 as my primary interface on that machine and not enp1 or en0s31f6 or en4s31f6z1q3zz9zzzα, and the answer is: it's a bridge between the ethernet interface and one or more tap devices for virtual hosts. I don't think anything I've described here would be different for a hardware ethernet, but don't quote me, I haven't tried it.

Wayland Emacs - no more fuzzy fonts#

Sun, 06 Dec 2020 20:28:24 +0000

Advent of blog day 6 blah blah

I haven't touched a computer all day today except to buy wood screws, so this post is neither new nor original.

I can't easily find any record of when it was I permanently switched from X11 to Wayland: I know I've tried it a number of times without success before eventually saying "screw it, why don't I give in and use GNOME", mostly because my laptop has a HiDPI screen and GNOME/Wayland seems (or at any rate, at the time, seemed) to be the only platform with an option for 1.5x scaling.

(Rant: this "scaling" thing is a nonsense. A not-stupid system would define everything including window decorations and icons and the like using a resolution-independent measure - like "points" - and have no need for all this "actually this pixel is twice as big as a real pixel" rubbish which has always been and will always be a kludge. Bitmapped graphics in any sensible format already carry "x resolution" and "y resolution" metadata: scale them instead of scaling the entire universe they inhabit. End rant)

Anyway, I spent a subjectively long time with a nice crisp Wayland Firefox window, a nice crisp Wayland terminal window, and an ugly fuzzy scaled XWayland Emacs window, because Emacs - the released version, at least - uses X11 for some stuff even when you tell it to build with Gtk so it has to run in XWayland.

No more! https://github.com/masm11/emacs/ is a fork of Emacs to "pure Gtk", which means it works as a native Wayland app. Build it with the --with-pgtk configure flag. I've been using it for about six weeks, and in my experience it has been about stable enough for daily use - there have been a couple of times it's frozen with an eventual "not responding" dialog, but both times when I waited for a few tens of seconds it came back to life without my intervention.

One wrinkle you may want to be aware of if you use Magit (if you use git and you don't use Magit, I would heartily endorse it) is that Emacs 28 changes the signature of server-switch-buffer, meaning that "the with-editor advice server-switch-buffer--with-editor-server-window-alist has too few arguments", meaning that you can't commit anything. Check the GitHub issue for more details: if, like me, you don't want to go around upgrading a bunch of stuff to versions newer than those in NixOS Stable, the workaround is to redefine server-switch-buffer--with-editor-server-window-alist in your .emacs or somewhere:

(require 'with-editor)

;; patch needed until new emacs 28-compatible release of magit/with-editor
(defun server-switch-buffer--with-editor-server-window-alist
    (fn &optional next-buffer &rest args)
  "Honor `with-editor-server-window-alist' (which see)."
  (let ((server-window (with-current-buffer
                           (or next-buffer (current-buffer))
                         (when with-editor-mode
                           (setq with-editor-previous-winconf
                                 (current-window-configuration)))
                         (with-editor-server-window))))
    (apply fn next-buffer args)))


uname it, you own it#

Mon, 07 Dec 2020 23:42:23 +0000

Advent of Blog placeholder

Today I have mostly been playing with AWS Lightsail containers, which look like the result of saying to some AWS employees "hey, why don't you build something user-friendly?". If you're familiar with AWS, you'll know why this would be a bad suggestion. Sure, the web UI is pretty, but oh my lord, the feedback cycles are stupidly long ("no you can't abort this failing deploy, you have to wait for it to time out") and the diagnostics are awful. "Here's a window with some 'logs' in it. They might be from this deploy or from a previous one, there is no way to tell and no timestamps on the log lines".

Also I found out that there is some weird circumstance where the rails server will fail to start and instead print out the rather obscure message Switch to inspect mode, but I don't know what that circumstance actually is. It went away when I added -b to the rails server invocation, but I don't see offhand how that could have been the problem. Hey ho.

I'm not quite old enough to remember the days of punching your program into a deck of cards and sending it to the operators to be read in, but I imagine it must have been a bit like this.

No blog of substance today, sorry. The title is because I have also been fighting with (and losing to) CMake on Darwin.

All the things#

Tue, 08 Dec 2020 23:08:44 +0000

I was going to write about how I patched openssl on all my machines with a five line fix and the minimum of fuss, but it turns out that I spent all evening fighting letsencrypt instead. I think it is mostly correct now.

I have to add: I had not fully realised when I set out on this that my simple overlay

    patchesOverlay = self: super: {
      openssl = super.openssl.overrideAttrs (o: {
        patches = o.patches ++ [ (builtins.fetchurl "https://github.com/openssl/openssl/commit/f960d81215ebf3f65e03d4d5d857fb9b666d6920.patch") ];
      });
    };
would necessitate rebuilding ... probably the entire distro. Maybe I'll just wait for https://github.com/NixOS/nixpkgs/pull/106362 to land instead.

https://news.ycombinator.com/item?id=25346133 has more info on the severity

Jam tomorrow#

Wed, 09 Dec 2020 22:13:59 +0000

A complete departure from my usual topics:

This week is Hackathon week at work, and we're solving latency on the Internet so that people in different places can play music together.

Well, not really. The speed of light is the speed of light whatever you do, the speed of light in optical fibres is slightly lower, and the speed of your packets through all the switches/routers/etc in the path to your band members - that's not something we can solve in two days. So what we're doing instead is adding latency: the idea being that if the delay between you and your bandmembers is a multiple of the time to play a measure, at least you can play in sync with what they were playing a bar (or two bars or 8 bars) ago. There is software that does this already: the most well-known is apparently Ninjam, but we're using a fork of it called Wahjam - mostly because Wahjam has simple free-as-in-freedom clients whereas Ninjam steers you quite firmly towards something complicated-looking (and probably $$$) called Reaper.

Anyway, yes, basically it's been two days of compiling code with Qt on macOS and cargo-culting fixes by copy-paste from Stack Overflow, and I won't deny it's kind of fun. My contribution to the intellectual commons is a patch to allow an unsigned Wahjam binary to open the microphone although I freely admit I have very little idea if I've done it right.


Sat, 12 Dec 2020 00:00:19 +0000

Post title is more expressive than illuminating, sorry

I have just spent a very long time trying to figure out why nix-channel --update on my personal channel for nixos config was failing with 500 errors.

openat(AT_FDCWD, "/home/git/htpasswd", O_RDONLY) = -1 EACCES (Permission denied)

It turns out to be due to this magic morsel in the nginx systemd unit


Now I'm sure (actually, I'm not) that this is a reasonable default behaviour for a daemon that perhaps should not be able to read users' home files, but - I am struggling to avoid swearing in print here - would it kill you to print an error message that has at least some vague hint somewhere of what the error might be?
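
If the morsel in question is systemd's ProtectHome sandboxing option (an assumption on my part, not confirmed above), the NixOS-level loosening would look something like:

```nix
# assumption: the unit sets ProtectHome; relax it so nginx can read /home/git
systemd.services.nginx.serviceConfig.ProtectHome = lib.mkForce "read-only";
```

(Moving the htpasswd file somewhere outside /home would be the tidier fix.)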

Pardon typos. It's too late for this shit, am going to bed now. Maybe tomorrow I'll be able to do my upgrades and get back to hacking my thermometer.

Fatally weekend#

Sat, 12 Dec 2020 23:44:32 +0000

Maybe tomorrow I'll be able to do my upgrades and get back to hacking my thermometer.

Score 1/2. I got the upgrades done, and I successfully rebuilt the Arduino program that sends temperature data to MQTT; now I need to revive the thing on the server that listens to MQTT and logs to Influx. The first step towards this end is to run nix-shell and watch it for some length of time (longer than it will take to post this blog entry) as it rebuilds apparently the known Haskell universe.

In other news, I bought some wifi-enabled sockets so we can control the Christmas lights without having to grope around behind the TV. This information is more relevant to UK readers than to anyone in the rest of the world - they are branded "TCPSmart", they're available in Screwfix, and I am 95% sure they have Tuya internals. At least, I successfully added them to the Smart Life app on my phone. Relevant links I haven't tried yet:

Haskell and you shall receive#

Sun, 13 Dec 2020 22:30:15 +0000

Last winter I embarked on a project to replace my annoying wireless thermostat with something less annoying: started out sticking together MQTT and ESP8266 devices and then found myself trying to decode the thermostat's radio transmissions using SDR and then suddenly it was springtime and there was no need to heat the house anyway.

I spent a little while today trying to put it back together, because it's assuredly not springtime any longer. So far I have succeeded in reflashing one of the ESP8266 devices with a slightly newer version of the mqtt-sending software, and rebuilding the Haskell glue that subscribes to all the sensor messages and writes them into InfluxDB.

Whinge of the day: you picked MQTT because it's a simple fast low-overhead protocol ideally suited to MCUs, why are you[1] encoding the payload as JSON objects? It's only one notch less verbose than XML, for goodness sake.

[1] I wasn't quite doing this, but I was formatting floats to strings to send them over the wire. The newer version multiplies the temperature by 100 and sends a little-endian integer instead: I'm sure in the grand scheme of things it makes sod-all difference to anything but it makes me feel happier.
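
As a sketch of that wire format (assuming a signed 16-bit integer; the width isn't stated above), in Ruby:

```ruby
# Temperature * 100 as a little-endian signed 16-bit integer:
# two bytes on the wire instead of a formatted string.
def encode_temp(celsius)
  [(celsius * 100).round].pack("s<")
end

def decode_temp(payload)
  payload.unpack1("s<") / 100.0
end

encode_temp(21.37).bytesize      # => 2
decode_temp(encode_temp(21.37))  # => 21.37
```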

Yes You! I'm talking tuya#

Mon, 14 Dec 2020 22:31:19 +0000

A very short entry today as it's already bedtime, but I would like to note that I have followed all the Tuyapi instructions and can now turn my Christmas tree lights on and off from the command line.

First impressions: seems to work. There are occasional "Timeout waiting for status response from device id" errors, which seem to happen mostly if I try doing two `set` operations without an intervening `get`. I don't yet know the technical details of how it works, but I do find it interesting that (according to strace) it seems to be talking directly to the device instead of to a cloud server:

connect(20, {sa_family=AF_INET, sin_port=htons(6668), sin_addr=inet_addr("")}, 16) = -1 EINPROGRESS (Operation now in progress)
epoll_ctl(13, EPOLL_CTL_ADD, 14, {EPOLLIN, {u32=14, u64=14}}) = 0
epoll_ctl(13, EPOLL_CTL_ADD, 16, {EPOLLIN, {u32=16, u64=16}}) = 0
epoll_ctl(13, EPOLL_CTL_ADD, 20, {EPOLLOUT, {u32=20, u64=20}}) = 0
epoll_wait(13, [{EPOLLIN, {u32=16, u64=16}}], 1024, 0) = 1
read(16, "\1\0\0\0\0\0\0\0", 1024)      = 8
epoll_wait(13, [{EPOLLOUT, {u32=20, u64=20}}], 1024, 4997) = 1
getsockopt(20, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
write(20, "\0\0U\252\0\0\0\1\0\0\0\n\0\0\0\210\234\0\334\351T\302\30\17%\31H\351\36\306\24\312"..., 152) = 152

I wonder if Tuya Haskell bindings exist.

Not writing a Lispos, honest#

Tue, 15 Dec 2020 23:16:40 +0000

When I last stopped doing NixWRT it was after I had decided (perhaps inadvisedly) to write a process/service monitor in Lua, and was getting slightly fed up of debugging it on an (admittedly virtual) read-only system that required a reboot to test anything.

I am not sure that rewriting the Lua in Fennel is going to make this better, but I have converted the files using Antifennel and will note so far that

My hope is that the rather nicer syntax in Fennel will make it a bit easier for a swarm service to "publish" the expected shape (schema, loosely speaking) of its value, and this will make it easier to validate that the services that depend on it are reading properties that are actually likely to exist. Whether this is how it actually works out remains to be seen.

Still alive#

Wed, 16 Dec 2020 22:56:44 +0000

The opposite of progress is regress, but I don't know if there's a name for the neutral position, as it were. Nogress. This is a nogress report, or at least a notmuchgress report.

Been through the autotranslated-to-Fennel Swarm library replacing many occurrences of (set-forcibly! foo (fn [] ... with (lambda foo [] and - where I can see easy ways to do it - rewriting places it said (lua "return v") with constructs that don't need an early return. Right now it passes all its tests, which is both good news (it passes tests!) and bad news (this demonstrates some truly humungous gaps in the test coverage).

Functionally I'll iterate#

Fri, 18 Dec 2020 22:53:03 +0000

I'm going to attempt to explain Lua iterators, in the hope that by the time I've finished writing I'll understand them. So.

There's a construct in Lua called for which is used for iterating over a collection or other sequencey-type thing. You might write e.g.

> for k, v in pairs({a=2, b=5, c=9}) do print(k,v) end
a	2
b	5
c	9


> b = io.open("/etc/hosts", "r"); for l in b:lines() do print(l) end
localhost
::1 localhost noetbook
::1 noetbook

Per the Lua manual, the syntax for the for is as follows:

    for <var-list> in <exp-list> do

where <exp-list> is (between one and) three values - or something that produces three values when evaluated. The first value is an "iterator function" f, the second is "invariant state" state and the third is the "control variable" a.

The interpreter then runs the loop: repeatedly, it

- calls f(state, a),
- binds the returned values to the variables in <var-list>, the first of which becomes the new value of a, and
- runs the loop body,

until a becomes nil, which signals the end of the iteration.
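
That protocol is easy to mimic outside Lua; here it is as a Ruby sketch (Ruby's nil playing the role of Lua's nil):

```ruby
# The generic-for protocol: keep calling f(state, control); stop when the
# first result is nil; the first result becomes the next control value.
def generic_for(f, state, control)
  loop do
    results = f.call(state, control)
    break if results.nil? || results[0].nil?
    control = results[0]
    yield(*results)
  end
end

# the moral equivalent of Lua's ipairs iterator function (1-based indices)
inext = lambda do |tbl, i|
  i ||= 0
  i < tbl.length ? [i + 1, tbl[i]] : nil
end

seen = []
generic_for(inext, [5, 6, 7, 8], nil) { |i, v| seen << [i, v] }
# seen == [[1, 5], [2, 6], [3, 7], [4, 8]]
```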

The builtin ipairs iterator, used for traversing "array" tables - tables in which the indices are consecutive integers - provides an iterator function which, when provided with a table and an index, returns the next index and the value at that index

> gen,_,_ = ipairs({5,6,7,8})
> =gen
function: 0x41c600
> gen({5,6,7,8}, 0)    -- what's the first element?
1	5
> gen({5,6,7,8}, 3)    -- what's after the third element?
4	8

The standard pairs iterator, used for traversing tables with arbitrary keys, uses the builtin next function as an iterator function. next works similarly to the iterator function in the previous example: given a table and a key, it returns the next key (for some value of "next" we aren't interested in the details of) and the value at that key

> next({a=2, b=4}, "a")
b	4

These both depend on the for construct's behaviour of using the first return value from each call to the iterator function as the second argument to it on the subsequent call. Which is fine if we want that value, but quite often we don't, so we end up doing this:

for _, v in ipairs(an_array) do

If we want to write an iterator values_of that returns only the value, not the index, so that it could be used more like this:

for v in values_of(an_array) do

then the iterator function returned by values_of would be called each time only with the value v, which would not be sufficient for it to work out how far through the array it had got last time. Instead we have to make our iterator close over the state it needs:

function values_of(an_array)
  local index = 0
  return function()
    index = index + 1
    return an_array[index]
  end
end
> for v in values_of({7,3,5}) do print(v) end
7
3
5

I note that the io.lines function has similar behaviour to values_of in that it returns only the next data value and not also the index of that value. Assuming it uses the standard C library functions for file input, I am guessing that it does this by reading from the current file position. So it also has internal state, but that state is implicit in the depths of stdio

Yes, but why?

Why did I start looking at this? Reasonable question. Because I wanted to write an idiomatic find function in Fennel and thought it would be a lot neater to have it work on any iterator, not just on tables. So I could say something like

>> (find (fn [_ v] (= (% v 2) 0)) (pairs [ 1 2 3 4 5]))


>> (find #(even? $1) (pairs [ 1 2 3 4 5]))

but it does look messy having to accept and ignore that first parameter, especially if I want to pass bog standard predicate functions. I want to write something more like

>> (find even? (vals [ 1 2 3 4 5]))

and indeed this is possible, but only if I write the vals iterator to close over its state instead of being able to use the parameters that Lua passes into it when it's called.

See also:

On the eve of destructure#

Sun, 20 Dec 2020 00:03:13 +0000

Nothing, really, to say today, except that I have spent an hour or two trying (and mostly, failing) to find the Fennel equivalent of {table.unpack(tbl)}. Failing mostly because I accidentally had the LuaJIT version of Lua on my path ahead of the Lua 5.3 that I thought I was using, and as a result becoming more than mildly confused that table.unpack appeared to be missing anyway and oh this makes no sense what the hell is going on.

Having now tried it in a Lua 5.3 I discover that table.pack is a thing, which is good, but that after all that the whole approach is worthless because they only work on array-like tables in the first place.

Some days are better than others, and today wasn't one of them.

Events, dear boy#

Sun, 20 Dec 2020 22:53:41 +0000

Another short one today: still refactoring the autogenerated Lua->Fennel code in Swarm.

So far I have succeeded in

- removing all the calls to (lua "break") and (lua "return x"), and

- deleting an entire function called get-in, and all its calls, because Fennel has a builtin function called . that does exactly the same thing.

Rain nor snow nor glom of nit#

Mon, 21 Dec 2020 22:34:19 +0000

But, to paraphrase Terry Pratchett, when it comes to delivery to Google: don't arsk about Mrs Cake.

I have run my own mail servers for something around a decade (and run mailservers for other people previous to that) and the longer I do it the less convinced I am that it is worth the faff, considering I receive about three pieces of non-spam non-list mail a week.

Anyway, some (but not all) of my email to Google domains is apparently disappearing into a black hole; other mails are being greylisted and yet others are turning up perfectly OK.

I still don't know why - it's google, there are no rules - but I did find out that my DMARC record was wrong, which can't have been helping. When I moved the service from Bytemark to Mythic Beasts it looks like I forgot to transfer the private key, and NixOS generated a new key pair on the new machine, of which I have only now put the public half in DNS :embarrassed_smiley:

Fail2ban or ban 2 fail#

Tue, 22 Dec 2020 20:52:45 +0000

One of the things that made yesterday's "why does google hate me" (I suppose it is fair to say that the feeling is mutual) introspection so frustrating was the sheer volume of crap that cascades through my systemd journal, making it quite hard to see what's going on. Most of it seems to be bots trying not-very-hard to look for open SMTP relays, and a particularly tedious strain is the ones that try to authenticate as

Apr 04 12:46:20 vritual postfix/smtpd[5250]: warning:
 unknown[]: SASL LOGIN authentication failed: UGFzc3dvcmQ6

I really do literally mean UGFzc3dvcmQ6 there. A web search will confirm that I'm not the only one to see this, and it's not the string of random alphanumerics you might think it is on first glance:

$ echo -n Password: | base64
UGFzc3dvcmQ6
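
The same decoding, run from Python for the avoidance of doubt:

```python
import base64

# UGFzc3dvcmQ6 is just the prompt string "Password:" in base64 -
# the bots appear to be sending the SASL prompt back as a credential
decoded = base64.b64decode("UGFzc3dvcmQ6").decode()
```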

So I thought I might see about applying the banhammer, and as fail2ban is included in NixOS, let's set ourselves up a rule. This required a lot of futzing around in the "obvious in retrospect" space, so here is what I did that eventually worked.

  environment.etc."fail2ban/filter.d/postfix-login-failed.conf".text = ''
    [INCLUDES]
    before = common.conf

    [Definition]
    _daemon = postfix(-\w+)?/\w+(?:/smtp[ds])?
    failregex = warning:.*\[<HOST>\]: SASL LOGIN authentication failed: UGFzc3dvcmQ6$
    ignoreregex =
    journalmatch = _SYSTEMD_UNIT=postfix.service
  '';

  services.fail2ban.jails.postfix-login-failed = ''
    filter = postfix-login-failed
    enabled = true
    action = iptables-multiport[name=SMTP, port="smtp,submission,submissions,imap,imaps"]
  '';
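As a quick sanity check on the failregex, here's a Python sketch: fail2ban's <HOST> tag is approximated by a plain capturing group (the real expansion is more elaborate), and the test line is a made-up one shaped like the log excerpt above, with a documentation IP standing in for the real client.

```python
import re

# <HOST> approximated by a capturing group; this is just for
# checking the shape of the regex, not fail2ban's actual expansion
failregex = re.compile(
    r"warning:.*\[(?P<host>[^\]]+)\]: "
    r"SASL LOGIN authentication failed: UGFzc3dvcmQ6$"
)

# Hypothetical log line; 192.0.2.1 is a documentation address
line = ("postfix/smtpd[5250]: warning: "
        "unknown[192.0.2.1]: SASL LOGIN authentication failed: UGFzc3dvcmQ6")

m = failregex.search(line)
```

`m.group("host")` yields "192.0.2.1", which is what fail2ban would feed to the ban action, while a failure with any other SASL string sails past unmatched.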

Things to look out for:

Observationally, most of the hosts trying to log in to my server with the password Password: seem to be the same ones trying to send mail in six other ways, so this rule cuts out a lot of the other kinds of postfix lognoise too.

Markdown my words#

Wed, 23 Dec 2020 22:31:00 +0000

As a concept, this blog dates back to November 2001 - though I note, reviewing the first few entries now, that I was back then quite emphatic in my claim that it was not a blog. How times change.

In that time it has been implemented with

What's notable about Yablog is that it took a week in 2015 and has scarcely been touched since, until yesterday, when I decided to add Markdown support. For the last 15 years (ever since I switched to Soks, basically) I've been writing blog entries using Textile, and for the most recent ~ 10 of them the blog entries are pretty much the only thing I've been writing in Textile, so this change is long overdue. I was getting a bit fed up of writing backticks and then publishing entries before remembering that Textile uses @ signs for that instead.

A year or so ago I suddenly realised that for the first time in a long time I no longer have a favourite programming language, but it's been apparent to me for a while before that that Clojure isn't it any longer anyway. That said, I must admit I'm quite happy that I can pick up a ~ 6 year old project, update the dependency versions and hack this in, all in the space of a couple of hours. I have trouble imagining that the same would be true in Ruby.

(Ruby isn't it either. But Ruby has never been it)

Shoutout to @yogthos@mastodon.social for markdown-clj which did all the hard work already.

Twas the night before Christmas#

Thu, 24 Dec 2020 23:00:48 +0000

... and I've not picked up a computer all day.

Merry Christmas to all who celebrate it, and culturally appropriate seasonal best wishes to all who don't.

New Year's Ruminations#

Fri, 01 Jan 2021 16:13:10 +0000

2021 will be, I assert confidently, the year I get NixWRT running on my internet gateway at home. A short list of the yaks I need to shave to get there, which you will note is a lot more concrete at the front end than the back: