A Kind of Magic#
Sun, 19 Dec 2021 11:52:12 +0000
Lately my greatest programming success is a Lua program that flashes the backlight on my PineTime smartwatch - or, more accurately, creating the conditions that make it possible to flash the backlight on my smartwatch using Lua. By which I mean, installing Lua on it and writing some bindings for GPIO control.
But this post isn't about the PineTime per se; it's about setting up a development environment for it: specifically, about installing Black Magic Probe on the WeAct STM32F411 board commonly known as "Black Pill".
Black Magic Probe is an "in-application debugging tool for embedded microprocessors". You connect one end of it to a USB port on a PC, and the other end to some debug pins (SWDIO, SWCLK, V+ and GND) on the target device. It appears on the PC as two serial ports, one of which speaks the GDB remote protocol so you can run programs, set breakpoints, single-step and all that cool stuff on your target device. You can buy the BMP as a hardware device (which I would definitely recommend doing to support the developers, except that it's currently sold out) or you can buy a cheap microcontroller and build and install it yourself.
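A typical session looks something like this (a sketch, not gospel - device names vary by OS, and on Linux the GDB server is usually the first of the two ACM devices):

```
$ arm-none-eabi-gdb firmware.elf
(gdb) target extended-remote /dev/ttyACM0
(gdb) monitor swdp_scan      # scan for attached targets over SWD
(gdb) attach 1               # attach to the first target found
(gdb) load                   # flash the ELF onto the target
(gdb) break main
(gdb) run
```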
BMP supports a bunch of different microcontrollers. I read on the Internet(sic) that I should use a "Blue Pill", and I would like to summarise the learning experience that ensued:
- Blue Pill is the nickname of a small blue PCB based on the STM32F103 Arm Cortex-M3 MCU, plus the zillion clones and variants based on it. You can program it using the Arduino IDE, if that's your idea of a good time, or you can use a "grown up" Arm toolchain. As far as I can tell from reading, to flash it for the first time you need to attach a programmer like the ST-Link (I'm guessing you may also be able to use a Pi with OpenOCD) to its debug pins, but if you flash a program with USB support you can do subsequent uploads over USB.
- Some of the Blue Pill clone devices are printed on black PCBs, and therefore known as Black Pill. Same MCU, slightly different board layout.
- There is also another device (or range of devices) informally known as the Black Pill, but these use the STM32F401 or STM32F411 - that 4 is significantly not a 1. There are other differences as well: the USB connector is USB-C rather than micro-USB, they can be flashed over USB out of the box, and there are actual buttons instead of jumpers for reset and for entering the bootloader.
The lesson here is: if you want a blue pill, don't assume as I did that something described as "Black Pill [...] better than Blue" is going to do the same job in the same way. Check the MCU model number.
BMP can be built for the F4 MCU, but the documentation isn't very clear on how: instead of `swlink` you use the `f4discovery` target, and you have to pass `BLACKPILL=1` to `make`. And that is alleged to work, but on my Black Pill clone board it didn't until I made a bunch of random changes that shouldn't have made a difference, and then suddenly it did - gory details are in the link. But at least programming the board is simpler: you can do it over USB with dfu-util instead of messing around with serial pins.
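For my own future reference, the whole dance is roughly this (a sketch - the make variables have been renamed between BMP revisions, and the DFU alt setting may differ on clone boards):

```
# in a checkout of the blackmagic sources:
$ make PROBE_HOST=f4discovery BLACKPILL=1
# hold BOOT0 and tap reset to enter the ROM bootloader, then:
$ dfu-util -a 0 -s 0x08000000:leave -D src/blackmagic.bin
```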
For extra credit - and this is a great reason to run BMP instead of using an ST-Link device with OpenOCD - you can enable RTT, which gives the target a way to print messages on the host so you can do "printf debugging". At the time of writing this doesn't exist in BMP mainline, but there's a PR (see #954) which "just works" and which I strongly endorse.
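Once built with that PR applied, use is pleasingly minimal - something like the below, though I'm quoting the monitor command from memory and the PR may spell it differently:

```
(gdb) target extended-remote /dev/ttyACM0
(gdb) monitor swdp_scan
(gdb) attach 1
(gdb) monitor rtt        # assumed spelling: start scanning for the target's RTT control block
# RTT output then appears on BMP's second serial port (e.g. /dev/ttyACM1)
```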
In the next installment, maybe some actual PineTime programming.
NixWRT Nics#
Sun, 21 Nov 2021 19:17:29 +0000
According to Git history I've been back hacking on NixWRT fairly regularly for a couple of months, and am pleased to be able to write that, by dint of saying "screw it" to every previous attempt to write a replacement init system, I have got to the point where I can create an image that runs in Qemu and actually routes packets.
- It runs PPP-over-L2TP on the qemu default "user" network device to connect to the internet, and then it gets some proper address space by running a DHCPv6 client to do a "prefix delegation" on that link.
- It brings up a second ethernet device and assigns it an RFC 1918 address, and also a network based on the prefix that came back from DHCPv6.
- It starts dnsmasq on eth1, and turns on IP forwarding.
Given that it's qemu, we don't even have to attach eth1 to any real hardware. The nixwrt VM is started with options including

```
-netdev socket,id=mynet1,listen=:5133 \
-device virtio-net-pci,netdev=mynet1
```
and then I start a second VM with

```
qemu-system-x86_64 -enable-kvm \
  -netdev socket,id=mynet0,connect=:5133 \
  -device virtio-net-pci,netdev=mynet0 \
  -cdrom sysrescue.iso -m 2048
```
whose eth0 can see nixwrt's eth1, and which successfully gets a (real! globally routable!) IPv6 address from it.
At some point I should try it on some real hardware, but there are a few other things to do first. DNS would be nice, for one. So would NAT (so I can have IPv4 as well as v6) and some kind of firewalling.
In replacement init system news, I am now using shell scripts to start services where I was previously implementing them as Monit services. The codebase is in a very transitional state right now: existing services (anything defined with `services.foo` - q.v.) continue to be started using Monit, for the moment, but new services go into the config under the `svcs` key - see this dnsmasq example. Most likely I will rename this back to `services` once I've moved everything over.

New-style service definitions can also specify changes to the `config`, meaning they can require busybox applets or kernel config. This means that if service B depends on service A, it doesn't have to also know what A's system requirements are.
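To give a flavour of the shape, a minimal sketch of a new-style definition - the attribute names inside it are illustrative rather than the real NixWRT schema:

```nix
# illustrative only: real NixWRT option names may differ
svcs.dnsmasq = {
  # started by a shell script rather than a Monit service
  start = "dnsmasq --keep-in-foreground --interface=eth1";
  # config changes this service drags in with it, so dependent
  # services don't need to know about them
  config = {
    busybox.applets = [ "udhcpc" ];
    kernel.config.IP_MULTICAST = "y";
  };
};
```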
Cross product#
Sun, 31 Oct 2021 15:47:10 +0000
I had cause this afternoon to remember the Monad Tutorial Fallacy, which has been summarised as saying that when you finally understand monads, you lose the ability to explain them to others.
I hypothesise that the same is probably true of cross-compilation in the Nix package system, and therefore I present these notes not as a superior alternative to any of the existing documentation, but because I wrote them down to aid my understanding and now need to put them somewhere I can refer back to them.
So. Let's suppose we're building NixWRT. Most likely we're building on an x86-64 system, to produce an image which will run on a MIPS device. In the event that there are any programs in that image which generate code (which is unlikely as we're not shipping compilers), we want them also to generate MIPS code. Thus, in the standard terminology we have:
- build: x86-64
- host: MIPS
- target: MIPS
(This naming convention comes from Autoconf, and so we are stuck with it. To make it make sense, consider the built product rather than the build process: we are describing a thing that was built on x86-64, is hosted on MIPS, and would - if it emitted any code - emit code that runs on MIPS.)
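In Nixpkgs these three show up as attributes of `stdenv`, which makes for a handy sanity check - a minimal sketch:

```nix
# a throwaway derivation that just records the three platforms; with a
# MIPS cross stdenv, build is x86-64 and the other two are MIPS
{ stdenv }:
stdenv.mkDerivation {
  name = "platform-demo";
  dontUnpack = true;
  installPhase = ''
    echo "build:  ${stdenv.buildPlatform.config}"  >  $out
    echo "host:   ${stdenv.hostPlatform.config}"   >> $out
    echo "target: ${stdenv.targetPlatform.config}" >> $out
  '';
}
```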
However, not all of the software we create (or depend on) will be needed on the MIPS system - some of it (e.g. build tools, compilers and other kinds of translators) will have to run on x86-64. So how do we keep track of it all?
Let's look at some examples:
- Package A contains source code which is translated into some other form using programs provided by package B (e.g. B provides an SVG to PNG convertor). The programs in B must run on the build machine: thus, the `host` for B is the `build` for A. Provided that B is not generating executable code - i.e. we don't have to worry about the target - we represent this by including B in the `nativeBuildInputs` attribute of A's derivation.
- Package A contains source code which is compiled using programs provided by package B for execution on the build system. For example, B is a C compiler which we are using to build `nconf`, which we will then run to create the `.config` file that the Linux kernel build process uses. In this case the program in B must run on the build system of A and must also target the build system of A. We represent this by putting B in the `depsBuildBuild` attribute of A's derivation.
- if we have a package A which depends at runtime on another package B (e.g. the build for A creates a shell script, one of whose commands is provided by B), both those derivations have the same `host`. In this case B doesn't care about the target, so we add it to the `buildInputs` for A (if it does care, that's more complicated). As the developers of A we must ensure that programs in B are reachable from A, either by embedding the full pathname of B into the script or by using a wrapper that sets $PATH.
- if A requires at run-time some source code contained in B (e.g. A is a script for some interpreter, and B is a source-distributed library for it) then B has no `host` to speak of. If there is any native code component in that library, though, it must be code that runs on the same system as A's host - so `buildInputs` again. See abcde for an example. Note also the `wrapProgram` call which sets `PERL5LIB` to ensure that the code in A can find the code in B at runtime.
- if A depends when it is built on source code contained in B (suppose: the build invokes a Ruby script, and B is a gem required by that script) then B must be runnable on the build system of A. Host(B) = Build(A) implies `nativeBuildInputs`, unless there is some target shenanigans. Consulting the manual, it seems that for some interpreters there is support for adding the files in B to the interpreter's search path while A is built.
- if A is a program that runs on the host, and is linked to binary static libraries provided by B, the `host` for B must be the same as for A, so my reading is that this is `buildInputs`. Note that A must be able to find B at compile time, which is handled by the CC wrapper adding appropriate flags.
- if A is a program that runs on the host, and depends on binary shared libraries provided by B, the `host` for B must be the same as for A, so this is similar to the previous case. The absolute pathname of the shared library provided by B will be embedded into the binary of A.
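Pulling those cases together, here's a hedged sketch of a derivation using all three attributes - the package names are stand-ins for the As and Bs above, not anything NixWRT actually builds:

```nix
{ stdenv, cmake, openssl, buildPackages }:
stdenv.mkDerivation {
  pname = "example";
  version = "0.1";
  src = ./.;
  # host(B) = build(A), target irrelevant: tools run at build time
  nativeBuildInputs = [ cmake ];
  # host(B) = build(A) and target(B) = build(A): a compiler for the build machine
  depsBuildBuild = [ buildPackages.stdenv.cc ];
  # host(B) = host(A): libraries linked into (or run alongside) the result
  buildInputs = [ openssl ];
}
```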
Why am I caring about this right now? I rearranged bits of NixWRT and updated it to a recent Nixpkgs revision, causing OCaml to stop building for reasons I didn't fully understand.
So, here is what I think is happening:
- the kernel is being built on x86-64 (build) for execution on MIPS (host). It doesn't generate code when it's run (er, as far as I know - at any rate it can't be configured to generate code for some other system than the one it's running on) so I don't care about target.
- to create the kernel source tree we run Coccinelle on x86-64 - so Coccinelle's `host` is the kernel's `build`. Coccinelle produces C source files, so again I don't care about `target`. This means we should use `nativeBuildInputs` to declare it as a dependency.
- Coccinelle is written in OCaml, which is a compiler and generates code for some target system according to its build options. This means that OCaml's `host` and `target` are both Coccinelle's `build` system, so we use `depsBuildBuild` to declare it as a dependency.
Clear? If this doesn't help, I invite you to consider the possibility that cross-compilation is like a burrito.
Hard pass#
Sat, 14 Aug 2021 21:31:26 +0000
I've got the key
I've got the secret
– From the Urban Cookie Collective's guide to password management
For reasons that seemed good at the time, I've written a password manager. It's a lot like pass ("the standard unix password manager") - which I have been using up 'til now - but it uses age instead of GPG to do the heavy lifting.
moss, the Maybe-Ok Secrets Store, is a 400-line Ruby script that uses only libraries provided by a default Ruby installation, plus 520 lines of testing code (Cucumber and RSpec).
Some random observations follow:
- almost all of it was test-driven, but the tests are for the most part end-to-end Cucumber tests that run the script in a subshell instead of calling classes/functions/methods directly. I was not expecting to get that far with end-to-end tests, but when all's said and done it's a pretty small system.
- There are certain deficiencies in the Cucumber step definition language that get more annoying as the test suite grows. Most notably, the use of ad hoc `@var`s to pass state from one step to the next. I don't have a solution here, I'm just complaining about the problem.
- I don't claim to be a security expert, which might make me not a good person to attempt this kind of project. I have carefully avoided rolling my own crypto by using a third party program for that, and I am quite pleased with my use of the various affordances of `Kernel.system`, `IO.popen`, `Kernel.spawn` and `Tempfile` to avoid even passing plaintext or passphrases in or out of the Ruby process. There's a lot of flexibility in that family of methods if you read the docs (a sketch of the general idea appears after this list). More information about my security-related design choices is in the relevant section of the README.
- I am simultaneously proud and ashamed of the command line parser I created by using various metaprogramming features in ways that may or may not be generally recommended. (Don't try this at work, kids - at least, not if you work for my employer; metaprogramming in company code is firmly in the "presumption of bad" bucket.)
- I didn't know, but now I do, that there is support in the Ruby standard library ('Fiddle') for calling C libraries without having to write extensions or needing a C compiler. I didn't actually need it in the end - I thought I was going to need mlock(2) before I realised it wouldn't help, and found a way to use a temporary file instead - but it's nice to know it's there. There's a sketch of this below too.
- It's a single script with no dependencies other than a Ruby installation, and this is intentional: it means you can easily install it anywhere just by copying the script. I might reconsider the "single script" constraint if/when it passes 500 lines. I am less likely to renege on "no external dependencies", because I've too often had trouble running programs that require Bundler from inside a project with its own Gemfile.
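To illustrate the popen point, a minimal sketch of the general idea - not moss's actual code: the plaintext goes straight down a pipe into age's stdin, so it never appears in argv or in shell history.

```ruby
# sketch only: encrypt a secret with age without the plaintext
# ever touching a command line or a shell
def encrypt(plaintext, recipient, output_path)
  # the array form of popen execs age directly, with no shell in between
  IO.popen(["age", "-r", recipient, "-o", output_path], "w") do |age|
    age.write(plaintext)
  end
  raise "age failed" unless $?.success?
end
```

And the Fiddle party trick, for the record - a sketch of how mlock(2) could have been called, had it helped:

```ruby
require "fiddle"

# resolve mlock from the already-loaded libc; no C extension, no compiler
mlock = Fiddle::Function.new(
  Fiddle::Handle::DEFAULT["mlock"],          # symbol address
  [Fiddle::TYPE_VOIDP, Fiddle::TYPE_SIZE_T], # int mlock(const void *addr, size_t len)
  Fiddle::TYPE_INT
)
buf = Fiddle::Pointer.malloc(4096)
warn "mlock failed" unless mlock.call(buf, buf.size).zero?
```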
It's been a long time since I wrote more than about 5 lines of Ruby for anything outside of a work context: for 'fun' projects I tend to pick languages which I don't get a chance to use 9-5. Ruby for this task was definitely less than awful, though.
Slightly too much Kodi for fun#
Wed, 04 Aug 2021 22:02:24 +0000
I'd been vacillating for a while about buying a new monitor, but eventually I pulled the hammer (is that the idiom?) on a spangly new Dell S2721SQ, which arrived yesterday and provided the incentive to look at NixElec again. Because it (the monitor) has speakers, which means I have the hardware to fix the audio issues without having to commandeer the family TV.
Second rate
I don't claim to understand how ALSA works, and Kodi's approach to ALSA is even more weird, but I did eventually make it work for 44.1kHz sources: define a fixed-rate ALSA pcm for Kodi that is hardcoded to `S16_LE` format, and then tell Kodi about it in `advancedsettings.xml`.
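The ALSA half looks something like this - a sketch from memory rather than my exact config, and the `hw:0,0` device name will certainly vary:

```
# ~/.asoundrc sketch: a pcm pinned to one rate and format for Kodi's benefit
pcm.kodi {
    type plug
    slave {
        pcm "hw:0,0"
        format S16_LE
        rate 44100
    }
}
```

The `advancedsettings.xml` half is then a matter of naming that pcm as Kodi's audio device.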
A sticky GUI mess
To the extent that Kodi can be configured through files, they're XML files. There is a `toXML` builtin in Nix, but it only generates a particular XML representation that would need XSLT to turn into files that Kodi likes - and XSLT for me is assigned firmly to the same "tried it once, not going back to that" bucket as m4 and Java applet programming.
What I really wanted is something that would let me write out (or generate!) a nested attrset describing the structure I want, and turn it, possibly via JSON, into XML. Python's dict2xml is very nearly it, but has no support for XML attributes, so I had to invent something slightly more complicated.
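The shape of the problem, roughly (an invented example, not my actual code): turn a nested attrset such as

```nix
{ advancedsettings = { loglevel = "1"; }; }
```

into

```xml
<advancedsettings><loglevel>1</loglevel></advancedsettings>
```

which is the part dict2xml already does; it's elements that need XML attributes where the slightly-more-complicated invention comes in.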
Sadly, the extent to which Kodi can be configured through files is not the full extent. Although the sources are defined in XML, the content of each source (TV shows? movies? music?) seems to be set in a SQLite database, which is another level of complexity to manage. So there's still manual twattery on the GUI to deal with.