Back after the break#
Wed, 06 Jan 2010 20:50:17 +0000
If anyone reading this is presently as disillusioned with
computing/hacking/coding as I was a few months ago, I cannot recommend
Peter Seibel's "Coders at Work" highly enough.
Why my discontent? I don't trust anyone who elevates a programming
practice (or indeed, almost anything else) to the status of a
religion, and it seemed to me that "Test Driven Development" shares
the important characteristics of UML, J2EE, pair programming, and
"goto considered harmful" in that it promotes a high-ceremony dogma
that is on the face of it really rather unlikely to save you time, on
the professed basis mostly that it will make you a Better Person.
And, presumably, get your reward in heaven.
But herein lies the problem. If you want to write stuff for a Unixoid
platform right now, after discounting the really tedious heavyweight
stuff like C and its variants your choice is basically Ruby. Or, at a
pinch, Javascript. Common Lisp is - for all that I spent the last
decade or so hacking on SBCL (more about that soon) - a platform not
just a language, and a platform that nobody really wants to stand on;
Perl is basically moribund; Python is dull and patronises its users
(ESR likes it, which fits because he is also dull and patronising);
the Schemes are fragmented and don't have momentum; and Java and C# -
well, honestly they're out of the realms of everyday religion and
straight into the flagellation: not my kink at all.
(Did I miss anyone?)
Anyway, Ruby is actually quite OK apart from the syntax (as a Lisper,
I don't actually care what the syntax is, that it has one is the
problem), it'll have acceptable performance in 1.9, and the lack of
macros/programs-as-data is mostly - I don't say entirely, but mostly
- made up for by a lexically succinct notation for blocks, which do
most of what you might want to do with them. (It's actually a minor
source of amusement to me that Ruby folk are so mad keen on DSLs when
they have to jury-rig them all into method syntax, but that's a story
for another time). Anyway, if I can still remember my original point,
I think it was that Ruby is currently the nearest thing around to a
programming language that is both interesting and useful, but what
about the TDD weenies?
So, back to Coders at Work. I bought it after seeing that JWZ's
interview had generated a certain amount of controversy with the unit
testing dweebs, and reading it over the space of a couple of days I
was immensely cheered to find that none of the other interviewees -
interesting people, great hackers and luminaries among them - were
blind adherents of that particular orthodoxy either. And, bizarre and
pathetic as it sounds, that gave me the inoculation against TDD mind
control that I thought I'd need in a group of Ruby programmers, so I
have started (1) making some kind of an effort to get to the London
Ruby Users Group meetings, and (2) making a good-faith attempt to
actually learn the language.
LRUG? Well, the first meeting was all about reinforcing the
stereotype: a "Coding Dojo", or "what do you get if you try to scale
pair programming". Answer is, you get mob programming: one active
pair and ten to fifteen other onlookers. Which doesn't mean it wasn't
thought-provoking, of course.
I turned up to the second meeting principally for the functional
programming talk, which
seemed to create quite a buzz in the pub (that's a good thing) and
left me quite reassured that at least some part of the Ruby community
"get it".
And learning the language? I'm writing a program. It has bits of
Ruby, bits of Javascript, bits of audio coding (fairly noddy audio
coding, as getting Ruby to run fast enough for anything more
interesting looks like a challenge) and will eventually embed webkit.
Yes, I'm writing an mp3 player/library manager, just on the basis that
the downgrade from Amarok 1.4 to 2 in Debian has left me without a
working music player that will export playlists to my phone, whereas I
already have a Twitter client. More on that as it progresses.
I'm never quite sure what "normal" is around here, but
hacking-and-blogging service has been resumed.
[ posted a day later. But look what else I found since - Eleanor McHugh's On Agility rant. This is not exactly what I wanted to say about TDD and BDD, but it has themes I'm going to pick up on ]
OSSes for Courses#
Mon, 11 Jan 2010 18:30:54 +0000
I'd like to talk today about the state of audio output APIs that are
implemented on Linux and accessible to Ruby, but there aren't any.
That's an oversimplification, I know. Oliver is now quite happily
streaming PCM data to my headphones, so it's not even a good
oversimplification. But whereas any of portaudio, pulseaudio, jack or
alsa are the conventionally accepted routes to getting the speakers to
squeak, what did I end up using? OSS. Yes, that's right, the Evil!
Bad Deprecated! API invented by proprietary coders and left
unmaintained for years that doesn't even support AC3-encoded 7.1 HDMI
audio with reverse-phased channels and unlimited software mixing.
Because ... well, because it Just Works. Open a file, send it a few
ioctls for sample format and rate, and for the all-important "we don't
care about latency that much" switch that lets interpreted code keep
up without stuttering, then just chuck PCM data at it until we're out
of PCM data. And it's quite well documented, and because it's all
file-based it plays nice with Ruby's green threads: I just set
the dsp stream to non-blocking mode and the audio keeps on going even
while it's serving HTML and doing JSON calls.
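In case that sounds too good to be true, the whole dance fits in a
screenful. This is a minimal sketch rather than Oliver's actual code -
the soundcard.h constant values are the usual Linux ones (check your
own headers), the raw 16-bit stereo file is hypothetical, and I've
left out the latency switch and the non-blocking fiddling mentioned
above:

# Usual Linux values from <linux/soundcard.h>; hard-coded because Ruby
# doesn't define them anywhere convenient.
SNDCTL_DSP_SETFMT   = 0xC0045005
SNDCTL_DSP_CHANNELS = 0xC0045006
SNDCTL_DSP_SPEED    = 0xC0045002
AFMT_S16_LE         = 0x00000010

dsp = File.open("/dev/dsp", "w")
dsp.sync = true                 # don't let Ruby buffer the PCM writes

# Each ioctl wants a pointer to an int, so pack the value into a buffer.
[[SNDCTL_DSP_SETFMT,   AFMT_S16_LE],
 [SNDCTL_DSP_CHANNELS, 2],
 [SNDCTL_DSP_SPEED,    44100]].each do |cmd, value|
  dsp.ioctl(cmd, [value].pack("i"))
end

# Then just chuck PCM data at it until we're out of PCM data.
File.open("track.raw", "rb") do |pcm|
  while (chunk = pcm.read(4096))
    dsp.write(chunk)
  end
end
dsp.close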
Let me know when the massive ALSA DLL - or any of the plethora of
other Modern audio libraries - becomes that simple to use (and
preferably becomes usable without resorting to C), and I might give
them another go. Yes, I am aware that under the hood the OSS API is
implemented using ALSA drivers: how ironic.
Oh well. This isn't supposed to be the computer contrarian's blog,
but some days it just feels like it.
http://www.googlefight.com/index.php?lang=en_GB&word1=alsa+audio&word2=oss+audio
Streaming XHR with Ruby and Mongrel#
Wed, 13 Jan 2010 10:56:29 +0000
An entirely pun-free post title today, and still it sounds like something you'd see a vet about. Hey ho.
Suppose you are writing a web app in which the server needs to update
the client when things change, and you don't want to do it by polling.
It turns out there is a technique for this that is probably more than
two years old: you make the client do an XMLHttpRequest (aside: that
name is almost as bad as its capitalization) to the server, and then
the server sends its response v e r y  s l o w l y. The client's
XmLhTtPReQuEst object will get an onreadystatechange event every time
a new packet arrives, and just has to pull the new data out of
xhr.responseText and decide what to do with it.
Well, that's the theory. There are a variety of more-or-less-documented
bugs and pitfalls to do with browser compatibility, as there always are
(google "Comet"; there are lots of resources, none of which I have the
personal experience to recommend), but the new wrinkle I observed when
doing this yesterday was a quarter-of-a-second lag between the server
sending and the client receiving. Odd. Ruby can't be that slow, can it?
Well, no, it's not. After monkeying with wget and netcat and wireshark
and mostly failing to find out what was going on, I did
strace -e setsockopt ruby server.rb
and connected to it, and lo, what should I find but that something in
Ruby or something in Mongrel was setting the TCP_CORK socket option:
setsockopt(5, SOL_TCP, TCP_CORK, [1], 4) = 0
TCP_CORK (since Linux 2.2)
       If set, don't send out partial frames.  All queued partial
       frames are sent when the option is cleared again.  This is
       useful for prepending headers before calling sendfile(2), or
       for throughput optimization.  As currently implemented, there
       is a 200 millisecond ceiling on the time for which output is
       corked by TCP_CORK.  If this ceiling is reached, then queued
       data is automatically transmitted.  This option can be
       combined with TCP_NODELAY only since Linux 2.5.71.  This
       option should not be used in code intended to be portable.
Now I don't know where it's being set - a cursory grep of the mongrel
sources says probably not there - but it's simple enough to unset
again. So, here's one I made earlier. Note that you can't use the
response.start method (or at least, I don't see how): you have to
reach a little deeper into Mongrel::HttpResponse.
class StatusHandler < Mongrel::HttpHandler
  def process(request, response)
    response.status = 200
    response.send_status(nil)
    response.header['Content-Type'] = "application/x-www-form-urlencoded"
    response.send_header
    # something inside Ruby or inside Mongrel is setting TCP_CORK,
    # which is bad for latency.  I suspect Ruby C code, because
    # the interpreter complains there is no definition for Socket::TCP_CORK
    # :#define TCP_CORK 3   /* Never send partially complete segments */
    response.socket.setsockopt(Socket::SOL_TCP, 3, 0)
    response.socket.setsockopt(Socket::SOL_TCP, Socket::TCP_NODELAY, 1)
    response.write("# gubbins for webkit bug " + ("." * 256) + "\n")
    response.write("# stuff follows\n")
    300.times do
      response.write("stuff\n")
      response.socket.flush
      sleep 30
    end
    response.done
  end
end
We limit the response to 300 lines in case of browser timeout or
connection interruption, or just to stop the client-side memory going
up unboundedly as responseText grows without let or limit. It's simple
for the javascript to kick off another handler when this one dies.
For completeness, here's some client-side code to go with it
// XXX we made the_req global just so that we can look at
// what's going on in firebug. It's not required in normal use
var the_req;
function json_watch_stream(url,callback) {
var req =new XMLHttpRequest();
the_req = req;
req.open("GET",url,true);
req.last_seen=0;
req.onreadystatechange = function() {
if(req.responseText) {
callback(req.readyState,req.status,
req.responseText.substr(req.last_seen))
req.last_seen=req.responseText.length;
}
};
req.send(null);
}
function json_start_status_receiver (){
json_watch_stream
('/status',
function(ready,status,text) {
if(ready==4) {
// server response concluded, need to start again
json_start_status_receiver ();
}
text.split("\n").map(function(line) {
if(!line) { return; };
var data=line.substr(1);
switch(line[0]) {
case '#': break;
case 'O': update_track_timer(data); break;
case 'P': update_track_number(data); break;
case 'S': stop_track_timer(); break;
default:
console.log("status stream: unrecognised flag ",
line[0],data);
}
});
});
}
Testes on testing#
Sat, 16 Jan 2010 00:42:37 +0000
Lest the reader assume from my previous post that I'm against
automated testing: no, I'm not. In fact, Oliver hacking time over the
last couple of days has been all about writing tests for the
playqueue and turning the hacked-together OSS interface into something
that can pass them.
I offer for your consideration, though, that the benefits of writing a
test suite are not so much "having an executable specification" or
even catching regressions as the design pressure that comes from
making the software easy to test: it encourages the programmer to (1)
decouple interfaces so that units can be tested without a plethora of
complicated test doubles (stubs, mocks), which may themselves contain
bugs, and (2) reduce their dependence
on complicated state. The easiest code to test is the purely
functional, and happily this is also the easiest code to statically
reason about (thus reducing our need to write tests in the first
place).
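As a toy illustration (hypothetical names, nothing to do with Oliver's
actual playqueue): a pure function needs no doubles and no set-up at
all, just inputs and expected outputs.

require 'test/unit'

# Pure: the next track index depends only on its arguments, not on any
# player state, so the tests below need no stubs, mocks or fixtures.
def next_track(current, queue_length, repeat)
  return nil if queue_length.zero?
  nxt = current + 1
  nxt < queue_length ? nxt : (repeat ? 0 : nil)
end

class NextTrackTest < Test::Unit::TestCase
  def test_wraps_around_when_repeating
    assert_equal 0, next_track(2, 3, true)
  end

  def test_stops_at_the_end_otherwise
    assert_nil next_track(2, 3, false)
  end
end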
There are other considerations: I object to the false confidence of
"our change passes tests, so we can feel good because it must be
correct" - though I suspect I'm fighting a straw man there anyway - and
I am not sure that even attempting to write tests for some classes of
bug (say, race conditions) is a more productive use of one's time than,
say, Thinking Very Hard at the program to be tested. And there's the
whole issue of whether the executable specification (probably written
by a computer programmer) is a fair reflection of the business
requirement (if written by the "customer", probably much less precise
and probably not even consistent), but by that point we just have to
accept that, well, TDD won't fix world hunger either. Chiefly my
message is that writing (and more importantly, maintaining) tests is
neither Free nor axiomatically Good, and therefore is only to be
commended when there is some expected benefit from it.
I leave you with two final thoughts that are vaguely related but don't fit into the overall argument anywhere else...
First: correct me if I'm wrong here, but
- Test-Driven Development is what, back in the XP days, they called
"test-first programming": you write the unit tests before the code
that they test. In Ruby, Test::Unit is/was the tool for running
unit tests.
- Behaviour-Driven Development is an exercise in redefining concepts,
to change the emphasis from unit tests (which typically operate on
a particular class) to functional tests (which may span several
classes, and address user stories or whole-system behaviour). In
Ruby, RSpec was developed to this end.
- Missing the point, therefore, must be the practice of editing all
  your Test::Unit cases into RSpec syntax to pretend more
  convincingly to the new orthodoxy. They don't magically turn into
  functional tests just because test_barf_when_value_negative is now
  written as it "should barf when value negative" (the two spellings
  are sketched below). Er? Do they?
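To see what I mean, here's a hypothetical Value class with the same
negative-value check spelled both ways - Test::Unit and the RSpec
syntax of the day. In practice they'd live in separate files; this is
just a sketch:

require 'test/unit'
require 'spec'   # the RSpec 1.x gem of the era

class Value
  def initialize(n)
    raise ArgumentError, "negative" if n < 0
    @n = n
  end
end

# The Test::Unit spelling
class ValueTest < Test::Unit::TestCase
  def test_barf_when_value_negative
    assert_raise(ArgumentError) { Value.new(-1) }
  end
end

# The RSpec respelling: same unit, same assertion, different vocabulary
describe Value do
  it "should barf when value negative" do
    lambda { Value.new(-1) }.should raise_error(ArgumentError)
  end
end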
Second: a powerful driver for a good test suite, in my experience
with SBCL, is checking that the program is not broken by
environmental changes (new OS, new CPU architecture, whatever). When
working on that project I saw test suite breakages far more often in
those circumstances than I ever did from hacking the code they were
supposed to protect.
Changing OSS in mid-stream#
Tue, 19 Jan 2010 22:04:11 +0000
It was always a given that the Oliver OSS (that's "/dev/dsp" to you) interface would eventually need to be swapped out for something else, just because it doesn't work on many machines other than mine.
Now that Oliver has reached the point where it functions as a
rudimentary music player for general use (i.e. it presents me with a
list of my music on the left that I can drag into the playqueue on the
right), I realise that "eventually" is sooner than I was expecting,
simply because it opens the dsp exclusively and forces all other audio
programs (e.g. youtube vids) to fail. In short, it doesn't work
acceptably well on my machine either. Fail. I'd forgotten how
miserable Linux audio used to be.
In other news
- webkit doesn't support javascript 1.7, which is a grave disappointment as this is the one with proper lexical scope (via let)
- minor edit to previous post for comprehensibility, as re-reading it I realised it made more sense in my head than on the page.
- as tweeted: "there is a fine line between genius and idiot, and I'm still not sure on which side of it to place the jquery api". Selecting document elements using CSS syntax, yes, great idea. Returning those elements in an object sufficiently similar to an array that Firebug prints it as square brackets, but then making it respond to
map
in an entirely different way to the real Array - no, stupid idea. Hours of my life I won't get back
Oh. Ouch, Sorry#
Fri, 22 Jan 2010 23:03:51 +0000
I've only just spotted that this host didn't come back cleanly after
last night's power-cut-induced outage. Well, here it is now.
In the meantime, I have found the stunningly attractive Ruby FFI gem
and written some rather gross code to play music through PortAudio
with it. Unfortunately it turns out that PortAudio's
Pa_OpenDefaultStream function is buggy (either the code or the spec)
and it defaults to OSS anyway. Still, a step closer.
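For the curious, the gross code looks roughly like this - a minimal
sketch rather than Oliver's actual source, binding just enough of
PortAudio's blocking API through FFI to push a buffer of silence at
the default output device:

require 'ffi'

module PortAudio
  extend FFI::Library
  ffi_lib 'portaudio'

  PA_INT16 = 8  # paInt16 from portaudio.h

  attach_function :Pa_Initialize, [], :int
  # PaStream **stream, int inputChannels, int outputChannels,
  # PaSampleFormat format, double sampleRate, unsigned long framesPerBuffer,
  # PaStreamCallback *callback, void *userData
  attach_function :Pa_OpenDefaultStream,
                  [:pointer, :int, :int, :ulong, :double, :ulong,
                   :pointer, :pointer], :int
  attach_function :Pa_StartStream, [:pointer], :int
  attach_function :Pa_WriteStream, [:pointer, :pointer, :ulong], :int
  attach_function :Pa_Terminate, [], :int
end

PortAudio.Pa_Initialize

stream_ptr = FFI::MemoryPointer.new(:pointer)
PortAudio.Pa_OpenDefaultStream(stream_ptr, 0, 2, PortAudio::PA_INT16,
                               44100.0, 1024, nil, nil)
stream = stream_ptr.read_pointer

PortAudio.Pa_StartStream(stream)

# One second of 16-bit stereo silence, written 1024 frames at a time.
silence = FFI::MemoryPointer.new(:short, 1024 * 2)
(44100 / 1024).times { PortAudio.Pa_WriteStream(stream, silence, 1024) }

PortAudio.Pa_Terminate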