diary at Telent Networks

Revisiting Clojure Nix packaging#

Fri, 31 Aug 2018 12:43:27 +0000

Slightly over a year ago I wrote about how to build a Clojure project with Nix, and this week I had occasion to refer back to it because I want to have a production-style dogfood build of Epsilon.

It's a bit simpler now than it was then, because in the intervening time Clojure has added the Deps and CLI tools, which give us a simple, readable EDN file in which we can list our project's immediate dependencies (see deps.edn for Epsilon) and a function, resolve-deps, that we can call to get the transitive dependencies.
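In case you haven't met it before, deps.edn is just an EDN map. A minimal, entirely made-up example (not Epsilon's actual file) looks something like this:

;; Made-up deps.edn, for illustration only.  :deps lists the immediate
;; dependencies; resolve-deps works out the transitive ones from here.
;; The :build alias is where a helper like nixtract and its own
;; dependencies could live.
{:paths ["src"]
 :deps  {org.clojure/clojure {:mvn/version "1.9.0"}
         ring/ring-core      {:mvn/version "1.6.3"}}
 :aliases
 {:build {:extra-paths ["build"]
          :extra-deps  {org.clojure/tools.deps.alpha {:mvn/version "0.5.442"}
                        org.clojure/data.json        {:mvn/version "0.2.6"}}}}}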

We embed this function call into a script which dumps a JSON file with the same information, and then whenever we add or update deps.edn we run:

$ CLJ_CONFIG=. CLJ_CACHE=. nix-shell -p clojure jre --run \
 "clojure -R:build -C:build -Srepro  -Sforce  -m nixtract generated/nix-deps.json"

(Note that the clojure command will fail silently if it cannot find a java runtime on the path, or if the current directory contains a malformed deps.edn file. Each of these problems took me a while to find - I tell you this so that you may learn from my mistakes)
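For a rough idea of what such a script can look like, here is a minimal sketch - not the actual nixtract source - assuming tools.deps.alpha for resolve-deps and org.clojure/data.json for the output:

;; Sketch only: read deps.edn, resolve the transitive dependency graph,
;; and dump it as JSON for the Nix expression to consume.
(ns nixtract
  (:require [clojure.edn :as edn]
            [clojure.data.json :as json]
            [clojure.tools.deps.alpha :as deps]))

(defn -main [out-file]
  (let [deps-map (edn/read-string (slurp "deps.edn"))
        ;; resolve-deps returns a map of library -> coordinate, with the
        ;; transitive dependencies filled in and :paths pointing at the
        ;; jars it downloaded
        lib-map  (deps/resolve-deps deps-map {})]
    (spit out-file
          (json/write-str
           (into {}
                 (map (fn [[lib coord]]
                        [(str lib) (select-keys coord [:mvn/version :paths])])
                      lib-map))))))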

The second half of the puzzle - build time - is not much changed from how we did it last time around, though I did take the opportunity to make it use the generic builder instead of making its own builder.sh.

Probably I should point out that this downloads all the dependencies into /nix/store as JARs and builds a classpath so the JRE can find them - it doesn't make an uberjar. This fits my current use case, because I want to run it on a NixOS box, and separate jars provide at least the possibility that some of them might be shared by more than one app. I will obviously have to come back to this if/when I need to build an uberjar for distribution.

Also, if you are doing advanced things with CLJS external libraries (like, Node dependencies and stuff) then this will not help and you are on your own. There is a name for the (CL)JS dependency ecosystem: it's a compound word in which the first part is the collective noun for a collection of stars, and the second part rhymes with "duck".

Week excuse#

Wed, 12 Sep 2018 23:26:51 +0000

I hadn't realised, until I checked the site, that I'd completely forgotten to blog last week.

Nor had I been entirely aware, until I checked on Mastodon (https://mastodon.social/web/accounts/28445), that it's been an entire month since I put NixWRT down to make Epsilon happen. Some weekend hack that turned out to be. Bother.

On the bright side, I haven't needed to launch K-9 for a bit over a week and I am learning reagent/re-frame/FRP. On the less bright side, I don't have a lot to blog about right now.

Until next time, then.

re-frame / I'm gonna live forever#

Wed, 19 Sep 2018 21:18:11 +0000

An unusual - and potentially ill-advised - method and apparatus for the querying of an external database from a re-frame Single Page Application.

Bit of background

Re-frame is described by its author as "a pattern for writing SPAs in ClojureScript, using Reagent" and "... impressively buzzword-compliant". Both these things are true.

The documentation talks a lot about "dominoes": there is a six-step loop - event dispatch, event handling, effect handling, query, view, DOM - that goes around and around.

Then the user clicks on things, and DOM event handlers are called, and the handlers call dispatch, and the loop goes around again.
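To make the first couple of dominoes concrete, a tiny sketch (invented names, loosely following the timer example in the re-frame docs):

;; Domino 1: something - a click handler, a timer tick - dispatches an event.
(rf/dispatch [:timer (js/Date.)])

;; Domino 2: an event handler computes the new application state from the old.
(rf/reg-event-db
  :timer
  (fn [db [_ new-time]]
    (assoc db :time new-time)))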

Signal graph

When you read the example application, the query functions you see are so simple that you might wonder why we even bother with them -

;; 4 lines of code to register a query called :time that returns
;; an app-db key of the same name, what is the point of this?
(rf/reg-sub
  :time
  (fn [db _]     ;; db is current app state. 2nd unused param is query vector
    (:time db))) ;; return a query computation over the application state
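For completeness, this is roughly how a view consumes that query (a sketch; the component name is invented):

;; A view (domino 5) derefs the subscription; Reagent re-renders the
;; component whenever the value of the :time query changes.
(defn clock []
  [:div.clock (str @(rf/subscribe [:time]))])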

but where it gets interesting is that a subscription handler doesn't have to read app-db directly: it can take other subscriptions as its inputs. So we actually have a query graph with the app-db at one end of it and the view functions at the other end, and only the relevant bits of it are computed when needed. Hold onto that thought.
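For example, a chained query might look something like this (the todo-flavoured names are invented, but the :<- input-signal sugar is standard re-frame):

;; This handler never reads app-db itself: its inputs are the current
;; values of two other subscriptions, and it recomputes only when one of
;; them changes.
(rf/reg-sub
  :visible-todos
  :<- [:todos]
  :<- [:showing]
  (fn [[todos showing] _]
    (filter (case showing
              :done   :done?
              :active (complement :done?)
              identity)
            todos)))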

Prior art

The re-frame documentation has an incredibly useful page on Subscribing to external data which describes two sensible ways to do it and warns against a third:

With subscriptions

If we start from the premise that components must always get their data from a subscription handler, the obvious route is to create a subscription handler which kicks off an async request to the external data source, then returns a reaction wrapping some path within the app-db where the results will be found when ready. The async request, when finished, runs a callback function which stores the results in that path.

Bit of a learning curve here if you've only previously looked at the example app, because you won't have learned what a reaction is. A reaction is "a macro which wraps some computation (a block of code) and returns a ratom holding the result of that computation", and a ratom (short for react atom) is like a regular Clojure atom but with extra gubbins so you can subscribe to it and be told every time it changes.
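Roughly, and in my words rather than the documentation's, that approach looks like this (invented event and query names, reusing the get-search-results-from-server helper that appears further down):

;; The subscription handler fires the request, then returns a reaction over
;; the app-db path where the results will eventually appear; the callback
;; dispatches an event whose handler writes them into that path.
(rf/reg-sub-raw
  :remote-search-result
  (fn [app-db [_ term]]
    (get-search-results-from-server
     term
     (fn [[ok? results]]
       (when ok?
         (rf/dispatch [:search-result-arrived term results]))))
    (reaction (get-in @app-db [:search-results term]))))

(rf/reg-event-db
  :search-result-arrived
  (fn [db [_ term results]]
    (assoc-in db [:search-results term] results)))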

With events

With a different premise, we get a different answer. Your requirement to look something up externally is most probably driven by some user command which is mapped onto a re-frame event. In this approach, the event handler is responsible for getting the data it needs to handle the event, and for pushing it into the app-db. Probably this means you have one event to initiate the transfer, which dispatches another event when the server eventually responds.
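Sketched out (invented event names, again reusing the helper defined further down; a real application would more likely register the XHR as a proper effect - e.g. via day8.re-frame/http-fx - instead of firing it inline), it might look like this:

;; One event starts the request; a second records the answer in app-db.
(rf/reg-event-fx
  :search-requested
  (fn [{:keys [db]} [_ term]]
    ;; side effect inline for brevity; the callback re-enters the loop by
    ;; dispatching :search-arrived when the server answers
    (get-search-results-from-server
     term
     (fn [[ok? results]]
       (when ok?
         (rf/dispatch [:search-arrived results]))))
    {:db (assoc db :search-pending? true)}))

(rf/reg-event-db
  :search-arrived
  (fn [db [_ results]]
    (-> db
        (assoc :search-results results)
        (dissoc :search-pending?))))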

Using React lifecycle methods (don't do it)

Views should be dumb. Don't do this.

A third way (unless it is the fourth way)

What if we said: the external data lookup is notionally a "pure function" (let us suppose that the server gives the same response every time it is provided with the same inputs, and for the moment let us handwave over error conditions), so it should sit in the middle of the signal graph somewhere and provide its output as a subscribable value instead of scribbling into app-db?

The reason I started doing this was that I was trying to write an XHR subscription handler per the first approach above which subscribed to app-db so that it could find the query terms, and because it was also writing to app-db, it was triggering itself. Probably I was doing something wrong (it occurs to me now that it should probably have subscribed to a query of only the search term and not to the entirety of app-db) but that thought did not occur to me at the time, so this pushed me into looking for another way.

So let me start by showing some example code, and then I can try explaining it:

(defn get-search-results-from-server [term handler]
  (ajax-request
    {:method          :get
     :uri             "/search"
     :timeout         5000
     :params          {:q term :limit 25}
     :format          (ajax/url-request-format)
     :response-format (ajax/json-response-format {:keywords? true})
     :handler         handler}))

(rf/reg-sub-raw
  :search-result
  (fn [db _]
    (let [out (reagent/atom [])]
      (ratom/run!
       ;; deref the subscription so run! re-runs this block whenever the
       ;; search term changes
       (let [term @(rf/subscribe [:search-term])]
         (when-not (str/blank? term)
           (reset! out [])
           (get-search-results-from-server
            term
            (fn [[ok? new-value]]
              (and ok? (reset! out new-value)))))))
      ;; deref out so that subscribers see the results themselves, not the atom
      (reaction @out))))

What are we doing here? The value we want to send on through the signal graph is not returned by anything we call; it's provided as an input to our ajax callback function. This means we can't use the reaction macro, so we have to create a reaction by hand and take care ourselves of updating the value it wraps. To do this we use reagent.ratom/run!, which (to my limited understanding) is kinda sorta half of the reaction macro - like reaction it re-runs its body every time the subscriptions it derefs change, but unlike reaction we have to call reset! on our ratom every time we want to provide a new value downstream.
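On the consuming side nothing special is needed: a view subscribes to it like any other query (a sketch, with an invented component and made-up result keys):

;; The view neither knows nor cares that these values come from an XHR
;; rather than straight out of app-db.
(defn search-results-view []
  [:ul
   (for [hit @(rf/subscribe [:search-result])]
     ^{:key (:id hit)} [:li (:title hit)])])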

On the whole I think I like this pattern, at least for services which (we can reasonably pretend) are functional - i.e. they give the same answer every time when called with the same inputs. The external service doesn't put stuff in app-db, which I guess might be an issue if it needs to be merged with other data from other sources or if multiple downstream subscriptions want to use it (but in that case why can't they subscribe to it?) but I haven't/can't see how that hypothetical would become real.

Come on then. Tell me why it doesn't work.