Simple minds
Thu, 07 Dec 2023 22:52:36 +0000
I responded to a LinkedIn post about refactoring code the other day, referencing the J. B. Rainsberger "Simple Code Dynamo", and I don't think I said all I wanted to say: specifically, about why I like it.
A common refrain I hear when I talk with other developers about refactoring is that it has low or no value because it's subjective. You like the inlined code, I like the extracted method. He likes the loop with a repeated assignment, they like the map/reduce. Tomato/tomato. Who's to say which is objectively better? If you consult a catalog of refactoring techniques you'll see that approximately half of them are inverses of the other half, so technically you could be practicing "refactoring" all day and end up with the code exactly where you started. And that's not delivering shareholder value/helping bring about the Starfleet future (delete as applicable).
What does the Simple Code Dynamo bring to this? We start with a definition of "Simple" (originally from Kent Beck):
- passes its tests
- minimises duplication
- reveals its intent
- has fewer classes/modules/packages
> [Kent] mentioned that passing tests matters more than low duplication, that low duplication matters more than revealing intent, and that revealing intent matters more than having fewer classes, modules, and packages.
This is the first reason I like it: *it gives us a definition of "good"*. It's not an appeal to authority ("I made it follow the 'Strategy' pattern, so it must be better now") or a long list of "code smells" that you can attempt to identify and fix, it's just four short principles. And actually for our purposes it can be made shorter. How?
Let's regard rule 1 ("passes its tests") as an invariant, pretty much: if the code is failing tests, you are in "fix the code" mode, not in "refactor" mode. So we put that to one side.
What about rules 2 and 3? There has been much debate over their relative order, and the contention of Rainsberger's article is that it turns out not to matter, because they reinforce each other:
- when we work on removing duplication, often we create new functions with the previously duplicated code and then call them from both sites. These functions need names and often our initial names are poor. So ...
- when we work on improving names, we start seeing the same names appearing in distant places, or unrelated names in close proximity. This is telling us about (lack of) cohesion in our code, and fixing this leads to better abstractions. But ...
- now that we're working at a higher level of abstraction, we start noticing duplication all over again - so, "GOTO 10", as we used to say (the sketch below walks one lap around this loop).
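To make that loop concrete, here is a minimal, contrived sketch in Python. None of it comes from Rainsberger's article; the names (`register_user`, `validate_email`, and so on) are invented purely for illustration.

```python
# One lap around the loop: duplication -> poor name -> better name -> new abstraction.

# Step 0: the same validation logic sits at two call sites.
def register_user(form: dict) -> str:
    if "@" not in form.get("email", ""):
        raise ValueError("invalid email")
    return f"registered {form['email']}"

def invite_colleague(form: dict) -> str:
    if "@" not in form.get("email", ""):
        raise ValueError("invalid email")
    return f"invited {form['email']}"

# Step 1: we remove the duplication, and the first name we reach for is poor.
def check(form: dict) -> None:
    if "@" not in form.get("email", ""):
        raise ValueError("invalid email")

# Step 2: improving the name reveals that the function has no business
# receiving a whole form - it only cares about one field.
def validate_email(address: str) -> None:
    if "@" not in address:
        raise ValueError("invalid email")

def register_user_2(form: dict) -> str:
    validate_email(form.get("email", ""))
    return f"registered {form['email']}"

def invite_colleague_2(form: dict) -> str:
    validate_email(form.get("email", ""))
    return f"invited {form['email']}"

# Step 3: from this higher level we can now see a new duplication -
# both callers do "validate a field, then act on it" - and so, GOTO 10.
if __name__ == "__main__":
    print(register_user_2({"email": "ada@example.com"}))
    print(invite_colleague_2({"email": "grace@example.com"}))
```

The interesting moment is step 2: improving the name is what exposes that the function was taking the wrong argument, and shrinking the argument is what surfaces the next round of duplication.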
You should read the article if you haven't, because the point of this blog post is not to paraphrase it but to say why I like it - and this is the second, and in some ways more powerful, reason: *it gives us a place to start*. Instead of being reduced to looking despairingly at the code and asking "where do I even begin?" (often followed 15 minutes later with "I want to kill it with fire"), we have an in. Fix a poor name. Extract a method to remove some duplication. You can't pick the wrong problem to attack first, because they're both facets of the same problem.
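And "fix a poor name" really can be as small a first move as this (again a hypothetical example, not code from the article):

```python
# Before: the name and parameters say nothing about intent.
def process(d, n):
    return [x for x in d if x["age"] >= n]

# After: same behaviour, but the code now states what it is for.
def people_at_least_age(people, minimum_age):
    return [person for person in people if person["age"] >= minimum_age]
```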
For me this is, or was, liberatory, and I share it with you in case it works for you too.
(For completeness: yes, we have basically ignored rule 4 here - think of it as a tie-breaker.)