Recently, I’ve been in conversations where people throw some doubt on DRY. In the cloud native, microservices mode of operating, where independent teams are chugging along, mostly decoupled from other teams, duplicating code and functionality tends to come more naturally, even necessarily. And the benefits of DRY (reuse, and reducing the bugs and inconsistency that come from multiple implementations of the same thing), in theory, are no longer worth the effort put into DRYing off.
That’s the theory a handful of people are floating, at least. I have no idea if it’s true. DRY is such an unquestionable tenet of all programming think that it’s worth tracking its validity as new modes of application development and deployment are hammered out. Catching when old taboos flip to new truths is always handy.
In programming, we’ve long had the concept of DRY: don’t repeat yourself. It encourages developers to reuse existing code rather than writing new, often duplicate code. You should have just one concept of a user, one way of parsing a string, and, more realistically, a generalized path for doing something like invoicing that gets customized as needed.
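As a minimal sketch of that “one way of parsing a string” idea (all names here are made up for illustration), DRY says to extract a single implementation rather than letting each feature parse the same thing its own way:

```python
# Hypothetical example: two features both need to split a "Last, First"
# customer name. Without DRY, each team writes its own parser and the
# implementations drift (one strips whitespace, one doesn't).

def parse_name(raw):
    """The single, shared way to parse a 'Last, First' string."""
    last, first = raw.split(",", 1)
    return first.strip(), last.strip()

# Every feature calls the one helper instead of re-implementing it:
print(parse_name("Presley, Elvis"))  # ('Elvis', 'Presley')
print(parse_name("Cash,  Johnny "))  # ('Johnny', 'Cash')
```

The point isn’t the parsing itself; it’s that a fix or behavior change lands in one place instead of N.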
This idea of DRY bleeds into other parts of the software life-cycle: we want to consolidate our build and pipeline scripts, have one place that specifies common application configuration, and so on.
DRYing out Old OO Concepts
When I was learning programming, the big thing was object-oriented design. What that was exactly was hard for my 17-year-old self to understand, but eventually, by doing it, you figure out that there are some core concepts you put into practice: abstraction and encapsulation. (Another one that popped up in the WS-Deathstar days is “message passing,” but that’s a story for another time.)
Abstraction was the quest to be all Platonic and categorize things into their primary form. One typical example was a dog. There are many types of dogs, so you might have a Dog object that Hound-dog inherits from. But wait, dogs are just a type of animal, more precisely a type of mammal, so you should have an Animal, then a Mammal, then a Dog, then a Hound-dog. You can do the reasoning to see how you’d shift this over to something more useful like invoices.
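That taxonomy, sketched as a class chain (a hypothetical illustration, not a recommendation):

```python
class Animal:
    alive = True

class Mammal(Animal):
    warm_blooded = True

class Dog(Mammal):
    def speak(self):
        return "woof"

class HoundDog(Dog):
    # The specialized subclass tweaks behavior but inherits everything else.
    def speak(self):
        return "baying " + super().speak()

elvis = HoundDog()
print(elvis.warm_blooded)  # True, inherited all the way from Mammal
print(elvis.speak())       # baying woof
```

Swap the animals for Invoice, TaxedInvoice, and RecurringInvoice and you have the era’s standard architecture diagram.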
The reason you did abstraction, I think, was to reduce duplication in code, encouraging “reuse,” as Jeffery Hammond reminded recently. “Reuse” had both an economic and a quality benefit. Programming was and still is expensive, so you want to speed up the act of coding as much as possible: if you can reuse some existing code rather than writing more, money!
Reuse increased quality because, by encouraging people to avoid duplicating code, you removed the chance of them writing in bugs or inconsistent implementations. It also gave architects (as they were so often called back then) more control to enforce best practices/their view: they could write the higher-level functions (“all mammals have warm blood and give live birth” – [hey, except those other ones, I guess we need to monkey-patch in platypuses, erm, so…]).
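That parenthetical is the classic failure mode of abstraction: the architect bakes a “universal” rule into the base class, and then the platypus shows up. A hypothetical sketch of the workaround:

```python
class Mammal:
    # The architect's "best practice," enforced at the top of the hierarchy.
    warm_blooded = True
    gives_live_birth = True

class Dog(Mammal):
    pass  # happily conforms to the architect's worldview

class Platypus(Mammal):
    # The exception that breaks the abstraction: override the "universal"
    # rule rather than redesigning the whole hierarchy.
    gives_live_birth = False

print(Dog().gives_live_birth)       # True
print(Platypus().gives_live_birth)  # False, the polite monkey-patch
```

Every hierarchy eventually collects a few platypuses, which is part of why the reuse promise was harder to cash in than the diagrams suggested.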
Encapsulation was always a weird concept. To me, it essentially meant: don’t worry about how I do this, I am a black box of wizardry and you should be happy about the outputs I give to your inputs. Piss off, and go work on something else. Again, the benefits of this came down to reducing cost (it’s expensive to know everything) and control (don’t tell me how to implement my life, just take what you get).
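Encapsulation as described above, in a hypothetical sketch: the caller sees inputs and outputs, and the wizardry stays inside the box.

```python
class InvoiceTotaler:
    """A black box: give it line items, get a total back.
    How it rounds, taxes, or caches is none of the caller's business."""

    def __init__(self, tax_rate):
        # Leading underscore is Python's "don't worry about how I do this."
        self._tax_rate = tax_rate

    def total(self, line_items):
        subtotal = sum(line_items)
        return round(subtotal * (1 + self._tax_rate), 2)

bill = InvoiceTotaler(tax_rate=0.08)
print(bill.total([10.00, 4.50]))  # 15.66, and the caller never sees why
```

The control angle is right there in the design: the implementer is free to change the internals tomorrow, and callers can’t complain as long as the outputs hold.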
I glossed over this frame – which leaves a lot out, maybe even straw-persons it – because I think both concepts inform an uber concept in programming: DRY.
Sorry I repeated myself so much, I didn’t have time to do otherwise
In a microservices-themed application, the ideas of reuse are much less important than speed of feature delivery. My way of describing the steak, not the pan, on this is that you do microservices when your priority is speed of deployment as a means to create better, more useful software. Your goal is to get capabilities (builds, releases, etc.) out the door as quickly as possible so you can run through a small batch process and learn what works and doesn’t work. There’s also the business benefit of out-innovating the competition.
Of course, the stuff has to stay up and running. “User would like the software to work” is always the first post-it note. Much of this DevOps stuff and, to be frank, the life-style my company sells, is about that story.
Back to questioning DRY, though: in the microservices mode of operating, development teams are encouraged to be largely disconnected from other teams. They want to break dependencies on the road-maps and work-flows of other teams and just focus on kicking out releases on their own schedule. So while it may be easy (and economical) to apply DRY to libraries and frameworks (you probably shouldn’t be writing your own URL creator and parser), DRY applied “higher up” can become grit in your engine: it could slow you down and introduce dependencies on other teams that also slow you down.
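The low end of that spectrum is the easy part to agree on: reuse the battle-tested library instead of hand-rolling your own URL handling. In Python, for instance, the standard library already covers it:

```python
from urllib.parse import urlencode, urlparse

# DRY at the library level: don't write a URL parser per team.
url = urlparse("https://example.com/invoices?id=42")
print(url.netloc)  # example.com
print(url.path)    # /invoices
print(url.query)   # id=42

# ...and the same goes for building query strings:
print("https://example.com/invoices?" + urlencode({"id": 42}))
```

The contested part is the higher-level stuff, such as a shared “customer” service that five teams must coordinate changes through.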
Put another way: if there was a lot of “repeating” across a company’s 100s, if not 1,000s, of microservices…I’m not sure the cloud native canon would have much more to say about it than: well, are you deploying regularly and improving the user experience?
Looming Technical Debt?
The looming trap in this kind of thinking (a new way of programming comes about, and five years later you discover the flaws in it when it becomes “legacy”) is building up too much code and duplication. In theory, that could result in inconsistent ways of doing the same functionality in your application (when I log in to the Westin hotel brand portal, how I get my receipt is different than when I log in to the Sheraton portal, etc.). You know, the problems DRY is trying to address.
Who knows. Like I said up top, it’s an intellectually interesting topic to throw into a glass bowl and ponder.