The "Should" Rule

A code base of excellent quality will have constraints inherent in the
flow and structure of the objects and code. It should make much of
what people test for simply impossible to achieve, and therefore
eliminates the need for testing.

. . .

And that’s what a mature system should look like, one where
mistakes are not “checked” [by unit tests], but rather one where the
common mistakes simply cannot happen.

(Italics mine.)

Taking those words straight up, and slightly out of context, I remember a
little coding rule I’ve been kicking around in my notes: if you find
yourself using the word “should,” you need checks and/or process to
verify that whatever statement you’re making is true or works — not
only in the present, but in the future. That is, in the world of
programming “should” is equivalent to “can’t be trusted to”; tests
should be done appropriately.

“Code should work.”

Code should always work, and it
should always be perfect and bug free. The code I produce
should work under various conditions, even the simplest ones,
and, when people, including myself, modify (1) my code, (2) code that
uses my code, or (3) code that my code uses, everything
should still work correctly…given that there are no bugs in
any of the code involved.

We’re all perfect programmers, so this has always been y’all’s
experience, right? There are always bugs, of course, and that’s why we
always say “should always work,” instead of “always works.”

“Should” is Helpful

As I’ve mentioned recently, we programmers aren’t as scientific as
you’d think; we’re incredibly superstitious. However, in the case of
“should,” our computational voodoo is actually beneficial.

When you’re talking about a piece of code that you know isn’t
perfect, you often have an uncontrollable, unconscious urge to use
“should.” If you know the code is perfect, you don’t.

In this way, “should” can actually be quite helpful for detecting
when tests are needed. When “shoulds” start popping up, it’s time to
test and verify. Of course, this isn’t the only indication that tests
are needed, just one of many. Additionally, you can train yourself
to never use the word “should,” but that’s cheating ;>

Unit Tests

More on topic with Zane’s original post, I don’t quite agree with the
phrasing of “Don’t
Unit Test,” even with the “5-20% test coverage” clarification.

Solo vs. Group Coder

It’s important to understand that
Zane is a solo programmer: he’s the only one working on his code
base. Under such circumstances, doing unit tests is really up to the
discretion of the single coder; essentially, unit testing becomes an
optional tool for the solo-coder, not an essential tool to assure a
group of coders can collaborate rapidly and “correctly.”

If I were a customer, I’d certainly favor a product that had lots of
unit tests, but I don’t think many customers are programmers.

As a solo-coder, one plays the roles of requirements gathering,
specifying, coding, and testing all at once: there’s no need to
coordinate communication between these 4 distinct roles. In group code
projects, these roles are often played by different people, and
different groups of people at that: one group of people gathers
requirements, another writes up the requirements (use cases,
architecture, design, etc.), another writes all the code, and still
another group of people tests it.

Unit Testing Benefits

Unit tests help these 4 groups coordinate and work together. A good
suite of unit tests…

  • specifies what the code does: what inputs cause what outputs. If
    the requirements people have gotten themselves involved enough with
    the creation of unit tests (something that too often fails to
    happen), they can assure that the tests validate many of their
    requirements, esp. data-centric ones, e.g., “Data will be formatted
    day/month/year” (see the sketch after this list).
  • tells the coders the minimum behavior that must be coded: that
    is, once the coder has written code that passes all unit tests 100%,
    they know they’ve done the minimum amount of coding required. I think
    this is a greatly underappreciated and poorly used aspect of unit
    tests. They tell programmers exactly what to code: “if your code
    passes these tests, your work is done.” Of course, more coding is
    often needed (along with new tests for that new coding) and
    it’s extremely difficult to computationally test the system as a
    whole (the “application” rather than the individual parts of code),
    but it’s nice to know that you’ve reached a certain baseline.
  • reduces the amount of manual testing that must be done: unit
    tests are run automatically, not by hand. In business — sad as it can
    be socially — the less involvement by actual humans, the better.
    ’nuff said.
  • helps you fearlessly refactor. The ability to refactor your code
    base — improving code without changing its outward-facing behavior
    or, at least, effect — is priceless and can be a tremendous boon.
    But, when refactoring even the smallest piece of code, you must
    re-test your code base to assure everything still works: without
    unit tests, there’s really no way to do this. Michael Feathers has an
    interesting MS in progress essentially about this very point;
    there’s also a 12-page article
    (https://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf)
    for those who want a shorter read.
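
To make that first point concrete, here’s a minimal sketch, using
Python’s unittest, of how a data-centric requirement like “Data will be
formatted day/month/year” can be written down as tests. The format_date
helper is hypothetical, not code from Zane’s project:

    import unittest
    from datetime import date

    def format_date(d):
        # Hypothetical helper: render a date as day/month/year.
        return f"{d.day:02d}/{d.month:02d}/{d.year}"

    class FormatDateTest(unittest.TestCase):
        # Each test pins an input to an expected output, so the
        # requirement gets checked on every run, not by hand.
        def test_formats_day_month_year(self):
            self.assertEqual(format_date(date(2006, 1, 31)), "31/01/2006")

        def test_pads_single_digit_day_and_month(self):
            self.assertEqual(format_date(date(2006, 9, 5)), "05/09/2006")

    if __name__ == "__main__":
        unittest.main()

If the requirements folks sign off on tests like these, the coder also
knows exactly when that slice of work is done: the suite passes.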

Those are just 4 cases where unit tests can and do help out. Unit
tests are good and helpful if people use them correctly and
effectively, just like any tool. I’m suspicious of claims that
writing unit tests takes up too much time and hurts the development of
systems: testing every get and set is, of
course, stupid to do by hand, but that follows from the general rule
“Don’t do stupid things.”
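
For what it’s worth, the sort of make-work test that rule warns against
looks something like this hypothetical sketch (an invented Account
class, not anyone’s real code):

    import unittest

    class Account:
        # Hypothetical class with a trivial accessor pair.
        def __init__(self):
            self._owner = None

        def set_owner(self, owner):
            self._owner = owner

        def get_owner(self):
            return self._owner

    class AccountAccessorTest(unittest.TestCase):
        def test_set_then_get_owner(self):
            # This only confirms that assignment works, which the
            # language already guarantees; it adds maintenance cost,
            # not confidence.
            account = Account()
            account.set_owner("Pat")
            self.assertEqual(account.get_owner(), "Pat")

    if __name__ == "__main__":
        unittest.main()

The point isn’t that accessors may never be tested, just that
hand-writing checks like this is exactly the “stupid thing” in
question.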