Sometimes in my recent round of job interviews, I would let it be known that I have an unpopular opinion: that unit tests, while useful, are overrated. Based on the reaction I get from people - and some biased googling I do - it feels like I've stumbled into a bit of an Emperor's New Clothes situation: many coders seem to agree with my sentiments, but are reluctant to say so, because there's a larger group of influential people - coders or managers - who, I assume, would say I'm spouting lazy and reckless nonsense.
Here are some ways I have of putting it:
- An extreme focus on unit tests - especially when it distracts from or replaces higher levels of testing, or assumes "% of lines covered by unit tests" is just another way of saying "code quality" - is like obsessively inspecting all your individual Lego bricks for cracks and defects. Yes, it's important to be building with solid bricks, but honestly, that's not where the problem in your structure is going to show up!
- Bugs are EMERGENT properties of systems. A well-designed unit should be just about "too small to fail." It's a little bit like trying to solve a biochemistry problem by thinking about the physics of atoms, or an economics problem as chemistry (since economics comes from psychology, which comes from neuroscience, which comes from biochemistry, which comes from chemistry, which comes from atomic physics). Sometimes the underlying levels ARE the best way of thinking about a problem, but often the issue is at a less fine-grained level.
- The problems I see actually show up in production when coders code along the "happy path" - and then that same "happy path" is what gets baked into the setup of the unit test; the first sketch after this list shows what I mean. (I wish that instead of testing the units, we put much more emphasis on firming up the contract between the unit and its environment...)
- And as for coders writing their own tests - it's very difficult to persuade someone to do a really focused search on something they don't really want to find - i.e. flaws in their previous work.
- Unit tests that are too fine-grained risk confirming a specific implementation rather than actual functioning. A good unit test confirms that refactoring preserves functionality, but if you're too hung up on verifying "it internally solved the problem this specific way," you lose that benefit. (The second sketch after this list shows the difference.)
- If that's the only testing you're doing, it's like inspecting 100 trees and saying "yup, I know this is a good forest with a safe path."
- Unit tests lend themselves to an easy metric, with tooling set up to see which lines get hit during the test run (a metric that's less commonly derived from higher-level tests). So it's a bit like the drunk guy looking for his dropped car keys under the streetlight instead of down the street where he actually dropped them, "because that's where the light is good." (Also, to be fair, unit tests are more reliable in setting up their little fakey mock-up worlds than higher levels of tests are.)
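To make the happy-path bullet concrete, here's a minimal sketch - the `parseQuantity` function and the Jest-style test are hypothetical, not pulled from any real codebase:

```typescript
// Hypothetical unit under test: parse a quantity field off a form.
function parseQuantity(input: string): number {
  // Written along the happy path: assumes a clean integer string.
  return parseInt(input, 10);
}

// The unit test bakes that same happy path into its setup (Jest-style).
test("parseQuantity parses a number", () => {
  expect(parseQuantity("3")).toBe(3); // passes, and the line shows up as "covered"
});

// The inputs that actually show up in production never appear in the setup:
//   parseQuantity("")        -> NaN
//   parseQuantity("3 boxes") -> 3   (silently "fine"? the contract was never pinned down)
//   parseQuantity("2.5")     -> 2
```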
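And the "too fine-grained" bullet, sketched the same way - `cartTotal` and `sumLineItems` are hypothetical names, and I'm assuming Jest-style spies:

```typescript
// Hypothetical cart code, plus two styles of test against it.
type LineItem = { price: number; qty: number };

const sumLineItems = (items: LineItem[]) =>
  items.reduce((total, item) => total + item.price * item.qty, 0);

function cartTotal(items: LineItem[], helpers = { sumLineItems }) {
  return helpers.sumLineItems(items);
}

// Behavior-pinning test: survives any refactoring that keeps the math right.
test("cartTotal adds up price * qty", () => {
  expect(cartTotal([{ price: 2, qty: 3 }, { price: 1, qty: 1 }])).toBe(7);
});

// Implementation-pinning test: asserts HOW the answer was produced.
// Inline the helper, memoize it, or rename it, and this breaks even though
// the observable behavior is identical - the refactoring-safety benefit is gone.
test("cartTotal delegates to sumLineItems", () => {
  const helpers = { sumLineItems: jest.fn(() => 7) };
  const result = cartTotal([{ price: 2, qty: 3 }, { price: 1, qty: 1 }], helpers);
  expect(result).toBe(7);
  expect(helpers.sumLineItems).toHaveBeenCalledTimes(1);
});
```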
But despite these views, I can't afford to be "That Guy," the perennial pain in the butt about it. When unit tests are some of the music I'm being paid to dance to, I will absolutely work to boogie down in the way my coworkers think wise, and put effort into making the same kind of test-writing decisions I think they would make.
Plus, I also understand there are many possible benefits to unit tests and the accompanying mindset - it can get a coder to think more clearly about the inputs to the thing, it can make future refactoring feel safer, etc. There are definitely some strengths in a "test first" kind of mentality. (And I pride myself on reliable code... I am a very incremental coder, making sure each bit is solid before moving on. But my default habit would be to throw out those inputs and outputs I did developer testing with, while I think the fervent unit testers' impulse is to preserve that data in amber to make sure nothing changes out from underneath it.)
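Here's a rough sketch of that difference in habit - `formatDuration` is a hypothetical helper and I'm assuming a Jest-style runner; the same throwaway inputs I'd poke at during development just get written down:

```typescript
// Hypothetical helper I'd normally spot-check in a console and move on from.
function formatDuration(totalSeconds: number): string {
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${minutes}:${String(seconds).padStart(2, "0")}`;
}

// The "preserve it in amber" version: the same throwaway inputs and outputs,
// written down as a table so nothing changes out from underneath them later.
const casesPreservedInAmber: Array<[number, string]> = [
  [0, "0:00"],
  [59, "0:59"],
  [60, "1:00"],
  [3599, "59:59"],
];

for (const [input, expected] of casesPreservedInAmber) {
  test(`formatDuration(${input}) === "${expected}"`, () => {
    expect(formatDuration(input)).toBe(expected);
  });
}
```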
(And to be fair, I mostly do UI. The best things to unit test are "functional" - clearly defined inputs and outputs, i.e. no side effects. But honestly, UI is all about the side effects - i.e. the side effect of what ends up on screen for the user. And the flip side of that is like the one quote about how menu commands were essentially uncontrolled "goto" statements all over your code...)
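A tiny hypothetical sketch of that split (plain DOM APIs, made-up function names): the pure piece is trivial to unit test, but the part that actually matters to the user is all side effect:

```typescript
// The "functional" part: clearly defined input and output. Trivial to unit test.
function formatGreeting(name: string): string {
  const trimmed = name.trim();
  return trimmed === "" ? "Hello, stranger" : `Hello, ${trimmed}`;
}

// The UI part: the whole point IS the side effect on the document.
// A unit test either fakes the DOM (jsdom) or skips this entirely, and
// either way it says very little about what the user actually sees.
function showGreeting(name: string): void {
  const el = document.querySelector("#greeting"); // assumes this element exists in the page
  if (el) {
    el.textContent = formatGreeting(name); // the side effect: mutates what's on screen
  }
}
```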
And when I heard my manager's manager say "well, we expect that as unit coverage increases, the number of incidents in production will go down accordingly," I had to bite my tongue... there are SOME cases where the unit is indeed busted and not acting according to its apparent spec, but again, the real-world bugs I witness happen when the units are unhappy in the context they're used in... (I'm always encouraging teams to keep a collective log of production bugs - why did it happen, and was there a unit test that would have caught it?)
I liked Mark Talbot's answer to the Quora question "Why do many programmers think that unit testing is not worth it?" - in part:
The problem is that as soon as you replace some dependency with a stub or mock, you are no longer testing the system in the same way it works in production. All you have done is assume that your stub/mock behaves in the same way as the dependency it is replacing (at least as far as the unit under test is concerned) and then write your tests based on that assumption. But it is not a valid assumption. If I come along and modify the real dependency (maybe it’s some 3rd party library that I’m upgrading) I have no idea at all whether your code really works with the new dependency, because all the calls to it in the unit tests are stubbed out. (And I’m not going to check all 500,000 lines of code manually - what is the point in automated testing if I need to do that?)
Or as I just recently put it: Unit Tests test Mocks more than they test your Units
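Here's a rough sketch of that failure mode, with a made-up `apiClient` module and Jest-style mocking - the test encodes my assumption about the dependency, and when the real dependency changes, the test keeps right on passing:

```typescript
// Hypothetical wrapper around a third-party client ("./apiClient" is made up).
import { apiClient } from "./apiClient";

export async function getUserName(id: string): Promise<string> {
  const user = await apiClient.fetchUser(id);
  return user.name; // assumes the dependency still returns { name: string }
}

// The unit test replaces the real dependency with my *assumption* about it.
jest.mock("./apiClient", () => ({
  apiClient: { fetchUser: jest.fn().mockResolvedValue({ name: "Ada" }) },
}));

test("getUserName returns the user's name", async () => {
  expect(await getUserName("123")).toBe("Ada"); // passes now, and keeps passing forever
});

// If the real apiClient is upgraded and fetchUser starts returning
// { profile: { displayName: string } }, production breaks - but this test
// never notices, because it only ever exercised the mock.
```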
See also some bits I grabbed on Joel Spolsky on Complete Test Coverage...