Friday, July 17, 2020

Being right about the wrong things


So, following a link I encountered on Josh Grant's blog, I came across a post named "against testing" where the author (someone named Ted, I think) makes some very strong claims about tests and why most people shouldn't write them. On the face of it, most of those arguments are sound and represent well-thought-out observations. It is true that most automated tests, especially unit tests, rarely find bugs, and that they are tightly coupled to the existing implementation in a way that means that whenever you come to refactor a module you'll find yourself refactoring the tests as well. It is also true that there are a lot of projects out there where testing something in isolation is simply not that easy to do (there's a reason why Michael Feathers defines legacy code as "code without tests"), and that investigating a test failure requires putting a lot of time into understanding code that won't help you when you come to develop the next feature. Furthermore, having test code does introduce the risk of delivering it to production, where all sorts of nasty things might happen. No test code? This risk does not exist.
All of those claims seem to follow a sound chain of logic toward a single conclusion - don't write tests. Not surprisingly, I disagree.
The easiest way to dismiss most of those claims is to respond with "most of those problems indicate that your tests are not good", and while I do maintain that a well-written test should be clear to understand, sit at just about the right level of abstraction to minimize refactoring pain, and target the tested unit's commitments rather than its implementation, I also know that such tests are hard to write, and that most people writing test code are not doing such a pristine job most of the time. In fact, for my response I'm going to assume that people write mediocre to trivial tests, simply because that's the most likely scenario. Most people never learn to write proper tests, and don't practice doing so. They get "write tests" as a task they must complete alongside their "real" work, and thus do the bare minimum required.

From my perspective, the post goes wrong right at the beginning, stating that "In order to be effective, a test needs to exist for some condition not handled by the code" - that is, that a test is meant to "find bugs". For me, that's a secondary goal at best. I see tests as scaffolding - making the promises of a piece of code explicit, and there to help people refactor or use that piece of code. If someone is working in a TDDish manner (no need to be strict about it), they can use this scaffolding to figure out earlier how their code should look - noticing when internal logic that made total sense while implementing is just too cumbersome to use, or when some extra dependencies are needed. It is also a nice way to put things I don't want to forget in a place where I'll be reminded of them.
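
To make "explicit promises" concrete, here is a minimal sketch of what such a scaffolding test can look like. The function and its behavior are invented for illustration - the point is that the tests state the unit's commitments, not its implementation:

import unittest

def normalize_email(raw: str) -> str:
    """Lowercase the address and strip surrounding whitespace."""
    return raw.strip().lower()

class NormalizeEmailContract(unittest.TestCase):
    # Each test spells out one promise the function makes to its callers.
    def test_lowercases_the_address(self):
        self.assertEqual(normalize_email("Bob@Example.COM"), "bob@example.com")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(normalize_email("  bob@example.com "), "bob@example.com")

if __name__ == "__main__":
    unittest.main()
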
But that's assuming TDD, and not enough people use this method for it to justify writing tests - or keeping them around once the feature is done. So let's turn to two of the most common tasks a developer faces: refactoring code and investigating bugs. Starting with the fun part - refactoring. When refactoring a piece of code, there's one single worry - did I break something? Testing alone does not answer this question, but it does help to reduce the uncertainty, especially in a language without a strict compiler. Imagine a simple Python project with a utility module that is called extensively. I go and change the type of one of the parameters from string-duck to Object-duck (let's imagine I'm assuming a .foo() method to be available). Callers already pass such an object in 99% of the project, but not necessarily in all of it. If I wasn't using proper type hinting (as is sadly way too common), the only way I'll find the mismatch is by actually running the faulty call site; 100% line coverage increases my chances of doing exactly that, as in the sketch below. Too far-fetched? OK. What about a piece of code that is not straightforward - one that has what the linters like to call "high complexity"? Just keeping those intricate conditions in mind is heavy lifting, so why not put them in a place where I'll get feedback if my refactor broke something?
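
Here is a hypothetical reconstruction of that refactoring scenario (Duck and shout() are made-up names): the utility now assumes an object exposing .foo(), and the only thing that surfaces the one call site nobody migrated is a test that happens to cover it. Below, the crash is made explicit with assertRaises:

import unittest

class Duck:
    """Illustrative stand-in for the new parameter type."""
    def __init__(self, value):
        self.value = value

    def foo(self):
        return self.value.upper()

def shout(duck):
    # After the refactor this assumes an object exposing .foo(); a
    # leftover call site that still passes a bare string will crash.
    return duck.foo() + "!"

class ShoutTest(unittest.TestCase):
    def test_shout_with_the_new_duck_type(self):
        self.assertEqual(shout(Duck("hi")), "HI!")

    def test_the_call_path_nobody_migrated(self):
        # Before the refactor, passing a string worked fine. Any test
        # that merely covers this line fails the moment the refactor
        # lands - which is exactly the early feedback we want.
        with self.assertRaises(AttributeError):
            shout("hi")

if __name__ == "__main__":
    unittest.main()
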
Those types of functions are also a nightmare to debug or fix, and here I want to share an experience I had. In my previous workplace we had a table that aggregated purchases - if they matched on certain fields we called them equal and would merge them. Choosing what to display involved a rather complex decision tree shaped by our business needs. I got a task of fixing something minor in it (I don't recall exactly what; I think it was a wrong value in a single column or something like that). Frankly? The code was complicated. Complicated enough that I wasn't sure I could find the right place. So I added a condition to an existing test. It wasn't a good test to begin with - in fact, when I first approached it, it couldn't fail, because it asserted that "expected" was equal to "expected" (it was a bit difficult to see that, though). Once I added the real expected result to the test, I could just run it, fix the problematic scenario, and move on to the next one. The existing tests also reminded me of a flow I had completely forgotten about (did I mention it was a very complicated decision tree?).
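
For anyone who hasn't met this anti-pattern, here is a sketch of such a "test that cannot fail" next to its fixed version. merge_purchases and the field names are invented; the real merging logic was far more involved:

import unittest

def merge_purchases(a: dict, b: dict) -> dict:
    # Illustrative stand-in for the real merging logic.
    merged = dict(a)
    merged["amount"] = a["amount"] + b["amount"]
    return merged

class MergeTest(unittest.TestCase):
    def test_merge_broken(self):
        expected = {"sku": "x", "amount": 5}
        # The bug: "expected" is compared to itself, so the production
        # code is never actually consulted and the test always passes.
        self.assertEqual(expected, expected)

    def test_merge_fixed(self):
        result = merge_purchases({"sku": "x", "amount": 2},
                                 {"sku": "x", "amount": 3})
        self.assertEqual({"sku": "x", "amount": 5}, result)

if __name__ == "__main__":
    unittest.main()
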

Another useful way to use tests is as an investigative tool. In my current workplace we are using Python (short advice - don't do it without a good reason). Moreover, we are using Python 3.6. We do a lot of work with JSON messages, and as such it's nice to be able to deserialize a message into a proper object, as can be done with Jackson or Gson in Java. However, since Python has "native" support for JSON, I didn't manage to find such a tool (not saying one doesn't exist), so in order to avoid string literals, we defined a new type of class that takes a dictionary and translates it into an object. Combined with type hints, we got an easy-to-use, auto-complete-friendly object (Python 3.7 introduced data classes, which might do what we need, but that's less relevant here). To do that we overrode some of the "magic" methods (__getattr__, for instance), which means we didn't really know what we had done and what side effects it had. What we did know were our intended uses - we wanted to serialize and deserialize some objects, with nested objects of various types. So, after the first bug manifested, we added some tests. We found out that our solution could cause an endless call loop, and that we didn't really need to support deserializing a tuple, since a JSON value can only be a simple value, a list or a dictionary (not something we thought about when we started implementing the serialization part, so we saved some time by not writing that useless piece of code). Whenever we were unsure about how our code would behave, we added a test about it. Also, since we didn't really understand what we were doing, we managed several times to fix one thing while breaking another. Each time, our existing tests showed us we had a problem.
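
To give a feel for both the class and the kind of question we answered with tests, here is a minimal sketch of a dict-backed object with an overridden __getattr__. The names are invented and the real implementation differed; the comments mark where the endless call loop lives:

import unittest

class JsonObject:
    """A dict-backed object allowing attribute-style access."""

    def __init__(self, data: dict):
        # Stored via __dict__ on purpose: if __getattr__ ever touched an
        # attribute missing from the instance, Python would call
        # __getattr__ again - the endless call loop mentioned above.
        self.__dict__["_data"] = data

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        try:
            value = self._data[name]
        except KeyError:
            # Raise AttributeError (not KeyError) so hasattr() and
            # friends behave as expected.
            raise AttributeError(name) from None
        # Wrap nested dictionaries so attribute access nests naturally.
        return JsonObject(value) if isinstance(value, dict) else value

class JsonObjectTest(unittest.TestCase):
    def test_nested_access(self):
        msg = JsonObject({"user": {"name": "ada", "age": 36}})
        self.assertEqual(msg.user.name, "ada")

    def test_missing_field_raises_attribute_error(self):
        # One of the "how would our code behave?" questions we answered
        # by writing a test instead of guessing.
        self.assertFalse(hasattr(JsonObject({}), "missing"))

if __name__ == "__main__":
    unittest.main()
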

There is, however, one point on which I agree with the author - writing unit tests does change the way your code is written. It might add some abstraction (though that's not necessarily the case with the existing mocking tools), and it does push towards a specific type of design. In fact, when I wrote tests for a side project I'm working on, I broke a web controller into 5 different classes just because I noticed that I had to instantiate a lot of uninteresting dependencies for each test. I'm happy with that change, since I see it as something that showed me my single class was actually doing 5 different things, albeit quite similar ones. As a result of this change, I can be more confident about the possible impact of a specific change - it won't affect all 5 classes, but only one of them, and that class has a very specific task; I know how to access it and which flows it's involved in. Changing existing code this way does introduce risk, so everyone needs to decide whether the rewards are worth taking those risks. If all you expect from your tests is to find bugs or defend against regression, the reward is indeed small. I believe that if you consider the other benefits I mentioned - having an investigative tool that will work for others touching the code, being explicit about what a piece of code promises, and in the long run having smaller components with defined (supported) interactions between them - it starts to make much more sense.
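
A toy illustration of that design signal (all names invented; my actual controller was, of course, messier): when every test must construct dependencies it never uses, the class is probably doing several jobs.

class FatController:
    def __init__(self, db, mailer, cache, auth, metrics):
        # Five dependencies, but each endpoint uses only one or two -
        # yet every test has to construct all five.
        self.db, self.mailer = db, mailer
        self.cache, self.auth, self.metrics = cache, auth, metrics

# After the split, each small controller declares only what it uses,
# and its tests instantiate only that:
class ReportController:
    def __init__(self, db):
        self.db = db

    def recent_purchases(self, user_id):
        return self.db.query("purchases", user_id=user_id)

class FakeDb:
    def query(self, table, **filters):
        return [{"table": table, **filters}]

def test_recent_purchases():
    # One fake dependency instead of five uninteresting ones.
    controller = ReportController(FakeDb())
    assert controller.recent_purchases(user_id=7) == [
        {"table": "purchases", "user_id": 7}
    ]
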


So, to sum up what I wrote above in a short paragraph - the author is completely right in claiming that if tests are meant to find bugs and defend against regression, they don't do a very good job. But treating tests in such a way is like claiming a hammer is not very effective because it does a poor job of driving screws into a wall. Tests can sometimes find bugs, and they can defend against some of the bugs introduced by refactoring, but they don't do these tasks very well. What tests do best is mostly about people. They communicate (and enforce) explicit commitments, they help investigate and remember tasks, and they save a ton of time wasted on stupid mistakes, leaving more time to deal with the real logic difficulties the code presents. I think that looking at these properties represents the value of tests better, and also makes it easier to write better tests.
