When you talk about testing, it's only a matter of time before someone mentions the "gold standard", the one thing testers should be striving towards - full traceability. Perhaps by coincidence, the likelihood of someone mentioning it is negatively correlated with the amount of testing experience they have. The story is a compelling one - by some testing magic, we can map each test to the original requirement it came from, and when a new feature is introduced, all we have to do is go to that requirements tree and find all the tests that are affected by, or relevant to, that feature. Nifty, right? What said people rarely mention is the cost of creating and maintaining such a model compared to the expected benefit, which, in today's speedy development cycles, is even lower than it used to be.
The first difficulty arises when we think about structuring our requirements: they probably need some sort of hierarchy for us to be able to find anything, and choosing the "right" kind of hierarchy is quite difficult. For instance, let's imagine a simple blogging platform, such as the one hosting this very post. It seems reasonable to assume that we would split the functionality between blog management and visitors, as they provide very different experiences. Perhaps something like this:
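A rough sketch, with requirement names invented for illustration, written here as a nested Python structure:

```python
# A possible requirements tree, split by audience.
# All entries are invented for illustration.
requirements_tree = {
    "blog management (admin)": {
        "write post": ["rich-text editing", "save draft", "schedule publication"],
        "moderate comments": ["approve comment", "delete comment"],
        "analytics": ["views per post", "referrer breakdown"],
    },
    "visitors": {
        "view post": ["read post", "read comments", "write comment"],
        "navigation": ["archive by date", "search", "tag pages"],
    },
}
```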
Now, where in our model would we place the requirements for comments? They can be seen and written by visitors reading each post, so it makes sense for them to reside there, perhaps even under the "view post" section. However, they can also be read and approved by the blog admin, so why not put them there? Should we have two "comments" categories, one for each branch of the tree? Perhaps we should have a third category of "cross-responsibility features", but that would only confuse us. We could also restructure the requirements tree to have the various components as root elements, like this:
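Again as an invented sketch, the same requirements keyed by component rather than by audience:

```python
# The same invented requirements, restructured with components as roots.
requirements_by_component = {
    "posts": {
        "admin": ["write post", "schedule publication"],
        "visitor": ["read post"],
    },
    "comments": {
        "admin": ["approve comment", "delete comment"],
        "visitor": ["read comments", "write comment"],
    },
    "analytics": {
        "admin": ["views per post", "referrer breakdown"],
    },
}
```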
Now we have solved the question of where to put the comments section, but does that mean we'll have to duplicate, for each public-facing component, "this is how the admin interacts with it, this is the user's view"? It's not that one view is more "correct" than the other, but rather that one will be easier for us and will match the way we think of our product. Please note, this was the easy example. How would we represent global requirements such as "always keep the logged-in user session until they log out", or "all pages must conform to WCAG 2.0 Level AA"?
So, it's a challenge to build this, but that's OK - we don't have to get it perfect, just good enough, and we'll roll with the punches. Let's consider the next challenge: do we version this tree? Do we just keep the latest version, or do we freeze it every release and save a copy? Can we compare two versions in any meaningful way?
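If we did decide to version it, the cheapest route I can think of is keeping the tree as a data file and letting version control do the diffing. A sketch, assuming the tree is the Python structure above and the file name is made up:

```python
import json

def save_tree(tree: dict, path: str) -> None:
    # Sorted keys make the serialization deterministic, so diffs between
    # two checked-in versions are stable and reviewable.
    with open(path, "w") as f:
        json.dump(tree, f, indent=2, sort_keys=True)

# save_tree(requirements_tree, "requirements.json"), and then something
# like `git diff v1.0 v2.0 -- requirements.json` compares two releases.
```

Even then, a textual diff only shows what changed in the tree, not which tests that change invalidates.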
And, since we are talking about managing the requirements tree - who maintains it? Is the person adding requirements to the product doing so? Each time I've heard this idea it was meant to be a testers-only effort, and requirements for new features were not written using this sort of structure. At my previous workplace we tried to maintain such a tree, and each time we started a new feature one of the testers had to go and translate the requirements, one by one, to the proper place in the tree, sometimes breaking a single requirement into multiple entries. This tree, of course, was unofficial.
The next part, tracing tests to requirements, is rather easy - IF you are using a heavyweight test management tool such as Micro Focus's Quality Center or SmartBear's QAComplete that allows you to link requirements to specific tests (there are probably other products as well, I don't keep track). If you do use such a product, you probably already feel the cost of using it. You'll still have to wonder how to represent "we cover the requirements to support mobile devices and 30 languages", but let's assume it's not that terrible to do. If you do the sensible thing and don't bother maintaining a repository of test cases that go stale even before they are written - congratulations: if you want this traceability to happen, you now have the extra burden of manually maintaining that link as well, over feature changes, refactoring, and whatnot.
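At the lighter end of the spectrum, the link can live in the test code itself as metadata. A minimal sketch using pytest's custom markers (the marker name and requirement IDs are my invention):

```python
import pytest

# "requirement" is a made-up custom marker; pytest would want it
# registered in pytest.ini to avoid unknown-marker warnings.
@pytest.mark.requirement("COMMENTS-12", "COMMENTS-14")
def test_admin_can_approve_pending_comment():
    ...
```

This keeps the link next to the thing that changes, but it rots in exactly the same way: rename or split a requirement, and every stale ID has to be hunted down by hand.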
I hope that by this point I've managed to convince you that it's expensive to keep this tree up to date, but even so, it might still be worth the price.
So, let's look at the promised return on investment, shall we?
Let's assume we have all of the product requirements laid out in front of us in a neat tree, without too many compromises. We now add a new feature - say, we enable A/B experiments for posts. It touches post editing, post viewing, analytics, content caching, users and comments, and it might have some performance impact. We've learned all of this with a single glance at our requirements tree, so we also know which tests should be updated and which requirements (new or old) are not covered. We invest some time and craft tests that cover our gaps, not wasting time rewriting existing tests simply because we didn't notice they were there.
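To make that "single glance" concrete: with the tree stored as data, the glance is a query. A sketch, reusing the invented tree from earlier:

```python
def affected(tree, touched, path=()):
    # Walk the tree and yield the path of every node or leaf whose name
    # mentions one of the areas the new feature touches.
    for name, child in tree.items():
        here = path + (name,)
        if any(t in name for t in touched):
            yield " / ".join(here)
        if isinstance(child, dict):
            yield from affected(child, touched, here)
        else:  # leaf: a list of requirement names
            for leaf in child:
                if any(t in leaf for t in touched):
                    yield " / ".join(here + (leaf,))

# e.g. list(affected(requirements_tree, {"post", "comment", "analytics"}))
```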
How much time have we saved? Let's be generous and assume that once we've added the new requirements to our tree, we know exactly which requirements are affected and which are not covered at all. Now we go, think of appropriate tests, add them to our tree, update our automation suites at the appropriate levels, run the attended tests we want, and call this feature done. Compare this to working without our marvelous tree: we spend some time analyzing the feature and requirements, then dig around our test repository to see what we have that is relevant, then we do exactly the same steps of planning, updating and executing tests. So the tree saved us the need to dig around, which could amount to a significant amount of time over multiple features, right? Well, if we replaced our testers every feature - that would be the case. But assuming we work on the same product for a while, we get to know it over time. We don't need to dig through hundreds of lines; we can ask the person who's been here for a couple of years and be done with it in 10 minutes. Those 10 minutes we "lose" each time are small change compared to the increased analysis time (since we're updating a model, effectively duplicating the requirements as part of the analysis). So even in theory, when all goes well, we don't really save time using this method.
Perhaps the gain is in completeness rather than time? There is a lot of value in knowing that we are investing our effort in the right places, and it is far less likely that we'll miss a requirement using this method. In addition, we'll surely be able to produce an exact coverage report at any time. We could even refer to this coverage report in our quarterly go/no-go meeting when we decide to delay the upcoming version. What's that now? We don't do those things anymore? Fast cycles and short sprints? Small changes and fast feedback from customers? The world we live in today is one where most of our safety nets are automated and run often, so we can see where they break, and since we're doing small chunks of work that are easier to analyze, we can be quite confident in how well we did our job. What we miss is caught by monitoring in production and fixed quickly. So perhaps this value, too, is less important.
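To be fair, once the test-to-requirement links exist, producing that exact coverage report is the cheap part. A sketch, with invented link data such as might be harvested from the pytest markers above:

```python
from collections import defaultdict

# Invented test-to-requirement links, e.g. collected from test metadata.
links = {
    "test_admin_can_approve_pending_comment": ["COMMENTS-12", "COMMENTS-14"],
    "test_visitor_sees_approved_comments": ["COMMENTS-12"],
}

# Invert into a requirement -> tests matrix and print it.
matrix = defaultdict(list)
for test, reqs in links.items():
    for req in reqs:
        matrix[req].append(test)

for req in sorted(matrix):
    print(f"{req}: {', '.join(sorted(matrix[req]))}")
```

The expensive part, as before, is keeping the links truthful, not printing the report.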
And what if we don't work in a snazzy agile shop, and we release a version once in a never? In that case we are probably trying to get to a faster release cadence, which in turn means that adopting a method that drives the wrong behavior is not something we desire, and we want to minimize the effort we put into temporary solutions.
There is, however, one case where such traceability might be worth pouring our resources into: heavily regulated, life-critical products, from avionics to medical devices. In such cases we might actually be required to present such a traceability matrix to an auditor, and even if we aren't, failing and fixing later is a bad option for us (or, as the saying goes: if at first you don't succeed, skydiving is not for you). Such heavy regulatory requirements will affect the way the entire organization operates, and this removes some of the obstacles to maintaining such a tree. Will we use it for purposes other than compliance? I don't know. I also don't know whether the regulatory needs could be fulfilled with a lighter process. At any rate, when we endeavor to perform such a herculean task, it is important that we know why we are doing it, what we are trying to achieve, and whether the cost is justified.