People who have heard me speak about testing, or who have noticed what I've written here, might know that I believe most companies should strive to get rid of dedicated testers. I think it's time to put some more words to it. There aren't that many reasons, but I find them convincing, and I hope you will too.
- No specialization required. In most places I've seen, at least 95% of the testing work either can be done or is being done by junior testers - people without a lot of experience, whose skills can be taught to just about any software professional with relative ease. There are some cases where we need to define a strategy for a large project, or find a way to deal with large amounts of data, or where it's hard to define "correct". In those cases we might need an expert tester - but those cases are rare.
- Abdication of responsibility. When there's someone whose role is to find problems, the rest of the people are less careful, even when they don't mean to be. They submit their artifacts for "testing", so they assume they need to do very little of it themselves.
- Goal displacement. When there's one person writing code and another testing it, no one actually holds the supposed shared goal - "deliver the right thing at a good pace". One has "deliver at a good pace", the other has "deliver the right thing" (if we're lucky; it might instead be "find reasons to delay the delivery"). This can lead to strange behaviors, perhaps best illustrated by an experience I had about eight or nine years ago, when I spent some time asking a developer to change the design of a feature in order to make it easier to test. His response was "it's not right to change design just for testing, and here are a few reasons why the current design is preferred". This very reasonable (yet wrong) response flew out of the window a couple of weeks later, when the same developer was tasked with testing the feature (we were transitioning to a role-less development team, which worked really well, I think). After struggling for a few days, he came to the daily standup saying "I was having trouble testing the feature, so I made some changes to the design and now it's easy to test". All the good reasons were suddenly less important than having maintainable (which includes easy-to-test) code.
We don't want people on the team to have different goals unless it's really necessary.
- Longer feedback loop. It doesn't matter how closely we collaborate or how good our tools and practices are: getting feedback from someone else almost always adds some overhead to the feedback loop. When the same person who does the work also verifies that the job is done well, we get a faster feedback loop, and action is taken on that feedback more easily. We are much more likely to see the "test-tinker-test" loop when the same people are doing both tasks (either as a pair or ensemble composed of people with different skills, or as a homogeneous group of one or more people).
- More (potential) bottlenecks. With a dedicated tester, we've just added another vertex to our communication and dependency graphs. This means there's one more party to update while hoping no significant information gets lost, and one more kind of work to coordinate with the rest of the work.
- There is some empirical evidence that this is the case. I'm familiar with the claims in "Accelerate" (whose authors have been very careful to narrow their scope to "automated tests", but I suspect this choice is a political one, as they don't mention any other kind of testing that has any correlation to business success).
Ok, let that sink in for a while. I can hear someone ranting in the distance about "What about the tester's mindset??", and I'm sure a few examples where dedicated testers are simply a must jump into your mind as well. As for the first objection - there is no such thing. Testing, and the thinking associated with it, are skills that are taught and honed by experience that includes a focus on seeking risk. In this respect, a developer with a decade of experience will probably do a better job testing than most testers with two or three years under their belt.
The second point, however, holds true, and this is why I believe only "most" companies can do without dedicated testers, not all of them. The most notable example I've met came while I was interviewing for the team I was on a few years ago. The interviewee held a master's degree in physics and was testing a ground radar. His day job included a lot of calculations and applying his knowledge - knowledge the developers didn't have, nor did they have the time to acquire. This is probably the only situation I can think of where having separate testers is more efficient than making testing a part of the creator's work. There might also be constraints that make having dedicated testers a good choice for the time being - we might not have the right culture, or we might be testing an application that we pay someone else to develop and we don't trust their testing to be good enough for us, or our budget and reporting mechanisms might encourage us to behave that way - but those are conditions that should be eliminated over the longer term.
I hope you find these arguments as convincing as I do, but even if you disagree, I hope they help you understand my stance on dedicated testers.
Next: Why, despite those thoughts, I identify as a tester and invest effort in becoming better at it.
