Sunday, December 28, 2025

What is the value proposition of a tester?



In the previous post I laid out my reasoning about why, eventually, most places should not have dedicated testers. Despite that, I'm working as a tester, and investing effort in becoming a better one. How does that align with my claim that having dedicated testers is a bad business decision? Am I just cheating my employers by performing some task that looks important but has negative value?

Naturally, I'd like to think I don't do that, which leads me to try to articulate both my motivation for performing a dedicated tester role, and the value testers (including myself) bring to the organization, even if the extremely special conditions that make dedicated testers a good idea don't apply.

First and foremost, quite naturally, is being a transition guide. In essence, I believe this should be part of the role of any tester - it might be a minor part when one is learning the ropes, or a significant one when someone is filling a leadership position, but it should almost always be there. Saying that most organizations don't need testers is not the same as saying "send all of your testers home, now". Some organizations will survive such a change and even thrive, but for others, probably most, ripping the tourniquet (as I heard Alan Page refer to it once) will lead to a period of chaos resulting in founding another test team and declaring the experiment a failure. An organization needs to get to a point where the responsibilities that were part of the dedicated testers' mandate are covered skillfully by other parts of the process or organization. Some call this "whole team accountability" or something along those lines. I might not be very good at this yet, but pushing towards this change is something I enjoy doing, and I believe it creates benefit for my workplace, even if I still have a lot to learn. Figuring out the process improvements, dropping small bits of education here and there and building some of the safety nets that allow for a faster, more informed SDLC is a super interesting challenge. If I want to be a central figure in those transitions I need a strong testing foundation that I can leverage to help deal with the hurdles and surprises that are inevitable in these situations.

In fact, in my ideal world, it's not that there are no testing experts at all - some of the problems are difficult and require deep testing skills and knowledge, and for tackling those, an expert is needed. For instance, at my previous workplace I had to come up with a testing strategy for a new, AWS-native product. It involved a lot of constraints - on the resources that would be available to develop and test, on the different teams that would participate and on the cadence we wanted to operate at. Just to agree on those, I had to lead some discussions with management and the lead programmers and make sure we were in agreement about what the work should look like - who is best positioned to perform which task and what complications each choice added to the rest of the system. In short, I had to articulate the testing needs and contributions in a language that would fit other software professionals, and take their input to adjust. When suggesting solutions I leveraged some ideas I'd heard about at conferences (such as using contract testing to enable independent development of components, or using approval testing to speed up complex validations), at the very least to create some options for us. I then went on to discuss how to design our system so that we could deploy most of it without creating endless AWS accounts - which, for a product that assumes it manages an entire account, is far from trivial - and that required strong system modeling, and balancing between ease of test system deployment and fidelity to the real world (as well as other tradeoffs). Without my testing skills and familiarity with the testing community and the ideas flowing in it - all of which are beyond what your average senior developer would usually have - I would have done a poorer job. Those difficult problems mean that every now and then an expert tester will be needed, probably in similar numbers to software architects (who need not be different people - I can easily imagine a world in which solving difficult testing problems is part of the architect's tasks).
My personal motivation is quite simple on this point - those difficult testing problems are something I like to tackle, and building my skills towards them makes a lot of sense. If, when we get to my ideal world, it turns out that those problems are handled by a software architect? Well, I'll be starting that learning journey with the testing part covered and working on building the rest of the necessary skills.
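To make the approval-testing idea mentioned above a bit more concrete, here's a minimal sketch (in Python, not tied to any specific library and not the actual setup I used - the names and the report are made up for illustration): instead of asserting on dozens of individual fields, the test serializes a complex result deterministically and compares it to a previously reviewed snapshot; when the output legitimately changes, a human reviews and approves the new snapshot.

```python
# Hypothetical sketch of approval testing - illustrative only, not my real project code.
import json
from pathlib import Path


def verify(name: str, result: dict, approved_dir: Path = Path("approved")) -> None:
    """Compare `result` against the approved snapshot called `name`.

    On a mismatch (or a missing snapshot) a `.received` file is written so a
    human can review the difference and, if it's legitimate, promote it to be
    the new approved version.
    """
    approved_dir.mkdir(exist_ok=True)
    received_path = approved_dir / f"{name}.received.json"
    approved_path = approved_dir / f"{name}.approved.json"

    # Deterministic serialization, so irrelevant ordering doesn't fail the test.
    received = json.dumps(result, indent=2, sort_keys=True)

    approved = approved_path.read_text() if approved_path.exists() else None
    if received != approved:
        received_path.write_text(received)
        raise AssertionError(
            f"Output differs from the approved snapshot. Review {received_path} "
            f"and, if it's correct, rename it to {approved_path}."
        )
    # Clean up leftovers from previous failed runs.
    received_path.unlink(missing_ok=True)


def test_account_inventory_report():
    # Imagine a real, complex computation over an AWS account here.
    result = {"buckets": 3, "roles": ["admin", "auditor"], "alarms": 12}
    verify("account_inventory_report", result)
```

The point is less the specific mechanics and more the workflow: a single, reviewable artifact replaces a pile of fine-grained assertions, which is what makes complex validations faster to write and maintain.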

But I'm not only trying to become a better tester myself, I also strive to help others do the same. What justification do I have there? Some people might make the same choices as I did (do?), and in that case, no more defense is needed, but most people probably have other goals and intentions. For those, I'd point out that while teaching people to be better testers, I actually teach (and learn) various testing skills, which, in contrast to the tester role, are necessary. This knowledge will also help when teaching non-testers how to test better.

So, we have difficult problems and we have a transition guide. Pretty hollow, so far - I've claimed that the difficult problems are less than 5% of the actual work, and helping a transition happen isn't much of a big deal if the value currently residing in the dedicated tester role is nonexistent, right? So, what are the responsibilities that make a tester valuable during this transition journey? Or, in other words, what is the value proposition of a tester? Here's my list. As can be expected, none of these items can be addressed only by a dedicated tester, and I would claim that it's usually better to either hand them over to another role or to change the process so that they are no longer needed - but building those alternatives takes time and conscious effort, which is exactly why we need a transition guide.

  • Risk seeking: Developers can sometimes be bold, maybe even too bold for everyone's good. Being bold is really important if you want to tackle hard problems rather than not daring to take the first step. It does, however, need to be balanced to avoid bold decisions that drive us off a cliff. An easy way to do that is to have someone who is looking at what we're building and asking "what if...?" - someone who looks for potential misuses of the application and raises them so that the risks can be either mitigated or consciously accepted.
  • Design stakeholder - This is tied to the risk-seeking perspective: a tester is a force in the team pushing for a design that is easier to monitor, that deals with expected failures, perhaps one that is simpler or broken down into simple modules - things that might evade a team focused on fast delivery in the short term. All of those choices, which a tester might be the right person to advocate for or to push towards, should, if done properly, eventually accelerate the team; they just seem like needless hassle when only the immediate term is taken into consideration.
  • Safety net building. Some people call it feedback, but in some places a tester builds a set of tools that can detect errors and broken parts of our application. Building those tools is an expertise in itself, one that is not yet common enough among software practitioners - very much like unit testing was a decade (or two? probably closer to two) ago.
  • Translating between the technical and business layers. Brendan Connolly wrote about this about a decade ago, and it is as true now as it was then. Testers usually navigate between two semantic worlds - the developers' one, and the business one with both of its sides, requirements and customer support. As such, they are probably the best suited to translate between the layers. A customer opens a bug? The tester can translate it into something actionable by a developer. There's an ambiguous requirement? A tester might be able to help put it in technical terms that are easier to understand, and communicate with the product manager to make sure nothing is lost in translation. Sometimes you even need someone to translate between two development teams, each with its own context and viewpoint.
  • Calming upper management. Sometimes, you just need a magic 8-ball that will tell you that everything is going to be alright. Most testers will, at some point in their career, face the question "is it OK to release?", when the other side just wants to hear "yeah, we're good, you can trust me". No, this is not a call to lie - just to know that sometimes your stakeholders want the bottom line: either all is good, or there's something they need to know about.
  • Evaluating operational properties - performance, security, and all of those other side effects of a program's functional development. Since programmers tend to focus on the functional aspects of their work, it takes more time and maturity to add the operational properties to the scope of concern and testing. It's also a harder testing task, so a specialist will be required there for a little longer.
There are two roles that I intentionally didn't put in this list, not because they never happen, but because I don't think they are part of the tester's value proposition (and I would claim they might actually be harmful). Those are:
  • Verification of functional correctness - the developers working on the features are big kids. They should be able to tell if they've done a good enough job. Having a tester whose job is to find stupid mistakes is both a waste of talent and a call for developers to abdicate their responsibility.
  • Regulatory scapegoat. While some regulations might refer to testers, if that's the only reason for keeping them, the organization is probably harming itself and would be far better off showing the auditor that the regulation is satisfied by its process. Scapegoat needed? Have some manager tick a box in a form and "take responsibility" for the outcome. In fact, it could be the very same person who is setting up the procedures required for this regulatory mandate. For cases where an "independent" validation is required, a well-functioning test team is not really aligned with the spirit of the regulation, since it's too close to the development. In such cases, an auditing company (or department) is much better aligned - someone whose interest is not to ship the product but rather to make sure they can't be blamed for not doing their due diligence.
