Thursday, July 27, 2023

This too is a way to learn a lesson.


Recently we engaged a consulting company to help us build a product in our core business.

A terrible idea, right?

In our defense, there were some compelling(?) arguments for going this route. We knew the risks of contracting a company to build something we care about, and we were willing to pay the price.

So, things have started moving. Since the main reason we hadn't done the work ourselves was that we were too busy with everything else going on, the consulting company worked pretty much unsupervised for a few months. When we finally joined the effort we could see that a lot of design work had been done, along with some proof-of-concept code for various parts - not a lot, but as much as could be expected given the requirements we gave them and our availability until then.

At least, that's what I thought until I examined their "tests". Reviewing the code, I found a couple of Robot Framework scripts that drive a browser against the UI defined in Figma - scripts that could only run in theory (something like the sketch below). But we are starting a new project and the contractor has done some work, so we should at least give it the benefit of the doubt, right?
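To make "running only in theory" concrete: the scripts amounted to a browser walkthrough of screens that, at the time, existed only as a Figma design, with no deployed environment behind them. The originals were Robot Framework; here is a rough Python/Selenium equivalent, where the URL and element IDs are entirely made up:

```python
# A hypothetical sketch of what such a browser test amounts to: a UI
# walkthrough hard-coded against screens that exist only in Figma.
# There is no environment where this could actually pass.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_flow():
    driver = webdriver.Chrome()
    try:
        # Placeholder URL - no such deployment existed yet.
        driver.get("https://app.example.invalid/login")
        driver.find_element(By.ID, "username").send_keys("demo-user")
        driver.find_element(By.ID, "password").send_keys("demo-pass")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```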

So we had a talk, and I started asking the basic questions: where in the SDLC are those tests expected to run? Against which of the multiple repositories? How should the product be deployed, and how often? Who are the intended authors of future tests? Who are the intended consumers of the results?
Based on the work I'd seen, I didn't really expect them to have answers, but I could use this to start an internal discussion about how we want to approach testing in the new product. While it was rather easy to name the gates that a piece of code should pass on its way to production, we found that some things needed to be defined better, and that we needed to choose a lot of new technologies, or at least review the existing ones to see if they still fit.
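For concreteness, one way to make those gates explicit is to tag tests by tier and let each gate select only its slice. The sketch below uses Python and pytest markers purely as an illustration; the tier names and the gate mapping are assumptions, not our actual setup.

```python
# Minimal sketch of tiered test selection with pytest markers.
# The tiers and the gate mapping below are illustrative assumptions:
#
#   pre-merge gate         -> pytest -m unit
#   pull-request gate      -> pytest -m "unit or component"
#   nightly / pre-release  -> pytest -m system
#
# Register the markers (e.g. in pytest.ini) so pytest doesn't warn:
#   [pytest]
#   markers =
#       unit: fast, isolated logic tests
#       component: one service, external dependencies faked
#       system: full deployment, end to end
import pytest


def apply_discount(total: float, discount: float) -> float:
    """Toy domain function so the example is self-contained."""
    return round(total * (1 - discount), 2)


@pytest.mark.unit
def test_discount_is_applied():
    assert apply_discount(100.0, 0.15) == 85.0


@pytest.mark.component
def test_order_component_with_fake_payment_gateway():
    # A real component test would start one service with its collaborators
    # replaced by fakes; here the fake is just a dict, to mark the tier.
    fake_gateway = {"charged": []}
    fake_gateway["charged"].append(apply_discount(100.0, 0.15))
    assert fake_gateway["charged"] == [85.0]


@pytest.mark.system
def test_checkout_end_to_end():
    # Would run only against a fully deployed environment; skipped here
    # because no such environment is assumed to exist for this sketch.
    pytest.skip("requires a deployed environment")
```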

We talked about what a "component test" even is, whether we should run full system tests on pull requests to some services, how to incorporate contract tests into our process, and so on.
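To pin down at least one of those terms, here is the flavour of contract test we had in mind, as a hand-rolled Python sketch. In practice a dedicated tool such as Pact is the more likely choice, and the service, endpoint, and fields below are made-up placeholders: the consumer declares the parts of a response it relies on, and the provider's pipeline verifies it still honours them.

```python
# Hand-rolled consumer-driven contract sketch. Real projects would more
# likely use a tool such as Pact; all names here are illustrative only.

# --- Consumer side: the checkout UI declares what it relies on -------------
CONTRACT = {
    "request": {"method": "GET", "path": "/orders/42"},
    "response_fields": {"id": str, "status": str},
}

# --- Provider side: the order service proves it still honours the contract -
def order_service_handler(method: str, path: str) -> dict:
    """Stand-in for the provider's real request handler."""
    assert (method, path) == ("GET", "/orders/42")
    return {"id": "42", "status": "shipped", "internal_field": "ignored"}


def test_provider_honours_consumer_contract():
    response = order_service_handler(**CONTRACT["request"])
    for field, expected_type in CONTRACT["response_fields"].items():
        assert field in response, f"missing field promised to consumer: {field}"
        assert isinstance(response[field], expected_type)
```

The interesting part isn't the assertions - it's deciding in whose pipeline this runs and who owns the expectation, which is exactly the kind of question the contractor's scripts left open.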

As we try to answer those questions, we find that the decisions we make influence the rest of the product - from our branching strategy to the way we architect and deploy it. There are some difficult questions we need to answer in a short time, but it's quite interesting so far.

Key takeaways?

First, never fight a land war in Asia. 

First, when you plan your testing strategy, take time to figure out your context - who should be doing what, what results are important, and what limitations will hinder your progress.

Second, especially if you are a service provider, be very explicit about how you expect your work to integrate with other parts of the development process. Have a discussion about what you will need, what you will provide, and how this will impact the work of others. If you can't have a discussion, share your thoughts with the other people involved.

Third, your choices will change the way the entire team works; they will have an impact on both the processes you employ and the architecture of your product. In this sense, it's no different from choosing a tool, a programming language, or the roles of people in the team. And just as in those other cases, it is a two-way street: choices made there will impact how you do testing.
