Saturday, July 4, 2020

A "testing" competition


So, last week I participated in a "testing" contest. You know, the kind of event where you're given an app you've never seen before and asked to "find bugs" for the next so-and-so hours. A lot has been written before me on why there's basically no connection between such events and proper software testing, so I won't bore you with another such piece of text. Instead, I want to write about the things I did learn by participating in this event.

First, since every such competition is different, I want to lay out the opening conditions for this event: 
To be short and blunt, it was an epic fail on the organisers' side. In fact, I believe it was the level of amateurism involved in the event that drove me to write this post in the first place, just so that I could rant a bit. Furthermore, I'll make an exception and name and shame this event - the Israeli Software Testing Cup.
Anyway, what bothered me? First thing - while the competition does have a site, it's quite difficult to find and it contains just about zero useful information. No rules of engagement, no mention of what we're allowed to do and what we're not - is it OK to look for security vulnerabilities in the app? Do they care about usability issues? Also, no information on the basis for scoring, nothing whatsoever. On the day of the event itself we heard for the first time "and make sure to send your report to this address". A report? What should be in it? Who's the (real or imagined) client who'll be reading it? Should it be a business report? A detailed technical report? A worded evaluation of the product with impressions and general advice? Even asking that directly did not provide an answer, since "it's part of the competition and you should know for yourself what sort of report to send". Even as the results were announced, the teams were ranked - but what categories were used to score? No mention of it. They might have been shuffled at random for all we know.
Next we get to the application under test - a nice idea, but the app simply didn't work. It might have been a shoddy server spun up for the competition, or the product itself might have been in its pre-alpha stage, but the fact is that many teams had trouble just getting past the login/registration screen. In short - this should have been better.

Despite all of that, I managed to learn a few things: 
First of all, events such as this are a way to teach oneself new things and catch up with changes in familiar fields that are now out of focus. As I was preparing for the competition, I tried to capture my phone's traffic with a network proxy. Well, it turns out that on devices running Android 7 or higher, apps no longer trust user-installed certificates by default, so intercepting HTTPS traffic requires a rooted device. Last time I checked, those restrictions were not yet in place (I did have an older phone, and it was a few years back), so now I know there's another thing to take care of whenever I approach mobile testing in the future.
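For the curious, here's a rough sketch of the workaround on a rooted device or emulator: since Android only trusts certificates placed in the system store, you can push your proxy's CA certificate there yourself. This assumes mitmproxy as the proxy (its CA certificate lives under `~/.mitmproxy/` by default) and a build where `adb root` works, such as a non-Google-APIs emulator image - adjust the paths for your own setup.

```shell
# Android names system CA certs by the certificate's old-style subject hash.
hash=$(openssl x509 -inform PEM -subject_hash_old \
       -in ~/.mitmproxy/mitmproxy-ca-cert.pem -noout)
cp ~/.mitmproxy/mitmproxy-ca-cert.pem "$hash.0"

adb root
adb remount                       # make /system writable (rooted builds only)
adb push "$hash.0" /system/etc/security/cacerts/
adb shell chmod 644 "/system/etc/security/cacerts/$hash.0"
adb reboot                        # cert is picked up after restart
```

Apps that pin their certificates will still refuse the proxy, but for ordinary HTTPS traffic this gets you back to where pre-Nougat testing used to be.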
The second thing I learned was about the importance of experience - that which I had, and that which I did not. I could leverage my past experience for faster orientation in the product, knowing what I wanted to do even if I didn't know how to do it. One example is asking "where can I read the logs?" - this was a good chance for knowledge transfer, since my partner did know how to read application logs using logcat, so he could catch me up on that. The flip side is all the things I didn't know. Perhaps with enough time I would have examined things such as app permissions or power consumption, but those didn't even cross my mind during the competition: I lacked practice and didn't know the tooling around them, so the time cost was just too big to even consider.
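What my partner showed me boils down to something like the following (a minimal sketch - `com.example.app` is a placeholder for the real package name of the app under test):

```shell
adb logcat -c                              # clear the existing log buffer
pid=$(adb shell pidof -s com.example.app)  # pid of the app under test
adb logcat --pid="$pid" "*:W"              # only warnings and above, from that app only
```

The `--pid` filter needs Android 7+; on older devices you'd filter by tag instead. Either way, watching the log while reproducing a bug often tells you far more than the UI does.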
Next thing - prep, prep, prep. When interacting with an application, we are chasing electrical currents at various levels of abstraction - bits, text, various communication protocols, and effects on screen. Whenever we want to inspect a piece of software, it is useful to peel off one layer of abstraction just to see how things are under the hood - move from the nice GUI to the HTTP (or network) traffic, check out memory access, and so on. But unless you routinely work on a similar piece of technology, you probably don't have the necessary tools installed, and you might not even know what those tools are. A few hours of preparation can significantly reduce this gap. I spent a few hours getting my environment up - downloaded some emulators, opened up ADB, and while doing that I learned how to set my phone to developer mode (it's hidden in a very annoying way; I can understand why it was done, but really - seven taps on an unrelated field?)
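If it helps anyone else prepping for a mobile-testing session, the emulator setup I mean looks roughly like this (assuming the Android SDK command-line tools are installed and on your PATH; the system image and APK name are just examples):

```shell
# Fetch platform tools, the emulator, and one system image
sdkmanager "platform-tools" "emulator" \
           "system-images;android-30;google_apis;x86_64"

# Create a virtual device from that image and boot it
avdmanager create avd -n test-device \
           -k "system-images;android-30;google_apis;x86_64"
emulator -avd test-device &

adb wait-for-device
adb devices                       # confirm the device is visible
adb install app-under-test.apk    # side-load the app under test
```

Doing this the night before, rather than during the time-box, is exactly the gap-reduction I'm talking about.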
Next is a reminder that no plan survives contact with reality. We had a nice first 30 minutes planned - orientation, some smoke checks and so on - but once we encountered the broken application, we scratched the plan and winged it altogether. Perhaps with some practice we could learn to work with a plan and adjust it on the fly, but when working in a time-boxed situation, I learned it's really important to keep checking where you are and what's important.
The last thing I was reminded of is the importance of modeling, and how unavoidable it is. As the competition went on, I noticed myself creating multiple models - what is the business goal of the application (so that I'll know which severity to assign to issues), how things might be implemented (so that I'll know whether a problem I saw is something new or connected to other things I saw). Everything we do is based on a model, and once you start seeing them around, you can practice creating them - faster, more tailored to your needs, focused this way or the other.

So, this is what I've learned from this competition. Can I take something back to my professional life? Not directly, I believe. But since everything I experience can help me gain new perspective or knowledge on what we do when testing, I can draw some lessons out of it as well. There are some similarities between this competition and a "bug bash", so I can take the mistakes I've seen here and make sure to prepare for them if I ever get involved in organising such an event. I also gained first-hand knowledge of why we might want to do such a costly thing (mainly, I believe it would be helpful in directing the spotlight to some of the problems we have in our product and helping people outside the testing team experience them, so that we'll make fewer of those errors in the future).
One thing that surprised me when I noticed it was the difference between this circus show and real testing work, and through this difference I can better define what I'm doing and what's important. The main difference was that in this situation there's a complete disconnect between my actions (wandering around the application, reporting various things) and the rest of the company. There's no feedback coming from the product team: no access to a product manager who can say "this is very important to me, is there anything else like that?" or "that's very cool from a technical perspective, but it has little business impact"; no access to the developers to find out what pains them; no view of the development process, and nothing can be done to actually improve things. All in all, it's a bug-finding charade. It was a great reminder that unlike "coding", testing is an activity best defined by the context in which it exists, rather than as a distinct set of activities.

That being said, I do recommend participating in such an event if you come across one (don't go to great lengths for it, though) - not because of any educational or professional value it brings, but rather because it can be fun, especially if the one you happen to find is better organised than the one I participated in.
