Sunday, August 14, 2016

CAST day 2

(Conference report, No Hebrew)
So, it is a recurring theme this conference that Neil manages to get a post written on the same day, despite getting back to his hotel room terribly late. I think he deserves a round of applause.

This day I managed not to be late to the extremely early lean coffee. Going to such an event three times in a row, right after TestRetreat's open space, made me wonder: "can I come up with interesting topics?" It really surprised me that the answer is "apparently yes". My mind was blank up until the point the event started, and then some ideas surfaced. But even if that hadn't been the case, the discussion would not have been hindered, as we had an abundance of interesting topics from other people.
Wobbles in continuous integration - It is sometimes nice to hear about problems other people are experiencing, especially people who seem to be in a more advanced phase than I am. It helps avoid the unrealistic despair of "why can't I be as perfect as they are?", because it exposes the hard work that was and is being done to reach that more advanced state. The other reason I like such discussions is that they give me some insight into problems I have not yet encountered and will have to consider as I slowly work towards my desired goal. The problem Katrina raised was how to deal with CI breaking due to changes made by multiple teams. Each team's branch works fine, but sometimes two teams work on closely related areas of the code, and once both changes are merged into the master branch some tests fail, because the code checked in by team A might fail a test created by team B. I can't really say that anyone around had a definite solution for that, but I liked some of the questions asked and I hope we helped by providing an idea or two that can be used.
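To make the failure mode concrete, here is a minimal, self-contained sketch (all file names, branch names, and changes are invented for illustration, not taken from Katrina's actual setup) of a "semantic" merge break: each team's branch passes on its own, both merges apply cleanly with no textual conflict, yet the merged master fails.

```shell
#!/bin/sh
# Demo: two branches that are individually green break master when combined.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo 'GREETING="hello"' > config.sh
git add config.sh && git commit -qm "initial"
git branch -M master

# Team A renames the variable on its branch; nothing on the branch uses
# the old name, so team A's own checks stay green.
git checkout -q -b team-a
echo 'MESSAGE="hello"' > config.sh
git commit -qam "team A: rename GREETING to MESSAGE"

# Team B, unaware of the rename, adds a test that relies on the old name.
git checkout -q master
git checkout -q -b team-b
printf '. ./config.sh\ntest "$GREETING" = "hello"\n' > test.sh
git add test.sh && git commit -qm "team B: add test for GREETING"

# Both merges succeed -- git sees no textual conflict, since the two
# branches touched different files...
git checkout -q master
git merge -q --no-edit team-a
git merge -q --no-edit team-b

# ...but the combined tree fails team B's test.
if sh test.sh; then echo "tests pass"; else echo "tests FAIL on merged master"; fi
```

This is why approaches discussed for this problem tend to involve testing the *merge result* (e.g. a merge queue, or rebasing each branch onto the latest master before its CI run) rather than each branch in isolation.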
Testing infrastructure - should we test a change to the infrastructure of the code? After being presented with the question and a bit of context, we tried to get a grip on "what is infrastructure in this context", as this word can mean a whole lot of different things. We ended up with a conclusion that was not new to anyone participating in the discussion - if it matters, test around what matters to you - but I feel the way we got there did help the person who asked the question find the answer suitable for them.
Training new testers - that's kind of the eternal question, isn't it? Out of the many comments on doing that, I took three that I particularly liked and want to try to remember:

  1. Help testers develop the courage to ask stupid questions - this can be done in a multitude of ways, from setting an example to having 1:1 talks with the new testers, assuring them it's OK to ask a question even if it seems stupid. 
  2. Remember that every tester is different - some might lack courage, while others may need to be reined in to deal with their overconfidence. 
  3. Tell them they are in training. Setting the frame in such a way removes much of the unnecessary stress that is involved in learning to function in a new place. 
How to keep in touch with people we met at the conference? Yes, it was the last day of the conference, and having a way to keep in touch would be nice. There were some ideas, and some of them have already been set in motion. 
Online lean coffee - should we do this? How? We had a short discussion where we decided we definitely want to try it. 
Am I cheating the auditor? When working in a regulated environment, there is an audit from time to time. How much should we disclose to the auditor? Is it ethical to answer direct questions but not to reveal what we know about the product? Where is the line between legitimate politics and misleading on purpose? 
How to make AST more visible? We really enjoyed CAST, and we appreciate what we received from the community around the AST - but it is not trivial to hear about it, and testers who are new to the profession might miss out simply because we do not do enough to be easy to find. While we do leave that question to the AST board, and do recognize the significant work they are doing, maybe there's something else that can be done. As it just happens, I wore my ETC shirt, which has the AST logo on its back, and that was one example of that effort. We hope this is something that will resolve itself in time - since if connecting with AST does indeed help testers improve, there will be more and more good testers around who could attest to it. 

That concluded a rather packed lean coffee session where we covered multiple subjects in a very concise way. I was really impressed by how quickly some subjects reached a point where we felt we had concluded the discussion, as well as by seeing, for the first time, a subject get all thumbs-up to wrap it up. 

The opening keynote for the day was Neuro-diversity and the software industry. This talk felt a bit like a bucket of ice - sudden, sharp to the point of pain, and eye-opening. In this talk, Sallyann Freudenberg shed some light on the situation of people with what are generally referred to as neurological disorders - conditions such as autism, ADHD and even bipolar disorder. She made a very compelling case for the benefit of employing the special skills that often come alongside the difficulties we are more familiar with. What really surprised me was that, at a rather early stage of the talk, Sallyann told us about finding out she was on the autistic spectrum (according to some metrics she shared - quite far along that spectrum), which raised a possibility: if it is possible to be on the autistic spectrum without even knowing it, and if I relate to some of the examples I'm shown - could it be that I am on the spectrum myself? One thing I can tell you - this is a very good way to keep your audience interested and involved. The talk revolved around points that can be considered if we want to be open to that sort of diversity in our workplace - from major changes that involve construction and environment changes, to simply making it very clear that it is fine to take a 15-minute break every now and then to relax. 
I would be surprised if there was a single eye that remained dry during the entire talk, and it also left me thinking - am I sensitive enough to identify such situations? Can I act in such situations to help such individuals better fit the team I'm in? I think this talk is the type of thing that makes going to a conference worth that much more - I would probably not watch a talk on this subject in my own free time (since it isn't about testing or a subject that remotely resembles any of my fields of interest), but I'm very glad I did get to hear it.

The next talk was dubbed "Lessons Learned in implementing exploratory testing", which was quite what the title promised - some stories where implementing those cool ideas we hear about in conferences simply fails. Nancy spoke about the importance of having an actual, actionable plan before trying to push for a large-scale change, and about not overloading people with too many new ideas (such as the entire RST course for those who are used to working in the "old" way of running test cases).
I also liked her idea of what a test manager should do - which is to create visibility of the testing process and status for the powers that be, so that everybody will know what the testers are doing and what takes them so long (this one is to fend off the "testing is our bottleneck" claim where this is not actually the case).
One thing I didn't like is that in some parts of the talk, Nancy sounded a bit like someone who had seen the light - no longer shall we do test cases, but rather use mind-maps and all those cool RST tricks for testing. It sounded a bit as if one can do whatever they want, as long as "those old ways" are thrown out of the window. This led to one of the main concerns in the talk, which to me sounds like a concern that shouldn't exist - how to make the sort of testing now adopted in the test department remain even if Nancy leaves her position and goes elsewhere? The way I see it - as long as they are good testers, and good testing is done, it really does not matter if people revert to the ways they are more comfortable with.

Next I attended a group discussion about "engaging people back home", with the question of whether we should actually make an effort to foster professional development in testers (the general feeling was "yes", as we do see the value in it, but that's really preaching to the choir), and how to actually make that happen. The main theme was "give people time on the job to learn", with one participant describing a designated time each week (4 hours, if I recall correctly) where no meetings are scheduled and the whole time is used for learning. This sounds much better to me than the "we say we give you 10% to learn, but then we overload you with so many pressing tasks that you don't actually get to use those 10%" that I see in some places, including my own.

After this interesting, yet frustrating, discussion, I moved on to the lightning talks session.
The subjects were quite diverse, and each speaker got a doll of Tina the Test Toucan (which means I now have two of those - the first I got when I registered with AST at the European Testing Conference).

  • Can't we all get along - a short talk about being less toxic and violent in online discussions. Should we actively reprimand those who express themselves in ways we find unacceptable? Or should we stay out of such muck and not feed them with more attention? Should we defend those we see unjustly attacked?
  • Checklist for test readiness - a list of things that are worth fleshing out before actually starting to test - be it a test plan or simply having all of the requirements in one place. While not everything was 100% applicable to my environment, just going over it did a little to open my eyes.
  • Precognition in work - this talk left me at a certain level of discomfort. The speaker shared with us a "trick" to gain credibility and reputation: write down each time you predict a problem that might arise when taking some sort of action (e.g., "If we start by building the dev environment before building the test environments, our testers will have no environment to work on for at least a month"). Then, when this prediction comes true, make sure to remind people by telling them "I think I found something in my notes about this...". What really bothered me in this was that there is a very strong incentive to do it only in cases where I was right, and leave aside the times where I was wrong - thus creating a false appearance of "the doom prophet who's always right". It seems to be quite effective, but very misleading. 
  • Three talks about automation 
    • Constructive objection to UI automation - a reminder that automating the UI is quite expensive and we should try not to do too much of it (the message was to avoid it where possible, but that's too harsh for me). 
    • Automation is dead (to me) - Richard Bradshaw gave a quick talk about his signature subject: automation in testing. What I really liked about the way it was presented, besides demonstrating how a small shift in focus can mean a lot, is this: when I see "test automation", there is almost always an underlying (and incorrect) assumption that test automation is an inferior programming task (I even wrote about it not long ago). The way Richard presented automation in testing made it very clear that in order to achieve this goal, one should possess proper coding skills. It does not exclude writing a quick & dirty script to save time on a repetitive task, but the approach of "let's see what tasks we can automate" assumes a much more capable programmer1 than trying to automate test cases, which sometimes leads to "testing tools" that help automate a very limited set of actions (record & playback is the most obvious case, but even stronger tools such as SoapUI, powerful as it is, still do that). When a capable programmer is tasked with "testing", not only test cases are targets for automation, but also setup and teardown of environments, monitoring, and various tools that aid manual testing. 
    • Fully automated regression testing at scale - quite a big name for an important topic. We sometimes face the expectation to "automate everything" (or, by its more common name, "100% automation"), where what we should really do is identify the needs and risks the automation is addressing. The comparison I really liked was food, water and air - a person can go without food for a few weeks and still live, about three days without water, and maybe 20 minutes without breathing. Automation should address the most urgent needs, which will not always be the same thing as the task that was specifically requested. 
  • The advantages of being pushy - I spoke a bit about my experience in not accepting a "no" and nudging things to go my way in order to gain some credibility (and I did have this in mind when I chose the title). In short, I found that one of the things that makes the test team important in my team is that we get involved in just about everything around us - we ask about design and offer suggestions when we feel it is appropriate, we ask to be included in meetings we were not originally invited to (would it surprise anyone that there are fewer of those than there were 4 years ago?), and we seek to contribute to the product in areas that are not strictly about testing.
    I think that such behavior, where we get involved in a lot and share the information we thus gather, plays a significant part in making the testers in the team visibly involved and relevant. Or, as one developer who transferred to our team once told me - the testers are really strong in your team. 
Following the lightning talks I briefly considered joining the 2nd part of the workshop about "fostering healthy uncertainty in software projects", but as I had already missed the first part, and after glancing over the group's notes from it, I decided to go to When You’re Evil: Building Credibility-Based Relationship with Developers, which wasn't about being evil at all. Instead, it pointed out that when we as testers drive for a change, we actually rely on the credibility we have inside our team - or rather, how much credibility we have with the specific developer(s) we will work with. This credibility has something to do with consistent good performance, but it has much more to do with being considered "part of the team" and not an external force to resist. Some of the tactics to get on the developers' good side can sound a bit insidious at first, and a little manipulative, but after a while it sank in that most of it really comes down to actually demonstrating that we are on the same team. It sounds horrible when someone says "in order to be on someone's good side, listen to them talking about things they care for", but is it really different from "when you work with someone, show some respect and interest in their hobbies"? The former is a bit manipulative; the latter is similar to the tips you might find in any "how to foster team collaboration" article online. Still, both of them are really the same - is it bad to acknowledge the fact that being part of the team does have some very immediate benefits?


And suddenly, the conference was over. A very sharp change. I did my best to say goodbye to the people I'd met during the conference (for those I didn't catch, I do plan on sending an email, once I deal with the terrible jet-lag that comes from flying across 10 time zones) and went back to my hotel to get at least some sleep before my 17-hour flight back home.

So, that was my conference in fine detail. I still need to think a bit about the general picture and process some of the takeaways I noted during the conference, but all in all - a very good experience. 



1 Following this post I want to bring some order to the terms I use - for me, a programmer is anyone who can code. I use "developer" to denote those who chose programming as their career path, and personally I identify as a tester. Since I'm a tester who also writes code, I'm a programmer despite not identifying as a developer. 
