Day 1 of CAST is over, and it has been a while since I've had such an intensive day.
Starting the day with lean coffee was as exciting as it was the day before, with no less interesting discussions. I actually arrived a couple of minutes late to the session, just managing to squeeze in the post about the tutorials day (a feat I did not manage to repeat, for reasons that will be clarified at the end of this post). Neil, by the way, somehow managed to get his post written by the end of that day.
We covered a smaller number of topics during the day's lean coffee, which is a good indication that people were involved in discussions that interested them (or rather, us), so more time was spent on each topic.
- What to look for when interviewing at a company - are there red flags? Warning signs that mean it might be better to move on to the next company? For me, it was really interesting to see the cultural difference between what I am used to and what appears to be standard at other places. For instance - I would never have thought that "meeting the team" is something you would expect at some phase in the process, but someone even spoke of joining a team lunch after passing some of the interview phases.
There were also some nice, useful tips that can be applied in almost any interview: - Ask "what do you think needs to be changed?" - if the interviewer answers this with a never-ending list, that gives you quite a bit of information. If an interviewer says everything is perfect - that also tells you quite a bit.
- When you get conflicting information from different interviewers (the example given was "we do a lot of unit testing" vs. "what's a unit test?") - that, too, tells you something worth probing.
- Are blog posts good sources of information? The discussion here started, I think, from the angle of supplementing a book with a blog, or comparing between them, but evolved a bit into "how to find blogs" and how trustworthy the information in those blogs is. We spoke a bit about the authoritative stance of a written piece of text, and how the comment thread affects that. We also found out that (not very surprisingly) most people would rather read a blog post than read a book, which is a heavier investment. People still read books, but the vast range of blog posts gives us the chance to check out new subjects with a relatively low investment. It was noted that some blogs look like a very long sales pitch (such as a vendor demonstrating how their tool is the right one for the mission they just invented), and those were generally disliked.
- Establishing a QA department - This is a question I hear asked a lot, and I imagine that even after hearing this discussion and some others, if I were to find myself in a position to set up a testing team from scratch, I would be inclined to ask the same. We spoke a bit about how what should be done changes according to factors such as the test team's size and the organization's needs, preferences and maturity. I also had the chance to add an idea I got from Joel Montvelisky: it is useful to view the test team as a vendor of information, and we should wrap that information according to the needs and wants of its 'customers' - so knowing how the different stakeholders want to see your information is important as well.
- We concluded the morning with a short notice about the emergence of a new Twitter handle, @fakejamesbach.
After a short breakfast, we could say that CAST had really begun - we each got our name tags (which came in a very handy case), and the opening keynote began.
The keynote was pretty good and presented an interesting point about the interaction between software and people, and how letting machines do all the work can lead to severe problems, such as a plane crashing due to the panic of pilots who were given back control in a real emergency. However, I kind of expected more from a keynote, especially an opening keynote. I measure keynotes by the feeling of "A-ha!" I'm left with, and the strongest feeling I got from this one was "Hmm... interesting reminder, nicely built". I did get some things to look into, such as the generation effect, and I might even have a couple of extra things to test when I get back home.
However, from that point onward, the day rapidly improved with sessions that were very interesting to attend.
Carol Brands and Katrina Clokie gave a very nice talk named Babble & Dabble: Creating Bonds Across Disciplines, about connecting with the other functions in our team: dev, BA, even customer support and operations. The talk presented two very different contexts and showed the similarities between them in how meaningful connections are created. They identified three components that make these connections work well: a collaboration space, pushing information out, and taking information in. To achieve better collaboration, we create a space in which it can happen - it can be a joint project, simply sitting in the same room, or anything else you can come up with that creates human interaction. Then we push information out, telling people what we do, what we can do for them, and what we hope to get from them. The counterpart of that is taking information in: consuming relevant information from the team we collaborate with.
During lunch I actually got to speak a bit longer with Carol and Katrina and ask them a few follow-up questions. My main takeaway from this talk is that a good way to make a connection with other functions in the business is to invite them to see and learn about what we do and what we are good at (as a precondition, we need to know that ourselves and be able to show it).
The second keynote, by Anne-Marie Charrett, was a talk I had already heard at the last European Testing Conference (and by the way - super-early registration to ETC2017 is now open). I remembered this talk as being really powerful, and listening to it a second time made that impression stronger. This talk showed me that things that currently bother me at work can be different, even in an environment very similar to my own in some aspects (and very different in others), and that really encourages me to continue my efforts to make my environment better. I also kind of want to compare both videos of the talk once the webCAST recording is released. I noticed that hearing the same talk twice (not counting the couple of times I watched it on video) is still interesting: since my concerns and needs have changed a bit, and maybe the focus of the talk also changed a little, I noticed different things, and it also allowed me to shift my focus from "what" to "how".
I then made a tough choice and decided to invest two hours in Janet Gregory's workshop about requirement elicitation. I gained some very interesting tools for thinking about requirements development (and review), one of which is the "7 product dimensions", which seems to be a crude yet effective tool for thinking about requirements. At some point during the workshop I had my small moment of revelation, which in this case was more of a confirmation: the main difference between the process business analysts use when defining requirements and the one testers use when analyzing requirements to devise a test plan is only a matter of timing - so techniques used in one case can easily be lent to the other. Personas, for instance, are mainly used by designers and BAs, but are very helpful as a test design technique, and state diagrams, which I am familiar with from my testing education (as is any other tester who took the basic ISTQB certification), lend themselves very efficiently to defining requirements. I think this is because in both cases, the activities that best drive them are activities that enhance our understanding of the product, so it is only the goal being kept in mind that separates the two activities.
I really enjoyed switching between listening to talks and participating in the workshop, and I'm quite happy with the choice I made - but choosing one event that spans two time slots makes me wonder what I have missed. It is a consolation that some of the sessions I missed were recorded as part of webCAST, so I can watch them later.
Let the games begin - promptly after the end of the last scheduled talk (or workshop, in my case), dinner, in finger-food form, was available, as well as some board games that I think were brought by Erik Davis. I ended up playing a game whose goal is to maximize the amount of chaos around the table. It is named Spaceteam, and it is a nice game to play for about half an hour (which can be around five games, as each game cannot last more than five minutes). Somehow, three games in a row I ended up drawing a card that instructed me to shout at anyone touching the floor until the end of the game. I concluded this with a sore throat.
The ethics discussion that came later was, to my surprise, both interesting and polite. If I recall correctly, there were four topics, at least one of which I can't remember. One of the subjects was very odd to me: what should a tester do in matters that involve public safety when they witness some sort of misconduct? This subject was odd to hear, since there is only one right action in such a case - report the problem, and if it is not addressed, go out and report it to the state or whoever is responsible for that field. It's bloody difficult to actually do, but when the question is phrased that way, there is no other option. There are some very delicate questions around the borders, though: How can one identify such a case? Should one resign from such a job, or stay and monitor? Those questions were more in the background, and not openly discussed.
Another point that was discussed is the tester as a representative of the end user's interests. I was very surprised by the unanimous voice of the panel members (or maybe it was an assumption made while discussing the previous topic) - it seemed widely accepted that the tester should indeed represent the user's interests, which strikes me as very odd. I was hired by a company that has one goal: making money. My actions there, to the extent they don't involve illegal or unethical acts, should be in favor of my company's interests. When I act as a proxy of the user, I do so in order to give my company better visibility into what should be done to maximize the value (and profit) of the product. I don't raise "what the user wants" because I care about it, but rather because a good product cares to satisfy those wants and needs. If the company has consciously decided to ignore some of the user's wants (by deciding "this is not what we sell; a user that wants that should look elsewhere"), I don't bother mentioning how important this want is (except when analyzing the business impact this decision might have).
The most flammable discussion, about certification, was surprisingly polite and considerate (partially thanks to the great moderation work done by Rich). I don't think a definite conclusion was reached, but a tone was set, people got to vent a bit, and maybe the subject was addressed in a forum that is less toxic than online media. With some luck, it might help set the tone in further discussions online, and maybe a solution to the problem the ISTQB certification presents will arise (the problem, by the way, is that there is no entry path we actually like for formally starting life as a tester - other courses such as RST or BBST are too heavy for new or aspiring testers, are not easy to find from the outside, and do not appear in job-posting requirements).
Following the formal discussion, I stayed a bit longer to talk about the idea of certification, the BBST course, and related topics, which was nice, until Neil did the sensible thing and got us to a geek bar, where we played Cards Against Humanity. Some moral lines were crossed, and I got back to the hotel way later than I intended, but we did have some fun with that too.