So, my first day at ETC2016, and what a day it has been!
I'm still recovering from the amount of new information and thoughts that were crammed into my head, not to mention all of the great people I met (and the few great people I mustered the courage to speak with).
What did I have there? Two awesome keynotes, one perspective-changing workshop, one really enjoyable lean coffee, one talk that introduced me to a cool new idea, and two talks that were good. Surprisingly enough, at this conference, or at least today, "good" felt like it just didn't stand up to the high standards set by just about everyone else.
As my thoughts are still fuzzy from all of that rush, I'll just go quickly through my experiences - I made an attempt to keep some notes in each talk, but I'm a lousy note-taker, so it is probably better if I look at them while they still mean anything and re-organize them.
After registration, I spent some time walking around trying to find other folks who, like me, didn't know a single soul at the conference beforehand (which didn't go all that well, but at least it was short).
Then began the first keynote by Linda Rising, on "The Power of an Agile Mindset". She opened with a short disclaimer that her slides are public and free to use however anyone would like, since we would probably wish that someone we know had been here to hear the talk too. And she was right - ten minutes into the talk I knew exactly who should be hearing it. I still hope that a video of this talk gets published, as I think I saw a video camera set up.
The talk was about the difference between having a "fixed" mindset and a "growth" mindset - the first assumes that skills are inherent and unchangeable, the other assumes they can be nurtured and grown, and that anyone can improve by investing effort. She presented a study that found that having the "growth" mindset drives people to perform better and achieve better results. Also, for the parents around, especially those with girls - be careful when you give compliments. Saying "how smart & perfect you are" pushes your child toward the fixed mindset. What she suggested instead was to comment on the effort invested, something along the lines of "wow, you have really put yourself into the task".
Where does Agile come in? Linda's claim was that the core values of the "growth" mindset are very similar to the Agile core values - embracing change, constantly improving, accepting failure, etc.
I left this talk wondering how someone can educate themselves to adopt that mindset, and whether I can do anything to help others do so, as it would definitely help them a lot.
Reading recommendations: Fearless Change, Mindset (and apparently, Audible is giving away a free audiobook version of it if you don't have an account and join their 30-day trial).
Next, after a very short recess, I attended my first "regular" talk, where Thomas Sundberg talked about why everyone who invests in automation should strive to focus mainly on the low-level checks, or, as they are more commonly known, "unit tests". His argument was quite compelling, and even though I've heard it before, he made a good case for it and showed an interesting perspective on the fairly well-known "test pyramid", attaching different attributes to the layers of the pyramid - at the base you could find things like "precise", "fast" and "reliable", while at the top there are "business logic" alongside "long" and "fragile" (I probably missed some and mis-worded some of the terms, my apologies). Despite that, I'm not sure I really buy into the idea of focusing on the unit level. There's something about looking at the trees that makes me feel as if I'm missing the forest. It probably has a lot to do with the fact that my experience is in automating system-level checks, and that every time I looked at a set of unit tests written by our developers they were very naïve, and almost always a bit too simplistic for my taste. I think I can understand the reason behind investing in low-level checks, but I don't feel it. I also feel that testing a whole system tends to force me to watch out for the unknowns. A unit I can grasp and understand completely. Two units integrating? I can probably manage. The whole system? Not a chance in the world. I believe that this way I'm less prone to assuming I have some sort of "perfect coverage", so when the unexpected occurs, I will be more open to noticing it.
Oh, and his main case for that sort of investment was simple math - connecting all the pieces together means that the paths inside each component multiply as you integrate them, while if you check each piece on its own, the paths only add up, which scales much better (see the little worked example below).
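To convince myself, here's a quick back-of-the-envelope sketch of that argument, with numbers I made up (my own illustration, not something from the talk):

from math import prod

# Hypothetical example: three components, each with a handful of internal paths.
paths_per_component = [3, 4, 5]

# Checking each component on its own: one check per path, so the counts add up.
unit_level_checks = sum(paths_per_component)   # 3 + 4 + 5 = 12

# Covering every combination of paths through the integrated system: the counts multiply.
end_to_end_paths = prod(paths_per_component)   # 3 * 4 * 5 = 60

print(f"unit-level checks: {unit_level_checks}")
print(f"end-to-end paths:  {end_to_end_paths}")

With just three components that's already 60 combinations against 12 checks, and the gap only grows as you add components.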
After a short coffee break, I went to a workshop by Alexandra Casapu, under the subject of "Examine your test skills". We got a cool testing challenge ("here's a black box, go ahead and figure out what it does" - and she literally brought some black boxes to play with, which was awesome), and after 20 minutes or so of playing with it, or, as I suspect most of us ended up saying, "but we've only just started to understand what's going on", we were asked to stop and think about the different skills we had used to solve the challenge. Let me tell you one thing - it was very hard to break down my simple "oh, I just played with it a little bit" into the actual skills I applied unconsciously (or "tacitly", as I hear occasionally from Bach, Kaner, Bolton and just about everyone around them). Next, we got to compare our lists with our team members (did I mention we were split into groups?) and then to categorize the skills, which was really important too, since it allowed us to find some other skills we used (or could have used), and to identify areas in which we invest more than others (maybe it's a sign of interest, maybe it's a hint we should change our focus).
My main takeaway from this session is a decision that will be tough to maintain, and I got it in the form of a quote: "Get out of the execution mode!" (I think Alexandra was quoting someone else, but I'm not certain about it, and quite happy to attribute this to her). Pausing from time to time gives us room to reflect on what we have done and think of ways to improve. It also gives us the language to talk about testing (I have Alexandra's word for it, since I have yet to experience it firsthand, but I totally see this happening).
Next, I was a bit late for a talk by Abby Bangser with the title "Truth - the state of not yet proven false", which was a really nice talk about how our mind fills gaps automatically and how this filling-up is based on our expectations. There's not much to do in the way of preventing it, but if you are aware of it happening, you can definitely take some steps to minimize or overcome the ill effects of those unconscious blind spots. An idea I really liked as a way to fight that sort of bias was using personas - telling ourselves a nice story about the user who will be using the software. Some automated tools could help us vary the data we use (the example in the talk was an automated web-form filler that came up with semi-random input; something along the lines of the little sketch below). Definitely some nice things to think about.
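Just to make that idea concrete for myself, here is a minimal sketch of what semi-random form input could look like (my own illustration, not the actual tool from the talk; the field names and value pools are invented):

import random

# Hypothetical value pools for a simple sign-up form, with some edge cases on purpose.
FIRST_NAMES = ["Ada", "Grace", "O'Brien", "陈伟", ""]
EMAIL_DOMAINS = ["example.com", "mail.test", "localhost"]

def random_form_entry():
    """Produce one semi-random set of form values: plausible shape, varied content."""
    name = random.choice(FIRST_NAMES)
    email = f"{name.lower() or 'user'}{random.randint(1, 999)}@{random.choice(EMAIL_DOMAINS)}"
    age = random.choice([-1, 0, 17, 18, 42, 120])   # mix of valid and boundary values
    return {"name": name, "email": email, "age": age}

if __name__ == "__main__":
    for _ in range(5):
        print(random_form_entry())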
Next was lean coffee, which was a great experience. I'm used to thinking of myself as allergic to strict formats, and I felt that the five-minute limit sound always came just at the wrong moment, but it did enable us to drop subjects fast and get nice coverage of our topics during the hour of the lean coffee (which felt like ten minutes). I really enjoyed hearing both the concerns and the thoughts. We went over various stuff - from usability testing tactics for software specifically targeted at doctors, to choosing a test strategy, to inspiring other testers to learn (and much more). Plus - I actually got to talk to some people and now I feel a bit more comfortable approaching them, though names are a problem (I did manage to remember Claudia Rusu's name since she is also giving a talk, and Gita Malinovska's, with whom I paired during the workshop as well; I think there were also two Adrians, but at this point I decided the world was trying to confuse me and focused on listening to people's ideas).
I ended up talking a bit with Claudia about the subject she had just raised when the buzzer announced the end of the lean coffee, and without noticing I found myself late to Emma Keaveny's talk about "dark patterns". Whenever your application is deceiving the user, making it harder to choose the "right" option, or coercing the user to do something they wouldn't normally do, you are using a dark pattern. A really cool subject, and something I will have to remember to look for when testing in the future. For me, all the bits and pieces connected perfectly when I realized that this is very similar to some of the concepts of usable security. There we ask "is the default behavior the more secure one?". When looking for dark patterns the question is similar - "is the default behavior in the best interest of the user?". I think it might prove a great rule of thumb for identifying dark patterns.
Oh, and by the way, if you ask me - the most notorious use of a dark pattern I have ever seen is the Flash Player update. Why should I have to uncheck something to avoid downloading a second piece of software to my PC?
Reading material: Evil by Design, darkPatterns.org.
After the talk I stayed to speak a bit with Emma and two or three others (including one who worked for Adobe - the company that was my go-to example for using a dark pattern, with the antivirus bundled with each Flash Player update). That was really cool, and we were late to the closing keynote by Anne-Marie Charrett with the misleading title "How to conduct great experiments". The talk did mention some experiments, and it seems they were great, but it was so much more than that. It was a paradigm shift - from test management to test leadership (and the bottom line, which left my jaw right down on the floor, was that the difference is that managing is a regulatory activity that promotes stability, while leadership looks for and drives constant change and evolution), which had to pass through redefining the role of the testers in the company, focusing on what is most needed and delegating or dropping other activities that could be done by others, or that are not as important to have. It also spoke about the difference caused by test team size, and added a creative solution of dispersing the test team/department and driving it to become a practice rather than a designated ownership (practice, in this case, like the medical practice - where one might go and consult the doctor about their health, but the ultimate responsibility to eat, sleep and exercise properly is the individual's, not the doctor's).
Then - retrospective. How do you do a retrospective of a full day for ~100 people?
Here, have a look.
By then the day had ended, and I was happy to get to my hotel and rest a bit. This day was intense.
Only I didn't. At 9 PM we met at a nice bar/café and had some good mingling time. I ended up talking a bit with Richard Bradshaw, having another chat with Emma Keaveny, and getting a chance to thank Llewellyn Falco and hear from him about some of the purposes of the lean coffee (to get people to talk with each other). There was so much more going on there, but it was a bit late for me, and everything just got mixed up in my head into a really nice feeling that I can't pull details from, but I'm certain it was good.
If any of the others I talked with during this day and who are not mentioned here ever read this - please don't be offended that I didn't include your names. Some of them I actually do remember, but this post is already long as it is, and cramming a whole day into a single post is almost as hard as cramming all of those ideas into my thick skull.
So, thank you everyone for a great first day.