Sunday, February 19, 2017

Anything you can do, can I do better?

Another ETC post. There are others:
Tutorial days
Conference days
My retrospective

Up until recently, I assumed something very simple: there's nothing an average developer can do that I can't (and vice versa). Yes, there are some differences in the experience we have, but overall we have the same basic training, speak the same language, and discuss software design with each other. So, with the exception of those superstars you see here and there who are clearly better programmers than their peers, I considered myself an equal.

Then I went to ETC, where I heard Liz Keogh speak about Cynefin and about creating safe-to-fail experiments. One of her basic assumptions there is that testers are good at finding problems, while developers are good at coming up with ideas for solving problems (even ideas that don't work). What does "testers are not good at coming up with solutions" mean? Surely I can do that; I solve problems all the time at work. Don't I?
Two things happened that pretty much convinced me there is a point to this. The first was the talk I had with Liz after the tutorial day, where I was bombarded with a series of questions that led to that "oh..." moment, and the second was Liz talking about how people tend to respond to new ideas with the phrase "That won't work because...", which I realized I do a lot.
So, are testers less effective at finding solutions? Are testers so constrained by spotting possible points of failure that they can't act?
The more I think about it, the more I tend to say yes. As a general rule of thumb, I can see how this can be true. Of course, this is not a yes/no question, but rather a scale - and the exact position a person occupies on it is affected by other factors (such as experience in doing similar things, knowledge of the field in which an action takes place, etc.) - but in general, it makes a lot of sense. Testers are tasked, on a day-to-day basis, with finding potential problems, with poking at the perfect model someone has built and finding its weak spots. Developers, on the other hand, start almost every genuinely new task by creating a proof of concept, something that will sort of work and teach them how to approach the problem at full scale. I take it for granted that I test better than most developers with equivalent experience, because I have invested a lot of time in becoming better at it, and that they are probably better at coding, since that is what they practice. But practicing a skill does not only mean getting better at it; it means tuning your mind and habits to it, and if some other skill is opposed to that, it will be that much harder to acquire.
Also, one other thing is important to keep in mind - this mindset is not permanent. People can, and do, function successfully in both roles, though probably not simultaneously (I think I recall someone saying it takes them about a week to make that mental shift).

I took three things away from Liz's talk -
  1. There is an important place for asking the difficult questions when setting out to try to solve something. It's part of what makes an experiment "safe to fail" (which was the title of the talk). Putting on this kind of lens gives me another tool that will help me present questions in the context where they are relevant (e.g., not going into "where should we put this bit" during a high-level description).
  2. I, too, have my limitations. Now that I'm aware of one of them, I can act to make sure it's used properly (for instance, when I spoke with one of the developers on the team about the architecture of a new piece of software we're planning, I wrote down my questions instead of interrupting. By the end of his explanation, half of the questions were answered, and I could provide feedback that was more relevant. And how is listening relevant to Liz's talk? Knowing that what I was about to hear was an imperfect solution that "generally works", the small problems were no longer blockers, and so writing them down was more useful than stopping the whole train of thought).
  3. I need to practice my "let's do it" mindset - it's true that I'm a tester, but I still need to solve some problems, and there's more to me than just work. As long as I keep tabs on my "professional pessimism", being able to flow with an idea is useful.


This is another post about the European Testing Conference. I wrote a bit about it earlier (in English only):
Tutorial days
Conference days
A short retrospective

Until recently, I started from a simple assumption: there is nothing an average software developer can do that I can't, and vice versa - there is nothing an average software tester can do that a developer can't. Yes, we have some differences in experience and in the skills we chose to develop, but we both have the same basic training, speak the same language, and discuss together how to design our software. So, setting aside for a moment those brilliant programmers who are clearly better than the people they work with, my tendency was to assume my abilities were similar.

Then I went to the conference, where I heard Liz Keogh speak about Cynefin and about how to create safe-to-fail experiments. One of the assumptions she made there is that software testers are good at finding problems, while software developers are good at finding a variety of solutions (including ones that don't entirely work).
Wait a minute - what does "testers are not good at finding solutions" mean? Of course I can do that. I solve problems at work all the time. Don't I?
Two things made me re-examine that assumption. The first was a conversation I had with Liz on the evening after the tutorial day, in which a request for an example led to a rapid series of questions that left me with an "oh..." feeling, and the second was a short moment during the talk where she mentioned the human tendency to respond to every new idea with "That won't work because...", at which point I noticed that this is more or less my automatic response to a great many things.

So, are software testers less effective at finding solutions? Are testers so constrained by spotting potential problems that they cannot act?
The more I think about it, my tendency is to say "yes". As a rule of thumb, I can definitely see how this claim is true. Of course, this is not a binary yes/no question, but rather a scale - and the exact point where a person sits on that scale is affected by other factors (such as past experience with similar problems, or the amount of knowledge about the problem domain). Still, in general it makes a lot of sense. Testers are required, day in and day out, to find potential problems in solutions, to find holes in other people's perfect models and locate their weak spots. Developers, on the other hand, start almost every genuinely new task with a "proof of concept": a partial solution, written quickly, that doesn't meet all the requirements of the finished product, but whose creation lets the programmer and the team learn how things should be done. I automatically assume that I can test software better than developers with experience equivalent to mine, and that they code better than I do, because each of us has invested effort in developing our core skill. I'm "also a programmer", and a good developer is "also a tester", but our areas of interest and focus differ. However, practicing a particular skill doesn't only improve that skill; it also shapes your thought patterns and habits accordingly. And if another skill happens to oppose the one I'm investing effort in, it will be that much harder to acquire.
And one more thing worth remembering - this difference is a matter of habit (and of initial natural inclination), not a decree from heaven. There are people who can switch between the different roles and the different ways of thinking (I seem to recall someone saying it takes her about a week to make that mental shift).

Three things that stayed with me from Liz's talk:

  1. There is an important place for the "pessimistic" approach and for asking the hard questions at the start. It's part of what lets us run experiments that are safe to fail (which, as you'll recall, was the topic of the talk).
  2. I, too, have my limitations. It's nice for my ego when I ignore them, but if I'm aware of my weak spots I can do something to compensate for them, or to make sure they come into play in the right place (for example: when I sat down with one of our developers over the architecture of a new component, I decided to listen all the way through and write down my questions. By the time he finished, half of them had already been answered, and the questions that remained got more attention and were more relevant. How is this related? Because when I approached the task aware that it didn't have to be a perfect solution right from the start, the small problems I spotted didn't block me from examining the whole picture).
  3. I need to practice the "let's just do it and see what happens" approach a bit. True, I'm a software tester, but everyone has problems to solve, and sometimes you need to know how to set aside all the fears and blockers in order to do something. I can do this to a certain degree; it would be good to get a bit better at it. As long as I don't neglect my professional pessimism, it's a tool worth having in my toolbox.


Tuesday, February 14, 2017

ETC 2017 - the Good, and that which can be improved

(No Hebrew for this one; it's quite long as it is, and the main audience for this post is the organizers of ETC, none of whom, to the best of my knowledge, speaks Hebrew)

At first, I thought of naming this post "ETC - the good, the bad and the ugly", only there was nothing ugly there, and the things that weren't as good are still far better than "bad". ETC was an extremely good experience.

This is my list of things I liked, and liked less, about the conference. Since the organizers went quite far to collect feedback and understand what worked and what didn't, I want to try and help by pointing out the things I remember and providing detailed explanations that will hopefully be more useful than a cryptic title on a sticky note, and help convey the "why" as well as the "what".
So, without further ado - the list.

Things I missed, or think can be improved
(I'm starting with this since I believe people tend to remember the last thing they read, and the conference as a whole was very good)

  • Conference slack channel - On my first night, before the conference started, I wanted to find people to catch dinner with, but had no way of knowing who would be attending ETC and who was already there. I managed to find a tweet connected to a name I could reach through the testers.io slack channel (I don't have a Twitter account). Having a public place to shout "hi, does anyone want to meet?" is very useful, and not everyone at the conference will overcome the discomfort barrier of contacting a specific person they don't yet know. Posting a general message on a designated board is a bit easier.
    In addition, while last year there wasn't much discussion on the slack channel, there was enough to inspire me to prepare a subject for the open space (this year I prepared something the day before, since I knew how it worked, but I didn't have any sort of heads-up).
    Plus, it was a very good place for administrative messages.
    While at the conference, I don't really need a digital sphere for discussion, since I'm having a lot of interesting discussions face-to-face; but having a closed, defined area where I can contact conference members is still valuable.
    "But there was a twitter hashtag", one might say - well, it's completely different. Twitter is a public identity one either has or doesn't, while slack is a private account created for a single purpose. Only other channel members can see this account, and its purpose is narrower. My testers.io slack account, which serves a fairly wide purpose, is not used to upload cat pictures or show off a new car/shirt/haircut. It revolves around testing. Using Twitter as a communication channel excludes me from the discussion.
  • Video recording the talks - The talks at the conference were mostly great. At times it was hard to choose which talk I preferred. Having the ability to watch a talk later is a good way to catch up. Yes, I've heard (some of) the reasons for not recording the talks, or some of them, and they are important, but I think in this case some sort of compromise could be made to enable video recording, at least of some of the talks. My reasons for wanting the talks recorded are:
    • Since "the rule of two feet" was recommended, I found myself at one point leaving a talk I liked and joining a talk I liked a lot. I'm happy with my choice, but I'm now left with two gaps - I am curious about how the talk I left continued, and wonder what great ideas I missed in the talk I joined. Knowing that there would be no video increased the price tag of leaving a talk and reduced the return on joining one in the middle.
    • Talk recordings are the best conference promotion there is. So far, I've attended two conferences - ETC (twice) and CAST. The reason I was determined to go to CAST is that in 2015 I was watching CAST live and saw this. I'd seen some CAST talks in the past that made me think "it could be nice to be there", but when I saw this talk it shifted to "I must be there". On the other hand, when I watched STARwest live (or was it STAReast?), I knew that I didn't want to go there. TestBash talks I've seen have made me want to do two things: join the Dojo once I have time, just to watch the talks, and hopefully attend it in the future.
    • Feedback to the speakers - I don't have a lot of experience in public speaking. The little I have, from a local meetup, taught me that I'd rather have a video so I can see my mistakes and learn from them. Yep, it's not always fun watching myself on stage, but it is useful.
    • Sharing - Some of the talks are "ok, nice", but some of them are "I must show this to...". I've shared Linda Rising's keynote from last year with at least 7 people, and referred people to the talks of Emma Keaveny, Abby Bangser and Franziska Sauerwein (which I did not even attend at the time), as well as to Anne-Marie Charrett's keynote from that year.
    • Re-visiting talks and ideas - Some talks left such a great impression on me that I just had to watch them again, whether to refine my revelations from the talk, to revisit some ideas so I could pass them on back at home, or just to re-live the experience. There are at least four talks from this conference I want to watch again.
  • Matching talks to their description \ title - When I first looked at the schedule this year, I was thinking "Really? Those don't look very interesting; I have at most one interesting option per time slot, and sometimes not even that". During the conference, while hearing about some of the talks (from the speakers beforehand or from participants afterwards), I learned this impression was completely wrong - most of the talks were more than simply good. I just couldn't see that while reading. I'm not sure how to fix this - Nordic Testing Days asks speakers for some expected takeaways. That might help.
    Another issue I experienced was related to the meta-tagging (testing \ craftsmanship \ automation \ other). I walked into a talk I expected to be about automation (I even dragged someone with me), and the talk was about something completely different. It was a decent experience report, but a complete mismatch for me.
  • Full talk descriptions printout - I'm a bit torn about this. I don't really like the idea of using more paper than necessary, and most of the time the descriptions go unread, but there were times when reading the description of the next talk would have been useful. Yes, it's online, but opening a browser on my phone and trying to read from there takes a long time, and it just didn't occur to me at the time. Maybe a board somewhere with the descriptions printed once on a big page, or maybe I should just learn to use my phone.
  • Speed meet - It was great to see the experiments done to create a more communicative conference. I think this particular experiment failed - the short time didn't really allow meeting the people, or even determining whether or not I wanted to talk with them further. Out of the people I met, there was one whose face I could remember (and we actually talked a bit at later events, though I don't know whether to attribute that to the speed meet or to the fact that we hung out a bit with a third common acquaintance). Even if I attribute that single success to the speed meet, one out of six or seven isn't a good return on my time at a conference where every other activity (that is not a talk), including a coffee break, has me meeting more people and having more meaningful discussions with them.
  • Questions - There was a deliberate decision not to allow time for questions, with decent reasoning behind it. I missed this part. Last year, coming up to the speaker with questions meant we were almost constantly late for the next talks. This year, most talks had time for one question or none. Comparing this with CAST, where the discussion is structured (with K-cards), I really felt the question time there was a good way to spend my time.
  • Only one slot of workshops - Yes, "we didn't have enough" is a good sign. Still, I find the workshops a refreshing point in the day. Having one each day last year was great; having the same number of workshops in a single slot meant there weren't any on the second day, and that choosing between them was more difficult.
  • Conference day starts late - Ok, it's not very late; it's a very reasonable time. Another thing I liked at CAST, and think could be adopted, was having a lean coffee session at 7:00 (or something of the sort). Yes, it's a bit difficult to get up that early after an interesting night, so not many people will come, but I really liked it there. I thought about trying to gather people around, only finding a way to publicize this was a bit difficult (blimey, did I just say slack again?). In retrospect, I probably should have just posted a sticky note or shouted in the main hall. It's a great way to start a conference day.

Now, some of the good parts
  • The venue - What I really liked about the place was that all of the rooms were around the main hall. No corridors to hide people from each other, no need to run a long distance to get to a talk, and immediately after getting out of a room I was in the middle of the activity, where all of the other participants were having a cup of coffee.
  • All talks were great - Ok, I'm exaggerating a bit. There's a chance there was a talk that wasn't as good, but the care that was put into choosing the speakers and subjects was clearly visible. During the second conference day, I had a sequence of magnificent talks that started with the opening keynote and ended with the closing keynote. The only event that broke this series of great talks was the open space, and only because it was not a talk (it was great). I was, and still am, very impressed by the quality of the talks I've seen.
  • Aligning actions with declarations - The conference organizers are very clear in stating their intentions and areas of care - they want a conference where experts and practitioners meet and share information (and not salespeople pitching their product). They care a great deal about having a conference about testing that is welcoming for non-testers, and developers in particular. They care about their speakers and see them as partners, and they are here to change how conferences are run by setting an example.
    Each and every one of these claims is addressed with brave and well-thought-out actions. Inviting developers to speak at the conference makes room for developers in the audience; strict filtering of the talks made sure that nothing that looks like a sales pitch ever got near the conference; and the organizers' involvement in the community enables them to target practitioners and ask them to come and speak. By speaking with each and every person who submitted a talk, they made it easier for new speakers to submit; paying for speakers' flights and hotels, and doing so in advance, removes a great barrier for speakers who cannot afford to pay that money just to get to the conference. Creating scholarships to help speakers at other conferences helps diversify those conferences, and carries the ideas that come from this conference to others.
  • Choosing good causes - Aligning actions with declarations only matters because the causes this conference stands for are good ones. They focus on values I can strongly relate to, and that are important to push forward.
  • Everything was masterfully organized - This was true last year, and it was true this year as well - as a participant I could not see any crisis that required the organizers' attention - so even if there were any, they were dealt with very professionally, without creating a fuss.
  • Hoodies! Instead of a conference T-shirt, we got a hoodie. Much more useful, and suitable in many places where a T-shirt would be inappropriate. Next time - let's print the logo on the front as well, so that it will be seen even when carrying a backpack.
  • Focus on conferring - this relates to standing behind your statements: the effort made to foster discussion and enable people to meet and talk was very visible.
  • Showing care - All of the effort that went into matching words to actions sums up to one thing - the organizers care, and it shows. This sort of care makes me happy to pay the entrance fee for the conference, since by doing so I can support their causes just a tiny bit.
  • Speed meet - Yes, this was a good thing too. Even though I didn't like the event itself, I really admire the fact that experiments are being done. Some experiments fail, but the way to push forward and learn is by trying something that has uncertain results. 
  • Aligned timetable - All events that start together, end together. As a participant I like this a lot, since I don't have to weigh a long event against two shorter time-slots. Having to choose with a ratio of one-to-one is difficult enough. 
  • Conference party - Twice makes a tradition. It's a great way to meet and talk with people without having a cool event that is starting in five minutes, and it's an awesome way to help people who are more shy to actually meet others and have fun in the evening. Spending an evening together is by far better than finding dinner alone and going to sleep early. Having the party for the whole conference means that everyone is welcome, and no one gets pushed aside.
  • Breaks between talks - Initially, I thought there wasn't enough time between the talks, but then I looked again at the schedule - there was a very good break time after each talk, and it's only because I was having fun that the break time flew by. So great time, and great breaks. 
  • Collecting feedback - The retrospective by the end of each day, the cute app to mark satisfaction from the talks - The organizers are constantly trying to improve, and every participant can add something to that. I simply find this cool. 

Sunday, February 12, 2017

ETC 2017, days one, two & three

(Short summary - ETC was super awesome, down below are my experiences from the conference days)
Previous: ETC2017 tutorials day

Another day of ETC ended, and shortly after it came the end of the second, and despite my best intentions I could not get a blog post in between the two.
The first day was, to say the least, packed. It started with a great opening keynote (slides) from which I took two takeaways - unit tests can catch as much as 77% of "production failures" (source), and dead code can sometimes be worse than you might have thought.
Following the keynote, I attended a really inspiring talk by Rosie Sherry about marketing the testing community and its activities (by the way, if you are not familiar with the Ministry of Testing, please correct that).
After the speed meeting (which was a bit too shallow and quick for my liking), I had an interesting talk with Bettina Stühle about a difficult client she was working with and what her goals and approaches were, followed by a short chat with Adina & Nicolai on JUnit 5.
Then I ran to the exploratory testing workshop - which proved to be great fun, and along the way showed me some good pointers I need to pay attention to while testing. Shortly after that (we did have some time to grab a cup of tea) came Joel Hynoski's talk, titled "Engineering for quality: using brain power rather than willpower", which was "quite good, but..." - In retrospect, the talk was presented well, and I like both Joel's style of presentation and the topic he chose, but that's not what I expected from the schedule. Neither the title of the talk, nor the track it was on (automation), nor the description on the website bore any resemblance to what was actually presented - I came to this talk expecting to see how tools were used in a smart way I could learn from, and got a story of "so, we did this and it worked nicely". There was no dwelling on "how to look for an idea that will work for you", not a glimpse of "how the automation is built"; there was "we did some gamification around code submits", which was a cute anecdote, but I did not see how it connected to a coherent & clear message.
Anyway, as Zelazny once wrote - shift happens. By the end of the talk we shifted to a lean coffee session that was pretty much as expected - great fun and an opportunity to meet some new people, and to talk with people I had already met.
At the end of the conference day, Nicola Sedgwick spoke about communication, and space, and prioritising, and setting expectations, and cheating on sports apps, and estimating with multi-faceted dice, and people in chainmail working together. Or, if one wants to put all of that under a single title - getting inspired by whatever is around you and learning from it. A really hard talk to summarize, but the message sinks in almost unconsciously.
That was the end of the formal conference schedule for day 1. Naturally, this only means that the other activities just began. On the way to the conference party, I had a really nice dinner with Kira & Bettina, and then we proceeded to the party itself, where we joined a whole bunch of people and had a great time.

Day 2 began pretty much like day 1 - I woke up too late to actually complete a blog post (one I had stayed up too late to write after returning late to the hotel, since I was having such a great time with people), and if the first day had been very good, the second was nothing short of awesome.
It started with a powerful keynote from Gitte Klitgaard in which she got everyone to dance, which was every bit as fun as it sounds. Next was Adi's demo talk about different kinds of automated tests, where he delivered a clear view of how to write good tests and what target each kind is useful for achieving. Plus, he showed some of the cleanest pieces of code I've seen (so I have some refactoring to do in my own code to start practicing writing like that). Following this came a great talk by Matt Lavoie on usability testing, which was spot on (for those keeping track, that's 4 great talks in a row). This one was followed by a talk by Liz Keogh, who, besides being an extremely talented speaker, is the first person I've seen speak about the Cynefin model in a useful way. And despite a strong initial objection to her message (that there's an inherent difference in the way testers and developers think of things), I find myself pretty convinced (more on that will probably appear in a separate future blog post). It's not often that I can hear my jaw drop during a talk, but this was definitely one such case. After that I went to hear a security talk given by Juha Kivekäs, which was very well presented. However, since software security is a field I'm interested in and have some basic background in, I found I was not the proper audience for it. I invoked the rule of two feet and went to hear Alex Schladebeck's talk on how to build proper UI automation. Listening to that talk was very hard on me - at just about every other sentence I felt an urge to stand up and clap out loud, since she was carrying a message that needs to be heard more often, and she delivered it far better than I could have. Speaking about this later with Neil, he put it into a very concise sentence - people are telling testers they need to code, but not enough are telling testers they need to code well.
After such a successful row of talks that were not simply good but rather brilliant, it was time for the open space. Inspired by Juha's talk, I offered people a quick introduction to using a web proxy. The previous evening I had downloaded OWASP Security Shepherd, and we played with it a bit. Just to make things easier on the audience, I used Fiddler rather than one of the more powerful attack proxies such as ZAP or Burp Suite, since it has a much easier UI (and fewer options to confuse new users). We worked through some exercises and got to play with some XSS as well, in a sort of mob participation (there was only one driver, though). After this session, I went to participate in a proper mob programming exercise where we tried to learn a bit of Kotlin. Well, we had some environment issues, both on the Kotlin side and in using a German keyboard on a computer that forces you to use shortcuts if there are any. We didn't manage to get any code running by the end of the 30 minutes, but we did practice mobbing, learned quite a bit about how mobbing works, and picked up a little bit of Kotlin. Plus, I enjoyed myself. The next topic on my open space schedule was another one I proposed - tools as eye openers. Sadly, no participants came, but with all of the other great subjects happening at the same time, I was just as happy to become a butterfly and hop between discussions - I joined Sharanya's discussion on the difference between functional and integration tests, where I felt I had something to contribute (especially after Adi's talk earlier), and when that discussion came to a conclusion I listened silently (or at least, I recall that my intention was to be silent) to a discussion on mobbing, what it's used for, and how to present it at home. It was interesting.
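For readers unfamiliar with the class of bug we were poking at in the proxy session: reflected XSS boils down to a page interpolating user input into HTML without escaping it. This is a purely illustrative sketch (the function names are mine, and it isn't one of the Security Shepherd exercises); it just shows why escaping matters:

```python
import html

def render_greeting(name):
    # Naive rendering: user input is dropped straight into the markup,
    # so a crafted "name" becomes live script in the victim's browser.
    return "<p>Hello, " + name + "!</p>"

def render_greeting_escaped(name):
    # Escaping the input neutralises injected markup before it reaches the page.
    return "<p>Hello, " + html.escape(name) + "!</p>"

payload = "<script>alert('xss')</script>"
print(render_greeting(payload))          # the script tag survives intact
print(render_greeting_escaped(payload))  # the tag is escaped into harmless text
```

An attack proxy like Fiddler simply makes it easy to tamper with the request so you can feed in a payload like this and watch whether the response echoes it back unescaped.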

It's only appropriate that such an intense day would be closed with a powerful keynote by Fiona Charles, who spoke on the future of testing (TL;DR - the future is ours to make).

And we all know what happens when the conference talks end, right? I looked for someone to have dinner with, and talked with Bettina, who said she was meeting one or two others and invited me to join. And so we went, 20 people strong, to have something nice to eat in a quiet place. After that we went to have a drink and talk some more, up until the point where I knew I had to go to sleep.

Day 3 (for those keeping track, yes, it was a two-day conference, and the tutorials were day 0) was to be a more relaxed day of touring the city for a while. My heuristic for making the day fun was simple - join Bettina, who seemed to know what to look for. We strolled through some parts of the city, visited the architecture and design museums (which are two small museums close to each other), and had some time to talk and enjoy ourselves.
After getting back to the hotel and resting for a while, it was time for dinner. So I messaged Neil, who was still in town and just on his way to find something to eat as well, so we met, and he contacted one of the other twitterati, and so we found ourselves walking for ~20 minutes to meet some others. As happens quite a lot for me at such events, I got to listen to some quite interesting stories (at one point Joep explained some of the peculiarities of Belgium's political system), and at some point we were all (or at least, those of us who remained at the time) listening to Maaret and Llewellyn speaking with Damian, who's part of the team organizing QA or the Highway, about conference organizing, explaining some of the choices they made and the priorities they have. Occasionally one of the others had some idea or experience to contribute, but for me it was mainly listening to what happens "behind the scenes", which was fascinating. Hearing the principles at the base of this conference, and learning how the actions and choices of the organizers match those principles, reinforced a very important feeling - I was happy to pay for a conference that takes such great care of its speakers and puts such strong emphasis on making everyone feel welcome and on providing opportunities to actually meet people, not only listen to some talks.

And so, the conference is at an end. I extended it as much as I could (meeting Joep, Gitte, Bettina and Damian for breakfast at the hotel, and then sharing a cab to the airport with Joep), but all good things must end. I had a great time, and absorbed quite a lot (some of which may, after processing, form into something I can share clearly enough back at home).
See you all again next year in Amsterdam!

Thursday, February 9, 2017

ETC tutorials day

Remembering names is hard. Starting tomorrow, I'm writing them down as soon as I ask, even if it means walking around with a notepad in my hand all of the time.

Choosing a tutorial was really tough. I mean - besides "Starting with Selenium", which is a very specific thing I'm already comfortable with, every workshop was really appealing. And I had to choose which one to attend in advance.
I went to the mob-testing tutorial, since mobbing is something I think can help my team in some situations and I only need to figure out where and how it is more appropriate to use.
I came to the workshop with a clear goal - to maintain some sort of meta-level perspective on the process and figure out which parts of it would suit my team and which would meet the most resistance. Simple, right? I completely failed at that, since mobbing is a very immersive activity, where the pace and the goal keep you on your toes, and the challenge of doing something AND communicating can be overwhelming.
We were a huge mob - 14 people or so. This meant that even with short rotation cycles, people spent quite a long time as "co-navigators", and that hearing the co-navigators was difficult simply because of the distance. The large number of people also made improving a lot harder and slower, and I'm not sure we got any better during the day.
On the bright side, I got exposed to a wide array of things that can be done by a mob. We started with somewhat freeform testing of an application, followed by testing with a charter in mind; we then proceeded to do some TDD and then wrote a Selenium script.
It was extremely interesting to notice how different people react to the same situation - some froze at the navigator's stand, and some were comfortable sharing their ideas from the far end of the room. A point worth noticing, for me, is that this format excludes the shy and the introverted if the others don't make time for them to speak and participate. Speaking with some of the participants over lunch, it also became evident that mobbing is easier to "sell" as a training tool - we'll work on this together and all have a learning experience - but when "actually working" it is more readily perceived as waste.
One experience I was missing in the workshop was that of a highly functioning mob - I guess I can imagine how it might be, but imagining and feeling are quite different from each other.

After the tutorials, each went their own way. I found myself tagging along with Helena (and some others) to a nearby pub. The two of us went to drop our stuff at the hotel first, and when we got to the pub, the others were nowhere to be seen. But we spotted Liz Keogh, Abby Bangser and Joep there. The conversation went in many directions, with Abby and Liz going over some of the takeaways from Liz's workshop. Then, somehow, I got to experience one of the most effective explanation-by-demonstration acts I've seen. The topic was the difference between the thinking habits of testers and developers, and how this affects a certain exercise in Liz's workshop. It really puzzled me, so I asked for some elaboration. Two minutes later I was bombarded with a series of questions, leading to this "Oh... now I see what you meant". Really amazing.

Then, sadly, Liz, Abby and Joep left to attend the speakers' dinner, which left the three of us (Helena, myself and a developer colleague of Helena's) to find ourselves dinner, where we spoke a bit about the mobbing tutorial we were in and flowed to other subjects - from automation to train-based conferences (that might have been the beer talking, but it does have potential), to trying to explain to the non-tester at the table what we mean when we say "heuristics" by comparing it to design patterns (and then just going over some examples that we use).

All in all, the only drawback of this day is that I will probably sleep less than I should.
Day 1 is over, waiting for the next one to come by.

Wednesday, February 8, 2017

ETC is starting early

So, ETC starts today (or on Thursday if you're not attending the tutorials), but people were already here yesterday, so why not start meeting them before the conference?
After taking most of the day to move from the airport hotel (I got there at 2 AM) to the hotel I'll be staying at for the conference, and after settling in (grabbing lunch, braving the cold outside), I tried to find some of the people.
The problem is that people these days communicate by tweeting. I don't own a Twitter account and can't afford the noise and distraction it generates (and, knowing myself, creating one "just for the conference" is a slippery slope I don't want to venture onto). Unlike last year, there wasn't a conference Slack channel to find people in - or at least not one that I know of yet. So I tried the testers.io Slack channel, with minimal success. However, I did find a tweet (I can read tweets online) by Helena Jeret-Mäe indicating she was in the area. The cool thing is that Helena was also on the testers.io Slack channel, so I used it to contact her. Luckily, by the time I did, she was already meeting a bunch of people and invited me to join. There I met some folks I had encountered before at conferences - Neil Studd, Dan Billing, Simon Schrijver - and some others I met for the first time, among them (I don't remember everyone, sorry) Fiona Charles, Damian Synadinos, Angie Jones, Helena, Joep Schuurkes and a couple of others (again, sorry for not remembering names). During most of dinner I spoke with Damian and Simon: Damian spoke a bit about the connections he sees between improvisational theatre and fundamental personal skills, and Simon showed us a cute visual bug in a Twitter app. Afterwards, back at the hotel (where, apparently, we are all staying), I ended up sitting with Helena and Joep, speaking just a bit about everything. I still need to work on my listening skills though, as several times during the conversation I caught myself waiting for a pause to reply in, instead of actually listening (there's a quote somewhere about people not really listening but rather waiting for their turn to speak - I try to avoid doing that, and sometimes I succeed).
That was a great start to the conference - I can't wait for it to officially begin.

Sunday, February 5, 2017

No content, just excitement

Tomorrow I'm flying off to Finland, headed for ETC2017. After the experience I had last year, I'm eagerly waiting for what's in store this year.
If you happen to be there - come and say hi.

Thursday, February 2, 2017

Non-coders and unit tests

(Hebrew version will be at the end of the post this time)

During Tuesday Night Testing, one of the subjects was "can non-coding testers contribute to unit testing?", and the discussion around the question soon moved from "can they?" to "how can they?". Personally, I feel this is something that should be available out in the open, since I hear this question (or similar ones) often enough. I also want to take the opportunity to elaborate a bit beyond what fit the actual discussion itself.
For the time being, I will put aside my urge to rant on the term "non-coding testers", and go directly to the subject.
The first thing that must be said is that it helps quite a lot to understand code and to be able to write simple code in whichever language is at hand. Understanding things such as functions, classes, variables (and their scope), and perhaps a thing or two about data structures, is not strictly necessary to read code, but it sure helps a lot in understanding what you are reading 1.
The second thing that is important to know is that unless you are working on some assembly code, good code reads very much like English, and bad code is like reading the Jabberwocky - you can still make sense of most of it. It's only the really horrifying code that is completely undecipherable.
Armed with those two assumptions, there are several ways to contribute to the unit testing:

  1. Review unit tests for coverage: No, not code coverage - there are automated tools for that. Requirements coverage is perhaps more suitable (but still not quite it). The idea is to read the titles of the unit tests (you don't even need to read what the test actually does; it should be clear from the title) and see that they are testing the functionality they should. In order to do so, one must first understand the role of the unit under test (usually it will be a class or a method, sometimes a somewhat larger component), and then make sure that every requirement (written or implied) that is relevant at that level is covered by a unit test.
    For example: I was reviewing the unit tests of a class that was meant to validate a token. The token had 3 parts: a timestamp, a user_id and a signature. I saw some tests around the time - valid time, expired token, malformed timestamp. I simply asked "what if the timestamp is in the future?", and we added the test, only to find out that the code didn't deal with it the way we wanted.
  2. Use it to do a code review: While reviewing the unit tests of the same feature, I noticed that the tokens actually had another part in them. I don't recall what it was, but I noticed there were no tests around it. When I asked, the developer said that there was no need for unit tests since that part is not processed. If it's not processed, why are we sending it? We fixed the code on the spot.
  3. Suggest ideas for test cases: As a tester, you are probably more familiar than the developer with the system's scope, and know better which parts of the data go where. Leverage this knowledge to suggest ideas: "I know that the component you are working on gets input from the component over there. When there are problems with it, we get -1 instead of the number between 0 and 1000 we expect", or "those 3 config files must be aligned, let's write a test to make sure they are" (it's more of an integration test, but still useful). You will also be surprised at how innovative simple ideas such as boundary value testing will seem to some of your developers.
  4. Be a rubber duck: Rubber ducking is helpful, but speaking to a rubber duck is a bit embarrassing at first. So you can help just by asking the developer writing the unit test to show you around, and asking an occasional "why did you do that?" or "what are you doing here?". During your duckling career, you will be able to spot stupid mistakes the developer is making and help fix them fast.
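To make the coverage-review idea from point 1 concrete, here is a minimal sketch of what that token review might look like. The validator and its signature are hypothetical (invented for illustration, not the actual code from that review); the point is that reading just the test titles is enough to spot the missing "future timestamp" case:

```python
import time

# Hypothetical validator, standing in for the real class from the review.
def validate_token(timestamp, user_id, signature, max_age=3600):
    """Accept a token only if its timestamp is well-formed, not expired,
    and - the case the original tests missed - not in the future."""
    if not isinstance(timestamp, (int, float)):
        return False  # malformed timestamp
    now = time.time()
    if timestamp > now:
        return False  # timestamp in the future
    if now - timestamp > max_age:
        return False  # expired token
    return True

# The kind of test titles a reviewer would read for coverage:
def test_valid_token_is_accepted():
    assert validate_token(time.time() - 10, "some_user", "signature")

def test_expired_token_is_rejected():
    assert not validate_token(time.time() - 7200, "some_user", "signature")

def test_malformed_timestamp_is_rejected():
    assert not validate_token("not-a-number", "some_user", "signature")

def test_future_timestamp_is_rejected():
    # The question "what if the timestamp is in the future?" becomes this test.
    assert not validate_token(time.time() + 3600, "some_user", "signature")
```

Scanning the four test names above - without reading a single implementation line - is exactly the kind of coverage review a tester can do: the last one is the test that was missing in the story above.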

So, now you know what can be done. The only question is - how to start?
Ideally, you'll have the full cooperation of the developer, who sees value in your feedback, and all you have to do is ask and make some time for it.
Other times, the developer won't seek out your help, but will be happy to help you if you just ask "can you show me around the unit tests so that I can learn?" 
A great idea raised by Katrina was to make this knowledge transfer bi-directional: "let's meet and share what was done, and what I'm about to do. Maybe some of the things I was planning to do are already covered (or can be covered easily) by the unit tests, and maybe I'm missing a point that you are concerned about". By putting both of us under inspection, it makes the tester and developer more equal, and reduces resistance, since it's not a tester critiquing the developer, but two colleagues collaborating.
But sometimes, especially if you are perceived as a "non-programmer", programmers will look down on you and politely ask you to scram out of their way ("but you don't code", "you won't understand", "I don't have time to walk you through this", etc.). In such a case, do the review by yourself. You have a source control system - just look at the code there and approach your developer with a question: "Hi, I saw in your unit tests that you don't check for username format, is this covered elsewhere?", or (and this is a real case) "I looked into the algorithm you've checked in, and I think there's a bug for this input, could we add a unit test for that?" What's that you're saying? You don't have access to your source control? Go and watch this. One thing to take care of, if you go down that path, is to make sure your developers know you are doing this, so they won't think you are "sneaking up on them".


I had a great time discussing this and the other subjects with Andrew Morton, Cassandra Leung, Claire Reckless, Katrina Clokie, Tracey Baxter and the event organiser Simon Tomes - thank you all for a great hour. 



1 ↩ If you can afford the time, take some time to complete a MOOC on programming (here, there) or watch this YouTube list (about 30 hours; I skipped the first video, which is irrelevant), or use one of the many other ways to learn the basics of programming (Hour of Code, Khan Academy & Codecademy are some easy examples). Please note - I'm not suggesting (here) that you learn to code, just that you learn to read it, which is a whole lot easier.





-------------------------------------------------

Last Tuesday I participated (again) in a Tuesday Night Testing meetup, and as usual, it was an excellent session. One of the subjects that came up during the meetup was "can non-coding testers contribute to unit testing?", and the discussion quickly moved from "can they?" to "how can they?". Since I hear this question, or variants of it, often enough, it seemed worthwhile to make the subject available to those who didn't take part in the discussion. Besides, it's also a good opportunity to expand a bit beyond what fit the framework of the discussion there.
For the time being, I'll put aside my urge to rant about the very existence of the term "non-coding testers" and get right to the subject.
The first thing to put on the table is that although coding skills are not essential, it helps a lot to know a thing or two about how code is written and how it works. A basic understanding of things like functions, classes, variables, and perhaps a thing or two about data structures, definitely adds to your ability to understand the code you're looking at.
Second, it's important to know that unless the code you work on is written in assembly, well-written code can be read almost entirely like plain English, and even from bad code you can still extract meaning, as from the poem "Jabberwocky" (in whichever translation you prefer) - it's only truly awful code that looks like gibberish.
With these two facts, let's look at a few ways to contribute to the unit testing effort:

  1. Review the unit tests for coverage: No, I'm not talking about code coverage - there are automated tools for that, and they are better at it. I'm referring to something closer to requirements coverage: the idea is to read the titles of the unit tests around a specific subject (there's no need to read the implementation; at this stage we trust the developer that the test indeed checks what it claims to check) and see whether the tests cover all the interesting cases.
    For example, a while ago I sat down for such a review with a developer. The class we examined handled a token composed of several parts - a time, a username and a signature. There were all sorts of tests around the time: invalid format, valid time, expired token, almost-expired token. What wasn't there? A token whose timestamp is in the future. And what did we discover? Not only was a test case missing, the code didn't handle that situation at all.
  2. Use them as a way to review the code: This is quite similar to the first case, but slightly different. Here we look at what is tested in order to figure out what wasn't written, or what is redundant. For example, while going over the tests of that same feature, I noticed an additional field with no tests around it whatsoever - not even format enforcement. When I asked why, the developer answered "I don't process that value anyway, so testing it gains me nothing". My follow-up question was "so why are we sending that value at all?" We removed it.
  3. Suggest test cases: As testers, we usually know the wider context better - which component talks to which, and what kind of surprises to expect in a given context. We can use this knowledge to suggest interesting test cases. "I know that this component, which is supposed to supply random numbers between 1 and 1000, doesn't always respond in time, and then we send minus 1 to signal the error - maybe we should check how what we're writing here copes with negative numbers?", or "there's a strong connection between these three files, maybe we should write something that goes over them and verifies they are still in sync?" (true, this is a bit closer to an integration test). Besides, you'll be surprised how impressed certain developers are by basic things like equivalence classes and boundary testing.
  4. Be a rubber duck: Heard of rubber-duck debugging? It's a fairly effective technique for dealing with code, especially tangled code. But talking to a rubber duck is a bit embarrassing, especially at first. Explaining things to a tester who doesn't understand a word of programming, on the other hand, is completely legitimate. This way you can help simply by sitting quietly and asking a well-timed "wait, what's actually going on here?". Along the way, you'll be able to spot silly mistakes the developer makes and verify what the unit tests cover.
These, broadly, are a few ideas for ways to contribute to the unit tests without writing code. Note that all of them require communicating with whoever writes the unit tests.
The question that remains is "how do I start?"

If you're lucky, your developers appreciate you and will be happy to have your help while writing the unit tests, so all you'll have to do is ask, and set aside time for it.
In other cases, the developers may not really feel they need help, and may even prefer to work alone, but your relationship with them is still close enough that they'll be happy to help you learn if you simply ask "do you have an hour or so to show me the unit tests, so I can learn a bit about what's going on there?"
Another great idea that Katrina raised was to approach this as an exchange of information rather than something one-sided - the developer describes what was done and what the unit tests cover, and the tester describes what is planned to be tested (and what has been tested so far), and both sides look for points to improve in both processes. This way the developer is not under one-sided criticism from the tester, and gets quick feedback on points that may have been missed ("You're going to test that?? I didn't think of that direction at all, give me a second to fix it"), and the tester, in addition to the information about the unit tests, also gets early feedback on the planned tests and on the points that worry the developer.

But sometimes, especially if you're perceived as "non-programmers", the programmers may look down on you (out of habit, with no ill intent, mind you) and try to wave you off with a variety of excuses like "I don't have time" or "forget it, you need to know how to program to understand this". In such a case, you can still do the review on your own by looking at the source control system - you can read the existing unit tests, and the code attached to them if needed, and then approach the developer with questions. Things like "listen, I noticed there's no test for invalid input - is that intentional?", or, if you're lucky (and this is a real case), "listen, I looked at the code, and I think there's a problem with this input - could we add a unit test to cover it?"
No access to the code? Go watch this talk, quickly.

I very much enjoyed talking for about an hour with
and of course with the event organiser, Simon Tomes. Thank you all (if any of you read Hebrew) for the discussion!