Thursday, August 25, 2016

Day of Command

Command and Conquer


echo "olleh#dlrow~" |rev|cut -d~ -f2|sed "s/#/ /"|awk '{print $2 " " $1}'
Can you read this? Great!

(By the way - some code, so no Hebrew)
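If you'd rather not untangle that in your head, here is the same pipeline built up one stage at a time, with each stage's output shown as a trailing comment:

echo "olleh#dlrow~" | rev                                                          # ~world#hello
echo "olleh#dlrow~" | rev | cut -d~ -f2                                            # world#hello
echo "olleh#dlrow~" | rev | cut -d~ -f2 | sed "s/#/ /"                             # world hello
echo "olleh#dlrow~" | rev | cut -d~ -f2 | sed "s/#/ /" | awk '{print $2 " " $1}'   # hello world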
When I got out of university, I had exactly zero familiarity with the Linux shell, and found myself in an environment that relied heavily on this exact user interface. I learned pretty quickly that "less" is a way to read a file, and that "ls" is the equivalent of "dir", but that was mostly it. It took me several months to start seeing the vast world of command-line utilities that come built into Linux (or are easily available), and since then I get a reminder every now and then of how powerful those utilities are. Lately I got a quick series of such reminders, so I figured I should pass the message around and remind everyone else of this fact as well.
What I really like about those utilities is that almost no matter which one you choose, it deserves a separate post all by itself (and, while we're at it, here's a great post on sed). But despite having some complexity to dig into, most utilities are simple to use and have a single goal that can be described in a short sentence (with the exception of "awk", which seems to do way too much). The combination of simple and powerful makes them a great tool and a huge time saver.

So, inspired by a short session led by Natalie Bennet during TestRetreat, here is a short list of my favorite command-line tools, in no particular order.

  • XmlStarlet - this tool is my most recent reminder of why I really like those seemingly small utilities. It does not come out of the box with every Linux distribution, but it's a powerful tool for editing XML files. If you have ever tried to edit XML in a programming language (such as Java), you know it can be a bit cumbersome. I had a list of identical elements whose IDs I wanted to make unique. I wrote something very similar to the following three lines, and lo and behold - the file was updated just the way I wanted (note - sometimes the command is called "xml", depending on how you installed it):
    for i in {1..30}
    do 
       xmlstarlet ed -L -u "/root/testElement[$i]" -v text$i testFile.xml 
    done
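    If the unique value lives in an "id" attribute rather than in the element's text, the same loop works with an @-prefixed XPath. A minimal sketch, assuming elements shaped like <testElement id="...">:
    for i in {1..30}
    do
       # update the id attribute instead of the element text
       xmlstarlet ed -L -u "/root/testElement[$i]/@id" -v "id_$i" testFile.xml
    done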
  • xmllint - while I'm on the subject of XML tools, xmllint is what I use to search within an XML file, or to validate that the file structure was not corrupted.
    # if there is no output from this line - the XML is valid.
    xmllint --noout myFile.xml
    #opens a shell where I can ask to print a specific element described by an Xpath expression
    xmllint --shell myFile.xml
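    Newer builds of xmllint can also evaluate an XPath expression without opening the shell. A quick sketch (the element name is made up):
    # print the text content of the first testElement
    xmllint --xpath "string(/root/testElement[1])" myFile.xml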
  • grep - yet another powerful tool. In principle, it does a really simple thing - it filters text to help you find stuff. By default you get the whole line that matches the search condition (which is a regular expression), although you can set it to output the lines that do not match the condition, or to output only the match itself.
    This looks like this (note the -e, which makes echo treat \n as a newline):

    echo "hello world \n and to you too"|grep hello  --> hello world
    echo "hello world \n and to you too"|grep -o hello  --> hello
    echo "hello world \n and to you too"|grep --color hello  --> *hello* world (the asterixes are to mark "bold")
    echo "hello world \n and to you too"|grep -v hello  --> and to you too
  • find - simple, yet powerful. With the correct flags you can find a file, a directory or a link with a specific name, or one that was created before or after a certain point in time.
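    For example (the paths and patterns here are made up):
    # regular files ending in .log, modified within the last 24 hours
    find /var/log -type f -name "*.log" -mtime -1
    # directories named "logs", anywhere under the current directory
    find . -type d -name logs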
  • xargs - despite how it's used, this is a utility in its own right: its whole job is to take the output of the previous command and hand it to the next command as arguments. So, for example:
    find . -name "*.java" |xargs grep --color "searchString"
    will find all of the places where "searchString" appears in any Java file (quoting "*.java" matters, by the way - without the quotes the shell may expand the pattern in the current directory before find ever sees it). Running the same command without xargs will just pipe the file names into grep, so you'd instead find all of the Java files that have "searchString" in their name.
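    Another classic combination - shown here as a sketch, so be careful before pointing it at anything important:
    # delete every .class file under the current directory
    # (for file names that may contain spaces, prefer find -print0 | xargs -0)
    find . -name "*.class" | xargs rm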
  • sed - a simple way to do search and replace. Do read the post linked above for more details. 
  • du - gives information about the disk space taken up by files ("du" is short for disk usage). Most of the time, the only way I ever invoke it is
    du -h --max-depth=1
    This gives the results in human-readable form, and does not dive deeper than one folder down, so you won't end up with a million lines, each indicating a 200Kb file. Very useful when cleaning up space on your hard drive.
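    A close relative I use just as often is the summarize flag, which prints one human-readable line per item in the current directory:
    # one summary line per file or directory, instead of a full tree
    du -sh ./*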
It's getting a bit long, and this post isn't really "all you need to know about the Linux command line", so I'll skip other utilities I use frequently (such as cp, mv, less, vi, wget and some others). Instead, I want to share the story of how I learned - the hard way - the power of the command-line utilities.
I was quite new at work - less than a year after I finished university - and we were doing a security ramp-up for our application. Since we deal with payment cards we are bound by PCI-DSS, so we have a handy security requirements document to refer to if we run out of ideas ourselves. This time we chose to focus on the requirement never to print card numbers to the logs, not even in debug logs. So I set to the task - create a check that would help us spot card numbers in the logs. Sure, why not. We were (and still are) writing our automated checks in Java, and I set out to add another scenario to those tests - I performed some activities with several cards, then went looking for a way to scan the files for card numbers. The check ran, and even found a place or two where we had card numbers printed. The test was a couple of hundred lines long, and covered only a very small sample of the actual activities in our system. I wasn't very happy with the result. When I asked for advice from a more experienced tester on my team, he asked me "why don't you write a script?" So I did. I wrote all of the card numbers we use in our system to a file, and wrote the following script (I omitted some variable definitions here, since they are not that interesting) -
# search for PANs in the logs directory
for fullFileName in $(find ${LOGS_DIR} -type f -follow); do
        # collect each distinct card number that appears in the file
        for card in $(egrep -a -o "${searchPattern}" $fullFileName|sort -u); do
                echo $fullFileName $card >>${resFile}
                # keep a copy of the offending log, with the PAN as a suffix
                filename=`echo ${fullFileName}|rev|cut -d\/ -f1|rev`
                cp ${fullFileName} ${resDir}${filename}_${card}
                wasFound="Y"
        done
done

### SEND E-mail#####

if [ "${wasFound}" = "Y" ];
then
        (echo -e "PAN was found in logs in machine: ${HOSTNAME}.\n copies of the logs are kept in ${resDir} (copies are kept as <fileName_date>_<PAN>). \n\n PANs found are:\n";cat ${resFile})|sendmail myemail@workplace.com;
        echo "PAN found"
fi

Basically, what I'm doing is looping over the files and searching each one for card numbers (PAN stands for "primary account number"). I then copy each of the offending files, adding the card number as a suffix to the file name, so that I'll know what to look for when investigating.

Do you see how short this piece of code is? And it covers a whole lot more ground than the automated check I wrote in Java, which was several hundred lines long. Deleting the Java code was a strong lesson (and, to my surprise, I really enjoyed deleting a week's worth of effort).

So, if you are not familiar with the command line - invest some time in getting to know it a bit. It will probably open a whole new world of options for you. 


P.S. 
If you are using Windows, you probably don't have access to the Linux commands. From what I hear, PowerShell is just as powerful.






Tuesday, August 23, 2016

Why go to a conference?

So why attend a conference?

A bit over a week ago I returned from CAST, and besides having a great time, I also think I got real value out of the experience.
Still, conferences are expensive, and there are plenty of talks online anyway, so why bother at all? In this post I want to gather a few points that might convince you to go as well. But first, it's important to say that not all conferences are alike, and you don't want to attend all of them ("you", in this case, being my intended readers - people working in software development. If you have something to sell, you may well prefer one of the conferences that, as far as I'm concerned, don't deserve a second look). One division I've come across that makes sense to me can be found here (for those too lazy to read, or too lazy to read in English, the division is between conferences aimed at a community of practitioners and those aimed at the commercial market, where the talks are half advertisement. The perks speakers receive can serve as a significant hint for placing a conference on that scale). But, assuming you're going to the right kind of conference, what can you actually get out of it?
  • A chance to hear about a variety of interesting subjects. Yes, true - there are plenty of interesting subjects online, with wonderful video quality and the ability to watch whenever you feel like it and filter for only the subjects that interest you. But when was the last time you watched something on a computer without interruptions in the middle? Without the temptation to leave the talk playing in the background while doing three other things? That happens less with real people. Another advantage of conference talks is that sometimes you walk into a talk just because you're there, or because everyone else is going in, or because you want to hear the speaker, and discover a surprising talk. That's what happened to me at the opening keynote of the second day of CAST.
  • A chance to ask questions - heard a talk by an expert in a field you're struggling with? Didn't understand something in a talk on a subject new to you? Have something you think is important to say? If you're at the conference, you can stand up and ask. If the talk is recorded, your comment will be in there for the viewers' benefit. You can also catch the speaker for a short chat afterwards.
  • There are plenty of interesting people who don't speak at the conference but do attend it. Sometimes you'll find yourself talking for an hour or two with someone you just met about a subject that interests you both. Or you'll meet someone who is an expert in an area you're struggling with (and now that you've made contact, you can send them an email with a short question or two, or even hire them as a consultant if they do that sort of thing).
  • Conferences have workshops too - the advantage of workshops is that you get to practice the new ideas. Workshops usually won't be recorded, and even if they are - participating in one is far more useful than watching it. You don't really learn much from watching alone.
  • You learn what's happening at other workplaces. We have (or at least I have) a vague feeling that we know everything, and that what we see around us is what everyone does. The reality is quite far from that - some people do similar things, some are far beyond the place we thought we wanted to reach, and others are where we were five years ago. It's a chance to learn what's going on there, and what they did to get where they are (and if it's a local conference, maybe you'll also want to know whether it's a place you'd like to work at). Or you might just happen to hear about some tool people use that you like.
  • It's a great energy boost - the daily routine can sometimes wear you down, and if you're the only one at your workplace showing interest in a particular field (others may be quietly interested without you noticing), it can get tiring. At a conference you'll find people who care about the conference's subject. Being in the company of other people who care adds a ton of energy.
  • It doesn't hurt your humility either - if you're the loudest person around showing interest in the field, it's very easy to get confused and think "I'm the best at this". At a conference you get the chance to see people who are just as good (some will be better), and to learn from them, or at least absorb a quick lesson in humility.
  • It's a chance to promote yourself - not everyone speaks at a conference. In fact, most people don't. Still, conferences provide a place to present yourself to the other participants - some conferences offer a stage for spontaneous talks from the audience: these can be lightning talks, an open space, or that activity whose name I have no idea how to translate - lean coffee. Plenty of opportunities to speak. You can also simply ask good questions during talks, or approach someone at lunch and introduce yourself.
  • It's fun - better not to forget that either. There will usually be informal (or organized) gatherings of conference participants after the conference, or a joint trip the following day, and even during the conference itself - the people around you are having a good time, and it's contagious.
  • You'll have something to tell back home - you went to a conference? Great. If it was a good conference, you heard at least one new idea you can tell people about at work. That can improve the way things are done where you work, or at the very least build your reputation at work a bit, which matters too.
So how do you find a good conference? You start getting involved in the community. You read some content on blogs or even on Twitter, you show up at meetups (the significant advantage of a meetup is the low cost - the gathering is usually free, and all it takes is a few hours in the evening) and you start taking an interest. At some point you'll hear about someone who went to a conference, or is planning to go to one. Or is organizing one. Or you'll see recordings of talks from previous years.
-------------------------------------------------------------
A bit over a week ago I got back from CAST, and I had a great time. I also think I benefited from the experience quite a bit. While a post summarizing my general thoughts on the conference is still to come, I want to dedicate this post to listing a few reasons why everyone should go to a conference. Because, to be honest, going to a conference is not an obvious choice - it involves work-related stuff outside of work and traveling to faraway places (unless the conference is in your city, you'll probably have to get somewhere farther than your office), and even relatively cheap conferences that happen to be next door are still a significant expense. Besides, there's a ton of free material out on the internet, including talks from conferences. Moreover, unlike a course, conferences don't usually come with a defined "what do I get out of it" list that can be used to sell the idea to your manager (who might pay for the event) or even to yourself.
First of all, I want to state the obvious - not all conferences are alike, and you don't want to attend all of them (also, by "you" I mean my intended audience - software development practitioners, testers or otherwise. If you sell something, you might prefer the conferences I would tag as "don't bother"). One way I've seen and liked to classify conferences and decide whether to go can be seen here. For those of you who want only the short version: the division is between practitioner-community conferences and almost-a-sales-pitch ones, and the compensation received by speakers is a good hint.
So, assuming you go to the right type of conference, what do you get out of it?

  • A chance to hear about a variety of interesting subjects - yes, there are even more interesting subjects out there online, some in high-definition video, and you can watch them at your own time, rewind, skip and see only the things that actually interest you.
    But! When was the last time you actually sat down to watch something on a computer uninterrupted? Or without doing three other things at the same time? With real face-to-face encounters you have fewer of these distractions. Another advantage of conferences is that you might go to a talk you didn't consider to be in your field of interest, just because it's there, or because everyone else is going. This talk might end up opening your eyes to something, or "only" be interesting. This happened to me at the opening keynote of CAST's second day.
  • A chance to ask questions - attended a talk by an expert in something you're struggling with? Didn't understand something said in the talk because you're new to the field? If you are there, you can ask. If the talk is recorded, your question will be there for future viewers. You can also catch the speaker for a longer chat later, maybe over lunch.
  • There are a lot of interesting people who get to conferences and don't speak - in fact, at any given conference, chances are that most of the participants are not speakers. You might find yourself spending an hour or two talking with someone you just met about a common interest, or you might meet an expert in a field you are struggling with (and now that you've met, you have an open communication channel and can ask that expert for help later, or even for some consulting work if they do that sort of thing).
  • There are workshops - the cool thing about workshops is that you get to actually try out what you are learning. Workshops won't usually be recorded, and even if they are, participating in one is much more useful than watching it.
  • You get to learn what is happening in other places - it is very tempting (at least for me) to think that we know everything, and that everyone is doing the same things we do, but the reality is quite different. Some people are doing similar things, others may be far behind us or far ahead of us, while the rest simply have different conditions and need other things. Hearing that such things exist is quite different from actually meeting someone who is facing a different kind of challenge.
  • A great energy boost - chances are that if you're reading this, you care about testing. In that case, you can probably count on one hand the people around you who visibly show the same enthusiasm. Add a smidgen of daily routine, and you get the sort of wear and tear that tags along. Attending a conference where almost everyone shares your enthusiasm is very energizing.
  • A humbling lesson - if you happen to be the only one who seems to care (see the previous bullet), it is quite easy to fall into the fallacy of "I'm the best around here", which isn't a great motivation to keep learning. At conferences you will meet people who are at least as good as you perceive yourself to be (some are better). Seeing them in action, say during a workshop, can remind you that there's still a lot to aspire to, and you might pick up some pointers as well.
  • A chance to promote yourself - not everyone presents at a conference. Yet conferences are a great place to present yourself, make yourself known to other professionals and build a reputation. Conferences today offer a wide variety of opportunities to make your voice heard - lean coffees, open spaces, lightning talks and so on. Even if there's no formal spotlight for you, just chatting with some people makes your voice heard a bit.
  • It's fun - usually there will be a plethora of events after the formal conference. Some of them will be pre-organized, some will be spontaneous, or you might just find yourself in a pub with a small group of people you met at the conference. And even just attending the conference is really fun - people around you will be having a great time, and it's contagious.
So, how do you find a good conference? You start getting involved in the community (or rather, in a community). You read some content in blogs, maybe even on Twitter. You attend a meetup or two (the benefit of meetups, besides being great in themselves, is that the entry cost is very low - most will cost you only the time you invest in attending, a couple of hours or so), and you keep your ears and eyes open. Eventually you'll hear about someone going to a conference, or wanting to go to one, or sharing the experience of having been to one (in which case you'll have to wait for the next conference). Another option is to watch some videos from previous events.

Sunday, August 14, 2016

CAST and ETC, a quick comparison

CAST is over, and I really enjoyed it. While I was there, I kept comparing it to the European Testing Conference I attended last February. Much of what I saw and experienced was similar, but there were a couple of differences that set the tone slightly apart, and since the two conferences felt so similar to me, I think they can easily borrow ideas from one another, where those ideas match the organizers' vision.

  • Collaborative content - during ETC, there was at least one event each day meant for content that comes from the audience - a facilitated lean coffee and an open space. Both were part of the conference's regular schedule, and were a great opportunity to get to know people and to hear ideas from everyone. At CAST, there was a lean coffee very early in the morning (7:30 AM), and while I think it was a great way to start the conference day, only about 20 people attended each morning. There was also a session dedicated to lightning talks, with relatively low attendance, as it conflicted with other great events.
    It definitely felt to me that ETC was more collaborative than CAST, and not only due to the smaller size (if I have my numbers right, CAST had about 2.5 times as many attendees as ETC). I got my share of collaboration during TestRetreat, where I got to meet some ~20 testers and hear what they had to say.
    I find the collaborative parts of a conference very important. They provide a meeting place to start a personal connection and maybe foster some discussion later on. I also like to feel that I see familiar faces around, and talking for a couple of hours with a group of 10 people does give me that sense of familiarity.
  • Discussion - one of the things separating CAST from other conferences I hear about is "open season". Each talk had built-in time dedicated to questions. Usually, talks are ~40 minutes long, followed by a 20-minute period dedicated to questions from the audience. As an attendee, it is very reassuring to know that there will be time for my questions, and that I don't have to hope the speaker won't take up the whole session time. I also saw fewer cases where people approached the speaker with a horde of questions after the talk. It still happened, but less. At ETC, one of the main challenges I had was that after each session I would come up and ask a question or two, or listen to someone else asking an interesting question that led to a short discussion - and then we were all late for the next event.
    The K-cards were also new to me, and they made it easy to facilitate discussions, even outside of formal talks. Having a way to clearly mark the difference between "I have something new to say" and "I have something to say that relates to the subject we're talking about right now" was very helpful, and somewhat unexpectedly, I hardly saw any red cards being used.
  • Workshops - during CAST, there were three workshops that took a couple of time slots each. At ETC there were two dedicated workshop time slots (one each day) of about an hour and a half.
    For me, the fact that the workshops at CAST were weighed against two events that collided with them (or rather six, as there were three other events in each time slot) made it that much harder to choose a workshop. I'm happy with the workshop I went to, but up until the last moment I wondered whether I should really sacrifice two talks in order to get one workshop.
  • Sharing content - one thing that really surprised me was that CAST does not share the presentations from the conference, nor does it video all of the talks (only those that were broadcast). Being able to watch again the talks I attended, and the talks I missed, adds a lot of value for me. It is also a way to get the message out to people who weren't at the conference.
    (Edit: apparently I was too quick to say that CAST does not share the slides - not sure where I got the impression they don't. Curtis was kind enough to inform me that CAST is collecting the slides from the conference, so I'll be waiting for that.)
    Sure, I get it - the slides are the presenter's intellectual property, and recording all of the talks might give people an incentive not to come to the conference, but it might also do the opposite and make people want to come and actually participate in the conference next time.
  • Retrospective - ETC had one at the end of each day. I really liked the way they did it (using coloured notes - red for something bad, green for something good). During CAST I missed that easy feedback.
  • Getting in touch - one of the things I really liked about ETC is that they opened a dedicated Slack channel for conference communication. There was one centralized place where I could contact just about anyone at the conference, discuss plans (or past talks) and receive online notifications from the conference organizers. At TestRetreat there was a Google group for the same purposes (and it was used efficiently to organize several Sunday activities, a few lightning talks, and at least two dinners). At CAST, however, the closest thing was trying to follow the AST Twitter handle (or CAST2016), which left me feeling a bit left out, since I don't use, nor intend to use, Twitter.
  • Name tags - I think the CAST folks nailed it. At conferences I'm used to name tags ranging from "non-existent" through "a sticker where I can write my name" to, if I'm lucky, "a piece of paper inside a plastic cover that has my name in small print".
    At CAST I got a name tag printed in large letters (so I don't have to stare impolitely at someone's badge while trying to remember the name they gave me a minute ago), with a small pocket to put stuff in. Next time I'll bring a smaller notebook that fits into it. I finally had a convenient place to put a pen (pens tend to fall out of my pockets).

CAST day 2

(Conference report, No Hebrew)
So, it is a recurring theme this conference that Neil manages to get a post written on the same day, despite getting back to his hotel room terribly late. I think he deserves a round of applause.

This day I managed not to be late to the extremely early lean coffee. Going to such an event three times in a row, right after TestRetreat's open space, made me wonder "can I keep coming up with interesting topics?" It really surprised me that the answer was "apparently yes". My mind was blank right up until the event started, and then some ideas surfaced. But even if that hadn't been the case, the discussion would not have suffered, as we had an abundance of interesting topics from other people.
Wobbles in continuous integration - it is sometimes nice to hear about problems that other people are experiencing, especially if they seem to be at a more advanced stage than I am. It helps avoid the unrealistic despair of "why can't I be as perfect as they are", as it exposes the hard work that was, and is, done to reach that more advanced state. The other reason I like such discussions is that they give me some insight into problems I have not yet encountered and will have to consider as I slowly reach towards my desired goal. The problem raised by Katrina was how to deal with CI breaking due to changes made by multiple teams. Each team's branch works fine, but sometimes two teams work on closely related areas of the code, and once both changes are merged into the master branch, some tests fail, because the code checked in by team A might fail a test created by team B. I can't really say that anyone around had a definite solution, but I liked some of the questions asked, and I hope we helped by providing an idea or two.
Testing infrastructure - should we test a change to the infrastructure of the code? After being presented with the question and a bit of context, we tried to get a grip on "what is infrastructure in this context", as the word can mean a whole lot of different things. We ended up with a conclusion that was not new to anyone in the discussion - if it matters, test around what matters to you - but I feel that the way we got there did help the person who asked the question find the answer suitable for them.
Training new testers - that's kind of the eternal question, isn't it? Out of the many comments on the subject, I took three that I particularly liked and want to try to remember:

  1. Help testers develop the courage to ask stupid questions - this can be done in a multitude of ways, from setting an example to having 1:1 talks with the new testers and assuring them it's OK to ask questions even if they seem stupid.
  2. Remember that every tester is different - some might lack courage, while others should be reined in to deal with their overconfidence.
  3. Tell them they are in training. Setting the frame this way removes much of the unnecessary stress involved in learning to function in a new place.
How to keep in touch with people we met at the conference? Yes, it was the last day of the conference, and having a way to keep in touch would be nice. There were some ideas, and some of them were already set in motion.
Online lean coffee - should we do this? How? We had a short discussion where we decided we definitely want to try it.
Am I cheating the auditor? When working in a regulated environment, there is an audit from time to time. How much should we disclose to the auditor? Is it ethical to answer directly the questions asked, but not reveal what we know about the product? Where does the line pass between legitimate politics and misleading on purpose?
How to make AST more visible? We really enjoyed CAST, and we appreciate what we received from the community around AST - but it is not trivial to hear about it, and testers who are new to the profession might miss out simply because we do not do enough to be easy to find. While we leave that question to the AST board, and recognize the significant work they are doing, maybe there's something else that can be done. As it happens, I wore my ETC shirt, which has the AST logo on its back, and that was one example of such an effort. We hope this is something that will resolve itself in time - since if hooking up with AST is indeed helping testers improve, there will be more and more good testers around who can attest to it.

That concluded a rather packed lean coffee session in which we covered multiple subjects in a very concise way. I was really impressed by how quickly some subjects reached a point where we felt the discussion was concluded, as well as by the first time I saw a subject get all thumbs-up to resolve it.

The opening keynote for the day was Neuro-diversity and the software industry. This talk felt a bit like a bucket of ice - sudden, sharp to the point of pain, and eye-opening. In this talk, Sallyann Freudenberg shed some light on the situation of people with what is generally referred to as neurological disorders - conditions such as autism, ADHD and even bipolar disorder. She made a very compelling case for the benefit of employing the special skills that often come alongside the difficulties we are more familiar with. What really surprised me was that at a rather early stage of the talk Sallyann told us about finding out that she was on the autistic spectrum (according to some metrics she shared - quite far along that spectrum), which raised the possibility - if it is possible to be on the autistic spectrum without even knowing it, and if I relate to some of the examples I'm shown - could it be that I am on the spectrum myself? One thing I can tell you - this is a very good way to keep your audience interested and involved. The talk revolved around points to consider if we want to be open to that sort of diversity in our workplace - from major changes that involve construction and environment changes, to simply making it very clear that it is fine to take a 15-minute break every now and then to relax.
I would be surprised if there was a single dry eye during the entire talk, and it also left me thinking - am I sensitive enough to identify such situations? Can I act in such a situation to help such individuals better fit into the team I'm in? I think this talk is the type of thing that makes going to a conference worth that much more - I would probably not watch a talk on this subject in my own free time (since it isn't about testing or a subject that remotely resembles any of my fields of interest), but I'm very glad I got to hear it.

The next talk was dubbed "Lessons Learned in implementing exploratory testing", which was quite what the title promised - some stories where implementing those cool ideas we hear about in conferences simply failed. Nancy spoke about the importance of having an actual, actionable plan before trying to push for a large-scale change, and about not overloading people with too many new ideas (such as the entire RST course for those who are used to working the "old" way of running test cases).
I also liked her idea of what a test manager should do - create visibility of the testing process and status for the powers that be, so that everybody knows what the testers are doing, and what takes them so long (this one is to fend off the "testing is our bottleneck" claim where it is not actually the case).
One thing I didn't like is that in some parts of the talk, Nancy sounded a bit like someone who has seen the light - no longer shall we write test cases, but rather use mind-maps and all those cool RST tricks for testing. It sounded a bit like one can do whatever they want, as long as "those old ways" are thrown out the window. This led to one of the main concerns in the talk, which to me sounds like a concern that shouldn't exist - how to make the sort of testing now adopted in the test department remain even if Nancy leaves her position and goes elsewhere. The way I see it - as long as they are good testers, and good testing is done, it really does not matter if people revert to the ways they are more comfortable with.

Next I attended a group discussion about "engaging people back home", with the question of whether we should actually make an effort to foster professional development in testers (the general feeling was "yes", as we do see the value in it, but that's really preaching to the choir), and how to actually make that happen. The main theme was "give people time on the job to learn", with one participant telling about a designated time each week (4 hours, if I recall correctly) where no meetings are scheduled and the whole time is used for learning. This sounds much better to me than the "we say we give you 10% of your time to learn, but then we overload you with so many pressing tasks that you don't actually get to use those 10%" that I see in some places, including my own.

After this interesting, yet frustrating, discussion, I moved on to the lightning talks session.
The subjects were quite diverse, and each speaker got a doll of Tina the Test Toucan (which means I now have two of those - the first I got when I registered to AST at the European Testing Conference).

  • Can't we all get along - a short talk about being less toxic and violent in online discussions - should we actively reprimand those who express themselves in ways we don't accept? Or should we stay out of the muck and not feed them with more attention? Should we defend those we see unjustly attacked?
  • Check list for test readiness - a list of things worth fleshing out before actually starting to test - be it a test plan or simply having all of the requirements in one place. While not everything was 100% applicable to my environment, just going over it opened my eyes a little.
  • Precognition at work - this talk put me at a certain level of discomfort. The speaker shared with us a "trick" to gain credibility and reputation - write down each time you predict a problem that might arise from taking some sort of action (e.g. "if we start by building the dev environment before building the test environments, our testers will have no environment to work on for at least a month"). Then, when the prediction comes true, make sure to remind people by telling them "I think I found something in my notes about this...". What really bothered me was that there is a very strong incentive to do this only in cases where I was right, and leave aside the times I was wrong - thus creating the false appearance of "the doom prophet who's always right". It seems quite effective, but very misleading.
  • Three talks about automation:
    • Constructive objection to UI automation - a reminder that automating the UI is quite expensive and we should try not to do too much of it (the message was to avoid it where possible, but that's too harsh for me).
    • Automation is dead (to me) - Richard Bradshaw gave a quick talk about his signature subject - automation in testing. What I really liked about the way it was presented, besides demonstrating how a small shift in focus can mean a lot, is this: when I see "test automation", there's almost an underlying (and incorrect) assumption that test automation is an inferior programming task (I even wrote about it not long ago). The way Richard presented automation in testing made it very clear that in order to achieve this goal, one should possess proper coding skills. It does not exclude writing a quick & dirty script to save time on a repetitive task, but the "let's see what tasks we can automate" approach assumes a much more capable programmer1 than trying to automate test cases, which sometimes leads to "testing tools" that help in automating a very limited set of actions (record & playback tools are the most obvious case, but even stronger tools such as SoapUI, which is quite powerful, still do that). When a capable programmer is tasked with "testing", not only test cases become targets for automation, but also setup and teardown of environments, monitoring, and various tools that aid manual testing.
    • Fully automated regression testing at scale - quite a big name for an important topic - we sometimes face the expectation to "automate everything" (or, by its more common name, "100% automation"), where what we should really do is identify the needs and risks the automation is addressing. The comparison I really liked was food, water and air - a person can go without food for a few weeks and still live, about three days with no water, and maybe 20 minutes without breathing. Automation should address the most urgent needs, which will not always be the same as the task that was specifically requested.
  • The advantages of being pushy - I spoke a bit about my experience in not accepting a "no" and nudging things to go my way in order to gain some credibility (and I did have this in mind when I chose the title). In short, I found that one of the things that makes the test team important in my team is that we get involved in just about everything around us - we ask about design and offer suggestions when we feel it is appropriate, we ask to be included in meetings we were not originally invited to (would it surprise anyone that there are fewer of those than there were 4 years ago?) and we seek to contribute to the product in areas that are not strictly about testing.
    I think that such behavior, where we get involved in a lot and share the information we gain, has a significant part in making the testers in the team visibly involved and relevant. Or, as one developer who transferred to our team once told me - the testers are really strong in your team.
Following the lightning talks I briefly considered joining the 2nd part of the workshop about "fostering healthy uncertainty in software projects", but as I had already missed the first part, and after glancing over the group's notes from it, I decided to go to When You're Evil: Building Credibility-Based Relationships with Developers, which wasn't about being evil at all. Instead, it pointed out that when we as testers drive for a change, we actually rely on the credibility we have inside our team - or rather, on how much credibility we have with the specific developer(s) we will work with. This credibility has something to do with consistent good performance, but it has much more to do with being considered "part of the team" and not an external force to resist. Some of the tactics for getting on the developers' good side can sound a bit insidious at first, and a little manipulative, but after a while it sank in that most of it really comes down to demonstrating that we are on the same team. It sounds horrible when someone says "in order to be on someone's good side, listen to them talking about things they care for", but is it really different from "when you work with someone, show some respect and interest in their hobbies"? The former is a bit manipulative, the latter is similar to the tips you might find in any "how to foster team collaboration" article online. Still, both are really the same - is it bad to acknowledge that being part of the team does have some very immediate benefits?


And suddenly, the conference was over. A very sharp change. I did my best to say goodbye to the people I had met during the conference (for those I didn't catch, I do plan on sending an email, once I deal with the terrible jet-lag caused by flying over 10 time zones) and went back to my hotel to get at least some sleep before my 17-hour flight back home.

So, that was my conference in very fine detail. I still need to think a bit about the general picture and process some of the takeaways I noted during the conference, but all in all - a very good experience.



1 Following this post I want to bring some order to the terms I use - for me, a programmer is anyone who can code. I use "developer" to denote those who chose programming as their career path, and personally I identify as a tester. Since I'm a tester who also writes code, I'm a programmer, despite not identifying as a developer.

Thursday, August 11, 2016

CAST - day 1

(conference report, no Hebrew)
Day 1 of CAST is over, and it has been a while since I had such an intensive day.
Starting the day with lean coffee was as exciting as it was the day before, with no less interesting discussions. I was actually a couple of minutes late to the session, having just managed to squeeze in the post about the tutorials day (a feat I did not manage to repeat, for reasons that will become clear at the end of this post). Neil, by the way, somehow managed to get his post written by the end of that day.
We covered a smaller number of topics during the day's lean coffee, which is a good indication that people were involved in discussions that interested them (or rather, us), so more time was spent on each.

  • What to look for when interviewing at a company - are there red flags? Warning signs that mean it might be better to move on to the next company? For me, it was really interesting to see the cultural difference between what I am used to and what appears to be standard at other places. For instance, I would never have thought that "meeting the team" is something you would expect at some phase of the process, but someone even spoke of joining a team lunch after passing some of the interview phases.
    There were also some nice, useful things that can be applied in almost any interview:
    • Ask "what do you think needs to be changed?" - if the interviewer answers this with a never-ending list, that gives you quite a bit of information. If an interviewer says everything is perfect - that also tells you quite a bit.
    • Watch for conflicting information from different interviewers (the example given was "we do a lot of unit testing" vs. "what's a unit-test?") - that, too, tells you something about the place.
  • Are blog posts good sources of information? The discussion here started, I think, from the angle of supplementing a book with a blog, or comparing the two, but evolved a bit into "how to find blogs" and how trustworthy the information in those blogs is. We spoke a bit about the authoritative stance of a written piece of text, and how the comments thread affects it. We also found out that (not very surprisingly) most people would rather read a blog post than read a book, which is a heavier investment. People still read books, but the vast opportunities in blog posts give us the chance to check out new subjects with a relatively low investment. It was noted that some blogs read like a very long sales pitch (such as a vendor demonstrating how their tool is the right one for the mission they just invented), and those were generally disliked.
  • Establishing a QA department - this is one question I hear asked a lot, and I imagine that even after hearing this discussion and some others, if I were to find myself in a position to set up a testing team from scratch, I would be inclined to ask the same. We spoke a bit about how what should be done changes according to factors such as the test team's size and the organization's needs, preferences and maturity. I also had the chance to add an idea I got from Joel Montvelisky: it is useful to view the test team as a vendor of information, and we should wrap that information according to the needs and wants of its 'customers' - so knowing how the different stakeholders want to see your information is important as well.
  • We concluded the morning with a short notice about the emergence of a new Twitter handle, @fakejamesbach.
After a short breakfast, one could say CAST had really begun - we each got our name-tags (which came in a very handy case), and the opening keynote began.
The keynote was pretty good and presented an interesting point about the interaction between software and people, and how letting machines do all the work can lead to severe problems, such as a plane crashing because the pilots panicked when given back control in a real emergency. However, I kind of expected more from a keynote, especially an opening keynote. I measure keynotes by the feeling of "a-ha!" I'm left with, and the strongest feeling I got from this keynote was "hmm... an interesting reminder, nicely built". I did get some things to look into, such as the generation effect, and I might even have a couple of extra things to test when I get back home.
However, from that point onward, the day rapidly improved, with sessions that were very interesting to attend.
Carol Brands and Katrina Clokie gave a very nice talk named Babble & Dabble: Creating Bonds Across Disciplines, about connecting with the other functions in our team: dev, BA, even customer support and operations. The talk managed to show two very different contexts, and the similarities between them in how meaningful connections are created. They identified three components that make these connections work well: a collaboration space, pushing information out, and taking information in. To achieve better collaboration, we create a space in which it can happen - it can be a joint project, simply sitting in the same room, or anything else you can come up with that will create human interaction. Then we push information out, telling people what we do, what we can do for them, and what we hope to get from them. The flip side is that we should also take in relevant information from the team we collaborate with.
During lunch I actually got to speak a bit longer with Carol and Katrina, and ask them a few follow-up questions. My main takeaway from this talk is that a good way to connect with other functions in the business is to invite them to see and learn what we do and what we are good at (as a precondition, we need to know that ourselves and be able to show it).

The second keynote, by Anne-Marie Charrett, was a talk I had already heard at the last European Testing Conference (and by the way - super-early registration for ETC2017 is now open). I remembered this talk as being really powerful, and listening to it a second time made that impression stronger. This talk showed me that things that currently bother me at work can be different, even in an environment very similar to my own in some aspects (and very different in others), and that really encourages me to continue my efforts to make my environment better. I also kind of want to compare the two videos of the talk once the webCAST is released. I noticed that hearing the same talk twice (not counting the couple of times I watched it on video) is still interesting - since my concerns and needs have changed a bit, and maybe the focus of the talk also changed a little, I noticed different things, and it also allowed me to shift my focus from "what" to "how".

I then made a tough choice and decided to invest two hours in Janet Gregory's workshop about requirement elicitation. I gained some very interesting tools for thinking about requirements development (and review), one of which is the "7 product dimensions" - a bit crude, yet an effective tool for thinking about requirements. At some point during the workshop I had my small moment of revelation, which in this case was more of a confirmation - the main difference between the process a business analyst uses when defining requirements and that of a tester analyzing requirements to devise a test plan is only a matter of timing - so techniques used in one case can easily be lent to the other. Personas, for instance, are mainly used by designers and BAs, but are very helpful as a test design technique, and state diagrams, which I am familiar with from my testing education (as is any other tester who took the basic ISTQB certification), lend themselves very efficiently to defining requirements. I think this is because in both cases, the activities that best drive them are activities that enhance our understanding of the product, so it is only the goal kept in mind that separates the two activities.
I really enjoyed switching between listening to talks and participating in the workshop, and I'm quite happy with the choice I made - but choosing one event that spans two time slots is a choice that makes me wonder what I have missed. It is a consolation that some of the sessions I missed were recorded as part of webCAST, so I can watch them later.

Let the games begin - promptly after the end of the last scheduled talk (or workshop, in my case), dinner, in finger-food form, was available, as well as some board games that I think were brought by Erik Davis. I ended up playing a game whose goal is to maximize the amount of chaos around the table. It is named Spaceteam, and is a nice game to play for about half an hour (which can be around 5 rounds, as each one cannot last more than five minutes). Somehow, three games in a row, I ended up drawing a card that instructed me to shout at anyone touching the floor until the end of the game. I concluded this with a sore throat.

The ethics discussion that came later was, to my surprise, both interesting and polite. If I recall correctly, there were four topics, at least one of which I can't remember. One of the subjects was very odd to me - what should a tester do in matters that involve public safety, when they witness some sort of misconduct? This subject was odd to hear discussed, since there is only one right action - report the problem, and if it is not addressed, go out and report it to the state or whoever is responsible for such a field. It's bloody difficult to actually do, but when the question is phrased that way, there is no other option. There are some very delicate questions around the borders, though - how can one identify such a case? Should one resign from such a job, or stay and monitor? Those questions were more in the background, not openly discussed.
Another point that was discussed is the tester as representing the interests of the end user. I was very surprised by the unanimous voice sounded by the panel members (or maybe it was an assumption made while discussing the previous topic) - it seemed widely accepted that the tester should indeed represent the user's interests, which strikes me as very odd. I was hired by a company that has one goal - making money. My actions there, to the extent they don't involve illegal or unethical acts, should be in favor of my company's interests. When I act as a proxy of the user, I do so to give my company better visibility of what should be done to maximize the value (and profit) of the product. I don't raise "what the user wants" because I care about it, but because a good product cares to satisfy those wants and needs. If the company has consciously decided to ignore some user wants (by deciding "this is not what we sell; a user who wants that should look elsewhere"), I don't bother mentioning how important that want is (except when analyzing the business impact the decision might have).
The most flammable discussion - certification, and ISTQB in particular - was surprisingly polite and considerate (partially thanks to the great moderation work done by Rich). I don't think a definite conclusion was reached, but a tone was set, people got to vent a bit, and the subject was addressed in a forum less toxic than online media. With some luck, it might help set the tone in further discussions online, and maybe a solution to the problem the ISTQB certification presents will arise (the problem, by the way, being that there is no way we like for formally starting life as a tester - other courses such as RST or BBST are too heavy for new or aspiring testers, are not easy to find from the outside, and do not appear in job posting requirements).

Following the formal discussion, I stayed a bit longer to talk about the idea of certification, the BBST course and related stuff, which was nice, until Neil did the sensible thing and got us to a geek bar, where we played Cards Against Humanity. Some moral lines were crossed, and I got back to the hotel way later than I intended, but we had some fun with that too.

Tuesday, August 9, 2016

CAST - When testing, every day is CHRISTMAS

(Conference report - Hebrew might come later, probably won't)
Today was the first official day of CAST, the tutorials day.
Before I start talking about the tutorial itself (I chose to attend Michael Bolton's testopsies), the day began with lean coffee - where a bunch of early birds (and me) gathered to talk a bit about all kinds of subjects. We started out with a fairly large group for a lean coffee (I think we were about 10 when we started, and ~15 near the end), but we still managed to hear almost everyone (though not as much as I was hoping - as tends to happen in large conversations, three or four of us were more vocal than the rest). Some topics that were very interesting for me came up (and two of them were topics I raised, so yay me!). Two subjects I want to point out were "session based test management debriefing" and "office politics". The first topic was raised by Brendan Connoly (who has a great blog that you should read), who wanted to know whether others in the group had tried SBTM, and how they dealt with the debriefing part, which seems to be very heavy. We spent some time trying to understand what gains he was aiming to achieve in debriefing, and Dwayne said that it's a powerful tool for either training or peer review of your testing. Matt did mention that of the many cases where he encountered claims of using SBTM, none were actually doing it, and the first thing they didn't do was the debrief. Apparently, this is quite a common challenge. Personally, for the purposes stated (letting people know where you are), I find that saying "what I'm about to do", perhaps during the daily stand-up, is usually enough.
The other subject was raised by Neil (whom I mentioned in the last post, and who is by far faster than I am in getting posts written), who wanted to hear people's ideas about dealing with cases where you don't agree with decisions made at your workplace. It resonated with a subject I'm very fond of - finding the best way to work with management to promote my ideas about how we should approach testing.

Then came the tutorial of the day.
Frankly, it was a good session, though I probably have very little to write about it, since it was more of an experience for me and less of "now I know something specific". At first, we split up into small groups and created something to represent "what is testing". The form our group chose (and we weren't the only group to do so) was a mindmap (which you can see here). I had the opportunity to work with Perze Ababa and Steven Woody, and we had a really interesting discussion about "what is testing for, and what does it look like", which, despite being very abstract, was interesting and, at least for me, enlightening. I got a glimpse of how other testers think about their testing, and got to defend (and revise) my own take on the matter.
The next step was to try to define the activities that make up testing. We grabbed Neil to join our group and engaged in a discussion. Suddenly, during the part where we collected the draft into some presentable form, Neil asked "What's a word for 'deep diving' that starts with an M?"
The result is posted here:
So, to put it in Perze's words: every day is Christmas when you are testing.
Most of the points are clear and self-explanatory, but there are a couple where we used words that fit the acronym better than the idea we had:
By Modeling and refining we meant that while testing we perform activities that improve our understanding of the product - investigating a bug is one easy example: we build a model of how the software really works, and when investigating the bug we refine that model and adjust it to reality.
The second part is the "setup\configuration\orientation" part. At the beginning of the discussion we specifically excluded things that happen before and after testing, usually named "setup" and "report". However, we came to the conclusion that even after the setup is done, a test will probably have a part where the tester verifies that the configuration is as it should be, or a part requiring configuration changes (upgrade tests are the easiest example), and sometimes, mid-test, a tester will briefly stop just to figure out "where am I now, and what is the state of the system?"
The best part of the testopsies was, naturally, when we actually sat down to test. I paired with Neil, and it was really impressive - the model in which we paired was "tester \ reviewer", and I first got to see Neil in action. Neil did a very impressive job of testing while speaking constantly to narrate the work he was doing and taking notes on the side, so I could understand (some of) how he was thinking and what he was doing - with the exception that he was talking *very* fast while trying to beat the 12-minute clock. You can watch Neil's session in his blog post.
After the session we talked a bit about some of his choices and techniques; I was impressed by the speed with which he oriented himself, and by how orderly and detailed his notes were.
Then came my turn, after Michael had everyone go back to the list they had compiled and see if it needed adjusting after the test session. We were quite happy with our list (which might not be complete, but is quite convenient). During my turn, I tried to add some tool usage, which we hadn't done much of, and after some very short 12 minutes, I think I managed to do a semi-decent job. Neil then surprised me again when he kept a scorecard of minute-by-minute timing and marked which part of CHRISTMAS was applied during each minute (picture in his post), from which he drew an interesting conclusion about the difference between "orientation time" and "hands-on testing" time.
That's about it - while I would have enjoyed some more iterations of actual testing & feedback, I found this workshop fruitful and fun.
In the evening I went to the Excelon house, where we talked some more and I got to meet Joshua, and then we played what is probably the longest Munchkin game I've ever played (just a bit over 2 hours, I think). While it was a bit late when we were done, it was a nice way to close the evening.

Sunday, August 7, 2016

TestRetreat 2016-Vancouver

(English, Hebrew might or might not follow at a later time).
-------------------------------------------------------
Edit: Some others are blogging on the same event, so far, I like their posts better than my own:
Claire's blog
Neil's blog
The AST blog (it's Claire again, but from a completely different angle)
-------------------------------------------------------
So, today was my first day at CAST - or, actually, at TestRetreat, organized by Matt Heusser from Excelon. As I was hoping (but didn't know whether to expect), I had a great time and got to meet some great people. I can even remember the names of some of them, so yay me! The day began with some tea & pastries (there was also coffee, I think) while we got to know people just a bit. As expected, those of us who had some prior familiarity with each other had it a bit easier than the rest, but since everyone was very friendly, it wasn't as awkward as getting to know a whole bunch of people can sometimes be.

  • After that, there was a series of lightning talks about various ideas - starting off with Matt talking about "frames". The idea, if I got it right, is that everyone interprets reality through their perspective, feelings and character, and projects this when communicating with others. Those frames are often in conflict, and people, without noticing it, tend to accept the most dominant frame and try to work within it, when sometimes it is better to just project your own frame more strongly. For instance, Matt used the example of someone saying "I wasn't talking to you" as a way to silence someone - by doing so the speaker implies a set of rules that make it inappropriate for that someone to push into a situation uninvited. A way to project a frame against that would have been "right, but I was talking to you" (I think Matt used the term "holding a frame"). This also works in less confrontational situations, such as a project manager setting expectations that do not match the team's perspective (or, in Matt's case - the consultant's way of working). 
  • Following that, I was called on stage to share my thoughts about the way new tools & skills open our eyes to new testing options. It's an idea I'm really interested in developing, to see where it might lead. Currently, all I have is this notion of "new capabilities make my testing better; I really need to find a way to actively search for new capabilities that I don't know I'm missing". 
  • Claire raised an interesting question about using personal judgement to decide when to go against what you are being told to do as a tester and do what you think is right. 
  • Shachar (who was born in Israel, despite growing up in the US) spoke about maintaining your personal reputation. One point I managed to take from one of his examples was that declaring clearly what you do not intend to do can sometimes be very important to keeping up your reputation. 
  • Miranda spoke about conveying the value of testers, especially in an agile world. Two points that remained with me were bringing a different perspective to the table, and bridging the gap between talking about what we build (which developers are really good at) and talking about "what we should be building" (which developers tend to be less good at). 
  • Eric raised the question of "what can we do to enable experienced testers to adapt to the changing world around us?". While my initial response was a bit Darwinian (adapt or die), I think that there might be some things that the testing community could do (and should do) in order to keep those experienced minds around and benefit from the skills they acquired that are still relevant. 
  • Ash took that talk and built a great question upon it - the software creation world is changing; should the definition of testing expand alongside it to encompass more than the traditional testing position? 
After the lightning talks, we had an open space where everyone who wanted to could raise a subject, and we had six rounds of sessions (3 before lunch, 3 after). Choosing between talks was really difficult, to the extent that I found myself regretting suggesting a talk\discussion, as it was impossible to find a slot that didn't have at least one talk I really wanted to go to. In the end I went to a discussion about how to improve the contribution of a non-coding tester who moved into the developers' room and is moving into agile. Despite the fact that I went to this talk to try and help someone in need, I found some of the advice really interesting. 
Next, we had a discussion on "who should be automating what?", which was a title I gave in order to examine whether there should be a role for a person whose sole responsibility is creating test code. The discussion was quite interesting, but I think it derailed a bit into speaking about what automation looks like in each of our teams.
Then came a combined session on creating a global reputation with a bit of community building - we were a small group (4 or 5 of us), but the discussion was fairly interesting, and when the buzzer hit for lunch, it felt way too soon.
After lunch (which was nice - we even got a beer and had some nice discussions that I have no recollection of, besides feeling they were pretty fun to have) we did a quick threat-modeling intro session (I really want to polish this one), followed by a massive conversation about types of failures we've all seen at some point and might want to be more wary of in the future.
The last formal discussion I took part in was a question I raised - how to wake up testers back at work? How to make sure that the company culture encourages learning and developing testing skills? I can't say that I know what I will do when I get home, but I do have some processing to do.
There were also some sessions I really hated missing out on - from Natalie's session about Linux command-line tools (some of them I know and use a lot, others less so), to Matt's session on "interviewing testers" (he had a better name for that), "you can't outsource trust" and "the skillset of tomorrow's tester".

Following that packed day, we went on a dinner cruise, had the opportunity to watch Matt's kind tomfoolery, and I got to know some of the people around a bit better.
So, all in all, a great day.

And, on Monday, CAST begins!

Friday, August 5, 2016

Page objects are not enough - Pt. 2

Disclaimer - code post, hence, English only.

In a previous post, I mentioned that page objects are not enough, and introduced the concept I call "flows" (I can't take credit for the name - it was already in use when I first got to work). The idea, basically, is to add a layer between the tests and the page objects that will be responsible for complex operations that are not in scope for a single page object.
However, the flows have some limitations, and while better than using raw page objects, they can still be inconvenient, to the point where we said "there must be a better way to do this". 
The main problems we were seeing were: 
  • We used static flows, which meant that we were sending a ton of parameters to each flow - which suffers from readability & usability issues similar to what can be seen in telescoping constructors.
    Essentially, a call to a flow would look like this:
    PurchaseFlows.checkout(driver, reporter, baseUrl, cardDetails, null, null, false);
    Where the function signature is:
    public static void checkout(WebDriver driver, Reporter reporter, String baseUrl, CardDetails cardDetails, SHIPPING_OPTIONS shippingOptions, String discountCoupon, Boolean shouldAbortPurchase) {...}
  • Code duplication
    Every now and then we wanted to do "this flow, only change this little thing" and this tiny change, in the middle of the flow, forced us to create another flow (or, when we were lazy, to add another parameter to the flow, which will be null 95% of the times it will be called, as in the previous code snippet).
  • Multiple functions doing the same business action. It's partially connected to the previous point, but using the naïve version of flows will end up with many flows that are similar in terms of business logic, but different in implementation (for instance: "purchase a book and register" is similar to "purchase a book with a registered user" and "purchase a book without registering").
  • Ugly tests.
    Since we had several flavors in our system, writing a test that will simply perform a single purchase looked like this:
     if (isCardSmsEnabled) {
         SMSFlows.checkout(driver, reporter, baseUrl, preferredPhoneNumber, otp, null);
     } else if (isPurchaseWithPassword) {
         PasswordFlows.checkout(driver, reporter, baseUrl, password, null, false, null);
     } else {
         NonRegisteredFlows.checkout(driver, reporter, baseUrl, cardDetails, null, null, false);
     }
    Sure, we could put this logic in a separate method, except that then we would have doubled the number of parameters that should be null (why would an SMS-enabled purchase need a password?).
  • Refactoring is painful.
    When we began, we made the mistake of sending the username & password as strings. At some point that didn't work anymore and we moved to sending a "User" object. Now I had to go over each and every flow and change them to support this new behavior - and it took me 3 days of knowing that I could have been doing something useful with my time instead of this partial refactoring. Why partial? Because I didn't go and change all of the calls to the flows - if there were more than 5 calls to a flow with username and password, I just left it lying around, which I could do only by overloading the methods to support both calls. 
So we clearly needed to find a solution. We wanted this solution to be simple to use, flexible and as future-proof as we could get it. Each of these properties came from a pain we experienced:

  • Simple to use - this includes both not having a large number of parameters and not having to worry about the different flavors in our system. It should be "write once, run with all configurations". The pain here is described in the 1st and 4th bullets above. 
  • Flexible - we should be able to change the "default" behavior in a test without too much of a fuss and without causing ripples that will affect other tests. The story behind this was the one that made us realize our flows solution was not good enough anymore. We had a new timeout feature: after a certain time from the purchase start, if the purchase was not completed, end the session and fail it. Now, imagine the situation we were in: the flows didn't have any notion of "wait", and each flow was passing through anywhere between 1 and 4 screens on which we wanted to wait a while before submitting the page. In the flows world, the choice was between sending a complex "sleep in step X for Y seconds" parameter or duplicating the flows to create "TimeoutFlows" that would have a method for each of the waiting places. Either way - yuck!
  • Future proof - the idea here is to avoid two kinds of problems: when the application changes in a place shared between multiple flows, we want to make the fix in one place only, not in each flow; and we wanted to lower the cost of refactoring, even if we change a method signature. 

The solution we came up with is simple to describe but complex to implement, since what we did was take that awful complexity that is part of our product's business logic and hide it elsewhere.
It has the following parts:
  1. Commands - each step that we consider a single action (clicking "next", filling a form, validating a value against the database, you name it) is encapsulated within a command. You can check the command design pattern on Wikipedia, but the general idea is that it doesn't matter what lies beneath the surface; externally, only an "execute" method is exposed. We cheated a bit and have two methods ("run" and "runCancel"), but the idea that every command exposes the same interface still stands.
    example:
    public class FillPasswordCommand implements ITestCommand {
        private final WebDriver driver;
        private final IPasswordContext passwordContext;
        private final IStepResult result;

        public FillPasswordCommand(WebDriver driver, ITestContext context, IStepResult result) {
            this.driver = driver;
            // see the explanation for this cast below (context chameleon objects)
            this.passwordContext = (IPasswordContext) context;
            this.result = result;
        }

        public IStepResult run() {
            PasswordPage page = new PasswordPage(driver);
            // skipped some verifications to keep the example short
            page.fillPassword(passwordContext.getPassword());
            page.clickNext();
            result.addScreenShot(driver, "after filling password");
            return result;
        }
    } 
  2. Context chameleon objects - This part is a bit odd, and the reasoning behind it is that we had the following conflicting requirements:
    1. The context should contain every bit of information that any command might need, now or in the future
    2. The context object shall not have too many methods (the idea is to make efficient use of the IDE auto-complete functionality, which won't be very helpful if you have over 100 methods)
    3. There will be only one context object that will be used to create multiple commands. 
    The solution we came up with was to have multiple context objects, and then combine them all into some sort of a Megazord (for those who have failed the age\culture test - a Megazord is the giant robot resulting from combining the Power Rangers' personal robots together). The object can then be cast to represent any of the underlying objects. So we might have "IUserRegistrationContext", "IDbConnectionData" and "IPurchaseContext" all bundled together. As our actual code is neither short nor self-explanatory, I won't include it, but the idea of what we did is as follows (a minimal sketch appears after this list): for each type of context we wanted, we created an interface (the "I" at the beginning marks an interface); then, when we want to merge a couple of these, we create a Java proxy object that answers for all the interfaces implemented by both contexts. The InvocationHandler just holds the two contexts and redirects the calls appropriately. This was also the first time I looked at a Java code example and did not understand what I was reading1. All of the contexts are initialized in the setup method of our base test, and each specific test needs only to change the relevant context values that matter to it (plus, the defaults match most of the use cases, so a test won't need to change many parameters).
  3. Commands runner - this entity holds a chain of commands and is responsible for running them one after the other (a simplified sketch of this, too, appears after the list). In case I want to change something - say, click "cancel" after the 3rd screen I see - the runner is the one responsible for doing that for me. I want to sleep 2 seconds between commands? The runner again. I want to add a specific command somewhere in between the existing commands? That I will have to do before calling the runner to execute the steps.
  4. The Flow Factory - remember that I said we hid all of the complexity in another place? Well, this is that other place, or at least most of it. This part returns a runner with the chain of commands built inside (a rough sketch of it closes the list below).
    Here we read the context objects, build new command instances in the right order and return them to the test. Since the logic it encapsulates is complex, we have broken it into ~5 different classes, just to keep things readable.
    How complex is it? Well, when we started, we created a decision chart. It now has some additional nodes that make it just a bit more fun. (It is redacted, since I don't know how much I can share, so I left the interesting questions out, but the decision-tree structure remains to illustrate the inherent difficulty.)
    Using the factory, on the other hand, is really simple. Here's an example:
    userActionContext.setUserAction(ACTION.FAIL);
    purchaseFactory.createPurchase(context).run();

So, a short summary of the test commands: the tests send a context to a factory in order to get a runner that will execute the required actions. And that's it.
What did we gain from this construct?
Well, quite a bit:

  1. The test does not call a method with fifty parameters, half of which are null. We could have gained most of that by using non-static flows, but I feel this works better in this aspect as well. 
  2. All of our tests are now configuration-oblivious (to the extent that the business logic does not change according to those configurations) - we don't have to worry about picking the correct flow. 
  3. We have the ability to intervene in the middle of a chain without creating a new flow - so no code duplication. 
  4. Adding new behaviors is actually easier & faster - since every part of the chain creation is isolated, we don't need to create the whole flow from scratch (or, as was common - copy, paste & edit); we can just add the needed code at the right point. For example, when we added a new challenge (we had password & SMS, and wanted to add another one), all it took was adding the code that deals with the new screens to the switch statement dealing with the challenge type - and did I mention that all of our tests now supported this new challenge? This is really the point where I wanted to shout "presto!"
  5. Writing tests got shorter - to a third. Not "by a third", to a third. It also enables us to focus our attention on the important stuff being developed instead of making sure our tests are compatible with the multitude of flavors our product has.

As you can see, there's quite a lot of work to get to the point where the commands are working, and it might not be intuitive at first. It has some advantages over the flows implementation, but those advantages do not always outweigh the drawback of the high initial cost. So, when to use what?
If your application has a small number of atomic actions (by that I mean "things a user would consider a single action"), and they are strongly distinguished from one another - flows are probably OK for you. If, however, there are a lot of similar actions, or they change rapidly - commands are probably better. Currently, we are considering a slightly different approach for dealing with situations where a user performs more than one action ("go and buy something" is one action, but "check the user history, then unlock the user account and reset the password" are three separate actions) - the concept of the commands will probably stay, but we are considering replacing the factory with a builder. The difference is that with a builder we could do something like:
builder.login().checkUserHistory(user).unlockUser(user).changeUserPassword(user, newPass);
But we'll have to wait for a trigger to start working on that (a rough sketch of the builder direction follows below) - implementing such a solution would not be short, and, just like everyone else, we have more improvements we want to make than time to implement them.
I hope you'll find this idea useful. If there are any questions (I did try to explain what we do with the commands, but I feel it might not be as straightforward as I think it is) - don't hesitate to ask.



1  Reflection in Java, and proxies in particular, can be a bit confusing when you first encounter them. If you are as confused as I was, all you need to know is that a Java proxy has two parts: a list of interfaces that it fakes (and will answer true for "instanceof" queries), and an invocation handler, which is the part responsible for actually doing something when a method is called. It can be as simple as returning a null value, adding a delay, or counting the number of times each method was invoked on this specific object, or it can be as complex as you would like it to be. 

Thursday, August 4, 2016

Do software testers need to learn automation?

As a software tester, do I need to learn about automation?

English first, as this is (sort of) a response to something I saw.

The answer to this question is NO. In fact, I would say the answer is "don't you dare do that". Instead, learn to program (or, if you insist, to code). 
I encountered this post, and at first I thought to myself: "great, here's another one who calls test programming 'automation'". Then I saw this: "You don’t need to complete all of the lessons unless you want to. You’re not trying to become a developer, you need to know ‘just enough’ to be getting on with for now."
And this is why I don't like the "automation skill" talk. A test code project is a coding project in every aspect, and taking part in it does require being a decent programmer. Otherwise, you'll be digging yourself a hole that will be quite difficult to get out of - it could be poor infrastructure that will bite back in 6 months' time, or weak automation that only moves the browser around, since this is all you've learned. Heck, it might even be that you need to write a multi-threaded test, only you have never before encountered the concept of threads, since you were learning "automation" instead of programming. 
So yeah - programming is not difficult, and every tester would benefit from having some programming skills. Just stop blinding yourself by calling it "automation" and regarding it as second-rate programming.

---------------------------------------------
The answer to this question is a categorical no. Nobody has any reason to learn "automation". 
Instead, it is better to learn programming. 
I happened upon this post, and at first glance I told myself "well, here's someone writing 'automation' when she means programming. Fine." But then I got to a sentence that said (in my free translation; the original is above): "It's OK to skip exercises; you're not trying to become a developer, you need to know just enough to get by." 
And this is exactly the reason I don't like the term "automation skills". An automation project is a programming project in every respect. As such, it requires programmers who know what they are doing in order to succeed. Otherwise, we are digging ourselves a hole that will be hard to climb out of - it could be that the infrastructure we write as "automation developers" will be very poor, and we'll have to replace it the moment we stop being excited that we wrote something that more or less works. Or maybe we'll simply have weak tests, because all we've learned to do is move the browser around a bit, so we never look at the database. About things like multi-threading (I considered using the Hebrew term and concluded it sounds silly) we won't even know to ask, because we learned "automation" instead of programming. 
So yes, programming is not hard to learn, and every software tester would do well to know how to do it. Just, for heaven's sake - stop calling it "automation" and treating it as half-effort programming.