Saturday, January 12, 2019

Let the customer choke



English version

This idea has been rolling around in my head for a while now, and every so often I run into a reminder to write something about it, and then find other things to keep me busy. The latest reminder was at the SIGiST conference, where I heard Joel Montvelisky talk about how the software testing role has changed over the years and where it is heading. The subject bouncing around in my head, by the way, is my objection to one of the seven principles of Modern Testing. I'll admit - I quite like these principles and think they are a good idea. That is, all of them except the fifth. In my free translation: "We believe that the customer is the only one capable of evaluating the quality of our product."
I don't like this principle, not even a little. In my opinion it is both untrue and harmful. Worse, the idea is alarmingly common, usually in the less fortunate phrasing of "the tester as the customer's representative".
On the face of it, the statement feels very right - especially if we accept Jerry Weinberg's definition of quality, "value to some person" (and James Bach's addition, "to some person who matters"). It only makes sense that the customer, who defines quality by their very existence, would be the best suited to judge it.
Except in the cases where they aren't.
Of course, I'm not trying to claim that a product that is perfect in every respect but that nobody buys or uses can pretend to "high quality", only that sometimes the customer simply cannot assess the quality of the product (and yes, Hebrew has a single word for that, "iyuch" - now you know too). For example, imagine an update to your phone's operating system. The update fixes a security vulnerability, improves memory management and improves the logs generated during a crash. Can you determine the quality of that update? After all, nobody hacked your phone before the update either, and software security is like insurance anyway - you only see it when something goes wrong; your phone has enough memory to cope with the not-so-great memory management for at least the next two years; and when your phone crashes you don't even see the log files, and even if you did they wouldn't tell you anything, because you have no idea where to start looking. At best, the customer sees an improvement in response times or in the pace at which features are released, and there are so many variables affecting those that you can't really tie them to the update you received. In all of these cases the effect of the changes on the quality of the product in your hand can be measured only indirectly, and you, as the customer, will notice only if something is severely broken. The difference between "sort of okay, more or less" and "well and properly made" is simply not visible to the customer. The point where the customer becomes the sole judge of product quality moves us to talking about "perceived quality", and that is an entirely different story.
And let's talk for a moment about "the customer" - who is that, actually? Suppose we fix a behaviour where the system charges customers for only three quarters of their activity. On the face of it, we are taking value away from them, because their product is now more expensive and does exactly the same thing. On the other hand, does anyone doubt that this needs to be fixed? And while that case is fairly extreme, it is very easy to find examples where the customer's interest conflicts with the user's interest, especially in B2B products. For instance, our IT department encrypts the hard drives of our computers. This makes them freeze for a second or two every now and then. On the other hand, it gives the company the confidence that simply stealing a computer won't help anyone who wants to steal information. The decision makers aren't much affected by this, because they don't work with their computers as intensively as we do anyway, and we don't even have a channel for complaining to them about it, so they don't feel a tenth of the annoyance we feel (which isn't that terrible, mind you, it's just irritating). The value to the customer is high, but the value to the user is negative.

So much for why I think it is wrong to leave the customer the final word on the quality of the product they receive, but if that were the whole story, I wouldn't bother writing about it here. After all, people are wrong on the internet all the time. The problem is that this approach can very easily lead to goal displacement (if anyone knows a good Hebrew translation for "goal displacement", I'd love to hear it). A simplistic understanding of this principle pushes us to look at the perceived quality of the product, and it takes quite a bit of interpretive acrobatics to justify anything that is not a new feature. Take, for example, infrastructure changes that would let us grow along with the market - right now our product is holding up beautifully and can double in size on the current architecture, so there is no point in changing anything, right? In theory, we can throw more hardware at the problem and that will carry us for another five years. The interpretive exercise required goes roughly like this: "If we don't straighten this out now, we will have to fix it under pressure, when our customers are already feeling the service degrade, and the fix will be more complex because over those five years we will add more capabilities that rely on the old infrastructure; besides, firefighting will keep us busy and we won't be able to deliver functionality to the customer at anything like today's pace." A multi-step argument like that is complex logic, and while I have no doubt that we are all clever and capable of such logic, the more we train our instincts to think of pleasing the customer first, the fewer such scenarios we will imagine and the harder it will be to justify forward-looking actions against our well-established gut feelings. I believe most teams today do not exactly need *another* thing pushing towards customer-visible features at the expense of necessary renovations.
True - if you really try, you can find a link between every positive action on the product and the value the customer receives, but that effect can be indirect and may be only one of the factors shaping the user's experience.
Another thought experiment to explain what I mean - imagine two companies making a similar, cloud-based product. The products themselves differ only in a few minor features that cater to each user's personal taste. Company A employs ten skilled engineers to operate the product, and they run it meticulously. Company B, on the other hand, chose to hire significantly cheaper workers, and for the same money it employs 100 people to operate the product. Operational expenses are similar as well - company A pays for expensive, highly reliable hosting services, with strong hardware and an architecture that can handle whatever the internet throws at it, while company B uses a cheaper service and rents a lot more cheap hardware to handle the same load successfully. Company B has a dedicated team that deals with crises and makes sure to duct-tape every hole and failure so that the customer never sees a thing.
Can we say that the quality of the two products is equal? On the face of it, these are two strategies that lead to the same place. Except that company B abuses its Operations people and makes their lives miserable.
A response I got to this from Joel was that the Ops people are one of our types of customers.
To which I can only say - no, they are not.
Until now, our arguments assumed an identity between the company supplying the product and the team developing it. That holds only as long as we treat everyone inside the company as part of that mechanism. Once we start looking at an internal team as a "customer", we change the resolution of the discussion and move from talking about the company as a whole to talking about the development team.
Which brings me to something I find very important to make clear: a development team has no customers. None. Not even one. We have an employer and we have colleagues. The language of "customers" is a great way to stress that we provide a certain kind of service, but in any other context the term creates ambiguity and confusion. "Customer" is a word loaded with contexts and behaviours, and internal employees cannot be treated the way customers are treated; if everyone is a "customer", the term loses its meaning. As employees, we are paid a salary to advance the interests of our employer. No less and no more. My pay will not be cut because for two days this month my employer chose to try out another team member, and the contract between my employer and me defines obligations very different from those between a service provider and their customer. And if, as an employee, I act to advance my employer's interests, let's talk about those for a moment - the lion's share of companies have one simple, easily measured goal: money. The company cares about the customer only as a walking bag of cash - keep them happy and perhaps more of that cash will come our way. So yes, it is very hard to build a profitable product that does not answer the customer's needs, but in the rare cases where the company's (long-term) profit conflicts with the customer's good, I need to remember who pays my salary and side with the company.

In short - perceived quality is not the same as product quality, and it is important to remember that while the customer is an important means of achieving our goal, they are only a means, not the goal itself.

Eff the user



This idea has been running around in my head for a while, and every so often I'll stumble upon a reminder to write something about it, and then have other things to occupy me. The most recent trigger was a talk by Joel Montvelisky at a local SIGiST conference, where he spoke about the changing role of testers. The subject on my mind is my objection to one of the Modern Testing principles. I'll admit - I really like them. That is, with the exception of principle #5: "We believe that the customer is the only one capable to judge and evaluate the quality of our product".
I don't like this one one tiny bit. I think it is a false statement and that it drives wrong team behaviour and goal displacement. Worse, the idea seems to be ubiquitous in the industry, or at least widely spread, as some of the variants of similar ideas out there show. I think #5 is one of the most carefully worded expressions of this idea (one cannot dismiss it with "a tester can't be a customer proxy/champion/advocate"), so I'll stick with it when thinking through my disagreement.
On the face of it, the principle is very solid - if we accept Jerry Weinberg's notion of quality being value to some person (and I like Bach's addendum, "to some person who matters"), it only makes sense that the person who is getting the value is the best judge of it.
Except when they aren't.
I am not claiming here that a product that is not used or bought by any of its target audience can claim to be "of high quality", but simply that there are cases where evaluating the quality of a product is something that the customer is not capable of doing. 
Imagine that you are getting an OS update for your phone. It improves memory management, fixes a security flaw and provides better logging in case of a crash. Are you in any position to evaluate any of this? Would an average phone user be in such a position? All of it brings value - better logging means faster debugging cycles and faster hotfix releases, a security update reduces the likelihood of the phone being hacked, and the memory management improvement means you can run more applications in parallel without lag. But how do you even notice any of those? You don't see the crash reports (and there are other ways to improve response time), the security update is like insurance - you only notice it if something bad happens - and your phone has enough memory that you never noticed any problem with the previous memory management scheme. In all of those examples there are simply too many layers of indirection between the actual property and the perceivable result, so the quality of the product, if assessed by the customer, must be assessed through proxy measurements. In some cases the customer might notice if you did a shitty job, but the distinction between "barely good enough" and "excellent" is not visible.
Then there's the ambiguity about who "the customer" is. Consider a bug fix that eliminates a scenario where the user is charged for only 3/4 of their actual use. By fixing it, we are taking "value" from the product's user, but adding value to the vendor. While this case is extreme, there are more examples when one looks at B2B products - the full disk encryption scheme used by our IT department makes our computers freeze for a second every now and then, but provides the decision makers (who don't use their PCs as intensively as other employees do) the peace of mind of knowing that data cannot be leaked by simply stealing a PC. Customer value? High. User value? Not so much.

Now, to goal displacement and wrong team behaviour. A simplistic understanding of this principle drives a lot of focus onto perceived quality and requires advanced logic-fu to justify any kind of improvement that isn't a new feature. Take, for example, an architecture improvement meant to let our product scale to need. We can go with the current architecture for a year or so, and probably throw more hardware at it for the next five years, so, no real benefit to the user, let's not do that. The rule-fu required would be something along these lines: "If we don't do it now we can probably patch the system for 5 years, but it will gradually slow us down as we deal with issues as they appear, and will also expose customers to issues caused by this load. In addition, in five years' time we'll have more components relying on the old architecture we want to replace, so the change will take longer and be more risky and complex." This sort of logic is not trivial and far from immediate; if we train our instincts to zero in on customer satisfaction we'll have a tough time coming up with such scenarios, let alone justifying such future-looking actions in the face of a well-trained gut feeling. Most teams don't need another thing applying pressure and bias towards perceivable features. True, if we insist, everything we do has an ultimate impact on the customer, but sometimes the impact of good engineering practices is several steps removed from anything the customer can perceive, and it will usually be only one of the factors affecting that specific property of customer value.
Imagine two similar SaaS products; both have the same capabilities and a similar number of customers, and they are distinguishable only by minor features that might appeal more to someone's personal taste. One product is built by a team of 10 highly skilled engineers running a tight ship. The second is maintained by 100 people who work frantically to keep things afloat. The cost of operation for the two products is similar - the top-notch engineers are paid 10 times as much as the less skilled ones, and the robust hardware and hosting services used by the first team are as expensive as the cost of throwing cheaper hardware at the design problems of the second team. The perceived quality is the same, and both companies can scale at roughly the same pace with roughly the same resources - is the quality of the two products equal? They are supplying the same customer value by different means, only the second team is making it really difficult for their operations team.
I got a response to a similar claim from Joel, who said "The operations team are your customers as well".
Well, no.
Up until this point in the text we have equated the development team with the company. By claiming that someone internal (be it operations, marketing or product management) is also a customer, we are changing the resolution of the discussion and shifting our focus from the company to the development team.
Let me say it clearly: as a development team, we don't have customers. We have employers and colleagues. Calling them "customers" might be useful when trying to convey the message that we provide a service to our colleagues and employers, but otherwise it causes confusion. As employees we are getting a salary to promote the interests of our employer, and that's it. I won't get a lower salary because my employer is unhappy with my work this month, and the contract between us has very different obligations than those between a service provider and a customer. If I'm working towards my employer's interests, it is time to face the cynical truth - most companies care about the customer only in terms of financial gain, and a customer is nothing more than a walking bag of cash. Sure, it's very difficult to have a profitable product that does not align well with the customer's needs, but when the two values are in conflict, I should be choosing the (long term) monetary1 business value over customer satisfaction.

So, short version: perceivable quality is not quality, and it's important to remember that the customer is a (very important) means to achieve our goals, but not the goal itself.



1 Thanks to Brent Jensen for reminding me that business value could be more than simply monetary. It's a bit harder to measure, and I believe it makes the point even more relevant - this business value could be the company's reputation, business contacts or even its moral values, for those fortunate enough to work in a company that cares about those.


Monday, November 19, 2018

Snake oil for the masses

Snake oil for all

Source: http://thehealthcareblog.com/blog/2016/06/28/the-patient-and-the-snake-oil-salesman/

About two years ago we had an open position and were looking for another tester for the team. The hiring process was long and gruelling, but one of the candidates was so unique that his story stayed etched in my memory.
We asked the candidate to solve a simple programming problem on the whiteboard. How simple? Simple enough that I felt comfortable asking my younger brother, whose programming background at the time amounted to the high-school computers track, to solve it. At any rate, the candidate sat and thought and deliberated and pondered, and finally, in a burst of honesty, said: "Look, I'm an automation person, I'm not a programmer."
To our credit, we managed not to laugh.

And why did I remember this now? Good question. I recently came across an advertisement for an "automation workshop" - you know, those things that promise that within five minutes, with no effort or prior knowledge, you can write automation.
Usually I see these things and quietly fume, because it's completely clear to me what is going to happen in that workshop, or course, or whatever you want to call it - they'll take a bunch of people who genuinely want to learn something, sit them in front of a computer, dictate a few lines of code to them, the attendees will see a browser running before their eyes and think "there you go, we have automation". There are about a million things wrong with this approach, but that's not what we're here for, because this time I decided to break with my habit and asked the workshop presenter if I could come and take part. After all, what are a few hours of my life compared to the ability to smear people on a well-founded basis?
Besides, if I'm investing the time anyway, I should be fair. So I decided to come with an open mind and with a few criteria:
  1. A fair presentation of the workshop - this is a roughly two-hour workshop, you can't really cover much, so it's important to present the goals realistically so the participants know what to expect.
  2. Is there an advantage over watching content online?
  3. Respect for the profession (in this case - automation).
  4. How much is the workshop geared towards selling the paid content?
Starting from the end - I thought it would be worse, so you could say I was positively surprised.
In a bit more detail, the workshop got full marks on items 1 and 4, left me torn on item 2, and in my view genuinely failed only on item 3.
The presentation of the workshop was fair - the presenter mentioned more than once that in two hours you can't really learn to do things properly, or understand everything, and that there is plenty more to learn. For me, this was also where item 4 succeeded, as the presenter avoided saying at every turn "and that we'll learn in the course I want to sell you" - everyone there knew that the main reason for the workshop was sales promotion, but the few references to the longer course never felt like too much, and the class stayed focused on the task. One point that stung a little in this context was that the instructor did his best to convince the participants that their chances of finding a job in testing are low if they don't include automation skills on their CV. That is well within the rules of the game, and there is enough data to support such a claim, but given today's market I still think some nice cherry-picking was done here to present the situation as more dire than it really is. Even so - for my taste, at least, it wasn't excessive.

The advantage over the internet is, in a way, a mixed bag - on one hand, there is someone who can answer questions. That's important. It also helps sometimes. On the other hand, when you learn on your own there's no nagging time limit like there is in a workshop, so dealing with difficulties becomes simpler, and you can stop to understand things a bit more deeply. It is completely clear to me that if I compared this workshop to good online content (for example, the free parts of Alan Richardson's course), the workshop delivers content at a much lower level. Then again, when you set out to learn a subject online by yourself, it's hard to know whether you've found something good or something bad, and there is enough content out there that would make this workshop look quite reasonable, so I'm willing to give the presenter the benefit of the doubt and say that attending such a workshop beats searching the internet at random.

But the important part is the respect for the profession. This is why I had to actually attend the workshop rather than judge it from afar - while the content of such a workshop is self-evident and differs only in a few nuances, what really matters is the subtext - the way the content is talked about and the general atmosphere in the room. In short, everything that will shape how the participants move forward from there. It's important to say: any short workshop that tries to teach non-programmers to use a specific library (Selenium, in our case) starts its evaluation on this item with a failing grade. That said, a good workshop can still pass if the messages it sends are the right ones.
Here, quite early on, there was a small point of light - the presenter noted that the most important thing for anyone who wants to learn automation is to learn to program. In a world where quite a few people are trying to sell "codeless automation", that is a message that must not be given up. Other bright spots were the statement that becoming good at automation takes a fair amount of work and effort, and a reminder that automation is not just Selenium.
But that's not enough. First of all, because actions speak far louder than words, and the actions on the ground sent the opposite message. Automation is not just Selenium? Yet both in the workshop and in the longer course it's the first (and, if I understood correctly, the only) thing being taught; everything else is "advanced topics". You need to invest work and effort to be good? You need to learn to program well? How does that square with statements like "in automation we don't write high-level 'programmers' code'" or "a programmer who chooses to write automation does so because he doesn't have what it takes to be a product developer"? And how does it square with the course's whole purpose of "breaking into the automation world fast"? Moreover, even in a two-hour workshop that focuses on Selenium and exists to drive sales, I would expect to hear a word or two about writing code properly - even if there's no time to explain what functions or variables are, one could sneak in a word or two about sensible naming, and about extracting strings into variables with reasonable names. One could also mention, even in passing, that what we wrote in the workshop is not a test until at least one assert line is added.
If that's the approach - no wonder I'll meet people in interviews who think it's legitimate to say they are automation people, not developers.

To sum up, the situation is not good. On one hand, this presenter answers a real market need - I absolutely believe him when he says that most of his students find jobs in automation. In that sense, the service he sells is a real and valuable one. On the other hand, I believe the service he provides is a disservice. The approach he promotes actively harms both the profession and the testers who study with him - they enter the job market with a deficient skill set and a mistaken attitude, and then do a poorer job than they should. An automation project that follows the principles discussed in the workshop is a project doomed to fail within three years. Meanwhile, the market learns that bad programmers are the ones who should write automation, and lowers its standards - which means more projects are doomed to fail, because less is invested in them to begin with, and the average salary drops, because the bad programmers have fewer options. I didn't say any of this out loud there, because I didn't come to ruin the presenter's sales event, but I think that anyone who really wants to learn "automation" should simply learn to program, and if that still feels complicated, add a couple more days to cover the specific tools used for writing tests.





A couple of years ago we had an opening for another tester on our team and I found myself interviewing a lot of candidates. It was a bit long and very painful, but one of the candidates left a lingering impression. As with most candidates, we asked him to solve a simple coding problem on the whiteboard. How simple? Simple enough that I asked my brother (whose programming background at the time was the stuff taught in high school) to solve it, and he had no real difficulty with it. The candidate sat and thought about it. He pondered and considered and then contemplated a bit. Then, in an outburst of honesty, he said "look, I'm an automation person, I'm not a programmer".
We managed to keep a straight face.

The reason I recalled this now is that last week I stumbled upon an advertisement for an "introduction to automation" workshop, free of charge, only two hours long. You know, the stuff that is usually done to sell a longer, pricey course that will teach someone "automation" from scratch. Normally I see these things and fume silently. Those workshops have a constant format: you gather a bunch of people with no prior knowledge of coding, make them copy-paste some Selenium commands in your language of choice and voilà! Automation! There are a million things wrong with this, but that's not what we're here for, since this time I changed my normal habit and asked the organiser whether I could join. After all, what are a few hours of my time for the ability to smear people on a well-founded basis?

However, as I was investing some time in it, I decided to try to be fair and set some criteria according to which I would pass my judgement. They are (in no particular order):

  1. Fair presentation of the workshop. One can only achieve so much in a two-hour session, especially when assuming no prior knowledge. Making sure to say that explicitly and let people set realistic expectations is important.
  2. Is there a real advantage to attending the workshop over just going through some online tutorials?
  3. Respect for the profession (in this case - automation) - a sales pitch that conveys the message "Automation is not that difficult, everyone can do it" will gain my contempt faster than one that says "Automation is a complex task, let me help you take the first steps".
  4. How much is the workshop geared towards sales? It is one thing to attend such an event when it is clear that it is only a taste of a bigger course; it is another to have the speaker mention it every other sentence and push it down the audience's throat.
To start with the conclusion: I expected worse, so one might even say I was positively surprised. The workshop got full points for criteria 1 and 4, sort of passed the 2nd and failed only on the 3rd criterion, and even that was less severe than I expected it to be.
The workshop was presented in a fair way - the speaker mentioned that no one would leave the room an expert, and threw some terms around to show that there is more to learn. This is also where I decided that the level of sales promotion was decent - the fact that most of the mentioned "stuff we won't learn today" is taught in the course went almost unmentioned. Naturally, it was there in the background, but I appreciated that it was not stated bluntly. The only discomfort I felt here was that the speaker did spend some time convincing the audience that it is becoming more and more difficult to find work without "automation skills", and while the market is probably going in that direction, I think there was some cherry-picking in the job ads presented as "evidence" to make it look more significant than it actually is. Still - it felt well within the boundaries of the game.

An advantage over the internet was a bit tricky to determine. On one hand, there's someone who knows his way around who can answer questions, explain what is needed and point the way forward. On the other - when one studies alone there isn't this pesky time limitation that forces the approach of "just copy-paste this, we don't have time to explain it in detail", and being forced to deal with some technical difficulties usually means better understanding. It is quite clear to me that in comparison to a good internet tutorial (such as the free parts of Alan Richardson's course), this workshop falls very short: the explanations in the video series are far better than what one could get in the workshop, and the content is far superior in that by the end of the workshop the participants had a "main" function that couldn't be used or extended in any way, while following the video tutorial would have left one with a fully functional, albeit very basic, testing project (Maven + JUnit). However - I knew what to look for. I was familiar enough with Richardson's work to suspect he would have a very good tutorial, and I had the knowledge to evaluate the content once I saw it. Simply googling for "selenium for beginners" gave a lot of results that would put the workshop contents in a very good place. So, taking that into consideration, I think there is a small benefit to attending this workshop over trying to learn alone (unless you are reading this post, in which case, you know where to find better content).

Finally, the main pain point is in what I labeled "respect for the profession". This is the main reason I felt the need to actually attend the workshop: the content of such workshops is pretty much defined by the genre and will differ only in a few small nuances, but the real difference is in the subtext - in the way things are presented and the general atmosphere of the whole activity. It is important to say that in my eyes every short workshop with the goal of teaching non-programmers to use Selenium starts this evaluation with a failing score. A good workshop might flip my impression by focusing on the correct messages, but it isn't trivial to do so.
Here, it actually started with a small bright spot, when the speaker mentioned that in order to learn automation one needs to learn coding, which is an important message to hear when the market is so full of solutions selling "codeless automation". I was also happy to hear that this speaker did not equate automation with Selenium (and provided some reasons for focusing on it first) and that people wanting to know how to automate will have to work hard at it.
However, this wasn't enough. Primarily because actions are stronger than words, and they were sending the opposite message. Automation is not only Selenium? Then how come the workshop (and, as far as I understood, the "basic" course as well) is only about Selenium? I would have been satisfied with even a single, simple assertion.
Hard work is required to be able to learn automation? How does this align with statements such as "When automating we are not writing 'developers' code'" or "A programmer that decides to write automation is probably lacking the skills to write production code"? And how does it align with the whole goal of teaching non-coders to write automation in less than 15 lessons? If I'm generous, that's the amount of time invested in CS101, and no one would dream of considering CS101 graduates decent programmers. In addition, even in a two-hour workshop, if good coding was something the course provider believed in, I would expect to see some minor tips about naming variables or extracting logic into functions, even if it was only to say "normally I would do this, but here we don't have the time".
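To make the missing piece concrete, here is a minimal sketch of roughly what "the workshop demo, plus naming and one assertion" could look like. I'm assuming the Java + Selenium + JUnit 5 stack the longer course seems to be built around; the page URL, the locator and the class names are hypothetical placeholders, not anything shown in the workshop.

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertTrue;

class SearchPageTest {

    // Strings pulled out into named constants instead of being pasted inline.
    private static final String HOME_PAGE_URL = "https://example.com";
    private static final String SEARCH_TERM = "selenium";

    private WebDriver driver;

    @BeforeEach
    void openBrowser() {
        driver = new ChromeDriver();
    }

    @Test
    void searchResultsMentionTheSearchTerm() {
        driver.get(HOME_PAGE_URL);
        driver.findElement(By.name("q")).sendKeys(SEARCH_TERM);
        driver.findElement(By.name("q")).submit();

        // Without at least one assertion this is a browser demo, not a test.
        assertTrue(driver.getPageSource().toLowerCase().contains(SEARCH_TERM),
                "Expected the results page to mention the search term");
    }

    @AfterEach
    void closeBrowser() {
        driver.quit();
    }
}

Nothing here is beyond a two-hour session; the difference is only in what the instructor chooses to emphasize.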
If this is the approach taken by course providers, it is no surprise that a candidate thought it was legitimate to claim to be "an automation engineer, not a programmer".

In conclusion, the situation isn't good.
The speaker is answering a market need - I believe his claims that his students are able to find work in automation. In that, he is providing his clients a valuable service. But I think that this course (and its similar competition) is doing a disservice both to its students and to the profession. The students are getting to the market with a lacking skillset and a skewed perspective on what good automation should look like. Therefore, they are not doing as good a job as they should have been doing. In fact, I would wager that any automation project based on the principles demonstrated in the workshop will fail within three years.
Meanwhile, the market is flooded with low-skilled "automators", and the many places that are now venturing into the automation space are finding a lot of bad candidates around - some of them will get hired since the hiring company doesn't know better. In effect, the market learns that "bad programmers are good enough for automation", and this has an impact in three ways: good programmers consider automation to be beneath them, companies pay those with automation skills less (compared to "real" programmers), and the expectation from an automation project is lowered - making it a self-fulfilling prophecy.
In my opinion (which I kept pretty much to myself, as my goal was not to ruin the sales event), any person wanting to learn automation would do well to avoid such courses and focus on learning to code well. If, after doing so, automation is still a challenge, just take a two-hour course on some of the specific tools used in testing.

Monday, November 12, 2018

אולי אחר כך

Maybe later


Timing is everything in life. Or at least that's what everyone says.
Usually, when timing comes up in the context of software testing, it goes in one direction - how can we get involved earlier?
But of course, that's not all there is to it - sometimes you need to know when not to speak, and sometimes when you've already missed the opportunity and it's better to move on.
And the story goes like this: roughly three months ago I was abroad attending CAST (it was great, thanks for asking). Just before I flew out, we had finished planning at work how one of the more complex parts of our product would look. I was involved in the design, it was interesting, and everything was perfectly clear. When I came back after two weeks away, the team member leading the development told me there had been quite a few discussions and arguments, and it had been decided to turn the design into something else. I couldn't quite understand what or why, but well - if you're not around, you can't complain about not being consulted. While catching up on what had happened in my absence, I discovered another feature at an earlier stage where, there too, there were a few things I thought should be done differently. As I talked with the team member leading that feature and asked why this way and not another, the answer I got went roughly like this: "There were endless discussions about it, I'm pretty fed up with the whole thing, so this is what we decided and I don't have the energy to fight over it." Two warning lights went on here. First, there is someone writing code who doesn't agree with the way it is being written. That's a recipe for mistakes. We all have to do it occasionally, but it's not at all simple to let go of your own ideas and write the code the way someone else wants it. Second, it means there had been several discussions that didn't go as they should have, and we had reached something equivalent to proof by exhaustion. In short - now was not the best time to ask for far-reaching changes. Instead, I offered to do the more critical parts myself (a small tip - if you work with relational databases, a mistake in a table or column name is not something you want to fix "in the next version"; like trains, it's something you need to kill while it's small). The rest? Well, that could wait for later. So could the big feature whose design had somehow been flipped.
The reasoning here was simple - it is far easier to fix code than to fix relationships within the team, and far easier to live with unremarkable code than with a breakdown in communication.
What I somewhat ignored was, as always, the human factor. After a while, ideas tend to take root and become very hard to uproot, because people get used to them, and replacing something after the fact is a different kind of effort than changing it while it's still being written.
At the end of the day, we did have the discussion that mattered, simply because we had managed to thoroughly confuse ourselves. I'm still not in love with the final result (or rather, the current result - with every passing day we get closer to the sensible design we had in the first place, just with slightly different language), but one of the most important things I've learned so far is that although it's terribly easy to think I'm always right (which is true regardless), the people I work with understand software no less well than I do, and when they make different design choices it's because each of us assigns different value to different properties of the code. What really matters is that we reopened the subject and talked it through to better understand what we're trying to do. Another outcome of the discussion was a task to re-diagram all the decisions we had made, so we could get aligned and see that we really were talking about the same thing - which saved us trouble down the road, because we found some of the expected obstacles earlier.
In short - you don't always have to insist on discussing everything "now"; sometimes you can stay quiet for a bit, as long as the important subjects aren't forgotten.




Timing is everything. Or at least that's what everyone says.
Usually, when said in the context of testing, it has one meaning - getting involved earlier.
Naturally, it is not everything. It is just as important to know when not to speak, and even to recognize when the opportunity to speak is gone.
A couple of months ago, I was abroad participating in CAST (it was great, thanks for asking). Just before I flew over we finished designing a complicated piece of code in a way that I was quite happy with. Two weeks later, when I returned, I found out that there had been some discussions and the design was changed. I didn't really understand or agree with all of the decisions made, but that's fine - I wasn't there when the discussion was taking place and it's unreasonable to expect them to wait for me just because I took a vacation. As I was catching up, I came across another feature where I had some comments. As I spoke with the team member who was working on it, I got a response that went along the lines of "there were countless discussions about it, I'm sick and tired of all this, so I just do what I'm told". This raised two alarms in my head. First, we have someone writing code whose design they don't agree with, which is a source of potential mistakes: writing code is largely about having a feel for which part fits where, and when you write something you disagree with you don't always have that intuitive understanding. The second, more serious alert was that there had been some discussions that went wrong, and we were in the domain of proof by exhaustion - and I don't mean the proper mathematical type of exhaustion. In short, now might not be the best time to ask for a major change I thought was required. Instead, I asked for the urgent changes only (you don't want to go and change database tables once in production, it makes things way more difficult in terms of backwards compatibility) and offered to do the work myself - I went to get agreement from the other people, and changed the code. I also left the important but not urgent discussions for later.
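As an aside, the "database tables in production" point is worth a small illustration of why I treat it as urgent. Once data is live, even a simple rename usually turns into an expand-and-contract dance spread over several releases. This is only a sketch, done here through plain JDBC with hypothetical table, column and connection details; a real project would put these steps into whatever migration tooling it already uses.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RenameColumnMigration {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details - replace with your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/appdb", "app", "secret");
             Statement stmt = conn.createStatement()) {

            // Expand: add the correctly named column alongside the misnamed one.
            stmt.execute("ALTER TABLE orders ADD COLUMN customer_id BIGINT");

            // Backfill existing rows from the old column.
            stmt.execute("UPDATE orders SET customer_id = custmer_id WHERE customer_id IS NULL");

            // Over the next releases the application writes to both columns and reads from
            // the new one, and only once nothing references the old column is it safe to contract:
            // stmt.execute("ALTER TABLE orders DROP COLUMN custmer_id");
        }
    }
}

Catching the misnamed column before the first deployment costs one review comment; catching it afterwards costs all of the above.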
My reasoning for not pressing on this argument was simple: Fixing code is easier than fixing people, and it is much easier to deal with a not so perfect piece of code than to deal with a communication problem in the team.
What I didn't take into consideration was, of course, human nature: after a while, ideas take root in people's minds and they get used to them, so changing their minds later is more difficult. In addition, changing something in retrospect usually takes more effort than changing it while it is being created, so the price of action is higher.
In the end, we did have another discussion about the design - I can't say I'm super excited about the result (or rather, about the current result, as the solution is inching its way towards the original design that I liked, only with different terminology), but over the years I've learned one thing: while it's easy to think I'm always right (I am, but this is beside the point), the people I work with know software at least as well as I do. If someone else prefers a solution I don't like, it's usually not because they are wrong, but because each of us gives different weights to the various properties of each solution. What was really important was that we discussed our differences and reached an agreement about how we want to push forward. We also added a task to create a diagram of what we had decided, to make sure everyone was aligned - which helped us avoid some mistakes later on, since we found some of the obstacles earlier.

So, to sum things up - speaking up right away is not always the right thing to do, as long as you make sure to have those discussions later.

Sunday, September 9, 2018

Data and Goliath - book review


TL;DR - you need to read this, and I was impressed enough to buy a physical copy of the book.

Listening to audiobooks is a great way to make use of my time while driving, and it sure makes traffic that much more bearable. As a bonus, I get to learn new stuff. Ain't that great?
This time I was listening to Data and Goliath by Bruce Schneier. I must admit - I wasn't expecting much of it: big data here, big data there, big data everywhere. In fact, this book makes a very good case for the importance of privacy and does a good job describing the problems in the trends we see in today's economy, where data is one of the most valuable currencies. I was eagerly waiting to get to the final section of the book, where the author lists some actions we can take to improve the current state of affairs. That last section, sadly, is the one part I didn't like, since after making some promises at the beginning it can be summed up as "there isn't much you can do, so go ask your government to pass some laws".
After a short summary of the book in the introduction, which is great if you don't have the patience to walk through the details, the book delivers its content in three sections.
The first section, dubbed "The world we're creating", is very much a description of the current state of affairs. It goes over what data is and how it is generated (short answer - we generate data merely by existing: our phones provide location information, security cameras videotape us, and everything we do on the internet basically generates data), how surveillance is ubiquitous (a word used many times in this book) and focused on "meta-data", and how cheap it is to just pile up the data - we got to the point where storing stuff is cheaper than filtering the interesting bits out. It describes two factors driving the momentum of surveillance forward. The first is the financial incentive of businesses that use the data for marketing that supports a "free" service on the internet, and then sell information as another revenue path. The second is governments, which quite unsurprisingly want to know everything - fighting crime and terrorism are the reasons most used to support that. Private companies may collect the data of their users (which might be a staggering amount of data if one considers giants such as Facebook, Google or Microsoft) and maybe buy some data from other companies, but the government is even more encompassing - laws require companies to share that data without much supervision (and sometimes a gag order is issued to ensure everything remains hidden), other regulations might demand a backdoor for the government's use, and sometimes the various agencies actively hack products and infrastructure to maintain access to data. One main concept that makes perfect sense, and yet one I had not considered explicitly before, is that if enough data is collected on the people one interacts with, just about everything can be inferred about a person who, in theory, isn't tracked - just because tracking the others will capture their interactions with that person. The book appropriately uses the term "inverse herd immunity".

The second part goes back to "why?" - or, to be more specific, why should we care so much? The most important part, in my eyes, is a head-on challenge to the all too common saying "If you're not doing anything wrong, you don't have anything to hide". This statement has been with us way too long, and I've yet to hear a debate around privacy that did not use it in one form or another. In fact, there are many secrets people keep all the time - think about people who had AIDS a few years back: everyone who knew that about them would assume they were homosexual and irresponsible (the former is still not fully accepted everywhere), even if they got infected by a blood transfusion. But that's an extreme case - would you like your employer to know you've started looking for another place? What if you were trying to surprise your spouse with a vacation abroad and someone other than you told them about it? Would you want a conversation in which your manager asked you to help a struggling co-worker to become common office knowledge?
We all have secrets. Or at least some data compartmentalization mechanisms in place - we choose which information to share and with whom, and if asked directly, most of us would not volunteer to be monitored (a rather blunt example of the effect of a complete lack of privacy can be seen in the movie "The Circle", which is not a very good movie. You can instead listen to Screen Testing episode 11, which is where I heard about it).
Besides the personal aspect of privacy, the book mentions other reasons for strongly opposing mass surveillance (yes, we don't usually think of it in those terms, but when just about every company or national or municipal authority has access to data about a large number of people - this is what it is).
Those reasons are political liberty, commercial fairness, economy and security.
Of those reasons, the one with the most aspects is, surprisingly, political liberty. For starters, let's consider what the author calls a "chilling effect" - when people know they are being watched, they behave differently. Just remember the last time you drove by a police car and slowed down a bit even though you were well below the speed limit. Now imagine driving while knowing that by the end of the day the police will get an exact report of when and where you were speeding, and when and where you crossed a white line. This can easily be done if the police were to take your location data from your phone or service provider. Such surveillance pushes people to conform to norms.
Second is the potential of being harassed by the law - no one is 100% law-abiding: people speed, make mistakes on their taxes, cross streets at a red light and so on. A strong enough political figure (or a petty enough police officer) could make an individual's life miserable by digging into the data about them and looking for petty crimes. Mass surveillance lowers the costs of such activities and removes most of the regulation around them.
Finally on that matter is the important role dissidents play in social change. It's a bit odd to wrap one's head around it at first, but then it just clicks. Basically, in order to have social change we need to allow for some degree of illegal activity. How so? Consider two rather recent examples: same-sex marriage and smoking marijuana. Looking 30 years back, both of those activities were considered shameful at the very least, and probably downright illegal (they still are in some countries). Yet a growing number of people were doing them - first in hiding, then the laws were not enforced, and then the legalisation debate started (and is still going on in some places). In the meanwhile, public opinion is shifting. This is possible since homosexuals could hide in the closet and avoid being persecuted, and since enough people used pot illegally without "getting caught" (or if they did, without serious repercussions), so they could form communities and lobby for that specific activity to become legal and accepted. When surveillance is omnipresent, we get the opposite. The chilling effect mentioned earlier kicks in and people try to remain well within the "norm", dragging social change to a halt. When people are aware of being constantly monitored they prefer to err on the side of safety and not act (in "Thinking, Fast and Slow" Daniel Kahneman states that people feel loss or pain about twice as intensely as gain or pleasure) and thus self-censor their actions and behaviours. This inaction, in turn, causes stagnation and fortifies the boundaries of what is "acceptable", effectively narrowing them.
The other categories are almost self-explanatory -
The economy of a given country suffers from surveillance since there are comparable products that track people less. For a long while the company I worked at blocked Skype from being installed, since Microsoft were (being forced into?) providing a backdoor for the NSA to eavesdrop on Skype calls. After Cambridge Analytica's shenanigans with Facebook blew up, we could see the #DeleteFacebook hashtag running around, and other examples are out there. The chapter focuses mainly on regulation forcing companies to "share" data with the authorities and asks an important question: if a certain country is known to demand that businesses provide backdoors, and issues all-encompassing gag orders to hide it - who would do business with any company from that country?
Commercial fairness is the term the book uses to describe data-driven discrimination. After listening to Weapons of Math Destruction I needed very little convincing that "big data" can be and is being used in ways that discriminate against people unjustly. In short, data is being used as a more obfuscated form of redlining (for anyone such as myself lacking the American reference - redlining was a practice where banks avoided investing or approving loans in impoverished neighborhoods). While there is an objective financial gain from redlining - obfuscated or not - the practice is harmful in the long run, hampering social mobility and punishing people for being poor or part of a minority group.
Last is the argument of security. Again, this focuses mainly on the activities of governments and other national authorities. The claim is simple, and widely accepted in the security community: there can be no backdoor that only the "good guys" can access. By forcing companies to install security flaws in their products, by actively hacking civilian organizations, and by hoarding vulnerabilities instead of driving for them to be fixed, governments are making the entire internet less secure.
One last point is worth mentioning in this section: none of the arguments raised here are new, so one might wonder what has recently changed to warrant such an interest in privacy. The answer is that two things have changed. The first is that storing information has become so cheap that "let's just store that and see later if we can do something with it" is a viable, almost affordable, strategy - so more information about us is being stored. The second is that our lives revolve more and more around computerized systems. Our phones, and the multitude of security cameras on the street, mean that we generate a whole lot of data just by taking a stroll. In the past, information was ephemeral, in that once a conversation was over it would reside only in the memory of the participants (recording was possible, but not common). If someone sketched a note and then ripped it to shreds, that information was gone. This is not the case today, when we communicate online and our information exists on other people's computers (sometimes we call them "servers") that have routine backups, so even data we thought we'd deleted might only have been marked as such in the database and not actually deleted, and even if it was deleted, the database might have backup tapes that go years back. Today, the follies we commit as teenagers go on social media and will haunt us when we're older - children will see their parents drunk in photos from 20 years ago, potential employers will see an old tweet they vehemently disagree with, a picture shared by a proud parent today will be used in 20 years to steal that child's identity. The persistence of data ensures that if there's a tiny bit of information we don't want a specific person to know, they are sure to find it.

So, what can we do about it? The book answers this question for three types of actors - governments, corporations and private people. 
Governments wanting to improve the privacy of the world they exist in are quite powerful: they can set rules and regulations that limit access to data, strive to fix the vulnerabilities their intelligence agencies find instead of hoarding them, protect whistleblowers from charges (so that citizens will know if government officials are subverting their privacy) and avoid a number of harmful activities exercised by the different departments. The most interesting idea in this part is to provide a "commons" internet arena - platforms on the internet that are "publicly owned", much like parks or sidewalks - a place where the financial pressure to track users and maximize revenue is negated. Those public domains should be defined by specific laws ensuring proper conduct and resilience to surveillance, and should be funded by taxes.

Corporations, too, are quite powerful, in that they are the main collectors of information. So, if a specific corporation decides to collect less data - it can. It can also be very transparent about the data it collects and do a decent job of protecting it, so that it will only be used for the purpose it was intended for. Being both technically savvy and large enough, corporations can (and do) also battle the government's attempts to breach security - by creating a secure product when not specifically obligated by law to do otherwise, by challenging warrants and government requirements in court, and by investing in research to secure their products. Since the current state of affairs favours businesses that do surveillance, it is understandable that quite a large part of the chapter is about "what a government should do to protect from corporations".

The third group is private people using the internet - being the target of surveillance by two major forces, there is, unsurprisingly, very little one can actually do without incurring significant harm to one's ability to operate. A person can install some privacy-enhancing plug-ins (Privacy Badger and AdBlock are two that I use), make sure to use TLS everywhere possible, avoid having a social media profile, and pay for services that promise privacy instead of using the "free" equivalents that guzzle your data to increase revenue. One can also regularly leave one's phone behind, pay only in cash and move to the mountains to live off the land. Apart from those rather insignificant actions, the main suggestion is to ask your local politicians to change the laws.

One thing which is important to remember while reading this book is that the idea of trading our privacy for services isn't inherently wrong, and the author does not claim otherwise - processing data, even personal data, can do some good: it can improve people's health and support research (imagine a DNA database that is used to find suitable organ donors, or to warn people about dormant life-threatening genetic flaws), it can be used to improve road safety, reduce traffic or prevent credit card fraud. Also, minor as it might be, it can help people find things they want through better personalized advertising. The main issue is that the deal today is implicit and all-encompassing - there's no backing out of the deal we unknowingly made, and there aren't enough incentives not to keep all of our data.

The last point I want to touch upon is how fast this book seems to age. It was published in 2015 with what was at the time the most up-to-date information, including some insights from the Snowden leaks of 2013. Despite that, while listening to the book I had a constant feeling of "missing out". In the roughly three years since the book's publication, many of the trends shown as warning signs already seem to be at full scale, and reversing them seems even more difficult than what is described in the book. I'm a bit optimistic seeing some positive changes such as GDPR, and wonder whether it will be enough, or whether we will drift towards a world with zero privacy.


In conclusion - go and read this book.

Also, it just so happens that I'm finishing this just before the Jewish new year, so if you got this far: Happy New Year!
Shana Tova!

Sunday, August 12, 2018

Cast 2018, day 2


One thing is common to all good conferences – I miss out on sleep hours because there’s so much to do, and this conference was no different.
I woke up, organized my stuff, and went down to lean coffee, only slightly late. The topics, as usual, were very varied – we discussed personal insecurities, what it means to be a senior team member (short answer – you don’t get to actually work) and how to approach the whole issue of effective and efficient documentation & reporting. Everyone was engaged - almost every topic got at least one extra timeslot.
The opening keynote of the day was delayed due to the speaker’s flight being delayed, so instead we got the next activity of the day a bit earlier. I went to Lisi’s mobservations session – she dealt really nicely with the surprising change of plans and had the classroom ready for a mob. If you are ever in a session where there is a demonstration of mobbing, do yourself a favor and volunteer to be part of the mob. Yes, you’ll be putting yourself in front of an audience, but watching a mob is nothing like participating in one. As a mob, we spent quite a while orienting ourselves around the application under test and trying to decide on a concrete direction to take, and had a difficult time doing that. But frankly – testing the application wasn’t really what we were there for. Learning to mob was our purpose, and Lisi provided some excellent guidance to help us focus on how we behaved as a mob and then how we behaved as individuals in a mob. All in all, we got a reminder of why mobbing is difficult, but also saw how effective it is in dispersing knowledge in the team – even if it was only how to use certain tools or deal with an operating system in German. I feel this exercise should have been maybe a couple of hours longer to really get a decent pace going, as a lot of the insights we came to required both trying things out and some hands-off reflection. But, given the constraints, and while there is always something more that can be improved, it was a good experience for me and I would be happy to have more like it.
Sadly, I cannot say the same thing about the keynote, to which I didn’t connect at all. The overarching topic was similarities between UX design and testing, but it felt very remote and detached. Perhaps I was missing the background to appreciate such a talk.  But, you know, that happens, too.
Good thing lunch was immediately after that. I had a nice chat over food and drink, and then went risk-storming with Lisa, Alex and a few other testers. This was a very interesting experience for me, and the first time I held a deck of TestSphere cards, which appear to be a useful tool to have in certain situations.
Afterwards I attended Paul Holland’s workshop on unlocking creativity in test planning. It was very nicely built, and I got both to troll Paul over Twitter by paraphrasing what he said and to take away some important insights from the workshop. First of all, a requirement for creativity is peace of mind, which is obtained by setting boundaries – both spatial and temporal. Second, some ideas just take time and offline processing. Third, ideas bring out other ideas, so stupid ideas will most likely attract some good ideas as well. But most importantly – don’t burden yourself with too much information. Get a basic understanding of the task, then stop to think and process, and only after you’ve done some hard thinking come back to the rest of the details and see whether the concerns you had are addressed by all of the tiny details you skipped, and what they add to the mental picture you already have in mind.

The best talk of the day was waiting for last. I went to Marianne’s talk titled “Wearing Hermione’s hat: Narratology for testers” Marianne combined three of her passions: Testing, Harry Potter and literary studies. It was a perfect combination for me, and I happen to share her affection to those subjects, even if to a lesser extent (My focus during my studies was more on poetry and less on prose, and I don’t know my Harry Potter as deeply). Marianne spoke about how people tend to follow the first paradigm they adopted and ignore further information that might prove otherwise, which connected in my mind with Liz’s keynote about people tendency to seek, and pretend to find, order and patterns where there is none to be found. Another important observation we can borrow from narratology is the need to look again – our first read of the book is usually great to get a basic understanding of what’s going on the surface, but after we’ve gained this basic understanding, a second reading will expose new information that wasn’t as clear before, and that we can only now notice. With software it is very much the same – we learn a lot by doing, and I have yet to see a project that by the end of it people didn’t have a better way to do what they just did. Marianne also mentioned that many companies engage in “root cause analysis”, but are actually only scratching the surface. They understand what went wrong in this specific instance, but don’t actually take the extra step required to find the systematic fails that contributed to those failures. If you do those post mortems and keep a record of them, it might prove interesting to do a meta-analysis on several of them to try and decipher patterns.
Another thing I found in Marianne’s talk was the value of specialized language. She spent a few minutes providing the audience with a simplified explanation of the technical terms “text”, “fabula” and “story”1.
Afterwards, she used that distinction to point at a series of events where the story is different from the fabula, what effect it had, and why changing the perspective helped in creating the kind of “deception” that can only be seen and understood in retrospect. The fact that she had distinct names for the two phenomena was not only useful as a shorthand, but also helped keep the two related ideas separate in the minds of the listeners, to be added to their toolbelt the next time they read a story. So, if you ever wondered why so many people fuss over terms and meaning when it’s clear that everyone understands what you mean – that’s why. Words, and technical terms2 in particular, are ways to direct our thought process and raise our awareness of things. They also carry with them a plethora of meanings and associations. For instance, during the talk I was reminded of Wolfgang Iser’s gap-filling, which is part of reader-response theory, and that association immediately made it clear that there is an important place for the “reader” who interprets the text and for the way they react.
All in all – A great talk to end the conference with. The only thing I’m missing is one of Marianne’s fabulous sketch-notes.

End the conference did I say?
Well, almost. We still had to grab dinner. I went to my room to rest a bit (it was a packed day, so I needed a few minutes to unwind), and then joined a very nice group – Lisi, Thomas, Lena, Marianne, Lisa, Santiago and Andrea – who were sitting and just chatting. It was a very nice way to say goodbye. We sat for about three hours and then it was time to go to sleep; after all, I had a plane to catch at a ridiculous hour. I did manage to say goodbye to a whole lot of other people who were playing some board games.
And now (or rather, a few days ago, since I wrote most of this on the airplane leaving Orlando), the conference is over. I had a great time, and I have way too many people to thank for it to list them all here. Next time I’ll make sure to have some time after the conference.


1 I usually match “fabula” with “syuzhet” (which I’m more comfortable spelling “sujet”), but Marianne was conscious enough to spare the audience from more definitions that would confuse them. In short, the fabula is the chronological order of events as they “happened” in the imagined world of the text, while the sujet is the order in which events are presented to the reader. So “I fell after stepping on my shoelaces” and “I stepped on my shoelaces and fell” are the same fabula, but a different sujet. And yes, I had to go back to my class notes to verify that.
A text is an instance of a literary creation; it is the book one reads.
2 When I say “technical term” in this context, I mean any word that has a specific meaning within a profession which differs from its common understanding, or that is not commonly used outside of a specific jargon.



Friday, August 10, 2018

CAST, day 1


And what a packed day it was.
It all started with lean coffee facilitated by Matt Heusser, which was both enjoyable and insightful (the picture above shows the discussions we were having, and was taken by Lisa Crispin). My main takeaway from this session was the importance of being able to verbalize your skills to yourself, and to communicate them to others. Also, this was my first lean coffee where there was actual coffee.
Then came the opening keynote. Liz Keogh spoke about Cynefin and delivered a great talk. I did hear a similar version of it at ETC2017, but that did not matter very much. In fact, listening twice enabled me to better understand and process what she was speaking about. In short – developing software sits in the complex space, so probe a lot and make sure that your probes are safe to fail. Also, use BDD and avoid tools such as Cucumber (BDD is about the conversation, not about the feature files).
After the keynote I went to a workshop on domain testing led by Chris Kenst and Dwayne Green. It's always nice to refresh the fundamentals, and to learn a new name for them (I was familiar with equivalence classes and boundary value analysis, which are techniques within the space of domain testing). A small sketch of what those techniques look like in practice follows below.
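Since those techniques came up, here is a minimal sketch of my own (not something from the workshop) of how equivalence classes and boundary value analysis translate into concrete test inputs. The is_valid_age function and its 18–120 range are made-up examples, just to make the partitioning visible.

```python
# Domain testing illustration: partition an input domain into
# equivalence classes and pick values at and around each boundary.
# The "age" field and its valid range (18-120) are hypothetical.

def is_valid_age(age: int) -> bool:
    """The function under test: accepts ages in the range 18-120."""
    return 18 <= age <= 120

# Equivalence classes: below the valid range, inside it, above it.
# Boundary value analysis picks the values on each side of a boundary.
boundary_cases = {
    17: False,   # just below the lower boundary
    18: True,    # lower boundary
    19: True,    # just above the lower boundary
    119: True,   # just below the upper boundary
    120: True,   # upper boundary
    121: False,  # just above the upper boundary
}

if __name__ == "__main__":
    for age, expected in boundary_cases.items():
        actual = is_valid_age(age)
        status = "OK" if actual == expected else "FAIL"
        print(f"{status}: is_valid_age({age}) -> {actual} (expected {expected})")
```

The point is that once you name the classes, six deliberate values cover the whole domain better than dozens of arbitrary ones.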
During lunch I managed to talk a bit with some people, and then went to the lobby where I met Alex and we talked about organizing your desktop in a way that should (we guess) increase productivity. What I really liked was that we actually started mocking up the screen layout we would want to see. It was very cool to watch Alex tear some paper into pieces so that it would be easy to move them around. This sort of thing kind of makes me want to go out and figure out how to implement such a tool. The main challenge is that for such a solution to work, it must be ingrained into the OS in a seamless way, so that it will always be on top and manage the size of just about everything else. I wonder if Windows already offers such a thing.
The first talk I attended had a promising title about coaching and the GROW framework. It took me a while to realize that I wasn't connecting with the content and to move to another talk – "Don't take it personally" by Bailey Hanna. I got there just in time for the exercise. Not really knowing what I was supposed to do, I was given the instruction "be aggressive", and I do owe Polina another apology – I was very difficult.
After that, I went to Lisi's talk about her testing journey. So far, I've listened to two of Lisi's talks, and they have been very dangerous to my free time. Lisi has a way of sharing her experience while showing her passion for what she did, and a unique way of inspiring others to do the same. It was my favorite session of the day. Also, before having a chance to regret it, I agreed with Alex on pairing together, and we decided that by the end of August we would set up a time for a session.
My talk was up next, and I took my usual five minutes to stress out. The talk itself went OK, I think – by the end of it I felt as if I was pushing a bit hard to hold the list of ideas together as coherent a narrative as I could, but I wonder how many in the audience actually noticed. The open season was, as expected for the time and type of talk, awkward silence. My facilitator for the talk – the Friendly Richard Bradshaw – managed the amazing feat of wriggling some questions out of the audience, and had some interesting questions himself. After the talk I got some very kind feedback, which I greatly appreciate.

A surprise was set for the evening – after a short time to meet and mingle, we all (or at least up to a hundred of us) got on a bus and took off to the Kennedy Space Center. Rockets, space, astronauts, nice company (and even some food) – what more can one ask for?
We got back to the hotel and I joined a couple of quick rounds of a card game whose name I don't know, but which was nice to play. Tired, I returned to my room and started writing this post, which, as you can see, I did not manage to complete before the conference was over.
Still, a whole lot more was waiting for me on the second day, but that's for another post that I hope to get to soon – there's still a week of vacation ahead of me, and I intend to make the most of it.