Friday, July 17, 2020

Being right about the wrong things


So, following a link I encountered on Josh Grant's blog I came across a post named "against testing" where the author (someone named Ted, I think) makes some very strong claims about tests and why most people shouldn't write them. On the face of it, most of those arguments are sound and represent well-thought-out observations. It is true that most automated tests, especially unit tests, rarely find bugs, and that they are tightly coupled to the existing implementation in a way that means that whenever you come to refactor a module you'll find yourself needing to refactor the tests as well. It is also true that there are a lot of projects out there where testing something in isolation is simply not that easy to do (there's a reason why Michael Feathers defines legacy code as "code without tests"), and investigating a test failure requires putting a lot of time into understanding code that won't help you when you come to develop the next feature. Furthermore, having test code does introduce the risk of delivering it to production, where all sorts of nasty things might happen. No test code? This risk does not exist. 
All of those seem to follow a sound logic chain resulting in one conclusion - don't write tests. Not surprisingly, I disagree.
The easiest way to dismiss most of those claims is to respond with "most of those problems indicate that your tests are not good", and while I do hold that a well-written test should be clear to understand, at just about the right level of abstraction to minimize refactoring pain, and targeting the tested unit's commitments rather than its implementation, I also know that such tests are hard to write, and that most people writing test code are not doing such a pristine job most of the time. In fact, for my response I'm going to assume that people write mediocre to trivial tests, simply because that's the most likely scenario. Most people indeed don't learn to write proper tests, and don't practice it. They get "write tests" as a task they must do to complete their "real" work and thus do the bare minimum they must. 

From my perspective, the post is wrong right at the beginning, stating that "In order to be effective, a test needs to exist for some condition not handled by the code" - that is, a test is meant to "find bugs". For me, that's a secondary goal at best. I see tests as scaffolding - making the promises of a piece of code explicit, there to help people refactoring or using that piece of code. If someone is working in a TDDish manner (no need to be strict about it) they can use this scaffolding to figure out earlier what their code should look like - spotting when internal logic that made total sense while implementing is just too cumbersome to use, or when some extra dependencies are needed. It is also a nice way to put things I don't want to forget in a place where I'll be reminded of them. 
But, that's assuming TDD, and not enough people are using this method to justify writing tests, or to justify not deleting them once I'm done, and that's where I get to two of the most common tasks a developer faces: refactoring code and investigating bugs. Starting with the fun part - refactoring. When refactoring a piece of code, there's one single worry - did I break something? Testing alone does not answer this question, but it does help in reducing the uncertainty, especially in a language without a strict compiler. Imagine a simple python project, where there is a utility module that is being called extensively. I go and change the type of one of the parameters from string-duck to object-duck (let's imagine I'm assuming a .foo() method to be available). It is already the case in 99% of the project, but not necessarily in all of it. If I wasn't using proper type hinting (as is sadly way too common), the only way I'll find this is by running that specific piece of code with the faulty input. 100% line coverage increases my chances of finding it. Too far-fetched? OK. What about a piece of code that is not straightforward? One that has what the linters like to call "high complexity"? Just keeping those intricate conditions in mind is heavy lifting, so why not put them in a place where I'll get feedback if my refactor broke something?
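To make the scenario concrete, here's a minimal sketch (all names are invented, not taken from any real project) of how a plain test that merely covers the old calling convention surfaces that kind of break:

```python
# A hypothetical utility that used to accept a plain string ("string-duck")
# and was refactored to assume an object exposing .foo() ("object-duck").
def describe(item) -> str:
    return f"described: {item.foo()}"


class Part:
    """The object-duck most of the project now passes in."""
    def __init__(self, value: str):
        self.value = value

    def foo(self) -> str:
        return self.value.upper()


# A test that still exercises the old string-based caller is enough to flag
# the break immediately - without it, the AttributeError waits for production.
def test_describe_with_legacy_string_caller():
    assert describe("payload") == "described: PAYLOAD"  # fails now: str has no .foo()


def test_describe_with_object_duck():
    assert describe(Part("payload")) == "described: PAYLOAD"
```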
Those types of functions are also a nightmare to debug or fix, and here I want to share an experience I had. In my previous workplace we had a table that was aggregating purchases - if they matched on certain fields we called them equal and would merge them. Choosing what to display had a rather complex decision tree driven by our business rules. I got a task of fixing something minor in it (I don't recall exactly what, I think it was a wrong value in a single column or something like that). Frankly? The code was complicated - complicated enough that I wasn't sure I could find the right place. So I added a condition to an existing test. It wasn't a good test to begin with; in fact, when I first approached it, it couldn't fail because it asserted that "expected" was equal to "expected" (it was a bit difficult to see that, though). Once I added the expected result to the test I could just run it, fix the problematic scenario, and move on to the next one. The existing tests did remind me of a flow I had completely forgotten about (did I mention it was a very complicated decision tree?). 
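An illustration of that trap (a toy function of mine, not the original purchase-merging code): when the "expected" value is computed by the very code under test, the assertion can never fail; pinning the expectation down by hand is what turns it into a real check.

```python
def merge_title(primary: str, secondary: str) -> str:
    # Stand-in for the real "choose what to display" logic.
    return primary if primary else secondary


def test_merge_title_cannot_fail():
    expected = merge_title("Invoice", "Receipt")           # produced by the code under test
    assert merge_title("Invoice", "Receipt") == expected   # always passes, checks nothing


def test_merge_title_pinned_down():
    # Stating the expectations explicitly is what makes the test able to fail.
    assert merge_title("Invoice", "Receipt") == "Invoice"
    assert merge_title("", "Receipt") == "Receipt"
```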

Another useful way to use tests is as an investigative tool. In my current workplace we are using python (short advice - don't do it without a good reason). Moreover, we are using python 3.6. We do a lot of work with JSON messages, and as such it's nice to be able to deserialize a message into a proper object, as can be done with Jackson or Gson in Java. However, since python has "native" support for JSON, I didn't manage to find such a tool (not saying one doesn't exist), so in order to avoid string literals, we defined a new type of class that takes a dictionary and translates it into an object. Combined with type hints, we have an easy-to-use, auto-complete-friendly object (in python 3.7 they introduced a "data class" that might do what we need, but that's less relevant here). To do that we overrode some of the "magic" methods (__getattr__, for instance), which means we don't really know what we did here and what side effects there are. What we did know were our intended uses - we wanted to serialize and deserialize some objects, with nested objects of various types. So, after the first bug manifested, we added some tests - we found out that our solution could cause an endless call-loop, and that we don't really need to support deserializing a tuple, since a JSON string can only hold a simple value, a list or a dictionary (not something we thought about when we started implementing the serialization part, so we saved some time by not writing this useless piece of code). Whenever we were unsure about how our code would behave - we added a test about it. Also, since we didn't really understand what we were doing, we managed to fix one thing while breaking another several times. Each time, our existing tests showed us we had a problem. 
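A minimal, self-contained sketch of the idea (my illustration, not our production class, whose bugs aren't reproduced here): wrap a parsed JSON dictionary so that message.user.name works instead of message["user"]["name"], plus the kind of "investigative" tests we added whenever we weren't sure how the wrapper would behave.

```python
import json


class JsonObject:
    def __init__(self, data: dict):
        self._data = data

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails.
        try:
            value = self._data[name]
        except KeyError:
            raise AttributeError(name) from None
        # Nested dicts become nested JsonObjects; everything else passes through.
        return JsonObject(value) if isinstance(value, dict) else value


def test_nested_access():
    msg = JsonObject(json.loads('{"user": {"name": "dana"}, "count": 3}'))
    assert msg.user.name == "dana"
    assert msg.count == 3


def test_missing_field_raises_attribute_error():
    try:
        JsonObject({}).nope
        assert False, "expected AttributeError"
    except AttributeError:
        pass  # and not, say, a RecursionError from __getattr__ calling itself
```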

There is, however, one point on which I agree with the author - writing unit tests does change the way your code is written. It might add some abstraction (though that's not necessarily the case with the existing mocking tools) and it does push towards a specific type of design. In fact, when I wrote tests for a side project I'm working on, I broke a web controller into 5 different classes just because I noticed that I had to instantiate a lot of uninteresting dependencies for each test. I'm happy with that change, since I see it as something that showed me that my single class was actually doing 5 different things, albeit quite similar ones. As a result of this change, I can be more confident about the possible impact a specific change can have - it won't affect all 5 classes, but only one of them, and this class has a very specific task I know how to access and which flows it's involved in. Changing existing code this way does introduce risk, so everyone needs to decide whether the rewards are worth taking those risks. If all you expect from your tests is to find bugs or defend against regression - the reward is indeed small. I believe that if you consider the other benefits I mentioned - having an investigative tool that will work for others touching the code, being explicit about what a piece of code promises, and in the long run having smaller components with defined (supported) interactions between them - it starts to make much more sense. 
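A hypothetical before/after (names invented, and far smaller than the real controller) of the signal the tests were sending:

```python
# Before: every test of the export behaviour still has to construct (or mock)
# the mailer, the permission checker and the audit log it never uses.
class AccountController:
    def __init__(self, repository, mailer, permissions, audit_log, exporter):
        self.repository = repository
        self.mailer = mailer
        self.permissions = permissions
        self.audit_log = audit_log
        self.exporter = exporter


# After: one narrow class per responsibility - a test for exporting needs
# exactly one collaborator, and a change here can't ripple into the other four.
class AccountExportController:
    def __init__(self, exporter):
        self.exporter = exporter

    def export(self, account_id: str):
        return self.exporter.as_csv(account_id)
```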


So, to sum up what I wrote above in a short paragraph - the author is completely right in claiming that if tests are meant to find bugs and defend against regression, they don't do a very good job. But treating tests in such a way is like claiming that a hammer is not very effective because it does a poor job of driving screws into a wall. Tests can sometimes find bugs, and they can defend against some of the bugs introduced by refactoring, but they don't do these tasks very well. What tests do best is mostly about people. They communicate (and enforce) explicit commitments, they help investigate and remember tasks, and they save a ton of time wasted on stupid mistakes, leaving more time to deal with the real logic difficulties a codebase presents. I think that looking at these properties of having tests represents their value better, and it also makes it easier to write better tests. 

Saturday, July 4, 2020

A "testing" competition

 

So, last week I participated in a "testing" contest. You know, the kind of event where you're given an app you've never seen before and asked to "find bugs" for the next so-and-so hours. A lot has been written before me on why there's basically no connection between such events and proper software testing, so I won't bore you with another such piece of text. Instead, I want to write about the things I did learn by participating in this event. 

First, since every such competition is different, I want to lay out the opening conditions for this event: 
To be short and blunt, it was an epic fail from the organizers' side. In fact, I believe it was the level of amateurism involved in the event that drove me to write this post in the first place, just so that I could rant a bit. Furthermore, I'll make an exception and name and shame this event - the Israeli Software Testing Cup.  
Anyway, what bothered me? First thing - while the competition does have a site, it's quite difficult to find and it contains just about zero useful information. No rules of engagement, no mention of what we're allowed to do and what we're not - is it OK to look for security vulnerabilities in the app? Do they care about usability issues? Also, no info on the basis for scoring, nothing whatsoever. On the day of the event itself we heard for the first time "and make sure to send your report to this address". A report? What should be in it? Who's the (real or imagined) client who'll be reading it? Should it be a business report? A detailed technical report? A worded evaluation of the product with impressions and general advice? Even asking that directly did not provide an answer, since "it's part of the competition and you should know for yourself what sort of report to send". Even when the results were announced, the teams were ranked, but on what categories were they scored? No mention of it. They might have been shuffled at random as far as we know. 
Next we go to the application under test - which was a nice idea, but the app simply didn't work. It might have been a shoddy server spun up for the competition, or the product itself might have been in a pre-alpha stage, but the fact is that many teams were having trouble just getting past the login/registration screen. In short - this should have been better.

Despite all of that, I managed to learn a few things: 
First of all, events such as this are a way to teach oneself new things and catch up with changes to familiar fields that are now out of focus. As I was preparing for the competition I tried to capture my phone's traffic with a network proxy. Well, it seems that Android users can't install their own certificates without having root access on devices running Android 7 or higher. You can still do that if you have a rooted device, but last time I checked, those mechanisms were not yet in place (I did have an older phone and it was a few years back), so now I know there's another thing to take care of whenever I approach mobile testing in the future.
The second thing I learned was about the importance of experience - that which I had and that which I did not. I could leverage my past experience for faster orientation in the product, and for knowing what I wanted to do even if I didn't know how to do it. One example is asking "where can I read the logs?" - this was a good chance for knowledge transfer, since my partner did know how to read application logs using logcat, so he could catch me up on that. The flip side of that is all the things I didn't know. Perhaps with enough time I would have examined things such as app permissions or power consumption, but those didn't even pass through my mind while at the competition, since I lacked practice and didn't know the tooling around them, so the time cost was just too big to even consider. 
Next thing - prep, prep, prep. When interacting with an application, we are chasing electrical currents at various levels of abstraction - bits, text, various communication protocols, and effects on screen. Whenever we want to inspect a piece of software, it is useful to peel off one layer of abstraction just to see how things are under the hood - move from the nice GUI to the HTTP (or network) traffic, check out memory access, and so on. But unless you work routinely on a similar piece of technology, you probably don't have the necessary tools installed, and you might not even know what those tools are. A few hours can help you significantly reduce this gap. I spent a few hours getting my environment up - downloaded some emulators, opened up ADB, and while doing that I learned how to set my phone to developer mode (it's hidden in a very annoying way. I can understand why it was done, but really - seven taps on an unrelated field?).
Next is a reminder that no plan survives contact with reality. We had a nice first 30 minutes planned - orientation, some smoke checks and so on, but once we encountered the broken application, we scrapped the plan and winged it altogether. Perhaps with some practice we could learn to work with a plan and adjust it on the fly, but when working in a time-boxed situation, I learned it's really important to keep track of where you are and what's important. 
The last thing I was reminded of is the importance of modeling, and how unavoidable it is. As the competition went on I noticed myself creating multiple models - what is the business goal of the application (so that I'll know which severity to assign issues), how things might be implemented (so that I'll know if a problem I saw is something new or connected to other things I saw). Everything we do is based on a model, and once you start seeing them around, you can practice creating them - faster, more tailored to your needs, focused one way or the other. 

So, this is what I've learned from this competition. Can I take something back to my professional life? Not directly, I believe. But, since everything I experience can help me gain new perspective or knowledge on what we do when testing, I can draw some lessons out of it as well. There are some similarities between this competition and a "bug bash", so I can take the mistakes I've seen made here and make sure to prepare for them if I get involved in organising such an event, and I also gained first-hand knowledge on why we might want to do such a costly thing (mainly, I believe it would be helpful in directing the spotlight to some of the problems we have in our product and help people outside of the testing team experience them, so that we'll make fewer of those errors in the future). 
One thing that surprised me when I noticed it was the difference between this circus show and real testing work, and through this difference I can better define what I'm doing and what's important. The main difference was that in this situation there's a complete disconnect between my actions (wandering around the application, reporting various things) and the rest of the company - there's no feedback coming from the product team: no access to the product manager who can say "this is very important to me, is there anything else like that?" or "that's very cool from a technical perspective, but it has little business impact", no access to the developers in order to find out what pains them, no view of the development process, and nothing can be done to actually improve things. All in all, it's a bug-finding charade. It was a great reminder that unlike "coding", testing is an activity best defined by the context in which it exists, rather than as a distinct set of activities. 

That being said, I do recommend participating in such an event if you come across one (don't go to great lengths for it, though) - not because of any educational or professional value it brings, but rather because it can be fun, especially if the one you happen to find is better organised than the one I participated in. 

A "testing" competition



Last Friday I participated in a "testing" competition. You know, the kind of thing where you're handed a random application with very little context and told "go find bugs". A lot of people have already written about why the connection between such events and software testing is coincidental at best, so I won't bore you with that part. Instead, I want to talk about the things I learned while participating in the event. 
Let's start with the opening conditions, to make it easier to follow - from my perspective, the organization of the competition had some very significant failures, significant enough for me to break with my habit and mention the event by name: the Israeli Software Testing Cup, hosted as it is every year inside John Bryce's DevGeekWeek. Why do I bother mentioning them? So they'll be ashamed of themselves. So, there was an application to test, and there was a system for reporting defects. Fine, they took care of that. Anything else - not really. 
What am I talking about? First of all, I couldn't find the competition rules anywhere. No basic guidance on what is more interesting and what is less, what is allowed and what is forbidden (for instance, am I allowed to dig through the application's source code to look for security issues? Unclear). There was significant vagueness around the criteria by which the teams are measured: number of defects found, relevance of the defects, quality of the reports, complexity of the test scenarios, the ability to count to seven in Mandarin - nothing. Second, at the start of the competition we were told we need to send a "summary report". Why? Who are its clients? What content is it expected to contain? Does "I had fun" count as a summary report? Do they want to see a table with the number of defects reported? Just a printout? When I tried to ask, I couldn't get an answer beyond "that's part of what you're scored on - your understanding of what a summary report is". In short, all the competition rules are classified and secret. Moreover - even after the results were announced, there was no detail on what the teams were scored on. Honestly, it looked as if team names were pulled out at random. 
Second, problems with the application under test. First of all, it turns out that one of the most important parameters for the application is location, and the fact that, given the COVID restrictions, we each worked from home limited us. Did anyone think to mention that? Second, the application reached the participants in a shaky state - whether because of a server that couldn't handle the load, an application at too early a stage of development, or bad luck, every third button press got the application stuck with nothing more to be done with it. In such a state, every defect observed is uninteresting, and a lot of teams' time was lost because they couldn't even get past the login screen. Not serious. 

But still, I learned a few things. 
First of all, events of this kind are one way to close technology gaps - while preparing for the competition I played a bit with setting up a working environment for mobile devices, and learned a variety of things I didn't know before or that have changed since the last time I looked. First of all, it turns out that you can't install certificates on Android devices from version 7 onwards - that is, unless you have a rooted device, and even then there are some hoops to jump through.
The second thing is the ability to leverage existing skills in order to function in unfamiliar environments - to this day I haven't done mobile testing in any serious way (I did pick up a few small skills along the way), but despite that, the experience I've gathered so far proved itself relevant - I knew what I wanted to look for, and I could look for the way to do it. The other side of the coin is the capability gap compared to someone who works with the tested technology daily: there's a variety of things that can be done relatively easily if you know how, or that you even need to do at all. For example - given a bit more time I might have looked at things like battery consumption or the application's permissions, but that wasn't on my mind during the short time we played with the application.
The third thing I learned is the importance of preparing for such an event in advance. In our profession we chase electrical signals running between different systems, through plenty of abstraction layers. When testing a product you need to peel off a layer or two in order to find out what's going on under the hood. There's no time during the competition to install all the tools, and unless you work daily with a similar technology, you don't necessarily have a clue which tools could help you. That's a gap that can be significantly reduced with a few hours of preparation. In our case - emulators, hooking up ADB, and switching the phone to developer mode (it's ridiculous how complicated this has become in the latest Android versions. I understand the reason, but come on).
Planning is nice, but it goes out the window very quickly. Before the session we made a short time plan to maximize effectiveness. First step - a tour of the application and mapping its functionality. What happened? The application didn't really work. We got a bit lost and didn't remember our own plan. I assume this improves the more one practices such a situation. 
Last, and most important - models. Even in the face of the broken application, models for a wide variety of things we dealt with kept running through our heads - a model of "what is the application's purpose" in order to determine what's important, a model of "where things could be broken" in order to focus our search, a model of "what's behind the failure we're seeing" in order to decide whether to keep digging in the same place for more defects or rather look elsewhere, and whether the failure we're seeing right now is related to an earlier failure we saw. 

So these are the things I learned about competitions of this kind. Is there anything to project from here onto my professional life? Not directly. Participating in this competition certainly teaches me a thing or two about possible mistakes if we ever decide to organise a bug hunt on our own products, and about the reasons we might want to do such a thing (mainly, I think one benefit it could bring is changing the mindset of those outside the testing team and directing the spotlight to the different kinds of problems we have, so that fewer of them are created). Besides that, I think it sharpened in my mind the difference between software testing and this competition - the main thing separating the farce that takes place in competitions of this kind from serious work is the constant feedback from the people building the product: there's no access to the product manager who would say "this is important, look for more like it" or "this is cool on a technical level, but I don't care", no access to the various programmers in order to understand what's hard and what's easy for them, no attention to the development process and to improving it. What we have here is a circus of "I found defects in the product". Bravo.
At the end of the day, it was an excellent reminder that more than anything else, software testing is defined by the context in which it takes place, and not by one set of activities or another. 

Despite all of that, I'm glad I participated in this competition - not because it had any professional value, but because it was a nice way to spend a few hours. If you happen to come across such a competition (hopefully one that is organised better than the farce I participated in), I warmly recommend it.

Monday, February 17, 2020

Book review - The Surveillance state


So, I'm cheating a bit. This is not a proper book, but rather a series of lectures wrapped in an audio-book (the course material just happens to be a 198-page PDF, so one might use that excuse too to call it a book). At any rate, The Surveillance State: Big Data, Freedom and You is a rather interesting listen. It's an introductory course to software security and privacy, but unlike many others, it approaches the topic with the goal of covering the legal concerns, as well as helping the audience understand why the subject is a complex one and how every choice made by a legislator is a choice of a specific balance between civil rights (which might conflict with one another), safety & security, and technical soundness.
The "book" is comprised of 24 lectures, so I'll skip my usual chapter-by-chapter summary, and just go over some of the points that got my attention.
The first one is the claim that there's a trade-off between the government's ability to protect its citizens and its ability to track everyone, everywhere. This dichotomy sounds simple at first - of course we want our government to know where the terrorists are planning to plant a massive bomb - but even without going to the length that Bruce Schneier does and claiming that this is a false trade-off, this book does raise the problems with this approach: would we be as comfortable allowing complete surveillance in order to catch a drug deal? Or tax evasion? Currently, we are willing to allow our government to invade our privacy only for certain reasons, and the fact that much of reality is moving to cyberspace is changing what privacy means; separating "legitimate" invasions (such as espionage, counter-terrorism and general anti-crime operations) from liberty-limiting surveillance is becoming more difficult (there isn't a separate communication network for terrorists - they are using the internet, same as everyone else).

Another point I had not considered before was the necessity of some measure of transparency in order to have a meaningful policy discussion, the most obvious reason being that without it, the policy does not matter. However, transparency should also be limited in this game of balance - some actions become ineffective when they are known to the target (wiretapping, for instance), and in other cases, just exposing that a nation has a certain capability is enough to thwart it (today it is common practice to leave cell phones out of top-secret bases, since the ability to track them is well known). A nice way to compromise on that sort of problem is to appoint an overseeing body of some sort, but one thing I gleaned between the lines was the importance of protecting insiders who expose malpractice, since this oversight mechanism is, in itself, something that needs to be checked.

I mentioned the word "privacy" once or twice already in this post, and this leads me to a question - what is privacy? In the 8th chapter (or lecture), the professor, Dr. Paul Rosenzweig, mentions that there's a need for a new concept of privacy. While he's not using those words himself, he's saying that the big data revolution has killed privacy. Since there is truth to that, he suggests a different way of looking at privacy - since preventing collection and retention of data is not an option when it is the same data that also enables services we actually want, redefining privacy is what we can do to adjust to this new world we live in. He uses a definition I'm uneasy about accepting, and bases it on a rationale I simply reject. He suggests that the limitations we set to protect this new type of privacy would be there to prevent use of data in a way that would cause some "actual harm to an individual". I reject this, because showing "actual harm" is difficult, and it's easy to brush aside non-bodily harm as "negligible". The chilling effect caused by the eye in the sky? "Oh, that's not actual harm, is it? Everyone else agrees... ".
Privacy, for Prof. Rosenzweig, is "a desire for the independence of personal activity, a form of autonomy". This sort of "autonomy" can then be protected in many ways - secrecy (for instance, no one knows who you voted for), direct protection of action (you are allowed to practice any religion you'd like) and anonymity - where our actions are not hidden, but they are, generally, stripped of consequence because the action is not linked to us in the relevant circles where this sort of link would cause us harm. For instance - a teenager buying condoms would very much like to hide it from their family and classmates, but might not mind the clerk or the other strangers in line having the same information. Or, in his words - "Even though one's conduct is examined routinely and regularly, nothing adverse should happen to us without good cause".
Personally, I prefer seeing privacy in simpler terms - privacy for me means that I can limit the knowledge about myself, my opinions and my activities, and to some extent control who it is visible to. I do not expect, nor have, complete privacy, and many bits of "my" data have different levels of privacy, but I believe we need to create a vocabulary that would help us identify the level of privacy I have for each such detail, and debate what the appropriate level is. For me, privacy is meant first and foremost to allow a person to save face. Avoiding additional harm is also desirable, but can be achieved by rules prohibiting some behaviors. In that sense, I think I agree more with the sentiment behind the GDPR principles, which is also why I really like the "right to be forgotten".


There's a lot more going on in this course, and the more tech-oriented people might notice some inaccuracies or broad generalizations when the professor is explaining technology, but that's OK - it's a course about policy making.
The last section is a call for action - to participate in the public debate around privacy and help define boundaries and set up the grounding for the legislation to come. Having started with the claim that technological advancement will always be ahead of laws and policies, I completely agree - public discussion is probably the one way we can close some of that gap, and even set the direction technology will be moving forward.


Sunday, February 16, 2020

ETC 2020 - conference days



It's official now. ETC 2020 is over, and there's no more stretching it for me. It's been a wonderful conference, where I got to meet new people and some I've met before, as well as learn about new ideas, skills and tools. All in all, those have been a packed couple of days, in the best way possible.
The conference started with a talk by Mirjam Bäuerlein on TDD, and dogs. The talk, as it is, was presented in a skillful manner and was a good overview of what TDD is, covering the "why" on top of the "how". Personally, I didn't connect with the dog theme and didn't really see a new insight on the topic emerging from this unique perspective. I think that as a keynote I expected a bit more out of this talk: I like keynotes to be inspiring or perspective-changing, which wasn't the case for this one. Judged as a track talk, however, it was a very good talk on a subject that is quite important, even if it might be more familiar to regular conference attendees (and frankly, roughly half of the people at the conference were there for their first time).
After a short coffee break it was time for a regular track talk - I started with Jeremias Rößler's talk, Test Automation without Assertions. The talk, as far as I could tell, was a presentation of a specific tool called Recheck-Web which he has created. As far as I could understand, it's a combination of a stability enhancer and an approval tool integrated into Selenium. It was a decent presentation, but I could see that there's no real value for me in it, as I'm already familiar with the concept of approval testing (and visual validation), and I don't really like talks focused around code, since they are difficult to execute well. Also, I have a request for all future speakers discussing coding-related issues - there are a lot of bad examples in the coding space, especially around test code. Please don't contribute to this pool without clearly marking "here's some bad code". If you want to show why your solution is a good thing, compare it to the best code possible without it.
So I left and went to check out another talk. As I was planning to attend the Pact workshop, I thought I might go and check out the talk on the same topic. Have I mentioned that coding talks are hard to pull off? Well, the part I entered was where the speaker stood with an open IDE and mumbled "here's how you can do this" several times. It might have been a great talk for those who were there from the start, but I stayed 5 minutes to see if the talk would get to more interesting things than code, and then left this one as well. The next event was the speed meet. Those who read my ETC posts from previous years know that it can get a bit too crowded for my liking, yet this year felt slightly less so - I was able to both talk with some people and not end up so exhausted that I needed to take some time off in a corner to recharge. I even managed to remember a name or two by the end of it. After the speed meet there was time for lunch, and I spoke a bit with Kaye and heard about their story of moving to the Netherlands.
Shortly after lunch it was time for the workshops. Due to a last-minute change in the rooms I found myself in Fiona Charles' workshop titled "Boost Your Leadership Capability with Heuristics". It wasn't what I planned to go to initially, but despite having about five minutes before the actual start of the workshops, I decided to stay and learn. The workshop itself was meant to break the vague concept of leadership down into more useful components and to notice that most leadership qualities and practices are heuristics, which makes them fallible and means we should notice when it is suitable to use each of them and when it isn't. For instance, providing feedback is generally seen as a good thing, but it is not suitable for cases where one does not have the credibility to provide feedback, or when the other side does not have the attention span to process it. Sadly, the time constraints were a bit too harsh on this workshop - we managed to complete a single exercise, and we were missing a bit of direction and maybe a summary from Fiona that would help participants notice what it was we did and how to carry it forward. I believe my explanation above is where the workshop was aimed, but I might be missing it completely.
Next on the menu - Hilary Weaver-Robb's talk on static analysis. It was a well-rounded introduction to the topic, and really left me confident about starting such a thing back at home (next task - find some time to do that). In addition, Hilary broke down the different types of outcomes we can expect to get from a static analysis tool - probable bugs, security vulnerabilities, code smells and even some performance enhancements. Each of those categories has a slightly different urgency to it, but the general thing to do about each of them is "understand, prioritize, fix". In most cases, one can skip straight to "fix", since the tools point quite clearly to a well-defined issue. In some cases they expose problems that are more complex and that one might not want to fix right away. We also mentioned the difference between linters that surface issues as one is coding, and analyzers running on the entire code base. All in all, I liked this talk very much.
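A tiny example of my own (not from the talk) of the "probable bug" category that most Python analyzers flag out of the box - a mutable default argument - and the usual fix they point towards:

```python
def add_tag(tag, tags=[]):
    # Flagged by most linters/analyzers: the default list is created once and
    # shared between calls, so tags silently accumulate across invocations.
    tags.append(tag)
    return tags


def add_tag_fixed(tag, tags=None):
    # The conventional fix: create a fresh list per call.
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```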
Finally, it was time for lean coffee and the closing keynote. Lean coffee was, as usual, very interesting. It was a bit odd that I was the only one who suggested more than one topic, but fortunately the topics presented by others were so good that only one of my topics actually made it to the discussion. Co-facilitated by Mirjam & Gem, everything went smoothly enough, with the one usual caveat of not having enough time for this activity.
The closing keynote for the day was given by Maaike Brinkhof and was confusingly titled "Using DevOps to Grow". Why confusing? Because in fact, this talk is not very much about DevOps, which exists in the background; it was actually about growing professionally, and learning to expand beyond the narrow definitions of one's role. Sure, DevOps helps, as there are many collaboration opportunities, and as the role of a dedicated tester becomes even narrower, other needs, such as good monitoring, simply present themselves - but one does not need to wait for DevOps to achieve such progress. At the end, it all comes down to working together as a team and focusing on value rather than on roles. This message, while it's not the first time I've heard it, is extremely important, and is new enough to actually change the way people might think. As keynotes go - it was definitely a good one, and it was important to hear it during the conference, to encourage people to step outside of their familiar comfort zone.
End of day one, more or less - there was still some socializing with drinks and refreshments happening for a couple more hours, and then I went out for dinner with a bunch of cool people as well.
I managed to get to my room by the early hour of 23:30, and even thought I might end up going to sleep early. We'll ignore my optimism for a moment and skip ahead directly to the next day, starting with the opening keynote of the day, which was an overview of application security. More specifically, Patricia Aas presented her take on how DevOps culture changes the way application security is taken care of, with 6 ground rules to actually make it work: live off the land (use the tools already in place, don't add your own tools and ask the developers to work with them), have Dev build it (because you don't have time to do that yourself, and also they will be more committed to it if it's theirs), trunk-based development (this one is just a tip taken out of "Accelerate"; I don't think it has anything to do with security, but the gist of it is to have small chunks of code to review so that reviewing them for security is feasible), use existing crisis processes (and train for them, since people revert to their instincts when a crisis hits), automate as much as possible (and I would add - as much as sensible) and treat your infrastructure as code (since, you know, it is a part of your code, and needs to go through the same system of checks and deployment process). All in all, it was interesting to get the perspective of a security person on a development process. One particular point I connected to was her origin story - starting as a developer at a company that was constantly hit by attacks, everyone in that team learned about application security, and from there she continued down the security path. Having worked on a regulated product for ~7 years, it felt very natural to me.

After the keynote it was time for the workshops again. Having the workshops run twice is a really nice thing, as one can gather impressions from the attendees of the previous day, and the inevitable dilemma of "out of these 5 amazing activities I really want to attend 3, and they're all at the same time" is softened - this way we can choose 2 of those sessions. I went to the workshop on Pact given by Bernardo Guerreiro. To be frank, it wasn't so much a workshop as a live demo of the tool. Despite it being a difficult feat, Bernardo managed to produce a good code-intensive talk. Moreover, given the limited timeframe of the workshop, I believe it was the best possible choice. Bernardo walked us through an overview of the entire process of deploying Pact in your pipelines, so that we could get a good grip on what capabilities the tool has and what problems might occur as a result of mistakes made at the beginning, so that we can learn from them. One thing that would have made the workshop perfect for me would have been a sign of "what are we aiming to achieve", and perhaps a handout of the different phases in the journey to help people remember the different insights, as they all played out very naturally during the workshop. I stayed a bit after the workshop to chat with Bernardo and we arrived a bit late to lunch, which for me is an indication that something is going right at this conference.
After lunch I attended Gem Hill's talk about "Value and Visibility: How to figure out both when you don't do any hands-on testing". Short version: as you move to a more senior tester role, your job looks a lot different than it did before, and you might feel as if you're not doing anything and wonder where your time is slipping away. With the relevant differences, that's a pretty good description of feelings I had during my last couple of years in my previous workplace, and then again once I joined my current place and found myself in a completely different setting - working outside of multiple teams instead of embedded in one. Gem shared some strategies that worked for her - noting on paper what she has achieved instead of only what she planned is one that I remember. Another thing was just the realization that the work looks different.
The final track talk I attended was Crystal Onyeari Mbanefo's talk, "Exploring different ways of Giving and Receiving feedback". The topic itself is quite important, but on a more personal level, I did not connect with the way it was presented. It felt a bit too dry and rule-driven, with no real story to hold everything together, and the division between feedback to a peer, to a manager (or someone more senior) and to someone down the food chain wasn't very useful to me. Probably, what I missed the most was a sense of purpose. We ditched a goal-oriented definition of feedback and used instead one with no obvious purpose (it can be seen in this slide). For me, feedback with no intended goal is less valuable - I give feedback because I want to change a specific situation, or because I want to help someone learn, or because I want to encourage them for something they did. If I have no goal in mind, I can't really know if my feedback was effective, and it is not very different from any sort of observation or rant.
From that talk I went on to the open space, where I skipped raising a discussion around "quality is a toxic term", which, quite frankly, is more suitable for a lightning talk. Instead I got to attend two awesome activities. The first one was a sketchnoting session by Marianne, who has sketchnoted the keynotes for this conference as well. Now I have a strategy for creating my own sketchnotes.
The second discussion I attended was a game of value. The idea of the game is to do some "work" (passing on some poker chips) and get "paid" by the customer. The goal of this game is to practice learning about customer value while still producing something. Short version - we failed miserably. I still managed to learn quite a bit from it - the first thing is that without getting some feedback from the customer we waste effort (someone paying us, by the way, is one sort of feedback). Another thing I learned was that we don't know the parameters we can tweak, nor the variables that have an impact on the value - is it time of delivery? Predictability? New functionality? A snazzy UI? Can the value of those parameters change over time? Maaret wrote about it in more detail right here. The final slot in the open space was one I added to the board, with a call for help - we have at work a lot of changes that need doing, and I've never actively pursued a culture change in an organisation, let alone done that in a bottom-up fashion. So I asked for help, to see if anyone had some ideas about how to plan for such a thing and how to track our progress. It was an interesting discussion, and I got some tips on things I might do, but with the short time we had, and my deficient facilitation skills, it wasn't wrapped up into something whole that I can take and say "now I know where I'm headed". I did gain some insights that I still need to process.
Open space was done, time for the closing keynote: "The one with the compiler always wins" by Ulrika Malmgren. I still think it could be a title for a talk at a python conference (and for all you pythonistas out there, I know python is technically compiled, just not in any way that is helpful for me), but in fact it was a reminder of how, at the end of all ends, the person doing the coding has a lot of power in their hands - they may deviate from the pre-agreed design, add small features they like in between their tasks, and raise the price of tasks they dislike. This power means there are consequences to how we act (or, if you are not part of building your product, to how your devs work). Inspired by Marianne's mini-workshop, I made a small sketchnote. It will take a lot more practice hours before it can be compared to a proper one, and I'm unsure if the experience is something I enjoyed enough to repeat. Note taking, in any format, tends to distract me from listening (now, combine that with poor memory, and let the fun begin), but it was interesting to see what I could do with a small set of tricks and a piece of paper.
And that's it for the conference - lights out, people going home, and all of that. I stayed a bit to see if I could help tidy up after us, and after dropping several things at Marianne's car, I went to join a lot of people for dinner (I'll skip names, but you can see some of us here). A really nice surprise was when a small delegation from DDDEU came to say hi from the other side of town, so I got to say hi to Lisi Hocke as well. After dinner was done, we moved the conversation to the hotel's lobby and just talked about a variety of things (from concrete feedback on a talk, to différance and speech acts), and at some point there were only 4 of us left - Jokin, Markus, Maaret and myself. As it usually happens in such conversations, I learned a lot. For instance, I learned that creating a safe space for attendees also involves caring for the content and presentation of the talks, and we had an interesting discussion around connecting people online (I'm still not a fan, but I can see now how it can work for some people) and about CV filtering (we all agreed that it can cause a lot of missing out on great people; we differed on whether we are willing to pay the price of using a more accurate but more time-costly predictor). Many of my insights from the game in the open space came from this conversation, where I learned about the various factors we should have considered, and about the options we had hiding in plain sight. Suddenly, it was 2:30 AM, so sleep time it is.

I still managed to extend the conference feeling for one extra day by meeting people for breakfast and then walking around Amsterdam a bit, both alone and with Mira, and by the evening we had dinner and were joined by Julia and her family - but that's ETC 2020 for me. It was a great mix of learning, meeting people and fun.

Thursday, February 6, 2020

ETC 2020 - day 0


So, after landing yesterday in Amsterdam, I used most of the day before ETC starts to tour some of the fishing villages around. That, at least, was the plan. At the hotel I was told that there's a hop-on hop-off bus I could take from central station, and so I went there, looked for the relevant booth and found out that this is true only during summer. Ah, well, I'll take a regular bus; the helpful man at the counter told me I could find tickets at the EBS office just on the other side of the central station. So I went looking for it. About an hour later, I realized I had no idea where that was and went back to the info booth to ask for more specific directions (as no one was familiar with the name I was given). It appears that the EBS office is inside the train office, and so I managed to purchase a bus ticket - which in turn led to me looking for the bus. It took me a while to figure out that they were hiding the buses on the 2nd floor, up the stairs. And finally, I was out to Volendam (which, as one can guess from the massive wooden stakes in the picture, is suffering from a grave problem of giant vampires). The village itself is a nice place to walk about, and the scenery is lovely and very calming. I took a ferry to Marken, and from there headed back to Amsterdam.
Then, after a quick shower (and time to recharge my phone's battery, which was depleting quite rapidly), it was down to the lobby and out to the pre-conference meetup. I met Marianne on my way down, and we met more people in the lobby (Joep, Elizabeth and Marit), and most of us set out to the meetup venue at one of the ING buildings. At the meetup itself I met a lot of people I've met before, and some new people as well, but listing their names would be testing my memory a bit too much, so I'll just say that I got to meet Gem Hill for the first time and talk again shortly with Thomas, that there was a nice talk given by Lisa Crispin and Janet Gregory, that Maaret facilitated a very interesting discussion, and that I heard some horror stories from Fiona Charles with regards to dysfunctional workplaces, as well as an interesting tip for the daily stand-up: come prepared. I wonder whether my team will buy into this one for an experiment, as we are having trouble just keeping the stand-ups on time (Hi team! How are things back at home?), but it feels like something worth trying.

All in all, I had a really nice day, and I can't wait for the conference to start tomorrow.

Tuesday, January 7, 2020

Why don't I like python?



Some of you might have noticed that over the past 9 months I've been tweeting a bit under the hashtag #PythonSucks. As part of my pains with this language I asked the twitterverse to help me find sources that can confirm (or debunk) the claim I've been hearing a lot about python contributing to developers' productivity (for an example of this boast, check this almost 5-year-old podcast episode, around minute 11:30). During the conversation that sprang up from this question, Maaret mentioned it would be nice to see what I am missing when working with python.

As I started to answer, I figured out I had a bit more than a tweet's worth to say about it - and so, this post was born. Some of the things I dislike about python are probably due to its nature as a dynamically typed language (I don't have a lot of experience with other languages that share this property, so I'm comparing it mainly against Java and C#, which are the closest statically typed languages, unlike C++ or C which are a bit more low-level).

So, what is it I don't like about python?

  • It's chaos prone.
    While it's definitely possible to write bad code in just about any language, my feeling is that in python writing well-structured code is an extra chore. There are "type hints" that allow me to state "this variable should be of this type", but they are optional, so we need to remind people to use them. It is possible to put multiple classes inside a "module", so we have to take care to ensure that the files don't grow out of proportion. The lack of a "private" concept pushes people to abuse python features (check "name mangling"), and instead a coding convention was invented to indicate "this should be private" (a few of these complaints are condensed into a small sketch after this list). I also want to smear the variable scoping, but I'm not sure I understand it well enough to do that.
    Under the mantle of "consenting adults" the language allows too much freedom and entrusts the programmer with the quite heavy task of keeping everything tidy, with no purpose other than tidiness. Another example is the stupid decision in holy PEP8 (the python coding style) with regards to string quotes (single or double quotes?): "Pick a rule and stick to it." Really? 
  • Tooling support sucks.
    Don't get me wrong here. I am rooting for my favorite python IDE (I use PyCharm), which saves me a lot of effort and makes everything less painful. Still, even with my limited use of the language, I found some stuff I was missing compared with my favorite Java IDE by the same vendor (IntelliJ) - since everything is dynamically typed, auto-complete is weaker (type hinting helps, where it exists). The same property is probably why I spent several hours looking for a "generate equals and hashcode" feature that is built into IntelliJ but missing from PyCharm - it is simply harder to do when there isn't a defined set of fields. Another thing that works less well is finding usages of a function, and I'm sure there's more. The tool vendors just have less metadata to work with.
  • It's difficult to read.
    No, I'm not speaking only about the lack of curly brackets and the reliance on indentation (there's a reason why whitespace is ignored in most other languages - it's invisible!). I'm speaking about what seems to me like favoring writing code over reading it. It starts with the odd ternary operator ("a = c if bool else d" is written in most other languages I could find as "a = bool ? c : d", where we first inform the reader that this is a question), goes on with convenient-to-use but meaningless constructs such as "for/while: ... else:" (what does it mean to have an "else" clause on a loop?), and continues with removing the most useful metadata - the types being sent and returned. Sure, one can add type hints, or put this information in the docstring, but as we all know - documentation gets out of sync with the code very fast. So sure - I might have fewer lines to read per function, but I'll be forced to hop into all of the functions I'm calling just to see what I am getting from them (a couple of these readability quirks appear in the sketch after this list). Personally, I also don't like the convention of using fewer parentheses (my IDE yells at me if I write "if (a>b):" and would rather have me write "if a>b").
  • Libraries are difficult to use.
    Sure, another of the pythonistas' boasts I've encountered is that python libraries have great documentation. While this might be true, I prefer not needing to read the documentation except for finding out about corner cases or difficult-to-explain stuff. It's those missing types again. The only way to know that the parameter "X" is supposed to be duck-typed as a string is to read something outside of the code (I have not encountered a library that used type hinting; I assume there are some). There's also a very popular way of sending parameters as "*args, **kwargs", which means that just to find the names of the possible parameters I'll have to step outside of the code.  
  • Importing a file executes it.
    I don't think I need to add anything here. 
  • Dependency management?
    This one is probably just something I need to learn, but I haven't yet found a tool to help me with dependency management (such as Maven or Gradle in Java).
  • It's a special bloody snowflake.
    It might just be my feeling, but sometimes I can just stare at the code and wonder what the designers of a specific language quirk were thinking (oh, who am I kidding, I'm always blaming that Guido chap). We've discussed the ternary operator above; there's the odd choice of not including "switch/case" support in the language, having "join" as a method of the string instead of a method of the collection, and a plethora of tiny things I encounter but cannot recall at the moment. It seems that python just wants to be different from other languages just because, and I prefer things that stick to convention unless there's a reason not to. 
  • Say "pythonic" one more time...
    Seriously, are we professional coders or part of a religious cult? While this can be used to improve code aesthetics, I've seen this argument used to justify ugly code ("It's more 'pythonic' that way").  
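To ground a few of the complaints above, here is a small runnable sketch of my own (toy examples, not code from any of the projects or talks mentioned):

```python
# The ternary states the value before the question it depends on:
def pick(flag: bool, c: str, d: str) -> str:
    return c if flag else d            # vs. the more common  flag ? c : d

# "else" on a loop runs only when the loop finished without a break -
# legal, but the keyword tells the reader nothing about that:
def find_first_even(numbers):
    for n in numbers:
        if n % 2 == 0:
            break
    else:
        return None                    # reached only if no item triggered `break`
    return n

# join() hangs off the separator string rather than off the collection:
csv_line = ",".join(["a", "b", "c"])

# Type hints are optional, and "privacy" is a naming convention plus name
# mangling rather than a language keyword:
class Account:
    def __init__(self, owner):         # nothing forces a hint on `owner`
        self.owner = owner
        self._balance = 0              # "please treat this as private"
        self.__token = "s3cr3t"        # mangled to _Account__token, not hidden
```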
Those are the pain points that are most visible to me at the moment, and from the technical point of view, I don't think adding more would help. 
There is, however, one other property I really don't like - it's too easy to learn. 
Normally, you could think that being easy to learn is a good thing, but what actually happens with python is that the learning curve is longer. Yes, it is less steep, but in Java or C#, once you are able to write a small-scale exercise you have already faced some OOP principles (in fact, since every variable has an explicit type, the new coder must have faced this concept). In python, stuff just works without trying. In some manner, python suffers from the same problem as the testing profession - it is really easy to get started with limited professional capacity, so people just assume it's easy and don't invest the further effort required to be good at what they do. 
Another part of that is that you can write some bad code and it will work and will be enough for quite a while. That makes python a great tool for non-programmers (such as data scientists who just need their script to work while they focus on honing their data-science skills; a lightweight scripting language with extensive libraries is exactly what they need), but for me it only means that once you find you are missing those extra bits of information to manage your project properly, you'll have a lot more to rewrite.