Sunday, August 12, 2018

Cast 2018, day 2


One thing is common to all good conferences – I miss out on sleep because there’s so much to do, and this conference was no different.
I woke up, organized my stuff, and went down to lean coffee, only slightly late. The topics, as usual, were very varied – we discussed personal insecurities, what it means to be a senior team member (short answer – you don’t get to actually work), and how to approach the whole issue of effective and efficient documentation and reporting. Everyone was engaged – almost every topic got at least one extra timeslot.
The opening keynote of the day was delayed because the speaker’s flight was late, so we got the next activity of the day a bit earlier instead. I got to Lisi’s mobservations session – she dealt really nicely with the surprising change of plans and had the classroom ready for a mob. If you are ever in a session where there is a demonstration of mobbing, do yourself a favor and volunteer to be part of the mob. Yes, you’ll be putting yourself in front of an audience, but watching a mob is nothing like participating in one. As a mob, we spent quite a while orienting ourselves around the application under test and trying to decide on a concrete direction to take, and had a difficult time doing that. But frankly – testing the application wasn’t really what we were there for. Learning to mob was our purpose, and Lisi provided some excellent guidance to help us focus on how we behaved as a mob and then how we behaved as individuals in a mob. All in all, we got a reminder of why mobbing is difficult, but also saw how effective it is in dispersing knowledge in the team – even if it was only how to use certain tools or deal with an operating system in German. I feel this exercise could have been a couple of hours longer to really pick up a decent pace, as many of the insights we reached required both trying things out and some hands-off reflection. But, given the constraints, and while there is always something more that can be improved, it was a good experience for me and I would be happy to have more like it.
Sadly, I cannot say the same thing about the keynote, to which I didn’t connect at all. The overarching topic was similarities between UX design and testing, but it felt very remote and detached. Perhaps I was missing the background to appreciate such a talk.  But, you know, that happens, too.
Good thing lunch was immediately after that. I had a nice chat over food and drink, and then went risk-storming with Lisa, Alex and a few other testers. This was a very interesting experience for me, and the first time I held a deck of TestSphere cards, which appear to be an interesting tool to have in certain situations.
Afterwards I attended Paul Holland’s workshop on unlocking creativity in test planning. It was very nicely built, and I got both to troll Paul over Twitter by paraphrasing what he said and to take away some important insights. First, a requirement for creativity is peace of mind, which is obtained by setting boundaries – both spatial and temporal. Second, some ideas just take time and offline processing. Third, ideas bring out other ideas, so stupid ideas will most likely attract some good ideas as well. But most importantly – don’t burden yourself with too much information. Get a basic understanding of the task, then stop to think and process, and only after you’ve done some hard thinking come back to the rest of the details and see whether the concerns you had are addressed by all of the tiny details you skipped, and what they add to the mental picture you already have in mind.

The best talk of the day was saved for last. I went to Marianne’s talk, titled “Wearing Hermione’s hat: Narratology for testers”. Marianne combined three of her passions: testing, Harry Potter and literary studies. It was a perfect combination for me, as I happen to share her affection for those subjects, even if to a lesser extent (my focus during my studies was more on poetry and less on prose, and I don’t know my Harry Potter as deeply). Marianne spoke about how people tend to follow the first paradigm they adopted and ignore further information that might prove otherwise, which connected in my mind with Liz’s keynote about people’s tendency to seek, and pretend to find, order and patterns where there are none to be found. Another important observation we can borrow from narratology is the need to look again – our first read of a book is usually great for getting a basic understanding of what’s going on at the surface, but once we’ve gained that understanding, a second reading will expose new information that wasn’t as clear before and that we can only now notice. With software it is very much the same – we learn a lot by doing, and I have yet to see a project at the end of which people didn’t have a better way to do what they had just done. Marianne also mentioned that many companies engage in “root cause analysis” but are actually only scratching the surface: they understand what went wrong in the specific instance, but don’t take the extra step required to find the systemic failures that contributed to it. If you do such post-mortems and keep a record of them, it might prove interesting to run a meta-analysis on several of them to try and decipher patterns.
Another thing I found in Marianne’s talk was the value of specialized language. She spent a few minutes in providing the audience with a simplified explanation of the technical terms “text”, “fabula” and “story”1.
Afterwards, she used that distinction to point at a series of events where the story is different from the fabula, what effect it had, and why changing the perspective helped create a “deception” that can only be seen and understood in retrospect. Having distinct names for the two phenomena was not only useful as a shorthand; it also helped keep the two related ideas separate in the listeners’ minds, ready to be added to their toolbelt the next time they read a story. So, if you ever wondered why so many people fuss over terms and meaning when it’s clear that everyone understands what you mean – that’s why. Words, and technical terms2 in particular, are ways to direct our thought process and raise our awareness of things. They also carry with them a plethora of meanings and associations. For instance, during the talk I was reminded of Wolfgang Iser’s gap-filling, which is part of reader-response theory, and that immediately made it crystal clear that there is an important place for the “reader” who interprets the text and for the way they react.
All in all – A great talk to end the conference with. The only thing I’m missing is one of Marianne’s fabulous sketch-notes.

End the conference did I say?
Well, almost. We still had to grab dinner. I went to my room to rest a bit (it was a packed day, so I needed a few minutes to unwind). I then joined a very nice group – Lisi, Thomas, Lena, Marianne, Lisa, Santiago and Andrea – who were sitting and just chatting. It was a very nice way to say goodbye. We sat for about three hours and then it was time to go to sleep; after all, I had a plane to catch at a ridiculous hour. I did manage to say goodbye to a whole lot of other people who were playing some board games.
And now (or rather, a few days ago, as I wrote most of this on the airplane leaving Orlando), the conference is over. I had a great time, and I have way too many people to thank to list them all here. Next time I’ll make sure to have some time after the conference.


1 I usually match “fabula” with “syuzhet” (which I’m more comfortable spelling “sujet”), but Marianne was conscientious enough to spare the audience more definitions to confuse them. In short, the fabula is the chronological order of events as they “happened” in the imagined world of the text; the sujet is the order in which the events are presented to the reader. So “I fell after stepping on my shoelaces” and “I stepped on my shoelaces and fell” are the same fabula, but different sujets. A “text” is an instance of a literary creation – it is the book one reads. And yes, I had to go back to my class notes to verify all that.
2 When I say “technical term” in this context I mean any word that has a specific meaning within a profession which is different from the common understanding, or that is not commonly used outside of a specific jargon.



Friday, August 10, 2018

CAST, day 1


And what a packed day it was.
It all started with lean coffee facilitated by Matt Heusser, which was both enjoyable and insightful (the picture above is the discussions we were having, taken by Lisa Crispin). My main takeaway from this session was the importance of being able to verbalize your skills to yourself, and to communicate them to others. Also, this was my first lean coffee where there was actual coffee.
Then, the opening keynote. Liz Keogh spoke about Cynefin and delivered a great talk. I had heard a similar version of it at ETC2017, but that did not matter very much. In fact, listening twice enabled me to better understand and process what she was speaking about. In short – developing software happens in the complex space, so probe a lot and make sure that your probes are safe to fail. Also, use BDD but avoid tools such as Cucumber (BDD is about the conversation, not about the feature files).
After the keynote I went to a workshop on domain testing given by Chris Kenst and Dwayne Green. It's always nice to refresh the fundamentals, and to learn a new name for them (I was familiar with the concepts of equivalence classes and boundary value analysis, which are techniques within the space of domain testing).
During lunch I managed to talk a bit with some people, and then went to the lobby where I met Alex and we talked about organizing your desktop in a way that should (we guess) increase productivity. What I really liked was that we actually started mocking up the screen layout we would want to see. It was very cool to watch Alex tear up some paper pieces so that it would be easy to move them around. This sort of thing kind of makes me want to go out and figure out how to implement such a tool. The main challenge is that for such a solution to work, it must be integrated into the OS in a seamless way, so that it will always be on top and manage the size of just about everything else. I wonder if Windows already offers such a thing.
The first talk I attended had a promising title about coaching and the GROW framework. It took me a while to realize that I didn't connect with the content and to move to another talk - "Don't take it personally" by Bailey Hanna. I got there just in time for the exercise. Not really knowing what I should do, my instruction was "be aggressive", and I do owe Polina another apology. I was very difficult.
After that, I went to Lisi's talk about her test journey. So far, I've listened to two of Lisi's talks, and they have been very dangerous to my free time. Lisi has a way of sharing her experience while showing her passion for what she did, and has a unique way of inspiring others to do the same. It was my favorite session of the day. Also, before having a chance to regret this, I agreed with Alex on pairing together, and we decided that by the end of August we will set up a time for a session.
My talk was up next, and I took my usual 5 minutes to stress out. The talk itself went OK, I think - by the end of it I felt as if I was pushing a bit hard to keep the list of ideas as coherent a narrative as I could, but I wonder how many in the audience actually noticed. The open season was, as expected given the time and type of talk, awkward silence. My facilitator - the friendly Richard Bradshaw - managed the amazing feat of wriggling some questions out of the audience, and had some interesting questions himself. After the talk I got some very kind feedback, which I greatly appreciated.

A surprise was set for the evening - after a short time to meet and mingle, we all (or, up to 100 of us) got on a bus and took off to the Kennedy Space Center. Rockets, space, astronauts, nice company (and even some food) - what more can one ask for?
We got back to the hotel and I joined a couple of quick rounds of a card game whose name I don't know, but which was nice to play. Tired, I returned to my room and started writing this post, which, as you can see, I did not manage to complete before the conference was over.
Still, a whole lot more was waiting for me on the second day, but that's for another post that I hope to get out soon - there's still a week of vacation ahead of me, and I intend to make the most of it.


Wednesday, August 8, 2018

CAST - Tutorial days



CAST has officially begun!
(I'll be keeping these posts short, because otherwise I won't have time to write them)
Yesterday I attended Anne-Marie Charrett's workshop on coaching testers. It gave me a lot of material to think about - the main message I took from it is that coaching someone is very much like teaching, only you need to be constantly aware of the meta-processes in place and enable the coachee (not really a word, I know) to walk the path they chose and develop the skills they need.
We had some nice exercises (though, practice coaching through Slack wasn't really working for me, and I probably owe my pair an apology for being semi-purposefully stubborn).
Besides the workshop, there was some nice food, and even nicer people that are always interesting to converse with.
After walking to dinner (much less time than the day before, when we walked for 20 minutes, found out that the restaurant was closing early due to an emergency, and then walked 15 minutes further to another place), we played some board games that I had never heard of before, and there was much rejoicing.
While some people decided to stay awake to watch the rocket launch (we are not far from Cape Canaveral), I was too tired and went to sleep (seriously - why launch stuff at 2 AM? Can't those people just move the galaxy a bit so that it will be at a more convenient time?).

Today was my day off - I did not attend any workshops, but instead took some time to go over my slides (still not done with that) and took a surfing lesson with Marianne. It was just immensely fun - all of that falling into the water and just barely managing to stay on the surfboard - I wasn't expecting to enjoy it as much as I did.
Later today, I'll be going night paddling (it promises something with bioluminescence, so it should be cool), and I expect to be completely wiped out after today, which appears to be my water-sports day.
So far - great time, great people.


Monday, August 6, 2018

PreCAST


So far, the conference experience is starting out great, as expected.
Yes, I know, the conference officially doesn't start until tomorrow's workshops, but as always, the conference is underway once the people start to gather.
Hoping to have some time to fend off jet lag, I took an early flight, landing on Friday after a 12-hour flight to Newark followed by a 2.5-hour flight to Orlando (which I nearly missed after not checking whether my gate had moved - it had). Then all I had to do was try to stay awake (my Fitbit did not detect any sleep during Thursday, which was the night of the flight; I believe I did manage to squeeze in a couple of hours) and hope that would take care of the jet lag. It seemed to work: I woke up at 7 AM after ~8 hours of sleep and set out to explore the location. Well, it's humid, and hot, so walking about is not very pleasant, and Cocoa Beach is a rather dull place if you are not into surfing or simply staying at the beach. I spent most of my day out, then rented a surfboard just to see what I could make of it (never before had I held one, let alone tried to use it). I had a nice time, but the sea claimed my hat.
I got to, briefly, meet Liz Keogh and Darren Hobbs before being picked up by Curtis Petit and his family for dinner.

Yesterday, I woke up at 4:30 AM (jet lag? Or simply because I had slept my 6 hours? I don't know) and went to see the sunrise at ~6:00, where I encountered Anne-Marie Charrett. I then set out to find a new hat, and from there to see the Kennedy Space Center at Cape Canaveral. A tip for future visitors - Cape Canaveral is *not* where the space center is; it's over 12 miles to the north. I'll have to check out the Kennedy Space Center another time.
In the evening I got to meet some wonderful people - some for the first time. I met Curtis again, along with Matt Heusser, Maria Kademo, Ben Simo, Liz and Darren, and we headed out for dinner.

So far, so good; looking forward to today.

Sunday, July 8, 2018

Things you can't polish until they shine: A hostile reading of the 2018 ISTQB Foundation level syllabus









I spent some time reading the 2018 ISTQB CTFL syllabus, here are my thoughts. 
Before I start, though, there's one thing I want to say - I went over the list of reviewers and found some names of people I know and really appreciate. The easiest thing to do with this syllabus is to dismiss it as something written by a bunch of detached, incompetent buffoons; that is not the case. The people I recognized are professional practitioners of testing, and they are damn good at what they do. I assume that the issues I have with the syllabus exist despite their involvement, not because of it.

After listening to Rex Black's webinar about the new ISTQB CTFL syllabus and hearing how satisfied he was with it, I decided I could not go on smearing the CTFL program without at least reading the updates and seeing whether some of the issues I have were addressed. Short answer, for those not intending to read this long(ish) rant - the new syllabus is no less terrible than that of 2011.

When reading the syllabus, in order to keep myself on task and not wander off, I timed the reading using 25-minute Pomodori. Seven of them, to be precise (which amounts to almost 3 hours). As I was reading, I wrote down some comments for later. All in all, over the 96 pages (including table of contents, references and appendices) I made 57 comments, mostly because I got to the point where I was saying the same thing over and over, so I narrowed my scope down to comments that would help me write this blog post. Out of those comments, 3 are positive to some extent, and two of them are actually worth mentioning here: the addition of section 1.4.1, "Test Process in Context", and a (rather trivial) recognition that the test report should be "tailored based on the report's audience" (page 72). The rest of the comments were rants, tasks and (mostly) negative comments.
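The reading-time arithmetic above can be checked with a few lines (a trivial sketch; the 25-minute Pomodoro length is the one stated in the text, and breaks between Pomodori are ignored):

```python
# Sanity check: seven 25-minute Pomodori, ignoring the breaks between them.
def total_reading_minutes(pomodori, minutes_each=25):
    return pomodori * minutes_each

minutes = total_reading_minutes(7)
print(minutes)        # 175
print(minutes / 60)   # ~2.92 hours, i.e. "almost 3 hours"
```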
All in all, the 2018 version of the syllabus is no less terrible than the 2011 version. Despite the glitter sprinkled on top to mask the moldy scent - a buzzword thrown around here and there - it is very much the same in terms of both content and approach.

So, what do I have against the ISTQB CTFL syllabus?
As I was certain I had already written something about this before, I went looking through my old posts. I found a forgotten draft from 2016; it's a bit old, but everything in it is still relevant, so here's a short summary: I think that the syllabus does not live up to the expectations it creates, and is fundamentally incorrect.
The expectations part is the easy one - CTFL, besides being a four-letter acronym, stands for "Certified Tester, Foundation Level". I don't know about you, but when I hear the word "certified" I expect someone who can actually do the job they are certified for. A certified electrician should be able to change a fuse, and a certified accountant should be able to deal with a small business's tax report. They might not be the best in their profession (after all, they are just starting), but they are more proficient than a random person off the street. The people "certified" by the ISTQB (disclaimer: I have the foundation diploma somewhere in my drawer) are the equivalent of people learning to swim by reading a book. They have no real advantage over someone who is not "certified": they have never encountered a real software project, nothing during the certification process requires any practice, and the correlation between the material learned and reality isn't even random - it's negative.
The second thing is that the certification process is way too easy. 40 multiple-choice questions? With the passing grade at 26 "correct" answers? Where most of the questions require nothing more than memorization? Anyone who manages to fail this test should be ashamed of themselves. An easy certification has two main drawbacks: it fails to help people distinguish the professionals from the amateurs, or the good professionals from the less competent, and it promotes the idea that the subject being certified is easy and not challenging. How easy is it? The 2018 syllabus defines a minimal learning period which is longer than that of 2011, and gets to the laughable number of 16.75 hours. Just for comparison, the university course "Introduction to Computer Science" takes between 78 and 84 academic hours (or 58.5 to 63 full hours) of frontal instruction (I've left out the significant time spent on homework), and no one assumes that after such a course the student is capable of any real programming work. Is testing that much easier than programming? I doubt it.
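The hour conversion above can be verified quickly (a minimal sketch, assuming an "academic hour" is 45 minutes, which is what makes 78 map to 58.5 and 84 to 63):

```python
# Convert academic hours (assumed 45 minutes each) to full 60-minute hours.
def academic_to_full_hours(academic_hours, minutes_per_academic_hour=45):
    return academic_hours * minutes_per_academic_hour / 60

print(academic_to_full_hours(78))  # 58.5
print(academic_to_full_hours(84))  # 63.0
# Compared with the syllabus's 16.75-hour minimum:
print(academic_to_full_hours(78) / 16.75)  # roughly 3.5 times longer
```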

Now, for being incorrect - something is rotten in the state of Denmark. No, I'm not speaking about small, concrete mistakes such as referring to "white box" as a "testing type" (it's a technique to derive tests, and it can produce tests of any of the other "types" the syllabus mentions; once a test idea is written down, it's not always possible to trace it back to the technique used to get to it). I'm speaking of an intrinsic flaw in attitude: the syllabus treats the world of testing as a rather homogeneous and well-understood space, and thus is prescriptive where it should be provoking discussion. It tries to teach what to do and skips almost entirely both the "why" and the "when not to". It seems that the favorite word in this document is "ideally" (e.g., on page 23: "Ideally, each test case is bidirectionally traceable to the test condition(s) it covers". Really? Is this property still "ideal" in a project where the requirement is "make it work"? Or in a project that will have a significant makeover within six months?). Such language discourages thinking and creates the illusion that there is a "correct", generic answer. Check, for instance, this section in appendix C: "While this is a Foundation syllabus, expressing best practices and techniques that have withstood the test of time, we have made changes to modernize the presentation of the material, especially in terms of software development methods (e.g., Scrum and continuous deployment) and technologies (e.g., the Internet of Things)" [page 91, emphasis added]. Note how, despite saying "we changed stuff", the message is "these are eternal truths". Reading this, I can really relate to the stomachache Bach & Bolton express each time someone mentions "best practices" in their vicinity. Most of the stuff in the syllabus has "withstood the test of time" by fossilizing and growing mold.
One of the requirements of a software tester today (I would say "a modern software tester", but that term is better reserved for other uses) is to be able to communicate why certain activities are needed and what the trade-offs are. Yes, even at the foundation level, as so many testers find themselves as lone testers in a team or even in a small start-up company. There are cases where writing extensive documentation and planning well ahead of time is completely the right thing to do (for instance, if someone could die in case of failure), but in many other cases, by the time I'd be done creating my bidirectional traceability matrices, my competitors would have already released similar functionality to the market and had time to revise it based on customer feedback. So, should I invest time in writing those documents no one will read?
Generally, most sections in the syllabus are labeled K2 or lower (K2 is defined as "understand", but it is more like "understand what a thing is" and not the complete grokking one usually associates with the term). The level that could have made this syllabus valuable is K4 (Analyze) - which was removed in the 2018 version, and applied only to code coverage in the 2011 syllabus - with a minority of K3 (Apply).
All in all - I was completely unimpressed by the 2018 syllabus. It does meet my expectations, but I'm very saddened by that. The changes are, almost entirely, cosmetic. The main difference is that the word "context" is thrown around a lot - I don't believe anyone who learned from this syllabus would be able to recognize context if it punched them in the face.

So, how did this happen? How come a large group of involved, highly professional testers got such a shameful document out the door, and is even proud of it? I can only guess - what I think happened is that the task at hand was to "update" or even "fix" the 2011 syllabus, so people got to updating paragraphs, fixing sentences, or even completely rewriting an entire sub-section. But, as the saying goes, you can't polish a turd (actually, you can). Like a math exercise that went astray, this syllabus got (a long time ago) to the point where the best option is to throw everything away and start over.

Thursday, June 28, 2018

So, do you have a backup?

Don't worry, I got your back(up)


I disappeared for a while, for a variety of reasons. One of them was preparing my talk for CAST and delivering it at a TestIL meetup - I had fun, and I hope the audience did too.
Another reason I disappeared is that my computer crashed. Just like that, one Friday morning, I turn on a computer that had been perfectly fine the day before (well, a bit less than perfectly fine, but working reasonably) and suddenly I get a message that no operating system can be found. Fine - I got hold of an Ubuntu live CD and booted the machine anyway to see which files could be saved. The short answer: none. Something in the hard drive was destroyed, and the computer didn't even recognize it.
Now, a short exercise for the readers:
Close your eyes and imagine yourself in a similar situation: your main computer broke down / was stolen / got encrypted. You still have access to all the backed-up data that lives somewhere other than that computer. What is lost forever? What do you care about restoring but can't? What will simply take ages to restore?
Haven't closed your eyes yet? Now is a good time.
...
אני מנחש שכנראה הצלחתם למצוא דבר או שניים, אבל סביר להניח שכמעט כל מה שיגרום לכם לכאב ראש כבר מגובה איפשהו - בענן או על כונן חיצוני. זה גם היה המצב אצלי, חוץ מאשר מצגת שהתחלתי לעבוד עליה אבל לא שמרתי עדיין בגוגלדרייב, יש לי גיבוי לכל מה שהצלחתי לחשוב עליו - תמונות מטיולים ומוזיקה שהעברתי מדיסקים ישנים למחשב (וגם כזו שהגיעה למחשב שלי בימי נאפסטר וקאזה, אבל אל תספרו לאף אחד) נמצאים על כונן חיצוני (שניים, למעשה), כמעט כל המסמכים שחשובים לי נמצאים בדוא"ל ואת התוכנות שמותקנות אפשר להוריד שוב. גם רוב הסימניות שלי בדפדפן, שמורות בגוגל בטעות, אחרי שחיברתי פעם את החשבון לכרום ודברים סונכרנו לפני שהספקתי לומר ג'ק רובינזון.גם המשחקים שקניתי היו דרך Steam או אחת החנויות האלה והמידע שלי שמור בענן, עד כדי משחקים שמורים שאולי נשמרים מקומית בלבד.  סך הכל, אחלה, לא?
בכל זאת, היה לי קצת חבל על שלוש השעות שאצטרך כדי לבנות מחדש את המצגת, ועל אבדנו של קובץ אקסל בו אני עוקב אחרי צריכת הדלק של הרכב שלי. לא סוף העולם, אבל סתם מציק. חוץ מזה, יש פה אתגר - כונן קשיח נגיש לחלוטין, אבל לא מזוהה ע"י מערכת ההפעלה. לא יכול להיות שמידע נעלם סתם ככה, נכון? מה שכנראה קרה הוא שהסקטור הראשון נדפק, ואז מערכת ההפעלה לא יודעת אילו ביטים הם חלק מקובץ ואילו אינם.
אז הורדתי תוכנה בשם testDisk, שנועדה לשחזור מחיצות ומסתבר שהיא יודעת גם לשחזר קבצים אבודים, עד רמה מסויימת, ולפתע - כל הכונן שלי נגיש שוב. פתאום גיליתי מה עוד לא בדיוק מגובה:
  • האם אתם זוכרים אילו תוכנות מותקנות לכם על המחשב? הגעתי לשלושים וארבע תוכנות שרציתי להתקין מחדש, לא כולל תוספים לnotepad++ או לכרום. סיור בתיקיות program files עזר לי למצוא את מה שאני רוצה להתקין באופן אקטיבי.
  • %APPDATA% - ועם אנשי הלינוקס הסליחה. לכל מיני אפליקציות מותקנות יש מידע שנשמר תוך כדי עבודה, ומכיל את מה שחשוב לכם באמת. למשל, נזכרתי שמותקן לי על המחשב לוח שנה עברי, יחד עם תאריכי ימי ההולדת של כמה חברים קרובים. אני לא זוכר את תאריך הלידה העברי של רובם, והיה מאוד נוח כשיכולתי למשוך את הקבצים הרלוונטיים מתוך ההיסטוריה. אותו הדבר היה נכון למסד הנתונים של ditto, בו אני שומר כל מיני דברים שחוסכים לי זמן. 
  • דברים שהשארתי על שולחן העבודה - למשל, יש לי תיקייה עם צילומים של שירים, כאלה שגזרתי מעיתון, או צילמתי מאחד מספרי השירה שלי. אני כנראה יכול לשחזר את הרוב, אבל לכו תזכרו מה היה שם. עד שלא ראיתי את זה, לא זכרתי שזה שם.
  • סיכומים מהאוניברסיטה  ושאר מסמכים מmy documents - לא שמשהו משם בער לי, אבל יוצא לי בערך פעם בחצי שנה להיזכר במשהו שאמור להיות בסיכומים שם ולחטט בו. 
  • תיקיות נוספות תחת כונן C - מדי פעם יש דברים שצריך לבחור להם מקום. למשל, כל מיני ספרים בפורמט PDF שקניתי או קיבלתי. די בטוח שהרוב שם מגובה, אבל אולי משהו הוחמץ. 
בקיצור - מגוון הפתעות נחמדות חיכו לי כשהתחלתי לחטט בכונן ההרוס, ואני בהחלט שמח שהצלחתי לשחזר ממנו את רוב המידע (לא מצאתי מה עוד חסר לי, אבל אני מניח שפספסתי דבר או שניים). 
עכשיו לתרגיל השני: טיילו בחמשת המקומות שהזכרתי והשוו את הרשימה של דברים שתרצו לומר "את זה אני מעביר למחשב הבא" לרשימה שבניתם בעיניים עצומות קודם - עד כמה הרשימה הזו ארוכה יותר?

חוץ מזה, האם מישהו מכיר דרך נוחה לסנכרן תיקיות לגיבוי? אני לא רוצה לשמור את הכל בdropbox, אבל הייתי שמח להגדיר תהליך שירוץ באופן שבועי ויגבה את כל הדברים הקטנים האלה שאני לא רוצה לטפל בגיבוי שלהם בעצמי. 





I've been away for a while, for many reasons. One of them is that I was busy preparing a talk for CAST, and practicing it at a local meetup. I had a lot of fun, and I hope the audience enjoyed it as well.
Another reason, which is also the reason for this blog post, is that my PC died. Or rather, my hard drive did: one day my computer works (sort of) fine, and the next I switch it on just to get a nice message that it cannot find an operating system. Oh well, I created an Ubuntu disk and managed to get the computer past the problematic point, just to see that the hard drive is not recognized. Something there is messed up.
Now, a short exercise for the readers: close your eyes and imagine your main computer crashes in a similar way, or gets stolen or encrypted by ransomware. Your backups and online data are intact. How much data have you lost?
Eyes still open? You can close them now.
...
My guess is that your answer would be "not very much" - I imagine that while you managed to find an item or two, most of the data that you expect to be missing in such a case is probably backed up either on the nebulous "cloud" or on a physical external drive (sometimes connected to a secondary computer). 
For me, the case was very similar: I had a presentation I had worked on for a few hours and not yet saved to my Google Drive, but apart from that, I had almost everything backed up: music I ripped from discs purchased ages ago and probably lost by now (some of them might still be buried in a drawer at my parents' house), pictures I took on various trips. Most of the important documents can be found in my gmail, my games are in Steam\Origins and so is their save data (or, if it isn't, it's not important to me), the software I had installed can be downloaded again, and even most of my bookmarks are stored on Google's servers after I once signed in to Chrome and, before I could say Jack Robinson, my bookmarks were synced (up until the point where I disabled that and logged out). All in all - not that bad, right?
Well, I decided not to give up, and downloaded a piece of software called testDisk, hoping to save myself the need to reconstruct the slides. Surprisingly, it worked. I then found that there was some other stuff I really didn't want to lose, but hadn't thought about.
  • Do you remember all of the programs that are installed on your computer? After browsing through the folder structure with special care for the "program files" folders, I could list about 34 programs I wanted to install again (not including plugins for notepad++ or chrome).
  • %APPDATA% (and the Linux people will have to find the equivalent on their own) - some programs are valuable not because of the functionality they have, but rather because of the data stored in them. I had a Hebrew calendar where some friends' Hebrew birthdays were stored, and some of them I did not remember. I managed to get this back by restoring the relevant files from %APPDATA%. Same goes for chrome bookmarks, or ditto's database (where I keep some copied strings as a shortcut).
  • Speaking of that calendar, I encountered another problem - downloading it again was a bit challenging, as the official site has been "under construction" for at least a couple of years (according to the wayback machine). Installation files are not what I would normally bother to back up.
  • Stuff I left on the desktop - a plethora of tiny things that are nice to have handy. I have a folder with poems I took out of some of the poetry books I have, or received by mail, or found online - it is backed up, but I'm not certain how up to date the backup is.
  • "My documents" - while most of the documents there can be forgotten, some of them are in the category of stuff I remember once in a blue moon and recall that I want to share or read again. My university notes are one such thing, as are some documents with sentimental value to me.
  • Other folders directly under C:\  - I found there a folder with PDFs I bought or received over the years (most of them are RPG books, probably from a kickstarter). 
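Browsing the "program files" folders for reinstall candidates can also be scripted. Here is a minimal sketch (the function name and example roots are illustrative, not anything from the post): it simply lists the subfolder names under whichever roots you hand it, and each such subfolder usually corresponds to one installed program.

```python
import os

def list_program_folders(roots):
    """Collect subfolder names under the given "program files"-style
    roots; each subfolder usually corresponds to one installed program."""
    names = set()
    for root in roots:
        if os.path.isdir(root):
            names.update(
                entry for entry in os.listdir(root)
                if os.path.isdir(os.path.join(root, entry))
            )
    return sorted(names)
```

On Windows you would typically call `list_program_folders([r"C:\Program Files", r"C:\Program Files (x86)"])`. It is only a guess at what's installed - some installers keep everything elsewhere - but it beats trying to remember all 34 programs.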
Now, for the 2nd part of the exercise - go over those folders on your computer and think: "what would I like to keep that wasn't in my list?"

And now to the question I have for you - Do you know a tool that allows syncing a folder to my preferred backup? Updating backups manually is not a real option, and I would rather be able to click something and have all of those small things backed up for me.  I think I'll give cwRsync a chance.
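To frame what I mean by "syncing a folder to my preferred backup": the kind of one-way mirror I'm after can be sketched in a few lines of Python. This is a toy illustration, not a substitute for cwRsync - the function name is made up, and note that it copies new or changed files only and never deletes anything from the destination.

```python
import filecmp
import os
import shutil

def sync_folder(src, dst):
    """One-way mirror: copy new or changed files from src into dst.
    Never deletes anything from dst. Returns the relative paths copied."""
    copied = []
    for dirpath, _dirnames, filenames in os.walk(src):
        rel = os.path.relpath(dirpath, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src_file = os.path.join(dirpath, name)
            dst_file = os.path.join(target_dir, name)
            # Copy if the file is missing or its content differs.
            if not os.path.exists(dst_file) or not filecmp.cmp(
                    src_file, dst_file, shallow=False):
                shutil.copy2(src_file, dst_file)  # copy2 keeps timestamps
                copied.append(os.path.join(rel, name))
    return copied
```

Scheduling something like this weekly (Task Scheduler on Windows, cron elsewhere) would cover all those small folders without manual copying.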

Tuesday, April 17, 2018

The end of manual testing

Manual Testing - Finally dead

...
Yes, again.
Another article with this worn-out title. And no, it's not what you assume.
As you may have noticed here and here, I really dislike the term "manual testing". Sadly, simply saying "don't use that word" doesn't really work. Whether we like it or not, the words we've used so far have created thought patterns and patterns of action, and today there are people who test software almost exclusively by writing code, and those who test software and don't write code at all - contrary to what Bolton and Bach claim, it isn't really effective to call both of these things by the same name ("testing"), because people perceive these activities as separate. The use of "checks" versus "testing" doesn't really catch on either, because people outside the software testing field are used to referring to writing code as "writing tests"; on top of that, a branding mishap has led too many people to use this distinction to say that "checks are not testing" and diminish their importance - again, despite repeated corrections from Bach and Bolton (worse still - the translation of this distinction doesn't work well in Hebrew, since the Hebrew word for "to check" is less encompassing than the word for "to examine" - unless someone has more intuitive suggestions).
In short - I'm missing a way to separate the two kinds of activity in a manner that, on one hand, uses the word "testing", because that's what people are used to, and on the other hand doesn't create a hierarchical difference between the two. The adjective "manual" is perceived as outdated and inferior next to "automated", and the adjective "exploratory" is a blatant lie - I explore no less when writing test code, and even when analyzing the run results. Moreover, it's an ineffective lie - people don't intuitively get this whole idea of software testing as a journey of discovery, so the adjective tells them nothing. It also sounds a bit like a politically correct term for "manual". In short, we need an adjective that can stand opposite "automated" as an equal, without creating a false pretense and without sounding like a thin mask for "what everyone really thinks".
Lately, an idea has been rolling around my head - what about "interactive testing"?
The main advantage of this term, as I see it, is that it needs no explanation - "interactive" is an adjective that means "involving human activity". In addition, the term is already positively charged, and it almost always appears as an adjective denoting an advantage (for instance, have you heard of "interactive learning"?). The human involvement referred to in interactive systems is desirable, and in many cases it is even the goal.
So, as a software tester I rely on tests - some of them will be automated, and some will be interactive. Neither of these terms needs explaining, and in my opinion - neither needs defending. There are places where automation matters to us, and places where interaction is required.
The only point I still struggle with is the slide from type of testing to type of tester (there are no "interactive testers", just as there are no "manual testers" or "automated testers"), but I think that, at the very least, choosing "interactive" doesn't make things worse.

So, what do you think?

-----------------------------------------------------------------------
Yes, again.
Another article with this unimaginative title. And no, it's not going to be what you might assume.

As you might have noticed here or here, I don't like the term "manual testing" (unless, as is customary to say, you are testing a manual, in which case it is a fine way to describe what you are doing). Unfortunately, simply going about saying "don't use X" is very ineffective; we need to suggest alternative wording that is as compelling as the current one. Whether we like it or not, the words we've used until now have helped create a thought pattern and a strong distinction between writing code to test code and humans testing software - those activities tend to be perceived as separate, and are sometimes even performed by different people. At any rate, the point is that people are used to thinking of two activities, and therefore use two different terms to distinguish between them. So while saying "all of this is simply testing" is, in my eyes, preferable, it will be quite difficult to persuade people less versed in the world of testing to give up that useful distinction.
Currently, I'm aware of two ways to retain this differentiation, both of which I find lacking in some way. First, there is Bolton & Bach's distinction between "testing" and "checking", which has a couple of problems: it was abused to make writing code to test software seem inferior to playing with the software in person, it is not immediately understandable to people less interested in testing (i.e., it needs to be explained), and it does not translate well to Hebrew (and possibly to other languages).
Second, there's an odd trend of using "exploratory" as a euphemism for "manual" - while the term can be traced back to Bach & Bolton as well, I don't think I've heard either of them use it this way (which makes sense, as they retired their use of "exploratory testing" for good reasons). Using "exploratory" to mean "manual" has even bigger problems. First and foremost, it is a blatant lie: when I write a piece of "automation", I am actively exploring, and the same is true when reading the run reports. Second, this too has no meaning on its own for laymen - the idea that software testing is an act of exploration is not a common concept outside specific testing paradigms & communities; most people are more "just do your thing so that we can ship" (or worse - "make me some quality"). Using "exploratory" in this sense feels like a politically correct way to say "manual", and like most P.C. language, it is useful only for a very short time, until the derogatory meaning and prejudice catch up with the new term. In addition, as it comes from the same idea-space as testing and checking, it is easily used to demote "automation" and drag us again into that purposeless superiority struggle.
So, what I'm looking for is a term that can go along with people's habits (so automation remains "testing"), maintain the needed distinction between the two activities (people's habits, did I mention them?), be intuitive to understand, and convey enough self-confidence to co-exist peacefully with automation without needing to be defended in a futile & harmful battle. If possible, it should also help narrow the mental gap between the two activities.

Some time not too long ago, a thought hit me. I'm not sure where or when, but it has been there for a while now: how about "interactive testing"? In essence, I feel that it addresses my problems with the other terms and meets most of the goals I want it to.
First of all, it is intuitive to understand - we use "interactive" in our day to day life and contrast it with "automated", so there's no surprise when we use it ("automated" \ "exploratory" is an odd axis, "automated"\"interactive" is as common and as natural as "automated"\"manual").
Second, it is positively charged: "interactive" is used most of the times to represent an advantage, or a desired result. For instance, have you heard about Interactive Learning?
Finally, it conveys a clear meaning of what testing is - "interactive" is used to imply cognitive involvement of the human(s) interacting with the object of interaction. Unlike "manual", which implies boring, repetitive work, "interactive" should be interesting and captivating.
As a tester, I do testing. Some of it will be automated, some of it will be interactive - There's no need to explain any of those terms, and my feeling is that none of them needs to be defended against the other. Some things are better if automated, some are better in an interactive form.
One point that is not solved by using a better term is the mix people make between type of testing and type of tester - having an "interactive tester" is as meaningless as having a "manual" or "automated" tester (another long time rant of mine). But, hopefully, it does not make the situation any worse on that front.

So, would "interactive testing" work for you? I would love to hear your thoughts.

Thursday, March 22, 2018

Deadlines

A month ago I asked here for volunteers to keep tabs on me: the plan was to set a short-term improvement goal and then find an "accountability buddy" (to borrow the name from Lisi & Toyer) with whom I might either share a similar goal, or simply keep tabs on each other as we each go after our own goals, the idea being to drive each other to put in the required effort.
I was fortunate enough to have Lisi ping me a few days back and help me formalize some of the thoughts that were running through my head by placing them in a mindmap. I might go occasionally and add some stuff to it.
Narrowing stuff down, I managed to remain with three goals I'd like to do first:
1. Create a personal "talk library". Creating a conference talk, for me, is still quite an arduous task - finding an idea and formulating it.
2. Go over the free BBST material and invest time in learning it (probably as preparation for taking the course)
3. Build a project I promised my dad a while ago, before I got distracted by a lot of other things.

So, want to help me keep track and achieve one of these goals? All you have to do is choose a goal you'd like to help me with (or join in on), and ping me in some way. We'll set up a way to check on each other later.

Since I'm not sure what to choose, I'll go with "first come, first served" - if someone wants to join me on one of my goals, or simply finds one of them interesting to follow, I'll go with that.

In terms of timelines - the next couple of weeks I'll be busy with the upcoming holiday (it's Passover time, so next week is cleaning, and the week after is friends and family), and the following month I'll probably be working on a slide-deck for a talk I committed to give at a local meetup (that's my way of forcing myself to prepare talks for conferences in advance), so I'm hoping to start working towards one of those goals on May 1st, and would love to find a partner for the journey before then.

Anyone wants in on that ?

People problems suck

(No Hebrew, it's enough to wallow in it one time)

So, today our manager informed us someone on the team is being sent home.
I don't think anyone on the team was surprised by the decision, as we had all been experiencing some of the difficulties for the past 6 months, but even so, and even if we might think it was the correct decision - it is no fun.
What is really upsetting is that everyone in the team knows that this person had their heart in the right place - they cared, they tried their best, and then some, and really cared for what they were doing. The question buzzing in my head (and in others, according to some corridor talks after the announcement) is "Did we do enough to try and avoid this?" After all, we often say that caring and trying can take one a long way. So, could we have done anything differently to fix the situation in any other way than letting that person go? Could (and should) we have done more than what we did?
When I look back, it seems to me that all the symptoms were originating from one core problem - we did not manage to make the team a safe environment for that person to try and fail, in part because the way this person was failing was hurting the team, and in part because we didn't consciously try to do so - we just assumed that everyone felt safe to fail, and missed it when that wasn't the case. This, obviously, only made things worse, because when someone does not feel safe to err, they default to not doing - which is only another failure that adds pressure and makes everything spiral down really fast.

I'm not sure if I have any concrete conclusions out of today.
Or, in other words - bummer.

Monday, February 26, 2018

ETC 2018, did I say it was awesome?



Yes, I did. The first part is here.
However, that was only the first day of the conference.
The second one started with a nice breakfast where I got to speak a bit with Abbey and Llewellyn, and as we were getting (a bit late) to the opening keynote of the day, Llewellyn shared an awesome strategy for getting changes into an open-source project you use: hire the maintainer for a day or a week to make the change with you - that way the feature you need will find its way into the core product (so there's no need to fork your own version of the tool and keep it updated). It will also probably be way cheaper to get your solution, as the maintainer knows the project very well, and by pairing with them you can add your specific domain knowledge to the solution.

Then we got to the keynote, just as the speaker was starting. The topic of the talk: become a skeptic.
The talk left me with a somewhat ambivalent feeling: on the one hand, it was very well presented by a speaker who clearly knew what he was doing. On the other hand, it felt a bit lacking in terms of content, and more so - actionable content. Sure, I can get intuitively why being a skeptic might help a tester, but it felt a bit like preaching to the choir: I couldn't find any real, concrete reason to become a skeptic, and I am not really convinced of the value of skepticism as a tester's main approach.

However, after the keynote I got to Abbey & Lisa's workshop on pipelines. What can I say? It was great, with good exercises and even better explanations between them. Within the very limited time-frame for this workshop (it can totally be a full day one, I suspect. Or at least 1/2 a day) we managed to decide on a pipeline based on the pain points each of us have at work, and got to realizing our pipeline is waaay too long (we estimated a week to go through everything there). It is interesting to see how much of a discussion one can get simply by laying out the process your code goes through to production. I really enjoyed this workshop.

Then, after a tough choice between 3 talks I wanted to go to, I attended Alex's talk on exploratory testing, and on practicing speaking out the way we test. If you have not yet had the chance to hear Alex speak, you should. The talk was sheer fun (or rather, sheer learning fun) and I liked the way she managed to communicate her thought process and involve the audience in the exercise.

Following this talk I attended Mirjana's talk about production monitoring and some of the tools they are using. This one was particularly interesting for me, as almost all of the tools she mentioned are either used by people at my work or are intended to be used somewhere soon (I even participated in a POC for some of them), and seeing some of the benefits she was able to get out of those tools was really nice. It also connected well with something Gojko mentioned in the opening keynote: make stuff visible to the developing team. Great insights are gained that way.

The open space is always a great event, and this one was not any different. One thing I need to do is practice more self-restraint, and limit myself to owning only one subject, as there are always so many great topics. I started by going to a discussion led by Ron about how to train new testers. Apparently this is a tough question for all of us - we know how to do this by mentoring or pairing, but teaching it in a mass-centered way poses some difficulties. Sadly, I left the discussion early due to a mistake on my part with regards to the next session's start time, so I had 20 more minutes before the discussion I led. Instead I joined a discussion about management. Then there were two discussions I posted: tools & the way they change the way we think, from which I gained insight about the way some tools changed the processes of the team, and the need to constantly monitor the effect of new tools on the team culture. The second discussion was a bit tougher - how to help a colleague who's struggling to keep up, and when to give up. My takeaway from this discussion - different things might work for different people, and don't give up easily (however, you'll know when you've given up, so don't prolong it more than is necessary).

Great day, isn't it?
We had a blast closing it with a keynote by Dr. Pamela Gay about some of the challenges she faces in her work at NASA, which, in case you wondered, include identifying craters on Mars or on the moon and correlating pictures taken by astronauts with google-maps. Both tasks are difficult for professionals and for computers. However - people are great, and are willing to help, if you are willing to filter out some of the data. The coolest part? You can join the effort (but please wait until tomorrow at least).

Then, the conference was done. Or, mostly done - a lot of people met for dinner and we had some fun chatting around. It is amazing how, when the conference is over, almost everyone around just wants to extend the experience a bit more. It was really tough to do the "responsible" thing and go to sleep early in order to catch a cab at 5 AM to the airport. Still, this is what I did.
In the morning I shared the taxi with Abby, so I got to extend the conference ambience to the last possible moment (though, I must admit - at 5 AM, the ambience is mostly sleepy).

What amazes me is that while the sessions themselves are really good, what makes this conference so great lies in the small moments that are harder to tell about: speaking with new people and with those I've met before, seeing everyone around me smiling (to themselves and to each other), and sharing an experience. My only regret is that I did not get to spend more time with people, and there are some people I wish I could have caught up with a bit more. However, I will follow the advice given by the conference organizers at the open space: whatever happened is what should have happened, and it could not have been any other way. I'm very happy things were as they were.

So, until next year :)



Thursday, February 22, 2018

ETC 2018, it was simply awesome


(This is part one, as it came out a bit long, the next part will be out in a few days)
European Testing Conference is over, and it was the best ETC so far. Each year I come to ETC with higher expectations, and each time they somehow manage to surpass them and make it look as if that is the natural order of things. There will be a retrospective post from me later, but in the meanwhile, I want to sort out some of my experiences from the conference days (I wrote briefly about the days before the conference here).
The morning started with a nice breakfast at the hotel, getting to chat a bit with some people (with whom - I don't remember. Or rather, I remember most people I talked to, it's only the when that is a bit fuzzy), and after that - registration and the first keynote, in which Gojko Adzic presented his newfound approach to automatic visual validation. His main message was: UI tests are considered expensive, but now we have the ability to change this equation - not because of the tool (which looks nice; I got the impression it was some sort of mix between Applitools Eyes - comparing really small elements, defining textual conditions - and the Galen framework), but rather because we can now parallelize a whole lot of test runs using headless Chrome on AWS Lambda. So sure, this won't work for you if you are not working on AWS, or can't parallelize your tests, but it's a nice thing to consider, and to see how far we can go towards this sort of goal.
Following the keynote I went to a talk given by Lisi & Toyer. Frankly, I came to this talk with very low expectations - sure, another "share and collaborate" talk. Perhaps this is why my mind was blown. Toyer & Lisi managed to tell an interesting story about how they created a "pact" with a specific goal in mind, and how many benefits they got from it. I think that what really got me, though, was the genuine excitement they expressed around the whole process. I went out of this talk with a strong feeling of "that's a great idea, I should try it one day" (and, since most of the time "one day" equals "never", I'm looking for a volunteer to smack me on the head if I don't make anything more concrete out of this within 30 days, in this very blog).
Then came the speed-meet. Since last year, I have learned to notice the good things in it - it forces people to open up and speak to people they don't know, and really breaks the ice fast. Still, it was a bit too loud for me. How loud? This loud. One thing I did learn before is to completely ignore the mindmap drawn by my partner and tell them I'd rather look at and listen to them, not to a piece of paper, so that helped a bit. I still got to shout towards some people I'd never spoken with before, and people I didn't speak with enough. I think I only needed a silence bubble in order to properly enjoy this event.
Following the speed-meet, and one minute alone in a quiet corner to recharge and give my ears some rest, there was lunch, with quite a nice setting to help people talk some more, this time in a quieter manner.

After lunch - workshops time!
A while before the conference I decided to go to the Gherkin workshop (I don't like calling the given-when-then formulation BDD, since for me BDD is a lot broader than that) in the hope that I'd manage to figure out why some people find this artificial, restrictive format useful. Or, at least, learn when to use such a thing and when not to. Going through a workshop with some experts seemed to be the best chance I could give it.
Well, apparently, I should have read the fine print better - the workshop was targeted at the already convinced: those who are using, or planning to use, the Gherkin formulation and want to learn how to do so better. I got to see some bad examples, discuss why they might be bad, and how to write a proper one. Frankly? Initially I thought it was a well-built workshop that I came to with the wrong expectations, but the more I think about it, the more I believe it was a waste of everyone's time. Writing a single Gherkin scenario is easy. The tips we got there were trivial (and easy to find online) and the discussion was not deep enough to justify our time (nor do I think it should have been). A better workshop, still aimed at the users, would have been how to maintain a good suite of Gherkin scenarios, as even a relatively small number of well-defined scenarios can become terrible to read and understand when there is no way to organise them. My personal limit before asking for a different format stands at around 5 scenarios. If I have to read any more, the rigid format becomes actively harmful.

Anyway, rant time over, and I had a talk to prepare for. After dealing with some technical difficulties (I knew I had to purchase new batteries for my clicker) and tweaking the slides a bit to make sure everything on them was visible, I talked a bit about automation and some ideas on structuring a part of it. The slides can be found here (and will soon be available at the conference site). I got some valuable feedback from Richard Bradshaw after the talk, and as far as I can tell, the audience response was good (thank you Mira for your very kind words).

I then had a chance to relax a bit during lean coffee, which always feels too short (in fact, checking the schedule, I see we didn't even have an hour - it was too short!), but I got to have an interesting discussion with people I had not met before. I think I need to become a bit better at facilitating the discussion, but it went rather well even so. Between this and the speed-meet, this is the better way for me to meet people.

We went on discussing the subjects at hand until the day's closing keynote, where Lanette shared a whole lot of cat pictures, and an interesting point alongside them.

I was a bit tired after such an intensive day, which was not over just yet - the conference dinner event was scheduled for that evening, and so I went. Nice people, nice vibe, and everyone got a free drink. The place itself, though, felt like a restaurant, and so people were sitting at their tables (a large table, but still) instead of wandering about. I had a nice chat with Karlo and Emily, but finally my fatigue got the better of me and I took a tram back to the hotel to crash.

Monday, February 19, 2018

ETC time again!


So, it's this time of the year, and this year the European Testing Conference takes place in Amsterdam.
I got here early (Thursday) to make sure I'd get to tour the place a bit, and that my feet would be properly sore before we start (my favorite way of touring a new place involves a lot of wandering around, so I started with ~10 hours of strolling on my first touring day), and the city has been very welcoming - with great weather (a bit chilly, but bright and sunny - just the way I like it), beautiful sights and some very good tourist attractions (I highly recommend taking a free walking tour, and the Rijksmuseum is very impressive).
I started the conference early in a very good way by meeting Marit on Saturday for a really nice chat and interesting food.
Then, come Sunday, after paying a visit to one of our data centers (from the outside, I'm not permitted to enter) and strolling around the lovely moat they have around it, the conference started with the speakers' dinner. It never ceases to amaze me how friendly and welcoming a group of people can be, and how fun and natural it feels to talk with them, or even just join in by listening, since just about everyone there has a lot of interesting things to share.
So, an amazing start to what I expect will turn out to be a magnificent conference.

Wednesday, February 7, 2018

Reading Listening to books - part 4


TL;DR - I'm listening to audiobooks, some reviews below, and I would love to get some recommendations from you.

This is the 4th part of a series of (audio) book reviews. Here are the previous posts:
Part 1
Part 2
Part 3


Crucial Conversations Tools for Talking When Stakes are High, Patterson, Grenny, McMillan, Switzler:
Short summary: A book about people skills. Specifically, how to have better discussions.
What I have to say: I'm fairly ambivalent about this book. On one hand, it addresses a super-important subject. On the other hand, I was very alienated by the examples in the book.
Starting with the good stuff - the authors coin the term "crucial conversations": conversations that might have significant outcomes. Some are easy to detect - trying to agree upon a big business decision, asking for a pay raise, or deciding whether to relocate the family. Other conversations might turn crucial very rapidly - a minor disagreement becoming a shouting contest, a family dinner leaving multiple people sulking and hurt, or a routine work meeting where the wrong decisions are made because people are not really listening to each other.
People, so it seems, are really bad at speaking - despite doing so for most of their lives. And just to make things more fun, people act even worse exactly when they need to be at their very best, thanks to the all-too-familiar fight-or-flight mechanism that kicks in in stressful situations. Some people, however, seem to do better than others - and this book tries to explain how they do that.
The overall strategy, as far as I understood, is "pull out, relax, calm others, build confidence and start thinking together instead of trying to 'win an argument' ". Naturally, I'm simplifying things here, and skipping some of the tools they mention to actually do all of those points, but I think this is the core of all processes in the book.
When sticking to the principles and intentions mentioned in the book, I found myself agreeing vehemently. It does sound like a very compelling way to approach potentially difficult conversations, and some of the tools actually make a lot of sense. It is only when I got to the examples that I started feeling a bit off - sure, the examples are simplified to make a point, but as I was listening, I sometimes found myself wanting to punch the teeth out of the example speaker. It is then that I started wondering whether the book is heavily biased towards American culture. For example, in the fifth chapter a technique called "contrasting" is presented. In short, it's a way to neutralize suspicion by acknowledging it, and the example goes as follows: "The last thing I wanted to do was to communicate that I don't value the work you put in, or that I didn't want to share it with the VP; I think your work has been nothing short of spectacular". When I hear something like that, I assume that someone is lying to me and trying to weasel their way towards a hidden goal. Living in a much more direct (and way less polite) society, I perceive such statements as pretty glitter meant to cover up some ill-intentioned actions. There are ways to phrase such a message that would be acceptable to me, but this is not one of them. This led me to think - it seems that the components of effective discussions mentioned in the book are very aligned with the stereotypes I have about American behaviour patterns. There isn't a single example I can point to (audiobooks are not easy to scan quickly), but almost every example felt a bit off - a bit too polite to be real, a bit too artificial to be convincing, and in some cases, simply achieving the opposite goal: sowing suspicion instead of trust, seeming detached instead of concerned, and so on.
It reminded me of something a friend who relocated to the States once told me: "At first it was very nice that everyone around was very polite and kind. After a while it started feeling phoney and annoying". All in all, the book left me thinking that in order to really benefit from this content, I would need a localized version of it, where the principles were taken, assessed and modified to match the culture, and the examples updated to something a bit more realistic. Given time and need, I think I can do some of it myself, so this is a book I intend to get back to in the future.

So, those are the books I've listened to recently (I'm currently listening to Great Mythologies of the World, which won't get a review here, being unrelated, but I think it's generally quite nice), and I'm gradually compiling a wish-list to tackle one book at a time. What are the books you think I should listen to?

Monday, February 5, 2018

Reading Listening to books - part 3



TL;DR - I'm listening to audiobooks, some reviews below, and I would love to get some recommendations from you.

This is the 3rd part of a series of (audio) book reviews. Here are the previous posts:
Part 1
Part 2


Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O'Neil:
Short summary: Computer based decisions are every bit as biased as people, and less transparent. They should not be blindly trusted, should be used cautiously, and must be constantly monitored.
What I have to say: I find this book a must-read (or must-hear, apparently) for anyone who takes part in software manufacturing, acquisition or regulation. It's probably a good idea to listen to this book even if you only use software. It's not that the book presents a revolutionary idea, or that it is masterfully narrated (although I did find it quite enjoyable) - it is the way it makes something we are all aware of to some degree very explicit, and shows how prevalent the problem it discusses is. In short, the title of the book lays it all out quite clearly - there are very harmful algorithms out there, and they pose a significant threat to society. That's why they are named Weapons of Math Destruction (WMDs, for short).
But, putting aside the hyperbolic phrasing, what is a WMD? And why do we care?
A WMD is a piece of software using some sort of mathematical algorithm to achieve a task, which has the following three properties:
  1. It's pervasive. It doesn't matter how nefarious the algorithm I use to manage my local neighbourhood book-club is - it's not a WMD unless it affects a large number of people.
  2. The algorithm is opaque. Visible algorithms, by contrast, are regularly scrutinized (or at least can be scrutinized), and they lay out the rules quite clearly - so anyone affected by them can assess the expected outcome and act to change it. Or, if the system is measuring completely the wrong things, it can be challenged easily enough.
  3. Damage. Some algorithms use bad math, some of those scale up rapidly, but only some of those cause significant damage to the people under their influence.
A bit abstract, right? Most of the book is dedicated to discussing some such algorithms and showing the types of damage they create. Some of the most damaging algorithms are created with the best intentions in mind, and that is the main problem: the people using them think they are actually doing good. Two examples that stuck in my mind are the teacher-grading algorithms, and some criminal risk assessment programs used to help judges decide on the length of imprisonment.
The teacher-grading algorithm is the simpler one, since it has one main flaw - it uses lousy math (in this case, trying to draw statistical conclusions from the achievements of 20-60 students). From the examples in the book, it is quite evident that this model has nothing to do with reality. Yet this algorithm is used, because it seems "objective" and "fact based", where in reality it is pretty much a random number generator that should not have been used at all.
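To see why a few dozen students are statistically hopeless, here's a toy simulation of my own (not from the book - the numbers are invented purely for illustration): score an "average" teacher by the mean of each student's noisy year-over-year gain, and with only 25 students the score swings wildly between years even though the teacher never changed.

```python
import random

random.seed(42)

def value_added_score(n_students):
    # Each student's year-over-year "gain" is mostly noise (mean 0,
    # std 15 points); the teacher's score is the class average.
    return sum(random.gauss(0, 15) for _ in range(n_students)) / n_students

# The very same (perfectly average) teacher, scored in two consecutive
# years with a class of 25 students each time:
year1 = value_added_score(25)
year2 = value_added_score(25)

# With so few data points the score is dominated by sampling noise -
# it looks "fact based" but behaves like a random number generator.
print(round(year1, 1), round(year2, 1))
```

With thousands of students per teacher the averages would stabilize, but no teacher has thousands of students in a year - which is the whole problem.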
The second WMD is a bit more intricate. The problem is that the software seems pretty much benign: it helps a judge assess the level of danger a convict poses in order to determine their punishment, or to assess their chance of recidivism when considering parole. The reasoning behind it is simple: the more of a risk a person presents to society, the longer this person should be detained, imprisoned or at least carefully watched. That way, minor offenders could get out, leaving the law enforcement mechanism to deal with bigger problems. Chances are the algorithm's predictions are fairly accurate, too - the company selling it has an interest in keeping it accurate, or seemingly accurate, to sell it to the next state and fend off its competition. There are, however, some caveats. First, the algorithm, being the company's competitive advantage, is secret. Normally, a judge must explain the motives behind a given verdict, and those reasons can be challenged or limited. No judge today would say "I decided on a stricter punishment since the convict is poor, and therefore is more likely to commit crime again", and yet the statistical model might do exactly that. There is a correlation between poverty and crime, and between poor neighbourhoods and criminal opportunities, so a model optimized for "correctness" will happily use it. Even if we don't provide the income level of a person, there are a lot of proxy measurements that are highly correlated: area of residence, whether the convict has a home or a job to go back to - even the number of times a person was arrested in the past correlates with their financial situation, as wealthy people tend to get arrested less for minor misdemeanors.
On top of using discriminatory elements, there's another risk in this WMD: it creates what the author calls a "pernicious feedback loop". Meaning, the algorithm's results actually create the reality it attempts to predict.
Imagine that: two people are detained for drunk driving. One of them gets a low recidivism score and is therefore released with a warning. The other gets a high score, so the judge chooses a more deterring punishment and sends him to jail for 6 months. Six months later, when getting out of jail, this person finds that it is more difficult to find a job with a criminal record (and the longer one was sentenced, the harder it becomes), and he got to know some "real" criminals while in jail, so when things get rough, the route to crime becomes ever more tempting. Point for our recidivism algorithm! The one marked as a likely felon was indeed the one who returned to crime. What did you say? It was only because of the score he was given in the first place? Naaa, this is science, and science is always right, isn't it?
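The feedback loop above can be sketched as another toy simulation of mine (again, the numbers are made up, not taken from the book): take two groups of identical offenders, and let the jail sentence itself raise the real-world odds of reoffending. The "high risk" label manufactures its own supporting evidence.

```python
import random

random.seed(7)

def reoffends(base_rate, served_jail_time):
    # A criminal record and lost job prospects raise the real-world
    # odds of returning to crime - purely as a result of the sentence.
    rate = base_rate + (0.3 if served_jail_time else 0.0)
    return random.random() < rate

# Two groups of identical offenders, differing only in the score the
# model handed them (and therefore in the sentence they received).
BASE_RATE = 0.2
low_risk_reoffense = sum(reoffends(BASE_RATE, False) for _ in range(10_000))
high_risk_reoffense = sum(reoffends(BASE_RATE, True) for _ in range(10_000))

# The "high risk" group really does reoffend more often, so the model
# looks validated - even though its prediction caused the difference.
print(low_risk_reoffense, high_risk_reoffense)
```

The same mechanics also show why flipping the usage (targeting rehabilitation instead of punishment) breaks the loop: a label that lowers the reoffense rate cannot validate itself this way.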
So we've got an algorithm that discriminates against vulnerable populations, and then actually harms their lives and makes it harder for them to make their way in the world. Fun, isn't it?
Unlike the teacher-assessment program, the recidivism model can be used for good, since whether or not we like it, there's no denying that it is possible to correlate life circumstances with the chance of recidivism. People without a steady income, or with criminal family members, do return to crime more often than people with a decent job who know no criminals. However, imagine what would happen if this algorithm were used to determine whom to target with rehabilitation programs, or whom to support more closely upon release - in such a case, the algorithm ceases to be a WMD, since it improves the chances of its targets. Instead of deepening the chasm between rich and poor, it would help level the playing field by providing help for those who need it most. A recidivist from the "safe" group? This feedback would return to the system and improve the algorithm.

I got a bit carried away, but I hope that I managed to show why I think this book is important for anyone involved in anything remotely technological: it raises some interesting points on the potential damage of careless or malicious use of big-data algorithms (I skipped the malicious ones, but think targeted marketing) and mentions that sometimes a perfectly valid algorithm becomes a WMD only because of the way it is used - so take care to ensure your software is being used for good, or at least does no harm.

Sunday, February 4, 2018

Reading Listening to books - part 2


TL;DR - I'm listening to audiobooks, some reviews below, and I would love to get some recommendations from you.

This is the 2nd part of a series of (audio) book reviews. Here are the previous posts:
Part 1


Quiet: The Power of Introverts in a World That Can't Stop Talking, Susan Cain:
Short summary: Our world today mostly appreciates outgoing, confident-seeming people, but there is a lot of room for the quieter ones.
What I have to say: In a manner of speaking, this book is quite similar in format to Carol Dweck's book, as it presents how a single trait affects people's lives in many facets. Despite that, I found this book quite interesting - perhaps because I had not heard the book's main message before. It starts by defining introversion and extroversion and distinguishing them from being shy or outgoing. In short, an extrovert is someone who enjoys social events and is energized by them, while an introvert is someone who finds those types of events taxing and needs some quiet time to recharge. While there is a correlation between introversion and shyness, the two are not synonymous. Despite the book's strong focus on the character benefits of introverts (things I remember - introverts are more careful, tend to give up less easily on frustrating tasks, and are interested in deep conversations), it does not carry the message that all should be introverts, but rather advocates quite effectively for the place of introverts alongside the extroverts, each side complementing the other and together achieving much more. The book touches upon the physiological aspects of introversion and extroversion (apparently, while one can learn to mitigate the limitations of their tendency, the basic physiological reaction can be spotted in infancy and remains mostly unchanged throughout life); the claim is that the reason is a difference in stimulation threshold - introverts are more comfortable with stimulation levels that would make extroverts feel isolated. There are a lot of interesting pieces of information about the attributes of introversion, but perhaps the one I found most useful is the practical advice about how to function outside of one's preferred environment - how an introvert can act in a highly extroverted manner, and how an extrovert can adopt introverted behaviour patterns.
The main thing to do is to make sure one allows for recovery time and finds their own ways to recharge - an introvert acting in a densely populated space (giving a presentation, hosting a party, participating in a work meeting with a large-ish crowd, etc.) would fare better if they can find a place where they can be in their quiet zone - a stroll alongside a river, a chat with a friend in a remote corner, or even taking several minutes to unwind quietly in the restroom. An extrovert doing quiet work (research, creating diagrams, writing or editing) can schedule an evening with friends at a bar, listen to energizing music, take coffee breaks in the kitchen with other coworkers, and so on. It also helps if one acts outside of this tendency in service of some value they hold highly - it is easier to put in the effort needed to be active in a "hostile" environment when it is done for a cause one is genuinely enthusiastic about (the book gives an example of a popular professor who delivered very charismatic lectures despite being a highly introverted person, which was possible in part because he cared a lot about educating his students).
Oh, and one more thing - it's almost unavoidable to try and figure out whether one is an introvert or an extrovert while listening to the book. A lot of people are what the book calls "ambiverts", meaning they possess some introverted traits and some extroverted ones, with the traits sometimes manifesting more strongly depending on the situation they are currently in.
All in all, I strongly recommend this book, both for enabling yourself to work with other, quieter people, and for finding some tips to recharge yourself in the daily routine.

Saturday, February 3, 2018

Reading Listening to books - part 1


(Book reviews are English only)
TL;DR - I'm listening to audiobooks, some reviews below, and I would love to get some recommendations from you.

About two years ago, while attending the first European testing conference in Bucharest, I heard Linda Rising's keynote in which she spoke about her interpretation of Carol Dweck's book "Mindset, the new psychology of success". I really liked the ideas presented in the talk, and so, about a year later, when I re-watched the talk I decided to purchase the book. Lo and behold - there was a free audio-book version, as long as I registered for an Audible account - which I did, and as listening to books isn't really "my thing", I cancelled the registration shortly after.
It took me several months to go through this book, as I just didn't find the time to listen - Most of my listening time is while driving, which happened twice or thrice a week, and it was dedicated to catching up on podcasts, so I just didn't get around to it.
But then, about a year ago, we had a team reshuffle, with half of the team at our other office, which is an hour and a half away by train, and I have been getting there at least once a month since. So, an extra 3 hours of dead time? Hey... I still have that Audible app installed, with that book I downloaded a year ago!
The second change was when I bought a small mp3 player that can be attached to a sleeve using a clip, and started listening to the podcasts while on my bicycle on my way to work - so now when driving, I have some free listening time. So, after getting another free book from Audible (after a year or so, they considered me a new user and allowed me to have another book if I just signed in to their service), I decided I'm listening to enough books to actually pay for an account.
The experience of listening to a book is very different from reading one - there's no skipping, no control over the speed of progress, and no going back and re-reading something tricky I think I missed (driving, remember?). However, as a way to make use of brain time otherwise wasted on commuting, it is great that I can concentrate on driving and just hear the book being read to me.
With that being said, here's a compressed review of the books I've listened to recently:
(Edit: forget about "compressed", it ended up being too long, so it will be one post per book)

Mindset, the new psychology of success, Carol Dweck:
Short summary: "fixed" mindset is bad for you, adopt a "growth" mindset.
What I have to say about it: I started listening to this book with expectations a bit too high. Linda Rising's talk gave me quite a lot to think about and process with regard to how people grow and learn, the importance of refusing to say "I simply suck at this", and the focus on improvement rather than on achievement. The book, so I hoped, would further expand these ideas to provide some more interesting insights, or elaborate more on the ones I'd got. Sadly, it didn't. What I got was a lengthy presentation of the concepts I mentioned, repeated over and over to show how they can affect multiple facets of life. It felt a bit like a sales pitch that goes on and on - I got the idea after the second chapter, really. Only in the last chapter does the book get to deal with a promise that has been mentioned over and over - how to approach changing your mindset. Up until that chapter, mindset seemed like something inflicted upon one by the environment - parents praising their child for "being smart" instead of for "putting in effort", a workplace with personal reviews based on results only, and so forth. This chapter, titled "changing mindsets", does contain some interesting tips. It mentions that simply being aware of this concept causes some shift in one's mindset, and mentions a workshop for schoolkids that focuses on teaching them how practicing actually creates and reinforces new neural links in the brain, thus explaining how one can actually transcend their current self-image in any given field. It also gives one tip I found quite useful: deciding to do something is nice, but it isn't enough. In order to increase the chance of actually executing a decision, one must create a detailed and vivid plan of execution. Not "I'll write the blog post I've been postponing for over a month", but rather "This evening, after dinner, I'll sit on my couch, close all other windows on my computer, and write a paragraph or two".
So, my advice - listen to Linda Rising instead; she makes the point clear in less time, and there isn't any big gap that I could notice.