Sunday, September 9, 2018

Data and Goliath - book review

TL;DR - you need to read this, and I was impressed enough to buy a physical copy of the book.

Listening to audiobooks is a great way to make use of my time while driving, and it sure makes traffic that much more bearable. As a bonus, I get to learn new stuff. Ain't that great?
This time I was listening to Data and Goliath by Bruce Schneier. I must admit I wasn't expecting much of it: big data here, big data there, big data everywhere. In fact, this book makes a very good case for the importance of privacy and does a good job describing the problems in the trends we see in today's economy, where data is one of the most valuable currencies. I was eagerly waiting to get to the final section of the book, where the author lists some actions we can take to improve the current state of affairs. This last section, sadly, is the one part I didn't like, since after making some promises at the beginning it can be summed up as "there isn't much you can do, so go ask your government to pass some laws".
After a short summary of the book in the introduction, which is great if you don't have the patience to walk through the details, the book delivers its content in three sections.
The first section, dubbed "The world we're creating", is very much a description of the current state of affairs. It goes over what data is and how it is generated (short answer - we generate data merely by existing, with our phones providing location information, security cameras videotaping us, and everything we do on the internet basically generating data), how surveillance is ubiquitous (a word used many times in this book) and focused on "metadata", and how cheap it is to just pile up the data - we have reached the point where storing everything is cheaper than filtering the interesting bits out. It describes two factors driving the momentum of surveillance forward. The first is the financial incentive of businesses that use the data for marketing that supports a "free" service on the internet, and then sell the information as another revenue path. The second is governments, which quite unsurprisingly want to know everything - fighting crime and terrorism are the reasons most commonly used to support that. Private companies may collect the data of their users (which might be a staggering amount of data if one considers giants such as Facebook, Google or Microsoft) and maybe buy some data from other companies, but governments are even more encompassing - laws require companies to share that data without much supervision (and sometimes a gag order is issued to ensure everything remains hidden), other regulations might demand a backdoor for government use, and sometimes the various agencies actively hack products and infrastructure to maintain access to data. One main concept that makes perfect sense, and yet one I had not considered explicitly before, is that if enough data is collected on the people one interacts with, just about everything can be inferred about a person who, in theory, isn't tracked - simply because tracking the others will capture their interactions with that person. The book appropriately uses the term "inverse herd immunity".

The second part goes back to "why?" or, to be more specific, why should we care so much? The most important part, in my eyes, is a head-on challenge of the all-too-common saying "If you're not doing anything wrong, you don't have anything to hide". This statement has been with us for way too long, and I've yet to hear a debate around privacy that did not use it in one form or another. In fact, there are many secrets people keep all the time. Think about people who had AIDS a few years back: everyone who knew that about them would assume that they were homosexual and irresponsible (the former is still not fully accepted everywhere), even if they got infected by a blood transfusion. But that's an extreme case - would you like your employer to know you've started looking for another job? What if you were trying to surprise your spouse with a vacation abroad and someone other than you told them about it? Would you want a conversation in which your manager asked you to help a struggling co-worker to become common office knowledge?
We all have secrets, or at least some data compartmentalization mechanisms in place - we choose which information to share and with whom, and if asked directly, most of us would not volunteer to be monitored. (A rather blunt example of the effect of a complete lack of privacy can be seen in the movie "The Circle", which is not a very good movie; you can instead listen to Screen Testing episode 11, which is where I heard about it.)
Besides the personal aspect of privacy, the book mentions other reasons for strongly opposing mass surveillance (yes, we don't usually think of it in those terms, but when just about every company, national or municipal authority has access to data about a large number of people - that's what it is).
Those reasons are political liberty, commercial fairness, economy and security.
Of those reasons, the one with the most aspects is, surprisingly, political liberty. For starters, let's consider what the author calls a "chilling effect" - when people know they are being watched, they behave differently. Just remember the last time you drove by a police car and slowed down a bit even though you were well below the speed limit. Now imagine driving where you know that by the end of the day the police would get an exact report of when and where you were speeding, and when and where you crossed a white line. This could easily be done if the police were to take your location data from your phone or service provider. Such surveillance pushes people to conform to norms.
Second is the potential of being harassed by the law. No one is 100% law-abiding - people speed, make mistakes on their taxes, cross streets on a red light and so on. A strong enough political figure (or a petty enough police officer) could make an individual's life miserable by digging into the data about them and looking for petty crimes. Mass surveillance lowers the costs of such activities and removes most of the regulation around them.
Finally on that matter is the important role dissidents play in social change. It's a bit odd to wrap one's head around at first, but then it just clicks. Basically, in order to have social change we need to allow for some degree of illegal activity. How so? Consider two rather recent examples: same-sex marriage and smoking marijuana. Looking 30 years back, both of those activities were considered shameful at the very least, and probably downright illegal (they still are in some countries). Yet a growing number of people were doing them - first in hiding, then the laws went unenforced, and then the legalisation debate started (and is still going on in some places). Meanwhile, public opinion is shifting. This was possible only because homosexuals could hide in the closet without being persecuted, and because enough people used pot illegally without "getting caught" (or if they did, without serious repercussions), so they could form communities and lobby for that specific activity to become legal and accepted. When surveillance is omnipresent, we get the opposite. The chilling effect mentioned earlier kicks in and people try to remain well within the "norm", dragging social change to a halt. When people are aware of being constantly monitored, they prefer to err on the side of safety and not act (in "Thinking, Fast and Slow" Daniel Kahneman states that people feel loss or pain roughly twice as intensely as gain or pleasure) and thus self-censor their actions and behaviours. This inaction, in turn, causes stagnation and fortifies the boundaries of what is "acceptable", effectively narrowing them.
The other categories are almost self-explanatory -
The economy of a given country suffers from surveillance, since there are comparable products elsewhere that track people less. For a long while, the company I worked at blocked Skype from being installed, since Microsoft were (being forced into?) providing a backdoor for the NSA to eavesdrop on Skype calls. After Cambridge Analytica's shenanigans with Facebook blew up, we could see the #DeleteFacebook hashtag running around, and other examples are out there. The chapter focuses mainly on regulation forcing companies to "share" data with the authorities and asks an important question: if a certain country is known to demand that businesses provide backdoors, and issues all-encompassing gag orders to hide it - who would do business with any company from that country?
Commercial fairness is the term the book uses to describe data-driven discrimination. After listening to Weapons of Math Destruction, I needed very little convincing that "big data" can be and is being used in ways that discriminate against people unjustly. In short, data is being used as a more obfuscated form of redlining (for anyone such as myself lacking the American reference - redlining was a practice where banks avoided investing or approving loans in impoverished neighborhoods). While there is an objective financial gain from redlining - obfuscated or not - the practice is harmful in the long run, hampering social mobility and punishing people for being poor or part of a minority group.
Last is the argument of security. Again, this focuses mainly on the activities of governments and other national authorities. The claim is simple, and widely accepted in the security community: there can be no backdoor that ensures only the "good guys" will access it. By forcing companies to install security flaws in their products, by actively hacking civilian organizations, and by hoarding vulnerabilities instead of driving them to be fixed, governments are making the entire internet less secure.
One last point worth mentioning in this section: none of the arguments raised here are new, so one might wonder what has recently changed to warrant such an interest in privacy. The answer is that two things have changed. The first is that storing information has become so cheap that "let's just store that and see later if we can do something with it" is a viable, almost affordable, strategy - so more information about us is being stored. The second is that our lives revolve more and more around computerized systems. Our phones and the multitude of security cameras on the street mean that we generate a whole lot of data just by taking a stroll. In the past, information was ephemeral, in that once a conversation was over, it would reside only in the memory of the participants (recording was possible, but not common). If someone sketched a note and then ripped it to shreds, that information was gone. This is not the case today, when we communicate online and our information exists on other people's computers (sometimes we call them "servers") that have routine backups, so even data we thought we deleted might only have been marked as such in the database and not actually deleted, and even if it was deleted, the database might have backup tapes that go years back. Today, the follies we make as teenagers go on social media and will haunt us when we're older - children will see their parents drunk in photos from 20 years ago, potential employers will see an old tweet they vehemently disagree with, and a picture shared by a proud parent today will be used in 20 years to steal that child's identity. The persistence of data ensures that if there's a tiny bit of information we don't want a specific person to know, they are sure to find it.
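The "marked as deleted in the database but not actually deleted" pattern mentioned above is known as a "soft delete", and it really is common practice. Here is a minimal, hypothetical sketch (the table and column names are made up for illustration) showing why "deleting" your data this way removes nothing:

```python
import sqlite3

# A minimal soft-delete sketch: rows are only flagged as deleted,
# so the data itself never leaves the database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages ("
    "id INTEGER PRIMARY KEY, body TEXT, is_deleted INTEGER DEFAULT 0)"
)
conn.execute("INSERT INTO messages (body) VALUES ('meet me at 8')")

# "Deleting" the message, from the user's point of view...
conn.execute("UPDATE messages SET is_deleted = 1 WHERE id = 1")

# ...hides it from the queries the application normally runs,
visible = conn.execute(
    "SELECT body FROM messages WHERE is_deleted = 0").fetchall()

# but the content is still fully recoverable by anyone with database access.
recoverable = conn.execute(
    "SELECT body FROM messages WHERE is_deleted = 1").fetchall()

print(visible)      # []
print(recoverable)  # [('meet me at 8',)]
```

Add routine backups on top of this and the book's point stands: once written down, your data is effectively permanent.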

So, what can we do about it? The book answers this question for three types of actors - governments, corporations and private people. 
Governments wanting to improve the privacy of the world they exist in are quite powerful: they can set rules and regulations that limit access to data, strive to fix vulnerabilities their intelligence agencies find instead of hoarding them, protect whistleblowers from charges (so that citizens will know if government officials are subverting their privacy) and avoid a number of harmful activities exerted by the different departments. The most interesting idea in this part is to provide a "commons" internet arena - platforms on the internet that are "publicly owned", like parks or sidewalks - a place where the financial pressure to track users and maximize revenue is negated. Those public domains should be defined with specific laws ensuring proper conduct and resilience to surveillance, and should be funded by the people's taxes.

Corporations, too, are quite powerful, in that they are the main collectors of information. So, if a specific corporation decides to collect less data - it can. It can also be very transparent about the data it collects and do a decent job of protecting it, so that it will only be used for the purpose it was intended for. Being both technically savvy and large enough, corporations can (and do) also battle governments' attempts to breach security - by creating a secure product when not specifically obligated by law to do otherwise, by challenging warrants and government requirements in court, and by investing in research to secure their products. Since the current state of affairs favours businesses that do surveillance, it is understandable that quite a large part of the chapter is about "what a government should do to protect from corporations".

The third actor is the private person using the internet. Being the target of surveillance by two major forces, there is, unsurprisingly, very little one can actually do without incurring significant harm to one's ability to operate. A person can install some privacy-enhancing plug-ins (Privacy Badger and AdBlock are two that I use), make sure to use TLS wherever possible, avoid having a social media profile, and pay for services that promise privacy instead of using the "free" equivalents that guzzle your data to increase revenue. One can also leave home without their phone regularly, pay only in cash and move to the mountains to live off the land. Apart from those rather insignificant actions, the main suggestion is to ask your local politicians to change the laws.

One thing which is important to remember while reading this book is that the idea of trading our privacy for services isn't inherently wrong, and the author does not claim otherwise. Processing data, even personal data, can do some good - it can improve people's health and support research (imagine a DNA database that is used to find suitable organ donors, or to warn people about dormant life-threatening genetic flaws), it can be used to improve road safety and reduce traffic, or to prevent credit card fraud. Also, minor as it might be, it can help people find things they want through better-personalized advertisement. The main issue is that the deal today is implicit and all-encompassing - there's no backing out of the deal we unknowingly made, and there aren't enough incentives not to keep all of our data.

The last point I want to touch upon is how fast this book seems to age. It was published in 2015 with what was at the time the most up-to-date information, including some insights from the Snowden leaks in 2013. Despite that, while listening to the book I had a constant feeling of "missing out". In the roughly three years since the book's publication, many of the trends shown as warning signs seem to be in full swing already, and reversing them seems even more difficult than what is described in the book. I'm a bit optimistic seeing some positive changes such as GDPR, but I wonder whether it will be enough, or whether we will drift towards a world with zero privacy.

In conclusion - go and read this book.

Also, it just so happens that I'm finishing this just before the Jewish new year, so if you got this far: Happy New Year!
שנה טובה

Sunday, August 12, 2018

Cast 2018, day 2

One thing is common to all good conferences – I miss out on sleep because there's so much to do, and this conference was no different.
I woke up, organized my stuff, and went down to lean coffee, only slightly late. The topics, as usual, were varied – we discussed personal insecurities, what it means to be a senior team member (short answer – you don't get to actually work) and how to approach effective and efficient documentation & reporting. Everyone was engaged – almost every topic got at least one extra timeslot.
The opening keynote of the day was delayed because the speaker's flight was delayed, so instead we got the next activity of the day a bit earlier. I went to Lisi's mobservations session – she dealt really nicely with the surprising change of plans and had the classroom ready for a mob. If you are ever in a session where there is a demonstration of mobbing, do yourself a favor and volunteer to be part of the mob. Yes, you'll be putting yourself in front of an audience, but watching a mob is nothing like participating in one. As a mob, we spent quite a while orienting ourselves around the application under test and trying to decide on a concrete direction to take, and had a difficult time doing that. But frankly – testing the application wasn't really what we were there for. Learning to mob was our purpose, and Lisi provided some excellent guidance to help us focus on how we behaved as a mob and then how we behaved as individuals in a mob. All in all, we got a reminder of why mobbing is difficult, but also saw how effective it was in dispersing knowledge in the team – even if it was only how to use certain tools or deal with an operating system in German. I feel that this exercise should have been maybe a couple of hours longer to really build up some decent pace, as a lot of the insights we came to required both trying things out and some hands-off reflection. But, given the constraints, and while there is always something more that can be improved, it was a good experience for me and I would be happy to have some more like it.
Sadly, I cannot say the same about the keynote, to which I didn't connect at all. The overarching topic was the similarities between UX design and testing, but it felt very remote and detached. Perhaps I was missing the background to appreciate such a talk. But, you know, that happens, too.
Good thing lunch was immediately after that. I had a nice chat over food and drink, and then went risk-storming with Lisa, Alex and a few other testers. This was a very interesting experience for me, and the first time I held a deck of TestSphere cards, which appear to be an interesting tool to have in certain situations.
Afterwards I attended Paul Holland's workshop on unlocking creativity in test planning. It was very nicely built, and I got both to troll Paul over Twitter by paraphrasing what he said and to take away some important insights from the workshop. First of all, a requirement for creativity is peace of mind, which is obtained by setting boundaries – both spatial and temporal. Second, some ideas just take time and offline processing. Third, ideas bring out other ideas, so stupid ideas will most likely attract some good ideas as well. But most importantly – don't burden yourself with too much information. Get a basic understanding of the task, then stop to think and process, and only after you've done some hard thinking come back to the rest of the details and see whether the concerns you had are addressed by all of the tiny details you skipped, and what they add to the mental picture you already have in mind.

The best talk of the day was saved for last. I went to Marianne's talk, titled “Wearing Hermione's hat: Narratology for testers”. Marianne combined three of her passions: testing, Harry Potter and literary studies. It was a perfect combination for me, as I happen to share her affection for those subjects, even if to a lesser extent (my focus during my studies was more on poetry and less on prose, and I don't know my Harry Potter as deeply). Marianne spoke about how people tend to follow the first paradigm they adopted and ignore further information that might prove otherwise, which connected in my mind with Liz's keynote about people's tendency to seek, and pretend to find, order and patterns where there are none to be found. Another important observation we can borrow from narratology is the need to look again – our first read of a book is usually good for a basic understanding of what's going on at the surface, but once we've gained that understanding, a second reading will expose new information that wasn't as clear before and that we can only now notice. With software it is very much the same – we learn a lot by doing, and I have yet to see a project in which, by its end, people didn't have a better way to do what they had just done. Marianne also mentioned that many companies engage in “root cause analysis” but are actually only scratching the surface: they understand what went wrong in a specific instance, but don't take the extra step required to find the systemic failures that contributed to it. If you do such post mortems and keep a record of them, it might prove interesting to do a meta-analysis on several of them to try and decipher patterns.
Another thing I found in Marianne's talk was the value of specialized language. She spent a few minutes providing the audience with a simplified explanation of the technical terms “text”, “fabula” and “story”1.
Afterwards, she used that distinction to point at a series of events where the story is different from the fabula, what effect it had, and why changing the perspective helped in creating a “deception” that can only be seen and understood in retrospect. The fact that she had distinct names for the two phenomena was not only useful as shorthand, but also helped keep the two related ideas separate in the minds of the listeners, ready to be added to their toolbelt the next time they read a story. So, if you ever wondered why so many people fuss over terms and meaning when it's clear that everyone understands what you mean – that's why. Words, and technical terms2 in particular, are ways to direct our thought process and raise our awareness of things. They also carry with them a plethora of meanings and associations. For instance, during the talk I was reminded of Wolfgang Iser's gap-filling, which is part of reader-response theory, and that immediately made it crystal clear that there is an important place for the “reader” who does the interpretation of the text and for the way they react.
All in all – A great talk to end the conference with. The only thing I’m missing is one of Marianne’s fabulous sketch-notes.

End the conference did I say?
Well, almost. We still had to grab dinner. I went to my room to rest a bit (it was a packed day, so I needed a few minutes to unwind). I then joined a very nice group – Lisi, Thomas, Lena, Marianne, Lisa, Santiago and Andrea – who were sitting and just chatting. It was a very nice way to say goodbye. We sat for about three hours and then it was time to go to sleep. After all, I had a plane to catch at a ridiculous hour. I did manage to say goodbye to a whole lot of other people who were playing board games.
And now (or rather, a few days ago, as I was writing most of this in the airplane leaving Orlando), the conference is over. I had a great time, and I have way too many people to thank for it to list them all here. Next time I’ll make sure to have some time after the conference. 

1 I usually match “fabula” with “syuzhet” (which I'm more comfortable spelling “sujet”), but Marianne was considerate enough to spare the audience more definitions to confuse them. In short, the fabula is the chronological order of events as they “happened” in the imagined world of the text. The sujet is the order in which events are presented to the reader, so “I fell after stepping on my shoelaces” and “I stepped on my shoelaces and fell” are the same fabula, but a different sujet. And yes, I had to go back to my class notes to verify that. A text is an instance of a literary creation – it is the book one reads.
2 When I say “technical term” in this context, I mean any word that has a specific meaning within a profession which differs from the common understanding, or that is not commonly used outside a specific jargon.

Friday, August 10, 2018

CAST, day 1

And what a packed day it was.
It all started with lean coffee facilitated by Matt Heusser, which was both enjoyable and insightful (the picture above is of the discussions we were having, taken by Lisa Crispin). My main takeaway from this session was the importance of being able to verbalize your skills to yourself, and to communicate them to others. Also, this was my first lean coffee where there was actual coffee.
Then, the opening keynote. Liz Keogh spoke about Cynefin and delivered a great talk. I had heard a similar version of this at ETC2017, but it did not matter very much. In fact, listening twice enabled me to better understand and process what she was speaking about. In short - developing software is in the complex space, so probe a lot and make sure that your probes are safe to fail. Also, use BDD but avoid tools such as Cucumber (BDD is about the conversation, not about the feature files).
After the keynote I went to a workshop on domain testing given by Chris Kenst and Dwayne Green. It's always nice to refresh the fundamentals, and to learn a new name for them (I was familiar with the concepts of equivalence classes and boundary value analysis, which are techniques within the space of domain testing).
During lunch I managed to talk a bit with some people, and then went to the lobby, where I met Alex and we talked about organizing your desktop in a way that should (we guess) increase productivity. What I really liked was that we actually started mocking up the screen layout we would want to see. It was very cool to watch Alex tear up some pieces of paper so that it would be easy to move them around. This sort of thing kind of makes me want to go out and figure out how to implement such a thing. The main challenge is that in order for such a solution to work, it must be ingrained in the OS in a seamless way, so that it will always be on top and manage the size of just about everything else. I wonder if Windows already offers such a thing.
The first talk I attended had a promising title about coaching and the GROW framework. It took me a while to realize that I didn't connect with the content and to move to another talk - "Don't take it personally" by Bailey Hanna. I got there just in time for the exercise. Not really knowing what I should do, my instruction was "be aggressive", and I do owe Polina another apology - I was very difficult.
After that, I went to Lisi's talk about her test journey. So far, I've listened to two of Lisi's talks, and they have been very dangerous to my free time. Lisi has a way of sharing her experience while showing her passion for what she did, and has a unique way of inspiring others to do the same. It was my favorite session of the day. Also, before having a chance to regret this, I agreed with Alex on pairing together, and we decided that by the end of August we will set up a time for a session.
My talk was up next, and I took my usual 5 minutes to stress out. The talk itself went OK, I think - by the end of it I felt as if I was pushing a bit hard to hold the list of ideas together as coherent a narrative as I could, but I wonder how many in the audience actually noticed. The open season was, as expected for the time and type of talk, awkward silence. My facilitator at the talk - the Friendly Richard Bradshaw - managed the amazing feat of wriggling some questions out of the audience, and had some interesting questions himself. After the talk I got some very kind feedback, which I greatly appreciated.

A surprise was set for the evening - after a short time to meet & mingle, we all (or, up to 100 of us) got on a bus and took off to the Kennedy Space Center. Rockets, space, astronauts, nice company (and even some food) - what more can one ask for?
We got back to the hotel and I joined a couple of quick rounds of a card game whose name I don't know, but which was nice to play. Tired, I returned to my room and started writing this post, which, as you can see, I did not manage to complete before the conference was over.
Still, a whole lot more was waiting for me on the second day, but that's for another post that I hope to get out soon - there's still a week of vacation ahead of me, and I intend to make the most out of it.

Wednesday, August 8, 2018

CAST - Tutorial days

CAST has officially begun!
(I'll be keeping these posts short, because otherwise I won't have time to write them)
Yesterday I attended Anne-Marie Charrett's workshop about coaching testers. It gave me a lot of material to think about - The main message I took from there is that coaching someone is very much like teaching, only you need to be constantly aware of the meta-processes in place and enable the coachee (not really a word, I know) to walk the path they chose and develop the skills they need.
We had some nice exercises (though, practice coaching through Slack wasn't really working for me, and I probably owe my pair an apology for being semi-purposefully stubborn).
Besides the workshop, there was some nice food, and even nicer people that are always interesting to converse with.
After walking to dinner (much less time than the day before, when we walked for 20 minutes, found out that the restaurant was closing early due to an emergency, and then walked 15 minutes further to another place), we played some board games I had never heard of before, and there was much rejoicing.
While some people decided to stay awake to watch the rocket launch (we are not far from Cape Canaveral), I was too tired and went to sleep (seriously - why launch stuff at 2 AM? Can't those people just move the galaxy a bit so that it will be at a more convenient time?).

Today was my day off - I did not attend any workshops, but instead took some time to go over my slides (still not done with that) and took a surfing lesson with Marianne. It was just immensely fun - all of that falling into the water and just barely being able to stay on the surfboard - I wasn't expecting to enjoy this as much, and yet I did.
Later today, I'll be going night paddling (it promises something with bioluminescence, so it should be cool), and I expect to be completely wiped out by the end of today, which appears to be my water-sports day.
So far - great time, great people.

Monday, August 6, 2018


So far, the conference experience is starting off great, as expected.
Yes, I know, the conference officially doesn't start until tomorrow's workshops, but as always, the conference is up once the people start to gather.
Hoping to have some time to fend off jet lag, I took an early flight, landing on Friday after a 12-hour flight to Newark, followed by a 2.5-hour flight to Orlando (which I nearly missed after not checking whether my gate had moved - it had). Then all I had to do was try to stay awake (my Fitbit did not detect any sleep during Thursday, which was the night of the flight; I believe I did manage to squeeze in a couple of hours), and hopefully that would take care of the jet lag. It seemed to work: I woke up at 7 AM after ~8 hours of sleep and set out to explore the location. Well, it's humid and hot, so walking about is not very pleasant, and Cocoa Beach is a rather dull place if you are not into surfing or simply staying at the beach. I spent most of my day out, then rented a surfboard just to see what I could make of it (never before had I held one of those, let alone tried to use one). I had a nice time, but the sea claimed my hat.
I got to, briefly, meet Liz Keogh and Darren Hobbs before being picked up by Curtis Petit and his family for dinner.

Yesterday, I woke up at 4:30 AM (jet lag? Or simply because I had slept my 6 hours? I don't know) and went to see the sunrise at ~6:00, where I encountered Anne-Marie Charrett. I then set out to find a new hat, and from there to see the Kennedy Space Center at Cape Canaveral. A tip for future visitors - Cape Canaveral is *not* where the space center is; it's over 12 miles to the north of it. I'll have to visit the Kennedy Space Center another time. 
In the evening I got to meet some wonderful people - some for the first time. I met Curtis again, along with Matt Heusser, Maria Kademo, Ben Simo, Liz and Darren, and we headed out for dinner.

So far, so good; looking forward to today.

Sunday, July 8, 2018

Things you can't polish until shine: A hostile reading in the 2018 ISTQB Foundation level syllabus

Before I get into the thick of it here, one thing is important for me to say: it is very easy to dismiss this syllabus with "it was written by people who don't understand the first thing about software testing and pursue endless bureaucracy as a hobby". That is not the case. At least in the list of reviewers and commenters I ran into several names of people I know and appreciate professionally. I assume the problems that bothered me have other causes.

Let's start with the bottom line - the new syllabus is no less terrible than the 2011 version. Despite attempts to hide the moldy smell by spraying around some "innovative" terms, the overall feel is identical, and the overall value of reading the syllabus is negative. 

I listened to Rex Black's talk about the new syllabus, and since he sounded fairly pleased with it, I decided it would be unreasonable of me to keep smearing the foundation certification without at least reading the updated syllabus. So I dedicated some time to it. How much? Not a lot. I forced myself to concentrate using Pomodoro, and altogether finished reading the syllabus within seven 25-minute tomatoes - that is, almost three hours. While reading I wrote notes in the margin - fifty-seven of them (over ninety-six pages, though I skipped the table of contents, version tracking and so on). Among those notes, in two places I noticed an improvement that justifies a mention here: a section was added (1.4.1) that talks about "the test process in context", and in section 5.3.2 - test report, there is recognition of the fact that a test report should be tailored to its target audience. The rest of the notes were rants, negative comments and reminders to myself to read things later (most, sadly, were negative comments). 
So what do I actually have against this syllabus? Well, I found a draft from 2016 that I never got around to publishing, but everything there is still true. In short, my problem lies on two planes: correctness and meeting expectations. 
Let's start with the expectations part. CTFL, beyond being a sequence of four letters, means "Certified Tester, Foundation Level". I don't know about you, but when I hear "certified" I expect someone with the skills required for a certain profession - a certified electrician should be able to replace a switch at home, a certified lawyer should be able to represent someone in court. The certified may not be the best around, but they should be able to do the basic work.
CTFL holders, on the other hand, arrive at the job market with no advantage over a random person off the street - they learned swimming by correspondence. They have never encountered a software project, nothing in the certification process requires practice or experience (most of the longer courses try to provide some sort of practice; the result, as far as I've seen, is laughable), and the correlation between the theoretical processes learned for the exam and reality isn't even coincidental - it's negative. 
Second, this certification is too easy. Forty multiple-choice questions? Of which a bit more than half are needed to pass? Most of the questions requiring a bit of memorization and nothing more? Whoever manages to fail should be ashamed and go sit in the corner. An easy certification has two drawbacks: it doesn't help outsiders distinguish those fit for the job from those who aren't, and it creates the mistaken impression that this profession is very easy. How easy is this certification? The new syllabus defines (in the introduction, section 0.7) a required minimum of just under 17 hours to cover the material (which is more than the 2011 syllabus required). Yes, less than seventeen hours. For comparison, an introductory computer science course - something whose graduates, without a doubt, cannot do much of anything related to programming - runs between 78 and 84 academic hours, which are 58.5 to 63 full hours, and I haven't yet counted the dozens of hours spent on practice and homework. Is software testing that much easier than writing code? I doubt it. 
So much for expectations. 
Now for correctness - something is rotten in the state of Denmark. I'm not talking about mistakes in small details such as the insistence on referring to "white box" testing as a "test type" (it's a technique for developing test cases, and with it one can derive tests of the two other types mentioned - functional tests, and attribute tests - a term I think I'll start using instead of "non-functional tests", which I don't like), I'm talking about a fundamental mistake in attitude. The syllabus treats the software world as a fairly homogeneous world with processes that fit everyone, and so tries to teach "what" instead of "why". The syllabus's general approach is commanding (or "instructive", but that word is ambiguous here) rather than discursive. In general, it seems the favorite connective of whoever is responsible for the text is "ideally" (for example, on page 23, in my free translation: "Ideally, there is bidirectional traceability between the test conditions and the covered items of the test basis"). Using such phrasings suppresses independent thinking and encourages blind obedience. Really, when I read in this document the erroneous claim "this syllabus represents best practices that have withstood the test of time" (p. 91, as part of the explanation of the need to update the syllabus), I begin to understand why Bach and Bolton get a stomachache whenever someone mentions that term near them - what's in the syllabus has not "withstood the test of time", it has grown moldy and fossilized. One of the basic requirements of a software tester today is to be able to explain why they are doing something, and to weigh the alternatives - there are projects where it is very worthwhile to invest a lot of work in up-front planning and in writing orderly documents - if someone could die in case of a failure, say. But there are so many projects where, by the time I'd write the required paperwork, the competitors would have already conquered the market, so why invest in writing paperwork no one will read?
In general, most sections in the syllabus are tagged K2, which is a level of "understand what it is". The level that could have made the certification meaningful, and should therefore have applied to most sections, is K4 (analysis), which isn't in the syllabus at all (in the 2011 version this level appeared in the context of white-box testing; in the current syllabus it is entirely absent). 
In short - I wasn't impressed by the change, not even a little. It is mostly cosmetic and touches none of the problems that were there. The only thing different is that the term "context" gets thrown around occasionally, without dwelling on what context means. In all honesty? I don't think anyone relying on this syllabus as a primary source of education would be able to recognize a context if they accidentally stepped on one. 

So how did this happen? How does such a large group of professionals produce a document that can only be described as "disgraceful"? I can only guess - what's happening here is an attempt to "fix" and "update". So paragraphs get touched up, a topic or two is added or removed, but the broken foundation remains - like a math exercise gone astray, we reached (long ago) the point where the only way to fix it is to erase everything and start over. 

I spent some time reading the 2018 ISTQB CTFL syllabus; here are my thoughts. 
Before I start, though, there's one thing I want to say: I went over the list of reviewers and found some names of people I know and really appreciate. The easiest thing to do with this syllabus is to dismiss it as something written by a bunch of detached, incompetent buffoons; this is not the case. The people I recognized are professional practitioners of testing, and they are damn good at what they do. I assume that the issues I have with the syllabus exist despite their involvement and not because of it. 

After listening to Rex Black's webinar about the new ISTQB CTFL syllabus and hearing how satisfied he was with it, I decided I could not go on smearing the CTFL program without at least reading the updates and seeing whether some of the issues I have were addressed. Short answer, for those not intending to read this long(ish) rant: the new syllabus is no less terrible than that of 2011.

When reading the syllabus, to keep myself on task and not wander off, I timed the reading using 25-minute Pomodori. Seven of them, to be precise (which amounts to almost 3 hours), and as I was reading I wrote down some comments for later. All in all, out of the 96 pages (including table of contents, references and appendices) I have 57 comments, mostly because I got to the point where I was saying the same thing over and over, so I narrowed my scope down to comments that would help me write this blog post. Out of those comments, 3 are positive to some extent, and two of them are actually worth mentioning here: the addition of section 1.4.1 "Test Process in Context", and a (rather trivial) recognition that the test report should be "tailored based on the report's audience" (page 72). The rest of the comments were rants, tasks and (mostly) negative comments.
All in all, the 2018 version of the syllabus, despite some glitter sprinkled on top and a buzzword or two thrown in to mask the moldy scent, is very much the same as the 2011 version in terms of both content and approach.

So, what do I have against the ISTQB CTFL syllabus?
Since I was certain I had already written something about it before, I went looking through my old posts. I found a forgotten draft from 2016; it's a bit old, but everything there is still relevant, so here's a short summary: I think that the syllabus does not live up to the expectations it creates, and is fundamentally incorrect.
The expectations part is the easy one - CTFL, besides being a four-letter word, stands for "Certified Tester, Foundation Level". I don't know about you, but when I hear the word "certified" I expect someone who can actually do the job they are certified for. A certified electrician should be able to change a fuse, and a certified accountant should be able to handle a small business's tax report. They might not be the best in their profession (after all, they are just starting), but they are more proficient than a random person off the street. The people "certified" by the ISTQB (disclaimer: I have the foundation diploma somewhere in my drawer) are the equivalent of people who learned to swim by reading a book. They have no real advantage over someone who is not "certified": they have never encountered a real software project, nothing in the certification process requires any practice, and the correlation between the material learned and reality isn't even random - it's negative.
The second thing is that the certification process is way too easy. 40 multiple-choice questions? With a passing grade of 26 "correct" answers? Where most of the questions require nothing more than memorization? Anyone who manages to fail this test should be ashamed of themselves. An easy certification has two main drawbacks: it fails to help people tell professionals from amateurs, or good professionals from less competent ones, and it promotes the idea that the certified subject is easy and not challenging. How easy is it? The 2018 syllabus defines a minimal learning period longer than that of 2011, and gets to the laughable number of 16.75 hours. Just for comparison, the course "Introduction to computer science" at the university takes between 78 and 84 academic hours (or 58.5 to 63 full hours) of frontal instruction (I've left out the significant time spent on homework), and no one assumes that after such a course the student is capable of any real programming work. Is testing that much easier than programming? I doubt it.

Now, for being incorrect. Something is rotten in the state of Denmark. No, I'm not speaking about small concrete mistakes such as referring to "white box" as a "testing type" (it's a technique to derive tests and create any of the other "types" of tests the syllabus mentions; once you have a test idea written down, it's not always possible to trace it back to the technique that was used to get to it), I'm speaking of an intrinsic flaw in attitude: the syllabus treats the world of testing as a rather homogeneous and well-understood space, and thus is prescriptive where it should be provoking discussion. It tries to teach what to do and skips almost entirely both the "why" and the "when not to". It seems that the favorite conjunction in this document is "ideally" (e.g., on page 23: "Ideally, each test case is bidirectionally traceable to the test condition(s) it covers". Really? Is this property still "ideal" in a project where the requirement is "make it work"? Or in a project that will have a significant makeover within six months?). Such language discourages thinking and creates the illusion that there is a "correct" generic answer. Check for instance this section in appendix C: "While this is a Foundation syllabus, expressing best practices and techniques that have withstood the test of time, we have made changes to modernize the presentation of the material, especially in terms of software development methods (e.g., Scrum and continuous deployment) and technologies (e.g., the Internet of Things)" [page 91, emphasis added]. Note how, despite saying "we changed stuff", the message is "those are eternal truths". When reading this I can really relate to the stomachache Bach & Bolton express each time someone mentions "best practices" in their vicinity. Most of the stuff in the syllabus has "withstood the test of time" by fossilizing and growing mold. 
One of the requirements of a software tester today (I would say "a modern software tester", but this term is better used to indicate this) is to be able to communicate why some activities are needed and what the trade-offs are. Yes, even at the foundation level, as so many testers find themselves the lone tester in a team or even in a small start-up company. There are cases where writing extensive documentation and planning well ahead of time is completely the right thing to do (for instance, if someone could die in case of failure), but in many other cases, by the time I'd be done creating my bidirectional matrices, my competitors would have already released similar functionality to the market and had time to revise it based on customer feedback. So, should I invest time in writing those documents no one will read?
Generally, most sections in the syllabus are labeled K2 or lower (K2 is defined as "understand", but it is more like "understand what a thing is" and not the complete grokking one usually associates with the term). The level that could have made this syllabus valuable is K4 (analyze, which was removed in the 2018 version and applied only to code coverage in the 2011 syllabus), with a minority of K3 (apply).
All in all, I was completely unimpressed by the 2018 syllabus. It does meet my expectations, but I'm very saddened by that. The changes are almost entirely cosmetic. The main difference is that the word "context" is thrown around a lot - and I don't believe anyone who learned from this syllabus would be able to recognize context if it punched them in the face.

So, how did this happen? How come a large group of involved, highly professional testers can put out such a shameful document, and even be proud of it? I can only guess - what I think happened is that the task at hand was to "update" or even "fix" the 2011 syllabus, so people got to updating paragraphs, fixing sentences or even completely rewriting an entire sub-section. But, as the saying goes, you can't polish a turd (actually, you can). Like a math problem that went astray, this syllabus got (a long time ago) to the point where the best option is to throw everything away and start over.

Thursday, June 28, 2018

So, do you have a backup?

Don't worry, I got your back(up)

I disappeared for a while, for a variety of reasons. One of them was that I was preparing my talk for CAST and delivered it at the TestIL meetup; I had fun, and I hope the audience did too. 
Another reason I disappeared is that my computer crashed. Just like that, one Friday morning, I turn on a computer that had been perfectly fine the day before (a bit more "done" than "fine", but working reasonably) and suddenly I get a message that no operating system can be found. Oh well, I got hold of an Ubuntu disc and booted the machine anyway to see which files could be saved. The short answer - none. Something in the hard drive was destroyed and the computer doesn't even recognize it. 
Now, a short exercise for the readers:
Close your eyes and imagine yourself in a similar situation: your main computer broke down / was stolen / got encrypted. You still have access to all the backed-up data stored somewhere other than the computer. What is lost beyond recovery? What matters to you but cannot be restored? What would simply take ages to restore?
Haven't closed your eyes yet? Now is a good time. 
My guess is you probably managed to find a thing or two, but most likely almost everything that would give you a headache is already backed up somewhere - in the cloud or on an external drive. That was my situation too: except for a presentation I had started working on but had not yet saved to Google Drive, I had a backup for everything I could think of - pictures from trips and music I transferred from old discs to the computer (plus some that arrived on my computer in the days of Napster and Kazaa, but don't tell anyone) are on an external drive (two, in fact), almost all the documents that matter to me are in my email, and the installed software can be downloaded again. Most of my browser bookmarks, too, are saved with Google by accident, after I once connected my account to Chrome and things synced before I could say Jack Robinson. The games I bought were via Steam or one of those stores, and my data is stored in the cloud, except perhaps saved games that may be stored only locally. All in all, great, no?
Still, I was a bit sorry about the three hours I would need to rebuild the presentation, and about losing an Excel file in which I track my car's fuel consumption. Not the end of the world, just annoying. Besides, there's a challenge here: a perfectly accessible hard drive that the operating system does not recognize. Data can't just vanish like that, right? What probably happened is that the first sector got corrupted, so the operating system doesn't know which bits are part of a file and which aren't.
So I downloaded a program called testDisk, meant for recovering partitions, and it turns out it can also recover lost files, up to a point, and suddenly - my whole drive was accessible again. Suddenly I discovered what else wasn't exactly backed up:
  • Do you remember which programs are installed on your computer? I got to thirty-four programs I wanted to reinstall, not including plugins for notepad++ or Chrome. A tour of the program files folders helped me find what I actively wanted to install.
  • %APPDATA% - with apologies to the Linux folks. Various installed applications save data as you work, and that data contains what really matters to you. For example, I remembered I had a Hebrew calendar installed, along with the birth dates of a few close friends. I don't remember most of their Hebrew birthdays, and it was very convenient to pull the relevant files out of the history. The same was true for ditto's database, where I keep various things that save me time. 
  • Things I left on the desktop - for example, I have a folder with photographs of poems, ones I cut out of a newspaper or photographed from one of my poetry books. I can probably restore most of them, but go remember what was there. Until I saw it, I didn't remember it was there.
  • Notes from university and other documents from My Documents - nothing there was urgent, but about once every six months I find myself recalling something that should be in those notes and digging through it. 
  • Other folders under the C drive - now and then there are things you have to pick a place for. For example, various books in PDF format that I bought or received. I'm fairly sure most of it is backed up, but maybe something was missed. 
In short - a variety of nice surprises awaited me when I started digging through the ruined drive, and I'm definitely glad I managed to recover most of the data from it (I haven't found anything else missing, but I assume I missed a thing or two). 
Now for the second exercise: wander through the five places I mentioned and compare the list of things about which you'd want to say "this goes to the next computer" with the list you built earlier with your eyes closed - how much longer is it?

Apart from that, does anyone know a convenient way to sync folders for backup? I don't want to keep everything in Dropbox, but I would be happy to set up a process that runs weekly and backs up all those little things whose backup I don't want to handle myself. 

I've been away for a while - for many reasons. One of which is that I was busy preparing a talk for CAST, and practicing it in a local meetup. I had a lot of fun, and I hope the audience enjoyed it as well. 
Another reason, which is also the reason for this blog post, is that my PC died. Or rather, my hard drive did: one day my computer works (sort of) fine, and the next I switch it on just to get a nice message that it cannot find an operating system. Oh well, I created an Ubuntu disk and managed to get the computer past the problematic point, just to see that the hard drive is not recognized. Something there is messed up. 
Now, a short exercise for the readers: close your eyes and imagine your main computer crashes in a similar way, or gets stolen or encrypted by ransomware. Your backups and online data are intact. How much data have you lost? 
Eyes still open? You can close them now.
My guess is that your answer would be "not very much" - while you may have found an item or two, most of the data you would expect to be missing in such a case is probably backed up either on the nebulous "cloud" or on a physical external drive (sometimes connected to a secondary computer). 
For me, the case was very similar: I had a presentation I had worked on for a few hours and had not yet saved to my Google Drive, but apart from that, I had almost everything backed up: music I ripped from discs purchased ages ago and probably lost by now (some of them might still be buried in a drawer at my parents' house), pictures I took on various trips. Most of the important documents can be found in my Gmail, my games are on Steam/Origin along with their save data (or, if it isn't there, it's not important to me), the software I had installed can be downloaded again, and even most of my bookmarks are stored on Google's servers after I once signed in to Chrome and, before I could say Jack Robinson, my bookmarks were synced (up until the point where I disabled that and logged out). All in all - not that bad, right?
Well, I decided not to give up, and downloaded a piece of software called testDisk, hoping to save myself the need to reconstruct the slides. Surprisingly, it worked. I then found that there was some other stuff I really didn't want to lose but had not thought about: 
  • Do you remember all of the programs installed on your computer? After browsing through the folder structure, with special care for the "program files" folders, I could list about 34 programs I wanted to install again (not including plugins for notepad++ or Chrome).
  • %APPDATA% (the Linux people will have to find the equivalent on their own) - some programs are valuable not for their functionality but for the data stored in them. I had a Hebrew calendar where some friends' Hebrew birthdays were stored, some of which I did not remember. I got these back by restoring the relevant files from %APPDATA%. The same goes for Chrome bookmarks, or ditto's database (where I keep some copied strings as a shortcut).
  • Speaking of that calendar, I encountered another problem - downloading it again was a bit challenging, as the official site has been "under construction" for at least a couple of years (according to the Wayback Machine). Installation files are not what I would normally bother to back up. 
  • Stuff I left on the desktop - a plethora of tiny things that are nice to have handy. I have a folder with poems I took out of some of my poetry books, or received by mail, or found online - it is backed up, but I'm not certain how up to date the backup is. 
  • "My documents" - while most of the documents there can be forgotten, some are in the category of stuff I remember once in a blue moon, recalling that I want to share some of it or read it again. My university notes are such a thing, and so are some documents with sentimental value. 
  • Other folders directly under C:\ - I found there a folder with PDFs I bought or received over the years (most of them RPG books, probably from a Kickstarter). 
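Rebuilding a reinstall list like the one in the first bullet can be scripted instead of done by eye: enumerate the top-level folders under the usual install locations, since each usually corresponds to one installed program. This is a hypothetical sketch, not anything I actually ran - the `installed_programs` helper and the hard-coded paths are my own assumptions, and the "Program Files" locations vary by machine:

```python
from pathlib import Path

def installed_programs(roots):
    """Collect the names of top-level folders under each install root.

    Each folder under a "Program Files" root usually corresponds to
    one installed program - good enough for a reinstall checklist.
    Roots that don't exist (e.g., on a dead drive) are skipped.
    """
    names = set()
    for root in roots:
        root = Path(root)
        if root.is_dir():
            names.update(p.name for p in root.iterdir() if p.is_dir())
    return sorted(names)

if __name__ == "__main__":
    # Typical Windows install locations (assumed; adjust per machine).
    for name in installed_programs([r"C:\Program Files",
                                    r"C:\Program Files (x86)"]):
        print(name)
```

Running it against the recovered drive's folder tree (mounted read-only) would produce the same checklist without clicking through Explorer.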
Now, for the second part of the exercise: go over those folders on your computer and ask, "what would I like to keep that wasn't on my list?"

And now to the question I have for you - do you know a tool that allows syncing a folder to my preferred backup location? Updating backups manually is not a real option; I would rather be able to click something and have all of those small things backed up for me. I think I'll give cwRsync a chance.
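In the meantime, the weekly "sync the small stuff" job can be approximated with nothing but the standard library. A naive sketch (the folder list and destination below are placeholders, and unlike rsync/cwRsync it re-copies everything on every run rather than only changed files):

```python
import shutil
from pathlib import Path

def backup(folders, destination):
    """Mirror each folder into destination/<folder name>.

    A naive stand-in for rsync: copies the full tree on each run,
    overwriting files that already exist in the backup. Folders
    that don't exist are silently skipped.
    """
    dest = Path(destination)
    dest.mkdir(parents=True, exist_ok=True)
    for folder in folders:
        src = Path(folder)
        if src.is_dir():
            shutil.copytree(src, dest / src.name, dirs_exist_ok=True)

if __name__ == "__main__":
    # Placeholder paths - the "small things" from the bullet list above.
    backup([r"C:\Users\me\Desktop", r"C:\Users\me\AppData\Roaming\ditto"],
           r"E:\weekly-backup")
```

Making it run weekly is then just a matter of the OS scheduler (Windows Task Scheduler, or cron on Linux). Note that `dirs_exist_ok` requires Python 3.8+.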