Friday, May 3, 2024

Book review: Modern Software Engineering


This is a great book. It opens by presenting a thought-changing idea and then proceeds to more familiar ground, repainting it (for the better) in light of that first idea.

Let's start by ruining the surprise. Farley suggests that we've been using the term "software engineering" all wrong, and that this has been wreaking havoc both on the way we think about creating software and on the way we work. He claims that, contrary to a popular approach among professionals, software should not be treated as a craft, but rather as a field of engineering. No, you can put your automatic objections aside, not *this kind* of engineering. His twist, and what creates the "aha" moment, is the observation that there are two types of engineering: production engineering and design engineering. The first, which is the mold we've been trying to cram software creation into, deals with the following problem: once we've designed a product, how can we create it with high precision and economic efficiency? The second type, as the name hints, addresses the question of how we design a product in the first place. When we think of engineering, we usually think of the former, which is quite different from the latter. Design engineering is a discovery process, and it should be optimized for learning and for maximizing the effective feedback we get from our efforts.

That's it. That's the one new idea I found in this book. Sure, it's more coherent and detailed than the short summary here, but the idea is quite powerful once we grok it. It also aligns quite well with "real" engineering: once we make the separation between design and production, it becomes evident that aiming for predictability and repeatability in the design phase is irrelevant and even harmful. The author even points to physical engineering endeavors such as SpaceX choosing the material for their rockets' bodies, where after doing all of the calculations and computer simulations, they still went and blew up some rockets to see if they got it right (or, as it is more properly stated, they experimented to test their hypotheses and to gather more data).

Once the reader is convinced that it's appropriate to treat software creation as a proper engineering field (or has abandoned the book), the rest is advice we've heard a million times about good practices such as CI/CD, TDD, and design principles such as separation of concerns and cohesion. What's new is that we now have the language to explain not only why these practices work in the examples we can share, but also why they are the right approach in the first place. If we are participating in a design engineering effort, it follows that our main obstacle is complexity. The more we have to keep in our heads and the more our system does, the harder it is to understand, change, and monitor. To deal with this complexity we have two main strategies that work well in tandem: speed up our feedback, and encapsulate complexity behind a bunch of smaller modules.
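To make both strategies a bit more concrete, here's a toy sketch of my own; it's not from the book, and every name in it is invented. The pricing rules hide behind one small function, and a fast test gives feedback in milliseconds instead of after a full system run:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class LineItem:
        unit_price: float
        quantity: int

    def order_total(items: list[LineItem], discount_rate: float = 0.0) -> float:
        # The discount rules live here and nowhere else; callers only see a number.
        subtotal = sum(item.unit_price * item.quantity for item in items)
        return round(subtotal * (1.0 - discount_rate), 2)

    # Fast feedback: a millisecond-scale test instead of a full deployment.
    def test_order_total_applies_discount():
        items = [LineItem(unit_price=10.0, quantity=3)]
        assert order_total(items, discount_rate=0.1) == 27.0

Callers never learn how discounts work, and when the rules change, the blast radius stays inside this one module.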

As can be expected from such a book, it refers a lot to "quality", a term I gave up on as being unhelpful. Following "Accelerate", the author has a nice way of circumventing the problem of defining quality. Specifically, he refers to two measurable properties, speed and stability, as a "yardstick" that is not the ultimate definition of quality, but rather the "best we currently understand" (in "Accelerate" terms, speed is captured by deployment frequency and lead time for changes, and stability by change failure rate and time to restore service). I like this approach, because it provides a measurement that is actionable and has real business impact, and because it helps counteract the gut instincts we have when we use poorly defined terms taken from other fields (or, as I might say after reading this book, from production engineering).

There are some points mentioned in the book that I believe are worth keeping in mind, even if you don't get to read it:

  • The main challenge in software is managing the complexity of our product: both the complexity inherent to our business, and the complexity created by the environment our software operates in.
  • To get great feedback from evaluating our system, we need precise measuring points and tight control over the variables involved.
  • Dependency and coupling cause pain. It doesn't matter if it's a piece of code that depends on another or two teams that need to synchronize their work. While coupling can't reasonably be avoided, it should be managed and minimized (see the sketch after this list).
  • You don't get to test your entire product of dozens of microservices together before production. Deal with it and plan for it; trying to do otherwise will only make your inter-dependency problem worse.
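To illustrate the coupling point, here's another toy sketch of my own (again, not from the book, and all names are invented): the reporting code depends on a small abstraction rather than on a concrete mailer, so the two sides can evolve and be tested independently.

    from typing import Protocol

    class Notifier(Protocol):
        def send(self, message: str) -> None: ...

    class EmailNotifier:
        def send(self, message: str) -> None:
            print(f"emailing: {message}")  # stand-in for a real mail client

    def publish_report(summary: str, notifier: Notifier) -> None:
        # publish_report knows nothing about email; a fake notifier for tests
        # (or a Slack one later) requires no change here.
        notifier.send(f"report ready: {summary}")

    publish_report("Q2 numbers", EmailNotifier())

The dependency hasn't disappeared, but it's been narrowed to one tiny interface, which is about the best "managed and minimized" can mean in practice.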

One thing I found a bit odd was the claim that unit tests are experiments. For me, this is where the analogy breaks down a bit. An experiment is meant to increase your knowledge, usually by trying (and hopefully failing) to disprove a theory. The theory "I did not break anything else when writing my code" is not the kind of theory I would consider interesting. If old tests are related to experimenting at all (and I can probably accept the claim that new tests are a sort of experiment), they are more like the measurements taken when manufacturing something: after the design is done, we still run tests as part of quality control, and we still measure that we've put everything exactly in place. Calling old unit tests "experiments" sounds a bit pompous to me. But then again, it's OK if the analogy is imperfect; software engineering is a new kind of engineering, and just as chemical engineering is different from aerospace engineering, not everything falls exactly into place. The analogy does tell a compelling story, and that can be more valuable than accuracy.

All in all, I highly recommend that anyone dealing with software read this book.
