Is this the age of throw-away software systems?

In my ~20 years as a Software Developer and Architect, it has always been unquestionable for me to aim for high quality in my software projects. I have especially enjoyed learning about Clean Code over the last 6 or 7 years. But recently I had a discussion with another external consultant about software development processes and practices that showed me a completely different view. A view that might indeed correspond to reality, but one that strikes me as undesirable and ugly…

So that colleague tried to convince me that my quality-focused approach is neither what customers want these days nor what they need. His thesis consists of the following aspects:

Throw-away code: Just make it work – usually quick, dirty and cheap

In order to achieve a shorter “time-to-market”, you set software quality aside because it’s not what the customer is interested in. The customer doesn’t really see what level of quality the software actually has, as long as it just works. So the only three things the customer cares about are:

  • How fast can I get it?
  • Does it work?
  • How much does it cost?

Throw-away software systems: systems are no longer targeted to be long-lived solutions

According to him, the lifecycle of software systems is nowadays much shorter. Once the business is no longer happy with a system, we rewrite it. In his opinion, it also doesn’t make sense to invest in high-quality solutions, because requirements nowadays change so fast that it is simply not worth the effort. So he doesn’t expect the lifecycle of an average software system to exceed 3–5 years in most cases. During that time, the customer will be able to live with a slowly degrading solution as long as it still works at least somehow.

Throw-away requirements: “Agile” forces us to implement incomplete and unapproved requirements

In my ex-colleague’s opinion, the agile software development approach allows and even forces us to implement incomplete and unapproved requirements. In such cases, you create a working hypothesis from what you already know and supplement it with some best guesses in order to get to an initial solution. If the finalized requirement differs too much from what you have guessed and implemented, you throw away what has been done so far. His assumption is that this doesn’t happen very often, maybe in 20% of the cases. For the rest, it will be possible to finalize the features with some simpler refactorings. Anyway, he assumes this approach will also help speed up the time-to-market, because the ratio of good to bad guesses should be roughly 80:20.

Microservices – the throw-away architecture?

Another aspect that came to my mind after the discussion is that, at first glance, the Microservice architecture seems to lend itself to the quick-and-dirty approach, because Microservices can be thrown away with less pain and rewritten much faster than traditional, monolithic systems.

Conclusion

I’d really like to hear what you think about all that. Do I really have to redefine my working attitude? Are we already living in the throw-away IT society? Is it more professional to be able to create quick-and-dirty solutions that just work, instead of providing a clean solution that takes more effort upfront? Does “Agile” really mean working with throw-away requirements? Do we really have to sacrifice qualitative requirements engineering by just quickly putting together a few user stories and writing new stories later on – once we know more about the topic – just to fill the backlog and keep developers busy? Do we really sacrifice a clean software development approach for the sake of the (potentially?!) quickest time-to-market possible? Do you actually throw away Microservices in reality after 3–5 years?

To be honest, for me the answer to almost all of these questions is still “No”. Maybe it’s not all black or white, but quality must still be the foundation. We have to keep convincing customers that they will benefit from quality. And the longer a system runs, the more important quality becomes; that’s obvious. We shouldn’t mix up changing requirements with intentionally guessing missing parts just to keep developers busy. The first is inevitable, while the second should be done only to a very limited extent.

And I believe that what Uncle Bob said remains true even in a Microservice architecture: “The only way to go fast is to keep the code clean”. Developer productivity will start degrading sooner than project managers expect (in fact, they do not expect it at all; instead, they always assume the best case, even if the project history has already proved the opposite several times).
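To make that point a bit more concrete, here is a hypothetical illustration (the function names, the VAT rate, and the cart data are my own invention, not from any real project): the same tiny feature written “quick and dirty” versus cleanly. Both produce the correct total today, but only one of them is cheap to change when the next requirement arrives.

```python
def total_qd(items):
    # Quick and dirty: magic number, implicit assumptions about the tuples.
    t = 0
    for i in items:
        t += i[1] * i[2] * 1.19  # what is 1.19? what are i[1] and i[2]?
    return t


VAT_RATE = 0.19  # the assumption is now named and lives in exactly one place


def total_clean(items, vat_rate=VAT_RATE):
    """Gross total of (name, unit_price, quantity) line items."""
    net = sum(unit_price * quantity for _, unit_price, quantity in items)
    return net * (1 + vat_rate)


cart = [("book", 10.0, 2), ("pen", 1.5, 4)]
# Both agree today: 2*10.0 + 4*1.5 = 26.0 net, 30.94 gross at 19% VAT.
assert abs(total_qd(cart) - total_clean(cart)) < 1e-9
```

The quick version “just works”, which is all my colleague’s customer asks for. But the moment the VAT rate changes or a discount field is added to the tuples, the clean version is a one-line edit while the dirty one requires an archaeology session first, and that is exactly the slowdown project managers don’t budget for.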

What we IMO need is more awareness at the project-manager and middle-management level concerning time schedules. Unrealistic time pressure is IMHO one of the key reasons projects are driven into chaos and quick-and-dirty solutions, but that’s a topic for another blog.
