Software development by analogy

Analogies are useful for getting into the right mindset about how things work. They are even more useful when you’re trying to explain software engineering to a non-technical person.


People tend to think that building software involves some kind of black magic. In reality, the process is far more methodical, but it’s difficult to explain to any outsider exactly what we do.

It’s really hard to find an accurate analogy because software development is not as deterministic as a manufacturing process like crafting chairs, but it’s also not as open-ended as writing a book or painting.

We also tend to think our job is mostly coding and other technical work, but the truth is that the software development process is more about successful interactions between clients, management, and software engineers than anything else.

One useful analogy is civil engineering. It’s simple enough that anyone can understand it, yet technically consistent enough to be accurate.

Key Similarities

  • You cannot estimate the real effort of a project without doing some work first. You cannot walk into an engineering firm and simply ask, “How much will it cost to build a bridge like the Golden Gate?”.
    An experienced engineer might produce a rough figure using the original cost of the bridge, some similar ongoing project, and a few other variables, but they know these kinds of estimates are unreliable at best.
    Yet people casually ask, “How much will an app like Uber cost?”, and most of us want to bang our heads against the wall trying to explain that this is not how it works.
  • Most estimates are bullshit, and you’ll probably need a lot more time and money than planned. This is well known in software. Many like to pretend estimates are written in stone, but we know they are at best an educated guess. The same happens in the construction business, but there they usually don’t pretend to know exactly what they are estimating.
    For example, the Berlin Airport was estimated in 2006 to cost €2 billion and was expected to be finished in 2012. This airport is still not finished: it’s now expected to open on 31 October 2020 and will cost at least €6.5 billion to complete.
  • Changing things in the middle of the project costs a shit-ton of money and time. This is another fact that many non-technical managers, product owners, and regular folks like to pretend doesn’t exist. It’s why, when you’re building your home, you don’t suddenly add an extra floor after 70% of the project is already done. Yet in the software industry we like to pretend it’s no big deal, and completely normal, to add or remove 50% of the original requirements ten months into the project.

Some Limitations

Like any abstraction, the analogy is accurate only to a certain degree, and there are some big differences we’ll need to take into account.

  • Stability of the requirements. As we saw, changing requirements mid-construction is sometimes prohibitively expensive and other times borderline impossible; that’s why you wouldn’t turn a house into a skyscraper halfway through the project. In software development, that’s common practice.
  • Stability of prices and scale. Construction has some major advantages over software development. Most construction materials have stable, known prices and can be bought in bulk, even with leasing plans. The scale of the project is also known beforehand, so you can get a reasonably accurate estimate of the workers and machines you’ll need.
  • Documentation. Even a small project like a house requires a lot of documentation and calculations; you’ll need at least three different architectural plans. In most software projects, documentation is non-existent or completely obsolete because it was never updated after the coding phase began.


Is there any place for monoliths in 2020?

For more than 5 years, we have been in the age of microservices. Monoliths sound like 90’s tech, something we should be embarrassed about still having. We’ve seen countless discussions about microservices vs monoliths. We all remember the epic talks by engineers and CTOs from Netflix, Spotify, and other big players, with buzzwords thrown around every five minutes: scalability, ownership, DevOps, redundancy, and more.


So what happened in the last five years? Microservices became the one-size-fits-all solution for every technical problem. But it wasn’t only a trend for startups and a catchy phrase for TED talking gurus. Even traditional corporations like Coca Cola and BestBuy are moving all their infrastructure to microservices.

So… there must be a catch, right? It’s time we had a serious, bias-free discussion about this. More than enough time has passed and the topic is reasonably mature: we can find plenty of failures and successes, and even detailed post-mortems written by the people who lived them.

Pros and cons

As we stated earlier, this topic has been analyzed over and over, so we’ll just do a quick recap with no further explanation, for simplicity’s sake. Feel free to skip this part if you’re already familiar with the arguments.



Microservices pros

  • Horizontal scalability
  • Code reusability
  • No single point of failure / fault-tolerant (if done correctly)
  • Elastic (if done correctly and placed on an elastic infrastructure)
  • Can greatly reduce hosting costs
  • Versatile: any microservice can be written in any language if the right guidelines are in place
  • Infrastructure can be distributed “easily” between servers, cloud zones, etc.
  • Low coupling between components


Microservices cons

  • Steeper learning curve
  • Requires more experienced developers
  • The cost/time of migrating to a microservice architecture can be huge
  • Testing can become hell if not handled properly
  • Requires a lot of supporting infrastructure to be implemented correctly (CI/CD, monitoring, etc.)
  • Cognitive load: microservices can become a complicated mesh and introduce a lot of new problems inherent to the architecture (concurrency, service discovery, data consistency, load balancing, etc.)



Monolith pros

  • Simplicity
  • Less boilerplate code needed to start a new project
  • No special knowledge required; every developer is used to working with monoliths, which makes hiring easier
  • Business requirements are usually easier to implement
  • Straightforward to test from the beginning
  • Faster time to market if the requirements are concrete and well known


Monolith cons

  • Expensive to scale
  • High coupling between components
  • Scalability faces both software and hardware limitations
  • Development environments can become huge over time
  • Can be hard/expensive to migrate to a cloud environment
  • Can be impossible to scale if there’s too much technical debt in the codebase


To be honest, you probably don’t need microservices if you’re not Google-scale or expecting to grow insanely fast like Netflix, and you surely don’t need to migrate your whole infrastructure to microservices right now.

It may sound a little cliché, but you need the right tool for the job. Most companies, even crazy startups, won’t see the full benefit of implementing microservices, yet will have to deal with all the difficulties they can create.

There’s also a kind of “conflict of interest” between developers and management on this topic (as on many others). Developers always want to try the new flashy thing (and we have to admit we all like to brag about using the latest trendy framework/tech), and they want to add all those beautiful buzzwords to their resumes.

Management, on the other hand, mostly sees risk in changing the whole architecture (with some common sense) and almost no tangible benefit; what management wants is fancy new features it can sell to clients (or to upper management). All of this is a recipe for a perfect disaster.

Moving to microservices must be done almost perfectly to be worth it. Even experienced developers can lead a decent monolith into microservice hell. Almost all failed attempts end up with the worst of both worlds, an anti-pattern known as the Distributed Monolith.


If we analyze most of the successful migrations to microservices, they were driven almost exclusively by necessity rather than preference. Most of the big early adopters (like Netflix) couldn’t find a way to achieve their business goals with a monolithic architecture.

Nor is necessity a recipe for guaranteed success. If you can’t design and implement a modular, robust monolith, what makes you think you’d do better with microservices?

Like almost everything in life, the sweet spot is somewhere in the middle. The ideal scenario is designing a monolith modular enough that it can eventually be divided into smaller pieces.

Following that approach, you can start by migrating big modules into stand-alone services (not microservices), gradually turning your monolith into a Service-Oriented Architecture (SOA) as needed. Over time those services can be split into smaller ones, and eventually there will come a point when you unavoidably have to deal with all the microservices problems (service discovery, load balancing, consistency, concurrency, latency, etc.).
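The modular-monolith idea above can be sketched in code. One common way to keep a module extractable is to have the rest of the application depend on a narrow interface rather than on the module’s concrete class, so the in-process implementation can later be swapped for a client that calls a stand-alone service. This is a minimal illustration, not a prescription; the `BillingService` interface and the `checkout` flow are invented names for the example.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Invoice:
    order_id: str
    amount: float


# The rest of the monolith depends only on this interface,
# never on a concrete billing class.
class BillingService(Protocol):
    def create_invoice(self, order_id: str, amount: float) -> Invoice: ...


# In-process implementation used while billing still lives inside
# the monolith.
class LocalBilling:
    def __init__(self) -> None:
        self._invoices: dict[str, Invoice] = {}

    def create_invoice(self, order_id: str, amount: float) -> Invoice:
        invoice = Invoice(order_id, amount)
        self._invoices[order_id] = invoice
        return invoice


# Later, the same interface can be backed by an HTTP client for an
# extracted billing service; none of the callers need to change.
class RemoteBilling:
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url  # hypothetical service endpoint

    def create_invoice(self, order_id: str, amount: float) -> Invoice:
        # Sketch only: a real client would POST to the billing service.
        raise NotImplementedError("extracted service not deployed yet")


def checkout(billing: BillingService, order_id: str, amount: float) -> Invoice:
    # Order-handling code is written against the interface,
    # not the implementation, so the split is a wiring change.
    return billing.create_invoice(order_id, amount)


invoice = checkout(LocalBilling(), "order-42", 19.99)
```

The point is that the boundary (what crosses the interface, and in what shape) is decided while everything is still one deployable unit, which is much cheaper than discovering it after the network is already in between.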

Above all, this approach is much more humane. Teams can transition gradually into the microservice ecosystem without the hassle of hiring 200 new specialists, and priorities can be managed according to the business requirements.

As a final word, this is just one approach and nothing here is written in stone. Many authors claim it’s impossible to seamlessly transition a monolith into successful microservices, and that you’ll eventually need to rewrite everything from scratch.

That’s debatable, but the main point is that reality rarely gets along with absolutes. Every problem needs a different degree of granularity, so maybe there’s no point in continuing to discuss microservices vs monoliths.