Why unions won’t work for developers and could even be harmful


The unionization of the tech industry, especially developers, becomes a hot topic from time to time. There's usually a "no-brainer" argument claiming that unions and Collective Bargaining Agreements (CBAs) are magic bullets that will end exploitation and greatly improve the benefits of software-industry employees.

But is it true? What does the empirical data tell us? Let's find out.

Historical Background

The concept of practitioners of the same craft banding together to defend their trade is almost as old as organized human activity itself. These structures peaked in the Italian Renaissance with the creation of official trade guilds, but modern unions as we know them became widespread only after the Industrial Revolution of the 18th century, and they remained illegal in most countries until the beginning of the 20th century.

This was the first time in history that labor was standardized and serialized at a massive scale to fit the requirements of the newly created factories.

The inevitable categorization of similar jobs into well-defined specifications made it easy to quantify how difficult and how well compensated each task was at a given factory.

Management at this stage was reduced to trial and error, so working conditions and compensation varied widely from one factory to another. 18-hour shifts, child labor, slavery, and fatal accidents were commonplace.

At this point, unions really were a no-brainer, and they began flourishing at an explosive rate, bringing massive improvements in working conditions at factories. As usual, compensation still lagged behind, but the trend was clear.

By 1920, unions were established across most of the developed world, and international labor organizations like the World Confederation of Labour were being created in Europe. The end of the Second World War marked the beginning of the golden age of unions, which lasted until the late '70s.

From then on, the growing number of free-market countries, globalization, and the general decline of manufacturing industries drove a steady reduction in unionization rates across the world.

As we saw, most of the concepts behind unions were baked in at least 100 years ago. They were a perfect fit to level the playing field for a big, homogeneous, unskilled workforce that performed mainly physically intensive and repetitive tasks.

These regulations provided a rock-solid framework for almost all industrial jobs, but they were designed in a world in which most modern jobs didn't even exist.

The tech industry case


Technology workers, and especially software developers, have more in common with medieval stonemasons than with the unskilled factory workers of the '50s.

A stonemason's job was a delicate balance between art and engineering. They designed, planned, and built the biggest projects of their time, mostly cathedrals and castles, many of which still stand today.

Each project was unique in scale, features, and requirements; and the masons' skills and qualifications varied a lot, as did their compensation. Even so, all the great stonemasons managed to be backed by a worldwide guild. So if you were fucked over by a medieval lord or noble, you could be sure he would never again find a good stonemason to finish his project.

Software Development is not a lumbermill

If you've worked long enough in this industry, you probably know there's no standard employment contract for software developers. Each one is sui generis, almost always defined on a company-by-company basis.

The same position could pay 5x more at Company B than at Company A, and compensation is only the tip of the iceberg. Some companies are remote-first, some are office-only (probably not anymore). Some pay bonuses, others don't. Some pay in stock, some don't.

You can now begin to notice the trend... There's no one-size-fits-all, and that's a good thing. It means we are free to negotiate the deal we want.

Unionizing and things like Collective Bargaining Agreements sound amazing at first, but once you dig a little, you realize they are not compatible with our industry at all.

Do you want some examples?

  • #1 Did you strike an amazing deal with 90% stock options but a really low salary for a year? No way, the CBA sets a maximum of 25% of compensation in stock options and other bonuses.
  • #2 You managed to negotiate a 4-day workweek of 10 hours each to get 3 free days? No way bro, the CBA caps regular working hours at 8 per day; the company has to pay the rest as overtime with a premium, so it's no longer viable for them.
  • #3 Do you want to casually freelance as a side gig? No way, the CBA establishes that you must be hired for a minimum of 40 hours a month to prevent labor precarization and unfair competition.
  • #4 Do you want a fully remote position? Keep dreaming; now the company has to make your home office compliant with all the industrial regulations of a regular workplace before it can offer remote work as a benefit. Otherwise, it would be liable if you happened to fall off your chair and break your wrist.

Disclaimer: these examples are not based on actual US regulations, but on real regulations from places with high unionization rates, such as parts of Europe and South America.

Yeah, CBAs and unions can work for factory jobs to prevent 12-hour shifts on well-defined tasks, but the moment you try to apply the same formula to high-skilled, creative jobs, the thing gets perverted as fuck in no time.

The implications of strict labor regulation seem like an abstract subject, but the world is full of failed examples. Do you want to see a monstrosity at play? Research the Argentinian labor legislation and its effects on the labor market over the last 50 years.


Unions are not bad per se, but they clearly do not fit in tech. People need to be aware of it. If this unionizing trend keeps growing in the software industry, the long-term effects will be catastrophic. 

These concepts can be deceiving at first. In the short term, the general level of wages can rise, but in the long run it will only destroy jobs, cripple the industry's innovation rate, and kill the flexibility and perks that workers in every other industry crave and envy.

Please don't take my word for it; be sure to do a little digging yourself. You need to know that this union agenda is not a naive, grassroots movement of software workers. It's pushed by nefarious actors to endanger the leverage we worked so hard to obtain in our jobs.

Our industry is not perfect by any means; there are lots of shitty companies that exploit their workers with low pay and long hours. But still, we are in a much better position than the obsolete, slave-driving industries where unions thrive to this day. Do not be fooled by these actors, whose only objective is to get a cut of the juicy tech salaries by imposing union fees.

Hungry for more articles? Try reading one of the previous ones:

Why you shouldn’t care about Bitcoin’s future


On every prolonged Bitcoin bear or bull run, the same set of questions floods the internet: Is BTC dead? Is BTC the future? Will BTC replace the banks? Is BTC a bubble?
This already happened during the Bitcoin rush of 2017, and it will likely keep happening for a while.

I'm in no way a cryptocurrency expert, but I'm a serious enthusiast and I've been following Bitcoin and other cryptocurrencies almost since the beginning.

For a couple of years, I've seen a persistent trend of people discussing crypto as if BTC were the final stage of it all. From my perspective, this is blatantly wrong, and I'll explain why.


The famous white paper "Bitcoin: A Peer-to-Peer Electronic Cash System", published by Satoshi Nakamoto in 2008, set in motion an unexpected and unprecedented global phenomenon. It created the first successful digital currency and the first decentralized, distributed "banking" infrastructure in the world, through the blockchain technology also described in the original paper.
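The core blockchain idea from the paper, each block committing to the hash of the previous one so history can't be silently rewritten, can be sketched in a few lines. This is a toy illustration only; the block fields and names are simplified inventions, not Bitcoin's actual format:

```python
# Toy hash chain illustrating the blockchain idea: each block commits to
# the previous block's hash, so tampering with history breaks the chain.
# Simplified sketch, not Bitcoin's real block or transaction format.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    return {"prev_hash": prev_hash, "transactions": transactions}

def chain_is_valid(chain: list) -> bool:
    # Every block must reference the actual hash of its predecessor.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

genesis = make_block("0" * 64, ["coinbase -> alice"])
b1 = make_block(block_hash(genesis), ["alice -> bob: 1 BTC"])
chain = [genesis, b1]
print(chain_is_valid(chain))  # True

# Tampering with an old block invalidates every later link.
genesis["transactions"] = ["coinbase -> mallory"]
print(chain_is_valid(chain))  # False
```

In the real network, proof-of-work makes rewriting those links prohibitively expensive; here the point is only that the links exist.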

In the beginning, Bitcoin was taken as a joke by most people and as a wild cypherpunk dream by others. All the technical and economic concepts were already there, working as intended, but as of 2009-2010 there was no real-world application for this new cryptocurrency.

This changed when the black market "Silk Road" hit the mainstream circa 2011-2012. This website took two of the hottest trends (the Tor network and Bitcoin) and mixed them to achieve the "perfect" infrastructure for an illicit online marketplace.

After this event, it was proven in the wild that Bitcoin worked and was not the crazy side project of a bored crypto-anarchist. This came at a cost, too: the mainstream media immediately associated Bitcoin with crime and illegal activity. This biased view persists in the minds of some journalists, politicians, and businessmen.

When the Silk Road scandal broke, this was probably true for the most part. The illegal marketplace was the most prominent use of cryptocurrencies at the time, but only a couple of years later it had become a cliché. In the present day, the vast majority of Bitcoin transactions originate from legitimate sources.


Now Bitcoin is more mainstream than ever, and it's still the undisputed king of cryptocurrencies, with a market cap of $732.18B. So it's very tempting to get blinded by the stats and believe the whole future of crypto depends on the fate of Bitcoin, but from my point of view this is incorrect. Bitcoin has already fulfilled most of its original objectives and is technologically showing its age.
What Bitcoin achieved:

  • Proving that a decentralized network with no third-party authorities for digital currencies is not only viable but profitable and sustainable.
  • Breaking the psychological barrier between physical and digital money.
  • Implementing a censorship-resistant payment network for countries and regions with authoritarian governments and/or extremely volatile fiat currencies, through decentralization and pseudo-anonymity.
  • Introducing blockchain technology not only to the financial system but also to very unexpected applications like medicine and supply-chain management.
  • Being viable as a store of value, in spite of claims of extreme volatility and of Bitcoin having no "real" value.

What Bitcoin didn’t achieve:

  • Replacing traditional digital payment systems.
    • The transaction cost is absurdly expensive for small transactions, and even mid-sized ones when fees are high.
    • Transactions are slow. Blocks arrive roughly every 10 minutes on average, so the standard 3-confirmation wait typically takes tens of minutes and fluctuates widely.
    • Bitcoin as-is does not scale to a global payment system. It is still far worse than traditional credit card infrastructure like VISA, which runs on technology dating from the '80s: more expensive, slower, and not remotely as scalable.
    • Bitcoin processes about 4.6 transactions per second, a laughable number compared to the more than 1,700 VISA handles every second. This is not even a serious competition.
  • Being truly anonymous and privacy-friendly. It's true that no Bitcoin wallet is directly associated with any individual, but every transaction of every wallet is public and easily traceable. So, with enough time, transactions can be linked to a single person or internet user using things like operations with the known wallets of public exchanges and the timestamps of every transaction.
    • This has been used many times to track individuals; it was famously used to determine that corrupt federal agents stole bitcoins from Ross Ulbricht during the Silk Road bust.
  • Being a sustainable financial infrastructure.
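The ~4.6 tx/s ceiling quoted above falls straight out of the protocol's constants. Here's a back-of-the-envelope check; the block and average transaction sizes below are rough approximations I'm assuming, not exact figures:

```python
# Back-of-the-envelope check of Bitcoin's throughput ceiling.
# All figures are approximations: block and transaction sizes vary.
BLOCK_INTERVAL_S = 600        # target: one block every ~10 minutes
BLOCK_SIZE_BYTES = 1_000_000  # ~1 MB legacy block size limit
AVG_TX_BYTES = 380            # assumed rough average transaction size

tx_per_block = BLOCK_SIZE_BYTES / AVG_TX_BYTES
tps = tx_per_block / BLOCK_INTERVAL_S
print(f"{tps:.1f} tx/s")  # ≈ 4.4 tx/s, same ballpark as the 4.6 figure

VISA_TPS = 1700               # the article's figure for VISA's sustained rate
print(f"VISA handles roughly {VISA_TPS / tps:.0f}x more transactions per second")
```

The exact number shifts with the assumed transaction size (and SegWit changes the accounting), but no plausible inputs bring it anywhere near card-network throughput.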

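To make the traceability point concrete, here is a toy version of the classic common-input-ownership heuristic used in chain analysis: addresses spent together as inputs of one transaction are assumed to share an owner. The addresses and transactions below are invented, and real analysis is far more sophisticated:

```python
# Toy illustration of why Bitcoin is only pseudo-anonymous. The
# common-input-ownership heuristic assumes addresses spent together as
# inputs of one transaction belong to the same owner, so the public
# ledger lets anyone merge addresses into clusters. All addresses and
# transactions here are made up.
from itertools import combinations

class DSU:
    """Union-find structure used to merge addresses into ownership clusters."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Each transaction: (input addresses spent together, output addresses).
transactions = [
    (["addr_A", "addr_B"], ["exchange_hot_wallet"]),  # A and B co-spent
    (["addr_B", "addr_C"], ["addr_D"]),               # B and C co-spent
]

dsu = DSU()
for inputs, _outputs in transactions:
    for a, b in combinations(inputs, 2):
        dsu.union(a, b)

# A, B, and C land in one cluster: deanonymize any one of them (say, via
# a KYC'd exchange deposit) and you've deanonymized all three.
print(dsu.find("addr_A") == dsu.find("addr_C"))  # True
```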

So? What’s the big deal?
We need to remember that we are only in the infancy of crypto. Bitcoin was just the first, and most likely not the last one that will endure. It will probably remain relevant for a long time, but we need to start shedding the anxiety that surrounds Bitcoin's price and market cap. At this point, it doesn't even matter if Bitcoin fails miserably and goes to $0.

Would it be painless? Surely not. We have already seen a lot of people go broke in the latest crash for holding overly leveraged positions. If Bitcoin is indeed a bubble and drops to an absurd value like $10 or $20, the crisis would be very severe and would surely last for at least a couple of years.

But the changes Bitcoin set in motion ten years ago are not going anywhere. Concepts like the blockchain are the real game-changers here, and they will thrive even if the original Bitcoin disappears.

Innovation is moving fast; other cryptos/platforms like Chainlink, Cardano, and Ethereum are tackling one or more of Bitcoin's issues with incredible success. We are in a much better position now. We no longer have a single choke point like 6 or 7 years ago, when if Bitcoin failed, the whole ecosystem went in the garbage.

Anyway, I personally don't expect Bitcoin to fail miserably any time soon, but this scenario should make us realize we should focus on the health of the crypto ecosystem as a whole, instead of panicking when Bitcoin goes down 20% or up 30%.

This mindset would make the crypto landscape more stable and less stagnant, improving innovation without the need to carry the baggage of the old man of crypto.

What do you think? Will BTC still be needed to have a successful crypto market? Or is it just the tree that doesn’t let us see the forest? Let me know in the comments.

Hungry for more articles? Try reading one of the previous ones:

Software development by analogy

Analogies are always useful to get into the right mindset about how things work. They are even more useful if you’re trying to explain how software engineering works to a non-technical person.


People tend to think that building software involves some kind of black magic. In reality, the process is a lot more methodical and deterministic, but it's difficult to explain exactly what we do to an outsider.

It’s really hard to get an accurate analogy because it’s not as deterministic as some manufacturing process like crafting chairs but it’s also not as open-ended as writing a book or painting.

We also tend to think our job is mostly about coding and other technical stuff, but the truth is that the software development process is more about successful interactions between clients, management, and software engineers than anything else.

One useful analogy is civil engineering. It's simple enough that anyone can understand it, but technically consistent enough to be accurate.

Key Similarities

  • You cannot estimate the real effort of a project without doing some work first. You cannot expect to walk into an engineering firm and simply ask, "How much will it cost to build a bridge like the Golden Gate?".
    If you're an experienced engineer, you can probably make a rough estimate using the original cost of the bridge, some similar ongoing project, and a few other variables, but you know these kinds of estimates are impossible to do accurately.
    Even so, folks casually ask "How much will an app like Uber cost?", and most of us want to bang our heads against the wall trying to explain that this is not how it works.
  • Most estimates are bullshit, and you'll probably need a lot more time and money to build the thing. This is well known in software. Many like to pretend estimates are written in stone, but we know they are, at best, an educated guess. In the construction business this is also a thing, but there they usually don't pretend to know what they are estimating.
    For example, the Berlin airport was estimated in 2006 to cost €2 billion and was expected to be finished in 2012. The airport is still not finished; it's now expected to open on 31 October 2020 and will cost at least €6.5 billion to complete.
  • Changing things in the middle of the project will cost a shit-ton of money and time. This is another fact most non-technical managers, product owners, and regular folks like to pretend doesn't exist. It's why, when you're building your home, you don't suddenly add an extra floor after 70% of the project is done. But like I was saying, in the software industry we like to pretend it's not a big deal and completely normal to add or remove 50% of the original requirements ten months into the project.

Some Limitations

Like any abstraction, analogies are accurate only to a certain degree, and there are some big differences we need to take into account.

  • Stability of the requirements. As we saw, changing the requirements in the middle of construction is sometimes prohibitively expensive and other times borderline impossible. That's why you wouldn't turn a house into a skyscraper halfway through the project. Yet that's common practice in software development.
  • Stability of prices and scale. Construction has some major advantages over software development. Most construction materials have stable, known prices and can be bought in bulk, even with leasing plans. The scale of the project is also known beforehand, so you can get a more or less accurate estimate of the workers and machines you'll need.
  • Documentation. Even a little project like a house requires a hell of a lot of documentation and calculations; you'll need at least 3 different architectural plans. In most software projects, documentation is non-existent or completely obsolete because it was never updated after the coding phase began.


Is there any place for monoliths in 2021?

For more than five years, we have been in the age of microservices. Monoliths sound like '90s tech, something we should be embarrassed to have. We've seen countless discussions about microservices vs monoliths. We all remember the epic talks by engineers and CTOs from Netflix, Spotify, and other big players, with loads of buzzwords thrown around every five minutes: scalability, ownership, DevOps, redundancy, and more.


So what happened in the last five years? Microservices became the one-size-fits-all solution for every technical problem. But it wasn't just a trend for startups and a catchy phrase for TED-talking gurus. Even traditional corporations like Coca-Cola and Best Buy are moving all their infrastructure to microservices.

So… there must be a catch, right? It's time we had a serious, bias-free discussion about this. More than enough time has passed, and the topic is now somewhat mature. We can find lots of failures and successes, and even detailed post-mortems from the people who lived them.

Pros and cons

As we stated earlier, this topic has been analyzed over and over, so we'll just do a quick recap with no further explanation, for the sake of simplicity. You can easily skip this part if you're already familiar with it.



Microservices pros:

  • Horizontal Scalability
  • Code reusability
  • No single-point-of-failure / Fault-tolerant (If done correctly)
  • Elastic (if done correctly and placed on an elastic infrastructure)
  • Can greatly reduce hosting cost
  • Versatile – Any microservice can be written in any language if the correct guidelines are implemented.
  • Infrastructure can be distributed “easily” between servers, cloud zones, etc.
  • Low coupling between components


Microservices cons:

  • Steeper learning curve
  • Need more experienced developers
  • The cost/time to migrate to a microservice architecture could be huge.
  • Testing could become hell if not handled properly.
  • Needs lots of prerequisites to be implemented correctly (CI/CD, monitoring, etc.)
  • Cognitive load. Microservices can become a complicated mesh and introduce a lot of new problems inherent to them (concurrency, service discovery, data consistency, load balancing, etc.)



Monolith pros:

  • Simplicity
  • Need less boilerplate code to start a new project
  • No special knowledge required. Every developer is used to working with monoliths, which makes hiring easier.
  • Business requirements are usually easier to implement.
  • Straightforward to test since the beginning.
  • Faster time to market if the requirements are concrete and well known.


Monolith cons:

  • Expensive to scale.
  • High coupling between components
  • Scalability has software and hardware limitations on monoliths.
  • Development environments can become huge over time.
  • Could be hard/expensive to migrate into a cloud environment
  • Could be impossible to scale if there’s too much technical debt on the codebase.


To be honest, you probably don't need microservices if you're not Google-scale or expecting to grow insanely fast like Netflix, and you surely don't need to migrate your whole infrastructure to microservices right now.

It may sound a little cliché, but you need the right tool for the job. Most companies, and even crazy startups, won't see the full benefit of implementing microservices but will have to deal with all the difficulties they can create.

There's also a kind of "conflict of interest" between developers and management on this topic (as on many others). Developers always want to try the new flashy things (and we have to admit we all like to brag about using the latest trendy framework/tech) and also want to add all those beautiful buzzwords to their resumes.

On the other hand, management mostly sees risks in changing the whole architecture (with some common sense) and almost no tangible benefits; what management wants is fancy new features it can sell to clients (or to upper management). All this is a recipe for a perfect disaster.

Moving to microservices must be done almost perfectly to be worth it. Even experienced developers can turn a decent monolith into a microservice hell. Almost all failed attempts end up with the worst of both worlds, an anti-pattern called the Distributed Monolith.


If we analyze most successful migrations to microservices, they were driven almost exclusively by necessity rather than preference. Most of the big early adopters (like Netflix) couldn't find a way to achieve their business goals with a monolithic architecture.

That still isn't a recipe for guaranteed success. If you can't design and implement a modular, robust monolith, what makes you think you could do better with microservices?

Like almost everything in life, the sweet spot is somewhere in the middle. The ideal scenario is designing a monolith modular enough that it can eventually be divided into smaller pieces.

Following that approach, you can start migrating big modules into stand-alone services (not microservices), turning your monolith into a Service-Oriented Architecture (SOA) as needed. After some time, services get split into smaller services, and there will come a time when you'll unavoidably have to deal with all the microservices problems (service discovery, load balancing, consistency, concurrency, latency, etc.).
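The modular-monolith step above can be sketched in code: each module hides behind a narrow interface, so it can later be extracted into a stand-alone service without touching its callers. This is an illustrative sketch only; all names here (BillingModule, RemoteBilling, checkout) are hypothetical, not a prescribed design:

```python
# Sketch of a "modular monolith": modules live behind narrow interfaces,
# so one can later be swapped for a remote service without changing any
# caller. All names here are hypothetical illustrations.
from abc import ABC, abstractmethod

class BillingModule(ABC):
    """The only surface the rest of the app is allowed to touch."""
    @abstractmethod
    def charge(self, user_id: str, cents: int) -> bool: ...

class InProcessBilling(BillingModule):
    # Day 1: plain in-process implementation inside the monolith.
    def charge(self, user_id: str, cents: int) -> bool:
        print(f"charging {user_id}: {cents} cents (in-process)")
        return True

class RemoteBilling(BillingModule):
    # Later: the same interface backed by an extracted stand-alone service.
    def __init__(self, base_url: str):
        self.base_url = base_url  # hypothetical billing-service URL

    def charge(self, user_id: str, cents: int) -> bool:
        # A real implementation would POST to the service here;
        # stubbed out so the sketch stays self-contained.
        print(f"POST {self.base_url}/charge user={user_id} amount={cents}")
        return True

def checkout(billing: BillingModule, user_id: str) -> bool:
    # Callers depend only on the interface: the extraction is invisible here.
    return billing.charge(user_id, 4999)

checkout(InProcessBilling(), "u42")               # monolith phase
checkout(RemoteBilling("http://billing"), "u42")  # after extraction
```

The discipline is the point: as long as modules only ever talk through interfaces like this, splitting one out later is a deployment decision rather than a rewrite.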

But after all, this approach is much more humane. Teams can transition seamlessly into the microservice ecosystem without the hassle of hiring 200 new specialists, and priorities can be managed more in line with the business requirements.

As some final words: this is just one approach, and nothing here is written in stone. Many authors claim it's impossible to seamlessly transition a monolith into successful microservices, and that you'll eventually need to rewrite everything from scratch.

This is debatable, but the main point is that reality usually doesn't go well with absolutes. Every problem needs a different degree of granularity, so maybe there's no point in continuing to discuss microservices vs monoliths.