test it. code it. ship it.

If You Don't Fight It You End Up With a Monolith

Posted in architecture, monolith

You planned your software. You talked to your business owners. As soon as you saw a new use case that should be implemented, you modularized it. You did everything right. And then you hit the integration layer. Sound familiar?

At some point in your project you need to integrate all the modules with each other. In the case of a website, that means producing HTML.

There are different approaches to tackling this problem. You can integrate on the server side, grabbing the UI code from all affected systems and packing it together, or you can integrate in the browser.
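To make the server-side variant concrete, here is a minimal sketch. This is not from any real system: the module URLs and the `compose_page` helper are invented for illustration. The integration layer asks each module for its HTML fragment and assembles the full page itself; browser-side integration would instead load each fragment with client-side JavaScript.

```python
# Sketch of server-side UI integration: the integration layer asks each
# module for its HTML fragment and packs them into one page.
# `fetch` is injected so the transport (HTTP, cache, stub) stays swappable.

def compose_page(fragment_urls, fetch):
    """fetch(url) -> HTML string; returns one assembled page."""
    fragments = [fetch(url) for url in fragment_urls]
    return "<html><body>\n{}\n</body></html>".format("\n".join(fragments))

# Hypothetical module endpoints -- in reality fetch() would make HTTP calls.
urls = ["http://header.internal/fragment", "http://cart.internal/fragment"]
page = compose_page(urls, fetch=lambda url: "<div>{}</div>".format(url))
```

The injected `fetch` is also the seam where each module keeps its own lifecycle: as long as a module keeps serving a fragment at its URL, it can deploy whenever it wants.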

Why should you care? Both approaches seem to have advantages. What is the reason we modularize in the first place? It could be code reuse, or that we can refactor more easily. These are valid technical reasons.

My take on it is that modularization enables each module to have its own lifecycle. This means we can ship faster, because we do not depend on other teams or modules. Why would you want to ship faster? It could be that your product owner (or somebody in that role) would like features deployed as soon as they are ready. Or they like the idea that whenever a bug occurs, you can deploy the fix very quickly once it is done in the code. I saw it (and still see it) way too often that teams can't deploy fixes because a module they depend on (owned by another team) cannot be deployed in the newest (changed) version just right now. You can try to mitigate that by branching the code, but this strategy has its own problems (merge hell).

This is not restricted to UI code. Usually a system depends on a number of other systems. If those other systems are managed by the very same team, that team can decide all by itself when and how to ship code, and we are good. If, however, the other systems are owned by other teams, you might have a problem. Raise your hand if you have never had this situation.

One way to get around this is to have interfaces that never change. As good as this sounds, it will not be reality for long; it is simply impractical. Another way is to have backwards compatible interfaces: whatever the other systems do, they need to make sure they don't break existing clients. To help detect breakage, each team can write contract tests that pin down the behaviour of the interfaces they use. From my experience this will somewhat work, but of course clever teams will find ways around it. Depending on the number of systems, people will try to find all their clients, make sure they change as well, and then make incompatible changes anyway. This could work, assuming you have tests for everything and/or like finding bugs in production. You can also add a new version or a new resource. This is a backwards compatible change and should not cause any problems (except that the API might get bloated after a while). Versioning interfaces is a whole different story and everybody has their own opinion on it. This stuff is hard.
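A consumer-side contract test can be sketched like this (illustrative only: the field names and the `check_order_contract` helper are made up, not taken from any real provider). The idea is that the consuming team asserts on exactly the subset of the provider's interface it actually uses, so an incompatible change is caught in a build instead of in production.

```python
# Sketch of a consumer-side contract check: pin down only the fields this
# consumer actually relies on. Extra fields in the payload are allowed,
# so backwards compatible additions by the provider do not break the test.

def check_order_contract(payload):
    """Raise AssertionError if the payload violates the consumer's contract."""
    assert isinstance(payload.get("id"), str), "id must be a string"
    assert isinstance(payload.get("total_cents"), int), "total_cents must be an int"
    return True

# Run against a recorded provider response (made-up sample data):
sample = {"id": "order-42", "total_cents": 1999, "currency": "EUR"}
ok = check_order_contract(sample)
```

Note that the check deliberately ignores `currency`: fields the consumer does not use are not part of its contract, which is what leaves the provider room to evolve.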

If you do not have these problems, you have

  • a) a very nice (maybe small) and easy system
  • b) a monolith
  • c) I’m jealous

Since monolith is a somewhat bad word these days:

What is a monolith anyway?

In a distributed system, for me it is this:

You have a monolith in a distributed system if a change in one system requires a deployment of at least one other system that the same team can't control and deploy all by itself.

If it's two teams, make sure they sit and work closely together. For more than two or three teams: you're doomed.

It's all about being independent. If you have systems that are coupled, that might not be bad: make sure the same team is responsible for both, and the problem is solved. Migrating to that kind of setup can involve restructuring the business case, which means talking to the business owners. This process can take very long.

Why did I write all of the above? I saw it multiple times (and am guilty myself) that we wrote, with the best intentions, distributed systems, but then had a problem at the integration of all of them. They all had to be deployed together to make the system work. In the worst case this meant downtime deployments. This is a reminder to myself that nothing in distributed systems is as easy as it looks.

Nothing will be ever perfect. But it is a goal.

Just don’t give up on it.

One anecdote from a monolith workshop: once you are on your way away from the monolith, you will experience that the monolith sucks you back in. I just experienced that.

If you want to learn more or talk about microservices and monoliths, come and join us at the microXchg 2015 community conference in February in Berlin, Germany.