lr4444lr
eduction
Wouldn't this just be "having one or two services"? I don't think that's the same as "microservices".
Correct me if I'm wrong, but isn't "microservices" when you make internal components into services by default, instead of defaulting to making a library or class?
philwelch
This is why I prefer the old term "service oriented architecture" over "microservices". "Microservices" implies any time a service gets too big, you split it apart and introduce a network dependency even if it introduces more problems than it solves.
shados
It's a pretty common issue. If you have 2-3 services, it's pretty easy to manage. And if you have 1000, you likely have the infra to manage them and get the full benefit.
But if you have 20 engineers and 60 services, you're likely in a world of pain. That's not microservices, it's a distributed monolith, and it's the one model that doesn't work (but everyone seems to do it).
alrs
A "microservice" solves scaling issues for huge companies. If you have 60 microservices, you should probably have 600 engineers (10 per) to deal with them. If you're completely underwater and have 10 services per engineer, you're 100% absolutely play-acting "web-scale" for an audience of really dumb managers/investors.
devjab
It depends on how you do it. We have 5 engineers and around 50 services, and it’s much easier for us to maintain that than it was when it was a monolith with a couple of services on top.
Though to understand why this is, you would have to know just how poorly our monolith was designed. That’s sometimes the issue with monoliths though, they allow you to “cut corners” and suddenly you end up with this huge spiderweb mess of a database that nobody knows who pulls what from because everyone has connected some sort of thing to it and now your monolith isn’t really a monolith because of it. Which isn’t how a monolith is supposed to work, but is somehow always how it ends up working anyway.
I do agree though, that the “DevOps” space for “medium-non-software-development” IT departments in larger companies is just terrible. We ended up outsourcing it, sort of, so that our regular IT operations partner (the ones which also help with networking, storage, backups, security and so on) also handle the management part of our managed Kubernetes cluster. So that once something leaves the build pipeline, it’s theirs. Which was surprisingly cheap by the way.
I do get where you’re coming from of course. If we had wanted to do it ourselves, we’d likely need to write “infrastructure as code” that was twice the size of the actual services we deploy.
namaria
I have worked with a company that had around 8 developers and 30 'microservices'. They wanted the front end team (fully remote, overseas, different language, culture) to go micro front end. They are awesome at presentations and getting funded tho. A common theme in European startups.
jabradoodle
A distributed monolith isn't defined by how many services you have; a better question is how many services you need to redeploy/update to make a change.
Yes, by the time you get to thousands of services you have hopefully moved past the distributed monolith, if you built one.
devoutsalsa
>> That's not microservices, it's a distributed monolith, and it's the one model that doesn't work (but everyone seems to do it)
I called ours a macrolith.
lr4444lr
I don't want to get too tied up in the terminology, but "microservices-first" does not seem to be the problem the post is describing:
One way to mitigate the growing pains of a monolithic backend is to split it into a set of independently deployable services that communicate via APIs. The APIs decouple the services from each other by creating boundaries that are hard to violate, unlike the ones between components running in the same process
wongarsu
Without getting tied up in the whole "every language except assembly, Python and possibly JavaScript already solved this problem by forcing people to adhere to module-level APIs" argument, I think the crux of the issue is that the article just defines microservice architecture as any architecture consisting of multiple services, and explicitly states "there doesn’t have to be anything micro about the services". Which imho is watering down the term microservices too much. You don't have microservices as soon as you add a second or third service.
Dudester230602
Exactly, just use well-factored services of any size and smack anyone saying "micro..." with a wet towel for they are just parroting some barely-profitable silicon valley money sinks.
gedy
For some reason, most of the people I've worked with recently are either fully into monoliths or lots of fine grained, interdependent microservices.
They don't seem to understand there's a useful middleground of adding fewer, larger data services, etc. It's like SOA isn't a hot topic so people aren't aware of it.
andorov
I've been using the term 'macro-services' to describe this middle ground.
collyw
Start with a monolith and scale out the parts as needed.
buster
Actually, you are wrong. Microservices are surely not about defaulting to new microservices, but about capturing a specific context in one service. There is no rule about how big a context is. A context can contain other contexts. There can be technical reasons to split deployments into different microservices, but that's not the norm. What you describe is what happens when people get microservices wrong.
In the end, I like the viewpoint that microservices are a deployment pattern, not so much an architecture pattern. Usually, you can draw a component diagram (containing an OrderService and a DeliveryService, etc.) and, without technical details (execution environment, protocols), you couldn't tell whether it's describing multiple microservices or multiple components in one service.
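A minimal Python sketch of that viewpoint (the OrderService/DeliveryService names come from the diagram example above; everything else here is hypothetical): the interface is the architecture, and whether the implementation behind it lives in-process or behind an HTTP call is a deployment detail the caller can't see.

    from typing import Protocol
    import urllib.request

    class DeliveryService(Protocol):
        def schedule(self, order_id: str) -> str: ...

    class InProcessDelivery:
        # Same deployment: just another component in the same process.
        def schedule(self, order_id: str) -> str:
            return f"scheduled-{order_id}"

    class RemoteDelivery:
        # Separate microservice: same interface, but the call crosses the network.
        def __init__(self, base_url: str) -> None:
            self.base_url = base_url

        def schedule(self, order_id: str) -> str:
            with urllib.request.urlopen(f"{self.base_url}/deliveries/{order_id}") as resp:
                return resp.read().decode()

    def place_order(delivery: DeliveryService, order_id: str) -> str:
        # The component diagram (an OrderService calling a DeliveryService)
        # looks identical either way; only the deployment differs.
        return delivery.schedule(order_id)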
jupp0r
To add another to your list:
Being able to easily use different programming languages. Not every language is a good fit for every problem. Being able to write your machine learning inference services in Python, your server-side rendered UI in Rails, and your IO- and concurrency-heavy services in Go might justify the additional overhead of having separate services for these three.
Cthulhu_
Yes, but the choice to add a new programming language to your company's profile has to be taken with care and due diligence; you should make sure to have a number of developers that know the new language, offer training, incorporate it into your hiring, etc. It's an added node to your dependency graph, which can quickly become unmanageable.
You should always look into existing languages first. There are a lot of "I rewrote this thing in $language for a 100x performance boost" posts; in a lot of cases the comments are like "Yeah, but if you rewrote it in the original language you could make it a lot faster too".
For a cool visualisation of this problem, check out https://boringtechnology.club/
renegade-otter
These are almost never pragmatic decisions. Giving teams independence over the stack usually results in resume-driven development, and now your JS developers are forced to maintain a Go server because some jock thought it was a cool thing to do.
Due diligence in these cases is rare.
marcus_holmes
I've seen this play out for reals at a place a few years ago. Every team used a different tech, and all of them selected because of resume-driven development. People moving teams to get a particular tech on their resume. No common method for deployment, and endless issues getting things deployed. Everyone's a newbie because we were all learning a new, cool, stack. And everyone making stupid newbie mistakes.
Never again. When I build teams forevermore, I pick the tech stack and I recruit people who know that tech stack, or want to learn that tech stack. And we stick to that tech stack whenever possible.
pjmlp
Indeed, I am also an advocate of having the organization define a specific set of languages.
Even with just one, it isn't really a single one.
To pick on JS as an example, not only is there JavaScript to learn, TypeScript might also be part of the stack, then there is the whole browser stack, and eventually the Node ecosystem as well.
Add the remaining UNIX/Windows-related knowledge to get development and deployment into production going, plus SQL and the related stored-procedure syntax for RDBMS backends.
Eventually there's the need to know either C or C++, to contribute to V8 native extensions or WASM.
Now those folks need to learn about Go, Go's standard tooling, Go's ecosystem, IDE or programmer's editor customizations for Go code, and how to approach all the development workflows they are comfortable with from the point of view of a Go developer.
Cthulhu_
I don't believe a team should have that much say in their technology, unless they themselves are also responsible for hiring, training, etcetera - so it kinda depends on how autonomous a team is.
That said, as the article also mentions, "micro" can be a bit of a misnomer; you could have a 50 person team working on a single "micro" service, in which case things like hiring and training are much more natural. (to add, I've never worked in a team of 50 on one product; 50 people is A Lot)
j45
There are many ways to architect well that don’t require prematurely introducing microservices.
I’m a fan of microservices btw.
Premature optimization and scaling is as bad a form of technical debt as any other, especially when you later have to optimize in a completely different manner and direction.
eterevsky
1) Why is it better than wrapping it in an interface with a clear API without extracting it into a separate service?
voxic11
Because of dependency issues like he mentioned. If I am using Library A which depends on version 1 of Library C and I need to start using Library B which depends on version 2 of Library C then I have a clear problem because most popular programming languages don't support referencing multiple different versions of the same library.
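In Python, for example, the constraint is easy to see: a process keeps exactly one module object per import name, so two versions of "Library C" have nowhere to coexist without renaming or vendoring. A tiny, runnable illustration using a stdlib module (the Library A/B/C names above are of course hypothetical):

    import json
    import sys

    # Python resolves each module name once per process and caches it here;
    # a second, differently-versioned copy of the same name cannot be loaded
    # alongside it without renaming or vendoring it.
    assert sys.modules["json"] is json
    print(json.__name__, "is loaded exactly once for this process")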
slowmovintarget
OSGi and Java Modules would like a word.
Too few developers use the facilities available for that kind of in-process isolation, even when it is possible. (Don't tell me Java isn't popular... It may be the new COBOL, but it's still mainstream.)
foobiekr
You’re not wrong here, but usually this comes down to having one or a small number of version-specific container processes (and I do not mean container in the Docker sense) to host those functions. Not microservices.
That said, almost always, if you are seriously trying to keep old unsupported dependencies running, you’re violating any reasonable security stance.
I’m not saying it never happens, but often the issue is that the code is not understood, and no one is capable of supporting it, and so you are deep in the shit already.
furstenheim
This comment got it just right. You only get this issue for big stateful libraries, like frameworks.
rmbyrro
Can't we just isolate these two services in containers, using different library versions?
They don't need to be microservices in order to isolate dependencies, do they?
In Python, for instance, you don't even need containers. Just different virtual environments running on separate processes.
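A rough sketch of what that could look like, assuming two hypothetical venvs (each holding a conflicting library version) and a hypothetical worker.py; the "API" between the processes is just JSON over stdin/stdout:

    import json
    import subprocess

    def call_worker(venv_python: str, payload: dict) -> dict:
        # Run the worker under that venv's interpreter; the parent never
        # imports the worker's dependencies, so versions can't clash.
        proc = subprocess.run(
            [venv_python, "worker.py"],
            input=json.dumps(payload),
            capture_output=True,
            text=True,
            check=True,
        )
        return json.loads(proc.stdout)

    # Each call sees only the library versions installed in its own venv.
    result_v1 = call_worker(".venv-libv1/bin/python", {"op": "render"})
    result_v2 = call_worker(".venv-libv2/bin/python", {"op": "render"})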
paulddraper
Those dependencies might cross language boundaries, or interfere with your other dependencies running in the same process.
bingemaker
On the contrary, when they fail silently, it is hard to debug "dependent" services.
collyw
More moving parts. A network call can fail in more ways than a function call. Also, something no one has mentioned until now: the more "services" you have, the more of a pain in the arse it is to get a development environment running.
paulddraper
And those would apply to < 10% of your total code base.
nevinera
The right time to extract something into a separate service is when there's a problem that you can't tractably solve without doing so.
Increasing architectural complexity to enforce boundaries is never a solution to a lack of organizational discipline, but midsize tech companies _incessantly_ treat it like one. If you're having trouble because your domains lack good boundaries, then extracting services _is not going to go well_.
arzke
"The last responsible moment (LRM) is the strategy of delaying a decision until the moment when the cost of not making the decision is greater than the cost of making it."
Quoted from: https://www.oreilly.com/library/view/software-architects-han...
TeMPOraL
Still trying to unlearn that one. Turns out, most decisions are cheap to revert or backtrack on, while delaying them until Last Responsible Moment often ends in shooting past that moment.
smaudet
Depends who you are. Mostly the ones making the decisions aren't listening to their developers (maybe by choice, maybe because they are at the whim of a customer), so their cost functions are calibrated toward course changes looking more expensive than they really are.
By the time your devs are saying "this sucks" you've long overshot.
lmm
Committing to a decision is costly, but implementing it one way or the other and seeing how that works is often the cheapest way to proceed.
cle
I don't think you need to unlearn it, but update your cost function.
jlundberg
Interesting term, and I am curious to learn a few examples of overshooting here!
My experience is that the new data available when postponing decisions can be very very valuable.
insanitybit
> Increasing architectural complexity to enforce boundaries is never a solution to a lack of organizational discipline,
And yet we do this all the time. Your CI/CD blocking your PRs until tests pass? That's a costly technical solution to solve an issue of organizational discipline.
nevinera
That's technical, and not architectural. I'm _all about_ technical solutions to lack of discipline, and in fact I think technical and process solutions are the only immediate way to create cultural solutions (which are the long-term ones). I'd even consider minor increases to architectural complexity for that purpose justifiable - it's a real problem, and trading to solve it is reasonable.
But architectural complexity has outsized long-term cost, and service-orientation in particular has a _lot_ of it. And in this particular case, it doesn't actually solve the problem, since you _can't_ successfully enforce those domain boundaries unless you already have them well-defined.
insanitybit
Can you explain the salient distinction between a "technical" versus "architectural" solution? Candidly, I'm not convinced that there is one.
> But architectural complexity has outsized long-term cost
As do technical solutions, of course. CI/CD systems are very expensive, just from a monetary perspective, but also impose significant burdens to developers in terms of blocking PRs, especially if there are flaky or expensive tests.
> And in this particular case, it doesn't actually solve the problem, since you _can't_ successfully enforce those domain boundaries unless you already have them well-defined.
Ignoring microservices, just focusing on underlying SoA for a moment, the boundary is the process. That is an enforceable boundary. I think what you're saying amounts to, in microservice parlance, that there is no way to prevent a single microservice from crossing multiple bounded contexts, that it ultimately relies on developers. This is true, but it's also just as true for good monolithic designs around modules - there is no technical constraint for a module to not expand into domains, becoming cluttered and overly complex.
Microservices do not make that problem harder, but SoA does give you a powerful technical tool for isolation.
jcstauffer
I would hope that there is more process in place protecting against downtime than code review - for example automated tests across several levels, burn-in testing, etc.
People are not reliable enough to leave them as the only protection against system failure...
jaxr
Agreed. Isn't that why strongly typed languages made a comeback?
mvdtnz
No, not at all. CI/CD blocking pull requests is in place because large systems have large test suites and challenging dependencies which mean that individual developers literally can't run every test on their local machine and can often break things without realising it. It's not about organisational discipline, it's about ensuring correctness.
bluGill
I can run every test on my machine if I want. It would be a manual effort, but wouldn't be hard to automate if I cared to try. However, it would take about 5 days to finish. It isn't worth it when such tests rarely fail - the CI system just spins them off to many AWS nodes and I get results in a few hours (some of the tests are end-to-end integration tests that need more than half an hour); if something fails I then run just that test locally.
Like any good test system I have a large suite of "unit" tests that run quick that I run locally before committing code - it takes a few minutes to get high code coverage if you care about that metric. Even then I just run the tests for x86-64, if they fail on arm that is something for my CI system to figure out for me.
vrosas
The other problem is that these self-imposed roadblocks are so engrained in the modern SDLC that developers literally cannot imagine a world where they do not exist. I got _reamed_ by some "senior" engineers for merging a small PR without an approval recently. And we're not some megacorp, we're a 12 person engineering startup! We can make our own rules! We don't even have any customers...
jacquesm
Your 'senior' engineer is likely right: they are trying to get some kind of process going and you are actively sabotaging that. This could come back to haunt you later on when you by your lonesome decide to merge a 'small PR' with massive downtime as a result of not having your code reviewed. Ok, you say, I'm perfect. And I believe you. But now you have another problem: the other junior devs on your team who see vrosas commit and merge stuff by themselves will see you as their shining example. And as a result they by their lonesomes decide to merge 'small PR's with massive downtime as a result.
If you got _reamed_ you got off lucky: in plenty of places you'd be out on the street.
It may well be that you had it right but from context as given I hope this shows you some alternative perspective that might give you pause the next time you decide to throw out the rulebook, even in emergencies - especially in emergencies - these rules are there to keep you, your team and the company safe. In regulated industries you can multiply all of that by a factor of five or so.
elzbardico
Really, man. I have almost two decades developing software and yet I feel a lot more comfortable having all my code reviewed. If anything, I get annoyed by junior developers on my team when they just rubber-stamp my PRs because supposedly I am this super senior guy who can't err. Code reviews are supposed to give you peace of mind, not be a hassle.
During all this time, I've seen plenty of "small changes" have completely unexpected consequences, and sometimes all it would have taken to avoid them was someone else seeing it from another perspective.
insanitybit
Indeed, dogmatic adherence to arbitrary patterns is a huge problem in our field. People have strong beliefs, "X good" or "X bad", with almost no idea of what X even is, what the alternatives are, why X was something people did or did not like, etc.
avelis
Sounds like a SISP (solution in search of a problem). Or throwing a solution at a problem not understanding the root issue of it.
jeffbee
I find it odd that there is this widespread meme—on HN, not in the industry—that microservices are never justified. I think everyone recognizes that it makes sense that domain name resolution is performed by an external service, and very few people are out there integrating a recursive DNS resolver and cache into their monolith. And yet, this long-standing division of responsibility never seems to count as an example.
nevinera
You're certainly misunderstanding me. Microservices are definitely justifiable in plenty of cases, and _services_ even more often. But they _need to be technically justified_ - that's the point I'm making.
The majority of SOA adoption in small-to-medium tech companies is driven by the wrong type of pain, by technical leaders that can see that if they had their domains already split out into services, their problems would not exist, but don't understand that reaching that point involves _solving their problems first_.
foobiekr
Whenever someone on the projects I’m attached to tries to create a new service, first I ask them what data it works with, and then I ask what the alternative to making it a service would be. Usually, by the time we start answering the second question, they realize that, actually, adding the service is just more work.
To me, what’s amazing is that in almost no organization is there a requirement to justify technically the addition of a new service, despite the cost and administrative and cognitive overhead of doing so.
Scarblac
_Services_ are obviously a good idea (nobody is arguing something like PostgreSQL or Redis or DNS or what have you should all run in the same process as the web server).
_Microservices_ attract the criticism. It seems to assume something about the optimal size of services ("micro") that probably isn't optimal for all kinds of service you can think of.
mjr00
It's funny because the term "microservices" picked up in popularity because previously, most "service-oriented architecture" (the old term) implementations in large companies had services that were worked on by dozens or hundreds of developers, at least in my experience. So going from that to services that were worked on by a single development team of ~10 people was indeed a "microservice" relatively speaking.
Now, thanks to massive changes in how software is built (cloud, containers et al) it's a lot more standard for a normal "service" with no prefix to be built by a small team of developers, no micro- prefix needed.
xnorswap
That may be true, but the article describes a services architecture and labels it microservices, and indeed goes on to say:
> The term micro can be misleading, though - there doesn’t have to be anything micro about the services
jeffbee
This seems subjective. It's like putting "compatible with the character of the neighborhood" in a city's zoning codes.
arethuza
You do see people arguing for running SQLite in-process rather than using a separate database server like PostgreSQL?
mdekkers
> I find it odd that there is this widespread meme—on HN, not in the industry—that microservices are never justified.
Many HN patrons are actually working where the rubber meets the road.
DNS is a poor comparison. Pretty much everything, related to your application or not, needs DNS. On the other hand, the only thing WNGMAN[0] may or may not do is help with finding the user’s DOB.
mjr00
> I find it odd that there is this widespread meme—on HN, not in the industry—that microservices are never justified.
There's a few things in play IMO.
One is lack of definition -- what's a "microservice" anyhow? Netflix popularized the idea of microservices literally being a few hundred lines of code maintained by a single developer, and some people believe that's what a microservice is. Others are more lax and see microservices as being maintained by small (4-10 person) development teams.
Another is that most people have not worked at a place where microservices were done well, because they were implemented by CTOs and "software architects" with no experience at companies with 10 developers. There are a lot of problems that come from doing microservices poorly, particularly around building distributed monoliths and operational overhead. It's definitely preferable to have a poorly-built monolith than poorly-built microservice architectures.
I've been at 4 companies that did microservices (in my definition, which is essentially one service per dev team). Three were a great development experience and dev/deploy velocity was excellent. One was a total clusterfuck.
insanitybit
It doesn't lack a definition, there's lots of people talking about this. In general you'll find something like "a small service that solves one problem within a single bounded context".
> It's definitely preferable to have a poorly-built monolith than poorly-built microservice architectures.
I don't know about "definitely" at all. Having worked with some horrible monoliths, I really don't think I agree. Microservices can be done poorly but at minimum there's a fundamental isolation of components. If you don't have any isolation of components it was never even close to microservices/SoA, at which point, is it really a fair criticism?
pjc50
DNS resolution is genuinely reusable, though. Perhaps that's the test: is this something that could conceivably be used by others, as a product in itself, or is it tied very heavily to the business and the rest of the "microservices"?
Remember this is how AWS was born, as a set of "microservices" which could start being sold to external customers, like "storage".
jeffbee
What about a mailer that's divided into the SMTP server and the local delivery agent?
rqtwteye
"I find it odd that there is this widespread meme—on HN, not in the industry—that microservices are never justified"
I think the problem is the word "micro". At my company I see a lot of projects that are run by three devs and have 13 microservices. They are easy to develop but the maintenance overhead is enormous. And they never get shared between projects, so you have 5 services that do basically the same thing.
manicennui
I rarely see anyone claiming that microservices are never justified. I think the general attitude toward them is due to the amount of Resume Driven Development that happens in the real world.
nevinera
Eh, I think a lot more of it is caused by optimism than RDD - people who haven't _tried to do it_ look at the mess they've got, and they can see that if it were divided into domain-based services it would be less of a mess.
And the process seems almost straightforward until you _actually try to do it_, and find out that it's actually fractally difficult - by that point you've committed your organization and your reputation to the task, and "hey, that was a mistake, oops" _after_ you've sunk that kind of organizational resources into such a project is a kind of professional suicide.
smrtinsert
I've always thought of this as the best example of deferred execution. It's surprising how often businesses get it wrong. I think the problem is most "good" workers are over eager to demonstrate value. They do that by over engineering and that leads to hell etc...
mike_ivanov
I want this printed and neatly framed. Where to order?
jihadjihad
Microservices
grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too
seem very confusing to grug
https://grugbrain.dev/#grug-on-microservices
sethammons
grug not experience large teams stepping on each other's domains and data models, locking in a given implementation and requiring large organizational efforts to get features out at the team level. Team velocity is empowered via microservices and controlling their own data stores.
"We want to modernize how we access our FOO_TABLE for SCALE_REASONS by moving it to DynamoDB out of MySQL - unfortunately, 32 of our 59 teams are directly accessing the FOO_TABLE or directly accessing private methods on our classes. Due to competing priorities, those teams cannot do the work to move to using our FOO_SERVICE and they can't change their query method to use a sharded table. To scale our FOO_TABLE will now be a multi-quarter effort providing the ability for teams to slow roll their update. After a year or two, we should be able to retire the old method that is on fire right now. In the meanwhile, enjoy oncall."
Compare this to a microservice: Team realizes their table won't scale, but their data is provided via API. They plan and execute the migration next sprint. Users of the API report that it is now much faster.
jjtheblunt
> Team velocity is empowered via microservices and controlling their own data stores.
Bologna. Choosing clear abstractions is an enabler of focus, but that doesn’t necessarily imply those abstractions are a network call away.
sethammons
our team of 300 - we _can't_ enforce the clear abstractions. New dev gets hired, team feels pressured to deliver despite leadership saying to prioritize quality, they are not aware of all the access controls, they push a PR, it gets merged.
We have an org wide push to get more linting and more checks in place. The damage is done and now we have a multi-quarter effort to re-organize all our code.
This _can_ be enforced via well designed modules. I've just not seen that succeed. Anywhere. Microservices are a pain for smaller teams, and you have to have CI and observability, and your pains shift and are different. But for stepping on each other? I've found microservices to be a super power for velocity in these cases. Can microservices be a shitshow? Absolutely, esp. when they share data stores or have circular dependencies. They also allow teams to be uncoupled, assuming they don't break their API.
crabbone
Found the author of CISCO / Java Spring documentation.
I honestly expected people used passive-aggressive corpo-slang only for work / ironically. But this reads intentionally obfuscated.
But, to answer to the substance: you for some reason assumed that whoever designed first solution was an idiot and whoever designed the second was, at least clairvoyant.
The problem you describe isn't a result of inevitable design decisions. You just described a situation where someone screwed up doing something and didn't screw up doing something else. And that led you to believe that whatever that something else is, it's easier to design.
The reality of the situation is, unfortunately, the reverse. Upgrading microservices is much harder than replacing components in monolithic systems, because in a monolith it's easier to discover all users of a feature. There are, in general, fewer components in monolithic systems, so fewer things will need to change. Deployment of a monolithic system will be much more likely to discover problems created by an incorrectly implemented upgrade.
In my experience of dealing with both worlds, microservices tend to create a maze-like system where nobody can be sure if any change will not adversely affect some other part of the system due to the distributed and highly fragmented nature of such systems. So, your ideas about upgrades are uncorroborated by practice. If you want to be able to update with more ease, you should choose a smaller, more cohesive system.
perrylaj
I think I view the situation in a similar fashion as you. There's absolutely nothing preventing a well architected modular monolith from establishing domain/module-specific persistence that is accessible only through APIs and connectable only through the owning domain/module. To accomplish that requires a decent application designer/architect, and yes, it needs to be considered ahead of time, but it's 100% doable and scalable across teams if needed.
There are definitely reasons why a microservice architecture could make sense for an organization, but "we don't have good application engineers/architects" should not be one of them. Going to a distributed microservice model because your teams aren't good enough to manage a modular monolith sounds like a recipe for disaster.
nijave
Microservices don't magically fix shared schema headaches. Getting abstraction and APIs right is the solution regardless of whether it's in-memory or over the network.
Instead of microservices, you could add a static analysis build step that checks if code in packages is calling private or protected interfaces in different packages. That would also help enforce service boundaries without introducing the network as the boundary.
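A rough sketch of such a check in Python (the underscore-means-private convention and the src/ layout are assumptions; in practice you might reach for an existing tool like import-linter instead):

    import ast
    import pathlib
    import sys

    def violations(src_root: str = "src"):
        # Flag any absolute `from other_pkg... import ...` where the target
        # module path or imported name is underscore-prefixed and other_pkg
        # is not the importing file's own top-level package.
        root = pathlib.Path(src_root)
        for path in root.rglob("*.py"):
            own_pkg = path.relative_to(root).parts[0]
            tree = ast.parse(path.read_text(), filename=str(path))
            for node in ast.walk(tree):
                if isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                    parts = node.module.split(".")
                    private = any(p.startswith("_") for p in parts[1:]) or any(
                        alias.name.startswith("_") for alias in node.names
                    )
                    if parts[0] != own_pkg and private:
                        yield f"{path}:{node.lineno}: private import from '{node.module}'"

    if __name__ == "__main__":
        found = list(violations())
        print("\n".join(found))
        sys.exit(1 if found else 0)  # non-zero exit fails the build step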
tubthumper8
I guess I'm confused by the suggestion - doesn't that static analysis step to check that code isn't calling private interfaces already exist, and is called a "compiler"?
com2kid
Or how about "we want to update a 3rd party library due to a critical security issue, but we can't because that library is used in 500 different parts of the code and no way in hell can we stop development across the entire org for multiple weeks".
With microservices, you deploy the updates to public facing services first, write any learnings down, pass it along to the next most vulnerable tier.
Heck on multiple occasions I've been part of projects where just updating the build system for a monolithic codebase was a year+ long effort involving dozens of engineers trying to work around commits from everyone else.
Compare this to a microservice model where you just declare all new services get the new build tools. If the new tools come with a large enough carrot (e.g. new version of typescript) teams may very well update their own build tooling to the latest stuff without anyone even asking them to!
shadowgovt
As with so many software solutions, the success of microservices is predicated upon having sufficient prognostication about how the system will be used to recognize where the cut-points are.
When I hear success stories like that, I have to ask "Is there some inherent benefit to the abstraction or did you get lucky in picking your cleave-points?"
esafak
That comes with experience, but you can let time be the judge if you factor your monolith early enough. If the factorization proves stable, proceed with carving it into microservices.
sally_glance
This applies to performance optimizations which leave the interface untouched, but there are other scenarios, for example:
- Performance optimizations which can't be realized without changing the interface model. For example, FOO_TABLE should actually be BAR and BAZ with different update patterns to allow for efficient caching and querying.
- Domain model updates, adding/updating/removing new properties or entities.
This kind of update will still require the 32 consumers to upgrade. The API-based approach has benefits in terms of the migration process and backwards compatibility though (an HTTP API is much easier to version than a DB schema, although you can also do this on the DB level by only allowing queries on views).
simonw
> Team realizes their table won't scale, but their data is provided via API. They plan and execute the migration next sprint.
... followed by howls of anguish from the rest of the business when it turns out they were relying on reports generated from a data warehouse which incorporated a copy of that MySQL database and was being populated by an undocumented, not-in-version-control cron script running on a PC under a long-departed team member's desk.
(I'm not saying this is good, but it's not an unlikely scenario.)
mjr00
> they were relying on reports generated from a data warehouse which incorporated a copy of that MySQL database and was being populated by an undocumented, not-in-version-control cron script running on a PC under a long-departed team member's desk.
This definitely happens but at some point someone with authority needs to show technical leadership and say "you cannot do this no matter how desperately you need those reports." If you don't have anyone in your org who can do that, you're screwed regardless.
mason55
> ... followed by howls of anguish from the rest of the business when it turns out they were relying on reports generated from a data warehouse which incorporated a copy of that MySQL database and was being populated by an undocumented, not-in-version-control cron script running on a PC under a long-departed team member's desk.
Once you get to this point, there's no path forward. Either you have to make some breaking changes or your product is calcified at that point.
If this is a real concern then you should be asking what you can do to keep from getting into that state, and the answer is encapsulating services in defined interfaces/boundaries that are small enough that the team understands everything going on in the critical database layer.
pjc50
This was why the "if you breach the service boundary you're fired" Amazon memo was issued.
sethammons
another reason why you provide data only over API - don't reach into my tables and lock me into an implementation.
danmaz74
Team size is probably the most important factor that should influence the choice about microservices. Unfortunately, there was a period when it looked like every project and every team had to adopt them or be declared a dinosaur.
manicennui
I just wanted to note that static typing isn't required for autocomplete. JetBrains has IDEs for languages like Ruby and Python that can do it. If you open the REPL in a recent version of Ruby you get much of what you expect from an IDE with a statically typed language (with regards to autocomplete and syntax checking).
manicennui
Also, DRY is not about repeated code. This drives me crazy. Ruby developers love to make code worse by trying to "DRY it up".
Capricorn2481
What are you replying to? The article is about how Dry shouldn't be over applied.
DRY is literally "Don't Repeat Yourself" and is definitely pushed for cleaning up redundant code, so it's not unreasonable for people to think that's what it's about. It's only recently that people have pointed out that there's a difference between duplicated code and repeated code.
998244353
You are correct, but static typing does make it a lot easier. Working with Rider feels like working with an IDE that fully understands the code, at least structurally. Working with PyCharm feels like working with an IDE that makes intelligent guesses.
manicennui
The REPL in current versions of Ruby is probably a better example of how it should be done. Because it is actually running the code it has much better information.
https://ruby-doc.org/core-3.1.0/NEWS_md.html#label-IRB+Autoc...
agumonkey
new role: lead grug
insanitybit
Network calls are a powerful thing to introduce. It means that you have an impassable boundary, one that is actually physically enforced - your two services have to treat each other as if they are isolated.
Isolation is not anything to scoff at, it's one of the most powerful features you can encode into your software. Isolation can improve performance, it can create fault boundaries, it can provide security boundaries, etc.
This is the same foundational concept behind the actor model - instead of two components being able to share and mutate one another's memory, you have two isolated systems (actors, microservices) that can only communicate over a defined protocol.
nicoburns
> Network calls are a powerful thing to introduce. It means that you have an impassable boundary, one that is actually physically enforced - your two services have to treat each other as if they are isolated.
That is not true at all. I've seen "microservice" setups where one microservice depends on the state within another microservice. And even cases where service A calls into service B, which calls back into service A, relying on the state from the initial call being present.
Isolation is good, but microservices are neither necessary nor sufficient to enforce it.
insanitybit
Well, I'd say you've seen SoA setups that do that, maybe. But those don't sound like microservices :) Perhaps that's not a strong point though.
Let me be a bit clearer on my point, because I was wrong to say that you have to treat a service as being totally isolated; what I should have said is that they are isolated, whether you treat them that way or not. There is a physical boundary between two computers. You can try to ignore that boundary, you can implement distributed transactions, etc., but the boundary is there - if you do the extra work to try to pretend it isn't, that's a lot of extra work to do the wrong thing.
Concretely, you can write:
rpc_call(&mut my_state)
But under the hood what has to happen, physically, is that your state has to be copied to the other service, the service can return a new state (or an update), and the caller can then mutate the state locally. There is no way for you to actually transfer a mutable reference to your own memory to another computer (and a service should be treated as if it may be on another computer, even if it is colocated) without obscene shenanigans. You can try to abstract around that isolation to give the appearance of shared mutable state but it is just an abstraction, it is effectively impossible to implement that directly.
But shared mutable state is trivial without the process boundary. It's just... every function call. Any module can take a mutable pointer and modify it. And that's great for lots of things, of course, you give up isolation sometimes when you need to.
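A small Python stand-in for that point, using multiprocessing as a proxy for "two services" (names are illustrative): the callee only ever receives a serialized copy, and the caller has to adopt the returned state explicitly.

    from multiprocessing import Process, Queue

    def service(inbox: Queue, outbox: Queue) -> None:
        state = inbox.get()       # receives a pickled *copy* of the caller's dict
        state["count"] += 1       # mutates only its own local copy
        outbox.put(state)         # the "response" is another copy sent back

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        my_state = {"count": 0}
        worker = Process(target=service, args=(inbox, outbox))
        worker.start()
        inbox.put(my_state)       # message passing, not a shared reference
        my_state = outbox.get()   # caller chooses to adopt the new state
        worker.join()
        print(my_state)           # {'count': 1}; the original object was never shared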
crabbone
The reality of this situation is that the tool everyone is using to build microservices is Kubernetes. It imposes a huge tax on communication between services. So your aspirations of improving performance fly out the window.
On top of this, you need to consider that most of the software you are going to write will be based on existing components. Many of these have no desire to communicate over network, and your micro- or w/e size services will have to cave in to their demands. Simple example: want to use Docker? -- say hello to UNIX sockets. Other components may require communication through shared memory, filesystem, and so on.
Finally, isolation is not a feature of microservices, especially if the emphasis is on micro. You have to be able to control the size and where you want to draw the boundary. If you committed upfront to having your units be as small as possible -- well, you might have function-level isolation, but you won't have class- or module- or program-level isolation, to put it in more understandable terms. This is where your comparison between the actor model and microservices breaks down: the first doesn't prescribe the size.
insanitybit
Microservices definitely predate k8s, but sure, lots of people use k8s. I don't know what penalty you're referring to. There is a minor impact on network performance for containers measured in microseconds under some configurations. Maybe Kubernetes makes that worse somehow? I think it does some proxying stuff so you probably pay for a local hop to something like Envoy. If Envoy is on your system and you're not double-wrapping your TLS the communication with it should stay entirely in the kernel, afaik.
http://domino.research.ibm.com/library/cyberdig.nsf/papers/0...
In no way is this throwing out performance. It's sort of like saying that Kafka is in Java so you're throwing away performance when you use it, when there are massive performance benefits if you leverage partition isolation.
> Many of these have no desire to communicate over network, and your micro- or w/e size services will have to cave in to their demands. Simple example: want to use Docker? -- say hello to UNIX sockets. Other components may require communication through shared memory, filesystem, and so on.
I'm not sure what you're referring to. Why would that matter at all? I mean, ignoring the fact that you can easily talk to Docker over a network.
> Finally, isolation is not a feature of microservices,
Isolation is a feature of any process based architecture, whether it's SoA, actors, or microservices.
> well, you might have function-level isolation, but you won't have class- or module- or program-level isolation,
You get isolation at the service layer. I don't see why that would be contentious, it's obvious. If you're saying you want more isolation, ok, you can write your code to do that if you'd like.
> first doesn't prescribe the size.
Yep, the actor model is very low level. Microservice architecture is far more prescriptive. It's one of the reasons why I think Microservice architecture has been far more successful than actor based systems.
manicennui
It is trivial to tightly couple two services. They don't have to treat each other as isolated at all. The same people who create tightly coupled code within a single service are likely going to create tightly coupled services.
insanitybit
I think I've covered this in another comment just below the one you've replied to.
collyw
It's also something else that breaks. A lot more than function calls.
insanitybit
Sure, but breaking isn't always bad. I realize that sounds a bit crazy, but it's true.
a) Intermittent failures force you to treat systems as if they can fail - and since every system can fail, that's a good thing. This is why chaos engineering is great.
b) Failures across a network are the best kind - they're totally isolated. As I explain elsewhere, it's impossible to share state across a network, you can only send copies of state via message passing.
These things matter more or less depending on what you're doing.
js8
I think people get the modularity wrong. Modularity is important, but I came to conclusion there is another important architectural principle, which I call "single vortex principle".
First, a vortex in a software system is any loop in a data flow. For example, if we send data somewhere, and then we get them back processed, or are in any way influenced by them, we have a vortex. A mutable variable is an example of a really small vortex.
Now, the single vortex principle states that there ideally should be only one vortex in the software system, or, restated, every component should know which way its vortex is going.
The rationale: suppose we have two vortices, and we want to compose the modules that form them into a single whole. If the vortices are correctly oriented, composition is easy. If they have opposite orientations, the composition is tricky and requires a decision on how the new vortex is oriented. Therefore, it is best if all the vortices in all the modules have the same orientation, and thus form a single vortex.
This principle is a generalization of ideas such as Flux pattern, CQRS, event sourcing, and immutability.
Freebytes
This is a very good point, and you could probably write quite a few articles about this particular subject. You may even have a service A that calls service B that calls service C that calls service A. Then you have a problem. Or, you have C get blocked by something happening in A that was unexpected. Ideally, you only have parents calling children without relying on the parents whatsoever, and if you fail in this, you have failed in your architecture.
andreygrehov
I think Robert C. Martin has already described that fairly well [1] [2].
[1] https://www.youtube.com/watch?v=N7agCpAYp1Q
[2] https://en.wikipedia.org/wiki/Acyclic_dependencies_principle
samsquire
I like this.
When I look at an architecture diagram, half the battle is trying to work out what is async/sync, push/pull and the direction of travel and ordering.
Enterprise service buses and message queues are kind of solutions for creating vortexes where there is not a matching direction on each side.
salvadormon
I find this vortex concept interesting. Do you have any books or online sources that I could use to study this?
js8
No, I made it all up. I wish I had time and interest to formalize it.
ahoka
It’s called engineering. /s
porridgeraisin
I didn't quite understand your second-to-last paragraph (the rationale behind...), can you (or someone else) explain it further?
bob1029
> And think of the sheer number of libraries - one for each language adopted - that need to be supported to provide common functionality that all services need, like logging.
This is the #1 reason we quit the microservices game. It is simply a complete waste of mental bandwidth to worry about with the kind of tooling we have in 2023 (pure cloud native / infiniscale FaaS), especially when you have a customer base (e.g. banks & financial institutions) who will rake you over hot coals for every single 3rd-party dependency you bring.
We currently operate with one monolithic .NET binary distribution which is around 250 megs (gzipped). Not even the slightest hint of cracks forming. So, if you are sitting there with a 10~100 meg SaaS distribution starting to get nervous about pedantic things like "my exe doesn't fit in L2 anymore", then rest assured - your monolithic software journey hasn't even begun yet.
God forbid you find yourself with a need to rewrite one of these shitpiles. Wouldn't it be a hell of a lot easier if it was all in one place where each commit is globally consistent?
superfrank
I think this is a false dichotomy. Most places I've worked with microservices had 2 or 3 approved languages for this reason (and others) and exceptions could be made by leadership if a team could show they had no other options.
Microservices doesn't need to mean it's the wild west and every team can act without considering the larger org. There can and should be rules to keep a certain level of consistency across teams.
orochimaaru
Not sure why you’re downvoted - but you’re right. We heavily use microservices but we have a well defined stack: Python/gunicorn/flask/mongodb with k8s. We run these on Kafka or REST APIs. We even run jobs and cron jobs in k8s.
Functional decomp is left to different teams. But the libraries for logging, a&a, various utilities etc are common.
No microservices that don’t meet the stack unless they’re already developed/open source - eg open telemetry collectors.
Edit: I think the article is a path to a book written by the author. It’s more of an advertisement than an actual assessment. At least that’s my take on it.
tcgv
> Most places I've worked with microservices had 2 or 3 approved languages for this reason (and others) and exceptions could be made by leadership if a team could show they had no other options.
This works well if you have knowledge redundancy in your organization, i.e., multiple teams that are experienced in each programming language. This way, if one or more developers experienced in language 'A' quit, you can easily replace them by rearranging developers from other teams.
In small companies, this flexibility of allowing multiple languages can result in a situation in which developers moving to other jobs or companies will leave a significant gap that can only be filled with recruiting (then onboarding), which takes much more time and will significantly impact the product development plan.
More often than not, the choice between Microservices and Monoliths is more of a business decision than a technical one to make.
LelouBil
> More often than not, the choice between Microservices and Monoliths is more of a business decision than a technical one to make.
I think that, technically you can use one or the other and make it work. However management is very different in the two cases, so I completely agree with you. I hadn't thought of the part about moving people between teams.
It's my first job but I understand why they chose microservices: 6 teams working on 6 "features/apps" can be managed (almost) fully independently of each other if you split your code base.
nijave
I think it's fair to say microservices increase the need for governance, whether manual or automated systems. When you start having more than 1 thing, you create the "how do I keep things consistent and what level of consistency do I want" problem
LelouBil
In the department where I work there are a lot of microservices, about 5-6, so 5-6 teams. But everything is quarkus/spring java and nothing else.
cpill
> God forbid you find yourself with a need to rewrite one of these shitpiles.
Actually, this is much easier with microservices, as you have a clear interface you need to support and the code is not woven into the rest of the monolith like a French plait. The best code is the easiest to throw away and rewrite, because let's face it, the older the code is, the more hands it's been through, the worse it is, but more importantly the less motivated anyone is in maintaining it.
lemmsjid
If the monolithic application is written in a language with sufficient encapsulation and good tooling around multi-module projects, then you can indeed have well known and encapsulated interfaces within the monolith. Within the monolith itself you can create a DAG of enforced interfaces and dependencies that is logically identical to a set of services from different codebases. There are well known design issues in monoliths that can undermine this approach (the biggest one that comes to mind is favoring composition over inheritance, because that's how encapsulation can be most easily broken as messages flow across single-application interfaces, but then I'd also throw in enforced immutability, and separating data from logic).
It takes effort to keep a monolithic application set up this way, but IMHO the effort is far less than moving to and maintaining microservices. I think a problem is that there's very popular ecosystems that don't have particularly good tooling around this approach, Python being a major example--it can be done, but it's not smooth.
To me the time when you pull the trigger on a different service+codebase should not be code complexity, because that can be best managed in one repo. It is when you need a different platform in your ecosystem (say your API is in Java and you need Python services so they can use X Y or Z packages as dependencies), or when you have enough people involved that multiple teams benefit from owning their own soup-to-nuts code-to-deployment ecosystem, or when, to your point, you have a chunk of code that the team doesn't or can't own/maintain and wants to slowly branch functionality away from.
protomolecule
"composition over inheritance, because that's how encapsulation can be most easily broken as messages flow across a single-application interfaces, but then I'd also throw in enforced immutability, and separating data from logic"
Could you elaborate on this? I see how "separating data from logic" is a problem but what about the other two?
cpill
> It takes effort to keep a monolithic application set up this way
Yeah, this is the problem and _why_ I think microservices are the way forward: it doesn't take effort because the programmer is forced into doing the right thing. On projects with many different types of coders (and let's face it, we are all different) consistency drops off fast. Of course you can do it with monoliths, but I'm coming from a "real world" scenario where there are many people with different levels of ability and different levels of giving a cr@p about code quality. Microservices let people who code badly do it in isolation and let themselves be the only ones who have to suffer under it, and ultimately learn from it (if they are not fired first). Also, decoupling in a monolith vs by deployment is really just which git repo the code lives in, which are next to each other in the same directory on your hard drive. If there is shared code, factor it out as a library/module and install it into the projects that need it. It's not a big deal.
gedy
> If the monolithic application is written in a language with sufficient encapsulation and good tooling around multi-module projects, then you can indeed have well known and encapsulated interfaces within the monolith. Within the monolith itself you can create a DAG of enforced interfaces and dependencies that is logically identical to a set of services from different codebases.
So then, not Rails apps
LelouBil
With microservices, you can also version them independently. In a monolith you can't roll back "a part" of the app to the latest version if you pushed multiple unrelated features at once.
butlike
I disagree. The older the code is, the more hands it's been through, which generally means it's stronger, hardened code. Each `if` statement added to the method is an if statement to catch a specific bug or esoteric business requirement. If a rewrite happens, all that context is lost and the software is generally worse for it.
That being said, I agree that maintaining legacy systems is far from fun.
smaudet
> If a rewrite happens, all that context is lost and the software is generally worse for it.
Unless you have a strong test suite which tests the absence of those bugs. OFC you can never prove the absence of an issue just the continued functionality of your codebase, but re-writes are often prompted by weird "in-between" functionality becoming the norm (or slow/buggy behavior).
Of course a lot of test suites are of dubious quality/many devs have no idea what a good test suite looks like (usually unit tests are some combination of throw-away/waste-of-time, acceptance tests are mostly happy-path and due to recent trends integration tests are all but non-existent).
But in theory, re-writes are fine when you do have a test-suite. Even with a bad one, you learn what areas of the application were never properly tested and have opportunities to write good tests.
wernercd
> The best code is the easiest to throw away and rewrite, because let's face it, the older the code is, the more hands it's been through, the worse it is, but more importantly the less motivated anyone is in maintaining it.
The more testing that's been done and the more stable it should be.
The argument for new can be flipped because new doesn't mean better and old doesn't mean hell.
antonhag
This assumes a clear interface. Which assumes that you get the interfaces right - but what's the chance of that if the code needs rewriting?
Most substantial rewrites cross module boundaries. With microservices, changing a module boundary is harder than in a monolith, where it can be done in a single commit/deploy.
bibabaloo
How many developers do you have working on that monolith though? The size of your binary isn't usually why teams start breaking up a monolith.
starttoaster
> Wouldn't it be a hell of a lot easier if it was all in one place where each commit is globally consistent?
I always find this sentence to be a bit of a laugh. It's so commonly said (by either group of people with a dog in this fight) but seemingly so uncommonly thought of from the other group's perspective.
People that prefer microservices say it's easier to change/rewrite code in a microservice because you have a clearly defined contract for how that service needs to operate and a much smaller codebase for the given service. The monolith crowd claims it's easier to change/rewrite code in a monolith because it's all one big pile of yarn and if you want to change out one strand of it, you just need to know each juncture where that strand weaves into other strands.
Who is right? I sure don't know. Probably monoliths for tenured employees that have studied the codebase under a microscope for the past few years already, and microservices for everyone else.
tklinglol
My first gig out of school was a .net monolith with ~14 million lines of code; it's the best dev environment I've ever experienced, even as a newcomer who didn't have a mental map of the system. All the code was right there, all I had to do was embrace "go to definition" to find the answers to like 95% of my questions. I spend the majority of my time debugging distributed issues across microservices these days; I miss the simplicity of my monolith years :(
iterateoften
Not so much a ball of yarn but more like the cabling coming out of a network cabinet. It can be bad and a big mess if you let it, but most professionals can organize things in such a way that maintenance isn’t that hard.
starttoaster
Well, the point I was making was that the same can easily be true of microservice architectures. If you have people that don't know what they're doing architecting your microservices, you'll have a difficult time maintaining them without clear and strict service contracts.
It's not clear to me that we're ever comparing apples to apples in these discussions. It would seem to me that everyone arguing left a job where they were doing X architecture the wrong way and now they only advocate for Y architecture online and in future shops.
disintegore
I've noticed that a lot of industry practices demonstrate their value in unexpected ways. Code tests, for instance, train you to think of every piece of code you write as having at minimum two integrations, and that makes developers who write unit tests better at separating concerns. Even if they were to stop writing tests altogether, they would still go on to write better code.
Microservices are a bit like that. They make it extremely difficult to insert cross cutting concerns into a code base. Conditioning yourself to think of how to work within these boundaries means you are going to write monolithic applications that are far easier to understand and maintain.
masterj
If an organization can't figure out how to factor out clearly-defined contracts within a single codebase and maintain that over time, adding a network hop and multiple codebases into that will not make it any easier.
protomolecule
"because you have a clearly defined contract for how that service needs to operate"
But if you need to change the contract the changes span the service and all of its clients.
sanderjd
> you just need to know each juncture where that strand weaves into other strands.
No I don't, that's what computers are for. It's why static analysis is good. Instead of knowing what calls what, you say, "yo, static analysis tool, what calls this?".
BHSPitMonkey
The comment you quoted is talking about the non-monolithic situations where static analysis tools cannot help you, e.g. when the callers are external and difficult to trace.
tsss
No one says you can't have all of your microservices use the same language...
bob1029
Absolutely. This is how we operated when we were at the peak of our tech showmanship phase. We had ~12 different services, all on .NET 4.x and all built the exact same way.
This sounds like it could work, but then you start to wonder how you'd get common code into those 12 services. Our answer at the time was NuGet packages, but we operate in a sensitive domain with proprietary code, so public NuGet feeds were a no-go. So, we stood up our own goddamn NuGet server just so we could distribute our own stuff to ourselves in the most complicated way possible.
Even if you are using all the same language and patterns everywhere, it still will not spare you from all of the accidental complexity that otherwise becomes essential if you break the solution into arbitrary piles.
ValtteriL
Can't you share them via shared libraries in .NET?
lmm
Does .net not have a way to run a simple private repository like Nexus? Over in JVM-land that's a basic thing that you'd be doing anyway.
ljm
Ideally you would use the same language if it had distributed support. Use Erlang or Elixir for example and you have everything you need for IPC out of the box. Might take a little bit more effort if you're on Kubernetes.
One of my problems with microservices isn't really the services themselves, but the insane amount of tooling that creeps in: GRPC, Kafka, custom JSON APIs, protobufs, etc. etc. and a lot of them exist in some form to communicate between services.
tsss
If you do so much IPC that you have to design your language around it you're probably doing it wrong. I don't think that moving to a monolith would really cut down that much on other technologies. Maybe you can do without Kafka, but certainly you will need some other kind of persistent message queue. You will still need API docs and E2E integration tests.
switch007
You need a CTO with a backbone to say no to the senior people hell bent on padding their resumes and scratching an itch though
Too often devs weaponise microservices and say that one of the key reasons for microservices is to allow free language choice!
duncan-donuts
I was in a meeting to talk about our logging strategy at an old company that was starting micro services and experiencing this problem. In the meeting I half heartedly suggested we write the main lib in C and write a couple wrapper libs for the various languages we were using. At the time it felt kinda insane but in hindsight it probably would have been better than the logging mess we created for ourselves.
anonymousDan
Maybe logging should be a microservice....
Thaxll
Why did you have 2-3 languages? That doesn't make sense. I've been using microservices for a while and it was only ever one language at a time.
drzaiusx11
We have 2 supported langs across our cloud teams and our core libraries are dual-lang'd where applicable; meaning they're both Ruby Gems and Pypi packages (Python3) in one repo with unified APIs and backed by a shared static config where applicable (capability definitions etc.) Each dual lib is released simultaneously with matching SemVer versions to our various artifactory instances & S3 buckets (for our lambda "layers"), automatically on every push to mainline by CI/CD.
It works surprisingly well. We're evaluating a 3rd language but won't make that choice lightly (if it happens at all.)
We have 14+ micro services, and it's fairly easy to "rewrite the shit pile" when you actually follow the micro designation. One of our services was originally in perl and we, quite mechanically, rewrote it in ruby in a sprint to align with our other services.
Speaking from personal experience, when monoliths and the teams working on them get big enough, you start having "action at a distance" problems, where seemingly benign changes affect completely unrelated flows, often to catastrophic effect.
You make an innocuous-looking resource update in the monolith, like a CSS stylesheet change that fixes a bug in a flow your team owns, and it breaks 10 other flows owned by teams you've never heard of, because they were using the existing structure for Selenium tests or some JS that now fails to traverse the DOM because the order of selectors changed, etc.
Microservices are as much a team organizational tool as they are a code one. The idea being those that work on the service "know it", and all it does. They can wrap their head around the whole thing. I think some orgs don't really get this point and _start_ with microservices, completely unnecessarily, for the stage they're at as a company. You always start with a monolith and if you get to the point where everyone is stepping on each other's toes from the lack of enforceable boundaries in the code, you do the obvious and start to create those boundaries.
Microservices aren't the only way to do this, of course. Any way of dividing up your service with enforceable contracts will work. Modules get designated with codeowners and assigned to various teams. Resources that were once shared get split up to align better with the team structures. Many frameworks allow multiple "apps" or distinct collections of APIs, so you can still ship your one binary without splitting the collections into different services. As soon as you have to independently scale one set of APIs but not another, you can start looking at service boundaries again. For the majority, that day will never come.
sazz
Speaking from a Release Management point of view going from a monolith to microservices is often done for the wrong reasons.
The only valid reason for actually making the change seems to be scaling, due to performance bottlenecks. Everything else is just shifting complexity from software development to system maintenance.
Of course, developers will be happy that they have that huge "alignment with other teams" burden off their shoulders. But clarity about when and how a feature is implemented, properly tested across microservices, and then activated and hypercared on production will be much harder to reach if the communication between the development teams is not mature enough (which is often the actual reason for breaking up the monolith).
buster
There are many valid reasons (and many wrong reasons). I would say: if you have multiple stakeholders, evolving business needs, and many (> 10) developers, there might be a good reason to have independently deployable, testable, and releasable units. Having few developers with a well-defined context working on multiple microservices is a pain, though.
Regarding "Everything else is just shifting complexity from software development to system maintenance": this sounds reasonable if your software is actively developed. Development is expensive. It may very well be that the cost of maintaining a distributed system is lower than the cost of developing a very large monolith with a large team. In the end, it depends.
sazz
"There might be a good reason to have independent deployable, testable and releaseable units"
Of course this is the bottom line. But everything you define in the sentence can be achieved with a proper pipeline and repository architecture based on a monolith as well. For example teams could use a branch setup where they own their own team branches capable of merging to master and deploying. Each team could then define their own testing strategy and Definition of Done on their "team master".
Having the ability to release independently is actually a social problem, not a technical one. But the symptom of that social misalignment often shows up as a technical problem (dropping release KPIs, etc.)
So changing from a monolith to microservice will most likely only fight the symptom, not the root cause.
buster
First, you offer technical solutions (pipelines, branching, ...). The cost of having multiple teams branching and merging the same code base can be significant. It's often not as easy as you make it sound.
altairTF
This is the sole reason we're considering breaking this out into a separate component of our app. It's become too large to maintain effectively. The rest of the app will remain unchanged
0xbadcafebee
So where's "the costs of monoliths" post? They don't show up here, because everyone is out there cluelessly implementing microservices and only sees those problems. If everyone were out there cluelessly implementing monoliths, we'd see a lot of "monoliths bad" posts.
People don't understand that these systems lead to the same amount of problems. It's like asking an elephant to do a task, or asking 1000 mice. Guess what? Both are going to have problems performing the task - they're just different problems. But you're not going to get away from having to deal with problems.
You can't just pick one or the other and expect your problems to go away. You will have problems either way. You just need to pick one and provide solutions. If 'what kind of architecture' is your biggest hurdle, please let me work there.
yCombLinks
The 8 trillion "monolith bad" posts are why we are now inundated with microservices. This is the blowback as people realize the cost/benefit didn't work for them.
wnolens
If a monolith is well-factored, what is the difference between it and co-located microservices?
Probably just the interface - function calls become RPC. You accept some overhead for some benefits of treating components individually (patching!)
What is the difference between distributed microservices v.s. co-located microservices?
Deployment is more complex, but you get to intelligently assign processes to more optimal hardware. Number of failure modes increases, but you get to be more fault tolerant.
There's no blanket answer here. If you need the benefits, you pay the costs. I think a lot of these microservice v.s. monolith arguments come from poor application of one pattern or the other, or using inadequate tooling to make your life easier, or mostly - if your software system is 10 years old and you haven't been refactoring and re-architecting as you go, it sucks to work on no matter the initial architecture.
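To make the "function calls become RPC" part concrete, here's a minimal sketch (hypothetical names, Python purely for illustration) of what "only the interface changes" looks like when a well-factored component is split out:

```python
from abc import ABC, abstractmethod
import json
import urllib.request


class PriceService(ABC):
    """The contract; both implementations honor it."""

    @abstractmethod
    def quote(self, sku: str) -> float: ...


class LocalPriceService(PriceService):
    """Monolith: a plain in-process call, no network."""

    def __init__(self, catalog: dict[str, float]):
        self.catalog = catalog

    def quote(self, sku: str) -> float:
        return self.catalog[sku]


class RemotePriceService(PriceService):
    """Split-out service: same contract, but now it's an HTTP hop."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def quote(self, sku: str) -> float:
        with urllib.request.urlopen(f"{self.base_url}/quote/{sku}", timeout=2) as resp:
            return json.load(resp)["price"]


def checkout_total(prices: PriceService, skus: list[str]) -> float:
    # The caller is identical either way; only the wiring changes.
    return sum(prices.quote(s) for s in skus)
```

The caller doesn't change, but the remote version can now fail in all the ways the replies below enumerate, which is where the real cost shows up.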
elevation
Monolith->microservice is not a trivial change no matter how well-factored it is to begin with -- though being poorly architected could certainly make the transition more difficult!
> Probably just the interface - function calls become RPC.
This sounds simple, but once "function calls become RPC" then your client app also needs to handle:
* DNS server unreachable
* DNS server reachable but RPC hostname won't resolve
* RPC host resolves but not reachable
* RPC host reachable but refuses connection
* RPC host accepts connection but rejects client authentication
* RPC host presents untrusted TLS cert
* RPC accepts authentication but this client has exceeded the rate limit
* RPC accepts authentication but says 301 moved permanently
* RPC host accepts request but it will be x seconds before result is ready
* RPC host times out
Even for a well-factored app, handling these execution paths robustly probably means rearchitecting to allow you to asynchronously queue and rate-limit requests, cache results, handle failure with exponential back-off retry, and operate with configurable client credentials, trust stores, and resource URLs (so you can honor 301 Moved Permanently), and log all the failures.
You'll also need additional RPC functions and parameters to provide data that had previously been in context for local function calls.
Then the monolith's UI may now need to communicate network delays and failures to the user that were impossible before network segmentation could split the app itself.
Refactoring into microservices will require significant effort even for the most well built monolith.
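To put a number on "significant effort": every one of those failure modes ends up funneled through a wrapper like this. A rough sketch, not any particular RPC library (names are hypothetical):

```python
import random
import time


class RpcError(Exception):
    """Stand-in for DNS failures, refused connections, 5xx responses, timeouts, etc."""


def call_with_retries(rpc_fn, *args, attempts=5, base_delay=0.2, max_delay=5.0):
    """Retry an RPC with exponential backoff and jitter.

    Assumes rpc_fn is idempotent; if it isn't, retrying is a new bug.
    """
    for attempt in range(attempts):
        try:
            return rpc_fn(*args)
        except RpcError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure to the caller/UI
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay))  # add jitter


# In the monolith this was just: price = quote(sku)
# Now it becomes: price = call_with_retries(remote_quote, sku)
```

And that still leaves out caching, rate limiting, credential/trust-store configuration, and the 301 case, each of which is more code again.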
smaudet
This is why I say microservices are "closer to the metal", i.e. they depend more on the physical characteristics of their environment than non-microservices.
A function call in a monolith can:
* segfault
* be called with the wrong number of parameters
* call the wrong function
* go through but never return
* substitute itself with another function
All of which are very similar to the RPC situation. However, we practically never see these because of OS guarantees like memory safety, security, etc., plus there are standardized ways of handling these problems when they do occur, notably try/catch patterns. These issues *can* be abstracted as well, but the advantage of scaling (being close to the metal) is also the disadvantage (being very close to the hardware abstractions that let you scale).
E.g., there is no difference between running a microservice on a scaling computer that guarantees memory access across a cluster with interrupts, etc. (some millions of CPUs and terabytes of memory), and running it on a bunch of instances, except the hardware. The former is exotic and abstracts the hardware, the latter does not, and so all these "low level" errors surface with great frequency.
wnolens
I was offering that simplification to compare monolith single host to multi-service single host. No network hops.
But yes, you proceed to explain (some of) the complexities of RPC over the network. Security concerns make this even worse. I could go on..
Engineering is hard
ReflectedImage
Microservices not being able to talk to each other over the network basically never comes up.
What you are saying is outright ridiculous.
nijave
That is all true but most languages have semi-sane libraries with semi-sane defaults that handle most of that. Sure you need some config and tuning but it's not completely uncharted waters
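For example, in Python-land a lot of that list is a few lines of configuration with requests plus urllib3's Retry (assuming a reasonably recent urllib3; the endpoint here is made up):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(
    total=3,                                  # retry up to 3 times
    backoff_factor=0.5,                       # exponential backoff between attempts
    status_forcelist=[429, 502, 503, 504],    # retry on these HTTP statuses
    allowed_methods=["GET"],                  # only retry idempotent calls
)
session.mount("https://", HTTPAdapter(max_retries=retry))

# Hypothetical internal endpoint; connect/read timeouts are still your job.
resp = session.get("https://pricing.internal/quote/abc123", timeout=(3.05, 10))
```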
Aerbil313
Honestly we just need better tooling. Where is my micro-service-fleet creator, which handles all these RPC failure modes built in?
hu3
> Probably just the interface - function calls become RPC.
Network calls introduce new failure modes and challenges which require more plumbing.
Can I retry this RPC call safely?
How about exponential backoff?
Is my API load-balancer doing its thing correctly? We'll need to monitor it.
How about tracing between microservices? Now we need OpenTelemetry or something like that.
How much harder is it to debug with breakpoints in a microservice architecture?
How can I undo a database transaction between 2 microservices?
Suddenly your simple function call just became a lot more problematic.
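The "can I retry this safely" question usually ends with idempotency keys. A toy sketch of the idea (all names hypothetical, and the in-memory dict stands in for a persistent store):

```python
import uuid


def transfer_with_retry(client, payload, attempts=3):
    # One key for all attempts of this logical request, so the server can dedupe.
    key = str(uuid.uuid4())
    last_err = None
    for _ in range(attempts):
        try:
            return client.post("/transfers", json=payload,
                               headers={"Idempotency-Key": key})
        except TimeoutError as err:
            last_err = err  # outcome unknown; retrying is only safe because the server dedupes
    raise last_err


# Server side: remember results by key so a replayed request returns the original outcome.
_results: dict[str, dict] = {}

def handle_transfer(key: str, payload: dict) -> dict:
    if key in _results:
        return _results[key]
    result = apply_transfer(payload)  # hypothetical business logic
    _results[key] = result            # real systems persist this, with a TTL
    return result
```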
wnolens
My first Q was re: monolith on one host to many services on one host.
And yes, when going off-host, you'll have these issues. One should not employ network hops unnecessarily. Engineering is hard. Doesn't make it not worth doing.
charcircuit
Even on one host processes can crash at different times.
crabbone
When discussing microservices, the proponents almost always forget the key aspect of the concept: the micro part. That's the part that makes it stand out (everyone has already heard about services; there's no convincing necessary there: if you feel like you need a service, you just make one and don't fret about it).
So, whenever someone advocates for this concept, they "forget" to factor in the fragmentation caused by requiring that services be very small. To make this more concrete: if you have a monolith + microservice, you don't have microservices, because the latter implies everything is split into tiny services, no monoliths.
Most of the arguments in favor of microservices fall apart as soon as you realize that it has to be micro. And once you back out of the "micro" requirement, you realize that nothing new or nothing deep is being offered.
marcosdumay
From the developer POV, the difference is in the interface. And while we have all kinds of tools to help us keep a monolith in sync and correct, the few tools we have to help with IPC can not do static evaluation and are absolutely not interactive.
From the ops POV, "deploy is more complex" is a large understatement. Each package is one thing that must be managed, with its own specific quirks and issues.
yCombLinks
You lose atomic transactions. Every business action is orders of magnitude more complex because of that.
buster
Absolutely true, but also: Usually, your business transactions happen in a business context, which happens to be in a microservice. It can be a sign of bad design if you happen to have a lot of those transactional problems.
You will have distributed transactions with a distributed microservice setup, but most transactions will still be contained within a single microservice (and thus be atomic and not distributed).
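To illustrate the gap: inside one service it's a single ACID transaction, while across services it turns into something saga-shaped where every step needs an explicit compensation. A toy sketch (all names hypothetical):

```python
# Inside one service: the database gives you atomicity for free.
def place_order_monolith(db, order):
    with db.transaction():                     # commits or rolls back as a unit
        db.insert("orders", order)
        db.decrement("stock", order["sku"], order["qty"])
        db.insert("invoices", make_invoice(order))


# Across services: no shared transaction, so you compensate by hand.
def place_order_saga(orders_svc, stock_svc, billing_svc, order):
    order_id = orders_svc.create(order)
    try:
        stock_svc.reserve(order["sku"], order["qty"])
    except Exception:
        orders_svc.cancel(order_id)                    # compensation for step 1
        raise
    try:
        billing_svc.invoice(order_id)
    except Exception:
        stock_svc.release(order["sku"], order["qty"])  # compensation for step 2
        orders_svc.cancel(order_id)                    # and step 1 again
        raise
    return order_id
```

And this still ignores the cases where the compensation itself fails, which is where most of the real complexity lives.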
PedroBatista
Similarly to frontend devs thinking a "modern web-app" can only be built with frontend frameworks/libs like React/Vue/Svelte, etc., lately I feel there's an idea floating around that "monolith" equals running that scary big black ball of tar as a single instance and therefore "it doesn't scale", which is insane.
Another observation is that the overall amount of code is much bigger, and most of these services are ~20% business/domain code and ~80% dealing with sending and receiving messages from other processes over the network. You can hide it all you want, but at the end of the day it's there and you'll have to deal with the network one way or another.
Just like the frontend madness, this microservice cult will only end once the economy goes to crap and there's no money to support all these Babel Towers of Doom.
PS: microservices have a place, which is inside a select few of companies that get something out of it more than the cost they pay.
superfrank
I think the thing people normally miss about microservices is that the goal of microservices is usually to solve organization and people problems and not technological ones. There's some tech benefits like allowing services to scale independently, but the biggest benefit is clear ownership boundaries and preventing code from becoming overly coupled where it doesn't need to be.
If you're a small team working on a single product, you probably don't need microservices, since you likely don't have the type of organizational problems that microservices solve. Microservices are likely premature optimization, and you're paying the price to solve problems you don't yet have.
koliber
Yep.
There are situations where microservices genuinely add net value. In discussions like this one, people make valid points for microservices and for monoliths. Often, words like “large” and “many” are used without providing a sense of scale.
Here's a heuristic. It's not a hard rule, and there are likely good counterexamples, but it does sketch a boundary for the for/against equation for microservices. Would love to hear feedback about whether "100" is the right number or a different heuristic would be more accurate.
Engineering departments with fewer than 100 engineers should favor monoliths and seriously question efforts to embrace microservices. Above that, there’s an increasing chance the microservices could provide value, but teams should stick to a monolith as long as possible.
qudat
> Similarly with frontend devs thinking a "modern web-app" can only be built with frontend frameworks/libs like React/Vue/Svelte
Those aren’t frameworks, they are view libraries and I’ll die on that hill.
Further, front end complexity is all in your head: https://bower.sh/front-end-complexity
PedroBatista
That's why I included "/libs", for people like you :)
To be clear, for me they are libraries, but realistically people use them as frameworks, as in "standard" ways to think, "frame" and implement something.
But because I'm not dying on any hill, especially front-end ones, I'll take a right then a left and go on my way.
economist420
While I agree with you that a lot of websites overuse JavaScript and frameworks, can you tell me what else I'm supposed to use if I'm going to build a desktop-class web app, without it becoming a huge mess or me ending up reinventing the same concepts that already exist in these frameworks?
PedroBatista
"desktop class web app" is a subset of "modern web-app". If you need frameworks, use them.
economist420
The meaning behind these is pretty blurry; what's considered a website vs. an app is a spectrum. I consider a web app something that has a similar UI experience to a desktop app, as the word "application" derives from the desktop.
jeffbee
The article seems to take for granted that your development org is completely broken and out of control. They can't decide what to work on during sprints, they furtively introduce third-party libraries and unknown languages, they silently ship incompatible changes to prod, etc. I guess microservices are easier if your developers aren't bozos.
alt227
Unfortunately there are very often bozos on your team, or complete teams of bozos working on the same project as you. I'm sure microservices are easier if you work in a development team of smart, competent, intelligent developers. But then, I'm sure everything would be easier!
simonw
Every developer is a bozo for their first few months in a new job, simply because it takes time to absorb all of the information needed to understand how the existing systems work.
novia
In my experience the bozos were absolutely not the newbies. Maybe you work in a job that is dedicated to engineering only, but what happens is that often in a company of non-engineers, some kinda reorg happens where a person ends up on your team who never studied programming in their life, with the assumption that the person is a go-getter who will be able to pick all this stuff up. The experiment is never called a failure when they consistently fail to learn. 2 years later the same underwhelming "engineer" will still be there getting other people to do their work while desperately trying to introduce bugs into production.
jeffbee
Acculturating new developers is one of the main tasks of an organization. I don't think it's very difficult to communicate that some company uses language[s] X[, Y and Z] only.
simonw
That depends on the culture of each specific organization. Are there top-down engineering decisions? Is there a push for more team autonomy?
My experience is that many organizations have something of a pendulum swinging between those two positions, so the current state of that balance may change over time.
Also: many new developers, when they hear "microservices", will jump straight to "that means I can use any language I want, right?"
(Maybe that's less true in 2023)
aleksiy123
I recently had some discussions and did some research on this topic and I feel like there is a lot people don't talk about in these articles.
Here are some more considerations in the microservice vs. monolith tradeoff. It's also important to consider these two things as a spectrum and not a binary decision.
1. Isolation. Failure in one service doesn't fail the whole system. Smaller services have better isolation.
2. Capacity management. It's easier to estimate the resource usage of a smaller service because it has fewer responsibilities. This can result in efficiency gains. On top of this, you can also give optimized resources to specific services: a prediction service can use GPUs while a web server can be CPU-only. A monolith may need compute that provides both, which could result in less optimized resources.
3. DevOps overhead. In general a monolith has less management overhead because you only need to manage/deploy one or a few services instead of many.
4. Authorization/permissions. Smaller services can be given a smaller scope of permissions.
5. Locality. A monolith can share memory and therefore has better data locality. Small services talk over the network and have higher overhead.
6. Ownership. Smaller services can have more granular ownership. It's easier to transfer ownership.
7. Iteration. Smaller services can move independently of one another and can release at separate cadences.
tazard
1. Isolation
With a well built monolith, a failure on a service won't fail the whole system.
For poorly built microservices, a failure in one service absolutely does bring down the whole system.
Not sure I am convinced that by adopting microservices, your code automatically gets better isolation
scurvy_steve
I work on low code cloud ETL tools. We provide the flexibility for the customer to do stupid things. This means we have extremely high variance in resource utilization.
An on-demand button press can start a process that runs for multiple days, and this is expected. A job can do 100k API requests or read/transform/write millions of records from a database; this is also expected. Out-of-memory errors happen often and are expected. It's not our bad code, it's the customer's bad code.
Since jobs are run as microservices on isolated machines, this is all fine. A customer (or multiple at once) can set up something badly, run out of resources, and fail or go really slow, and nobody is affected but them.
aleksiy123
It's not automatic, but it has the potential for more isolation by definition.
If a service has a memory leak or crashes, it only takes down that service. It is still up to your system to handle such a failure gracefully. If that service is a critical dependency then your system fails, but if it is not, then your system can still partially function.
If your monolith has a memory leak or crashes, it takes down the whole monolith.
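The "handle such a failure gracefully" part is still on you, though. A minimal sketch of what partial functioning looks like in practice (hypothetical services):

```python
def product_page(catalog_svc, recommendations_svc, sku):
    # Critical dependency: if this fails, the page genuinely can't render.
    product = catalog_svc.get(sku)

    # Non-critical dependency: degrade instead of failing the whole request.
    try:
        recs = recommendations_svc.for_product(sku, timeout=0.2)
    except Exception:
        recs = []  # ship the page without recommendations

    return {"product": product, "recommendations": recs}
```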
manicennui
1. Except that a single process usually involves multiple services and the failure of one service often makes entire sequences impossible.
aleksiy123
But not all sequences. It depends on your dependencies. Some services are critical for some processes. In the monolith design it's a critical dependency for all processes.
Gluber
Every company I have advised jumped on the microservice bandwagon some time ago. Here is what I tell them:
1. Microservices are a great tool... IF you have a genuine need for them.
2. Decoupling in and of itself is not a goal.
3. Developers who are bad at writing proper modular code in a monolithic setting will not magically write better code in a microservice environment. Rather, it will get even worse, since APIs (be it gRPC, REST, or whatever) are even harder to design.
4. Most developers have NO clue about consistency or how to achieve certain guarantees in a microservice setting. (Fun fact: my first question to developers in that area is: define your notion of consistency, before we get to the fun stuff like Raft or Paxos.)
5. You don't have the problems where microservices shine (e.g. banks with a few thousand RPS).
6. Your communication overhead will dramatically increase.
7. An application that does not genuinely need microservices will be much cheaper as a monolith with proper code separation.
Right now we have a generation of developers and managers who don't know any better and just do what everybody else seems to be doing: microservices and Scrum... and then wonder why their costs explode.
Don't disagree with the article, but to play Devil's Advocate, here are some examples of when IME the cost IS worth it:
1) there are old 3rd party dependency incompatibilities that you can spin off and let live separately instead of doing a painful refactor, rebuilding in house, or kludgy gluing
2) there are deployment constraints on mission-critical, highly available systems that should not hold up deployment of other systems with different priorities or sensitive business-hours windows
3) system design decisions that cannot be abstracted away are at the mercy of some large clients of the company that are unable or unwilling to change their way of doing things - you can silo the pain.
And to be clear, it's not that these things are "cost free". It's just a cost that is worth paying to protect the simpler monolith from becoming crap encrusted, disrupted with risky deploys, or constrained by business partners with worse tech stacks.