
Whenever you update the code of separate services, you still need to update all running instances at the same time. Being able to change the code in one commit does not necessarily mean the services also deploy at the same time, which can cause API errors.


It depends on what kind of APIs we're talking about. There are APIs in the sense of protobuf microservices, which are deployed individually and talk to live systems, but there are also APIs in the sense of libraries.

In my company the former is handled by never deprecating any field (and then clients just get to deal with picking and choosing the ones they want). If a major schema change is required, spin up an entirely new service, tell people to migrate over, and eventually turn off the old one when nobody's using it anymore.
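The never-deprecate-fields approach can be sketched roughly like this: the server keeps adding fields over time, and each client reads only the ones it cares about and ignores the rest. This is a minimal illustration, not any particular company's code; the field names (`user_id`, `display_name`) are made up.

```python
def parse_user(response: dict) -> dict:
    """Extract only the fields this client needs, ignoring unknown ones."""
    return {
        "id": response["user_id"],
        "name": response.get("display_name", ""),  # newer, optional field
    }

# An old-style response and a newer one with extra fields both parse fine.
old = {"user_id": 1}
new = {"user_id": 2, "display_name": "Ada", "avatar_url": "..."}
print(parse_user(old))  # {'id': 1, 'name': ''}
print(parse_user(new))  # {'id': 2, 'name': 'Ada'}
```

Because no field is ever removed, old clients keep working against new responses, and new clients degrade gracefully against old ones.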

Library changes can be done atomically: change the API, change the call sites, if tests pass you're done. One may opt to use the same strategy as w/ microservices here too.
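An atomic library change in a monorepo looks something like the sketch below: the new signature and every call site land in the same commit, so a repo-wide test run validates the whole change at once. All names here are hypothetical.

```python
# lib/pricing.py -- new API: takes an explicit currency instead of assuming USD
def total_price(amounts: list[float], currency: str) -> str:
    """Sum line items and format them with the given currency code."""
    return f"{sum(amounts):.2f} {currency}"

# app/checkout.py -- call site updated in the same commit as the API change
def checkout(cart: list[float]) -> str:
    return total_price(cart, "USD")

# One repo-wide test run confirms the API and all callers agree.
assert checkout([1.50, 2.25]) == "3.75 USD"
```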

Regardless, the dynamics of interacting w/ microservice API changes don't change based on whether you're on a monorepo or not. But a monorepo can help in the sense that some aspects of a service are version-controllable (namely the schema definition files), and it's in the clients' best interest to have those always up-to-date rather than having to remember to run some out-of-band syncing command all the time.


If you wanted to, you could spin up new clusters of every changed service and direct traffic such that every version of a service is a separate cluster, then slowly redirect external traffic to the new services.

Every internal service would always hit the exact version it was compiled with and you only need to worry about external api compatibility at that point.
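The gradual external-traffic shift could be sketched as weighted routing at the entry point, with each weight set belonging to one rollout stage. This is a toy model under assumed names (`v1`, `v2`), not a real load-balancer config; internal calls are assumed to stay inside whichever cluster received the request.

```python
import random

def pick_cluster(weights: dict[str, float]) -> str:
    """Choose a cluster for an incoming external request by weight."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names])[0]

# Start with most traffic on the old cluster, then shift stage by stage.
rollout = [
    {"v1": 0.9, "v2": 0.1},
    {"v1": 0.5, "v2": 0.5},
    {"v1": 0.0, "v2": 1.0},  # old cluster can now be turned off
]

for weights in rollout:
    cluster = pick_cluster(weights)  # route one external request
```

Since `random.choices` never picks a zero-weight entry, the final stage sends everything to the new cluster and the old one can be decommissioned.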

In most use cases you can just get some scheduled downtime, though.



