If the change is in a shared library (and not in a service shared over the network), it's fine to change all usages at once. Deploying one service wouldn't affect the others.
If the change affects the public interface of a service, then there's no option but to make your changes backward-compatible.
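A minimal sketch of what a backward-compatible interface change can look like, using a hypothetical JSON endpoint (the field names and defaults here are made up for illustration): new fields are optional with defaults on the way in, and only ever *added* on the way out, so old clients keep working.

```python
# Hypothetical handler: adding a new optional "locale" field without
# breaking old clients. Old clients send {"user_id": ...}; new clients
# may also send {"locale": ...}.

def handle_request(payload: dict) -> dict:
    user_id = payload["user_id"]          # required by both old and new clients
    locale = payload.get("locale", "en")  # new field: a default keeps old clients working

    # The response keeps every field old clients rely on and only adds new ones.
    return {
        "user_id": user_id,
        "greeting": {"en": "hello", "de": "hallo"}.get(locale, "hello"),
    }

old_client = handle_request({"user_id": 1})                  # still works unchanged
new_client = handle_request({"user_id": 1, "locale": "de"})  # opts into the new field
```

The same rule applies in reverse when removing a field: stop reading it first, keep emitting it until no deployed client depends on it, then drop it.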
I hope nobody's life depends on the uptime of a web-based distributed system.
But, well, I also expect nobody's life to depend on it. There would only be a short window between people getting into that situation and them no longer having a life to depend on anything.
Well, the ARM CPUs we use are in general-purpose computers as well. Though you're correct, we don't follow the same practices as general-purpose computers.
I don't think synchronized deployments are really possible - you'd have to either still do the iterative thing, or have some versioning system in place.
It is possible for trivial cases - it's what I do in my basement, for example - though even there I've come to prefer keeping things intentionally unsynchronized: it ensures that after an update I still have some system that works.
It takes the guesswork out of library migrations. API migrations still need forwards/backwards-compatibility hygiene, unless you blue/green your entire infrastructure to compatible versions, which is possible but not necessarily practical.
If you design with a "no deprecations" mentality and deploy backend before frontend, in _most cases_ this isn't an issue -- the frontend code that needs the new table or column or endpoint that doesn't exist yet won't run until those things are deployed, and the new backend endpoints will be fully backwards compatible with the old frontend, so no issues.
You don't even need to be that dogmatic to make this work either -- simply stipulating backwards compatibility between the two previous deploys should be sufficient.
The better version of this is simply versioning your backend and frontend but I've never been that fancy.
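One way to make the "compatible with the two previous deploys" stipulation concrete is a hypothetical sketch like this (version numbers and function names are invented for illustration): the backend advertises which frontend versions it can serve, and a deploy check refuses to ship a backend that drops support for the rollout window.

```python
# Hypothetical compatibility-window check. The invariant: the backend must
# be able to serve the frontend shipped with this deploy plus the two
# previous ones, so an in-progress rollout never breaks.

BACKEND_VERSION = 42
SUPPORTED_FRONTEND_VERSIONS = {40, 41, 42}

def can_serve(frontend_version: int) -> bool:
    """Would this backend correctly serve a frontend at the given version?"""
    return frontend_version in SUPPORTED_FRONTEND_VERSIONS

def deploy_check() -> bool:
    """Gate run in CI before deploying: the current and two previous
    frontend versions must all still be servable."""
    return all(can_serve(v) for v in range(BACKEND_VERSION - 2, BACKEND_VERSION + 1))
```

In practice the frontend would report its version in a header and the backend could branch on it, but the deploy-time check is what keeps the stipulation from being forgotten.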
Because they do nothing to make it hard to add a coupling or break modularity.
You should of course use good discipline to ensure that doesn't happen. Compared to multi-repo, it is a lot easier in a monorepo to violate coupling and modularity and go undetected. Anyone using a monorepo needs to be aware of this downside and deal with it. Multi-repo has other downsides, and those dealing with them need to be aware of those and mitigate them. There is no perfect answer, just compromises.
They make it easy, and then human nature and dev laziness do the rest. If you can reach across the repo and import any random piece of code, devs end up doing just that. It's a huge, huge pain to try to untangle later.
That's why tools like Bazel are strict about visibility and add friction and explicitness to those sorts of things. But this tends not to be the first thing on people's minds when starting a new project... so in the monorepos I've worked on, it's never been noticed until it's too late to easily fix.
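For the curious, Bazel's `visibility` attribute is what enforces this. A small sketch of two hypothetical `BUILD` targets (the package paths and names are made up): the internal library is private to its package, and even the public API is only exposed to an explicit allowlist, so "reaching across the repo" fails at build time instead of being discovered in a code review years later.

```starlark
py_library(
    name = "billing_internal",
    srcs = ["billing_internal.py"],
    # Only targets in this package may depend on this; importing it from
    # elsewhere in the repo is a build error, not a convention.
    visibility = ["//visibility:private"],
)

py_library(
    name = "billing_api",
    srcs = ["billing_api.py"],
    deps = [":billing_internal"],
    # Explicit allowlist: only the checkout service may depend on the API.
    visibility = ["//services/checkout:__subpackages__"],
)
```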