The package repo is not lost. It is online and serving packages with corrupted signatures.
That's a crypto error interpretable as an active MITM attack on the repo, which causes apt-get and its subcommands to terminate abruptly with a critical error.
Last but not least, the repository is mirrored and cached, the mirrors simply reproduced the corrupted source repo :D
It is not a mere case of offline repository, though I can understand the confusion ;)
I'm well aware of how traditional "dumb" mirrors work (e.g. ftpsync), and those would indeed just mirror the packages with their incorrect signatures. But for an internal network, surely you'd want something more like aptly[0] so you can roll back to a working version?
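For what it's worth, the rollback workflow with aptly looks roughly like this. A sketch only — the repo URL, distribution, and snapshot names are placeholders, not the actual Docker repo layout at the time:

```shell
# Mirror the upstream repo and take an immutable snapshot of a known-good state.
# (URL/distribution are illustrative placeholders.)
aptly mirror create docker-stable https://download.docker.com/linux/ubuntu xenial stable
aptly mirror update docker-stable
aptly snapshot create docker-known-good from mirror docker-stable

# Publish the snapshot; internal clients point at this, not at upstream.
aptly publish snapshot -distribution=xenial docker-known-good

# If a later mirror update pulls in a broken state (bad signatures, etc.),
# switch the published endpoint back to the last known-good snapshot.
aptly publish switch xenial docker-known-good
```

Because clients only ever see published snapshots, a corrupted upstream never reaches them until you explicitly update and re-publish.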
aptly 1.0.0 => released on 27 March, exactly one month ago.
First commit done in 2014.
It's funny how people always mention workarounds that were non-existent or not viable at the time of the issue. ;)
I have worked on mirrors a few times in my career, and I am sadly well aware that even the best mirroring solutions are rather poor. Not gonna argue that they're flawless. Not gonna argue against wishing for better.
Nonetheless, it probably wasn't nearly as stable, and certainly wasn't as well-known then. So I'll concede that it may not have been viable at that point.
Honestly, it's reasonable to live with an apt outage one day a year. Any workaround that adds critical components to the distribution chain is guaranteed to make it worse.
The Docker crypto fuckup was quite peculiar. The distribution pipeline can't handle that. It also broke their other repos (including ubuntu) and propagated to the mirrors.