If they posted it on an error or outage page, then they probably didn't mean to set it that way, which implies a non-obvious mistake. They might be doing something silly with their permissions.
And that is presuming that this is some sort of technical issue.
"As part of an internal change task" is the justification listed. Maybe this is a genuine accident.
Someone paranoid might think the for-profit management at Elastic is trying to pull some of their previously free software behind a paid product. Or perhaps they accidentally marked all repos private when they only intended to make a few of them private. They've had beef with AWS in the past, which led them to change their licensing over things AWS was doing. So I'll fully believe it was a genuine accident if all the formerly public repos become public again.
It's a configuration error (sorry!). Also with thousands of forks this would be a pretty pointless operation. Once something is out (and that includes a license), you cannot just take it back — it will be there forever.
I seem to remember someone posting about this once -- you lose all your stars / followers when going from public to private, and they're not restored when you go back.
I would bet, as a result of this and other things like fork management, that they'll be working with GitHub support to try to reverse the go-private and all its consequences.
If it's this: https://news.ycombinator.com/item?id=41060102
Then they will need to delete (or rename) and remake the repos, then push again. Any security problem would also require doing some due diligence to make sure you really squashed it.
Yeah. This was a configuration error. Keys you just rotate. Making repos private accidentally creates a whole new mess with forks, stars, ... Not recommended.
See the other two posts for recommendations, but be aware that neither of the options listed is at the level of experience Steam may have spoiled us with. My last try was 4 or 5 months ago and I ended up just going with a VM.
Rclone crypt isn't really related to Borg. Rclone is a tool for copying files from one machine to another, in this case encrypting them before copying. Think rsync, but working with cloud storage.
Borg is a different tool, for backup. It deduplicates, encrypts, snapshots, checksums, compresses, … source directories into a single repository. It doesn't work with files, rather with blocks of data. It includes commands for repository management, like searching data, pruning or merging snapshots, etc. You then transfer or sync the repository to wherever you want, with a tool such as rsync/SSH or rclone. Rclone is now natively supported, so you don't need to keep the repository both locally and on the remote; you can back up directly to the remote.
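For anyone who hasn't used it, a minimal Borg workflow looks roughly like this. This is just a sketch using Borg 1.x syntax; the paths and the `remote:` name are placeholders, and the final `rclone sync` step is one way to get the finished repository onto cloud storage:

```shell
# Create an encrypted repository (path is a placeholder)
borg init --encryption=repokey /backups/repo

# Take a deduplicated snapshot of ~/data; {hostname} and {now} are
# Borg's built-in archive-name placeholders
borg create /backups/repo::'{hostname}-{now}' ~/data

# Keep a bounded history of snapshots
borg prune --keep-daily=7 --keep-weekly=4 /backups/repo

# Then sync the repository to a cloud remote (remote name is a placeholder)
rclone sync /backups/repo remote:borg-repo
```

Incremental runs of `borg create` only add new blocks to the repository, so the follow-up `rclone sync` only transfers what changed.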
Sure, but there is some requirement to not just blindly copy everything over and over, and that's where I've seen things get tricky before. If you enable encryption, you have to re-upload the entire snapshot periodically.
It's annoying because if you have TBs of stuff, that blows. I'm just curious what systems exist for incremental, encrypted backups that don't require uploading full new snapshots.
Duplicity is very old backup software that uses the "full + incremental" strategy on a file-by-file basis, like tape backup systems. The full backup must be restored first, and then all of the incrementals. This becomes impractical over time, so, as with tapes, you must periodically repeat the full backup so the incremental chains don't grow too long.
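The full + incremental flow described above looks roughly like this (the paths are placeholders, and any backend URL Duplicity supports can stand in for `file://`):

```shell
# Periodic full backup (the anchor of a new chain)
duplicity full ~/data file:///backups/data

# Incrementals record only changes since the previous backup in the chain
duplicity incremental ~/data file:///backups/data

# Restoring replays the full backup plus every incremental after it
duplicity restore file:///backups/data ~/restored

# Cap history so chains don't grow without bound
duplicity remove-all-but-n-full 2 --force file:///backups/data
```

The restore step is exactly where long chains hurt: every incremental between the last full and the point you want has to be applied.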
Modern backup programs split files into blocks and keep track of data at the block level. You still do an initial full backup followed by incrementals, but block tracking allows you to restore any version of any file without restoring the full first and all following incrementals. The trade-off is in complexity: tracking blocks is more complex than tracking files.
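To make the block-tracking idea concrete, here's a toy sketch in POSIX shell: fixed-size 4 KiB blocks, with the SHA-256 hash as the block ID in a content-addressed store. The `backup`/`restore` helper names are made up for illustration; real tools use variable-size chunking and encrypt the blocks, but the principle is the same — each snapshot is just an index of block hashes, so any version restores directly from the store:

```shell
#!/bin/sh
set -e
store=$(mktemp -d)          # content-addressed block store
mkdir -p "$store/blocks"

# backup FILE INDEX: split FILE into 4 KiB blocks, store each unseen
# block under its hash, and write the ordered hash list to INDEX
backup() {
  rm -rf "$store/tmp"; mkdir "$store/tmp"
  split -b 4096 "$1" "$store/tmp/blk."
  : > "$2"
  for b in "$store/tmp"/blk.*; do
    h=$(sha256sum "$b" | cut -d' ' -f1)
    # dedup: only store blocks we haven't seen before
    [ -f "$store/blocks/$h" ] || cp "$b" "$store/blocks/$h"
    echo "$h" >> "$2"
  done
}

# restore INDEX OUT: rebuild the file by concatenating its blocks
restore() {
  : > "$2"
  while read -r h; do
    cat "$store/blocks/$h" >> "$2"
  done < "$1"
}
```

Because `restore` reads only the index for the version you ask for, there is no "full first, then incrementals" replay — the cost of that is maintaining the block store and its indexes.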
Can't they just make them public again? Am I missing something?