It strikes me that, if we are ever to expand the Internet into space on a universal scale (à la Vint Cerf and delay-tolerant networking, as an example), the inherent physics problems of distance and connectivity in space would probably make decentralization an absolute requirement. I mean, it seems it would not be uncommon for there to be a "local net" and a "universal peer-to-peer or mesh net".
We're obviously a long way off from colonizing space and needing the Internet to spread, but we still have the physics problems here on Earth.
I'm not convinced that centralization in its current iteration (cloud operators controlling huge infrastructures) is the best in the long run. As we saw with the recent Azure outage in South Central US, even huge infrastructure can have problems too.
Secure decentralization has seemed like a panacea for a long time - for all things that resemble a public utility. Even things like the power grid.
From a conference I attended years ago, the key motivator is that request/response networking is pretty much already broken when we get to the moon; once Mars comes into the picture, it'll be even more so. When each TCP packet requires an ACK, and an ACK takes minutes to arrive... things break down a bit.
So, the idea behind IPFS and others (SSB comes to mind, except, yacht-themed) is that it's largely a collection of offline networks, and when the planets align -- quite literally -- those networks will exchange all their new blocks.
Each TCP packet does not require its own ACK, at least not in a "receive packet, send ACK" lockstep. Acknowledgements are cumulative, with selective ACK (SACK) ranges covering out-of-order segments, and the amount of unacknowledged data in flight is bounded by your transmission window (which TCP stacks already tune based on your RTT). Can't see how you'd get rid of that entirely if you're looking for a reliable real-time transport medium.
Additionally, you can layer in forward error correction (typically below or alongside the transport) to reduce the effective packet loss from the physical medium.
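To illustrate the FEC idea, the simplest possible scheme is a single XOR parity packet per group, which can repair exactly one lost packet without any retransmission; real systems would use something like Reed-Solomon or fountain codes instead. A minimal sketch (all names hypothetical):

```python
from functools import reduce

def xor_parity(packets):
    """XOR a group of equal-length packets into one parity packet."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets))

def recover(received, parity):
    """Repair at most one missing packet (None) in the group using the parity.

    XORing every surviving packet with the parity yields the lost one,
    since each byte cancels out except the missing packet's.
    """
    missing = [i for i, p in enumerate(received) if p is None]
    if not missing:
        return received
    assert len(missing) == 1, "a single XOR parity can only repair one loss"
    present = [p for p in received if p is not None]
    rebuilt = xor_parity(present + [parity])
    out = list(received)
    out[missing[0]] = rebuilt
    return out
```

The sender ships the parity alongside the group; the receiver only needs it when exactly one packet of the group is lost.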
None of that really matters if Mars is on the other side of the Sun from Earth. You'd need relay satellites to route the signal around the Sun, and even at the speed of light that's going to take tens of minutes, one way.
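A rough back-of-the-envelope sketch of those delays (the distances here are approximate):

```python
C_KM_S = 299_792  # speed of light, km/s

def one_way_delay_minutes(distance_km):
    """One-way signal delay in minutes at light speed."""
    return distance_km / C_KM_S / 60

# Earth-Mars distance varies enormously with orbital position:
closest = one_way_delay_minutes(54_600_000)    # ~opposition: about 3 minutes
farthest = one_way_delay_minutes(401_000_000)  # ~conjunction: about 22 minutes
```

So a single TCP-style round trip is between roughly 6 and 45 minutes, before accounting for any relay hops around the Sun.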
As the parent comments suggest, even at Earth-Moon distances we need to completely rethink things.
My guess is we'll just have local datacenters on Mars for the big services (Google, Netflix, etc.), and then more websites will use services like Cloudflare so they can get their sites cached on Mars. AWS will eventually have a Mars datacenter. No need for IPFS.
This papers over the whole "relying on large cloud providers" issue in the first place. A decentralized system would ensure that websites don't need to rely on large, centralized powers beyond core infrastructure providers (which we have to fight to keep neutral, à la Net Neutrality), avoiding concentration of power in a select few.
While I agree with your assessment, it reminds me of the "flying horse carriage" view of the future. I wouldn't be surprised if multiplanetary hosting changed things more fundamentally.
You will have to attend community tournaments and compete locally for the chance to play against a Mars team over a high-bandwidth satellite array, possibly on a lunar base, possibly only during seasons of opposition between Earth and Mars.
Even then, you will be playing on a specially-modified version of the game that disables server-side anticheat systems, instead relying on human referees.
I'm not a networking expert, but I think that with extreme delays you would not want a real-time transport medium at all. The higher-level protocols would be built on top of a non-real-time substrate. You'd want so much forward error correction that you could remove ACKs completely (a higher-level protocol could still request retransmission in the rare case of failure, e.g. one could request a web page be resent via HTTP, but the hypothetical TCP replacement under that would have no concept of an ACK).
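A sketch of what the receiving end of such a hypothetical ACK-less transport might look like: data flows one way, and only gaps trigger an occasional retransmission request. All names here are made up for illustration:

```python
class NackReceiver:
    """Receiver for a hypothetical ACK-less transport.

    Nothing is acknowledged; after the burst should have arrived,
    the receiver sends a NACK list only if chunks are actually missing.
    """

    def __init__(self, total_chunks):
        self.total = total_chunks
        self.chunks = {}  # seq -> payload

    def on_chunk(self, seq, data):
        """Store an incoming chunk; duplicates simply overwrite."""
        self.chunks[seq] = data

    def nack_list(self):
        """Sequence numbers to request again (sent only if non-empty)."""
        return [i for i in range(self.total) if i not in self.chunks]

    def assemble(self):
        """Reassemble the payload once every chunk has arrived."""
        assert not self.nack_list(), "still waiting on retransmissions"
        return b"".join(self.chunks[i] for i in range(self.total))
```

With heavy FEC at the layer below, the NACK path would fire rarely, so the multi-minute round trip only costs you in the exceptional case.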
"A Fire Upon the Deep" describes an interstellar communication system that is very email/NNTP/FidoNet-like. I'm afraid it's more realistic than we could imagine.
It was written at the peak of those systems (the Eternal September happened a year after the book was published), so there's no surprise there.
But I think it's not particularly accurate, in that, while latency would be extreme, that doesn't necessarily translate to low bandwidth - and bandwidth constraints shaped Usenet and especially FidoNet as much as latency did.
I think a more likely primary mode of operation would be WWW-like, but only your local part is actually real-time; everything else is synced in bulk as and when possible, with some creative approaches to update conflicts for writable resources.
On a related note, I'm wondering how distributed systems that rely on atomic clocks (e.g. Google Spanner) would work in the space era, given that relativity says that there's no such thing as a global clock.
They can still work, with a few changes and worse performance.
The clock is not used to say that the time is exactly the same on all nodes; it is used to guarantee that if two events have timestamps whose difference is larger than some threshold, they can be ordered reliably. You don't need an atomic clock for that (CockroachDB, for instance, only requires NTP), but of course, the smaller the error margin, the faster the system is.
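That threshold idea fits in a few lines - this is a toy version of the principle, not Spanner's or CockroachDB's actual logic:

```python
def definitely_before(ts_a, ts_b, max_clock_error):
    """Order two events by timestamp only when the gap exceeds the
    combined clock uncertainty of both nodes.

    Returns True if a is definitely before b, False if definitely after,
    and None when the timestamps are too close to order reliably.
    """
    if ts_b - ts_a > 2 * max_clock_error:
        return True
    if ts_a - ts_b > 2 * max_clock_error:
        return False
    return None  # within the error margin: cannot order reliably
```

The smaller `max_clock_error` is (microseconds with atomic clocks, milliseconds with NTP), the fewer `None` cases you hit, which is exactly why tighter clocks make these systems faster.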
That being said, given that speed will be limited by the distance traveled by information and the speed of light, I suppose those systems won't have much edge over purely causality-based ones. In other words, CRDTs will rule Space :)
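As a concrete toy example of a causality-friendly structure, a grow-only counter CRDT converges no matter how delayed or reordered the syncs are - a minimal sketch, not any particular library's API:

```python
class GCounter:
    """Grow-only counter CRDT: each replica increments only its own slot;
    merge takes the per-replica max, so merges commute, are idempotent,
    and converge regardless of delivery order or (interplanetary) delay."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())
```

Two replicas can diverge for the full 22-minute light delay and still agree after one exchange in each direction, with no coordination round trips at all.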
Not an expert by any means, but I would imagine you simply prefix your time with a specific "large-scale time zone" that you are moving in, e.g. it could be Earth, another planet, or your spacecraft. Wouldn't solve it completely, but it seems like a pragmatic solution that could work alright for most cases.
However, a total ordering of events seems plausible only from the relative perspective of an observer, and we would need to figure out how things like transfer duration affect each observer's understanding of ordering.
Not a physicist but isn't it the case that any two observers can still compute at what time the other person perceived any given event, if they know each other's history of travel and the history of travel of the event?
So you would just have to agree that one observer's clock is the "master clock", and then everyone translates their local clock time into the corresponding master clock time (and all timestamps are written with respect to the 'time zone' of the master clock).
I suppose those continue working fine in very local systems (contained in a ball of a few light-seconds radius) whose components move at speeds where relativistic effects can be discarded. Drop those constraints and you also need to drop even system-local globality because of relativity.
You _could_ introduce the One True Lamport Clock, and as long as you're in its light cone you can get global synchronization, but that comes at the cost of having to learn a lot about patience.
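For reference, the classic Lamport clock needs no physical time at all, just a counter bumped on every event and merged on receive - a textbook sketch:

```python
class LamportClock:
    """Logical clock: gives a causality-consistent ordering of events
    without any reference to physical time or relativity."""

    def __init__(self):
        self.time = 0

    def tick(self):
        """Advance for a local event and return the new timestamp."""
        self.time += 1
        return self.time

    def send(self):
        """Timestamp to attach to an outgoing message."""
        return self.tick()

    def receive(self, msg_time):
        """Merge an incoming message's timestamp: jump past it."""
        self.time = max(self.time, msg_time) + 1
        return self.time
```

A message stamped on Earth and received on Mars twenty minutes later still gets a strictly larger timestamp, which is all causal ordering requires.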
Quantum entanglement seems poised to provide real-time interstellar communication.
That makes for good science fiction, because Quantum Mechanics is so poorly understood by most people, but it’s in no way possible or implied by the theory. Any entangled channel of communication would appear to be random noise without a Classical channel of communication, which effectively limits entanglement to light speed.
Alice therefore still measures two overlapping bell curves, overall! Where are the interference patterns?! That is very simple: when Bob and Alice compare their measurements in the first case, Bob's 0-measurement can be used to "filter" Alice's patterns...
That comparison is what requires the Classical channel, and we’re back to light speed. If you try to use a Quantum channel to compare you just have two things to compare and a lot of noise.
I think the best way forward would be satellite-based Internet, where people can tune their dish antennas to the sky and get Internet. That would be interruption-free and decentralised. I have very little faith in Bitcoin mining operations being repurposed for decentralised Internet; though in theory they have the compute nodes, nobody is free from the Internet service providers. A decentralised network built on the existing network can never be totally free or decentralised.