IP doesn't require connection state. If you want resilience in the face of loss you need to add more, like holding the data until a transmission has been acknowledged as received. TCP does that fine for modest packet loss and short outages.
For longer timescales, I don't think you want some sort of store-and-forward baked into the normal network protocols, because meaning decays with time. Imagine I want to watch a movie, but the service is down for a week, so I request it every night. The network stores those 7 requests and delivers them when the movie service comes up again. Do I really want to get 7 copies of a movie I may have given up on? Or even one? That question isn't resolvable at the network layer; it requires application level knowledge. Better to let application logic handle it.
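To make that layering argument concrete, here's a rough sketch of what application-level handling could look like: a hypothetical client that records interest per item instead of queuing raw requests, so a week of nightly retries yields at most one delivery once the service returns. All names here are made up for illustration.

```python
import time

class RequestQueue:
    """Hypothetical app-level request coalescing: duplicates collapse by key."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.pending = {}  # key -> timestamp of most recent request

    def request(self, key, now=None):
        # Record interest in `key`; a repeat just refreshes the timestamp.
        self.pending[key] = now if now is not None else time.time()

    def due(self, now=None):
        # Return keys still wanted: recent enough that the user likely cares.
        now = now if now is not None else time.time()
        return [k for k, t in self.pending.items() if now - t <= self.ttl]

q = RequestQueue(ttl_seconds=3 * 86400)  # give up on requests older than 3 days
for night in range(7):                   # the same movie requested 7 nights running
    q.request("movie:some-title", now=night * 86400)
print(q.due(now=7 * 86400))              # one entry, not seven
```

Only the application knows that seven requests for the same movie mean one intent, and that intent expires; a network-layer store-and-forward queue can't know either.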
I agree with your concern about government control, but I don't think the IP layer is a place to address it. I think the work around ad-hoc and mesh networks is much more interesting there. That would also drive more resilient applications, but I think that can be built on top of IP just fine.
Consider a society that has left Earth. If light takes minutes to travel between locations, is a chatty protocol what you want?
Or do you need a whole host of different application-level solutions to this problem?
Here's a different lens to look at it through. There's an old tech talk that got me out of the IP-shaped box a lot of us are trapped in: https://www.youtube.com/watch?v=gqGEMQveoqg
I got Gemini 1.5 Pro to summarize the transcript, which I butchered a bit to fit in a text box:
The Problem:
The Internet is treated as a binary concept: You're either connected or not, leading to issues with mobile devices and seamless connectivity.
The focus on connections (TCP/IP conversations) hinders security: Security is tied to the pipe, not the data, making it difficult to address issues like spam and content integrity.
Inefficient use of resources: Broadcasting and multi-point communication are inefficient as the network is unaware of content, leading to redundant data transmission.
The Proposed Solution:
Data-centric architecture: Data is identified by name, not location. This enables:
Trust and integrity based on data itself, not the source.
Efficient multi-point communication and broadcasting.
Seamless mobility and intermittent connectivity.
Improved security against spam, phishing, and content manipulation.
Key principles:
Data integrity and trust are derived from the data itself through cryptographic signatures and verification.
Immutable content with versioning and updates through supersession.
Data segmentation for efficient transmission and user control.
Challenges:
Designing robust incentive structures for content sharing.
Mitigating risks of malicious content and freeloaders.
This all is correct, but it's not a reason to abandon IP. It's a reason to understand its place.
Currently IP does a good job of isolating the application layer from specifics of the medium (fiber, Ethernet, WiFi, LoRa, carrier pigeons) and provides a way of addressing one or multiple recipients. It works best for links of low latency and high availability.
To my mind, other concerns belong to higher levels. Congestion control belongs to TCP (or SCTP, or some other protocol); same for store-and-forward. Encryption belongs to Wireguard, TLS, etc. Authentication and authorization belongs to levels above.
Equally, higher-level protocols could use other transport than IP. HTTP can run over plain TCP, TLS, SPDY, QUIC or whatnot. Email can use SMTP or, say, UUCP. Git can use TCP or email.
Equally, your interplanetary communication will not speak IP packets at the application level, any more than it would speak Ethernet frames. Say, a well-made IM / email protocol would work seamlessly over both ultrafast immediate-delivery networks and slow store-and-forward links. It would have a message-routing mechanism that can deliver a message to the next desk in an IP packet, or to another planet via a comm satellite that rises above your horizon to pick up and store the message, then delivers it as it passes over the other side of the planet.
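A rough sketch of what that link-agnosticism could look like: the sending code is identical whether the next hop delivers instantly or carries the message for hours. All class and method names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Message:
    recipient: str
    body: str

class InstantLink:
    """Next desk over: an IP packet arrives essentially immediately."""
    def __init__(self):
        self.inbox = []

    def send(self, msg: Message):
        self.inbox.append(msg)

class SatelliteLink:
    """Store-and-forward: held until the satellite passes the destination."""
    def __init__(self):
        self.held = []
        self.inbox = []

    def send(self, msg: Message):
        self.held.append(msg)           # carried aboard until next contact

    def pass_over_destination(self):
        self.inbox.extend(self.held)    # delivered during the flyover
        self.held.clear()

# The application-level protocol doesn't care which link it got; only the
# delivery timescale differs.
for link in (InstantLink(), SatelliteLink()):
    link.send(Message("mars://alice", "hello"))
```

The point is that the message format and routing decisions live above the link, so swapping an IP hop for a satellite hop changes latency, not the protocol.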
The point about data being identified by name and not location is quite strong I think. It pushes concerns that lots of applications would have under conditions of high latency and intermittent connectivity to the network itself, rather than having to be solved repeatedly with minor differences for every application. Encryption and authentication wouldn't belong to higher levels and I think that's right.
I absolutely agree it's not a replacement for IP and that IP has its place. The point rather is to shift one's perspective on what IP is, the implicit constraints it has, and how under different constraints a very different model would be useful and IP would be a lot less useful there. Applications would not be dealing with symbolic aliases for numeric network locations because that wouldn't work.
Names need namespaces, else they cannot be unique enough. IPv6 is one such namespace; DNS (on top of IP) is another; email addresses (on top of DNS) are yet another. These are hierarchical; the namespace of torrent magnet links is flat, and still works fine on top of an ever-changing sea of IP addresses. We already have mechanisms for mapping between namespaces like that, and should reuse them.
I don't think IP is going to be outright replaced by other transport-level protocols right at the Internet's "waist", but it can be complemented with other protocols there, still keeping the waist narrow.
Consider the case where you've got a computer aboard a delivery truck and it's hopping on and off home wifi networks as it moves through the neighborhood. From prior experience it knows which topics it is likely to encounter publishers for, and which it is likely to encounter subscribers for. There's a place for some logic--it's not precisely routing, but it's analogous to routing--which governs when it should forget certain items once storage is saturated for that topic.
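A minimal sketch of that forgetting logic, assuming a simple per-topic store where items seen again are kept fresh and the stalest are dropped first. Names are hypothetical; a real node might also weight items by how often it expects to meet subscribers for them.

```python
from collections import OrderedDict

class TopicStore:
    """Per-topic cache for a carrier node; stalest items are forgotten first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # item_id -> payload, oldest first

    def add(self, item_id, payload):
        if item_id in self.items:
            self.items.move_to_end(item_id)   # seen again: keep it fresh
            return
        self.items[item_id] = payload
        while len(self.items) > self.capacity:
            self.items.popitem(last=False)    # saturated: forget the stalest

store = TopicStore(capacity=2)
for item in ["hours:a", "hours:b", "hours:c"]:
    store.add(item, payload=None)
print(list(store.items))   # 'hours:a' has been forgotten
```

It's not routing, as you say, but it plays the analogous role: deciding which data is worth carrying toward the people likely to want it.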
IP is pretty much useless here because both the truck (as a carrier) and the people (as sources/sinks), end up with different addresses when they're in different places. You'd want something that addresses content and people such that the data ends up nearest the people interested in it.
It's an example of a protocol which would be in the waist, were it not so thin.
The computer aboard the delivery truck can just broadcast every time it hops on to a new access point?
It might not have the authority to broadcast via every access point, so it will likely be very circumscribed, but that's just a question of the relative rank and authority between the truck operators, access point operators, routing layers, etc. It's not a question of the technology.
Since even several round trips of handshaking across the Earth take only a few hundred milliseconds, and the truck presumably spends at least a few seconds in range.
> even several round trips of handshaking across the Earth takes only a few hundred milliseconds
then I'm not sure why you'd bother using the delivery truck as data transport anyhow.
I'm more interested in the case where such infrastructure is either damaged, untrustworthy, or was never there in the first place. If there was a fallback which worked, even if it wasn't shiny, there'd be something to use if the original went bad for some reason.
I took from your description that these broadcasts were being forwarded by other parties around the planet. But in the alternate reality I'm trying to sketch, the one where something quite different from IP sits in the thin waist of our protocol stack, something content-addressed rather than machine-addressed, nodes only pick up data from their peers if they're interested in it for some reason. If you've got a network of people so motivated to share what you have to say that it shows up on the other side of the planet near instantly, with no time spent validating it for accuracy or whatever other propagation criteria are relevant for that data, then you're likely a very influential person. That's the unlikely situation I was talking about.
No, I anticipate that it could take days or weeks for a given message to hitch a ride all the way around the world. It would have to be rather important for so many people to decide to allocate space on their devices for it. (Whether that's automated or involves a human in the loop would be an app-specific detail.)
Of course you could sort of fake locality by piping it through the traditional internet, but the point is to design something (an alternative to IP) that would be resilient even when the traditional internet has failed so using the traditional internet is sort of cheating.
As far as access... I'm not really sure what that means if we're addressing content and not devices. A node isn't going to pick up some data unless it finds that data to be interesting for some reason--likely because it bears a signature from a trusted individual and the cryptography checks out re: its not having been tampered with. If you're assessing trustworthiness based on content there's no need to scrutinize the device you got it from.
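A toy sketch of that idea: the data's id is a hash of its bytes, and trust comes from a signature over the content, so it doesn't matter which device handed it over. Here HMAC with a shared secret stands in for a real public-key signature, purely to keep the example stdlib-only.

```python
import hashlib
import hmac

TRUSTED_KEY = b"alice-shared-secret"   # stand-in for Alice's public key

def content_id(data: bytes) -> str:
    # The data is addressed by what it is, not where it lives.
    return hashlib.sha256(data).hexdigest()

def sign(data: bytes, key: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def accept(data: bytes, claimed_id: str, signature: bytes) -> bool:
    """Accept data from *any* peer: only the hash and signature matter."""
    if content_id(data) != claimed_id:
        return False                    # tampered with or mislabeled
    return hmac.compare_digest(signature, sign(data, TRUSTED_KEY))

msg = b"store hours: closed sundays"
cid, sig = content_id(msg), sign(msg, TRUSTED_KEY)
print(accept(msg, cid, sig))            # True, no matter who handed it over
print(accept(b"tampered", cid, sig))    # False
```

Since validity is checked against the content itself, the device you fetched it from never enters into the trust decision.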
Also I apologize I haven't been making a tremendous amount of sense. I think I've maybe had COVID brain for a few days and only just now have I become fully aware of how impaired I am.
Why would anyone adopt this system, even as a backup, when the postal system already serves as a backup communications network? And has already proven its performance?
And in such an extreme scenario I imagine roughly no adults will care about any ‘content’ at all; food, water, and shelter would overwhelm all other nice-to-haves.
I think you lost me at the point where the truck was hopping on and off home wifi networks. That doesn't really match with our current notion of local networks as security contexts. I'm also not clear about exactly who the truck would be talking with here, or what it would have to say. Maybe you can expand on that part?
Well, in the world that grew up around IP addresses, in our world, you need to have security contexts like that because somebody can just reach in from anywhere and do something bad to you. But if I try to envision an alternative... one where we're not addressing machines, then I figure we're probably working in terms of users (identified by public key) and data (identified by hash).
In this world security looks a bit different. You're not open to remote action by third parties, they don't know where to reach you unless they're already in the room with you. Instead you've got to discover some peers and see if they have any data that you want. Then the game becomes configuring the machines so that data sort of... diffuses in the appropriate direction based on what users are interested in. It would be worse in many ways, but better in a few others.
So suppose the whole neighborhood, and the delivery driver, all subscribe to a topic called "business hours". One neighbor owns a business and recently decided that it's closed on Sundays. So at first there's just one machine in the neighborhood with the updated info, and everybody else has it wrong. But then the driver swoops by, and since they're subscribed to the same topic as the homeowners, their nodes run some topic-specific sync function. The new hours are signed by the business owner with a timestamp that's newer than the previous entry's, so the sync function updates that entry on the driver's device. Everyone else who runs this protocol and subscribes to this topic has the same experience, with the newer, more authoritative data overwriting the older, staler data as the driver nears their house. But at no point does the data have a sender and a receiver separated by more than a single hop, and we trust or distrust data based on who signed it, not based on where we found it.
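That sync function could be as simple as a last-writer-wins merge over signed, timestamped entries. A sketch, assuming signature verification already happened when each entry was received:

```python
def sync(mine: dict, theirs: dict) -> dict:
    """Merge two nodes' views of a topic; each entry is (timestamp, value).
    The newest authoritative write wins, so data diffuses hop by hop."""
    merged = dict(mine)
    for key, (ts, value) in theirs.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

homeowner = {"corner-store": (100, "open sundays")}
driver    = {"corner-store": (200, "closed sundays")}   # newer, signed by owner
print(sync(homeowner, driver))   # the stale entry is overwritten
```

Because the merge is commutative, it doesn't matter who syncs with whom first; every encounter moves each node closer to the newest signed state.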
I have an application in mind that I think would run well on it, but because our world has crystallized around point-to-point, machine-addressed networking, like a million tiny phone calls, it feels like a pretty big lift, whereas innovating at other layers in the stack feels much easier--a consequence of the thin waist.
I guess I'm not persuaded that a system like you describe wouldn't have its own lower layers serving equivalent functions to IP. Meanwhile, what you describe sounds plausible to implement as application-layer software that would work on a wide variety of raw network implementations.