Too bad that browsers are deprecating both RSS (e.g., Firefox removed feed rendering support) and HTTP (e.g., Firefox pushing HTTPS-only). And HTTP/3 isn't even TCP anymore; it uses the Google-originated QUIC over UDP.
For the corporate web RSS and HTTP are dead. But as a non-corporate human person I'll be sure to keep RSS and HTTP alive on my webservers.
On RSS I'd agree, but I think you've missed the mark on HTTPS.
HTTP-over-TLS works fine across all "versions" of HTTP, HTTPS has been supported by browsers since 1994; over half a decade before RSS was even a thing.
Your criticisms of HTTPS seem more targeted at HTTP3, but that's actually a separate topic (HTTP3 may be HTTPS-only in practice but that's not the specific aspect you're critiquing).
HTTP still works practically as-is on devices from about 25 years ago.
HTTPS does not even work on devices from 5 years ago.
After non-removable batteries, SSL/TLS implementations have been the single largest headache when you want to keep using a device for more than a couple of years.
I presume when you say "device" you actually mean software. HTTPS can absolutely run on devices from 25 years ago.
Expecting network-connected clients to work securely without software updates for even a few months these days though isn't really possible, but that's down to the externalities of the world and has nothing to do with spec design.
The mechanism by which https rendered the old device inoperable does not change the fact that https rendered the old device inoperable.
One may argue that the problem is essentially unavoidable because the alternative is untenable (delivering any sort of communication without both encryption and authentication to ensure the veracity of the data) but that doesn't make the observation any less true. https is in fact a pain in the ass and introduces a lot of overhead and grief and shortened the useful life of countless things, relative to the time before https.
> The mechanism by which https rendered the old device inoperable does not change the fact that https rendered the old device inoperable
This is one of those many cases on the internet where people make an unfounded statement and then just move on as if it's accepted as true. How does the mechanism not change the fact???
The mechanism in this case is outdated software; upgrading or changing the software on the device makes the device operable (you absolutely can install modern TLS on Windows XP, alternatively you could also install a modern OS on most devices that came with Windows XP). That fact seems to make the mechanism extremely relevant in whether the device is operable.
You seem to be redefining "built to last" to encompass backward compatibility, which is an odd definition, but let's go with it.
The only aspect of HTTPS that isn't backward compatible in the way you're talking about is, effectively, the algorithms keeping up with modern attack vectors. That's not an aspect of HTTPS the spec, it's an aspect of the arms race in network security. So pinning this issue on HTTPS is odd.
The reason you can't use outdated software to connect to modern servers isn't due to protocol design, it's due to threat actors.
> You seem to be redefining "built to last" to encompass backward compatibility, which is an odd definition, but let's go with it.
What other definition is there? A protocol that has absolutely no backward compatibility, not even with itself, and by design, is "built to last"? Or a protocol that frequently rewrites its core algorithms in its base specs is "built to last"?
> That's not an aspect of HTTPS the spec, it's an aspect of the arms race in network security.
It is an aspect of the spec. There is literally no HTTPS without SSL/TLS. If there was, you could have an argument there, but: the very definition of HTTPS is "HTTP + SSL".
Really, there is no contest. Ethernet, IP, and HTTP still allow an unmodified client from 25 years ago to work with practically full functionality. HTTPS does not. Whether this is because HTTPS has more ambitious goals (which it obviously has), or because the designers were smoking something better, or because computer security requires an arms race (that's another debate), the effective result is that HTTPS is not built to last.
Everything you build on top of it will require continuous rearchitecting every couple of years. And I'm not talking about changing some certificates or the like. I am talking spec changes. I'm talking most of the algorithms having changed significantly.
Or maybe TLS 1.3 will be the good one this time. Who knows.
It would certainly be quite the assertion to interpret "a 25-year-old device" as meaning "any 25-year-old device", but doing so would be obviously ridiculous.
HTTP over TLS does not work fine for all versions of HTTP. A fully updated install of Windows XP is essentially incapable of establishing a modern TLS connection because everyone removed support for older crypto. In practice, modern TLS is a crypto treadmill that ejects older devices as soon as they can no longer keep up with security updates.
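For what it's worth, the cutoff is explicit in modern TLS stacks: the version floor is a one-line server setting. A minimal Python sketch (the floor of TLS 1.2 is chosen for illustration):

```python
import ssl

# A modern TLS server context with an explicit protocol floor. Clients that
# only speak SSLv3 or TLS 1.0/1.1 (e.g. stock XP-era software) are refused
# during the handshake, regardless of what HTTP version rides on top.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

One setting like this on the server side is all it takes to eject every client whose crypto stack stopped updating.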
I'm absolutely not trolling. I regularly have to use an old XP machine for various tasks, and as long as it's firewalled properly (preferably only allows traffic to a pre-set list of servers) and the security surface is reduced by removing unused services, it shouldn't be too risky to use for special-purpose tasks.
General-purpose browsing is a whole different matter. I wouldn't advise anyone to do that unless on a patched, up to date and supported machine.
I take it that it may be difficult to build for Windows, but that's what stunnel [1] is for. If you can do that and proxy your outbound traffic you should at least be able to communicate with modern crypto. Opening up an old XP install to the general internet though... is an endeavor one should probably be very careful about.
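For the sake of illustration, a hypothetical stunnel client-mode config along those lines (addresses and ports are placeholders): the old machine speaks plain HTTP to a local port, and stunnel wraps it in modern TLS toward the real server.

```ini
; Hypothetical stunnel.conf fragment (client mode).
; The XP box talks plaintext to 127.0.0.1:8443; stunnel does modern TLS.
[wrap-https]
client = yes
accept = 127.0.0.1:8443
connect = example.com:443
```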
It's definitely possible, and it's what I would do if I was doing this for commercial purposes. I am merely a hobbyist in these things, so the effort isn't justified.
Setting up a firewall with a small strict set of servers to allow traffic to seems a lot more restrictive and complicated to set up than simply installing Firefox or some other solution that supports modern TLS.
Either way though, I do personally think it's fairly reasonable for server admins to maintain webservers that target "General-purpose browsing" without also having to include support for odd individual long-tail edge-cases of people running restrictively firewalled instances of outdated insecure OSes.
Firefox is no longer updated on such older OSes, and besides, the client application you're using may be abandonware; after all, it's not only browsers that use TLS, but also email clients and so on. The only option is to retrofit a newer TLS engine onto a proprietary OS (ha ha), or to strip the encryption entirely via a trusted MITM so the application can connect at all.
> I do personally think it's fairly reasonable for server admins to maintain webservers that target "General-purpose browsing" without also having to include support for odd individual long-tail edge-cases
Old software speaking to other old software or hardware. The only way to get the entire thing working without spending lots of time on it is to replicate a "period-correct" environment inside a VM. Modern OSes may support a subset of the functionality, but definitely not all of it.
If it is behind a NAT and only browsing trusted sites over trusted ISP, the risk of using even plain HTTP is pretty low.
Many enjoy and find comforting the older OSes they're used to.
Research even shows that reliving sensory experiences from a younger age can have a measurable positive effect on a person's physiology and health.
I think the general trend of sacrificing everything to so-called security (which in reality just means being owned by the root certificate holders, all several hundred of them, and all their employees and affiliates), even for low-risk, low-value targets like someone who wants to blog with IE3, is overrated and stupid, and will be disposed of once we feel more sure of the security of our underlying network...
That's true, I would ordinarily agree. But there are certain edge cases where communication outside of that bubble to select servers is needed, and that's where the encryption problems are back on stage.
If I can't establish a TLS connection, I can't get at the HTTP that's flowing over the encrypted pipe. It's pointless to split up HTTP and TLS/SSL when they were always deployed in lockstep anyway (unless you were happy with plaintext, or BYO encryption).
If we imagine the HTTP+plaintext at one extreme, and HTTP+TLSv$latest at the other, a whole bunch of specs in the middle have effectively died out because nobody deploys them any more.
Therefore, it's unfair to claim that "HTTP-over-TLS works fine across all versions of HTTP" (because it doesn't, unless you also support modern crypto), or that "HTTPS has been supported by browsers since 1994" (because the 1994 HTTPS is not even partly forwards-compatible with the 2022 HTTPS).
I guess I should be more specific in my wording: I should have said "advocating for specific older specs" (i.e. HTTP).
Not all older specs are worth advocating for.
Security is unfortunately an arms race, which means that broadly speaking software-updates are important for anything network-connected. Using old hardware is thankfully still possible, but the idea of expecting a 12-year-old piece of software to securely connect you to the internet today is disconnected from the reality of the threat landscape we live in.
RSS is just an XML format, and I don't think in-browser support was what was holding it together. There's still plenty of readers, and there's nothing stopping other readers from popping up.
It'd be like saying Firefox is deprecating JSON because it doesn't have a nice viewer for it...
They also removed the little RSS icon that shows up in the URL bar on autodiscovery of RSS feeds via HTML head entries. That hurts RSS far more than removing the rendering did. It's part of a series of intentional moves to deprecate RSS and Atom.
I think that QUIC and HTTP/3 are great, but I worry that HTTP/1.1 will eventually be abandoned. I think it's important to be able to implement basic HTTP functionality with a few lines of code.
> I worry that HTTP/1.1 will eventually be abandoned.
HTTP/2? Sure. HTTP/1.1? Not a chance. It's too widespread and too valuable, and the floor is too low. Unless Google starts a singular effort against it and manages to pull in the other big cos while avoiding the ire of the liberal states, which seems unlikely.
Yes, headers in text plus body in any content type is incredibly simple and flexible. It’s hard to imagine redesigning HTTP/1 to be any more straightforward.
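To make that simplicity concrete, here's a sketch of the HTTP/1.1 wire format built and re-parsed entirely in stdlib Python (the hostname and path are placeholders):

```python
# An HTTP/1.1 request is just a request line, text headers, a blank line,
# then an optional body. Building one is string concatenation:
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")

# Parsing is just as simple: split headers from body at the blank line.
head, _, body = request.partition(b"\r\n\r\n")
request_line, *header_lines = head.decode("ascii").split("\r\n")
headers = dict(h.split(": ", 1) for h in header_lines)
```

That's the whole framing layer; compare that to implementing QUIC's packet encryption and stream multiplexing from scratch.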
I sincerely hope you're right. I certainly intend to always support it on my services. But it's all too easy for me to imagine a world where Chrome/Blink drops support.
RSS is still the predominant format for syndicating podcasts. I think it's come under attack by big players looking to build a walled garden (Spotify and Apple in particular), but it's hanging on.
HTTP 1 will outlive the death of HTTP 3, QUIC, and Google. New standards come and go, old standards exist forever. RSS will probably make a comeback eventually: The ability to automatically process information is a common wheel we reinvent.
It's true, and especially HTTP/1.1 has a whole 25 years worth of browsers which are still usable today!
I think as the nostalgia/retro factor kicks in, it will become cool again to support "Any Browser".
A few common questions about this are:
Security exploits: Yes, you do have to do this on a safe network, but at least where I am, the Internet is safe enough to visit non-malicious websites, perhaps from inside a VM.
The difficulty of it: It's moderately difficult, but quite doable, with certain constraints combined with progressive enhancement.
(HTTP/1.0 browsers are slightly more difficult to support, because you need a dedicated IP address, as there is no Host: header yet. Netscape 1.x, IE 1.x and 2.x, Mosaic are quite usable if you can get this set up.)
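The Host: header is what lets one IP address serve many sites; a hypothetical name-based dispatch sketch (site names and paths are invented) shows why its absence in HTTP/1.0 forces a dedicated IP per site:

```python
# Hypothetical virtual-hosting table: hostname -> document root.
SITES = {"blog.example": "/var/www/blog", "shop.example": "/var/www/shop"}

def docroot_for(request_lines):
    """Pick a site by the Host: header of a parsed request."""
    for line in request_lines:
        if line.lower().startswith("host:"):
            return SITES.get(line.split(":", 1)[1].strip())
    # HTTP/1.0 request: no Host header, so the server cannot tell which
    # site was meant -- hence one IP address per site in that era.
    return None
```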
Even worse, QUIC embeds a layering violation into the protocol. It can be OK as a web browser-to-web server protocol to minmax the TTB, but it cannot be a protocol to build new applications on.
QUIC was brought to the IETF for standardization, and they decided to separate the transport protocol from the application protocol, potentially supporting other application protocols over QUIC in the future; as a result, HTTP/3 became a separate spec.
Anything built on domain names cannot be described as "built to last." Not HTTP, not SMTP, not Gopher, not DNS, not the Fediverse. This is because domain names are rented out and have expiration times attached, so obviously they can only last as long as both the organization behind the domain and the registrar keep them there.
IRC and USENET were built to last. Names in either network weren't tied to anyone but the collective "network." Neither network gets used much today, since names aren't tied to anyone in particular. It turns out that globally writable data stores are great vehicles for spam and fraud.
Content-addressed systems like BitTorrent and I2P can theoretically maintain content availability for as long as anybody wants to keep it available, not just whoever originally published it. BitTorrent is also pretty secure, but it's not truly fair since it's an immutable data store, and all the spam and fraud is just outsourced to HTTP instead of being eliminated entirely.
While I 100% agree on your critique of DNS, there's two small issues:
1. The protocols you list are not strictly married to DNS in any way, e.g. https://<ip-address> works fine. Granted, it's not very practical to use right now, but there's nothing in the HTTP spec preventing things from working without DNS.
2. DNS isn't technically married to ICANN either. There are alternative domain name assignment proposals and systems that use the same protocols without ICANN. Again, not super practical today but theoretically possible to use.
So the fact that these systems are all so loosely coupled makes them pretty resilient and very much "built to last". Moving away from ICANN may be hampered by the inertia of a gargantuan network effect, but that's not any bigger than the inertia of moving off HTTP completely.
> theoretically maintain content availability for as long as anybody wants to keep it available, not just whoever originally published it
I'd argue that this is probably as good as we can get. Even if something is built-to-last, a person can still throw something away (and indeed many people find good stuff in dumpsters!) If nobody is willing to pay the cost to actually host something, then it goes "into the trash" so to speak. I realize that physical goods last even in the trash, but that's not something a digital good can ever be unless it's serialized to some form of storage.
VPRI (of Alan Kay fame) has published an interesting paper [1] trying to create a Cuneiform Tablet of code.
I was actually thinking of Freenode - while someone may have been able to "hijack" the domain and even the name Freenode, I'd argue that the network itself has outlived both of those.
> I’d argue that the web, more specifically http, offered up the last truly new paradigm in computing. It is a single, elegant standard for distributed systems. We should all take the time every now and then to think about the beauty, power, and simplicity of this standard.
What is really funny about that sentence is that the word "http" links to the Wikipedia article through href.li, in order to hide the Referer header, a built-in feature of HTTP/1.1 and the web. So much for elegance and simplicity when I am staring at a workaround using an external system for something as simple as a link.
So? There is an attribute to control that (referrerpolicy, or rel="noreferrer" on the link). Sure, the default has less privacy, but that has little to do with the transport; it's a property of the document spec.
If you're an Apple user, NetNewsWire [0] is back, open source, with iOS and macOS clients that sync with a variety of servers, now including zero-configuration iCloud sync (which is what I was waiting for).
Works great, fast, clean, simple. Highly recommended.
Disjoint set of platforms from the ones you mention in your second sentence, though.
If you are fine with self-hosting, then I recommend FreshRSS over TT-RSS. FreshRSS has support for RSS-Bridge, which makes it a powerful tool. I prefer FreshRSS because I can set it up in my XAMPP easily, whereas TT-RSS moved to Docker only. FreshRSS also provides a feed API for other RSS software to hook into, so you can point a mobile app directly at your FreshRSS instance.
I've used almost all of the readers, web and software based, and ended up settling on newsboat. You can set up a url handler script that throws links at appropriate viewers (like mpv for youtube videos, rtv for reddit threads) and it ends up being pretty seamless and fast. No mobile of course which I don't mind, but you could just export your feed list and use another rss reader for that if you really needed your feeds on your phone too.
I wrote an RSS reader named Brook that's open source, runs on Firefox, and keeps all your data local, so you don't need to rely on a service that might disappear.
I'm a happy customer of https://bazqux.com. It is a paid RSS reader that seems to be developed by one guy. The web UI is very good, and it's supported by Reeder, an iOS news aggregator app I use.
I haven't found one. The features I really need are:
- decent filters, portable across all clients (macos + iOS for me)
- reader mode
- shared read/unread/starred (i.e. keep even if read) state
- ad blocking works
I don't really like in-browser but could tolerate it if it could do all of the above
I use ReadKit on the Mac (has filters, but local to the client) and Reeder on iOS. I am using FeedWrangler as the back end; its filter language (at least as documented) is unfortunately inadequate for my needs. Otherwise it's been fine.
I used Thunderbird for a while, then I switched to newsblur (paid, but cheap) so that it would fetch articles even when my computer wasn't on and so that I could access it on my phone.
1. Inconsistent implementation of standards: The implemented versions of RSS and Atom out there make parsing more of an art than just throwing a library at it, as no RSS parsing library can handle all the edge cases. (The last time I tested, a few years ago, an hour-long sample from one of the RSS firehoses pulled up hundreds of custom namespaces and tag names.)
2. XML formatting: Consistently formatted, well-formed XML is never 100%, even from major news organizations. Embedded CDATA means parsing content is a quagmire of double escaping.
3. Inconsistent content: An RSS feed could have just the last few items that were updated, with just titles or links, or it could be literally all of the content of a blog, jammed into some 20MB+ text file, double escaped and simply enlarged after every update.
4. Inconsistent unique identifiers and HTTP header responses: Many sites will respond appropriately to requests with a 304 if there are no changes. Many will not. Many sites will give each RSS item a globally unique identifier, many will not. This forces every Reader to simply request the whole doc over and over again, comparing unique items with a blend of logic and magic.
5. Inconsistent support: Most sites that use RSS have no business model attached to it, so it's just sort of an afterthought and may be shut down at any time, and often is.
All this leads to: Massive amounts of wasted bandwidth as bots poll endlessly for updates, wasted processing time parsing unformatted or badly formatted content, wasted storage because of bad IDs and URLs, wasted effort on the user's part dealing with the inevitable errors, and wasted effort on the admin side dealing with an antiquated tech that should have gone away with MySpace.
RSS should be scrapped. Killed. Replaced. Forgotten.
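For reference, the conditional-request logic of point 4 is tiny when sites actually implement it; a hedged server-side sketch (validator values invented for illustration):

```python
# A well-behaved feed server compares the client's If-None-Match validator
# against the feed's current ETag and answers 304 (no body) on a match, so
# readers never re-download an unchanged document.
def respond(feed_etag, if_none_match):
    if if_none_match is not None and if_none_match == feed_etag:
        return 304, None                 # unchanged: headers only
    return 200, "<rss>...</rss>"         # changed, or client sent no validator
```

The waste described above comes from the many feeds where this comparison never happens and every poll returns the full document.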
You're entirely right, and yet I think there's still a place for it.
It all depends on what you want to use it for. For my use case, I built a feed reader that just needs to know the titles, URLs, and publication dates of articles. That let me build an RSS/Atom reader that lets me curate my own news feeds and delegates everything else to the normal browser.
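A reader that shallow really does fit in a few stdlib lines; a sketch with an invented feed for illustration:

```python
import xml.etree.ElementTree as ET

# A toy RSS 2.0 document standing in for a fetched feed.
FEED = """<rss version="2.0"><channel>
  <item><title>Post one</title><link>https://example.com/1</link>
        <pubDate>Mon, 02 May 2022 10:00:00 GMT</pubDate></item>
</channel></rss>"""

# Extract only title, link, and date; ignore every other element,
# which sidesteps most of the edge cases listed above.
items = [
    (i.findtext("title"), i.findtext("link"), i.findtext("pubDate"))
    for i in ET.fromstring(FEED).iter("item")
]
```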
Are there inconsistencies and broken feeds? Sure, a few, but for my purposes I can ignore 99% of that. Is it wasteful? Sure, it could be more efficient, but honestly, downloading React and a hundred dependencies every time I visit a webpage is wasteful as well.
Fear not though, many sites built with tools like Gatsby aren't including RSS feeds, so your wishes may still come true.
Your criticism of RSS implementation is just, but not so of Atom; Atom feeds are pretty consistently spec-correct and don’t require guessing games to parse. My experience is that the only places where interpretation of Atom feeds goes wrong is where it’s dragged down by RSS, with people coding against a library that handles both, but treating it as though it were only RSS (e.g. not handling HTML titles correctly, perhaps leading to XSS attacks).
Fun fact on #3 that most people don’t know: Atom supports paginated feeds: <https://datatracker.ietf.org/doc/html/rfc5023#section-10.1>. No idea what library support is like. I admit it was defined in the AtomPub spec, but it should still apply to regular Atom feeds too. My own website’s feed is approaching half a megabyte with all my content ever; I’d like to make it paginated and only include the most recent ones in the first page (while still satisfying my unjustified thirst for the feed to still technically contain all the content), but as long as I’m using someone else’s static site generator that doesn’t already support that I’m probably not going to get round to implementing it.
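A hedged sketch of what consuming such a paginated feed could look like (URLs invented), assuming AtomPub-style rel="next" links between pages:

```python
import xml.etree.ElementTree as ET

# A toy first page of a paginated Atom feed: it advertises the next
# page with a rel="next" link, so readers can walk the archive on demand.
ATOM = """<feed xmlns="http://www.w3.org/2005/Atom">
  <link rel="self" href="https://example.com/feed"/>
  <link rel="next" href="https://example.com/feed?page=2"/>
</feed>"""

NS = {"a": "http://www.w3.org/2005/Atom"}
next_page = ET.fromstring(ATOM).find("a:link[@rel='next']", NS).get("href")
```

A reader would fetch `next_page` and repeat until no rel="next" link remains, letting the first page stay small while the archive stays reachable.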