It feels like this is already a work in progress using Beaker Browser and Dat.
- Beaker lets you browse entire websites (dat archives) and fork them.
- It lets you create and serve your own sites directly from the browser and seed them from a server (like a torrent).
- It lets other sites create templated sites under your name for user generated content.
- Visitors by default temporarily seed your website, which may reduce single points of failure, hug-of-death outages, and hosting costs.
With this peer-to-peer torrent-like approach, the web can become distributed again and feel more like a "web". There's still a lot of work left and maybe Beaker itself isn't the best implementation for this idea, but it's a good start.
I'd like to add that Dat / Beaker / Bunsen doesn't just offer a great way to decentralize the Web for existing users; it also makes browsing and hosting websites more affordable for billions of other people, because you can sync websites P2P over local wifi, even offline. The cryptography under the hood protects users against man-in-the-middle attacks. It's like SSL for offline.
*Disclaimer: I am a volunteer contributor to Bunsen Browser.
In Bunsen Browser and others, while browsing offline: is there a feature for version tracking, so that participating websites can expose an API call that reports the latest version, to indicate whether an update is "required" or merely available?
> is there a feature for version tracking, so that participating websites can expose an API call that reports the latest version, to indicate whether an update is "required" or merely available?
@loceng Yes, that is how Dat works: any browser serving Dat archives helps propagate changes to them. Note that only changes signed with the archive's private key are propagated. All of that happens under the hood for the owner of a dat archive, though. You just run the `dat share` command in a directory, you get a public address for the archive, and any time you change a file in the archive it is automatically signed with the private key and shared out to the network.
It seemed to me that the OP was making a distinctly different suggestion: the real challenge is offering a better UX.
Emerging distributed tech won't fix a UX problem just because it happens to be technologically sophisticated (he calls out TOR, but I think he is making a general comment here).
Instead, he asks, why not spend some effort giving older tech like RSS a better UX?
I'm inclined to agree, but on the other hand it seems like the marketplace of ideas speaks for itself and we should be keeping our eyes on the future.
I would argue that Beaker is providing a better UX for the web while solving the centralised nature of it. Its API allows websites to provide interfaces for creating and modifying websites tailored for specific audiences, owned by the user.
So you can have your own RSS subscriptions in a Dat, a feed reader in another Dat, click a button on a website to subscribe to it and add it to your Dat. The feed reader can keep track of what you've read and store it in its own Dat or a different Dat (if you want client/data separation). Your mobile phone can sync to your Dat(s) so you have Desktop/Mobile sync all in a single place.
I've not tried this, but I don't see why it wouldn't work.
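To sketch what that data layout could look like: below is a simulation with plain objects standing in for Dat archives. The file names and JSON shapes are entirely my own invention; in Beaker you'd read and write these files with the DatArchive API, and Dat would then sign and sync them to your other devices.

```javascript
// Hypothetical layout: subscriptions live in the user's "data" Dat,
// read state in a separate Dat (client/data separation).
const subscriptionsDat = {           // e.g. dat://<user-data-key>/
  'subscriptions.json': JSON.stringify([
    { title: 'Example Blog', feed: 'https://example.com/rss.xml' },
  ]),
};

const readStateDat = {               // a second Dat owned by the reader app
  'read.json': JSON.stringify({ 'https://example.com/rss.xml': ['post-1'] }),
};

// "Subscribe" = append an entry and write the file back; with a real
// archive this write is what gets signed and propagated to peers.
function subscribe(dat, entry) {
  const subs = JSON.parse(dat['subscriptions.json']);
  subs.push(entry);
  dat['subscriptions.json'] = JSON.stringify(subs);
}

subscribe(subscriptionsDat, {
  title: 'HN', feed: 'https://news.ycombinator.com/rss',
});
console.log(JSON.parse(subscriptionsDat['subscriptions.json']).length); // 2
```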
I've been helping to develop the Bunsen browser for Android, but we don't have an iPhone build yet. The hard part is just getting it built with Node.js wired up correctly for iOS; there are tools for that, we just don't have a volunteer working on it yet.
Mobile as a whole is an unsolved problem for decentralized systems. The mobile revolution is and has been by far the most powerful driver of centralization in the last 10-15 years.
Mobile devices are slower, have less memory, and must consume less power than desktop, laptop, or server devices. To achieve good battery life they really need to be in an almost-off state most of the time. Add to this the fact that cellular data plans limit bandwidth and cellular networks are a lot slower than most land-line networks and you also have to be very efficient with the use of bandwidth.
This means that decentralized systems that rely on peer to peer participatory propagation of data or distributed compute just don't work well on mobile. Anything with P2P data propagation will use too much data plan and run the radio too much, shortening battery life, while anything with distributed compute will destroy battery life and turn your phone into a pocket hand warmer.
Mobile devices really are thin clients. I call them "dumb terminals for the cloud." Since the cloud is mainframe 2.0, mobile devices are the "glass TTY" (e.g. VT100) 2.0.
The best solution is probably not to fight the nature of mobile devices as thin clients but to tether them to stationary devices. But which stationary devices? Laptops are themselves mobile and are off half the time, and most people (myself included) no longer own desktops. I have a personal server, but I'm a geek and very much in the minority. Most people just do not own an always-on device.
Farming this out to random always-on devices is a security nightmare, or at best no better than the vertically integrated, siloed cloud.
I see only three solutions:
(1) Create a niche for a personal always-on server type device and successfully market one to the end user. It would have to be open enough to allow the server side of 'apps' to be installed. Many have tried to do this but nothing has caught on.
(2) Create a mobile device that's designed to be a "real computer." With 5G coming the bandwidth for this might be on the way, but you'd also have to contend with battery life and heat dissipation. One avenue would be to split the CPU in two: a high-power burstable CPU and a low-power slow always-on CPU. Require the always-on parts of decentralized services to run there and as a result to be very optimized. The problem is that a mass-market mobile device is a huge undertaking. Another route might be to sell a snap-on case that carries an extra battery and also includes a mini-server CPU, RAM, storage, etc. This would make your phone a bit bulkier but if there are benefits / killer apps it could catch on.
(3) Solve the security problems inherent in appointing random stationary nodes to serve random mobile devices. This would probably involve a major innovation like fast scalable fully homomorphic encrypted virtual machines or really tough security enclave processors.
Very good analysis. I found myself thinking about this phrase:
> The mobile revolution is and has been by far the most powerful driver of centralization in the last 10-15 years.
Many of us who were active users of Skype in its earlier days (mid-2000s) might remember Skype's first attempt at a mobile client. They took all the distributed P2P goodness of the desktop client and tried to have that run on the mobile environment.
The result was sadly a smartphone app that was slow and rapidly drained your battery. For those of us with many Skype group chats open, the mobile client was basically unusable.
So Microsoft/Skype had to go back and rethink the mobile client. To your points in your reply... they made it a "thin client" with all the power in the centralized servers.
As they did that, it seemed from the outside that they determined over time that maintaining a desktop P2P source code and a mobile thin-client/server source code didn't make sense. And so ultimately the desktop P2P was abandoned and everything became client/server. (Which is the case now - Skype on your desktop is basically a wrapper for a web client.)
And so... the quest for a good mobile user experience wound up being one of the drivers for centralizing one of the original decentralized P2P apps. [1]
Good analysis!
[1] Yes, there were, I'm sure, many other contributing factors, including the issues around the supernodes that led to one of the major outages. And yes, I do realize that Skype, even in its earliest form, was NOT a completely decentralized communications app. They did have a centralized mechanism for logins / authentication, and also for PSTN gateways and other services.
I really enjoy your thinking. Do you have some kind of blog where I can follow you? :D As you already mentioned, contributing whatever resources you consume is largely impractical on mobile devices, because it would pretty much double data and battery usage. So while there is most likely some overhead to the third solution you suggested, I still think it is probably the easiest one, because it doesn't require any new specialised hardware.
Maybe regulation can solve some of the problems with the current systems, but the idealist in me really wants to see provably transparent (open source) and secure solutions which don't require trust in the hardware, so we can still make use of modern, efficient (federated) server farms without having to give up control over our data.
It's actually worse than doubling. The nature of distributed systems means that contributing back the resources you consume normally at least triples resource consumption. I'm not aware of any approach to decentralizing services like Facebook, Twitter, etc. that would merely double it.
Your typical desktop or laptop has a lot of resources to spare. Your typical mobile device has none. Mobile promotes a client/server mainframe/dumb-term architecture for fundamental technical reasons.
We would love for someone to test these theories using Bunsen Browser! Getting some solid metrics would go a long way toward solving any problems that might be there. Theoretically it shouldn't be eating up much bandwidth, because every device visiting a Dat archive helps contribute to the network.
Mobile devices could certainly be used to host such services /when they are being charged/. In practice most of us charge our devices during the night and, like cheap night-time electricity, we could have overnight mobile seeding.
... and it is always night-time somewhere in the world.
Yes, the personal router is a logical spot to put an always-on converged router/server device. Unfortunately nobody's done it well enough yet. It's an area that's ripe for an "iPhone moment."
The other problem is that we're in an era where it's very hard to market anything if you're not Google, Amazon, or Apple, and those firms have negative interest in promoting any form of decentralization.
This is something addressed in a side project I'm building. Websites are converted to JSON (or built as JSON), then built in the browser by a small JavaScript engine.
Since sites are just JSON, they're highly portable, and sections or whole pages can be simply copied from one file to another, to add content to your site.
The project is in late alpha - I'm just now completing the in-browser editor that uploads to S3. Other than requiring fewer server calls, it uses traditional browsers, servers, networks, etc.
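For readers wondering what a "small JavaScript engine" for JSON-described pages might look like, here is a guess at the core idea: a recursive render function turning a JSON tree into HTML. The node shape (`{tag, attrs, children}`) is my own invention, not necessarily the commenter's actual format.

```javascript
// Recursively render a JSON node tree into an HTML string.
function render(node) {
  if (typeof node === 'string') return node;            // text node
  const attrs = Object.entries(node.attrs || {})
    .map(([k, v]) => ` ${k}="${v}"`).join('');
  const children = (node.children || []).map(render).join('');
  return `<${node.tag}${attrs}>${children}</${node.tag}>`;
}

// A page is just data, so copying a section from one site to another
// is an ordinary JSON copy.
const page = {
  tag: 'article',
  children: [
    { tag: 'h1', children: ['Hello'] },
    { tag: 'p', attrs: { class: 'intro' }, children: ['A JSON-defined page.'] },
  ],
};

console.log(render(page));
// <article><h1>Hello</h1><p class="intro">A JSON-defined page.</p></article>
```

In the browser you'd build DOM nodes instead of strings, but the portability argument is the same: the site is pure data until render time.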
Because there's no money in making a better RSS reader.
I'm sure the internal story of why Google killed off Reader is far more mundane office politics than we'll ever know, but since then, no other reader has risen to popularity. As the article mentions, there's Feedly, whose UI is functional if a bit baroque (why does every feed need to be tagged?), but ultimately it's still like trying to drink from a firehose. No RSS reader company has come about since that shows Google was wrong to kill off Reader.
It's easy enough to think up improvements to their UX, but we don't have a marketplace of ideas because there is so much friction (even ignoring the work involved in starting up a company and hiring a team, there's no way to introduce a small tweak to Feedly without recreating their platform - and then you'd still have to convince enough people to migrate to your Feedly-clone first).
What we have is a marketplace of VC-funded corporations, where branding is king. There's no stock exchange for listing specific features Feedly could implement to promote better RSS reader software.
In the near future I will be in a project that generates sensor data in PostgreSQL/TimescaleDB and/or InfluxDB which I'd like to open to the public.
Any recommendation on how to make time series data available via Dat or IPFS is highly appreciated.
There will be base data of different systems and experiment data from experiments running in those environments. So far I have no concrete idea on how to segmentize and make available the data in a sensible way.
I know IPFS is happy to help with implementation advice on discuss.ipfs.io or #ipfs on IRC, they did for me.
It does depend on the scale of your data and whether you want to upload data in a streaming or discrete fashion. If by segmentize you mean chunking your data into smaller sections, that is handled automatically by both Dat and IPFS.
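If by "segmentize" you instead mean logical partitioning for consumers, one hypothetical layout is one file per day, so peers can fetch and seed only the date ranges they care about. This is purely a sketch; adapt the granularity (hour/day/experiment) to your data rate.

```javascript
// Partition sensor records into one newline-delimited JSON file per day,
// suitable for publishing as files in a Dat archive or IPFS directory.
function partitionByDay(records) {
  const files = {};
  for (const r of records) {
    const day = new Date(r.ts).toISOString().slice(0, 10); // YYYY-MM-DD
    const path = `data/${day}.ndjson`;
    files[path] = (files[path] || '') + JSON.stringify(r) + '\n';
  }
  return files; // { 'data/2018-06-01.ndjson': '...', ... }
}

const sample = [
  { ts: '2018-06-01T10:00:00Z', sensor: 'temp', value: 21.5 },
  { ts: '2018-06-01T11:00:00Z', sensor: 'temp', value: 22.1 },
  { ts: '2018-06-02T10:00:00Z', sensor: 'temp', value: 20.9 },
];

console.log(Object.keys(partitionByDay(sample)));
// [ 'data/2018-06-01.ndjson', 'data/2018-06-02.ndjson' ]
```

Since Dat archives are versioned, appending to the current day's file and leaving past days immutable also keeps most of the archive stable for seeders.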
Interesting idea and project. The browser's UI is very different... and lacks some (most) customization options. I spent a long time looking for a way to increase the font size, for example. (Not all of us are accustomed to squinting at phones.) As for preferences to set: I thought they were hidden somewhere, but they seem to be missing entirely. Say what?
After an hour toying with the interface: technically it looks to have a lot of possibilities. But I have serious concerns about the lack of transparency. Unlike with most browsers today, there's no way to tell what's going on with security, ad-blocking, or tracking.
In sum, cool project. The potential is clear. But I can't imagine anyone non-technical not running away at first sight. It's more opaque than Ello, even. More like an oscilloscope than a mobile!
Beaker dev here, quick question: did you try our beta release? We've done a ton of work in the past few months to make it less opaque and to freshen up our UIs. Would love to hear your thoughts on the beta if you have the time!
I like it. It's not clear to me from skimming the sites, though, whether Beaker and Dat are standards or purely tools. If they are standards, and people are therefore able to write their own implementations, I believe it can be successful. However, if it's just a tool without standardization efforts behind it, then it's still a centralized system.
Beaker dev here. Dat is a protocol that anyone can implement, and Beaker is a tool that implements Dat in the browser. We built Beaker as a demonstration of what becomes possible when you put a peer-to-peer protocol in the browser, with the hope that other browsers will someday follow in our footsteps.
I think most websites can be generated statically. But yes, it does mean a lot of the existing patterns used on the current web need to be redesigned for it. Again, this is all very new, so there's a lot that needs to be worked out.
I can give an example of non-static data (though what is static vs dynamic can be a grey area):
SPAs work great with Beaker/Dat since users can download the app and use it offline. The data can be any Dat archive. So for a social network, each user can have their own Dat archives of images and posts. The root site can hold an index of each user and download individual files from their Dat and display them using client-side routing. In this scenario, each user has their own database as a Dat which is indexed by a parent Dat website.
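The "root site indexes per-user Dats" pattern above can be sketched as follows, simulated in memory. The URLs, keys, and file shapes are made up; in Beaker you'd fetch these files with the DatArchive API instead of reading from a plain object.

```javascript
// The root site maintains an index of known users' archives.
const userIndex = {
  alice: 'dat://aaaa.../profile.json',
  bob:   'dat://bbbb.../profile.json',
};

// Fake network: in reality each user's own Dat serves their data,
// signed with that user's key.
const network = {
  'dat://aaaa.../profile.json': { name: 'Alice', posts: ['hi from alice'] },
  'dat://bbbb.../profile.json': { name: 'Bob',   posts: ['hi from bob'] },
};

// The SPA aggregates a timeline entirely client-side.
function buildTimeline(index) {
  return Object.values(index).flatMap((url) => network[url].posts);
}

console.log(buildTimeline(userIndex)); // [ 'hi from alice', 'hi from bob' ]
```

The key design property: no server ever assembles the feed; every "database" is a user-owned archive, and the root site only holds pointers.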
Dats aren't static. They're public key addressed, so you can make changes. The next protocol iteration will have support for multiple writers using CRDTs (sometime this summer).
Myself and others are currently volunteering to help bring the Dat Archive API to Bunsen Browser, a mobile Dat Web client currently only for Android (unless someone wants to jump in and make the build for iOS).
Are there any DAT:// homepages or web-rings or whatever that I can start using to browse around? I have it installed but can't find any cool DAT sites to browse.
Shameless plug for my personal website: dat://tomjwatson.com
I also wrote a blog post about my experience with dat/beaker and getting my domain set up for access in beaker - dat://tomjwatson.com/blog/decentralising-the-web/
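For anyone curious about the domain setup: as I understand the Dat DNS conventions (verify against the current docs before relying on this), you can map a domain to an archive either with a DNS TXT record or by serving a well-known file over HTTPS:

```
# Option 1: a DNS TXT record on your domain
example.com.  TXT  "datkey=<64-character-hex-public-key>"

# Option 2: serve https://example.com/.well-known/dat containing:
dat://<64-character-hex-public-key>
TTL=3600
```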
https://beakerbrowser.com/
https://datproject.org/