Not to be confused with high-availability Dqlite[1], which is one of the datastores the k3s Kubernetes distribution can run on (instead of etcd), via the Kine etcd shim[2]. Ultimately though, the K3s team replaced Dqlite with embedded etcd to get high availability[3].
It's a very simple request log in front of SQLite, so unless there is a problem with the Raft algorithm used for replicating the log, it should hold up very well.
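The "request log in front of SQLite" idea can be sketched as follows. This is a minimal, hypothetical model (not rqlite's actual code): each committed log entry is a SQL statement, and every replica applies the same entries in the same order to its local SQLite database.

```python
import sqlite3

# Hypothetical sketch of a replicated statement log. In a real system
# (e.g. rqlite) the log is replicated and committed via Raft; here we
# simply apply an already-committed log, in order, to a local SQLite DB.
committed_log = [
    "CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)",
    "INSERT INTO kv VALUES ('name', 'alice')",
    "UPDATE kv SET v = 'bob' WHERE k = 'name'",
]

db = sqlite3.connect(":memory:")
for entry in committed_log:
    db.execute(entry)  # every replica applies the same entries in the same order
db.commit()

print(db.execute("SELECT v FROM kv WHERE k = 'name'").fetchone()[0])  # -> bob
```

Because every replica applies an identical, ordered log, all replicas converge to the same database state; that determinism is what the consensus algorithm buys you.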
Suggestion: have you considered running office hours and inviting your users to chat to you about what they're doing with it?
I've been doing that for six months for my Datasette project and I've had over 60 conversations now, it's been a revelation - it almost completely solved the "I don't know how people are using this" problem for me, and gave me a ton of ideas for future directions for the project.
Interesting! An old colleague of mine, Ben Johnson, also does the same thing for litestream, his latest SQLite replication project. I thought it was just him.
Now that I know two folks do it, I'll have to give it serious thought. Thanks for the blog post ref
Thanks for the quality work here. I have been considering using this over Infinicache or Hazelcast for a locally replicated caching scenario. Are there any battle-testing stories/case studies out there?
Olric did come up in my search but I never paid attention to it due to the lack of production usage examples. I usually try to piggyback on existing, well-tested tools. The exception is rqlite, because I understand both SQLite and Raft really well. So even if something goes wrong I will know what needs to be done.
isn't knowing every member of a cluster and the leader a core function of raft?
I feel like this post is leaving out critical pieces of information, like why the URL can't be deterministic or data about comparisons of different approaches.
At what cluster size and concurrency does asking every node break down?
I have been meaning to take a closer look at rqlite and want to understand more about it.
Yes, you're right: every node knows the Raft network address of every other node. But Raft network addresses are not the addresses used by clients to query the cluster. Instead, every node also exposes an HTTP API for queries.
So code needs to exist to share information -- in this case the HTTP API addresses -- between nodes, since the Raft layer doesn't handle that.
Also, the HTTP API URL isn't deterministic because a) the operator sets it for any given node, and b) over the lifetime of the cluster the entire set of nodes could change as nodes fail, are replaced, and so on.
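Concretely, you can think of each node as carrying two addresses. Here's an illustrative sketch (hypothetical names and addresses, not rqlite's internals) showing why the HTTP addresses need a separate sharing mechanism:

```python
from dataclasses import dataclass

# Illustrative sketch: each node has a Raft address (tracked by the Raft
# layer) and a separate, operator-chosen HTTP API address that clients use.
@dataclass
class Node:
    node_id: str
    raft_addr: str  # consensus traffic; the Raft configuration knows these
    http_addr: str  # client queries; Raft does NOT track these

cluster = [
    Node("node1", "10.0.0.1:4002", "10.0.0.1:4001"),
    Node("node2", "10.0.0.2:4002", "10.0.0.2:4001"),
]

# Since the Raft configuration only yields raft_addr values, the cluster
# must share HTTP addresses through some other channel, e.g. a side map:
http_addrs = {n.node_id: n.http_addr for n in cluster}
print(http_addrs["node2"])  # -> 10.0.0.2:4001
```

The map here is static for clarity; in a real cluster it has to be kept current as nodes join, fail, and are replaced, which is exactly the extra state the comment above is describing.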
>At what cluster size and concurrency does asking every node break down?
None -- a follower only needs to ask the leader. So regardless of the size of the cluster, in 6.0 querying a follower only introduces a single hop to the leader before responding to the client. While this hop was not required in earlier versions, those versions had to maintain state -- and stateful systems are generally more prone to bugs.
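That single hop can be modelled with a toy simulation (this is not rqlite code -- just an illustration that the hop count stays at one no matter how many followers exist):

```python
# Toy model of the 6.0-style query path: a follower that receives a query
# makes exactly one hop to the leader, independent of cluster size.
class Leader:
    def __init__(self):
        self.data = {"k": "v"}

    def query(self, key):
        return self.data.get(key)

class Follower:
    def __init__(self, leader):
        self.leader = leader
        self.hops = 0

    def query(self, key):
        self.hops += 1  # the single leader hop
        return self.leader.query(key)

leader = Leader()
followers = [Follower(leader) for _ in range(9)]  # a 10-node cluster

result = followers[0].query("k")
print(result, followers[0].hops)  # -> v 1
```

Growing the cluster adds more followers, but any one query still touches only the follower it arrived at plus the leader, so the per-query cost doesn't scale with cluster size.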
I included details in the blog post; the 3.x to 5.x design had the following issues:
- stateful system, with extra data stored in Raft. Always a chance for bugs with stateful systems.
- some corner cases whereby the state rqlite was storing got out of sync with other cluster configuration. Finding the root cause of these bugs could have been very time-consuming.
- certain failure cases happened during automatic cluster operations, meaning an operator mightn't notice them or be able to deal with them. Now those failure cases -- while still very rare -- happen at query time. The operator knows immediately that something is up, and can deal with the problem there and then, usually by just re-issuing the query.
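Because these rare failures now surface at query time, a client can handle them with a simple retry loop. A hedged sketch follows -- the error type, cluster function, and retry policy are all made up for illustration; in practice the failure would be an error response from a node's HTTP API:

```python
import time

# Sketch of re-issuing a query on a transient, query-time failure.
class TransientError(Exception):
    pass

attempts = {"n": 0}

def query_cluster(sql):
    # Simulated cluster call: fails once, then succeeds.
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise TransientError("leadership changed mid-query")  # rare failure
    return [("row1",)]

def query_with_retry(sql, retries=3, backoff=0.01):
    for i in range(retries):
        try:
            return query_cluster(sql)
        except TransientError:
            if i == retries - 1:
                raise  # give up after the last attempt
            time.sleep(backoff)

rows = query_with_retry("SELECT * FROM t")
print(rows, attempts["n"])  # -> [('row1',)] 2
```

The point isn't the specific policy -- it's that a query-time failure is visible to exactly the party (the client or operator) best placed to respond to it immediately.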
As I was reading I was thinking "Why doesn't the Follower proxy the request to the Leader?" which I see was covered later on in "Transparent request forwarding". Good stuff!
I remember building a janky version of sqlite+raft for Stripe's 2014 CTF. I'm sure others here have made a similar comment when rqlite gets posted to HN.
rqlite author here. Yes, it's coming in a future release and is much easier to do now.
rqlite has always prioritized quality, clean design, and simplicity of operation. So I've been reluctant to add a feature -- in this case request forwarding -- until I was sure it would be a clear win and not make rqlite less robust. After years of experience with the system, I'm happy it can be added in a high-quality manner.
It wouldn't make too much sense to run a distributed raft-based SQLite system in its entirety in a browser (via WASM). However, you can run an individual SQLite instance in the browser (via WASM) using this: https://sql.js.org/#/
Because that's a single node; this is made for clusters. Maybe what you're looking for is some other form of data replication? What use case would you have for multiple browsers syncing over a network?
[1] https://dqlite.io/
[2] https://github.com/k3s-io/kine
[3] https://rancher.com/docs/k3s/latest/en/installation/ha-embed...