Hacker News

isn't knowing every member of a cluster and the leader a core function of raft?

I feel like this post is leaving out critical pieces of information, like why the URL can't be deterministic or data about comparisons of different approaches.

At what cluster size and concurrency does asking every node break down?

I have been meaning to take a closer look at rqlite and want to understand more about it.



rqlite author here.

Yes, you're right: every node knows the Raft network address of every other node. But Raft network addresses are not the addresses used by clients to query the cluster. Instead, every node also exposes an HTTP API for queries.

So code needs to exist to share information that the Raft layer doesn't handle -- in this case, the HTTP API addresses -- between nodes.
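To make the gap concrete, here is a minimal, hypothetical sketch (not rqlite's actual code, and all names are illustrative): the Raft layer only knows each node's Raft address, so the application layer has to maintain its own mapping from node to HTTP API address.

```python
# What the Raft layer gives you for free: each peer's Raft address.
raft_peers = {
    "node-1": "10.0.0.1:4002",
    "node-2": "10.0.0.2:4002",
    "node-3": "10.0.0.3:4002",
}

# What clients actually need, and what Raft knows nothing about:
http_addrs = {}

def advertise_http_addr(node_id, http_addr):
    """Each node must explicitly share its HTTP API address."""
    http_addrs[node_id] = http_addr

def http_addr_for(node_id):
    """Lookup only works for nodes that have advertised themselves."""
    return http_addrs.get(node_id)  # None if never advertised

advertise_http_addr("node-1", "10.0.0.1:4001")

print(http_addr_for("node-1"))  # known, because node-1 advertised
print(http_addr_for("node-2"))  # None -- Raft alone can't tell us
```

The design question the blog post addresses is exactly how (and whether) to replicate that second map.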


Also, the HTTP API URL isn't deterministic because a) the operator sets it for any given node, and b) over the lifetime of the cluster the entire set of nodes can change as nodes fail, are replaced, and so on.


>At what cluster size and concurrency does asking every node break down?

None: a follower only needs to ask the leader. So regardless of cluster size, in 6.0 querying a follower introduces only a single hop to the leader before responding to the client. While this hop wasn't required in earlier versions, those versions had to maintain state -- and stateful systems are generally more prone to bugs.
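The single-hop, stateless behavior described above can be sketched roughly as follows (a hypothetical illustration, not rqlite's actual API -- class and method names are invented):

```python
class Node:
    """Toy model of a cluster node handling client queries."""

    def __init__(self, node_id, is_leader=False):
        self.node_id = node_id
        self.is_leader = is_leader
        self.leader = None  # set once the cluster elects a leader

    def execute_locally(self, sql):
        return f"result of {sql!r} from {self.node_id}"

    def handle_query(self, sql):
        if self.is_leader:
            return self.execute_locally(sql)
        # Not the leader: one hop to the leader, no replicated map of
        # HTTP addresses to keep in sync.
        return self.leader.handle_query(sql)

leader = Node("node-1", is_leader=True)
follower = Node("node-2")
follower.leader = leader

print(follower.handle_query("SELECT 1"))  # answered after one hop
```

Because the follower holds no extra replicated state, there is nothing to fall out of sync, whatever the cluster size.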


I am curious about where things broke down with the 301 based solution y'all used earlier.


I included details in the blog post; the 3.x-to-5.x design had the following issues:

- a stateful system, with extra data stored in Raft. There's always a chance for bugs with stateful systems.

- some corner cases whereby the state rqlite was storing got out of sync with other cluster configuration. Finding the root cause of these bugs could have been very time-consuming.

- certain failure cases happened during automatic cluster operations, meaning an operator mightn't notice them and be able to deal with them. Now those failure cases -- while still very rare -- happen at query time. The operator knows immediately that something is up, and can deal with the problem there and then, usually by just re-issuing the query.
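Since the remedy for a rare query-time failure is simply to re-issue the query, the client-side handling can be as simple as this sketch (a hypothetical stand-in, not rqlite's client library):

```python
import time

def query_with_retry(do_query, retries=3, delay=0.1):
    """Re-issue the query a few times before giving up."""
    last_err = None
    for attempt in range(retries):
        try:
            return do_query()
        except ConnectionError as e:
            last_err = e
            time.sleep(delay)
    raise last_err

# Simulate a query that fails once (e.g. a leader change mid-request),
# then succeeds when re-issued.
attempts = {"n": 0}

def flaky_query():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise ConnectionError("leader changed")
    return [{"id": 1}]

print(query_with_retry(flaky_query))  # succeeds on the second attempt
```

The point is that the failure is now visible exactly where it can be handled, rather than hidden inside background cluster operations.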




