Hacker News

I imagine it won't be as fast? The cool thing about Bitcask is that all of the keys are in memory - I imagine that would also be beneficial now that secondary indexes are supported...

LevelDB seems mostly well suited for datasets whose keys (in size and number) grow bigger than your RAM...
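To make the trade-off concrete: Bitcask's "keys in memory" design is essentially an append-only log on disk plus an in-memory hash (the keydir) mapping every key to its location in the log, so a read costs one seek but RAM grows with the keyspace. A minimal sketch of the idea (illustrative names, not Bitcask's actual code):

```python
import os

class Keydir:
    """Toy Bitcask-style store: one append-only log file on disk, plus an
    in-memory hash of every key -> (offset, length) into that file.
    Reads cost a single seek; RAM usage grows with the number of keys."""

    def __init__(self, path):
        self.index = {}                # key -> (offset, value length)
        self.log = open(path, "ab+")   # append-only data file

    def put(self, key, value):
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        self.log.write(value)
        self.log.flush()
        self.index[key] = (offset, len(value))  # latest write wins

    def get(self, key):
        offset, length = self.index[key]
        self.log.seek(offset)
        return self.log.read(length)
```

Once the `index` dict itself no longer fits in RAM, this design hits a wall, which is exactly the situation where a disk-resident index like LevelDB's sorted tables wins.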



I think it will be a welcome change for anyone who runs a decent-sized Riak deployment. We are currently adding machines simply to increase available RAM in the cluster.


Why not bring a node down, and then replace it with a node that has more RAM? Or have you already exceeded the RAM you can fit in a single node of your cluster?

I'd be very curious to know a bit about the character of your data, the size of your cluster, etc. (I've only run test clusters at this point, so hearing from someone doing production work would be informative.)


Replacement vs. addition is a situational trade-off, but ultimately the problem remains that you need to bring more RAM to the party.

My biggest RAM consumer stores historical data for a goods trading platform. Each trade is a unique key, with all the trade data being the value. Access speed is important, but not as critical as the other goodies I get from Riak (replication and automated rebalancing). Metadata is stored separately, but I hope to change that with Riak 1.0 secondary indexes.


Secondary indexes are currently only supported on LevelDB.
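For anyone unfamiliar with what that buys you: a secondary index is an extra mapping maintained alongside the primary key/value data, so you can look objects up by a field (e.g. trade date) instead of by primary key. A conceptual sketch in plain Python dicts (this illustrates the principle only, not Riak's actual 2i API):

```python
from collections import defaultdict

class IndexedStore:
    """Toy key/value store with named secondary indexes. Index entries
    are written alongside the object at put time and queried by value.
    (Deletes and re-indexing on overwrite are omitted for brevity.)"""

    def __init__(self):
        self.data = {}  # primary: key -> value (a dict of fields)
        # field name -> field value -> set of primary keys
        self.indexes = defaultdict(lambda: defaultdict(set))

    def put(self, key, value, index_fields=()):
        self.data[key] = value
        for field in index_fields:
            self.indexes[field][value[field]].add(key)

    def query(self, field, match):
        """Return every object whose indexed field equals `match`."""
        return {k: self.data[k] for k in self.indexes[field].get(match, set())}
```

For the trade-history use case above, each trade could be written with its metadata fields indexed, replacing the separate metadata store with a query by field value.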


Interesting - well, that would get me to choose LevelDB then!



