Bit of a tangent, but what I'm looking for is an S3-compatible server with transparent storage, i.e. storing each file (object) as an individual file on disk.
MinIO used to do that but changed many years ago. Production-grade systems don't do that, for good reason. The only tool I've found is Rclone, but it's not really meant to be exposed as a service.
minikv actually supports a fully S3-compatible API (PUT/GET/BATCH, including TTL extensions and real-time notifications).
By default, the storage engine is segmented/append-only with object records in blob files, not “one file per object”.
However, you can configure a backend (like the in-memory mode for dev/test, or Sled/RocksDB) and get predictable, transparent storage behavior for objects.
Storing each object as an individual file isn’t the default: for durability and atomicity, objects are grouped inside segment files, which enables fast compaction, consistent snapshots, and better I/O performance.
If you need “one file per object” for a specific workflow, it’s possible to add a custom backend or tweak volume logic — but as you noted, most production systems move away from that model for robustness.
That said, minikv’s flexible storage API makes experimentation possible if that’s what your use-case demands and you’re fine with the trade-offs.
Let me know what your usage scenario is, and I can advise on config or feature options!
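For what it's worth, the "one file per object" layout itself is simple to sketch. This is a generic toy illustration of the idea, not minikv's actual backend API (the class and method names here are made up): each key becomes one percent-encoded filename on disk, and writes go through a temp file plus rename so a PUT is atomic.

```python
import os
import tempfile
import urllib.parse


class FilePerObjectStore:
    """Toy 'one file per object' store: each key maps to exactly one
    file on disk, so the storage stays transparent and greppable.
    Illustrative only -- not minikv's actual backend interface."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def _path(self, key):
        # Percent-encode the key so arbitrary S3 keys (slashes and all)
        # become safe, reversible filenames.
        return os.path.join(self.root, urllib.parse.quote(key, safe=""))

    def put(self, key, data):
        # Write to a temp file, then rename: os.replace is atomic on POSIX,
        # so readers never observe a half-written object.
        tmp = self._path(key) + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
        os.replace(tmp, self._path(key))

    def get(self, key):
        with open(self._path(key), "rb") as f:
            return f.read()


store = FilePerObjectStore(tempfile.mkdtemp())
store.put("photos/cat.jpg", b"jpeg-bytes")
```

The appeal of this model is exactly the transparency above: the object shows up as a plain file ("photos%2Fcat.jpg") you can inspect with ordinary tools, at the cost of one inode and one fsync per object.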
When device sync finally works reliably, or bookmark tags work on mobile, I will concede that you've turned a corner. What I expect instead is more theming and AI bullshit; time will tell.
I'm not trying to argue against it; I think "slave" branches make no sense anyway. But to GP's point, BitKeeper didn't enslave anybody, it just used the word.
If we believe we should remove allusions to negative things why are we ok with "kill", "orphan", "evict", "bash", "cut", "isolate" etc? What is special about that terrible concept that we should stop using the word even when not applied to people at all?
The point of bringing up BitKeeper is largely this: why use a word divorced from its original meaning at all? "master" wasn't an explicit choice by a git maintainer; it was inherited noise. When confronted with where it came from, and finding it wasn't a choice but a bad legacy, the git maintainers generally agreed it would be nice to pick something that made sense as a deliberate choice (rather than bullshit noise from a practically dead-and-gone upstream project), and after much debate "main" won out as something a lot of people were using anyway.
That's what is "special" about it, that it wasn't special. It wasn't chosen. It was just a stupid inherited default that didn't make sense when questioned.
It was never an intentional allusion to a negative thing, it was accidentally a negative thing causing real people some harm, and it was easier to fix than to justify why it was a negative thing in the first place.
There exists winden.app, which is a Magic Wormhole webapp. They run their own mailbox and relay, so you need to use the right options in the wormhole CLI.
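With the reference magic-wormhole CLI, pointing at a non-default mailbox and transit relay looks roughly like this. The server URLs below are placeholders, not winden's actual endpoints; check winden.app for the real ones.

```shell
# Send via a non-default mailbox (rendezvous) server and transit relay.
# <winden-mailbox-host> / <winden-relay-host> are placeholders.
wormhole --relay-url ws://<winden-mailbox-host>/v1 \
         --transit-helper tcp:<winden-relay-host>:4001 \
         send ./photo.jpg

# The receiving side must point at the same servers for the two to meet:
wormhole --relay-url ws://<winden-mailbox-host>/v1 \
         --transit-helper tcp:<winden-relay-host>:4001 \
         receive
```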
When operations complete in 200ns instead of blocking for microseconds or milliseconds on fsync, you avoid thread pool exhaustion and connection queueing. Each sync operation blocks its thread until the disk confirms the write, tying up memory and connection slots and causing tail latency spikes.
With FeOxDB's write-behind approach:
- Operations return immediately, threads stay available
- Background workers batch writes, amortizing sync costs across many operations
- Same hardware can handle 100x more concurrent requests
- Lower cloud bills from needing fewer instances
For desktop apps, this means your KV store doesn't tie up threads that the UI needs. For servers, it means handling more users without scaling up.
The durability tradeoff makes sense when you realize most KV workloads are derived data that can be rebuilt. Why block threads and exhaust IOPS for fsync-level durability on data that doesn't need it?
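The write-behind pattern behind those bullet points is easy to sketch. This is a toy illustration of the general technique, not FeOxDB's actual code (all names here are made up): callers update memory and return immediately, while a background worker drains whatever has queued up and pays one "sync" per batch.

```python
import queue
import threading


class WriteBehindStore:
    """Toy write-behind KV store: put() returns immediately; a background
    worker batches queued writes and syncs once per batch.
    Illustrative only -- not FeOxDB's implementation."""

    def __init__(self):
        self.mem = {}           # in-memory view, updated on the caller's thread
        self.flushed = {}       # simulated on-disk state
        self.sync_count = 0     # how many (simulated) fsyncs happened
        self._q = queue.Queue()
        self._worker = threading.Thread(target=self._flush_loop, daemon=True)
        self._worker.start()

    def put(self, key, value):
        # Fast path: no disk I/O on the caller's thread.
        self.mem[key] = value
        self._q.put((key, value))

    def _flush_loop(self):
        while True:
            item = self._q.get()            # block until there is work
            if item is None:                # shutdown sentinel
                return
            batch = [item]
            try:
                while True:                 # drain everything queued meanwhile,
                    nxt = self._q.get_nowait()  # amortizing one sync per batch
                    if nxt is None:
                        self._write_batch(batch)
                        return
                    batch.append(nxt)
            except queue.Empty:
                pass
            self._write_batch(batch)

    def _write_batch(self, batch):
        # A real store would append the batch to disk and call fsync once here.
        for key, value in batch:
            self.flushed[key] = value
        self.sync_count += 1

    def close(self):
        self._q.put(None)
        self._worker.join()


store = WriteBehindStore()
for i in range(1000):
    store.put(f"k{i}", i)   # each call returns without touching "disk"
store.close()               # all 1000 writes land, usually in far fewer batches
```

The tradeoff is exactly the one described above: between `put()` returning and the worker flushing, an acknowledged write can be lost on a crash, which is acceptable precisely when the data is derived and rebuildable.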