Sure, something like ZFS would be nice, but in a router it's not needed, as you'd mount the filesystem read-write only during updates, wouldn't you? Logs either go to a separate RAM-based filesystem (tmpfs) or (preferably) to a remote logging host.
FFS can crash quite safely. The chance of lost data is incredibly small, especially with "softdep" out of the way. It just doesn't come back online as fast as a journaled file system will.
One of our Mac mini CI servers also failed to boot after an update today. When this happens on Linux I usually try to troubleshoot the problem rather than wiping the whole machine. That doesn't seem feasible on macOS with reasonable effort, because of recently introduced security features (and also my lack of understanding of the macOS boot process).
I was wondering the same thing. I suspect its value would decrease in those cases: the lower-level representation provides less semantic association, and therefore less information (per object) for modeling hit rate and lifetime. Also, the higher object count would require more resources.
Edit: perhaps in a file system, the higher-level information could still be provided?
Even within database caches, while the page size is often fixed, you are still caching and operating on more abstract, variable-size multi-page objects. A good cache replacement algorithm has some implicit awareness of the relative sizes of these objects when optimizing the replacement policy.
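To illustrate what size awareness can mean in a replacement policy, here is a minimal sketch (my own toy example, not any particular database's implementation): an LRU cache bounded by total bytes rather than entry count, so a large object displaces proportionally more small ones.

```python
from collections import OrderedDict


class SizeAwareLRU:
    """LRU cache bounded by total bytes rather than entry count.

    A toy sketch of size-aware replacement: inserting a large
    object may evict several smaller, less recently used ones.
    """

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()  # key -> (value, size); oldest first

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key][0]

    def put(self, key, value, size):
        if key in self.entries:
            self.used -= self.entries.pop(key)[1]
        # Evict least-recently-used entries until the new object fits.
        while self.entries and self.used + size > self.capacity:
            _, (_, evicted_size) = self.entries.popitem(last=False)
            self.used -= evicted_size
        if size <= self.capacity:
            self.entries[key] = (value, size)
            self.used += size
```

Real policies (e.g. GDSF or cost-aware variants) fold size into the eviction score itself rather than just the capacity accounting, but the principle is the same.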