Not just that: a deep understanding of how the RDBMS (or NoSQL solution, for that matter) implements the database is just as important. Making truly informed, non-cargo-cult decisions about database technologies requires a low-level understanding of how the implementations write to disk, perform indexing, structure data in memory, and behave when distributed. Otherwise you're just shooting in the dark.
I generally don't use MySQL, but IIRC it confirms that nothing went wrong saving the data (whereas with Mongo there is no confirmation step; you just have to hope it succeeded), and the data is written to disk. Additionally, I've heard of zero "Oh no, MySQL just lost 50% of my production database!" stories that weren't the user's fault, and I've heard enough of those about Mongo to stay away.
Respectfully, this isn't true. Yes, fire-and-forget is the default behavior, but there is a confirmation step you can check. It may be implemented a bit differently from driver to driver, but it is generally called a "safe insert". Numerous people use this to verify their writes, both against a single database and across multi-node master-slave setups and replica sets.
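For readers unfamiliar with the mechanism: under the hood a "safe insert" is just an insert followed by the `getLastError` command, which the drivers issue for you. A sketch in mongo shell syntax (the `events` collection is made up for this example, and the extra options shown depend on your server version and setup):

```
// mongo shell: insert, then explicitly ask the server how the write went.
db.events.insert({type: "signup", at: new Date()});

// getLastError reports the outcome of the last operation on this connection;
// fsync: true additionally asks the server to flush to disk first.
var res = db.runCommand({getlasterror: 1, fsync: true});
if (res.err != null) {
    print("write failed: " + res.err);
}
```

Drivers that expose a "safe" flag are doing exactly this round-trip on your behalf, which is why safe inserts are slower than fire-and-forget.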
Respectfully, this isn't true either. Safe inserts are safer, but not safe: the data can still fail to make it to disk, a problem (My|Postgre)SQL just doesn't have.
With the single-server durability changes introduced in MongoDB 1.8, writes go to disk via journaling. So the data is written to disk.
That doesn't actually guarantee the write reached disk unless an fsync was issued. Of course, that comes at a significant cost to MongoDB's famously marketed write performance.
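This isn't specific to MongoDB; it's the same distinction every process faces at the OS level. A `write()` only hands data to the kernel's page cache, and only `fsync()` blocks until the kernel reports it is on stable storage. A minimal sketch of the two steps in Python:

```python
import os
import tempfile

# Step 1: write. This only hands the bytes to the OS page cache;
# a crash before the kernel flushes could still lose them.
fd, path = tempfile.mkstemp()
os.write(fd, b"important record\n")

# Step 2: fsync. This blocks until the kernel reports the data is on
# stable storage. It is the durability guarantee, and also the cost:
# each fsync is a round-trip to the disk, which is why fsync-per-write
# is slow compared to buffered writes.
os.fsync(fd)
os.close(fd)

# Read the record back to confirm it survived.
with open(path, "rb") as f:
    data = f.read()
print(data)  # b'important record\n'
os.remove(path)
```

Any database offering durable writes pays this fsync (or equivalent journal-flush) cost somewhere; the design question is only whether it pays it per write, per batch, or on a timer.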
Then in MongoDB's case durability is a function of scale, which leads back to the parent's suggestion that it is a technology optimized for performance.
Personally I think this is a bad foundation for data that matters. There are probably a lot of use cases where data not being on disk for n seconds (or one minute, in MongoDB's case) is acceptable.
Even when that is the case, I still think durability is the most important question to address when choosing MongoDB as a data store.
The flexible query API, the schema-less document format, secondary indices... those are the siren songs of rapid development.
I really don't understand the "versus" mentality, where everything must support the same feature set or it's "bad".
You shouldn't choose SQL over NoSQL; you choose both (or neither, or something else, as your problem requires) and use each in the appropriate place in your infrastructure. Sometimes you want performance over durability.
Would you mind elaborating on why you think MongoDB cares about speed over reliability? Sure, you can go that route (and sometimes that is what developers want to do), but MongoDB, as a database, reliably stores and persists your data.
Just curious what your experiences were that made you think MongoDB isn't a reliable data store.
It's pretty easy to make Redis basically ACIDic (use MULTI/EXEC and WATCH with AOF on and fsync on every command) if you really need to.
In practice you probably don't. Financial transactions should take another path, but for almost everything else, you can probably afford the few seconds of data loss you may encounter.
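A sketch of that setup, assuming a stock redis.conf; the `balance` and `savings` keys are invented for the example:

```
# redis.conf — enable the append-only file and fsync after every command,
# trading write throughput for durability:
appendonly yes
appendfsync always
```

Then the atomicity half, as a redis-cli session:

```
# Optimistic lock on `balance`, then an atomic MULTI/EXEC block:
WATCH balance
MULTI
DECRBY balance 100
INCRBY savings 100
EXEC        # returns nil (aborts) if `balance` changed after WATCH
```

With `appendfsync always`, every command is fsynced to the AOF before Redis replies, and WATCH/MULTI/EXEC gives you check-and-set transactions; that combination is what makes it "basically ACIDic".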