When I told him about SQL databases: relational stuff, normalization. Awesome.
He told me that was all fine and dandy but just too slow, this newfangled SQL database stuff. He used databases where you simply accessed rows by key. If you needed to access something by a different key, you just made a different table where the same data was arranged by that key instead. Super performant.
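The pattern described above can be sketched in a few lines. This is a toy illustration with made-up data, not any particular old database system: the same records are stored twice, each copy keyed by a different field, so every lookup is a single direct access with no query planner involved.

```python
# Hypothetical records; the "table" is just a mapping from key to row.
employees_by_id = {
    101: {"id": 101, "name": "Alice", "dept": "IT"},
    102: {"id": 102, "name": "Bob", "dept": "HR"},
}

# A second "table" holding the same data, arranged by department instead.
# Keeping both copies in sync is the price of the fast lookups.
employees_by_dept = {}
for emp in employees_by_id.values():
    employees_by_dept.setdefault(emp["dept"], []).append(emp)

print(employees_by_id[101]["name"])                   # lookup by primary key
print([e["name"] for e in employees_by_dept["IT"]])   # lookup by the other key
```

The trade-off is exactly the one the story turns on: blazing-fast reads by a known key, at the cost of duplicating the data and updating it in multiple places.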
Of course my dad also programmed in languages like 370 assembler.
Funny how the young folks today talk about NoSQL databases indeed.
SQL databases were a dumb misstep, it's always baffled me how they ever caught on. In 10 or 20 years we'll look back on them the same way we look on C++/Java-style OO today.
It's not really such a dumb idea, actually; it just depends on what you favour. There are plenty of nice things about SQL databases. I suppose it's the regular back and forth between one extreme and the other, driven both by "we need something new to work on" and by a changing technology landscape.
Relational databases really weren't all that practical back in the day with the hardware that was available. You did have to put thought up front into how you were going to query your data, though. Relational algebra wasn't a thing from the start either.
If someone comes along and tells you that you don't have to know all this up front, and that whatever query you come up with about the data you have, you'll be able to ask it, isn't that awesome? Ad hoc, just like that. No need to carefully transform the data you have, ensure you keep it all up to date in multiple places, etc. Of course data volumes grow, and even the newer hardware you have can soon no longer handle what you've got in a timeframe that you like. Indexing will be a thing. Of course even indexes grow way too huge to really perform, but hardware to the rescue: at least all the indexes you need frequently will fit in RAM. Lots of caching going on too for your regular workloads.
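The step from ad-hoc querying to indexing can be shown with a toy sketch (all names and data made up): without an index, an ad-hoc query scans every row; with one, it collapses to a single dictionary lookup, which is why indexes keep the ad-hoc promise alive for a while.

```python
# Made-up rows; in a real database these would live on disk.
rows = [{"id": i, "city": "Berlin" if i % 2 else "Paris"} for i in range(10)]

# Ad hoc, no preparation: a full scan touches every row.
# Fine until the data outgrows the hardware.
scan_result = [r for r in rows if r["city"] == "Berlin"]

# An index: built once up front, then each lookup is a direct access.
# Note the index itself takes memory and must be maintained on writes.
city_index = {}
for r in rows:
    city_index.setdefault(r["city"], []).append(r)

index_result = city_index["Berlin"]
assert scan_result == index_result
```

Same answer both ways; the index just trades preparation and memory for lookup speed, which is the cycle the comment describes.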
Guess where the story is going? Well of course there's the old analytical vs. transactional load thing, i.e. your "Data Warehouse" is a separate database that is optimized for the pre-defined queries again, actually de-normalizing lots of things, sitting on different hardware so as not to disturb warm caches for the transactional load, etc. And yes, finally NoSQL again, i.e. back to the roots: put more thought into how you're going to query this, as your globe-spanning SaaS load won't fit onto the hardware you have available. Of course this brings problems, because we're just so good at predicting what kind of query we want to ask about our data. Databases like MongoDB, Cassandra, AWS DocumentDB etc. grow indexes supporting querying arbitrarily ... There's a hole in my bucket dear Liza, dear Liza ... :)
[I'm sure this nice story line is not globally completely correct/adhering to exact timelines but illustrates the point]
A different index, surely, not a different table? This sounds like an ISAM database to me, where you'd have to do lookups manually one by one, picking the right index for each yourself.
To be honest, I don't really remember much of what he told me any more, and I can't go back and ask him any longer. It's possible, but I can't tell you yes or no for sure; your description just evoked the memory of that conversation. Same with Pick and MUMPS, which the other reply mentions. Doesn't ring a bell, but seems possible.