Just an idea: at a prior job I had a script doing record dumps (there were about 30 joins to denormalize a single record into JSON), so keeping each record in its own .json.gz file made for a useful backup. It was close to zero cost when mirroring to another system for query/display purposes.
In the file system, to avoid too many records piling up in a single directory, the records were split into buckets of 1000: base/00001000/(1000-1999).json.gz. This was mainly so the structure could be navigated via a GUI. If your system can't handle a flat `basepath/*.json.gz`, I'd suggest considering a similar split.
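A minimal sketch of that bucketing scheme, assuming 8-digit zero-padded bucket directory names as in the example path (the function names here are just illustrative):

```python
import gzip
import json
import os

def record_path(base, record_id, width=8):
    # Bucket directory holds ids N*1000 .. N*1000+999,
    # e.g. record 1500 lands in base/00001000/1500.json.gz
    bucket = (record_id // 1000) * 1000
    return os.path.join(base, f"{bucket:0{width}d}", f"{record_id}.json.gz")

def dump_record(base, record_id, record):
    # Write one denormalized record as its own gzipped JSON file.
    path = record_path(base, record_id)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with gzip.open(path, "wt", encoding="utf-8") as f:
        json.dump(record, f)
    return path
```

Mirroring then just means rsyncing the tree, and each bucket stays small enough to browse comfortably in a file manager.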