That has some merit. A friend did a 'choir in a parking lot' project (a COVID-era thing) where participants started with a calibration phase to capture the timing relationships. Once basic calibration was done, a low-frequency tone embedded in the signal kept everything aligned. (That was the part I advised on: a 32-channel PLL. :-))
The synthesizer project had me looking back at some of the firefly experiments (units syncing up by blinking an LED that the other units can see). To some extent, though, "real" orchestras have phasing/timing differences between sections, which gives them some of their timbre; removing it makes the music sound "computer generated". A loooong time ago, another person I know was trying to inject this sort of variation into MIDI streams to make game music sound more "realistic." I don't know if that made it into production, though.
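The firefly-style sync can be sketched as pulse-coupled oscillators in the Mirollo-Strogatz spirit. This is just a toy illustration, not code from the actual experiments; the function names and parameters (nudge size, tick rate) are all made up:

```python
import cmath
import math
import random

def simulate_fireflies(n=5, steps=20000, dt=0.001, period=1.0, nudge=0.05, seed=42):
    """Toy pulse-coupled oscillators: each unit advances its phase; when
    it reaches the period it 'blinks' and resets to zero, and every unit
    that sees the blink jumps its own phase forward a little."""
    rng = random.Random(seed)
    phases = [rng.random() * period for _ in range(n)]
    for _ in range(steps):
        phases = [p + dt for p in phases]
        fired = [i for i, p in enumerate(phases) if p >= period]
        for i in fired:
            phases[i] = 0.0
        if fired:
            for j in range(n):
                if j not in fired:
                    # Cap at the period so a nudged unit that was almost
                    # there blinks on the very next tick (absorption).
                    phases[j] = min(phases[j] + nudge, period)
    return phases

def order_parameter(phases, period=1.0):
    """Kuramoto-style coherence measure: 1.0 means perfectly in phase."""
    z = sum(cmath.exp(2j * math.pi * p / period) for p in phases) / len(phases)
    return abs(z)
```

The "everyone nudges forward on a blink, never backward" rule is what lets independently-started units pull together without any master clock, which is the same appeal it has for cheap LED-and-photodiode hardware.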
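The MIDI-variation idea can be sketched too. This is my guess at the shape of it, not the actual production code: give each section a small constant offset (sections dragging against each other) plus per-note jitter (individual players), with made-up magnitudes:

```python
import random

def humanize(events, section_sigma_ms=20.0, note_sigma_ms=8.0, seed=7):
    """Add human-ish timing variation to (time_ms, section, note) events:
    a per-section constant offset models sections leading/lagging each
    other, and per-note Gaussian jitter models individual players.
    Times are clamped to non-negative and the result is re-sorted."""
    rng = random.Random(seed)
    sections = sorted({s for _, s, _ in events})
    offset = {s: rng.gauss(0.0, section_sigma_ms) for s in sections}
    out = [(max(0.0, t + offset[s] + rng.gauss(0.0, note_sigma_ms)), s, note)
           for t, s, note in events]
    return sorted(out)
```

Keeping the section offset constant (rather than re-rolling it per note) matters: it is the between-section phasing, not pure randomness, that gives the "orchestra" timbre described above.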