It seems inevitable that, without careful source provenance and curation, bad AI/ML responses will make their way back into models as training data in a feedback loop. This can happen either through direct pastes of responses or through the information being re-synthesized by humans who previously digested it.