The references in the comments suggest ChatGPT as producing this effect. But that is (or should be) unlikely; the "training" or moderation (tweaking?) should actually solve this problem. It should be relatively easy for a model to separate its own generations from its sources.
BUT where it will happen is when multiple instances of these language models compete with each other. ChatGPT quoting Bing or Bard output probably can't be reliably countered with internal training of ChatGPT, and the same goes for Bing and Bard and all the other myriad manifestations of these data mining techniques.
(Unless they merge them together?)
Sorry, a bit late replying; been away.
It is not the competition itself that is bad; it is that anything produced by these models cannot be tweaked out and so becomes a "circular" source. There will be no way to test for "truth". Within an individual bot, at least, the training data can be tweaked so it does not use its own output as a source, but the competition will make the validity or "truth" of most data questionable. I guess it should be possible for an individual LLM to be trained for "truth" (reality?), but it becomes almost impossible for an LLM to discern truth when the sources it is analysing were generated by another LLM.