Neat idea. This could be refined with a git hook that runs (rip)grep over the entire codebase and fails the commit if anything is found, which would accomplish a similar result and stop the code from being committed entirely. It requires a bit more setup work on the developer's end, though.
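For illustration, here is a minimal sketch of such a pre-commit hook, assuming ripgrep (`rg`) is on the PATH and that the patterns to block live in a hypothetical `.forbidden-patterns` file (names are illustrative, not part of the original suggestion):

```python
#!/usr/bin/env python3
# Save as .git/hooks/pre-commit and make it executable.
# A minimal sketch: scan the working tree with ripgrep and block the commit on any match.
import subprocess
import sys

# rg exits 0 when matches are found, 1 when none are found, 2 on error.
result = subprocess.run(
    ["rg", "--hidden", "--glob", "!.git", "-f", ".forbidden-patterns", "."],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    # Matches found: show them and abort the commit.
    print("Forbidden patterns found, aborting commit:", file=sys.stderr)
    print(result.stdout, file=sys.stderr)
    sys.exit(1)
elif result.returncode == 2:
    # ripgrep itself failed (e.g. missing pattern file); surface the error but don't block.
    print(result.stderr, file=sys.stderr)

sys.exit(0)
```

Note this scans the whole working tree rather than just the staged files, which matches the "entire codebase" idea but means an unrelated unstaged file could also block the commit.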
This is very similar to the situation analysed in this recently published paper: the quality of what is generated degrades significantly.
They mostly investigate replacing the data with AI-generated data at each step, though, so I doubt the effect will be as pronounced in practice: human writing will still be included. Even so, curation of AI-generated text by people can still skew the distribution of the training data (as the process used by these editors inevitably would, since reasonable-looking text can slip through the cracks).