A problem has already emerged with generative AI such as the chatbot ChatGPT. Two enterprising PhD students have shown through modelling that generative AI is trained on human-generated content from the internet, and that when the AI's own synthetic output is fed back into its training many times, a type of data inbreeding occurs in which models go MAD: Model Autophagy Disorder. It takes as few as five cycles of data inbreeding for the whole thing to “blow up.” The details are complex, but material explaining them further, including a link to the paper, is below.
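For readers who want a feel for the data-inbreeding loop, here is a minimal, hypothetical sketch in Python. It is not the experiment from the paper itself; it just fits a toy “model” (a mean and a standard deviation) to data, then retrains each new generation only on samples drawn from the previous generation’s model, so small estimation errors compound from one cycle to the next.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Start from "human-generated" data: samples from a standard Gaussian.
real_data = rng.normal(loc=0.0, scale=1.0, size=200)

# Generation 0: fit the toy "model" (just a mean and standard deviation).
mu, sigma = real_data.mean(), real_data.std()
print(f"generation 0: mean={mu:+.3f}, std={sigma:.3f}")

# Data inbreeding: each new generation is trained only on samples drawn
# from the previous generation's model, never on the original data.
for generation in range(1, 6):
    synthetic = rng.normal(loc=mu, scale=sigma, size=200)
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"generation {generation}: mean={mu:+.3f}, std={sigma:.3f}")
```

Running it, you can watch the estimated mean and spread drift generation by generation; with smaller samples or more cycles the drift gets worse. It is only a crude analogue of the quality degradation the paper measures in real generative models, but it shows why feeding a model its own output is a fragile loop.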
The article from Futurism.com asks what this means for the use of generative AI. I think the paper makes clear that there is an inescapable quality-control problem, one that could explain some of the bizarre things ChatGPT has done, such as fabricating material.