In some ways we're already there. Not in terms of personality, but we are in a post-LLM world: training data now contains some amount of LLM-generated material.
I guess it's on the model creators to ensure their data is good, but it seems like we might end up with training material that degrades over time. I imagine it being like applying a lossy compression algorithm to the same item over and over, i.e., re-saving a JPEG as a JPEG: you lose a little data on every pass and it eventually becomes shit.
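That recursive-degradation loop is easy to sketch as a toy simulation (my own illustration, pure stdlib, with a hypothetical helper name `refit_generation`): fit a simple Gaussian "model" to some data, generate the next generation's "training set" by sampling from that fit, and repeat. Each refit loses a little information, so the variance tends to collapse over generations — the statistical analogue of re-saving the JPEG.

```python
import random
import statistics

def refit_generation(data, rng):
    """Fit a Gaussian to the data, then 'train the next model' by
    sampling a same-sized dataset from that fitted distribution."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [rng.gauss(mu, sigma) for _ in range(len(data))]

rng = random.Random(42)
# Generation zero: "human-written" data from a standard normal.
data = [rng.gauss(0.0, 1.0) for _ in range(20)]
initial_var = statistics.variance(data)

# Each pass is one model trained purely on the previous model's output.
for _ in range(400):
    data = refit_generation(data, rng)

final_var = statistics.variance(data)
print(f"variance: {initial_var:.3f} -> {final_var:.6f}")
```

With a small sample size per generation, the finite-sample variance estimate drifts downward on average, so the spread of the data shrinks generation after generation even though no single refit looks dramatic — same as one JPEG re-save looking fine.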