There are already quite capable language models that run on a CPU. Short of a government ban on personal LLMs, if it becomes known that ChatGPT output is watermarked, it seems very likely that a working, fully FOSS, open-data rewriting model would appear.
Watermarking techniques also cannot survive a sufficiently sophisticated rewrite. After rewriting, there is simply no signal left encoded in the word probabilities.
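To illustrate why, here is a minimal sketch of a token-level watermark in the style of "green list" schemes: the previous token seeds a PRNG that splits the vocabulary in half, and the generator prefers the "green" half. Detection counts the green fraction. All names here (`green_list`, `green_fraction`, the toy vocabulary) are illustrative, not any vendor's actual scheme; the point is that a rewrite replaces the token sequence, so the green fraction falls back to chance.

```python
import hashlib
import random

def green_list(prev_token, vocab, fraction=0.5):
    # Seed a PRNG on the previous token so the "green" half of the
    # vocabulary is reproducible at detection time.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(tokens, vocab):
    # Fraction of tokens that land in the green list keyed by their predecessor.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev, vocab))
    return hits / max(1, len(pairs))

if __name__ == "__main__":
    vocab = [f"w{i}" for i in range(100)]
    rng = random.Random(0)

    # Watermarked "generation": always pick from the green list.
    marked = ["w0"]
    for _ in range(50):
        marked.append(rng.choice(sorted(green_list(marked[-1], vocab))))

    # A rewrite replaces the tokens; here modeled as an unrelated sequence.
    rewritten = [rng.choice(vocab) for _ in range(51)]

    print(green_fraction(marked, vocab))     # 1.0: strong watermark signal
    print(green_fraction(rewritten, vocab))  # near 0.5: indistinguishable from chance
```

A detector flags text whose green fraction is far above the chance level of 0.5; once a rewriting model re-tokenizes the text, that statistic carries no information about the original sampler.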
That is not a reliable indicator even today. Base GPT-4 (not the RLHF-tuned ChatGPT) is not distinguishable from human writing. You could ask it about current events, but that is not a long-term test, and it could simply claim not to follow the news.