I don't know about either of those. How is it intrinsic? What stops detection from improving as the AI gets better? Assuming it doesn't become some sentient human replica, I mean AI like this, where it's just a language model. Plus that's betting on future capability; in the meantime you can still track it, and it still doesn't justify "remove it because people are dumb and do bad stuff with the tool". At most, that would justify removing it later, once the models actually get that good.
The algorithms are trained to minimize the difference between what they produce and what a human produces. The better the algorithm, the smaller that difference. They're already at the point where the difference is very small, and it won't be long until there's effectively none. That's why it's intrinsic: any signal a detector keys on is exactly what the training objective is pushing toward zero.
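To make the point concrete, here's a minimal toy sketch (purely illustrative, not any real model): a language model is trained to minimize cross-entropy between its next-token predictions and human-written text. A model that fits the human data better gets a lower loss, i.e. its output distribution moves closer to the human one, which is exactly the signal a detector would rely on.

```python
import math

# Human-written toy corpus (hypothetical example data).
human_text = "the cat sat on the mat the cat ran"
tokens = human_text.split()
vocab = sorted(set(tokens))

def bigram_model(corpus_tokens):
    """'Train' a model by estimating P(next | current) from bigram counts."""
    counts = {}
    for cur, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts.setdefault(cur, {}).setdefault(nxt, 0)
        counts[cur][nxt] += 1
    return {
        cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for cur, nxts in counts.items()
    }

def cross_entropy(model, corpus_tokens):
    """Average negative log-likelihood of the human text under the model.
    This is the quantity training minimizes: lower means the model's
    predictions are closer to what humans actually wrote."""
    total, n = 0.0, 0
    for cur, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        p = model.get(cur, {}).get(nxt, 1e-9)  # tiny prob for unseen pairs
        total += -math.log(p)
        n += 1
    return total / n

# An untrained baseline that predicts every word with equal probability,
# versus a model fit to the human corpus.
uniform = {cur: {t: 1 / len(vocab) for t in vocab} for cur in vocab}
trained = bigram_model(tokens)

print(cross_entropy(uniform, tokens))  # higher loss: far from human text
print(cross_entropy(trained, tokens))  # lower loss: closer to human text
```

The gap between those two numbers is the "difference" the post is talking about; scaling up models and data is, in effect, a machine for shrinking it.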