Funny incentive problem: OpenAI obviously has an incentive to use its best AI-detection tool for adversarial training, with the result that the detection tool won't be very good against ChatGPT-generated text, because ChatGPT is effectively trained to defeat it.
I'm not saying they're deliberately trying to disguise it; I'm saying their goal is natural language, so any signal that distinguishes ChatGPT output from human speech (aside from trivial tells like "as an AI chatbot...") is most likely also a way to improve ChatGPT, because the goal is to sound human.
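A toy sketch of the loop I mean (all the names and the scoring rule here are made up for illustration, not anything OpenAI actually does): once a detector's score is available as feedback, the generator just gets tuned against it, and whatever feature the detector relied on disappears.

```python
import random

def detector_score(text: str) -> float:
    """Toy 'AI detector': flags an obvious chatbot tell, otherwise guesses."""
    if "as an ai chatbot" in text.lower():
        return 1.0
    return random.uniform(0.0, 0.3)

def generator_update(candidates: list[str]) -> str:
    """Toy 'adversarial update': keep the candidate the detector likes least.

    In a real setup this would be a training step on the language model;
    selecting the least-detectable output is enough to show why the
    detector's own signal erodes its usefulness over time.
    """
    return min(candidates, key=detector_score)

candidates = [
    "As an AI chatbot, I cannot have opinions, but here is a summary...",
    "Here's a quick summary of the main points...",
]
best = generator_update(candidates)
print(best, detector_score(best))
```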