ChatGPT has been blowing every single translation task I've thrown at it out of the water, even compared to other modern systems. I have no idea why more people aren't talking about that aspect of it, other than that the Anglosphere in general is kind of oblivious to things that aren't English.
For Russian, at least, sticking the article (bit by bit) into ChatGPT produces results that are broadly comparable to Bing and Google translators. It is somewhat more likely to pick words that are not direct translations but that might convey the idea better given the likely cultural background of someone speaking the language - for example, it will sometimes (but not always) replace "voodoo" with "witchcraft". However, the overall sentence structure is rather stilted and obviously non-native in places.
As others have noted, it doesn't seem to be fully language-aware outside of English. For example, if you ask it to write a poem or a song in English, it will usually make something that rhymes (or you can specifically demand that). But if you do the same for Russian, the result will not rhyme, even when specifically requested, and despite the model claiming that it does. If you ask it to explain what exactly the rhymes are, it will get increasingly nonsensical from there. I tried that after someone on HN complained about the same thing with Dutch, except they also noted that the generated text seemed like it would rhyme in English.
I wonder if that has something to do with the sentence structure also being wrong. Given that English was predominant in the training corpus, I wonder if the resulting model "thinks" in English, so to speak - i.e. that if you force it to talk in other languages, some part of the resulting net is basically acting as a translator, and its output is ultimately fed to the nodes that handle the correlation of tokens.
I'm sure you're on the right track regarding the percentage of the training corpus in English vs. other languages. It has done very well with colloquial Spanish as spoken in California, for example, which probably isn't too surprising.
What amazes me (and what you hint at) is that it still manages to make more appropriate word/phrase choices most of the time, even compared to dedicated translation software. I get the feeling (and I fully admit this is just a feeling) that it's not using English, or any other language, as a pivot, but that there's some higher-dimensional translation going on that allows it to perform as well as it does.