Completely agree. I haven't seen it tested legally, but the EU now has a 'right to explanation' where automated decisions are made about people. This would effectively prohibit closed ML from most arenas.
I don't see why the training data would need to be provided. The model would be, and the model is derived from the training data, but the data itself shouldn't need to be disclosed. Explaining a model of any real complexity is hard with or without the training data, so it won't be easy, but that's just the nature of complex models.
Yes, this is why I'm not sure how it's been tested. In the scientific literature the data is often anonymised and made available publicly, or at least to interested parties, but data can't always be anonymised adequately, so an audit process might be necessary.